Bug 17876

Summary: yelp performance and defunct process on solaris
Product: Rarian
Reporter: Matt Keenan <matt.keenan>
Component: General
Assignee: Don Scorgie <Don>
Status: NEW ---
QA Contact:
Severity: major
Priority: medium
Version: unspecified
Hardware: All
OS: Solaris
Whiteboard:
Attachments: Patch to fix rarian-man.c child process/pipe issues

Description Matt Keenan 2008-10-02 08:40:29 UTC
Created attachment 19343 [details] [review]
Patch to fix rarian-man.c child process/pipe issues

Three issues here, all caused by librarian on Solaris.

1. rarian-man.c forks a child to run manpath, but it uses exit() to terminate
   the child, which is incorrect; it should use _exit(). If it does not, the
   child's file descriptors are left open and cause confusion down the line,
   resulting in yelp constantly polling one of the open FDs, which in turn
   results in yelp consuming 50%+ of the CPU... not good.

2. rarian-man.c should also call waitpid() on the child process to ensure it
   is reaped; if it doesn't, you end up with a defunct process lying around.

3. I've also noticed that rarian-man.c closes/dups stdin and stdout; this is
   really unnecessary, as they are not being used.

The attached patch resolves the above three issues by:

  1. Calling _exit(0) instead of exit(0) in the child. This is by far the
     must-have fix, as it resolves the CPU hogging.
  2. Removing the closing/dup'ing of stdin and stdout (not needed) and
     moving the dup2() calls inside the child logic.
  3. Calling waitpid() to ensure the child has exited, so you don't end up
     with a defunct process lying around.
Comment 1 Matt Keenan 2008-10-02 09:25:38 UTC
Just an addendum to the patch WRT using _exit() instead of exit(): these are
only called if execlp() fails, which on Solaris it does because the "manpath"
utility does not exist :(
