suEXEC on OpenSolaris

One nice thing about having all dynamic content generated by CGI is that you can use suEXEC to run the scripts as a different user. This is primarily used on systems where multiple untrusted users run sites in one HTTP server, so that no one can interfere with anyone else. It can also be used simply to separate the application from the server.

I’m the only user on my server so I don’t necessarily have any of these security concerns, but I have enabled suEXEC for convenience. For example, WordPress will allow you to modify the stylesheets from the admin interface as long as it can write to them. With suEXEC, the admin interface can run as my Unix user, so I can edit the files from both the web interface and the command line without having wide-open permissions or switching to root.

The same applies to Trac, where I can manage the project with the web interface or trac-admin on the command line. Pretty much the same effect could be obtained by using Unix groups properly:

# groupadd wordpress
# usermod -G wordpress webservd
# usermod -G wordpress jlee  # my username
# chgrp -R wordpress /docs/  # virtualhost document root
# chmod -R g+ws /docs/  # make directory writable and always owned by the wordpress group

Then umask 002 would have to be set in Apache’s and my profile so any files that get created can be written to by the other users in the group. That’s all well and good, but it seems like a bit of work and I don’t like the idea of messing with the default umask.
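For the curious, the interaction between the setgid bit and umask 002 can be sketched on a scratch directory (paths and filenames here are just for demonstration, not the real docroot):

```shell
# Sketch of the group-write approach on a throwaway directory:
DEMO=$(mktemp -d)
chmod g+ws "$DEMO"            # setgid: new files inherit the directory's group
(umask 002 && touch "$DEMO/style.css")
ls -l "$DEMO/style.css"       # rw for owner AND group, so both users can edit it
```

With the setgid bit on the real docroot, the new file would also inherit the wordpress group, which is what lets both users edit it.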

On to suEXEC. First, let’s show the current user that PHP executes as. Create a file test.php containing <?php echo exec("id"); ?>. Accessing the script from your web browser should show something like uid=80(webservd) gid=80(webservd).
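Creating the probe script is a one-liner (written to a scratch directory here for illustration; drop it into your virtual host's document root to actually test):

```shell
# Write the id-probe script; the real file belongs in your DocumentRoot:
DIR=$(mktemp -d)
cat > "$DIR/test.php" <<'EOF'
<?php echo exec("id"); ?>
EOF
cat "$DIR/test.php"
```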

Next, in OpenSolaris, the suexec binary must be enabled:

# cd /usr/apache2/2.2/bin/  # go one directory further into the amd64 dir if you're running 64-bit
# mv suexec.disabled suexec
# chown root:webservd suexec
# chmod 4750 suexec
# ./suexec -V
 -D AP_DOC_ROOT="/var/apache2/2.2/htdocs"
 -D AP_GID_MIN=100
 -D AP_HTTPD_USER="webservd"
 -D AP_LOG_EXEC="/var/apache2/2.2/logs/suexec_log"
 -D AP_SAFE_PATH="/usr/local/bin:/usr/bin:/bin"
 -D AP_UID_MIN=100
 -D AP_USERDIR_SUFFIX="public_html"

These variables were set at compile time and cannot be changed. They ensure that certain conditions must be met in order to use the binary. That’s very important because it’s setuid root. The first thing I had to do was move everything from my old document root to the one specified above in AP_DOC_ROOT. Then you can add SuexecUserGroup jlee jlee (with whatever username and group you want the scripts to run as) to your <VirtualHost> section of the Apache configuration. At this point if you try to execute test.php you’ll probably see one of a couple errors in the suEXEC log (/var/apache2/2.2/logs/suexec_log):

  • [2009-07-27 11:08:02]: uid: (1000/jlee) gid: (1000/jlee) cmd: php-cgi
    [2009-07-27 11:08:02]: command not in docroot (/usr/php/bin/php-cgi)

In this case, php-cgi has to be copied into the document root:

    $ cp /usr/php/bin/php-cgi /var/apache2/2.2/htdocs/
    $ pfexec vi /etc/apache2/2.2/conf.d/php-cgi.conf  # modify the ScriptAlias appropriately
    $ svcadm restart http
  • [2009-07-27 11:11:07]: uid: (1000/jlee) gid: (1000/jlee) cmd: php-cgi
    [2009-07-27 11:11:07]: target uid/gid (1000/1000) mismatch with directory (0/2) or program (0/0)

    Make sure everything that suexec is to execute is owned by the same user and group as specified in the SuexecUserGroup line of your Apache configuration.
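A quick way to spot stragglers, sketched here against a scratch directory and the current user (point DOCROOT at the real document root and substitute the user/group from your SuexecUserGroup line):

```shell
# List anything under the docroot whose owner or group doesn't match;
# a scratch dir and the current user stand in for the real values here.
DOCROOT=$(mktemp -d)
touch "$DOCROOT/index.php" "$DOCROOT/test.php"
ME=$(id -un); MYGRP=$(id -gn)
# Print any path with a mismatched owner or group (empty output = all good):
find "$DOCROOT" ! -user "$ME" -o ! -group "$MYGRP"
# Fix any stragglers (as root) with: chown -R jlee:jlee "$DOCROOT"
```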

Now, running test.php should give the correct results: uid=1000(jlee) gid=1000(jlee). Done!

As a side note, I lose all frame of reference while I write so I can’t remember if I’m writing this for you or me, explaining what I’ve done or what you should do. Sorry 🙂

Reducing Memory Footprint of Apache Services

An interesting thing happened when I set up this blog. It first manifested itself as a heap of junk mail in my inbox. Then no mail at all. I had run out of memory. WordPress requires me to run MySQL, and that extra 12M pushed me over the 256M cap in my OpenSolaris 2009.06 zone. As a result SpamAssassin could not spawn, and ultimately Postfix died. So I set out to reduce my memory footprint.

Let’s take a look at where things were when I got started:

$ prstat -s rss -Z 1 1 | cat
   PID USERNAME  SIZE   RSS STATE  PRI NICE      TIME  CPU PROCESS/NLWP
 13488 webservd  183M   92M sleep   59    0   0:00:28 0.0% trac.fcgi/1
 13479 webservd   59M   41M sleep   59    0   0:00:14 0.0% trac.fcgi/1
 13489 webservd   59M   41M sleep   59    0   0:00:14 0.0% trac.fcgi/1
  4463 mysql      64M   12M sleep   59    0   0:02:39 0.0% mysqld/10
 19296 root       13M 8444K sleep   59    0   0:00:25 0.0% svc.configd/16
 19619 named      11M 5824K sleep   59    0   0:03:51 0.0% named/7
 13473 root       64M 4352K sleep   59    0   0:00:00 0.0% httpd/1
 19358 root       12M 3688K sleep   59    0   0:00:54 0.0% nscd/31
 19294 root       12M 3180K sleep   59    0 244:37:22 0.0% svc.startd/13
 13476 webservd   64M 2940K sleep   59    0   0:00:00 0.0% httpd/1
 13486 webservd   64M 2924K sleep   59    0   0:00:00 0.0% httpd/1
 13745 root     6248K 2832K cpu1    59    0   0:00:00 0.0% prstat/1
 13721 root     5940K 2368K sleep   39    0   0:00:00 0.0% bash/1
 13485 webservd   64M 2252K sleep   59    0   0:00:00 0.0% httpd/1
 13482 webservd   64M 2168K sleep   59    0   0:00:00 0.0% httpd/1
ZONEID    NPROC  SWAP   RSS MEMORY      TIME  CPU ZONE                        
    39       60  494M  246M    96% 244:47:13 0.1% case                        
Total: 60 processes, 149 lwps, load averages: 0.61, 0.62, 0.52

First thing I noticed is the 174M that Trac was taking up. I was running it as a FastCGI service for speed. The problem with that is it remains resident even when it’s not processing any requests, which is most of the time. One option I tried was setting DefaultMaxClassProcessCount 1 in my /etc/apache2/2.2/conf.d/fcgid.conf file. This effectively limits Trac to only one process at a time, which greatly reduces the memory utilization, but means it can only service one request at a time. That’s not an option.
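For the record, that experiment amounted to a one-line fragment in the file mentioned above:

```apache
# /etc/apache2/2.2/conf.d/fcgid.conf
# Cap each FastCGI application class at a single process:
DefaultMaxClassProcessCount 1
```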

Fortunately, my zone seems to have good, fast processors and disks, so I can put up with running it as a standard CGI service. Making the switch is easy enough: just move some things around in my Apache configuration:

ScriptAlias /trac /usr/share/trac/cgi-bin/trac.cgi
#ScriptAlias /trac /usr/share/trac/cgi-bin/trac.fcgi
#DefaultInitEnv TRAC_ENV "/trac/iriverter"

<Location "/trac">
    SetEnv TRAC_ENV "/trac/iriverter"
    Order allow,deny
    Allow from all
</Location>

So things are looking much better, but I’m still not happy with it:

$ prstat -s rss -Z 1 1 | cat
   PID USERNAME  SIZE   RSS STATE  PRI NICE      TIME  CPU PROCESS/NLWP
 15362 webservd   74M   31M sleep   59    0   0:00:00 0.0% httpd/1
 15388 webservd   69M   30M sleep   59    0   0:00:00 0.0% httpd/1
 15366 webservd   66M   22M sleep   59    0   0:00:00 0.0% httpd/1
ZONEID    NPROC  SWAP   RSS MEMORY      TIME  CPU ZONE                        
    39       58  254M  113M    44% 244:46:20 0.2% case 

Now Apache is being a hog, and that's only a few of the httpd processes. By default on Unix, Apache uses the prefork MPM, which serves each request from its own process. It likes to keep a handful of children around for performance, so it doesn't have to spawn a new one for each request. The problem is that if your request involves PHP, each httpd process loads its own instance of the PHP module and doesn't release it when it's finished. I get it. It's all for performance. My initial reaction was: wouldn't it be nice if Apache were threaded, so requests could all share the same PHP code? That's when I was introduced to the worker MPM. It serves requests from threads, so it's efficient, but it also keeps a couple of child processes for fault tolerance. This is easy to switch to in OpenSolaris:

# svcadm disable http
# svccfg -s http:apache22 setprop httpd/server_type=worker
# svcadm refresh http
# svcadm enable http

I also copied /etc/apache2/2.2/samples-conf.d/mpm.conf into /etc/apache2/2.2/conf.d/ which includes some sane defaults like only spawning two servers to start with. This was good:

$ prstat -s rss -Z 1 1 | cat
ZONEID    NPROC  SWAP   RSS MEMORY      TIME  CPU ZONE                        
    39       50  125M   75M    29% 244:46:23 0.3% case
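For reference, the knobs in that sample mpm.conf look roughly like this (values are illustrative, not a verbatim copy of the shipped file):

```apache
<IfModule worker.c>
    StartServers          2
    MaxClients          150
    MinSpareThreads      25
    MaxSpareThreads      75
    ThreadsPerChild      25
    MaxRequestsPerChild   0
</IfModule>
```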

75M makes me feel safe, like I could take the occasional spam bomb. What I forgot to mention is that mod_php isn't supported with the worker MPM, since its extensions aren't guaranteed to be thread-safe. That's okay, because PHP can be run as a CGI program, which has the additional benefit of being memory efficient (at the cost of speed) since it's only loaded when it's executed. All I had to do was create a file /etc/apache2/2.2/conf.d/php-cgi.conf containing:

<IfModule worker.c>
    ScriptAlias /php-cgi /usr/php/bin/php-cgi

    <Location "/php-cgi">
        Order allow,deny
        Allow from all
    </Location>

    Action php-cgi /php-cgi
    AddHandler php-cgi .php
    DirectoryIndex index.php
</IfModule>

I’ll be the first to admit, running Trac and WordPress as CGI has made them noticeably slower, but given how little action they get, I’d rather have them run slower and know that my mail will get to me. If you’re faced with similar resource constraints, you may want to consider these changes. There may be other ways I can tweak Apache, such as unloading unused modules, but I’m not ready to face that yet.