The biggest problem with Apache is the amount of RAM it uses. I'll discuss the following techniques for speeding Apache up and lowering its memory footprint.
Loading Fewer Modules
Handle Fewer Simultaneous Requests
Recycle Apache Processes
Use KeepAlives, but not for too long
Lower Your Timeout
Log Less
Don't Resolve Hostnames
Don't Use .htaccess
Loading Fewer Modules
First things first: get rid of unnecessary modules. Look through your config files and see which modules you're loading. Are you using CGI? Perl? If you're not using a module, by all means, don't load it. That will save you some RAM, but the BIGGEST impact is in how Apache handles multiple requests.
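Modules are pulled in with LoadModule lines in your Apache config; the module names and paths below are only examples, so check what your own config actually loads:

# Illustration only: comment out the LoadModule lines for modules you don't use
#LoadModule cgi_module /usr/lib/apache2/modules/mod_cgi.so
#LoadModule perl_module /usr/lib/apache2/modules/mod_perl.so
#LoadModule autoindex_module /usr/lib/apache2/modules/mod_autoindex.so

On Debian, if your install ships the a2dismod helper, it does the same thing by removing the module's symlink from /etc/apache2/mods-enabled/.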
Handle Fewer Simultaneous Requests
The more processes Apache is allowed to run, the more simultaneous requests it can serve, but as you raise that number, you also raise the amount of RAM Apache will take. Looking at top would suggest that each Apache process uses quite a bit of RAM; however, a lot of that is shared libraries, so you can run some processes, you just can't run a lot. With Debian 3.1 and Apache2, the following lines are the default:
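(Give or take; the exact numbers shipped with your install may differ slightly, but MaxRequestsPerChild 0 is the stock setting.)

<IfModule prefork.c>
StartServers         5
MinSpareServers      5
MaxSpareServers     10
MaxClients          20
MaxRequestsPerChild  0
</IfModule>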
I haven't found documentation on this, but prefork.c seems to be the MPM that's loaded to handle things with Apache2 on Debian 3.1. Other mechanisms may or may not be more memory efficient, but I'm not digging that deep yet. I'd like to know more, though, so post a comment and let me know. Anyway, the settings that have worked for me are:
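(The spare-server numbers here are just a reasonable starting point for a small box; the parts that matter are the low MaxClients and the non-zero MaxRequestsPerChild.)

<IfModule prefork.c>
StartServers         1
MinSpareServers      1
MaxSpareServers      3
MaxClients           5
MaxRequestsPerChild 500
</IfModule>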
What I'm basically saying is, "set the maximum number of requests this server will handle at any one time to 5." That's pretty low, and I wouldn't try it on a high-volume server. However, there is something you can and should do on your webservers to get the most out of them, whether you're going for low memory or not: tweak the KeepAlive timeout.
Recycle Apache Processes
If you noticed, I changed the MaxRequestsPerChild variable from 0 (unlimited) to 500. This variable tells Apache how many requests a given child process can handle before it should be killed. You want to kill processes periodically, because page requests allocate memory over time. If a script allocates a lot of memory, the Apache process it runs under holds on to that memory and won't let it go. If you're bumping up against the memory limit of your system, that can cause unnecessary swapping. Different people use different settings here; the right value is probably a function of the traffic you receive and the nature of your site. Use your brain on this one.
Use KeepAlives, but not for too long
KeepAlives are a way to have a persistent connection between a browser and a server. Originally, HTTP was envisioned as being "stateless." Before KeepAlive, every image, JavaScript file, frame, etc. on your pages had to be requested over a separate connection to the server. When KeepAlives came into wide use with HTTP/1.1, web browsers could hold a connection to a server open and transfer multiple files across that same connection. Fewer connections, less overhead, more performance. There's one thing wrong, though: Apache, by default, keeps connections open a bit too long. The default seems to be 15 seconds, but you can get by easily with 2 or 3 seconds.
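In apache2.conf (or httpd.conf), that looks something like this; the 3 is just a sensible starting point:

KeepAlive On
KeepAliveTimeout 3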
This is saying, "when a browser stops requesting files, wait X seconds before terminating the connection." If you're on a decent connection, 3 seconds is more than enough time for the browser to make additional requests. The only reason I can think of for setting a higher KeepAliveTimeout is to keep a connection open for the NEXT page request. That is: the user downloads a page, it renders completely, and then they click another link. A timeout of 15 would be appropriate for a site where people click from page to page very often. If you're running a low-volume site where people click, read, then click again, you probably don't have that pattern. You're essentially taking one or more Apache processes and saying, "for the next 15 seconds, don't listen to anyone but this one guy, who may or may not actually ask for anything." The server is optimizing one case at the expense of everyone else who is hopefully hitting your site.
Lower Your Timeout
Also, just in case, since you're limiting the number of processes, you don't want one to be stuck waiting on a timeout for too long, so I suggest you lower your "normal" Timeout variable as well.
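The default is 300 seconds, which is far more patience than a small server needs; the number below is just an example:

Timeout 10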
Log Less
If you're trying to maximize performance, you can definitely log less. Modules such as mod_rewrite will log debugging info. If you don't need the debugging info, get rid of it. The rewrite log is set with the RewriteLog directive. Also, if you don't care about looking at certain statistics, you can choose not to log certain things, like the User-Agent or the Referer. I like seeing those things, but it's up to you.
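For example, to silence the rewrite log and drop the Referer and User-Agent fields, you can log with the plain "common" format instead of "combined"; the log path here is just an example, so match it to your setup:

RewriteLogLevel 0
LogFormat "%h %l %u %t \"%r\" %>s %b" common
CustomLog /var/log/apache2/access.log common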
Don't Resolve Hostnames
This one's easy: don't do reverse DNS lookups inside Apache. I can't think of a good reason to do it. Any self-respecting log parser can do this offline, in the background.
HostnameLookups Off
Don’t Use .htaccess
You've probably seen the AllowOverride None directive. It says, "don't look for .htaccess files." Using .htaccess causes Apache to 1) check the filesystem for .htaccess files frequently and 2) parse the .htaccess file on each request. If you need per-directory changes, make them inside your main Apache configuration file, not in .htaccess.
Well, that's it for Part 1. I'll be back soon with Part 2, where I'll talk about MySQL optimization and possibly a few other things that crop up.