Archive for June, 2009

Say it Ain’t So Memcache

I will never claim that application profiling and stress testing are my strong points, but I’m having a really difficult time understanding the results of some tests I’ve been performing on my application.

Here’s the setup. My application runs on two 256 MB slices: one running nginx with PHP via FastCGI, the other running MySQL. Outside of things like monit and munin, nothing else is running on these slices. Perfect time to do some stress testing. The application is fairly database heavy, so I long ago decided to integrate memcache with an eye toward boosting performance. Or so I thought (notice ominous foreshadowing).

My strategery with memcache is to never assume that it’s running, working, or holding the data I need. So my app will use it if it’s there but will carry on unaffected if it’s not. I left hooks in the app to shut memcache off through config changes for cases where I’m testing via XAMPP and don’t have memcache running locally. This turned out to be very useful.
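The pattern looks roughly like this. To be clear, the function name, config flag, and loader convention below are made up for illustration; only the Memcache calls are the standard PHP extension API:

```php
<?php
// Sketch of "use memcache if it's there, carry on if it's not".
// cache_get() and the use_memcache flag are illustrative, not the app's code.
function cache_get($key, $loader, $config = array('use_memcache' => true))
{
    // The config switch lets me shut memcache off entirely (e.g. under XAMPP).
    if (!empty($config['use_memcache']) && class_exists('Memcache')) {
        $mc = new Memcache();
        // Suppress the connect warning; if the daemon is down, fall through.
        if (@$mc->connect('127.0.0.1', 11211)) {
            $value = $mc->get($key);
            if ($value !== false) {
                return $value;                    // cache hit
            }
            $value = call_user_func($loader);     // miss: go to the database
            $mc->set($key, $value, 0, 300);       // cache for five minutes
            return $value;
        }
    }
    // Memcache disabled or unreachable: carry on unaffected.
    return call_user_func($loader);
}
```

Callers pass the name of a loader function that hits MySQL, so a dead memcached never takes the page down with it.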

I have a third slice (which runs this blog and a couple of other smaller sites) that I installed http_load on. I used this box to drive the load tests.

One thing about http_load is that it doesn’t understand cookies. You provide it a URL or a list of URLs and it just whacks on them until the server breaks. That poses a problem for apps like mine where being logged in is essential to the experience. So I had to make a few changes to the application to support a load testing mode. Once I switch to this mode, the app takes the session identifier from my config file instead of the cookie. No muss, no fuss, no meaningful change to the app’s behavior while in test mode, which is essential to ensure I’m comparing apples to apples.
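The switch amounts to something like this (the function and config key names are placeholders, not the app’s actual code):

```php
<?php
// http_load can't send cookies, so in load-test mode the session id comes
// from the config file instead. Names here are illustrative placeholders.
function current_session_id($config)
{
    if (!empty($config['load_test'])) {
        // A fixed session id baked into the config for stress runs.
        return $config['load_test_session_id'];
    }
    // Normal operation: read the session cookie as usual.
    return isset($_COOKIE['session_id']) ? $_COOKIE['session_id'] : null;
}
```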

OK, enough setup. Here’s one of my test scripts:

http_load -parallel 5 -seconds 30 test.url > test.out

So, run 5 threads for 30 seconds. While that’s going on I’m checking top on my nginx and MySQL slices. First thing I notice – MySQL is pretty much sleeping through the test. Good news. Load on that slice barely breaks above 0.2. But the FastCGI processes on the nginx box launch to the top and hog CPU and memory at an alarming rate. Before the 30 seconds are over, load on the nginx box is over 3. Not good. The end result was about 27 requests per second. Not horrible, but there’s no way the box could maintain that kind of load long term. I ran this test:

http_load -rate 20 -seconds 30 test.url > test.out

Which simulates 20 requests a second. What I’m trying to do here is find a reasonable amount of traffic that will stress the server but not kill it. That seemed to be about the breaking point: the server handled 20 requests a second with some negative effects, but it looked like it could stay stable at that level.

So, had to do some thinking. In an effort to cheer myself up, I figured I’d disable memcache and see how bad it would be without its help. If I got between 20 and 30 with memcache surely I’d only get between 15 and 20 without it.

Well, guess what, nerd. Not so much. To my amazement, the result came in around 36 requests per second without memcache. Not only that, CPU consumption by the fast-cgi threads was reduced and their memory consumption was totally normal. Beyond that, load on the database server didn’t budge.

It’s almost like memcache is penalizing me. Things got a little weirder when I commented out the code that pulls some objects out of memcache but left the rest in. Throughput dropped to around 10 requests per second, which is nearly unbearable. I wish I had a conclusive summary to give, but right now my thinking is that the overhead of connecting to memcache and hydrating objects is higher than just getting the data from the database. Or maybe I’m just overusing memcache – storing and retrieving too many small objects, for example.
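If the too-many-small-objects theory is right, one mitigation worth trying is batching: the PHP memcache extension’s get() accepts an array of keys, so N small objects can come back in one round-trip instead of N. A sketch, with an illustrative helper to split the result into hits and misses:

```php
<?php
// Splits a multi-get result into hits and misses. With a live daemon the
// input comes from one call: $found = $mc->get($keys);
// The helper name and key names are illustrative.
function split_cache_result($keys, $found)
{
    $hits = array();
    $misses = array();
    foreach ($keys as $key) {
        if (isset($found[$key])) {
            $hits[$key] = $found[$key];
        } else {
            $misses[] = $key;   // load these from MySQL, then re-cache
        }
    }
    return array($hits, $misses);
}
```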

So for the meantime I’m running without memcache, despite the hours and hours of work I put in to integrate it, and all the hopes and dreams of the children.

4 comments

Year 2009 Alert

Note to LAMP interviewees. It’s now 2009. Boasting that you wrote your own PHP framework is ridiculous and unimpressive. I’m sure you’re very clever, but no, I will never agree to let my project use 10K lines of unproven code that’s running your blog. Yeah, we’ve all written (and can write) a database interface class. Still not interested. Sorry.

Also, it might be a good idea to have some understanding of what the term “unit testing” means.

0 comments

CodeIgniter Autoloading and Performance

Got some interesting results tonight from my adventures with xdebug and CodeIgniter, specifically with the autoloading feature.

I had run xdebug to collect stats on my app’s landing page, the page where all users are redirected after login. I’d naturally expect this to be one of the most heavily visited pages, so it has to be as optimized as possible. After running the results of xdebug’s profiler (“xdebug.profiler_enable=On” in php.ini) through WinCacheGrind, I found something like 300+ calls being made to the method CodeIgniter uses to load a library file/class. I had long suspected that liberal use of $CI->load->library(‘MY_Blah’) wasn’t necessarily good practice, but I didn’t suspect it could be that bad.
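For anyone playing along at home, the relevant php.ini lines look roughly like this (the .so path and output directory are examples; adjust for your install):

```ini
zend_extension=/usr/local/lib/php/extensions/xdebug.so
xdebug.profiler_enable=On
; cachegrind.out.* files land here; open them in WinCacheGrind/KCachegrind
xdebug.profiler_output_dir=/tmp
```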

So I decided to put my most-frequently loaded libraries into the autoload.php and remove any calls to load them in my libraries, controller, and views. The difference was noticeable, and a second pass through xdebug and WinCacheGrind proved the improvement was real. I tried not to go overboard by loading too many classes, and it seems like I was able to strike the right balance by autoloading less than ten of my dozens of classes.
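For reference, autoloading in CodeIgniter is just a config change (the MY_* names below are placeholders for my actual libraries):

```php
// application/config/autoload.php
$autoload['libraries'] = array('database', 'session', 'MY_User', 'MY_Layout');
```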

Another interesting result was integrating memcache to save some of the objects that are frequently loaded on the landing page. These objects are for the most part shared across all users on the site. For some reason after I integrated memcache the memory usage for the controller (according to CodeIgniter) went up to around 8MB from 2MB. Very weird results that I’m going to have to think about. Database load on the page is near nothing, which is good news. I’m assuming the problem is in copying the objects out of memcache and creating PHP objects out of them.

Guess I’ll be doing some more profiling.

0 comments

Scary Moments in Administering nginx

So I was trying to install xdebug on my Slicehost slice and I couldn’t get the damn module to load.

I was following these instructions – installed via PECL, added the line to php.ini, restarted the web server, etc. Nothing. Wasn’t showing up in either “php -m” or the output of phpinfo().

So then I decided to compile from source, using instructions on the same page. Now it actually got worse. I was getting a 502 error and this in the logs:

2009/06/04 16:50:46 [error] 4461#0: *1 recv() failed (104: Connection reset by peer) while reading response header from upstream, client: XXX.XXX.XXX, server: myserver.com, request: "GET /index.php HTTP/1.1", upstream: "fastcgi://127.0.0.1:9000", host: "myserver.com"

Begin freakout.

Nothing was working. Bounce web server. Nothing. Bounce slice. Nothing.

Continue freakout.

Don’t totally know why I decided to restart fastcgi, but sweet mother that worked. And not only that, xdebug was loading as expected.

sudo /etc/init.d/php-fastcgi restart

End freakout.

(To be more precise, I restarted nginx first and then restarted fastcgi.) Hope that helps someone out there. Certainly scared the tuna salad out of me for a good half hour. Oh the joys of system administration for developers.

0 comments

Virgin Servers and Nerd Pr0n

As weird as it sounds, for legal reasons I had to move my side project onto a server I have root access to. This posed some serious problems for me. I’ve developed tons of sites, and I’m no stranger to a command prompt, but a sysadmin I am not. Postfix? iptables? munin? monit? The extent of my exposure to the intricacies of system administration was whatever was available from within cPanel plus whatever I could change from my shell account. Admittedly, this was limited, if not comfortable.

After hemming and hawing and researching, I settled on a VPS setup at Slicehost, which was recently purchased by Rackspace. I expected a painful transition to a self-managed server, and honestly it wasn’t all shits and giggles, but the experience was (and is)… amazing. Liberating. Invigorating. Confidence-building.

I got plenty of help. The Slicehost tutorial articles were an incredible resource; I could barely have done it without them. I also got some key help from A. DeRose, an ex-coworker who had recently worked through moving the Tripology site to Slicehost. Even still, I learned an incredible amount about how to configure and run a server. It’s an experience I wish all developers could have at least once.

Almost on a whim I decided to use nginx as my web server instead of Apache. Nginx is stupid fast. I don’t really have anything against Apache, but I can appreciate how simple nginx is to install and configure. For basic web sites, it makes Apache seem like a big fat mouth-breathing mooch that won’t leave your apartment. I haven’t regretted that decision yet, and I don’t expect that I ever will.

I also heartily recommend Monit, Munin, and apticron. Between the three of them, I feel that if something happens to the server that I need to know about, I’ll be the first to know. Lastly, I can recommend Pingdom as an external third-party service to make sure the server is responding.
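As a taste, a minimal monit stanza for nginx looks something like this (the pidfile and init script paths are typical Ubuntu defaults; check yours):

```
check process nginx with pidfile /var/run/nginx.pid
    start program = "/etc/init.d/nginx start"
    stop program  = "/etc/init.d/nginx stop"
    if failed host 127.0.0.1 port 80 then restart
```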

The most exciting part of all of this is that after all these years of nerdom there’s still something to learn. It’s what geeks like us live for.

Update: Don’t know how I managed to forget this article for setting up Ubuntu on Slicehost. This article was pretty much my bible for 2 days. Whoever wrote that deserves a special place in nerd heaven as far as I’m concerned. Great stuff.

0 comments