On Being a Website Performance Junkie
When it comes to high performance, speed and optimization, I'm there. I have written several articles on the subject in the past with 5 Ways to Speed Up Your Site, How To: Optimize Your CSS Even More and a brief look into image maps to reduce HTTP calls and bandwidth usage. However, much like modifying a car, there is always something more that can be done.
First off, let's talk about what I have done before. With the last redesign I went through my WordPress theme and gutted it almost entirely: removing features I don't use, rewriting some PHP to be less server-taxing and compressing CSS with gzip via PHP. I actually don't cache pages, since the WordPress plugin that handles page caching requires that pages not be compressed in order to cache them. My server can handle non-cached pages just fine, and I thought the gains from compressing the XHTML markup were substantial: something like 35kB versus 8kB. It boils down to a tradeoff: a quickly served page with a larger file size, or a compressed page that takes the server slightly longer to produce but is faster to download.
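The savings from gzip are easy to see for yourself. A quick sketch in Python (the sample markup below is made up for illustration; real page ratios depend on how repetitive the markup is):

```python
import gzip

# A crude stand-in for repetitive XHTML markup (hypothetical sample).
html = ('<div class="post"><h2>Title</h2><p>'
        + 'some text ' * 200 + '</p></div>') * 20
raw = html.encode('utf-8')

# Compress the markup the same way gzip output compression would.
compressed = gzip.compress(raw)

print(f'uncompressed: {len(raw)} bytes')
print(f'gzipped:      {len(compressed)} bytes')
print(f'ratio:        {len(raw) / len(compressed):.1f}x')
```

Highly repetitive markup like this compresses extremely well, which is why the 35kB-to-8kB kind of reduction is plausible for real pages.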
MySQL Woes

Not caching pages does lead to inevitable MySQL slowdown, though. MySQL, the database server behind WordPress, is often the bottleneck: PHP ends up waiting on MySQL to fetch things like posts and sidebar items such as the popular and latest posts. While that is still an issue for me, I have MySQL 4 caching set up at the server level, or so my friends at Media Temple tell me.
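For reference, the server-level caching MySQL 4 offers is its query cache, which stores the result sets of SELECT queries. A typical my.cnf fragment looks something like this (the values are illustrative, not what Media Temple actually uses):

```ini
# my.cnf — enable the MySQL 4 query cache (illustrative values)
[mysqld]
query_cache_type  = 1     # cache all cacheable SELECT results
query_cache_size  = 32M   # memory reserved for cached result sets
query_cache_limit = 1M    # skip caching results larger than this
```

The cache is invalidated whenever a table changes, so it helps most on read-heavy sites like a blog.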
You might be thinking something along the lines of "you're worried about 200 damn ms?" Yeah, I am. When my site loads in 700ms on my Internet connection, 200ms is a big deal.
S3

Amazon's super-affordable and super-fast storage solution, S3, is nothing new. I have talked about it before, Jeff has talked about it before, and Jeremy has talked about it before, along with the rest of the Internet. While I have been using S3 to back up personal files for months, I never ventured into using it on my site. That changed last night when I decided to experiment with hosting frequently served images, such as my logo, mugshot and sidebar images, on Amazon S3.
I decided not to host every image on S3, such as those in articles, like Jeff Atwood did, because that seemed like too much trouble and would break my writing workflow of uploading images directly inside the WordPress admin panel. However, for people paying for bandwidth, S3 is an excellent way to offload images and pay a fraction of the bandwidth cost. Amazon's servers are fast, which will definitely be noticeable if you're on a low-end or shared host. For me, though, the difference in latency and download times between images hosted locally and on S3 was rather negligible.
Then why am I still doing it? The answer is twofold. By setting up a static file host like S3, which will generally have lower latency, greater maximum throughput and the ability to cope with more requests per second than your host, you give your server a chance to keep up with high traffic levels. Also, by using more than one hostname (i.e., yoursite.com is one hostname, yourbucket.s3.amazonaws.com is another) to serve your content, you increase your effective bandwidth, especially if HTTP keepalives are enabled.
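In practice, splitting assets across hostnames just means referencing them from different domains in your markup. Something like the following, using my "stammy" bucket (the file paths here are hypothetical examples):

```html
<!-- Served from the web server itself -->
<script type="text/javascript" src="http://yoursite.com/js/site.js"></script>

<!-- Static images offloaded to the S3 bucket, a second hostname -->
<img src="http://stammy.s3.amazonaws.com/logo.png" alt="Logo" />
<img src="http://stammy.s3.amazonaws.com/mugshot.jpg" alt="Mugshot" />
```

Browsers limit how many simultaneous connections they open per hostname, so a second hostname lets more downloads run in parallel.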
[Figure: The Firebug Net panel helping my point. Time spans from left to right.]
Of course, there are some stipulations with that last part. Spreading your content and media across slow hostnames won't help your case, and there is a point at which the latency from the additional DNS lookup for each extra hostname makes it inefficient. Amazon S3 is a prime example of a speedy hostname to throw static files on. If your users have HTTP pipelining enabled, they'll see even greater benefits. A browser with pipelining enabled sends several requests over an already-open keepalive connection without waiting for each response, so it can receive a nearly uninterrupted stream of packets. Without pipelining, the browser must wait for each response before requesting the next item on that connection, using more round trips (and TCP packets) to do so.
If Firefox is your primary browser, enabling HTTP pipelining is a simple process.
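For the curious, the relevant Firefox preferences can be flipped in about:config (or a user.js file); the maxrequests value below is illustrative:

```js
// about:config / user.js preferences for HTTP pipelining in Firefox
user_pref("network.http.pipelining", true);
user_pref("network.http.proxy.pipelining", true);    // also pipeline through proxies
user_pref("network.http.pipelining.maxrequests", 8); // requests to pipeline per connection
```

Note that pipelining only helps against servers that handle it well, which is part of why it isn't on by default.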
S3 101

To avoid the inevitable questions about how to get started with using S3 as a file server, I'll give you a brief walkthrough. Assuming you have an S3 account and some way of connecting to it (I prefer S3fox), create a new bucket. Buckets can be thought of as root-level directories of your S3 account. Bucket names are global, so you'll have to pick a unique one, much like you have to pick a unique username on social networking sites.
In my case, my bucket name is "stammy". To access a file within that bucket, the URL syntax is the following:

http://yourbucket.s3.amazonaws.com/yourfile

As such, if I wanted to access a file "logo.png" inside of the "stammy" bucket, it would be found at http://stammy.s3.amazonaws.com/logo.png
... but not yet. You first need to configure the Access Control List (ACL), which tells S3 which files are safe to make public. In this case, we need to give everyone read access to "logo.png". How you edit the ACL varies with how you are interfacing with S3, whether through an application or your own code, but with S3fox all you need to do is right-click a file and select "Edit ACL" to bring up the dialog.
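Under the hood, that dialog writes an ACL document against the object. Granting READ to the global AllUsers group is what makes a file publicly readable; the policy looks roughly like this (owner details elided):

```xml
<AccessControlPolicy xmlns="http://s3.amazonaws.com/doc/2006-03-01/">
  <Owner>
    <ID>...</ID>
  </Owner>
  <AccessControlList>
    <!-- This grant makes the object readable by anyone -->
    <Grant>
      <Grantee xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
               xsi:type="Group">
        <URI>http://acs.amazonaws.com/groups/global/AllUsers</URI>
      </Grantee>
      <Permission>READ</Permission>
    </Grant>
  </AccessControlList>
</AccessControlPolicy>
```

Whether you use S3fox, another client or your own code, they all ultimately set this same grant on the object.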