When it comes to high performance, speed and optimization, I'm there. I have written several articles about this subject in the past with 5 Ways to Speed Up Your Site, How To: Optimize Your CSS Even More and a brief look into image maps to reduce HTTP calls and bandwidth usage. However, much like modifying a car, there is always something more that can be done.

Cars cool down and await their turn to head onto the track at Road Atlanta.

First off, let's talk about what I have done before. With the last redesign I went through my WordPress theme and gutted it almost entirely: removing features I don't use, rewriting some PHP to be less server-taxing and compressing CSS with gzip via PHP. I actually don't cache pages, since the WordPress plugin that handles page caching requires that pages not be compressed. My server can handle non-cached pages just fine, and the gains from compressing the XHTML markup were substantial - something like 35kB down to 8kB. It boils down to a trade-off: a quickly served page with a larger file size, or a page served slightly slower that is smaller and faster to download.
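If you want to try output compression yourself, it can be a one-liner at the top of your theme's header template. This is a minimal sketch of the technique, not my exact theme code:

```php
<?php
// Compress everything that follows with gzip when the browser
// advertises support via its Accept-Encoding header.
// (Skip this if zlib.output_compression is already on in php.ini.)
ob_start('ob_gzhandler');
?>
<!-- the rest of the XHTML template follows as usual -->
```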

MySQL Woes

Not caching pages does lead to an inevitable MySQL slowdown, though. Database queries are many times slower than PHP itself, and PHP ends up waiting on MySQL to fetch things like posts and sidebar items such as the popular and latest posts. While that is still an issue for me, I have MySQL 4 query caching set up at the server level, or so my friends at Media Temple tell me.
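If you run your own server, you can verify the query cache is actually on without bugging your host. A quick sketch in PHP with placeholder credentials - the same SHOW VARIABLES query works just as well from the mysql command line:

```php
<?php
// Ask MySQL whether the query cache is enabled and how big it is.
$db = mysql_connect('localhost', 'user', 'password'); // placeholders
$result = mysql_query("SHOW VARIABLES LIKE 'query_cache%'");
while ($row = mysql_fetch_assoc($result)) {
    echo $row['Variable_name'] . ' = ' . $row['Value'] . "\n";
}
// A query_cache_type of ON with a non-zero query_cache_size
// (both set in my.cnf) means repeated queries are served from memory.
?>
```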

This brings me to my next point: web statistics tracking. I have been using a locally-installed web stats application called Mint for 2 years. It has incredible expandability with add-on modules called peppers, but Mint succumbs to the MySQL slowdown mentioned above. Every page tracked by Mint includes a small JavaScript file that holds up the rest of the page load for a time proportional to the speed of your server's MySQL execution and the number of peppers installed.

As such, I have been using Google Analytics for roughly 2 weeks and plan on using it full time at the end of this month. Analytics works the same way as Mint, by including a JavaScript tracking file with each page load, but Google hosts the JavaScript and their servers are a tad faster and more optimized than yours. So what does this mean in numbers? In my testing with the Firebug net tool, Mint consumed approximately 200 to 300ms of loading time (with crazy peaks of over 500ms at times), whereas Google Analytics only used around 40 to 60ms. That was with only basic Mint peppers installed - Visits, Referrers and Pages. I imagine your numbers would be much higher if you use more, as a product of the additional JavaScript included with each page load and the MySQL issues.
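For reference, the tracking code Google hands you is just a couple of lines pasted before the closing body tag; the account ID below is a placeholder:

```html
<script src="http://www.google-analytics.com/urchin.js" type="text/javascript"></script>
<script type="text/javascript">
    _uacct = "UA-XXXXXX-X"; // your Analytics account ID
    urchinTracker();        // records the pageview on Google's servers
</script>
```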

You might be thinking something along the lines of "you're worried about 200 damn ms?" Yeah, I am. When my site loads in 700ms on my Internet connection, 200ms is a big deal.

S3

Amazon's super-affordable and super-fast storage solution S3 is nothing new. I have talked about it before, Jeff has talked about it before, and Jeremy has talked about it before, along with the rest of the Internet. While I have been using S3 to back up personal files for months, I never ventured into using it on my site. That changed last night when I decided to experiment with hosting frequently-served images, such as my logo, mugshot and sidebar images, on Amazon S3.

Firebug's net panel showing images served from S3 on PSTAM.com.

I decided not to host every image on S3, such as those in articles, like Jeff Atwood did, because that seemed like too much trouble and would break my writing workflow of uploading images directly inside the WordPress admin panel. However, for people paying for bandwidth, S3 is an excellent way to off-load images and pay a fraction of the bandwidth cost. Amazon's servers are fast, which will definitely be noticeable if you're on a low-level or shared host. For me though, the difference in latency and download times between images hosted locally and on S3 was rather negligible.

Then why am I still doing it? The answer is twofold. First, by setting up a static file host like S3, which will generally have lower latency, greater maximum throughput and the ability to cope with more requests per second than your own host, you give your server a chance to keep up with high traffic levels. Second, by utilizing more than one hostname (i.e., yoursite.com is one hostname, yourbucket.s3.amazonaws.com is another) to serve your content, you increase your effective bandwidth: browsers limit how many simultaneous connections they open per hostname, so a second hostname lets more downloads run in parallel, especially if HTTP keepalives are enabled.
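A small helper makes this painless in a WordPress theme. The static_url() function below is purely hypothetical and just illustrates the idea:

```php
<?php
// Hypothetical helper: point frequently-served static files at the
// S3 hostname so they download in parallel with locally-hosted assets.
function static_url($file) {
    return 'http://stammy.s3.amazonaws.com/' . $file;
}
?>
<img src="<?php echo static_url('logo.png'); ?>" alt="logo" />
```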

The Firebug net tool helping my point: Mint versus S3. Time spans from left to right.

Of course there are some stipulations with that last part. Spreading your content across slow hostnames won't help your case, and there is a point at which the latency of an extra DNS lookup for each additional hostname outweighs the gains. Amazon S3 is a prime example of a speedy hostname to throw static files on. If your users have HTTP pipelining enabled, they'll see even greater benefits. A pipelining-enabled browser sends several requests down a single persistent (keep-alive) connection without waiting for each response to come back, so assets arrive in a nearly uninterrupted stream of packets. Without pipelining, the browser issues one request at a time on each connection and waits for the response before sending the next, paying a full round trip per item.

HTTP Pipelining

If Firefox is your primary browser, enabling HTTP pipelining is a simple about:config tweak, shown below.
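Type about:config into the address bar, filter for pipelining and flip these preferences. The values are the ones I'd use, with the usual caveat that some older servers and proxies don't play nicely with pipelined requests:

```
network.http.pipelining = true
network.http.proxy.pipelining = true
network.http.pipelining.maxrequests = 8
```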

S3 101

To avoid the inevitable questions about how to get started with using S3 as a file server, I'll give you a brief walkthrough. Assuming you have an S3 account and some way of connecting to it (I prefer S3fox), create a new bucket. Buckets can be thought of as the root-level directories of your S3 account. Bucket names are global, so you'll have to pick a unique one, much like picking a unique user name on a social networking site.

In my case, my bucket name is "stammy". To access a file within that bucket, the URL syntax is the following: http://[bucket_name].s3.amazonaws.com/[file_name]

As such, if I wanted to access a file "logo.png" inside of the "stammy" bucket, it would be found at http://stammy.s3.amazonaws.com/logo.png

... but not yet. You first need to configure the Access Control List (ACL), which tells S3 what is safe to give public permission to. In this case, we need to give everyone read access to "logo.png". The method of editing the ACL varies by how you are interfacing with S3, whether through an application or your own code, but with S3fox all you need to do is fire up a dialog box by right-clicking on a file and selecting "Edit ACL".
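If you're interfacing with S3 from your own code instead, the ACL can be set at upload time with the x-amz-acl header. Here's a rough PHP sketch of a PUT against the S3 REST API - the keys are placeholders, and this is only meant to show what tools like S3fox do under the hood:

```php
<?php
// Upload logo.png to the "stammy" bucket with a public-read ACL.
$accessKey = 'YOUR_ACCESS_KEY';   // placeholder
$secretKey = 'YOUR_SECRET_KEY';   // placeholder
$bucket    = 'stammy';
$file      = 'logo.png';
$type      = 'image/png';
$date      = gmdate('D, d M Y H:i:s T');

// S3 authenticates requests with an HMAC-SHA1 signature
// over a canonical string describing the request.
$stringToSign = "PUT\n\n$type\n$date\nx-amz-acl:public-read\n/$bucket/$file";
$signature = base64_encode(hash_hmac('sha1', $stringToSign, $secretKey, true));

$ch = curl_init("http://$bucket.s3.amazonaws.com/$file");
curl_setopt($ch, CURLOPT_UPLOAD, true); // makes this a PUT
curl_setopt($ch, CURLOPT_INFILE, fopen($file, 'rb'));
curl_setopt($ch, CURLOPT_INFILESIZE, filesize($file));
curl_setopt($ch, CURLOPT_HTTPHEADER, array(
    "Date: $date",
    "Content-Type: $type",
    "x-amz-acl: public-read", // everyone gets read access
    "Authorization: AWS $accessKey:$signature",
));
curl_exec($ch);
curl_close($ch);
?>
```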

Overall

By moving to Google Analytics and off-loading commonly-served files to S3, my homepage will load in around 500ms (I have not removed Mint yet) based on my testing on a 5Mbit line. What's next? I would love to come up with a bulletproof way of caching pages, as I've had trouble with WP-Cache in the past. I am considering hosting my favicon on S3 after reading about a guy whose favicon consumed 27GB of bandwidth in a month - though that was a 70kB favicon! I am also in the process of experimenting more thoroughly with the gains of spreading files across hostnames - gzip-compressed, locally-hosted CSS versus a non-compressed S3-hosted CSS file. For the most part I am still an amateur with web optimization, but I always get a little smirk when I can eliminate a few milliseconds here and there. As they say in drag racing, for every 100 pounds shaved off your car's weight, your quarter-mile (1,320 feet) time drops by about 0.1 seconds. In other words, the small things add up.


