
Caching for AOL Journals

We're continuing to work on improving the scalability of the AOL Journals servers.  Our major problem recently has been traffic spikes to particular blog pages or entries caused by links on the AOL Welcome page.  Most of this peak traffic comes from AOL clients on AOL connections, which means it flows through AOL's caches, known as Traffic Servers.  Unfortunately, a small compatibility issue prevented the Traffic Servers from caching pages whose only cache validator was an ETag.  That was resolved last week in a staged Traffic Server rollout.  Everything we've tested looks good so far: the Traffic Servers are correctly caching the pages that can be cached, and not the ones that can't.  We're continuing to monitor things, and we'll see what happens during the next real traffic spike.

The good thing about this type of caching is that our servers are still notified of every request via validation requests, so we can track traffic fairly closely, and we can add caching without making major changes to the pages themselves.
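To make the validation flow concrete, here's a minimal sketch (in Python, and emphatically not our actual server code) of an origin that serves pages with an ETag and answers a cache's conditional requests with 304 Not Modified.  The page content, the ETag scheme, and the port are all hypothetical.

```python
import hashlib
from http.server import BaseHTTPRequestHandler, HTTPServer

def render_page():
    # Hypothetical stand-in for rendering a Journals blog page.
    return b"<html><body>A blog entry</body></html>"

class ValidatingHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        body = render_page()
        # A strong ETag derived from the page content (illustrative choice).
        etag = '"%s"' % hashlib.md5(body).hexdigest()
        if self.headers.get("If-None-Match") == etag:
            # The cache's copy is still good: answer 304 with no body.
            # The origin still sees this request, which is what lets us
            # track traffic closely even with caching in front.
            self.send_response(304)
            self.send_header("ETag", etag)
            self.end_headers()
            return
        # Cache miss or stale copy: send the full page plus its ETag.
        self.send_response(200)
        self.send_header("ETag", etag)
        self.send_header("Content-Type", "text/html")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

if __name__ == "__main__":
    HTTPServer(("", 8080), ValidatingHandler).serve_forever()
```

Each 304 is far cheaper to produce than the full page, but it's still a request our servers have to handle.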

The downside, of course, is that our servers can still get hammered if the volume of validation requests climbs high enough.  The same is true even for time-based caching once you get enough traffic: you still have to scale up your origin server farm to accommodate peak load, because caches don't really guarantee they won't pass traffic spikes through.
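The difference between the two approaches comes down to the response headers.  Here's a rough sketch of the trade-off; the max-age value and ETag are placeholders, not what we actually serve.

```python
def cache_headers(time_based):
    """Illustrative only: validation-based vs. time-based caching policies."""
    if time_based:
        # Caches may serve this page for up to 60 seconds without
        # contacting the origin at all -- quieter for us, but staler
        # for readers, and spikes still reach us when entries expire.
        return {"Cache-Control": "max-age=60", "ETag": '"abc123"'}
    # Validation-based: caches may store the page but must revalidate
    # on every use, so the origin sees (cheap) traffic for each hit.
    return {"Cache-Control": "no-cache", "ETag": '"abc123"'}
```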

We're continuing to add more caching to the system, carefully and with close monitoring.  We're currently evaluating a Squid reverse caching proxy -- it's open source, it's widely deployed, and it would give us a lot of flexibility in our caching strategy (a rough sketch of what a Squid setup looks like is below).  We're also modifying our databases to both distribute load and take advantage of in-memory caching, of course.  But we can scale much higher and more cheaply by pushing caching out to the front end and avoiding any work at all in the 80% case where it's not really needed.
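We haven't settled on a configuration yet, but for the curious, a Squid accelerator (reverse proxy) setup conventionally looks something like this.  This sketch uses Squid 2.6-style accel directives; the origin hostname and ports are placeholders, not our production values.

```
# Illustrative sketch only -- placeholder hosts and ports.
http_port 80 accel defaultsite=journals.aol.com vhost

# Send cache misses and validation requests to the origin farm.
cache_peer origin1.example.internal parent 8080 0 no-query originserver name=journals

acl journals_site dstdomain journals.aol.com
cache_peer_access journals allow journals_site
http_access allow journals_site
http_access deny all
```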
