On Magic

We discovered an interesting IE6 feature when we pushed out caching changes earlier this week for Journals.  For posterity, here are the technical details.  Our R8 code started using "conditional GET" caching, meaning that we supported both If-Modified-Since: and If-None-Match: HTTP headers.  The way this works is that, if a client has a version of a page in its cache, it can send one or both of these headers to our servers.  Like this:
If-Modified-Since: Tue, 26 Sep 2006 21:47:18 GMT
If-None-Match: "1159307238000-ow:c=2303"
If-None-Match, which passes an "entity tag" or ETag, is better to use and was designed to replace the If-Modified-Since header.   (If-Modified-Since has granularity only down to a second, and can't  be used to indicate non-time-based changes.)  In our case we actually have two versions of our pages which can be served up, one for viewers and another one for owners.  We really only want to cache the viewers' page.
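The revalidation request above can be sketched in a few lines. This is a hypothetical client-side helper (the names `CacheEntry` and `conditional_headers` are illustrative, not from our code): given what the client has cached, it builds the validator headers to send back to the server.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class CacheEntry:
    last_modified: Optional[str]  # e.g. "Tue, 26 Sep 2006 21:47:18 GMT"
    etag: Optional[str]           # e.g. '"1159307238000-ow:c=2303"'

def conditional_headers(entry: CacheEntry) -> dict:
    """Build the conditional-GET headers for revalidating a cached page."""
    headers = {}
    if entry.last_modified:
        headers["If-Modified-Since"] = entry.last_modified
    if entry.etag:
        headers["If-None-Match"] = entry.etag
    return headers

entry = CacheEntry("Tue, 26 Sep 2006 21:47:18 GMT", '"1159307238000-ow:c=2303"')
print(conditional_headers(entry))
```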

When our server sees a request like the one above, it first does a quick check (in this case it'll ignore the If-Modified-Since and use the ETag) to see if the client already has the latest version; if it does, it returns a 304 Not Modified result.  The big win is that this can be done very very quickly and efficiently, while building a 200KB web page takes lots of work.  If the client doesn't have the right version, though, the server returns a 200 and sends new headers, like these:
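The server-side check described above amounts to something like the following sketch (function and parameter names are hypothetical): when both validators are present, the ETag wins and If-Modified-Since is ignored, and a 304 goes out only on an exact ETag match.

```python
def check_not_modified(request_headers: dict, current_etag: str,
                       current_last_modified: str) -> bool:
    """Return True if the client's cached copy is current (send 304),
    False if the full page must be rebuilt and sent (send 200)."""
    if_none_match = request_headers.get("If-None-Match")
    if if_none_match is not None:
        # ETag takes precedence: ignore If-Modified-Since entirely.
        return if_none_match == current_etag
    return request_headers.get("If-Modified-Since") == current_last_modified

# A client holding the current version gets a cheap 304...
print(check_not_modified(
    {"If-None-Match": '"1159307238000-c=2303"'},
    '"1159307238000-c=2303"', "Tue, 26 Sep 2006 21:47:18 GMT"))  # True

# ...while a stale ETag forces a full 200, even if the date matches.
print(check_not_modified(
    {"If-None-Match": '"1159307238000-ow:c=2303"',
     "If-Modified-Since": "Tue, 26 Sep 2006 21:47:18 GMT"},
    '"1159307238000-c=2303"', "Tue, 26 Sep 2006 21:47:18 GMT"))  # False
```

That second case is exactly the owner/viewer situation below: same timestamp, different ETag, so the server correctly sends fresh content.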

Last-Modified: Tue, 26 Sep 2006 21:47:18 GMT
Etag: "1159307238000-c=2303"

If you're obsessive about details you might notice that the modification date is the same as before, but the ETag has changed (the -ow:c has changed to a -c).  When the second request was made, it sent cookies that told the server that the user was the owner of the blog.  So the page is different and therefore the ETag is different, but the last date modified is the same.  We're expecting browsers and caches to detect the change and refresh the page.
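Reading the example values, the ETag scheme appears to be the modification time in milliseconds, an "ow:" marker when the requester owns the blog, and a comment count. A speculative reconstruction (the exact format is inferred from the two examples, not documented):

```python
def make_etag(modified_ms: int, is_owner: bool, comment_count: int) -> str:
    """Build an ETag that varies with both content and viewer role,
    so the owner and viewer versions of a page never collide."""
    role = "ow:" if is_owner else ""
    return f'"{modified_ms}-{role}c={comment_count}"'

print(make_etag(1159307238000, True, 2303))   # "1159307238000-ow:c=2303"
print(make_etag(1159307238000, False, 2303))  # "1159307238000-c=2303"
```

Folding the role into the ETag is what lets the server distinguish the two page variants even though their Last-Modified timestamps are identical.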

This all works fine... except for IE6 (and the AOL client, which uses IE6 under the hood).  IE6 seems to see the Last-Modified: timestamp above and simply stop, ignoring the Etag: header and the fact that we're returning a 200 response with new content.  I've sat and watched the data flow in and out of my Internet connection and verified that IE just drops the 60K or so of content on the floor, as well as the new ETag, and re-uses its old version.  The only way to prevent it is to force a reload using ctrl-Reload, or clearing your Temporary Internet Files.

What this means is that if you change "who you are" by logging in or out, and nothing else changes, you will get a stale, cached version of your own blog's page.  Which is certainly not good.

As of this morning, we're running with caching turned back on, but with a bug fix.  The bug fix is simple: don't send Last-Modified: headers.  So we only send back the Etag:
Etag: "1159307238000-c=2303"
Which forces IE6 to pay attention to it and fixes the problem.  IE7, by the way, works either way; go Microsoft!
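The workaround, sketched as code (names are illustrative): build the response's validator headers with only an ETag, leaving Last-Modified out so IE6 has no timestamp to latch onto.

```python
def validator_headers(etag: str, last_modified: str,
                      send_last_modified: bool = False) -> dict:
    """Return cache-validator headers for a 200 response.
    send_last_modified stays False as the IE6 workaround."""
    headers = {"Etag": etag}
    if send_last_modified:
        headers["Last-Modified"] = last_modified
    return headers

print(validator_headers('"1159307238000-c=2303"',
                        "Tue, 26 Sep 2006 21:47:18 GMT"))
```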

This all means that we're not going to try to enable caching for non-ETag-aware clients and caches.  Since non-ETag-aware pretty much equates to old or buggy, and not having caching is just a minor performance hit, this seems to be a pretty reasonable approach in theory.  The question now is, will practice accord with theory?  We really need people to hammer on it over the next few days and give us feedback.  See Stephanie's post: Like Magic, We're Back Where We Began... and please leave us feedback!


  1. Real magic would be providing a way to communicate with AOL, like about the weather on the netscape home page that has shown Winnipeg, Manitoba, Canada, at 21°C for the past year and a half.

