As I mentioned in another thread, David Baron and I attended a Velocity
Conference-related event where a bunch of heavyweights from the devops
community got together with a bunch of browser people to talk about the
state of web performance. It was a pretty refreshing event - smart and
experienced folks talking openly in an unconference sort of setting
about making the web perform better. I learned a lot.
I want to report back on some interesting things from two of the
sessions: the browser session and the SSL session.
The "browser" session. Devops folks were invited to bring ideas and
report on bottlenecks in their designs to the browser folks in the room.
David fielded a bunch of things in the content space, but as far as
networking goes - as mentioned in a previous thread - the HTTP cache was
a big topic.
There was a general feeling, not backed by specific cases, that browsers
were not delivering the cache hit rates these folks expected. They
definitely knew how to set Cache-Control headers - and the concern was
not limited to a particular browser.
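To make the header-setting point concrete, here is a minimal sketch of per-asset-class Cache-Control policies. The function name and the specific values are my own illustration, not recommendations from the session:

```python
# Sketch: explicit Cache-Control policies per asset class.
# The policy choices below are illustrative assumptions, not advice
# from the session participants.
def cache_control_for(path):
    """Return a Cache-Control header value based on the asset type."""
    if path.endswith((".js", ".css", ".png", ".jpg", ".woff")):
        # Long-lived, shared-cacheable static assets (ideally with
        # versioned URLs so they can be cached for a year).
        return "public, max-age=31536000"
    if path.endswith(".html"):
        # Revalidate HTML on every use so users see fresh markup.
        return "no-cache"
    # Default: cache briefly, and only in the user's private cache.
    return "private, max-age=60"

print(cache_control_for("app.js"))
print(cache_control_for("index.html"))
```

Even with headers like these set correctly, the complaint in the room was that observed hit rates still fell short.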
If you haven't seen Will Chan's (of Chrome) recent post on Chrome's
cache metrics, please check it out (the bits attributed to me are not a
correct attribution, but that really isn't the important part). He talks
about how it takes ~2 days for 75% of users to fill up a cache in the
200-300MB range (a rate that definitely won't be linear), and how 25%+
of users will clear their cache at least once a week (either manually or
due to error recovery), leading to an overall generalized hit rate of
about 1/3.
It would be very interesting to carefully look at some top-n sites and
decide what we think the optimal hit rate should be.
The desire to have a cache API for querying, asynchronously loading,
etc. came along with this.
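To illustrate what was being asked for, here is a hypothetical sketch of a query-able, asynchronously primeable cache, written in Python purely for illustration. No browser exposes this interface; every name here is invented:

```python
import asyncio

class HypotheticalCache:
    """Illustrative only: a cache an application could query and prime
    asynchronously. The interface and names are invented, not any real
    browser API."""
    def __init__(self):
        self._store = {}

    async def query(self, url):
        # Ask whether a response for `url` is cached, without fetching it.
        return url in self._store

    async def prime(self, url, body):
        # Asynchronously load a resource into the cache ahead of use.
        await asyncio.sleep(0)          # stand-in for a real network fetch
        self._store[url] = body

    async def get(self, url):
        return self._store.get(url)

async def demo():
    cache = HypotheticalCache()
    assert not await cache.query("https://example.com/app.js")
    await cache.prime("https://example.com/app.js", b"console.log('hi')")
    return await cache.query("https://example.com/app.js")

print(asyncio.run(demo()))  # True
```

The appeal is that a site could check and warm the cache itself instead of guessing at what the browser has kept.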
The topic of browsers doing Global Load Balancing was brought up.
(Apparently it is brought up regularly). The point is that at least from
a network performance point of view browsers are in a better position to
do this than any of the third party solutions that get used. There are
lots of problems with just implementing this based on the A record DNS
set, but there is probably some way to address the issue effectively.
It's an area for innovation.
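The core idea can be sketched simply: given the DNS A-record set, a browser could probe each address and prefer the lowest-latency one. The code below simulates the measurements; the addresses and RTTs are made up, and a real client would time actual TCP connects:

```python
# Sketch: a browser doing its own global load balancing could probe each
# address in the DNS A-record set and prefer the lowest-latency one.
# The measurements here are simulated; a real client would time TCP
# connection setup to each address.
def pick_fastest(a_records, measure_latency):
    """Return the address with the lowest measured latency (seconds)."""
    return min(a_records, key=measure_latency)

# Simulated RTTs for three hypothetical edge nodes.
simulated_rtt = {"192.0.2.1": 0.120, "192.0.2.2": 0.035, "192.0.2.3": 0.080}
best = pick_fastest(list(simulated_rtt), simulated_rtt.get)
print(best)  # 192.0.2.2
```

The hard problems the session flagged - stale measurements, operators who want routing control, partial A-record sets - all live outside this little selection step.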
I initiated an unconference session on barriers to going 100% SSL on the
internet. I wanted to know what the devops folks saw as standing in the
way. Some of it matched exactly what I expected to hear: SNI support in
IE on XP is lacking, cert management and PKI are logistically painful
and ineffective, OCSP causes performance problems, there are management
problems separating application owners from network owners around things
like Citrix load balancers, and there are challenges around hierarchical
caches.
Barriers that I learned about:
1] $$$ is a lot more than just the cost of the cert - CDNs
disincentivize SSL traffic by charging more for it, on top of the price
of the cert itself.
2] Mixed content is rightly rejected or warned about. (I saw this on
firefoxflicks.org with IE the other day - yes, bug filed.) This creates
a real problem for a transition path - advertising networks and third
party content are not always SSL-enabled. I was told this one roughly a
billion times :)
3] Server computational latency was cited as 1.2ms per handshake, but
interestingly, not one person cited the network latency of TLS (other
than the OCSP problem) as an issue.
4] The knowledge gap around all this gunk is a huge problem. One leading
member of the community, well respected for understanding all the
implications of web performance, admitted that while they supported the
privacy goals here, they knew little about the details, tradeoffs, or
requirements of the whole space. It's not perceived as central - and
that's a major sticking point.
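The CPU figure in item 3 works out to a surprisingly high handshake rate. A quick back-of-envelope check (the per-core scaling assumption is mine, not from the session):

```python
# Back-of-envelope from item 3: at 1.2 ms of server CPU per TLS
# handshake, one core can complete roughly 1 / 0.0012 = ~833 full
# handshakes per second. Linear scaling across cores is an assumption.
HANDSHAKE_CPU_SECONDS = 0.0012

def handshakes_per_second(cores=1):
    return int(cores / HANDSHAKE_CPU_SECONDS)

print(handshakes_per_second())   # 833
print(handshakes_per_second(8))  # 6666
```

Which helps explain why the room treated the handshake CPU cost as a manageable line item rather than a blocker.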
Misc: I heard that Apache 2.4 has OCSP stapling support.
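For anyone wanting to try it, my understanding is that it looks roughly like the fragment below - verify the directive names and cache sizing against the mod_ssl documentation before relying on this:

```apache
# Hedged sketch of enabling OCSP stapling in Apache 2.4's mod_ssl.
# SSLStaplingCache must live in the server-wide config, outside any
# <VirtualHost> block.
SSLStaplingCache "shmcb:logs/ssl_stapling(128000)"
SSLUseStapling On
```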