------ Original Message ------ From: "Roberto Peon" grmocg@...
SPDY compresses HTTP headers using an LZ-history-based algorithm, which means that previous bytes are used to compress subsequent bytes. So any packet capture that does not include all the traffic sent over that connection will be completely opaque -- no mathematical way to decode the HTTP. Even with all the traffic, a stream decoder will be a tricky thing to build, because packets depend on each other.
I know there's a SPDY decoder plugin for Wireshark, but I'll defer to people more
knowledgeable about packet analysis tools to cover that area.
The OP is right about this, btw. Technically it is possible that the compressor has flushed its history window after 2k of completely new data, but there is no guarantee of that, so interpreting a stream starting from the middle may be extremely difficult.
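The history dependence described above is easy to demonstrate with Python's zlib module, which implements the same DEFLATE/LZ77 scheme that SPDY's header compression builds on. This is an illustrative sketch, not SPDY itself (it omits SPDY's shared header dictionary): two header blocks compressed through one compressor object, where the second block's output back-references the first and cannot be decoded on its own.

```python
import zlib

# One long-lived compressor per connection, as in SPDY: the second
# header block is compressed against the history of the first.
comp = zlib.compressobj()
part1 = comp.compress(b"GET / HTTP/1.1\r\nHost: example.com\r\n" * 4)
part1 += comp.flush(zlib.Z_SYNC_FLUSH)
part2 = comp.compress(b"GET /img HTTP/1.1\r\nHost: example.com\r\n" * 4)
part2 += comp.flush(zlib.Z_SYNC_FLUSH)

# A capture of the whole connection decodes fine:
full = zlib.decompressobj().decompress(part1 + part2)
assert full.startswith(b"GET / HTTP/1.1")

# A capture that starts mid-stream (missing part1) is opaque:
try:
    zlib.decompressobj().decompress(part2)
    print("decoded mid-stream (unexpected)")
except zlib.error as e:
    print("cannot decode mid-stream:", e)
```

The same effect is why a Wireshark-style decoder has to track each connection's compression state from its very first frame.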
Seems like a fine tradeoff for the latency savings that we get on low-BW links, though.
I think it basically means that compression, or any transport-level transform, must be possible to switch off when debugging. Which means optional/negotiated.
I think it should be mandatory to implement (so no discovery of the feature is needed), but optional to use.
For something like TLS or gzip I've absolutely no problem with that.
I have to analyse packet dumps of HTTP most days, as I'm sure do many others on this list. We haven't yet evolved as a species to the stage where we don't make mistakes.
I think it's a vitally important facility for discovering implementation errors, which in many cases is required to resolve issues.