The bytes going out are supposed to be a copy of the bytes going in. That obviously means that the server must receive the bytes before it can send them back.
In the XKCD example, how does the server decide to send something back after it has received the 3 bytes of "HAT"? Why did it wait until it had received 3 bytes instead of stopping at 1 or 2 and allocating and sending back a buffer starting with just "H" or "HA"? The server must somehow know that the actual data length is only 3, not 500, in order to reply after 3 bytes, even if it mistakenly uses the 500 for the amount of memory to put in the reply.
But hey, I decided to look it up myself in RFC 6520. It turns out that the client does provide the length redundantly, just as I suspected. There is a length defined for the overall message, and within the message there is a second length defined for its payload, with the remainder presumed to be padding. It is this pair of dependent lengths that allows the client to send inconsistent values and trigger the bug.
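Roughly what that looks like in code (a minimal sketch in C, not the actual OpenSSL source; record_len here stands for the outer, record-layer length and payload_length for the field carried inside the message):

#include <stdint.h>
#include <stdlib.h>
#include <string.h>

/* Hypothetical echo routine: builds a heartbeat_response for a received
 * HeartbeatMessage.
 *   record_len     = how many bytes the record layer actually delivered
 *   payload_length = the length field inside the message itself        */
static unsigned char *build_heartbeat_reply(const unsigned char *msg,
                                            size_t record_len,
                                            size_t *reply_len)
{
    if (record_len < 3)
        return NULL;

    /* HeartbeatMessage layout: 1-byte type, 2-byte payload_length,
     * then payload, then padding.                                      */
    uint16_t payload_length = (uint16_t)((msg[1] << 8) | msg[2]);

    /* Vulnerable behaviour: trust payload_length (the 500) even though
     * the record only contained record_len bytes (the 3 bytes of "HAT").
     * The memcpy then reads past the received data into adjacent memory. */
    unsigned char *reply = malloc(3 + (size_t)payload_length);
    if (reply == NULL)
        return NULL;
    reply[0] = 0x02;              /* heartbeat_response type            */
    reply[1] = msg[1];            /* echo the claimed length            */
    reply[2] = msg[2];
    memcpy(reply + 3, msg + 3, payload_length);  /* over-read happens here */
    *reply_len = 3 + (size_t)payload_length;
    return reply;
}

So the record layer tells the server when the 3 bytes have all arrived, and the payload_length field is what a buggy server then trusts when sizing and filling the reply.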
RFC 6520 even explicitly warns against this bug:
"If the payload_length of a received HeartbeatMessage is too large, the received HeartbeatMessage MUST be discarded silently."