posted 21 years ago
Hi All!
I have a client-server application in which the server sends packets to the client in compressed form. For each packet, the server first sends the size of the compressed data (the data size, as 2 bytes), followed by the actual compressed bytes (the data).
At the client end, I first read the size of the compressed data and then read exactly that many bytes from the socket. This is done for every packet sent by the server; there is no delimiter between any two packets.
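For clarity, here is a minimal sketch of the framing scheme I have in mind, assuming a 2-byte big-endian length prefix over DataOutputStream/DataInputStream (the names sendPacket/readPacket are just for illustration):

import java.io.DataInputStream;
import java.io.DataOutputStream;
import java.io.IOException;

public class Framing {

    // Server side: write the 2-byte length, then the compressed bytes.
    static void sendPacket(DataOutputStream out, byte[] compressed) throws IOException {
        out.writeShort(compressed.length); // 2-byte big-endian length prefix
        out.write(compressed);             // the data itself, no delimiter
        out.flush();
    }

    // Client side: read the 2-byte length, then exactly that many bytes.
    static byte[] readPacket(DataInputStream in) throws IOException {
        int length = in.readUnsignedShort(); // trust the length field completely
        byte[] data = new byte[length];
        in.readFully(data);                  // blocks until 'length' bytes arrive
        return data;
    }
}

Note that readPacket trusts the length field completely; that trust is exactly what my question below is about.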
Now, my concern is that, since the framing depends entirely on the data size, if the size for one packet is somehow read incorrectly, then all subsequent packets become erroneous as well. Consider the following scenario:
1. Packet 1: Length ---> 20 (sent as 2 bytes)
             Data ---> the first 20 bytes available
2. Packet 2: Length ---> 25 (sent as 2 bytes)
             Data ---> the next 25 bytes available
Now, suppose the client somehow gets the length of packet1 as 21 instead of 20. The client will then consume the 20 data bytes of packet1 plus 1 extra byte, namely the MSB of packet2's length field. So not only has packet1 become corrupt, but packet2 has as well: the client next reads packet2's "length" from the remaining LSB (25, i.e. 0x19) together with the first data byte of packet2, which yields a garbage value in the thousands instead of 25. As a result, all packets after packet2 will also become corrupt in this way.
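To make the failure concrete, here is a small self-contained simulation of that scenario (hypothetical packet contents, same 2-byte framing as above):

import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.DataInputStream;
import java.io.DataOutputStream;
import java.io.IOException;

public class DesyncDemo {
    public static void main(String[] args) throws IOException {
        // Build a stream holding two correctly framed packets: 20 bytes, then 25 bytes.
        ByteArrayOutputStream buf = new ByteArrayOutputStream();
        DataOutputStream out = new DataOutputStream(buf);
        out.writeShort(20);
        out.write(new byte[20]);
        out.writeShort(25);
        out.write(new byte[25]);

        DataInputStream in = new DataInputStream(new ByteArrayInputStream(buf.toByteArray()));

        // Simulate the fault: the client takes packet1's length to be 21 instead of 20.
        in.readUnsignedShort(); // actually reads 20, but we pretend it was 21
        in.skipBytes(21);       // consumes 20 data bytes + the MSB of packet2's length

        // The next "length" is packet2's LSB (25 = 0x19) followed by a data byte:
        int bogus = in.readUnsignedShort();
        System.out.println("Parsed length of packet2: " + bogus); // prints 6400, not 25
    }
}

Running this prints a parsed length of 6400 rather than 25, so the client would try to read 6400 bytes and every later packet boundary is lost.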
Can there be some way such that even if the length of any one packet is received incorrectly, only that packet is affected and no others?
Waiting for a reply from all you learned people out there....