I have coded a simple setup: a client connects to a server, the server starts sending bytes, and the client reads them with a 1 second delay. I have two implementations of this: one using Java IO and one using Java NIO. By printing the number of bytes sent by the server and the number of bytes received by the client, I try to measure the size of the socket buffers on my system. And of course, I double-check it via the socket.getReceiveBufferSize() method.
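For reference, the buffer sizes I compare against can be queried even on an unconnected socket. A minimal sketch (the class name is mine, not from the original code):

```java
import java.net.Socket;

public class BufferCheck {
    public static void main(String[] args) throws Exception {
        // An unconnected socket already reports the OS default buffer sizes
        // (the kernel may adjust them once the socket actually connects).
        Socket socket = new Socket();
        System.out.println("receive buffer: " + socket.getReceiveBufferSize());
        System.out.println("send buffer: " + socket.getSendBufferSize());
        socket.close();
    }
}
```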
This has no practical purpose; it's just an exercise while studying Java IO.
1) When I run the old IO version, the difference between sent and received bytes stays around 8000 bytes the whole time. That is, I see logs like "sent 33000 bytes" and "received 25000 bytes" at the same moment in the two consoles. Both the receive and send buffers, checked via the corresponding getters, are around 8000 bytes each, i.e. their sum is 16000.
2) When I run the NIO code, the delta is always around 27000 bytes, while the reported receive and send buffer sizes remain the same.
The questions are:
Is my setup correct for the purpose of measuring the socket buffers?
Where do the bytes that are in flight between "sent" and "received" reside?
And if they sit in the socket buffers, why do the deltas between sent and received data differ from the buffer sizes?
Windows, Java 6, client and server located on the same machine.
Here is the code for the NIO setup:
Client, which reads data once per second.
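The original client code did not survive in the post, so here is a hypothetical reconstruction based on the description and the log output. The class name, port number, and the 1000-byte read buffer (inferred from the 1000-byte increments in the logs) are all assumptions; the loop is bounded by an iteration count so the sketch can terminate, whereas the original presumably loops forever.

```java
import java.io.IOException;
import java.net.InetSocketAddress;
import java.nio.ByteBuffer;
import java.nio.channels.SocketChannel;

// Hypothetical reconstruction -- the original client was not included
// in the question; names and the port are assumptions.
public class NioClient {
    static final int PORT = 3000; // assumed port, must match the server

    // Reads from the server once per second, printing the running total.
    static long readLoop(int port, int iterations)
            throws IOException, InterruptedException {
        SocketChannel channel =
                SocketChannel.open(new InetSocketAddress("localhost", port));
        ByteBuffer buffer = ByteBuffer.allocate(1000);
        long total = 0;
        try {
            for (int i = 0; i < iterations; i++) {
                Thread.sleep(1000);           // the 1 second delay
                buffer.clear();
                int n = channel.read(buffer); // at most one buffer per second
                if (n == -1) break;           // server closed the connection
                total += n;
                System.out.println("has read " + total);
            }
        } finally {
            channel.close();
        }
        return total;
    }

    public static void main(String[] args) throws Exception {
        readLoop(PORT, Integer.MAX_VALUE);
    }
}
```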
Server, which sends data continuously.
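The server code is likewise missing from the post; this is a hypothetical reconstruction matching the description and the "N bytes written." log lines. The class name, port, and 1000-byte chunk size are assumptions, and the loop is bounded so the sketch can terminate.

```java
import java.io.IOException;
import java.net.InetSocketAddress;
import java.nio.ByteBuffer;
import java.nio.channels.ServerSocketChannel;
import java.nio.channels.SocketChannel;

// Hypothetical reconstruction -- the original server was not included
// in the question; names and the port are assumptions.
public class NioServer {
    static final int PORT = 3000; // assumed port, must match the client

    // Writes 1000-byte chunks as fast as the channel accepts them,
    // printing the running total after each write.
    static long writeLoop(SocketChannel client, int maxChunks)
            throws IOException {
        ByteBuffer buffer = ByteBuffer.allocate(1000);
        long total = 0;
        for (int i = 0; i < maxChunks; i++) {
            buffer.clear();
            // In blocking mode, write() parks once the local send buffer
            // and the peer's receive buffer fill up -- which is what makes
            // the sent/received delta observable in the logs.
            total += client.write(buffer);
            System.out.println(total + " bytes written.");
        }
        return total;
    }

    public static void main(String[] args) throws Exception {
        ServerSocketChannel server = ServerSocketChannel.open();
        server.socket().bind(new InetSocketAddress(PORT));
        SocketChannel client = server.accept(); // blocking accept
        try {
            writeLoop(client, Integer.MAX_VALUE);
        } finally {
            client.close();
            server.close();
        }
    }
}
```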
Server output:

286000 bytes written.
287000 bytes written.
288000 bytes written.
289000 bytes written.
290000 bytes written.
291000 bytes written.

Client output:

has read 253000
has read 254000
has read 255000
has read 256000
has read 257000
has read 258000
has read 259000
has read 260000
has read 261000
has read 262000