I have an application in which I create a ServerSocket that runs in a
thread to read data being sent to the server. I use the basic server scheme where I block in a while loop on input.readLine() and append the received data to a StringBuffer. When the end of transmission is reached I break out of this loop, place the StringBuffer in a queue for another worker thread to chew on, and then return to the while loop, where I block for more incoming data.
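Roughly, my receive loop looks like the sketch below. The ReaderThread class name, the shared List handoff, and the "<EOT>" end-of-transmission marker are placeholders for this sketch, not my exact code:

    import java.io.BufferedReader;
    import java.io.IOException;
    import java.io.InputStreamReader;
    import java.net.ServerSocket;
    import java.net.Socket;
    import java.util.List;

    public class ReaderThread extends Thread {
        private final ServerSocket serverSocket;
        private final List queue;   // shared with the worker thread

        public ReaderThread(ServerSocket serverSocket, List queue) {
            this.serverSocket = serverSocket;
            this.queue = queue;
        }

        public void run() {
            try {
                Socket client = serverSocket.accept();
                BufferedReader input = new BufferedReader(
                        new InputStreamReader(client.getInputStream()));

                while (true) {
                    StringBuffer message = new StringBuffer();
                    String line;
                    // Block on readLine() until the end-of-transmission marker.
                    while ((line = input.readLine()) != null) {
                        if (line.equals("<EOT>")) {   // placeholder terminator
                            break;
                        }
                        message.append(line);
                    }
                    if (line == null) {
                        break;   // client closed the connection
                    }
                    // Hand the complete message off to the worker thread.
                    synchronized (queue) {
                        queue.add(message);
                        queue.notify();
                    }
                }
            } catch (IOException e) {
                e.printStackTrace();
            }
        }
    }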
This application works as long as the load of incoming messages is low (fewer than 30 messages per second, each about 1K in size). If I bump the client up to send messages at a rate higher than that, I begin to receive incomplete messages at the server socket. If I display the data being sent at the client, the message is complete on that end, so I am not sure what the issue is.
I have some C and VB colleagues who tell me that the Java TCP API should be throttling back at the port to avoid an overrun/flood of data, i.e. telling the client's TCP stack to slow down its sending. They say they do not have these issues in those languages and are blaming Java for my problem.
I have no idea why the messages are getting lost or truncated. Any clues as to why this is happening would be greatly appreciated.
Note that I based my design on Sun's client/server tutorial, so I am not doing anything out of the ordinary to receive the incoming messages. I am using Java 1.4.2 for both my SDK and JRE.
Lon Allen