
ServerSocketChannel vs SocketChannel

 
Jean Chastain
Greenhorn
Posts: 7
I have an application that controls our test boxes, each of which has one TCP/IP port and two UDP ports to be monitored. The test box is the TCP socket server, written in MS VC++; my app is the client, which uses java.nio.channels.SocketChannel in non-blocking mode on the TCP port and java.net.DatagramSocket in blocking mode on the UDP ports. J-sprint reported that the select on the TCP port was taking 60% of the CPU and the receives on the two UDP ports were taking 30% and 5%. Using JAMon, I ran a 7-minute test that sent 40 legitimate messages to the server and got 40 responses back, and yet SocketChannel.select woke up 24,601,980 times. Wanting to blame the MS server socket, I wrote a quick fake server using java.nio.channels.ServerSocketChannel, and this is what JAMon reported:

server: Hits=10, Avg=3670ms, Total=36703ms, Min=1406ms, Max=6453ms
client: Hits=10606, Avg=0ms, Total=187ms, Min=0ms, Max=16ms

Why would a SocketChannel.select constantly wake up with 0 channels ready, while the ServerSocketChannel.select only wakes up when it has something to do?
 
Jean Chastain
Greenhorn
Posts: 7
Never mind; apparently select returning with no channels ready is a long-standing problem.
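For anyone who hits the same thing: the workaround commonly reported for this spinning-select behavior is to count consecutive wakeups that return zero ready channels and, past some threshold, rebuild the Selector by re-registering every channel on a fresh one. Here's a minimal sketch; the class name and the SPIN_LIMIT value are my own choices for illustration, not from any particular library:

```java
import java.io.IOException;
import java.nio.channels.SelectionKey;
import java.nio.channels.Selector;

public class SpinGuardedSelector {
    // Arbitrary threshold for this sketch; tune for your workload.
    private static final int SPIN_LIMIT = 512;

    private Selector selector = Selector.open();
    private int zeroWakeups = 0;

    public SpinGuardedSelector() throws IOException {
    }

    public Selector selector() {
        return selector;
    }

    /** One pass of the select loop, with spin detection. */
    public int selectOnce(long timeoutMillis) throws IOException {
        int ready = selector.select(timeoutMillis);
        if (ready == 0) {
            // Woke up with nothing ready; if this keeps happening, rebuild.
            if (++zeroWakeups >= SPIN_LIMIT) {
                rebuildSelector();
                zeroWakeups = 0;
            }
        } else {
            zeroWakeups = 0;
        }
        return ready;
    }

    /** Re-register every valid channel with a brand-new Selector. */
    private void rebuildSelector() throws IOException {
        Selector fresh = Selector.open();
        for (SelectionKey key : selector.keys()) {
            if (key.isValid()) {
                key.channel().register(fresh, key.interestOps(), key.attachment());
            }
        }
        selector.close();
        selector = fresh;
    }
}
```

After a rebuild, the select loop simply continues against the new Selector; the old one (and whatever internal state was causing the spurious wakeups) is discarded.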
 
Jim Yingst
Wanderer
Sheriff
Posts: 18671
Interesting. If you've found anything online which discusses this issue, I'd be interested in learning more.

From what you've posted above, though, it's not clear there's really a problem. Taking 60% of the CPU sounds like it might be bad, but the question is: was there anything else the CPU could have been doing instead? Try starting some other CPU-intensive processes and see whether the selector keeps hogging time, or whether it yields when other processes are doing real work. If response time is not harmed, then maybe you can just ignore the issue.

For that matter, is using a Selector and non-blocking I/O even necessary on a client? My impression is that this technology is particularly useful on a server, where it lets you avoid dedicating one or more threads to each client. From the client side, might it be simpler to just use blocking I/O, perhaps in a thread of its own? You've usually only got one server to listen to, right? Or maybe more, but usually a limited number, unlike a server, which may be bombarded with requests from a very large number of clients.
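To illustrate what I mean, here's a minimal sketch of a blocking client reader running in its own thread. The class name is made up for the example, and the demo() method stands up a throwaway loopback server purely so the sketch is self-contained; in your case the reader would connect to the MS VC++ test box instead:

```java
import java.io.IOException;
import java.net.InetSocketAddress;
import java.nio.ByteBuffer;
import java.nio.channels.ServerSocketChannel;
import java.nio.channels.SocketChannel;
import java.nio.charset.StandardCharsets;

public class BlockingReader implements Runnable {
    private final SocketChannel channel; // blocking mode (the default)
    private final StringBuilder received = new StringBuilder();

    public BlockingReader(SocketChannel channel) {
        this.channel = channel;
    }

    @Override
    public void run() {
        ByteBuffer buf = ByteBuffer.allocate(1024);
        try {
            // read() blocks until data arrives or the peer closes the connection,
            // so this thread consumes no CPU while waiting.
            while (channel.read(buf) != -1) {
                buf.flip();
                received.append(StandardCharsets.UTF_8.decode(buf));
                buf.clear();
            }
        } catch (IOException e) {
            // A real client would log, reconnect, or shut down here.
        }
    }

    /** Loopback demo: a throwaway server sends one message to the blocking client. */
    public static String demo() throws Exception {
        try (ServerSocketChannel server = ServerSocketChannel.open()) {
            server.bind(new InetSocketAddress("127.0.0.1", 0));
            SocketChannel client = SocketChannel.open(server.getLocalAddress());
            BlockingReader reader = new BlockingReader(client);
            Thread t = new Thread(reader, "reader");
            t.start();
            try (SocketChannel accepted = server.accept()) {
                accepted.write(ByteBuffer.wrap("PASS\n".getBytes(StandardCharsets.UTF_8)));
            } // closing the connection ends the reader's loop
            t.join();
            client.close();
            return reader.received.toString();
        }
    }

    public static void main(String[] args) throws Exception {
        System.out.println("client read: " + demo());
    }
}
```

No Selector, no wakeups with nothing ready: the thread is parked inside read() until the server actually sends something.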
[ October 20, 2005: Message edited by: Jim Yingst ]
 