Measure the idle time of a client socket read

 
Jiafan Zhou
Ranch Hand
We know that reading a byte from a Java client socket is a blocking read, i.e. it will block until bytes are available on the socket. Is there any way to measure the idle time of the read?
 
Rob Spoor
Sheriff
You can use System.currentTimeMillis(). In pseudo code:
To get the idle time between each byte you would need non-buffered (so no BufferedInputStream) single byte reads using the read() method.
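The code block from Rob's post did not survive the page capture; a minimal sketch of the idea, assuming a plain unbuffered stream, could look like this (the demo main uses an in-memory stream where real code would pass socket.getInputStream()):

```java
import java.io.ByteArrayInputStream;
import java.io.IOException;
import java.io.InputStream;

public class ReadTimer {
    /**
     * Reads the stream one byte at a time, timing how long each blocking
     * read() call takes. Returns the total number of bytes read.
     */
    public static int timeReads(InputStream in) throws IOException {
        int count = 0;
        while (true) {
            long start = System.currentTimeMillis();
            int b = in.read();                  // blocks until a byte or EOF
            long end = System.currentTimeMillis();
            if (b == -1) break;                 // end of stream
            count++;
            System.out.println("read() blocked for " + (end - start) + " ms");
        }
        return count;
    }

    public static void main(String[] args) throws IOException {
        // In real code this would be socket.getInputStream().
        int n = timeReads(new ByteArrayInputStream(new byte[] {1, 2, 3}));
        System.out.println("bytes read: " + n);
    }
}
```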
 
Jiafan Zhou
Ranch Hand

Rob Spoor wrote:You can use System.currentTimeMillis(). In pseudo code:
To get the idle time between each byte you would need non-buffered (so no BufferedInputStream) single byte reads using the read() method.



Hi, why a non-buffered stream? What if I use a BufferedInputStream?
 
Jiafan Zhou
Ranch Hand

Rob Spoor wrote:You can use System.currentTimeMillis(). In pseudo code:
To get the idle time between each byte you would need non-buffered (so no BufferedInputStream) single byte reads using the read() method.



Plus I don't think the (end - start) is the idle time, but it also includes the time of the actual read.
 
Henry Wong
author

Jiafan Zhou wrote:Plus I don't think the (end - start) is the idle time, but it also includes the time of the actual read.



True, but keep in mind that the actual operation should be relatively fast compared to the idle time.

If this is *not* the case, then this measurement would serve little purpose... i.e. what would be the purpose of knowing that you have 10 ms of idle time for an operation that takes a second? If the network / operating system is keeping up, meaning nothing is queuing up anywhere, there isn't really a problem.

Henry
 
Rob Spoor
Sheriff

Jiafan Zhou wrote:

Rob Spoor wrote:You can use System.currentTimeMillis(). In pseudo code:
To get the idle time between each byte you would need non-buffered (so no BufferedInputStream) single byte reads using the read() method.



Hi, why non-buffered stream? What if I use the BufferedInputStream?


When you use a buffered stream (BufferedInputStream) you will not only be reading from the socket but also from memory. When you perform a read operation, the BufferedInputStream will try to fill its buffer before returning, so most subsequent read() calls are served from memory and measure nothing. If you request less than the buffer can hold, you may end up waiting longer than you intend to.
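A quick way to see that in action, sketched with an in-memory stream standing in for the socket: wrap the source in a counting stream and note how few times 1000 single-byte read() calls actually reach it once a BufferedInputStream sits in between.

```java
import java.io.BufferedInputStream;
import java.io.ByteArrayInputStream;
import java.io.FilterInputStream;
import java.io.IOException;
import java.io.InputStream;

public class BufferDemo {
    // Counts how often the underlying source is actually asked for data.
    static class CountingStream extends FilterInputStream {
        int calls = 0;
        CountingStream(InputStream in) { super(in); }
        @Override public int read(byte[] b, int off, int len) throws IOException {
            calls++;
            return super.read(b, off, len);
        }
    }

    public static void main(String[] args) throws IOException {
        CountingStream source =
                new CountingStream(new ByteArrayInputStream(new byte[1000]));
        BufferedInputStream buffered = new BufferedInputStream(source);
        for (int i = 0; i < 1000; i++) {
            buffered.read();   // 1000 single-byte reads from the caller's view
        }
        // The default 8 KB buffer swallowed the whole source in one gulp,
        // so per-byte timing around buffered.read() would measure nothing.
        System.out.println("underlying reads: " + source.calls);
    }
}
```

On a real socket the effect is the same: most read() calls hit the buffer, so most per-byte timings come out as zero.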
 
Jiafan Zhou
Ranch Hand

Henry Wong wrote:

Jiafan Zhou wrote:Plus I don't think the (end - start) is the idle time, but it also includes the time of the actual read.



True, but keep in mind that the actual operation should be relatively fast compared to the idle time.

If this is *not* the case, then this measurement would serve little purpose... i.e. what would be the purpose of knowing that you have 10 ms of idle time for an operation that takes a second? If the network / operating system is keeping up, meaning nothing is queuing up anywhere, there isn't really a problem.



I don't totally agree with this. The network read from the socket could itself be slow when reading a large byte stream. I want to know what percentage of the total read is taken up by idle time.

And since I don't know how long the idle time is, it could be long as well. I want to know how long the idle time is, and to find out what generates it and where.



 
Paul Clapham
Marshal

Jiafan Zhou wrote:Plus I don't think the (end - start) is the idle time, but it also includes the time of the actual read.



That's true, but note that you are timing the whole operation to the nearest millisecond at best, whereas the time required to get a byte from the operating system's TCP/IP code would be measured in nanoseconds. There's a scale difference of about a million there.

And besides, the operating system does its own buffering, because TCP/IP works with packets. So this idea of recording the idle time per byte is kind of useless.
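If the millisecond resolution is itself a concern, System.nanoTime() is the usual monotonic clock for elapsed-time measurements; a small sketch (this changes only the clock, not the OS-buffering reality described above):

```java
import java.io.ByteArrayInputStream;
import java.io.IOException;
import java.io.InputStream;

public class NanoTimer {
    // Times a single blocking read with System.nanoTime(), which is
    // monotonic and intended for measuring elapsed intervals.
    public static long timeOneRead(InputStream in) throws IOException {
        long start = System.nanoTime();
        in.read();
        return System.nanoTime() - start;
    }

    public static void main(String[] args) throws IOException {
        long nanos = timeOneRead(new ByteArrayInputStream(new byte[] {7}));
        System.out.println("read() took " + nanos + " ns");
    }
}
```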
 
Paul Clapham
Marshal

Jiafan Zhou wrote:I don't totally agree with this. The network read from the socket could itself be slow when reading a large byte stream. I want to know what percentage of the total read is taken up by idle time.

And since I don't know how long the idle time is, it could be long as well. I want to know how long the idle time is, and to find out what generates it and where.



Why would the length of the stream being read affect the idle time in any way?

As for what generates the idle time, it's the operating system waiting for packets to come over a network connection. It doesn't look like you really understand the relative speeds of computers versus networks, so you ought to get out your pencil and do a bit of arithmetic. How fast is your computer? Let's suppose you have a 4GHz machine -- that means it does roughly 4 billion operations per second. And how fast is your network? Let's suppose it moves 10 million bytes per second. Think about this for a while so you get some idea of what's important and what's trivial.
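Taking those two figures at face value, the arithmetic works out like this:

```java
public class BackOfEnvelope {
    public static void main(String[] args) {
        long cpuOpsPerSecond = 4_000_000_000L;  // ~4 GHz machine
        long netBytesPerSecond = 10_000_000L;   // ~10 MB/s network
        // Operations the CPU can execute in the time one byte arrives:
        System.out.println(cpuOpsPerSecond / netBytesPerSecond);  // → 400
    }
}
```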
 
Jiafan Zhou
Ranch Hand

Paul Clapham wrote:

Jiafan Zhou wrote:I don't totally agree with this. The network read from the socket could itself be slow when reading a large byte stream. I want to know what percentage of the total read is taken up by idle time.

And since I don't know how long the idle time is, it could be long as well. I want to know how long the idle time is, and to find out what generates it and where.



Why would the length of the stream being read affect the idle time in any way?

As for what generates the idle time, it's the operating system waiting for packets to come over a network connection. It doesn't look like you really understand the relative speeds of computers versus networks, so you ought to get out your pencil and do a bit of arithmetic. How fast is your computer? Let's suppose you have a 4GHz machine -- that means it does roughly 4 billion operations per second. And how fast is your network? Let's suppose it moves 10 million bytes per second. Think about this for a while so you get some idea of what's important and what's trivial.



Thanks, Paul. The length of the stream should not affect the idle time; a long byte stream only makes overall performance worse, on top of the idle time.
Hold on, I am not particularly interested in the idle time generated by the operating system. I am more interested in the idle time generated by the application. For example, if the server (the sender of the packets) takes a nap while sending, while the receiver sits in a blocking read() waiting to receive, idle time is generated. This is what I am interested in knowing, in particular the percentage of this idle time versus the actual read time.
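One way to approximate that split at the application level is to count the wait for the first byte of each burst as idle and the drain of already-available bytes as transfer. available() is only a best-effort hint, so this is a rough sketch under that assumption, not a definitive measurement:

```java
import java.io.ByteArrayInputStream;
import java.io.IOException;
import java.io.InputStream;

public class IdleEstimator {
    /**
     * Rough split of wall-clock time into "waiting for the first byte of a
     * burst" (idle) vs "draining bytes that are already available" (busy).
     * Returns {idleMillis, busyMillis, totalBytes}.
     */
    public static long[] measure(InputStream in) throws IOException {
        long idle = 0, busy = 0, bytes = 0;
        byte[] buf = new byte[8192];
        while (true) {
            long t0 = System.currentTimeMillis();
            int first = in.read();           // blocks here: counted as idle
            long t1 = System.currentTimeMillis();
            if (first == -1) break;          // end of stream
            bytes++;
            idle += (t1 - t0);
            int avail = in.available();      // bytes ready right now (best effort)
            if (avail > 0) {
                long t2 = System.currentTimeMillis();
                int n = in.read(buf, 0, Math.min(buf.length, avail));
                long t3 = System.currentTimeMillis();
                if (n > 0) { bytes += n; busy += (t3 - t2); }
            }
        }
        return new long[] { idle, busy, bytes };
    }

    public static void main(String[] args) throws IOException {
        // Demo with an in-memory stream; real code would pass socket.getInputStream().
        long[] r = measure(new ByteArrayInputStream(new byte[5]));
        System.out.println("idle=" + r[0] + " ms, busy=" + r[1] + " ms, bytes=" + r[2]);
    }
}
```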
 
Paul Clapham
Marshal
There is no way to know whether the delay between your code asking for bytes and your code receiving bytes is caused by the operating system waiting to receive bytes for your code, or by the operating system paging out your code, or by the operating system delaying your code because it's serving a higher-priority thread, or any number of other reasons I haven't thought of.

Also, what you said suggests that since reading from a socket is a blocking read, the application takes the opportunity to sneak out behind the printer and smoke a cigarette or something like that. No. It's blocking because it's waiting to get data from the operating system.
 
Jiafan Zhou
Ranch Hand

Paul Clapham wrote:There is no way to know whether the delay between your code asking for bytes and your code receiving bytes is caused by the operating system waiting to receive bytes for your code, or by the operating system paging out your code, or by the operating system delaying your code because it's serving a higher-priority thread, or any number of other reasons I haven't thought of.


Is there a way to know this wait (idle) time at a level above the operating system, such as in the JVM?
Surely when a Java input stream (of any kind, such as BufferedInputStream) reads from a Socket and the socket has no incoming messages, the waiting or idle time is recorded by the thread?
 
Paul Clapham
Marshal

Jiafan Zhou wrote:Is there a way to know this wait (idle) time at a level above the operating system, such as in the JVM?
Surely when a Java input stream (of any kind, such as BufferedInputStream) reads from a Socket and the socket has no incoming messages, the waiting or idle time is recorded by the thread?



Why would the thread keep track of that? Like I said, there are dozens of reasons why a thread might be delayed. Anyway you can look at the public API for Thread and see for yourself that there aren't any such methods.

I have no idea why you have this obsession with idle time. You're trying to oversimplify things and then you're demanding that your computing environment should provide ways for you to measure your oversimplified ideas. What's really going on here?
 
Jiafan Zhou
Ranch Hand
I am not a TCP/IP expert, but this is what I understand.

If a connection is established and one end's socket has no incoming data, while at the same time we use the Java InputStream.read() API to read from that socket, then since read() is a blocking call, the thread that executes it will be blocked until incoming data from the socket is available. This state is different from the one where there is data available to read from the socket. As I have mentioned, what I am trying to do is very simple: measure the idle time when there is no data to read, and the time when there is indeed data to read.
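For the specific case of telling "no data yet" apart from "data arriving", Socket.setSoTimeout(int) is one concrete tool: with a timeout set, a blocking read() throws SocketTimeoutException when nothing arrives within the window, so each timeout represents a known quantum of pure idle time. A sketch (the 100 ms window is an arbitrary choice, and the granularity of the estimate is exactly that window):

```java
import java.io.IOException;
import java.io.InputStream;
import java.net.Socket;
import java.net.SocketTimeoutException;

public class TimeoutIdleCounter {
    /**
     * Reads the socket to EOF, counting idle time in 100 ms quanta: every
     * SocketTimeoutException means a full window passed with no data at all.
     * Returns the accumulated idle milliseconds.
     */
    public static long readCountingIdle(Socket socket) throws IOException {
        socket.setSoTimeout(100);
        InputStream in = socket.getInputStream();
        long idleMillis = 0;
        while (true) {
            try {
                if (in.read() == -1) break;     // end of stream
            } catch (SocketTimeoutException e) {
                idleMillis += 100;              // nothing arrived this window
            }
        }
        return idleMillis;
    }
}
```

A socket read that times out this way is still valid and can simply be retried, which is what the loop does.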
 
Paul Clapham
Marshal
You got that already in the first couple of posts to this thread. Everything after that was all about trivia.
 