
Help understanding Non-Blocking I/O

 
Jamie Patrick
Greenhorn
Posts: 5
Hi

I am trying to understand the high-level fundamentals of non-blocking architectures.
Apologies if this is not quite the right forum, but it seemed the most likely to me, or else maybe the Concurrency section.

Anyway, I am used to the single-thread-per-request, blocking processing model, but have become interested in the non-blocking style that is becoming fashionable.

My question is not really language- or environment-specific, but I appreciate different languages/containers/frameworks might have slightly different approaches.
I'm also not after details of the advantages of non-blocking; I understand the gains to be made by reducing context switching through a smaller thread pool.

My question is more about how applications relate a response from, say, a web service to the original request from the client.

As an example to give some context - using Callback approach:

- Thread A - in my application - calls a web service synchronously. In the traditional model, my thread hangs around waiting on the socket - got that.
- In a non-blocking design, Thread A makes an asynchronous call to the web service - usually getting back a Future - and then goes back to its pool to pick up another task.
- When the response from the web service arrives on the socket to my application, Thread B, say, gets allocated to pick it up, and is given a reference to the Future<>, or a Handler, that has a callback method on it which it invokes.

So my understanding - at a high level - is that no threads are waiting on a response from the socket on the outgoing port - the principle of a non-blocking design has been observed, right?
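Roughly what I mean, sketched with CompletableFuture (just a sketch - the remote call is faked with a supplier, and the class and string names are made up):

```java
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

public class AsyncCallSketch {
    public static void main(String[] args) {
        ExecutorService pool = Executors.newFixedThreadPool(2);

        // "Thread A": hands the web-service call to the pool and registers a
        // callback; it does not wait on the socket itself.
        CompletableFuture<Void> done = CompletableFuture
                .supplyAsync(() -> "response-body", pool) // stands in for the remote call
                .thenAccept(body ->
                        // run by whichever thread completes the future ("Thread B")
                        System.out.println("callback got: " + body));

        // Thread A would now be free to pick up other tasks here.

        done.join();     // demo only: keep main alive until the callback fires
        pool.shutdown();
    }
}
```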

When the web service responds to my application, what I can't understand is:

1. How does a framework/container usually relate the web service response to the correct Future/Handler instance it needs to be given a reference to?
i.e. Thread A is off somewhere else now, so what is the mechanism by which Thread B is given a reference to the callback?

2. How does Thread B know that it needs to invoke the callback method?


Many thanks to anyone who can help me understand this
James
 
Henry Wong
author
Sheriff
Posts: 22863
119
C++ Chrome Eclipse IDE Firefox Browser Java jQuery Linux VI Editor Windows
Jamie Patrick wrote:
As an example to give some context - using Callback approach:

- Thread A - in my application - calls a web service synchronously. In the traditional model, my thread hangs around waiting on the socket - got that.
- In a non-blocking design, Thread A makes an asynchronous call to the web service - usually getting back a Future - and then goes back to its pool to pick up another task.
- When the response from the web service arrives on the socket to my application, Thread B, say, gets allocated to pick it up, and is given a reference to the Future<>, or a Handler, that has a callback method on it which it invokes.

So my understanding - at a high level - is that no threads are waiting on a response from the socket on the outgoing port - the principle of a non-blocking design has been observed, right?


Using threads to do non-blocking IO doesn't make the IO non-blocking. It just simulates it, meaning that the calling thread is no longer blocking. The calling thread passes the IO request to a thread pool, and hence to a thread-pool thread, which in turn blocks. The calling thread is then free to do other things, and can check on the thread-pool thread via the Future object provided.

Technically, the IO is still being blocked on. It is just the thread-pool thread that is blocking; when the IO is complete, it will adjust the flags in the Future object and move on to the next task.
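In miniature (just a sketch - the "IO" here is a sleep standing in for a blocking socket read): the worker thread is the one that blocks, the caller stays free, and the Future's done flag flips when the work finishes.

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

public class SimulatedNonBlocking {
    public static void main(String[] args) throws Exception {
        ExecutorService pool = Executors.newSingleThreadExecutor();

        // The pool thread is the one that actually blocks "on the IO".
        Future<String> future = pool.submit(() -> {
            Thread.sleep(100);       // stands in for a blocking socket read
            return "response-body";
        });

        // The calling thread is free; it can poll the future's flag...
        System.out.println("done yet? " + future.isDone());

        // ...and only blocks if and when it asks for the result.
        System.out.println("result: " + future.get());

        pool.shutdown();
    }
}
```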

Henry
 