
Web-container question

 
Paul Kelcey
Ranch Hand
Posts: 45
Hi,
Our web application talks to a proprietary server (let's call it the O server) using TCP. I'd guess about a quarter of requests need to talk to the O server; the other three quarters can be handled by the web application itself. Currently we're using blocking IO when talking from our web app to the O server (i.e. each request uses a socket to talk TCP to the O server).
My understanding is that the web container provides a thread from its pool to satisfy each request. This thread is "bound" to the request until the response is sent, after which it can be returned to the pool. If a request uses blocking IO, however, and there is network congestion etc., then that thread is basically idle (i.e. waiting for the network). It can't be returned to the pool during this time?
I was wondering whether we could introduce a transport layer that used non-blocking IO (thus needing far fewer threads) to handle communication with the O server for all requests. Each request that needed to talk to the server could call this layer and then "de-activate" itself (freeing the thread to serve other requests) until the communication layer had the data, at which point it could "re-activate" the request, which would grab a thread to send the response. I'm wondering if a web container handles wait and notify calls in a special way (i.e. if a thread calls wait(), does the web container reclaim it and use it for other requests?).
Is this possible in any way, shape or form or am I way off the mark here? Are we currently doing it the right way using blocking IO?
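For reference, here is a rough, self-contained sketch of the non-blocking transport idea in Java NIO. The loopback "server" below is just a stand-in for the real O server, and all class and method names are illustrative; the point is that one Selector thread can wait on many channels at once instead of parking one pooled thread per socket:

```java
import java.io.IOException;
import java.net.InetSocketAddress;
import java.nio.ByteBuffer;
import java.nio.channels.SelectionKey;
import java.nio.channels.Selector;
import java.nio.channels.ServerSocketChannel;
import java.nio.channels.SocketChannel;
import java.nio.charset.StandardCharsets;

public class NonBlockingClientSketch {

    // Connects to the given port without blocking, waits on a Selector for
    // connect + read readiness, and returns the first message received.
    static String fetchOnce(int port) throws IOException {
        Selector selector = Selector.open();
        SocketChannel client = SocketChannel.open();
        client.configureBlocking(false);            // non-blocking mode
        client.connect(new InetSocketAddress("127.0.0.1", port));
        client.register(selector, SelectionKey.OP_CONNECT);

        String received = null;
        while (received == null) {
            selector.select();      // ONE thread can wait on many channels here
            for (SelectionKey key : selector.selectedKeys()) {
                if (key.isConnectable()) {
                    client.finishConnect();
                    key.interestOps(SelectionKey.OP_READ);
                } else if (key.isReadable()) {
                    ByteBuffer buf = ByteBuffer.allocate(64);
                    if (client.read(buf) > 0) {
                        buf.flip();
                        received = StandardCharsets.UTF_8.decode(buf).toString();
                    }
                }
            }
            selector.selectedKeys().clear();
        }
        client.close();
        selector.close();
        return received;
    }

    // Self-contained demo: a loopback stand-in plays the O server.
    static String demo() throws IOException {
        try (ServerSocketChannel server = ServerSocketChannel.open()) {
            server.bind(new InetSocketAddress("127.0.0.1", 0));
            int port = ((InetSocketAddress) server.getLocalAddress()).getPort();

            // The stand-in accepts one connection and writes one reply.
            new Thread(() -> {
                try (SocketChannel peer = server.accept()) {
                    peer.write(ByteBuffer.wrap("hello".getBytes(StandardCharsets.UTF_8)));
                } catch (IOException ignored) { }
            }).start();

            return fetchOnce(port);
        }
    }

    public static void main(String[] args) throws IOException {
        System.out.println(demo());   // prints "hello"
    }
}
```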
Regards,
Paul Kelcey
 
Levent Gurses
Greenhorn
Posts: 25
I'll try to give it a shot. My assumptions are that:
1. You have a Java/J2EE based application
2. Your main goal is to achieve a more responsive web application by deferring the expensive IO operations, freeing the web server to serve the next request

I think you need to employ some sort of messaging (asynchronous if possible) for this problem.
Here are two possible scenarios:
1. Have the web server call a Message-Driven Bean running inside an EJB container, possibly on a separate box. The call would be asynchronous, so the web server would not block waiting for a response
2. Use a commercial enterprise messaging product like IBM's MQSeries, BEA's WebLogic JMS service, Sun's iPlanet service or Progress' SonicMQ. Have your web app post a message to the queue
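To illustrate the decoupling in scenario 2 (not real JMS, which needs a broker), here is a self-contained sketch using an in-process queue as a stand-in; all names are hypothetical. The request-handling thread enqueues and returns immediately, while a worker thread performs the slow O-server round trip:

```java
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;

public class FireAndForgetSketch {
    // Hypothetical in-process stand-in for the message queue.
    static final BlockingQueue<String> queue = new LinkedBlockingQueue<>();

    // What a servlet would call: returns at once, freeing the pooled thread.
    static void handleRequest(String payload) {
        queue.offer(payload);
    }

    static String demo() throws InterruptedException {
        final String[] result = new String[1];
        Thread worker = new Thread(() -> {
            try {
                String msg = queue.take();         // blocks only this worker
                result[0] = "processed:" + msg;    // pretend O-server round trip
            } catch (InterruptedException ignored) { }
        });
        worker.start();
        handleRequest("order-42");                 // the "web request" path
        worker.join();
        return result[0];
    }

    public static void main(String[] args) throws InterruptedException {
        System.out.println(demo());                // prints "processed:order-42"
    }
}
```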
I have scratched a rough diagram to illustrate the transformation. Have a look and let me know if it makes sense.
Levent Gurses
Consultant


[ August 26, 2003: Message edited by: Levent Gurses ]
 
Paul Kelcey
Ranch Hand
Posts: 45
Thanks for the response.
Whilst your suggestion makes good sense, some of it is out of scope. We're only using a web-container and not an EJB container. The protocol between the web-container and our O server is also well defined.
The thing that interested me about your response however is the implication that if I can perform the "work" required by the request asynchronously, then the web-container can reclaim the thread and serve other requests (putting the original request on hold for a while). When the "work" has been finished, the original request can be completed. Is that what you're saying? If so, that's the mechanism I'm interested in.
Basically, I want to move the "work" out of request processing into an asynchronous layer (that layer would handle all communication with the O server using non-blocking IO). When the work was done, the request would be notified/called back/whatever in order to send the response.
Is this possible?
PK
 
Levent Gurses
Greenhorn
Posts: 25
OK, now I see your problem better. My gut feeling is that as long as you stick to a single box, you can expect I/O blocking, whether you use synchronous or asynchronous method invocation. In the end the OS scheduler controls the tasks (application flow), and it will likely start an expensive I/O operation regardless of the messaging subsystem (unless you have a more sophisticated scheduling tool that lets you prioritize processes). The advantage of asynchronous messaging is that you keep serving your clients as if the operations were being completed. In a normal scenario one could run JMS and the web server on the same box, but the nature of your project calls for a separate box so that you need not worry about an I/O blockage.
With all this said, here is another set of shots:
1. Implement scenario 2 from my first post. In that scenario you do not need an EJB container, but you do need a messaging application. You would use JMS from within the web container to talk to the message queue. If you use the publish-subscribe model, you can achieve asynchronous messaging. No guarantees of congestion-free execution.
2. Write a web service that initiates and performs the expensive I/O operation, then call the web service from your web application asynchronously. Note that your web service will be running on the same box as your web server, so again, no guarantees against I/O blocking.
3. Use a special language construct called a "delegate" (C# has it; I'm not sure Java does). Call the delegate from your web app and defer the execution of the expensive calls. Calling a delegate can be done synchronously as well as asynchronously. And yet once again, no guarantees against I/O blocking.
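Java has no delegate keyword, but a one-method callback interface handed to a worker pool gives much the same deferred invocation. A minimal sketch, with all names illustrative and a canned reply standing in for the slow TCP call:

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.SynchronousQueue;
import java.util.concurrent.TimeUnit;

public class DeferredCallSketch {
    // The callback interface plays the role of the C# delegate.
    interface Callback { void onDone(String response); }

    // Illustrative name: submits the slow call and returns immediately;
    // the callback fires on a pool thread when the "O server" answers.
    static void callOServerAsync(String request, Callback cb, ExecutorService pool) {
        pool.submit(() -> cb.onDone("O-reply:" + request));  // stand-in reply
    }

    static String demo() throws InterruptedException {
        ExecutorService pool = Executors.newSingleThreadExecutor();
        SynchronousQueue<String> handoff = new SynchronousQueue<>();
        callOServerAsync("ping", response -> {
            try { handoff.put(response); } catch (InterruptedException ignored) { }
        }, pool);
        String response = handoff.poll(5, TimeUnit.SECONDS);
        pool.shutdown();
        return response;
    }

    public static void main(String[] args) throws InterruptedException {
        System.out.println(demo());   // prints "O-reply:ping"
    }
}
```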
It still bothers me, though, that in your app you'll be doing everything on the same box. I would definitely expect "unexpected" I/O blockages. I would recommend you try one of the methods, observe, and take action. Before you do that, everything is just speculation.
[ August 26, 2003: Message edited by: Levent Gurses ]
 
Chris Mathews
Ranch Hand
Posts: 2712
Originally posted by Paul Kelcey:
When the work was done, the request would be notifed/called back/whatever in order to send the response.

You got it up until the last sentence. We are talking about a web application... how would a previous request be notified? The end user (using a web browser) could be long gone. Regardless, there is no way to hijack a user's browser from a process originating on the server.
The best you can do is have the user "drop off" a request and then check back for the request later or notify them in some form or fashion (email for example).
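A self-contained sketch of that "drop off and check back later" pattern; the ticket format and helper names are made up, and a real app would expose submit/status as servlets the browser polls over HTTP:

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.atomic.AtomicLong;

public class DropOffSketch {
    static final ExecutorService pool = Executors.newFixedThreadPool(2);
    static final Map<String, String> results = new ConcurrentHashMap<>();
    static final AtomicLong ids = new AtomicLong();

    // Hands the slow work to a background pool and returns a ticket at once.
    static String submit(String payload) {
        String ticket = "job-" + ids.incrementAndGet();
        pool.submit(() -> results.put(ticket, "done:" + payload)); // slow work
        return ticket;                  // returned to the browser immediately
    }

    // null means "still running"; the page would say "try again shortly".
    static String status(String ticket) {
        return results.get(ticket);
    }

    public static void main(String[] args) throws InterruptedException {
        String ticket = submit("report");
        String r;
        while ((r = status(ticket)) == null) {  // a browser would poll via HTTP
            Thread.sleep(10);
        }
        System.out.println(ticket + " -> " + r);
        pool.shutdown();
    }
}
```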
 
Paul Kelcey
Ranch Hand
Posts: 45
OK, thanks for all the replies.
 