
Asynchronous, slow-worker jobs in a web service - getting the response back faster

Hi All,

I want to trigger threads in Java from methods exposed as a web service. Currently, when a request comes in from the JEE client, the response only goes back to the user after all the processing is finished, because the request thread does all the work itself. That makes the response too slow. I want the request to return as soon as possible, with the heavy work running in the background on multiple worker threads rather than on the request thread. Can I use some asynchronous / slow-worker approach to get a faster response?


This is my code:

client.java / tester

Publish.java / publisher

StartNStop.java

StartStopInterface.java

Thanks All
Ramakrishna K.C
I moved this thread to Web Services because it seems to be specific to that... I can't really understand what the question is or what the problem is, or what the link between the different bits of code you posted is, so it probably isn't a general Java question.

If you show the code you actually are using and having problems with (an SSCCE), then maybe you will find better responses. Right now you are talking about threads and whatnot, but I see no threaded code in your post.

Web services are different from a web app, but in web applications, what you would do to get a response to the user quickly even when the process takes a long time is to kick off the tasks in a background thread, then send a partial page as the response, along with some JavaScript that periodically requests more information from the server and fills in the data as those further responses come in. This of course requires keying the data from the background task to the user so it can be found by the later requests (either through the session or a UUID). In the old days, when it was more likely JavaScript wasn't available, we would keep the response open and incrementally push more data as it became available, but that taxes the server's resources (your request thread pool can quickly be emptied).

So for the web service, you might try a similar approach. Assuming you can't or don't want to keep the response open, you would send a response with some data (enough to show responsiveness), a signal that more will be available, and the key needed to get the next bit. The client would then know to display what it has and make another request for more results. Finally, when the background processing is done, the server responds with a signal that there is nothing left, so the client knows not to ask any more.
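A minimal sketch of the keyed-background-task idea above, assuming a plain `ExecutorService`; the class name `JobStore` and the pool size are illustrative, not anything from the original post:

```java
import java.util.Map;
import java.util.UUID;
import java.util.concurrent.*;

// Hypothetical sketch: kick off a slow job, hand back a key immediately,
// and let the client poll for the result later using that key.
public class JobStore {
    private final ExecutorService pool = Executors.newFixedThreadPool(4);
    private final Map<String, Future<String>> jobs = new ConcurrentHashMap<>();

    // Called from the request thread: returns at once with a job key.
    public String submit(Callable<String> slowTask) {
        String key = UUID.randomUUID().toString();
        jobs.put(key, pool.submit(slowTask));
        return key;
    }

    // Called from the client's follow-up requests: null means "not done yet".
    public String poll(String key) throws Exception {
        Future<String> f = jobs.get(key);
        if (f == null || !f.isDone()) return null;
        jobs.remove(key);   // job is finished; forget the key
        return f.get();
    }

    public void shutdown() { pool.shutdown(); }
}
```

The web service method would call `submit` and return the key in its response; a second "status" operation would call `poll` until it gets a non-null result.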

This all is coming from the point of view of someone who knows nothing of Web Services, just a strategy I would import from web applications, so perhaps someone else has a better idea.
Background processing of tasks is a very rich topic, and you need to consider several things before coming up with a solution. You should definitely turn all long-running tasks into background tasks, but there is a lot to think through. Most people fall into the same pitfall of just starting a thread to execute the job, which is really the most awful thing you could do. You need to at least consider the following:

a) How will the clients get the status of the jobs? How will you handle failure?
If your calls are asynchronous, how will the client know the job is complete? Does it even need to know? What happens when there is a failure? How will you inform your client that the background processing failed?
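One minimal way to surface completion and failure, assuming the job is tracked as a `Future` (the helper name `StatusCheck` is hypothetical): the `ExecutionException` thrown by `Future.get()` carries the background failure back to whoever asks for status.

```java
import java.util.concurrent.*;

public class StatusCheck {
    // Returns "RUNNING", "DONE: <result>", or "FAILED: <reason>" for a job.
    static String status(Future<String> job) {
        if (!job.isDone()) return "RUNNING";
        try {
            return "DONE: " + job.get();
        } catch (ExecutionException e) {
            // The exception thrown inside the background job is preserved
            // as the cause, so it can be reported to the client.
            return "FAILED: " + e.getCause().getMessage();
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
            return "RUNNING";
        }
    }
}
```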

b) How much load can you handle?
You will need to figure out how many background processes you can run in parallel. Resources on any hardware are limited: there is only so much CPU, memory, and IO on any machine. If you start too many background processes, one of these resources will be exhausted and your performance will deteriorate exponentially. So, the first thing to ensure is that any background processing framework you use or build is designed to let you limit the number of parallel processes; if the number of simultaneous requests is more than you can handle, the rest should be queued. Then you need to determine how many parallel processes your hardware can execute and set that limit in your framework.
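The "limit parallelism, queue the rest" design above maps directly onto a bounded `ThreadPoolExecutor`; the numbers here (4 workers, 100 queued) are purely illustrative and would come from measuring your own hardware:

```java
import java.util.concurrent.*;

// Illustrative sketch: at most 4 jobs run in parallel, up to 100 wait in
// the queue, and anything beyond that is rejected outright so the machine
// is never driven past its capacity.
public class BoundedBackgroundPool {
    private final ThreadPoolExecutor pool = new ThreadPoolExecutor(
            4, 4,                      // core and max pool size: 4 parallel jobs
            0L, TimeUnit.MILLISECONDS,
            new ArrayBlockingQueue<>(100),          // bounded wait queue
            new ThreadPoolExecutor.AbortPolicy());  // reject when saturated

    public Future<?> submit(Runnable job) {
        // Throws RejectedExecutionException once the pool and queue are full.
        return pool.submit(job);
    }

    public void shutdown() { pool.shutdown(); }
}
```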

Remember that, generally speaking, web servers limit the number of HTTP requests that will be served. Usually the limit is set with the assumption that each call will take at most a few seconds, but it is very easy to change. In Tomcat, for example, you just change the size of the executor thread pool. If the number of requests grows beyond that limit, Tomcat will start returning errors to the caller, and the caller can try again later. This is something you get for free from the web server.

c) How will you scale?
What happens if your load exceeds your capacity? Will your framework allow you to add more machines? If you just execute a task on a thread, you cannot scale it out to multiple machines. So you need to consider whether you want to spend the time and effort building something that can scale out to multiple machines.

Again, if all your requests are implemented synchronously, this is something you get for free; if you change to asynchronous, you will have to build or buy something that provides this feature. Generally, load balancers keep watch on how many requests each web server is working on and attempt to spread the load evenly. Most load balancers let you configure the balancing mechanism as well as limit how much load to put on each web server, and once all the web servers are fully loaded, the load balancer will start rejecting calls. So if all your calls are synchronous, you get some sort of scalability right out of the box: you just add a new web server to the load balancer and you have increased your capacity. Boom!!

----------------------------------------

There are two possible avenues you can look at:
1) Implement something using Java's ThreadPoolExecutor

Core Java comes with a ThreadPoolExecutor (TPE) that combines a thread pool with a queue of jobs. You can easily implement background processing using a TPE. If you are thinking of doing something that requires you to start a thread, stop right there and consider the TPE instead. Right out of the box it lets you limit the number of jobs executing in parallel and queues up the rest.
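A minimal contrast between the "just start a thread" anti-pattern and the TPE route described above; the pool size is illustrative:

```java
import java.util.concurrent.*;

public class TpeVersusRawThread {
    public static void main(String[] args) throws Exception {
        // Anti-pattern: one unmanaged thread per job, no limit, no queue:
        //   new Thread(someRunnable).start();

        // TPE route: a fixed pool caps parallelism, and extra jobs wait in
        // the executor's internal queue instead of swamping the machine.
        ExecutorService pool = Executors.newFixedThreadPool(2);
        Future<Integer> answer = pool.submit(() -> 6 * 7);
        System.out.println(answer.get()); // prints 42 once the job finishes
        pool.shutdown();
    }
}
```

The `Future` returned by `submit` is also what lets the caller check completion and retrieve failures later, which ties back to point (a) above.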

It doesn't, however, allow you to scale out to multiple machines.

2) Invest in an off-the-shelf grid/cloud-based solution

There are some good off-the-shelf, open-source solutions that let you implement background processing across multiple machines. I have used Apache Hadoop and GridGain; both are very good, although it seems GridGain is ceding ground to Hadoop nowadays. Amazon supports Hadoop too, so you will be able to take your background processing to the cloud if you have to. Any solution that distributes work across multiple machines has a big learning curve, so go this route only if you really have to.

