I am not sure if "request queue depth" is the correct term for the issue I am trying to describe. What I mean is this: if Tomcat 5.5 can serve 50 concurrent user requests and receives one more than that (51 concurrent requests), can we control how Tomcat behaves in such a case? I am sure we can, so now I would like to know a little more about the how part.
By default (with no configuration changes), Tomcat will keep the 51st request waiting until one of the 50 busy servlet threads/objects is freed up to serve it. In this case the queue depth I was talking about is 1. So if there were 70 concurrent requests to a particular servlet/object and Tomcat's capacity is 50 (it must be a configurable variable somewhere in the server.xml file, isn't it? Can someone point me to it, please?), the queue depth would be (70 - 50 =) 20.
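For reference, I believe the knobs involved live on the HTTP Connector element in server.xml; something like the following is my understanding of where capacity and queue depth would be set (the values below are illustrative for the 50/20 scenario above, not Tomcat defaults):

```xml
<!-- maxThreads: how many requests are served concurrently (the "capacity");
     acceptCount: how many further connections may queue while all worker
     threads are busy (the "queue depth" I described). Illustrative values. -->
<Connector port="8080"
           maxThreads="50"
           acceptCount="20"
           connectionTimeout="20000" />
```

If that reading is right, a connection arriving when both the worker threads and the accept backlog are full would not be queued at all.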
My question now is: what happens if the queue depth is very low and the number of concurrent requests is very high? Will it result in a denial of service, merely a late response to the user, an out-of-memory error, or a timeout (very unlikely, as the default timeout is 30)?
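To make the scenario concrete, the pool-plus-bounded-queue behavior I am asking about can be sketched with plain java.util.concurrent. This is only an analogy for illustration, not Tomcat's actual connector code; the worker and queue sizes are made up:

```java
import java.util.concurrent.*;

// Toy model of the scenario above: a fixed worker pool (the server's
// capacity) plus a bounded backlog (the "queue depth"), with outright
// rejection once both are full. Numbers are illustrative only.
public class QueueDepthDemo {
    static String simulate() throws InterruptedException {
        int workers = 2;    // stands in for the worker-thread cap
        int queueDepth = 1; // stands in for the pending-request backlog
        ThreadPoolExecutor pool = new ThreadPoolExecutor(
                workers, workers, 0L, TimeUnit.MILLISECONDS,
                new ArrayBlockingQueue<>(queueDepth));
        CountDownLatch release = new CountDownLatch(1);
        Runnable slowRequest = () -> {
            try { release.await(); } catch (InterruptedException ignored) { }
        };

        String result;
        pool.execute(slowRequest); // request 1: gets a worker
        pool.execute(slowRequest); // request 2: gets the other worker
        pool.execute(slowRequest); // request 3: parks in the queue
        try {
            pool.execute(slowRequest); // request 4: no worker, no queue slot
            result = "accepted";
        } catch (RejectedExecutionException e) {
            result = "rejected"; // analogous to a refused connection
        }
        release.countDown(); // let the parked "requests" finish
        pool.shutdown();
        pool.awaitTermination(5, TimeUnit.SECONDS);
        return result;
    }

    public static void main(String[] args) throws InterruptedException {
        System.out.println(simulate()); // prints "rejected"
    }
}
```

In this model the excess request is rejected immediately rather than made to wait, which is why I am wondering whether a too-small queue depth means refused connections for users rather than just slow responses.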
Any comment will be appreciated. Thanks.
Ashik Uzzaman Senior Member of Technical Staff
IT - Applications Development