Let's discuss things from the server end: what do you think the server has to do? What I think is: the server has to respond to each request, and it doesn't know whether you are reloading or requesting from a new browser window.
From the client end we have the browser cache. I don't know whether it helps for dynamic pages like JSPs and servlets.
Anybody with a good understanding of this, please post. Thanks in advance.
So if I surf to a website and hit reload 5 times I have basically spawned 5 threads on the server and 10 request and response objects?
Although you might think they'd be the same, each request/response cycle could in theory generate different content in both directions. For example, your ISP might use a different proxy on each request so the HTTP headers are different (making the ServletRequest distinct), and the page returned might contain some time-dependent functionality (making the ServletResponse distinct). It therefore makes sense that the request and response objects might be different. Further, you wouldn't want two entirely separate requests to share the same object even if the requests appeared identical, as you'd then also be sharing request-scoped attributes between two different requests (highly undesirable - use session-scoped attributes if that's your requirement). Since there's really no way of knowing what the response will contain without doing all the dynamic execution, there's no obvious way (that I can see) to sensibly optimise that any further.
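To make the scoping point concrete, here's a minimal plain-Java sketch (not real servlet API code - just maps standing in for the container's attribute storage) of why request scope must be per-request while session scope is shared. The names `formError` and `user` are made up for illustration:

```java
import java.util.HashMap;
import java.util.Map;

public class ScopeDemo {
    public static void main(String[] args) {
        // Session scope: one map shared by every request in the same session.
        Map<String, Object> session = new HashMap<>();

        // Request scope: the container creates a fresh object per request/response cycle.
        Map<String, Object> request1 = new HashMap<>();
        Map<String, Object> request2 = new HashMap<>();

        request1.put("formError", "bad email");   // visible to request 1 only
        session.put("user", "alice");             // visible to both requests

        System.out.println(request2.containsKey("formError")); // false
        System.out.println(session.get("user"));               // alice
    }
}
```

If the container handed both requests the same object, `formError` from one user's submission would leak into the other request - exactly the "highly undesirable" sharing described above.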
On the matter of threads, the container has to do work for each request to execute all the code in filters, servlets and JSPs, in theory simultaneously processing all the requests that come in at (almost) the same time. It needs a thread for each active request. Note I said "active request" - containers can pool threads so they stay alive at the end of one request/response cycle and are used again later. This avoids the overhead of starting a new thread for each request while still threading incoming requests, at the expense of some memory being used to maintain the pool.
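The pooling idea can be sketched with `java.util.concurrent` (this isn't how any particular container implements it - just a standard fixed thread pool standing in for the container's worker pool, with ten dummy "requests"):

```java
import java.util.Set;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;

public class PoolDemo {
    public static void main(String[] args) throws InterruptedException {
        // A pool of 2 worker threads standing in for the container's pool.
        ExecutorService pool = Executors.newFixedThreadPool(2);
        Set<String> workers = ConcurrentHashMap.newKeySet();

        // 10 "requests": each one records which pooled thread served it.
        for (int i = 0; i < 10; i++) {
            pool.submit(() -> workers.add(Thread.currentThread().getName()));
        }
        pool.shutdown();
        pool.awaitTermination(5, TimeUnit.SECONDS);

        // At most 2 distinct threads handled all 10 requests:
        // threads survive one task and are reused for the next.
        System.out.println(workers.size() <= 2); // true
    }
}
```

The ten tasks are served by at most two threads - no thread is created or torn down per "request", which is exactly the overhead the pool avoids.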
At least threads are lightweight! Apache starts multiple processes per server with far greater overheads than a simple thread, and each of those processes can then spawn many threads. A typical maximum setting for the number of processes is 5-10, so in theory your five requests could each be handled by a separate process, which is significantly less efficient than the equivalent number of threads in a single process. However, running multiple processes in parallel makes the server more stable under large numbers of requests (e.g. a crash during a single request only terminates one process, which can be restarted, and not the entire server!).
Charles Lyons (SCJP 1.4, April 2003; SCJP 5, Dec 2006; SCWCD 1.4b, April 2004)
Author of OCEJWCD Study Companion for Oracle Exam 1Z0-899 (ISBN 0955160340 / Amazon / Amazon UK)