I think Prabhu oversimplified things in his answer to your original question. There is one listening socket, the socket that the service is bound to (in a web server, usually 80/443 for HTTP/HTTPS), but the server creates a new socket for each incoming connection. This way the incoming connection can be serviced while the server socket continues to listen. It is also common practice to use threads so the server can service existing connections and listen for new ones at the same time.
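A minimal sketch of that pattern in Java: one bound, listening ServerSocket, and a new Socket plus worker thread per accepted connection. Class and method names here are illustrative, not from any particular server.

```java
import java.io.*;
import java.net.*;

// Sketch: one listening socket, a new socket (and thread) per accepted connection.
public class TinyServer {
    private final ServerSocket listener;

    public TinyServer(int port) throws IOException {
        listener = new ServerSocket(port);   // the single bound, listening socket
    }

    public int port() { return listener.getLocalPort(); }

    public void start() {
        Thread acceptLoop = new Thread(() -> {
            try {
                while (true) {
                    Socket conn = listener.accept();          // new socket per client
                    new Thread(() -> handle(conn)).start();   // service it concurrently
                }
            } catch (IOException e) {
                // listener was closed; stop accepting
            }
        });
        acceptLoop.setDaemon(true);
        acceptLoop.start();   // the accept loop keeps listening while handlers run
    }

    private void handle(Socket conn) {
        try (BufferedReader in = new BufferedReader(
                     new InputStreamReader(conn.getInputStream()));
             PrintWriter out = new PrintWriter(conn.getOutputStream(), true)) {
            String line = in.readLine();
            out.println("echo: " + line);   // trivial per-connection work
        } catch (IOException ignored) {
        }
    }
}
```

A real server would parse HTTP here instead of echoing, but the accept-then-spawn structure is the same.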
Originally posted by A Kumar:
But from the web server's side, how will these 100 requests ask for database connectivity? From the web server's point of view, will it open, say, 'x' number of sockets that connect to a single point of contact on the database server?
Since database connections are non-trivial to create, it is common to use a pool of connections rather than creating one for each request. And again, the database will work like the web server described above: bound to a single socket, creating a socket for each simultaneous connection. Have a look at The Java Tutorial: All About Sockets for some simple code.
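Since pooling comes up again below, here is a hedged sketch of the idea: a fixed set of connections handed out and returned via a blocking queue. The generic type `C` is a placeholder; in real code it would be `java.sql.Connection` obtained from a `DataSource`, and you would use an existing pool library rather than rolling your own.

```java
import java.util.Arrays;
import java.util.List;
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;

// Sketch of a fixed-size connection pool. Connections are created once,
// then borrowed and returned instead of opened per request.
public class SimplePool<C> {
    private final BlockingQueue<C> idle;

    public SimplePool(List<C> connections) {
        // fair=false, pre-filled with the already-created connections
        idle = new ArrayBlockingQueue<>(connections.size(), false, connections);
    }

    // Borrow a connection, blocking until one is free.
    public C acquire() throws InterruptedException {
        return idle.take();
    }

    // Return it so other request threads can reuse it.
    public void release(C conn) {
        idle.offer(conn);
    }
}
```

The point is that expensive connection setup happens once, and request threads that arrive when all connections are busy simply wait their turn instead of hammering the database with new logins.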
I also remember reading, back in my PHP days, that most databases are single-write/multi-read. If I'm remembering correctly, you can have a lot of connections to your database (which is bad; connection pooling puts much less strain on the DB server), but the first thread that comes in to write gets priority, and later writers go into a queue.
Can anyone verify whether this is still (or ever was) correct? I'm learning Java to build an online application that may have a lot of people trying to access a database at any given time.
Thousands of simultaneous requests? Wow! That's a serious number. Apart from everything else, I'd say that multiple servers will be necessary if every request also involves access to a second (database) server.
Servers are designed in a multithreaded way. There is one listening socket, but as soon as a request comes in, the server creates another socket and a worker thread to deal with that request, while the main thread keeps listening on the original socket for new requests. If another request comes in, the same procedure is followed. Note that the accepted socket still uses the server's bound port; it is the client's address and port that make each connection unique, not a new server-side port.
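That last point is easy to check from code. This small sketch (illustrative names, not production code) binds a listener on an ephemeral port, connects to it, and compares the ports involved:

```java
import java.io.IOException;
import java.net.ServerSocket;
import java.net.Socket;

// Demonstration: the socket returned by accept() shares the listening port.
// Connections are distinguished by the remote (client) address and port.
public class SamePortDemo {
    public static int[] ports() throws IOException {
        try (ServerSocket listener = new ServerSocket(0)) {   // OS picks a free port
            try (Socket client = new Socket("127.0.0.1", listener.getLocalPort());
                 Socket accepted = listener.accept()) {
                return new int[] {
                    listener.getLocalPort(),   // the bound, listening port
                    accepted.getLocalPort(),   // same port on the accepted socket
                    accepted.getPort()         // the client's own ephemeral port
                };
            }
        }
    }
}
```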
Here's a quick example from a real server of what Rahul just described:
When requests come in faster than they can be handled, the thread pool expands to create more threads. This particular server doesn't put any limit on requests or threads because it only has a handful of users. One could use a thread pool with a maximum size, or keep a count of active requests; I guess you could send a "server busy" message back to the user at some point.
Any database access would be in the MessageHandler and is not really related to being in an HTTP server. At high loads, some kind of connection pooling will be critical for your throughput.