Design question

 
Ranch Hand
Posts: 285
We are designing a J2EE application now.
In the application, there is one scenario which happens in real time. This application constantly receives messages from clients. As long as the system is up, clients can send messages to this system, and the messages need to be inserted into a database table.

Now, since the system is always running and there can be 1000 concurrent messages coming in, what's the best way to handle the database inserts?
Which pattern is best here?

Thanks,
 
(instanceof Sidekick)
Posts: 8791
Is that an EJB server? What kind of clients ... humans or another system? Calls via EJB remote interfaces or some other protocol?

Almost every server has some mechanism to avoid being flooded by requests. In an EJB server you define some number of instances of the session bean and the server can handle that many concurrent requests. If requests come in faster than that the server queues them up, and when the queue reaches some threshold it throws them away and sends a "server busy" message (I think). So that number of bean instances controls how many come in at once.
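To make that throttling idea concrete without an EJB container, here is a plain java.util.concurrent sketch of the same mechanism: a fixed number of "instances" (a Semaphore's permits), and a "server busy" rejection when none frees up in time. The class and method names are made up for illustration.

```java
import java.util.concurrent.Semaphore;
import java.util.concurrent.TimeUnit;

// Rough analogue of a session-bean instance pool: at most poolSize
// requests run at once, and a request that cannot get an instance
// before the timeout is turned away ("server busy").
public class RequestThrottle {
    private final Semaphore instances;

    public RequestThrottle(int poolSize) {
        this.instances = new Semaphore(poolSize);
    }

    /** Returns false ("server busy") if no instance frees up in time. */
    public boolean handle(Runnable request, long timeoutMs) {
        try {
            if (!instances.tryAcquire(timeoutMs, TimeUnit.MILLISECONDS)) {
                return false;               // queue threshold exceeded -> reject
            }
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
            return false;
        }
        try {
            request.run();
            return true;
        } finally {
            instances.release();            // instance goes back in the pool
        }
    }
}
```

The pool size is the knob: raise it and more requests run concurrently; lower it and more of them wait and eventually get rejected.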

Client requirements will drive chunking up requests and transactions. Most often they want a success or failure message in a short time, so you do each one in its own transaction. Sometimes you might be able to batch database updates at the connection and execute a bunch at once. They will all succeed or fail together. That's more efficient but might not work with your client expectations.
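As a sketch of that connection-level batching: the statements in each flushed batch go to the database in one round trip, and the whole batch commits or rolls back with the surrounding transaction. The table/column naming is made up, and SQLException is wrapped unchecked just to keep the sketch short.

```java
import java.sql.PreparedStatement;
import java.sql.SQLException;
import java.util.List;

// Batching inserts on one PreparedStatement. The caller prepares
// something like: INSERT INTO MESSAGE (BODY) VALUES (?)
public class MessageBatcher {

    /** Batches all messages, flushing every batchSize rows; returns rows sent. */
    public static int insertBatch(PreparedStatement ps, List<String> messages,
                                  int batchSize) {
        try {
            int pending = 0, sent = 0;
            for (String body : messages) {
                ps.setString(1, body);
                ps.addBatch();
                if (++pending == batchSize) {      // full batch -> one round trip
                    sent += ps.executeBatch().length;
                    pending = 0;
                }
            }
            if (pending > 0) {                     // flush the remainder
                sent += ps.executeBatch().length;
            }
            return sent;
        } catch (SQLException e) {
            throw new RuntimeException("batch insert failed", e); // sketch only
        }
    }
}
```

The batch size is a tuning knob: bigger batches mean fewer round trips but more work lost if the batch fails.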

If your client doesn't want a response you can send the requests asynchronously via JMS. The number of MDBs again throttles the number of concurrent requests. I like this option because a persistent queue can hold requests if your server happens to be down for scheduled maintenance or crashes, and deliver them when you restart.
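To show that shape without dragging in a JMS provider, here is an in-JVM stand-in built on java.util.concurrent. A real implementation would use a JMS queue and MDBs; this analogue just demonstrates how the consumer count caps concurrency and how messages sent while no consumer is running wait on the queue. The 200 ms idle shutdown exists only so the sketch can stop cleanly.

```java
import java.util.List;
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.CopyOnWriteArrayList;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.LinkedBlockingQueue;
import java.util.concurrent.TimeUnit;

// Producers drop messages on a queue and return at once, with no
// response to the client; a fixed pool of consumers (playing the MDB
// pool) throttles how many inserts run concurrently.
public class AsyncInsertQueue {
    private final BlockingQueue<String> queue = new LinkedBlockingQueue<>();
    final List<String> inserted = new CopyOnWriteArrayList<>(); // stands in for the DB table
    private ExecutorService consumers;

    /** Producer side: fire and forget. */
    public void send(String message) {
        queue.add(message);
    }

    /** Consumer side: mdbCount threads cap the concurrent inserts. */
    public void startConsumers(int mdbCount) {
        consumers = Executors.newFixedThreadPool(mdbCount);
        for (int i = 0; i < mdbCount; i++) {
            consumers.execute(() -> {
                try {
                    String msg;
                    // exit after 200 ms idle so the sketch can shut down
                    while ((msg = queue.poll(200, TimeUnit.MILLISECONDS)) != null) {
                        inserted.add(msg);          // the real DB insert goes here
                    }
                } catch (InterruptedException e) {
                    Thread.currentThread().interrupt();
                }
            });
        }
        consumers.shutdown();
    }

    /** Blocks until consumers drain the queue and exit; call after startConsumers. */
    public boolean awaitDrain(long timeoutMs) {
        try {
            return consumers.awaitTermination(timeoutMs, TimeUnit.MILLISECONDS);
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
            return false;
        }
    }
}
```

Note that unlike a persistent JMS queue, this in-memory queue loses its contents if the JVM dies; that durability is exactly what you pay a JMS provider for.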

Any of that help?
[ May 12, 2005: Message edited by: Stan James ]
 
Shreya Menon
Ranch Hand
Posts: 285
Stan,

To your points:

Is that an EJB server? What kind of clients ... humans or another system? Calls via EJB remote interfaces or some other protocol?

We are designing this application now. There can be up to 1000 concurrent users, but we don't plan to use EJBs [entity beans]. We plan to have DAOs and session facades.

Clients are humans: there can be 1000 concurrent users for this system. Users will log in, and the system sends them the messages already stored in the database.

In parallel, new messages also come in from other, outside systems, and these have to be inserted into the database.
The outside messages arrive as txt files on the server; they have to be parsed and inserted into the database.
Clear?
 
Stan James
(instanceof Sidekick)
Posts: 8791
To focus on the database insert question, I'd maybe take two approaches. For humans who expect a quick response start with every update request doing whatever inserts are involved and a commit. That's simple and probably meets user expectations about transactions. For the files coming in from outside I'd start with the same approach, but you'd have the option to batch them up. If you can afford some latency in reading the files you can throttle the number of threads that process files. Be sure to prove you have a problem before solving it with complex batching.
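For the file feed, a sketch of that throttled approach: a small fixed pool caps how many incoming text files are parsed at once, independently of the interactive users. The one-message-per-line format is an assumption here, and the parsed lines would be handed to your DAO where the comment indicates.

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.List;
import java.util.concurrent.CopyOnWriteArrayList;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;

// Throttled processing of the incoming message files: at most
// maxConcurrentFiles are parsed at any one time; the rest wait in the
// executor's queue, adding latency instead of load.
public class FileFeedProcessor {
    private final ExecutorService workers;
    final List<String> parsed = new CopyOnWriteArrayList<>(); // stand-in for DB inserts

    public FileFeedProcessor(int maxConcurrentFiles) {
        this.workers = Executors.newFixedThreadPool(maxConcurrentFiles);
    }

    /** Queues one file for parsing. */
    public void submit(Path file) {
        workers.execute(() -> {
            try {
                for (String line : Files.readAllLines(file)) {
                    if (!line.isEmpty()) parsed.add(line); // DAO insert goes here
                }
            } catch (IOException e) {
                // in production: move the file to an error folder and log it
            }
        });
    }

    /** Stops accepting files and waits for the in-flight ones to finish. */
    public boolean shutdownAndWait(long timeoutMs) {
        workers.shutdown();
        try {
            return workers.awaitTermination(timeoutMs, TimeUnit.MILLISECONDS);
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
            return false;
        }
    }
}
```

Starting with a pool of one or two threads and measuring, as Stan suggests, is cheaper than building clever batching you may not need.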

Will you have connection pooling? That will probably have ways to throttle the number that actually run at the same time. If requests come in too fast to handle they'll queue up and eventually time out. If that's not acceptable you may have to beef up the hardware to handle more and more concurrent transactions.

Concurrent users != concurrent requests. You'll observe some ratio of "think time" to "server busy" time per user. With 1000 users I surely wouldn't expect 1000 concurrent requests. See if you can guess the ratio and stress test early and often. My team stresses almost every build. Amazingly simple changes to SQL and indexes can make worlds of difference and you want to find those as soon as possible.
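A back-of-envelope version of that ratio, with all numbers made up for illustration; measure your own under load.

```java
// Estimate concurrent requests from user count and the think-time ratio:
// a user occupies the server only busy/(busy + think) of the time.
public class ConcurrencyEstimate {
    /** users * busy / (busy + think), rounded up. */
    public static int concurrentRequests(int users, double busySec, double thinkSec) {
        return (int) Math.ceil(users * busySec / (busySec + thinkSec));
    }
}
```

For example, 1000 users with 0.25 s of server time per request and 10 s of think time works out to about 25 concurrent requests, not 1000.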
 