simple cache

 
Ranch Hand
Posts: 60
Hi,

I'm writing a multi-threaded application that needs to process some number x of rows in a database (thousands of them). I do not want to go to the database once and retrieve all the rows; instead I'd like to implement a simple cache of size y to hold the data, and make another call to the database for the next y rows once those have all been processed. Is this a good approach, and how would I go about implementing it? I was thinking about using an ArrayList and tailoring it to my needs.

Thanks,
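
(A minimal sketch of the batched approach described above, assuming one reader thread fills a bounded queue with y rows at a time while worker threads drain it. The work_items table, its columns, and the batch size are made up for illustration; real code would also need a shutdown signal for the workers.)

```java
import java.sql.*;
import java.util.*;
import java.util.concurrent.*;

public class BatchedRowCache {
    private static final int BATCH_SIZE = 500;                 // "y" in the question
    private final BlockingQueue<Map<String, Object>> cache =
            new ArrayBlockingQueue<>(BATCH_SIZE);

    // Reader thread: fetch at most BATCH_SIZE rows per query, using keyset
    // pagination (remember the last id seen and ask for everything after it).
    public void produce(Connection con) throws SQLException, InterruptedException {
        long lastId = 0;
        try (PreparedStatement ps = con.prepareStatement(
                "SELECT id, payload FROM work_items WHERE id > ? ORDER BY id")) {
            ps.setMaxRows(BATCH_SIZE);
            while (true) {
                ps.setLong(1, lastId);
                int fetched = 0;
                try (ResultSet rs = ps.executeQuery()) {
                    while (rs.next()) {
                        Map<String, Object> row = new HashMap<>();
                        row.put("id", rs.getLong("id"));
                        row.put("payload", rs.getString("payload"));
                        cache.put(row);                         // blocks while the cache is full
                        lastId = rs.getLong("id");
                        fetched++;
                    }
                }
                if (fetched == 0) {
                    break;                                      // no rows left to process
                }
            }
        }
    }

    // Worker threads call this to pull the next row out of the cache.
    public Map<String, Object> take() throws InterruptedException {
        return cache.take();
    }
}
```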
 
Ranch Hand
Posts: 38
Use javax.sql.rowset.CachedRowSet
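
(A hedged sketch of how CachedRowSet paging could be used here: setPageSize keeps at most that many rows in memory, and nextPage() goes back to the database for the next batch. The JDBC URL, credentials, and work_items table are made up.)

```java
import javax.sql.rowset.CachedRowSet;
import javax.sql.rowset.RowSetProvider;

public class CachedRowSetPaging {
    public static void main(String[] args) throws Exception {
        CachedRowSet crs = RowSetProvider.newFactory().createCachedRowSet();
        crs.setUrl("jdbc:mydb://localhost/sample");   // hypothetical connection details
        crs.setUsername("user");
        crs.setPassword("secret");
        crs.setCommand("SELECT id, payload FROM work_items");
        crs.setPageSize(100);                         // hold no more than 100 rows at a time
        crs.execute();

        do {
            while (crs.next()) {
                process(crs.getLong("id"), crs.getString("payload"));
            }
        } while (crs.nextPage());                     // fetch the next page, if there is one

        crs.close();
    }

    private static void process(long id, String payload) {
        System.out.println(id + ": " + payload);
    }
}
```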
 
(instanceof Sidekick)
Posts: 8791
CachedRowSet looks cool. I'd never seen that before. Thanks!

I'm pretty sure that I've read that all JDBC implementations are lazy about transferring data from the database into Java memory. When you select a zillion rows, the driver fetches some subset of them into the ResultSet. As you work through the ResultSet, the driver pulls more rows into memory. If you're set to scroll forward only, the ResultSet can throw away rows you've already seen.

Anybody know if my recollection is correct? Does that also do just what the poster wants?
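
(For illustration, a minimal forward-only, read-only scan; how many rows the driver actually keeps buffered at any moment is implementation-specific. The connection details and work_items table are made up.)

```java
import java.sql.*;

public class ForwardOnlyScan {
    public static void main(String[] args) throws SQLException {
        try (Connection con = DriverManager.getConnection(
                     "jdbc:mydb://localhost/sample", "user", "secret");
             Statement st = con.createStatement(
                     ResultSet.TYPE_FORWARD_ONLY, ResultSet.CONCUR_READ_ONLY);
             ResultSet rs = st.executeQuery("SELECT id, payload FROM work_items")) {
            while (rs.next()) {
                // Rows already passed can be discarded by the driver,
                // since the cursor can never scroll back.
                System.out.println(rs.getLong("id") + ": " + rs.getString("payload"));
            }
        }
    }
}
```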
 
Wanderer
Posts: 18671
Stan: I'm sure it works that way in at least some implementations, and probably most of them. I'd be surprised if that behavior is actually guaranteed anywhere, though. It seems like the sort of thing they'd leave to implementations to decide. Statement, PreparedStatement and ResultSet are just interfaces, after all, so each implementation can provide completely different code for this. I don't see anything in the API that would prevent an implementor from completely loading the data for a ResultSet before returning from an executeQuery(), for example. Most implementors wouldn't do that because it's inefficient, but you never know. I think. Maybe someone else has better info on this?
 
Stan James
(instanceof Sidekick)
Posts: 8791
CachedRowSet still looks better. The doc has an example that claims to keep no more than 100 rows in memory, which is just what the OP was looking for, I think. But it doesn't say what's happening with the ResultSet that must be buried inside somewhere.
 
Greenhorn
Posts: 15
Try setFetchSize(int rows) on the ResultSet. As far as I know, the JDBC specification talks about giving caching "hints" to the implementation. I have experimented with varying the cache size on a performance-related problem and found quite significant benefits. I was using Oracle 9i and the OCI driver.


setFetchSize(int rows) - Gives the JDBC driver a hint as to the number of rows that should be fetched from the database when more rows are needed for this ResultSet object. If the fetch size specified is zero, the JDBC driver ignores the value and is free to make its own best guess as to what the fetch size should be. The default value is set by the Statement object that created the result set. The fetch size may be changed at any time.
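
(A short sketch of the fetch-size hint described above; the value is only a hint, and the driver may honour, adjust, or ignore it. The work_items table is made up.)

```java
import java.sql.*;

public class FetchSizeHint {
    public static void scan(Connection con) throws SQLException {
        try (PreparedStatement ps = con.prepareStatement(
                "SELECT id, payload FROM work_items")) {
            ps.setFetchSize(100);                // hint: fetch ~100 rows per round trip
            try (ResultSet rs = ps.executeQuery()) {
                rs.setFetchSize(200);            // the hint can also be changed on the ResultSet itself
                while (rs.next()) {
                    // process rs.getLong("id"), rs.getString("payload") ...
                }
            }
        }
    }
}
```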

 
Ranch Hand
Posts: 52
Hello friends.

With the suggested solutions of keeping a cache in a ResultSet or a CachedRowSet, I see a problem: what happens if the database table is updated by a query from outside the JVM instance? I am asking about synchronization between the cached version and the database table.

Currently I am working with a static TreeMap object and storing the bean objects in it. This solution works well, but I don't have a way to keep it synchronized.

How can we synchronize between the database and the cache?

I would appreciate your review.
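
(One common approach, as a sketch rather than the poster's actual code: keep the TreeMap cache but periodically re-read rows whose last-modified timestamp is newer than the previous refresh. The work_items table and its last_modified column are hypothetical, and deleted rows would still need separate handling.)

```java
import java.sql.*;
import java.util.*;

public class RefreshingCache {
    private final TreeMap<Long, String> cache = new TreeMap<>();
    private Timestamp lastRefresh = new Timestamp(0);

    // Call this on a timer (or before reads that must be reasonably fresh).
    public synchronized void refresh(Connection con) throws SQLException {
        Timestamp now = new Timestamp(System.currentTimeMillis());
        try (PreparedStatement ps = con.prepareStatement(
                "SELECT id, payload FROM work_items WHERE last_modified > ?")) {
            ps.setTimestamp(1, lastRefresh);
            try (ResultSet rs = ps.executeQuery()) {
                while (rs.next()) {
                    cache.put(rs.getLong("id"), rs.getString("payload"));
                }
            }
        }
        lastRefresh = now;
    }

    public synchronized String get(long id) {
        return cache.get(id);
    }
}
```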
 
Ranch Hand
Posts: 67
How about using an open-source tool like Hibernate for object/relational persistence, instead of writing your own implementation of all the synchronization between the database and the objects in memory? Correct me if I have misunderstood your question.

Best Regards,
Praveen.
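
(A rough sketch of what that suggestion looks like, assuming a Hibernate SessionFactory is already configured and a hypothetical WorkItem class is mapped to the table; Hibernate's session then tracks changes to the loaded objects and writes them back to the database when it is flushed.)

```java
import java.util.List;
import org.hibernate.Session;
import org.hibernate.SessionFactory;

public class HibernateExample {
    public static void process(SessionFactory sessionFactory) {
        Session session = sessionFactory.openSession();
        try {
            List items = session.createQuery("from WorkItem").list();
            // ... work with the mapped objects; Hibernate keeps them in sync
            //     with the database when the session is flushed/committed.
        } finally {
            session.close();
        }
    }
}
```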
 