resultset

 
jarid jacob
Greenhorn
Posts: 2
My problem is this:
I want to query a database table that contains around 20000 records, retrieve all of them, and write them to a flat file.
Since there is a large amount of data, I need some way to get only the first 200 records in one result set. I think I can do this by setting the fetch size of the result set.
But the next time it should get records 201 to 400 in the result set, and it should continue like that through all 20000 records.
Could somebody tell me how to get the result set to contain records 201-400 the second time, 401-600 the third time, and so on until the end of the records?
 
Stan James
(instanceof Sidekick)
Posts: 8791
Hi, welcome to the ranch!

Have you tried just retrieving all the rows? I wonder if a TYPE_FORWARD_ONLY result set isn't free to discard the rows you have read and buffer up a reasonable number of rows you haven't read yet. See if This Tutorial doesn't talk about it.
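
A minimal sketch of the approach described above: a forward-only, read-only statement that reads every row and writes it to a flat file. The connection URL, table, and column names are placeholders, and whether the driver actually discards rows it has already returned depends on the driver.

import java.io.BufferedWriter;
import java.io.FileWriter;
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;

public class ExportToFlatFile {
    public static void main(String[] args) throws Exception {
        // Placeholder connection details; substitute your own driver, URL and credentials.
        Connection con = DriverManager.getConnection("jdbc:yourdb://host/db", "user", "pass");

        // Forward-only, read-only cursor: the driver is free to discard rows
        // it has already handed back instead of holding all 20000 in memory.
        Statement stmt = con.createStatement(
                ResultSet.TYPE_FORWARD_ONLY, ResultSet.CONCUR_READ_ONLY);
        ResultSet rs = stmt.executeQuery("SELECT id, name FROM my_table"); // hypothetical table and columns

        BufferedWriter out = new BufferedWriter(new FileWriter("export.txt"));
        while (rs.next()) {
            out.write(rs.getString("id") + "\t" + rs.getString("name"));
            out.newLine();
        }
        out.close();
        rs.close();
        stmt.close();
        con.close();
    }
}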
 
jarid jacob
Greenhorn
Posts: 2
Do you mean to say that I don't have to do anything particular to the ResultSet object?
Once I finish reading the first 200 records and call rs.next(), it will retrieve the next record without any additional statement?
If the result set contains all 10000 records, won't that affect the JVM heap size, say with only 256 MB?
 
Ranch Hand
Posts: 862
Yes, rs.next() will iterate through the entire ResultSet. Depending on the size of the table it may consume too much memory, but you would have to test that.
 
Stan James
(instanceof Sidekick)
Posts: 8791
It would be interesting to confirm that the result set holds all the records at the end. If you fetch with a cursor that does not allow scrolling backwards, it seems the result set would be free to discard rows after you've seen them.

If you have JDK 5, look into JConsole. Sun calls it experimental and primitive, but you can watch memory usage over time. Maybe scroll a thousand rows, pause for user input, check the console, and so on to the end of the data.
[ July 11, 2005: Message edited by: Stan James ]
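
A rough sketch of that experiment: read through the rows, pausing every thousand so the heap can be inspected in JConsole before continuing. Connection details and the table name are placeholders.

import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;

public class MemoryProbe {
    public static void main(String[] args) throws Exception {
        Connection con = DriverManager.getConnection("jdbc:yourdb://host/db", "user", "pass");
        Statement stmt = con.createStatement(
                ResultSet.TYPE_FORWARD_ONLY, ResultSet.CONCUR_READ_ONLY);
        ResultSet rs = stmt.executeQuery("SELECT * FROM my_table"); // hypothetical table

        BufferedReader in = new BufferedReader(new InputStreamReader(System.in));
        int count = 0;
        while (rs.next()) {
            count++;
            if (count % 1000 == 0) {
                // Pause so the heap graph can be checked in JConsole before continuing.
                System.out.println(count + " rows read; check JConsole, then press Enter.");
                in.readLine();
            }
        }
        System.out.println("Done: " + count + " rows.");
        rs.close();
        stmt.close();
        con.close();
    }
}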
 
Ranch Hand
Posts: 1683
My understanding is that the JDBC driver might base a ResultSet on a database cursor and only fetch a small number of rows. These rows are cached on the client side of the connection and, when exhausted, the next block of rows is retrieved by repositioning the cursor.

So, it may appear as if the ResultSet holds all the data, but this would be unlikely if a very large amount of data is involved.
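
For what it's worth, Statement.setFetchSize() is the standard way to hint how many rows the driver should pull per round trip; some drivers ignore the hint or need extra configuration before they will stream. A minimal sketch, again with placeholder connection details and table name:

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;

public class FetchSizeHint {
    public static void main(String[] args) throws Exception {
        Connection con = DriverManager.getConnection("jdbc:yourdb://host/db", "user", "pass");
        Statement stmt = con.createStatement(
                ResultSet.TYPE_FORWARD_ONLY, ResultSet.CONCUR_READ_ONLY);
        stmt.setFetchSize(200); // ask the driver for blocks of roughly 200 rows per round trip

        ResultSet rs = stmt.executeQuery("SELECT * FROM my_table"); // hypothetical table
        int count = 0;
        while (rs.next()) {
            // Process one row at a time; the driver refills its client-side
            // cache from the cursor as each block is exhausted.
            count++;
        }
        System.out.println(count + " rows processed.");
        rs.close();
        stmt.close();
        con.close();
    }
}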
 