
Fetching 50000 records using JDBC.

 
Mohan Karthick
Ranch Hand
Posts: 199
If you want to fetch 50000 records and write them to a text file one by one in a specified format, what is the best approach using JDBC?
Basically, one approach is to get records 1-50 first, then records 51-100, and release the result set in between. But how do I keep track of each next batch of 50? There is no primary key in the table.
 
Jesper de Jong
Java Cowboy
Saloon Keeper
Posts: 15480
Why do you think you have to do it in batches of 50 records? Why not just loop over the ResultSet?
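A minimal sketch of such a loop (the table name `mytable`, the columns, and the pipe-delimited output format are placeholders, since the question doesn't say what the real format is):

```java
import java.io.BufferedWriter;
import java.io.FileWriter;
import java.sql.Connection;
import java.sql.ResultSet;
import java.sql.Statement;

public class ExportAll {

    // Format one record; the "specified format" from the question is
    // unknown, so a pipe-delimited layout stands in for it here.
    static String formatRecord(String id, String name) {
        return id + "|" + name;
    }

    // Stream every row and write it out one by one; the cursor advances
    // through the result on each next() call.
    static void export(Connection con, String file) throws Exception {
        try (Statement st = con.createStatement();
             ResultSet rs = st.executeQuery("SELECT id, name FROM mytable");
             BufferedWriter out = new BufferedWriter(new FileWriter(file))) {
            while (rs.next()) {
                out.write(formatRecord(rs.getString("id"), rs.getString("name")));
                out.newLine();
            }
        }
    }
}
```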

A ResultSet does not normally load the whole result into memory at once; it usually holds a cursor into the database, and fetches rows only as you advance it (with the next() method). You don't need to be afraid that it's going to take a lot of memory.
 
Mohan Karthick
Ranch Hand
Posts: 199
I thought it would give an out-of-memory error at some point.
So if we use setFetchSize(10000), does it give better performance?
Are you sure a memory problem won't happen if I don't fetch in batches of 50?
 
Balaji Loganathan
author and deputy
Bartender
Posts: 3150
Originally posted by Mohan Karthick:
I thought it would give an out-of-memory error at some point.
So if we use setFetchSize(10000), does it give better performance?
Are you sure a memory problem won't happen if I don't fetch in batches of 50?


In addition to what Jesper said, also have a look at http://www.precisejava.com/javaperf/j2se/IO.htm
 
Maximilian Xavier Stocker
Ranch Hand
Posts: 381
Originally posted by Jesper de Jong:
A ResultSet does not normally load the whole result into memory at once; it usually holds a cursor into the database, and fetches rows only as you advance it (with the next() method). You don't need to be afraid that it's going to take a lot of memory.

I strongly disagree with this advice. With some databases/drivers, all result sets are loaded to the client in full, in all circumstances. Other databases/drivers do this whenever the result set is supposed to be scrollable.

Sometimes they will use temporary files to hold the data (the MS SQL Server driver comes to mind), and others will try to hold it all in memory.

In short: while in a perfect world it would work as you describe, it does NOT always work like that. So at the very least you should do some testing with your particular driver/database combination and see what happens. If the code is supposed to work generically across databases, then I would highly recommend NOT doing it that way.

At the very least, here is what I would do:

- set the cursor type to read-only and forward-only
- set the fetch size to something that will not cause problems
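The two settings above can be sketched like this (the fetch size of 500 is just an illustrative value, not a recommendation from the thread):

```java
import java.sql.Connection;
import java.sql.ResultSet;
import java.sql.Statement;

public class StreamingSettings {

    // A modest fetch size: the driver is asked to pull rows from the
    // server in chunks of this many, instead of all 50000 at once.
    static final int FETCH_SIZE = 500;

    static Statement forwardOnlyStatement(Connection con) throws Exception {
        // A read-only, forward-only cursor gives the driver the best
        // chance of streaming rather than buffering the whole result.
        Statement st = con.createStatement(
                ResultSet.TYPE_FORWARD_ONLY,
                ResultSet.CONCUR_READ_ONLY);
        st.setFetchSize(FETCH_SIZE);
        return st;
    }
}
```

Note that setFetchSize is only a hint; some drivers ignore it or require extra conditions (auto-commit off, for example) before they actually stream.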

But I prefer the style of actually selecting X rows, then closing the result set and executing the next query. A PreparedStatement with a WHERE clause on a primary key would be good for that.

The OP says they don't have a primary key; I would suggest fixing that first. Does the data have any unique fields? If not, then add a surrogate (auto-number style) key.
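A sketch of that batched style, assuming a surrogate key column has been added (the column name `seq_id`, the `data` column, and the `LIMIT` syntax are all assumptions; `LIMIT` works on MySQL/PostgreSQL but not everywhere):

```java
import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.ResultSet;

public class KeysetPager {

    static final int PAGE_SIZE = 50;

    // Hypothetical query: fetch the next batch of rows after the last
    // key we saw, in key order, capped at one page.
    static final String PAGE_SQL =
            "SELECT seq_id, data FROM mytable"
            + " WHERE seq_id > ? ORDER BY seq_id LIMIT " + PAGE_SIZE;

    // Write one page of rows after lastKey; returns the highest key seen,
    // or -1 when there are no more rows.
    static long exportPage(Connection con, long lastKey, Appendable out)
            throws Exception {
        long maxKey = -1;
        try (PreparedStatement ps = con.prepareStatement(PAGE_SQL)) {
            ps.setLong(1, lastKey);
            try (ResultSet rs = ps.executeQuery()) {
                while (rs.next()) {
                    maxKey = rs.getLong("seq_id");
                    out.append(rs.getString("data")).append('\n');
                }
            }
        }
        return maxKey; // caller loops: lastKey = exportPage(...) until -1
    }
}
```

The result set is fully closed between batches, so memory use is bounded by one page regardless of how the driver buffers.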
 