Basically, one approach is to fetch the first 50 records, then records 51-100, releasing the result set after each batch. But how do I keep track of each successive batch of 50 when the table has no primary key?
ResultSet does not normally load the whole result into memory at once; it usually has a pointer to the database, and fetches results only when you advance it (with the next() method). You don't need to be afraid that it's going to take a lot of memory.
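To make that concrete, here is a minimal sketch of reading rows one at a time with next(). The method name and the single-column shape are my own illustration, and (as discussed further down the thread) whether the driver truly streams rather than buffering everything is driver-dependent:

```java
import java.sql.ResultSet;
import java.sql.SQLException;
import java.util.ArrayList;
import java.util.List;

public class RowStreamer {
    // Reads one String column row by row. When the driver streams, only the
    // current fetch buffer is in memory at any time -- not the whole result.
    static List<String> readNames(ResultSet rs) throws SQLException {
        List<String> names = new ArrayList<>();
        while (rs.next()) {              // advances the cursor one row
            names.add(rs.getString(1));  // column index 1 = first selected column
        }
        return names;
    }
}
```

You would obtain the ResultSet from an ordinary Statement/executeQuery call; the point is simply that you consume rows as you advance the cursor rather than materializing them all first.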
Originally posted by Mohan Karthick:
I thought it would run out of memory at some point.
So if we use setFetchSize(10000), does that only improve performance?
Are you sure a memory problem will not happen if we don't fetch in batches of 50?
In addition to what Jesper said, also have a look at http://www.precisejava.com/javaperf/j2se/IO.htm
Originally posted by Jesper de Jong:
ResultSet does not normally load the whole result into memory at once; it usually has a pointer to the database, and fetches results only when you advance it (with the next() method). You don't need to be afraid that it's going to take a lot of memory.
I strongly disagree with this advice. With some databases/drivers all result sets are loaded to the client in full in all circumstances. Some other databases/drivers do this if the result set is supposed to be scrollable.
Sometimes they will use temporary files to hold the data (the Microsoft SQL Server driver comes to mind) and others will try to hold it all in memory.
In short: in a perfect world it would work as you describe, but it does NOT always work like that, so at the very least you should test with your particular driver/database combination and see what happens. If the code is supposed to work generically across databases, then I would highly recommend NOT relying on that behavior.
At the very least, here is what I would do:
- set the cursor type to read-only and forward-only
- set the fetch size to something that will not cause problems
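Those two settings are a sketch like this in JDBC. The fetch size of 50 is only an illustration (it is a hint, and drivers treat it differently; MySQL Connector/J, for instance, historically needed Integer.MIN_VALUE on a forward-only read-only statement to stream at all):

```java
import java.sql.Connection;
import java.sql.ResultSet;
import java.sql.SQLException;
import java.sql.Statement;

public class ForwardOnlyQuery {
    // Creates a statement that hints the driver to stream rows: a
    // forward-only, read-only cursor with a modest fetch size.
    static Statement streamingStatement(Connection con) throws SQLException {
        Statement stmt = con.createStatement(
                ResultSet.TYPE_FORWARD_ONLY,   // no scrolling backwards
                ResultSet.CONCUR_READ_ONLY);   // no updatable cursor
        stmt.setFetchSize(50);                 // rows per round trip -- a driver hint, not a guarantee
        return stmt;
    }
}
```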
But I prefer the approach of actually selecting X rows, closing the result set, and then executing the next query. A PreparedStatement with a WHERE clause on a primary key would be good for this.
The OP says they don't have a primary key; I would suggest fixing that first. Does the data have any unique fields? If not, add a surrogate (auto-number style) key.
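Assuming such a surrogate id column is added, the select-then-close style could look like this keyset-pagination sketch. The table name, column names, page size, and the LIMIT syntax are all assumptions (standard SQL uses FETCH FIRST; adjust for your database):

```java
import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.SQLException;

public class KeysetPager {
    // Fetch 50 rows past the last key seen, close the result set, repeat.
    // Requires a unique, ordered key column -- the surrogate "id" here.
    static final String PAGE_SQL =
            "SELECT id, name FROM employees WHERE id > ? ORDER BY id LIMIT 50";

    static void processAll(Connection con) throws SQLException {
        long lastId = 0;  // assumes surrogate keys start above 0
        try (PreparedStatement ps = con.prepareStatement(PAGE_SQL)) {
            boolean sawRows = true;
            while (sawRows) {
                sawRows = false;
                ps.setLong(1, lastId);
                try (ResultSet rs = ps.executeQuery()) { // released after each page
                    while (rs.next()) {
                        sawRows = true;
                        lastId = rs.getLong("id");       // remember where this page ended
                        // process the row here, e.g. rs.getString("name")
                    }
                }
            }
        }
    }
}
```

Because each page is keyed off the last id seen rather than an OFFSET, each query is cheap for the database and no result set stays open longer than one batch.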