I have a query returning 20,000 rows.
When I loop through the result set, it takes 13 seconds to store the results in a hash map (see the code snippet below):
while (resultSet.next()) {
    int countResult = resultSet.getInt("COUNT");
    Object groupFieldResult = resultSet.getObject(1);
    if (groupFieldResult != null) {
        if (groupFieldResult instanceof Date) {
            Date date = (Date) groupFieldResult;
            results.put(date.toString(), Integer.valueOf(countResult));
        }
    }
}
Q1. Is there a way to reduce the time taken to traverse this many records?
In the meantime, since I only need 500 rows at a time, I set the maximum row count and fetch size on the statement (see below):
stmt.setMaxRows(500);
stmt.setFetchSize(500);
This works perfectly for the first 500 rows.
Q2. The issue is: how do we then quickly get the next 500 rows using JDBC?
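One option, if the database supports it, is to push the paging into the SQL itself rather than doing it at the JDBC level. This is only a sketch under the assumption of a LIMIT/OFFSET-capable database (the syntax varies by vendor; Oracle of that era used ROWNUM instead), and the helper name pagedQuery is hypothetical:

```java
// Hypothetical helper: wraps a base query with LIMIT/OFFSET paging.
// Assumes a database that accepts LIMIT/OFFSET syntax (e.g. MySQL, PostgreSQL);
// for other vendors the wrapping clause would differ.
public class PageSql {

    // page is 0-based; each page holds pageSize rows.
    static String pagedQuery(String baseSql, int page, int pageSize) {
        return baseSql + " LIMIT " + pageSize + " OFFSET " + (page * pageSize);
    }

    public static void main(String[] args) {
        // Page 0 starts at offset 0, page 1 at offset 500, and so on.
        System.out.println(pagedQuery("SELECT grp, COUNT(*) AS COUNT FROM t GROUP BY grp", 1, 500));
    }
}
```

Each page is then a fresh, small query, so the driver never has to materialize all 20,000 rows at once.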
I could implement some custom caching strategy, but a JDBC solution would be cleaner and simpler.
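For a pure-JDBC approach, one sketch (not tested against a real database) is a scrollable ResultSet combined with absolute() positioning to jump to the start of each page. The class and method names here (Paging, loadPage, firstRowOfPage) are illustrative, and the "COUNT" column matches the snippet above; note that some drivers implement scrollable result sets by caching all rows on the client, which would defeat the purpose, so this depends on the driver:

```java
import java.sql.Connection;
import java.sql.ResultSet;
import java.sql.SQLException;
import java.sql.Statement;
import java.util.HashMap;
import java.util.Map;

public class Paging {

    // 1-based absolute row index of the first row of a 0-based page,
    // as required by ResultSet.absolute().
    static int firstRowOfPage(int page, int pageSize) {
        return page * pageSize + 1;
    }

    // Loads one page of (group, count) pairs from the same query as above.
    static Map<String, Integer> loadPage(Connection conn, String sql,
                                         int page, int pageSize) throws SQLException {
        Map<String, Integer> results = new HashMap<String, Integer>();
        Statement stmt = conn.createStatement(
                ResultSet.TYPE_SCROLL_INSENSITIVE, ResultSet.CONCUR_READ_ONLY);
        stmt.setFetchSize(pageSize); // hint: fetch roughly one page per round trip
        ResultSet rs = stmt.executeQuery(sql);
        try {
            // Jump straight to the first row of the requested page;
            // absolute() returns false if the row does not exist.
            if (rs.absolute(firstRowOfPage(page, pageSize))) {
                int read = 0;
                do {
                    Object group = rs.getObject(1);
                    if (group != null) {
                        results.put(group.toString(), rs.getInt("COUNT"));
                    }
                    read++;
                } while (read < pageSize && rs.next());
            }
        } finally {
            rs.close();
            stmt.close();
        }
        return results;
    }
}
```

Also note the distinction between the two calls in the snippet above: setMaxRows(500) is a hard cap on how many rows the ResultSet can ever contain, while setFetchSize(500) is only a hint to the driver about how many rows to pull per network round trip. For paging, the cap would have to be removed or the later pages become unreachable.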
Please share your thoughts.
Regards, Avi
[ November 22, 2005: Message edited by: Bear Bibeault ]