Originally posted by Anand Karia:
Dear All!
I am using PostgreSQL; fetching data takes a long time to process because the table contains a large number of rows.
It's difficult to say anything useful with so little information. Things could be slow because:
- your SQL query is inefficient
- your resultset is huge
- your database is slow, and it just is going to take that long
- your network is too slow or has a really high error rate
- your fetch size is too small
- your fetch size is too big and your application doesn't have enough memory; your application has to repeatedly garbage collect to scrape up enough memory to store the fetch
- your fetch size is too big and your application has too much memory; your JVM has more memory than can be held in your computer's physical memory at one time and garbage collection is triggering swap thrash
- your processing after you get the row is inefficient (not a JDBC issue)
- many other possibilities

You need to figure out (or tell us) where the time is going: executing the query? getting the results? processing the rows?
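One way to answer that last question is to time the three phases separately. The sketch below is illustrative only: the JDBC URL comes from the command line, and "my_table" and the single getString() call are placeholders for your own query and row processing.

```java
import java.sql.*;

// Rough sketch: bucket elapsed time into execute / fetch / process,
// so you can see which phase is actually slow.
public class QueryTimer {
    public static void main(String[] args) throws Exception {
        if (args.length < 1) {
            System.out.println("usage: java QueryTimer <jdbc-url>");
            return;
        }
        try (Connection conn = DriverManager.getConnection(args[0]);
             Statement stmt = conn.createStatement()) {

            long t0 = System.nanoTime();
            ResultSet rs = stmt.executeQuery("SELECT * FROM my_table");
            long executeNanos = System.nanoTime() - t0;   // query execution

            int rows = 0;
            long fetchNanos = 0, processNanos = 0;
            while (true) {
                long a = System.nanoTime();
                boolean hasNext = rs.next();              // fetching (network + driver)
                fetchNanos += System.nanoTime() - a;
                if (!hasNext) break;

                long b = System.nanoTime();
                rs.getString(1);                          // stand-in for your row processing
                processNanos += System.nanoTime() - b;
                rows++;
            }
            System.out.printf("execute: %d ms, fetch: %d ms, process: %d ms, rows: %d%n",
                    executeNanos / 1000000, fetchNanos / 1000000,
                    processNanos / 1000000, rows);
        }
    }
}
```

If most of the time lands in "fetch", look at fetch size and the network; if it lands in "process", the problem isn't JDBC at all.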
You may also want to check whether your application is spending a lot of time in garbage collection; use Google to find out how to do that if you think you need to. It usually involves setting JVM startup parameters, such as -verbose:gc
Also, you need to understand fetch size, if you don't already. Take a look at:
http://java.sun.com/j2se/1.5.0/docs/api/java/sql/Statement.htm (in particular, the setFetchSize() method).
Different drivers/databases have different default values if this is not set. For Oracle it's 10 rows, which is usually too small: the program spends all its time waiting on the network. For Postgres, I think it's still "every row in the resultset at once", which is sometimes too big. Some number between 100 and 5000 is often better than either default.
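For what it's worth, here is a minimal sketch of setting an explicit fetch size with the PostgreSQL driver. The URL is taken from the command line, "my_table" and the batch size of 500 are illustrative placeholders; note that (as far as I know) the Postgres driver only honors setFetchSize() when autocommit is off.

```java
import java.sql.*;

// Sketch: cursor-based fetching in batches instead of loading the
// whole resultset into memory at once.
public class FetchSizeDemo {
    public static void main(String[] args) throws Exception {
        if (args.length < 1) {
            System.out.println("usage: java FetchSizeDemo <jdbc-url>");
            return;
        }
        try (Connection conn = DriverManager.getConnection(args[0])) {
            // Without this, the Postgres driver ignores the fetch size
            // and materializes the entire resultset in memory.
            conn.setAutoCommit(false);
            try (Statement stmt = conn.createStatement()) {
                stmt.setFetchSize(500);  // fetch 500 rows per round trip
                try (ResultSet rs = stmt.executeQuery("SELECT * FROM my_table")) {
                    while (rs.next()) {
                        // process one row at a time; only about one batch
                        // of rows is held by the driver at any moment
                    }
                }
            }
            conn.commit();
        }
    }
}
```

Try a few values (100, 500, 1000, 5000) and measure; the sweet spot depends on row width, network latency, and available heap.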