Hi,
swapnil kataria wrote:My DB is postgresql,
I won't be able to help you in detail with postgresql, I don't know it. I can only offer general advice.
and in a STORE_PROC , i have to do updates in a loop for a table
No. This describes how you are going to do it, not what you need to do or where the data come from.
You need to show us how you obtain the data you need for the updates. Where do they come from? Are the data hard-coded in the stored proc? (Probably not, but even that could be done differently.) Do they come from another query? Are they somehow processed after they've been read from that query? Etc. etc.
[quote]Suppose that loop does 100 updates one by one. I want to do it in a single batch, as in Java (addBatch, executeBatch).[/quote]
The trick in speeding things up with JDBC batching lies in two factors (at least for Oracle):
1) JDBC batching reduces the number of network roundtrips. This obviously speeds things up in all databases. However, this optimization equally obviously cannot be applied to stored procs, as they already run on the server and don't send data over the network at all.
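To make the roundtrip point concrete, here is a minimal sketch of client-side batching. I'm using Python's sqlite3 so the example is self-contained; `executemany` plays the same role as JDBC's addBatch/executeBatch, and the `items` table is made up for illustration.

```python
# Illustrative sketch: send all parameter sets in one batched call
# instead of 100 separate statement executions (each of which would
# cost a network roundtrip against a real client/server database).
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE items (id INTEGER PRIMARY KEY, price REAL)")
conn.executemany("INSERT INTO items (id, price) VALUES (?, ?)",
                 [(i, 10.0) for i in range(1, 101)])

# One batched execution for all 100 parameter sets:
updates = [(12.5, i) for i in range(1, 101)]
conn.executemany("UPDATE items SET price = ? WHERE id = ?", updates)
conn.commit()
```

Note that this optimization only makes sense in the client; code already running inside the server has no roundtrips to save.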
2) Specifically for Oracle, JDBC batching (with the Oracle 11 version of the driver) gets an additional boost, in which the INSERT or UPDATE is executed once for all parameter sets in the batch. The same functionality can be used in PL/SQL with the BULK keyword I've already mentioned several times (and promise I won't mention again). Something like this might or might not be available in PostgreSQL. A quick peek into PostgreSQL's procedural language reference (they have one, I hope) should help you determine whether something similar is supported there.
BUT, if your data are not hardcoded in your stored proc, chances are the whole update can be performed by a single UPDATE statement. That will always be faster than any procedural code. Always. Stop thinking about reimplementing JDBC batching in PL/SQL and determine, or let us help you determine, whether you can use the really optimal way.
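Here is a sketch of what replacing the loop with one set-based statement looks like, again using sqlite3 so it runs standalone. The table and column names (`items`, `new_prices`) are invented for the example; the actual schema would come from the original poster.

```python
# Procedural version (what a stored-proc loop does, row by row):
#   for each row in new_prices:
#       UPDATE items SET price = ? WHERE id = ?
# Set-based version: one UPDATE, no loop, done entirely by the engine.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE items (id INTEGER PRIMARY KEY, price REAL);
    CREATE TABLE new_prices (item_id INTEGER PRIMARY KEY, price REAL);
    INSERT INTO items VALUES (1, 10.0), (2, 20.0), (3, 30.0);
    INSERT INTO new_prices VALUES (1, 11.0), (3, 33.0);
""")

conn.execute("""
    UPDATE items
    SET price = (SELECT price FROM new_prices WHERE item_id = items.id)
    WHERE id IN (SELECT item_id FROM new_prices)
""")
conn.commit()
```

Only rows with a matching entry in `new_prices` are touched; everything happens in one statement, which is what lets the database optimize it.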
I have read somewhere I can put autocommit off before loop and commit after loop.
Autocommit is a terrible disaster. I'd be surprised if PostgreSQL even had such a feature in PL/SQL (Oracle certainly does not have it).
You should always have it off. Unless you're using the database as some kind of an inefficient, transaction-less file system and don't care about how transactions occur, that is.
Moreover, frequent commits make some databases perform better, but in others they make things worse. In PostgreSQL they probably make things worse (as they do in Oracle and DB2), since these databases use multiversioning for concurrency control. In SQL Server, MySQL, etc. they may help, as those databases rely on locks for concurrency, and the locks are then held for shorter periods. Regardless of this, you should always make the transaction exactly as long as it logically needs to be - no longer, no shorter.
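On the client side, the pattern looks like this. A sketch using sqlite3, where (as with autocommit off in JDBC) nothing is committed until you say so; the `items` table is again invented for illustration.

```python
# Sketch: one transaction around the whole unit of work, with a single
# commit at the end - not a commit after every statement.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE items (id INTEGER PRIMARY KEY, price REAL)")
conn.executemany("INSERT INTO items VALUES (?, ?)",
                 [(i, 10.0) for i in range(1, 11)])
conn.commit()

try:
    # All ten updates form one logical unit of work, so they commit
    # (or roll back) together.
    for item_id in range(1, 11):
        conn.execute("UPDATE items SET price = price + 1.0 WHERE id = ?",
                     (item_id,))
    conn.commit()          # one commit for the whole unit of work
except Exception:
    conn.rollback()        # undo everything if any update fails
    raise
```

The point is not "commit rarely" but "commit at logical-unit boundaries": if any update fails, the rollback leaves the table consistent.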
Cheers,
Martin