
commit performance

 
Peter Chase
Ranch Hand
Posts: 1970
I have an application that adds a lot of rows to certain tables as it runs. So far I have coded it in the simplest way, which is to commit after adding each row. This means it's doing tens of these commits per second.

If I were able to re-jig the application so that it added several rows before committing, how likely is that to improve (or degrade) performance? It's not easy to make this change (*), so I would like some idea of how worthwhile it might be.

If it makes a difference, I'm using JavaDB.

(*) I don't just have an array or collection of items that I'm looping through. I am adding items in response to call-backs from other processes, and there is lots of infrastructure in the way between the call-back handlers and the code that hits the database.
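
For concreteness, here is a minimal sketch of the pattern I'm describing, assuming plain JDBC against an embedded JavaDB database (the connection URL, table, and column names here are made up):

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.PreparedStatement;
    import java.sql.SQLException;

    public class PerRowCommit {
        private final Connection conn;

        public PerRowCommit() throws SQLException {
            // Embedded JavaDB/Derby; the JDBC 4 driver auto-loads from the classpath.
            conn = DriverManager.getConnection("jdbc:derby:sampleDb;create=true");
            conn.setAutoCommit(false);
        }

        // Invoked once per callback: insert one row, then commit immediately.
        public synchronized void onEvent(String payload) throws SQLException {
            try (PreparedStatement ps =
                    conn.prepareStatement("INSERT INTO events(payload) VALUES (?)")) {
                ps.setString(1, payload);
                ps.executeUpdate();
            }
            conn.commit(); // one commit per row -> tens of commits per second
        }
    }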
 
Scott Johnson
Ranch Hand
Posts: 518
I've not used JavaDB yet, but in general, committing after every 100 or 1,000 rows (or more) will probably give a major improvement in the number of rows you can insert per second. Building and tearing down a transaction carries quite a bit of overhead.

I've seen more than a 100% improvement in performance from this technique. You have to balance the performance gain against the resources (e.g., locks) held for the duration of the transaction.
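
Here's a minimal sketch of that approach, again assuming plain JDBC and a made-up events table; rows accumulate via addBatch() and are committed once per BATCH_SIZE inserts:

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.PreparedStatement;
    import java.sql.SQLException;

    public class BatchedCommit {
        private static final int BATCH_SIZE = 100; // tune: 100 to 1,000+ rows

        private final Connection conn;
        private final PreparedStatement insert;
        private int pending = 0;

        public BatchedCommit() throws SQLException {
            conn = DriverManager.getConnection("jdbc:derby:sampleDb;create=true");
            conn.setAutoCommit(false); // one transaction spans many inserts
            insert = conn.prepareStatement("INSERT INTO events(payload) VALUES (?)");
        }

        public synchronized void add(String payload) throws SQLException {
            insert.setString(1, payload);
            insert.addBatch();
            if (++pending >= BATCH_SIZE) {
                flush();
            }
        }

        public synchronized void flush() throws SQLException {
            if (pending == 0) return;
            try {
                insert.executeBatch(); // send all buffered rows at once
                conn.commit();         // single commit for the whole batch
            } catch (SQLException e) {
                conn.rollback();       // all rows in this batch are discarded together
                throw e;
            } finally {
                pending = 0;
            }
        }
    }

Turning off auto-commit is what lets the transaction span many inserts; sending the rows through one executeBatch() also saves per-statement round trips, which is a separate win from the reduced commit count.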

If you are going to allow the transaction to span several callbacks, things may get a little complex if, in the event of a failure, you need to notify the callers that certain rows from earlier callbacks failed to be inserted, and not just the last/current one.
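
One possible shape for that, assuming each caller can hand over a listener (the ResultListener interface here is hypothetical): remember which listeners have rows in the open transaction, and on a failed commit notify all of them, since the rollback discards every uncommitted row.

    import java.sql.Connection;
    import java.sql.PreparedStatement;
    import java.sql.SQLException;
    import java.util.ArrayList;
    import java.util.List;

    public class TrackedBatch {
        // Hypothetical callback so each caller can hear about its row's fate.
        public interface ResultListener {
            void onCommitted();
            void onFailed(SQLException cause);
        }

        private static final int BATCH_SIZE = 100;

        private final Connection conn;
        private final PreparedStatement insert;
        private final List<ResultListener> inFlight = new ArrayList<>();

        public TrackedBatch(Connection conn) throws SQLException {
            this.conn = conn;
            conn.setAutoCommit(false);
            insert = conn.prepareStatement("INSERT INTO events(payload) VALUES (?)");
        }

        public synchronized void add(String payload, ResultListener listener)
                throws SQLException {
            insert.setString(1, payload);
            insert.addBatch();
            inFlight.add(listener);
            if (inFlight.size() >= BATCH_SIZE) flush();
        }

        public synchronized void flush() {
            if (inFlight.isEmpty()) return;
            try {
                insert.executeBatch();
                conn.commit();
                for (ResultListener l : inFlight) l.onCommitted();
            } catch (SQLException e) {
                try { conn.rollback(); } catch (SQLException ignored) { }
                // Rollback discards every uncommitted row, so every caller
                // with a row in this transaction gets notified, not just the last.
                for (ResultListener l : inFlight) l.onFailed(e);
            } finally {
                inFlight.clear();
            }
        }
    }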
 