
Simultaneously Inserting Data

 
sorabh jaiswal
Greenhorn
Posts: 10
Hi All,

I am using Java 1.4 and DB2 version 9.

I have four Java classes, each with its own main method. They are run from Unix and can be scheduled to execute either at the same time or at different times. Each class reads a different text file; the data in the four files does not overlap. All four classes insert their values into the same table (for example, an Employee table).

The problem is that if they are all started at the same time, how can we avoid table locking? The data these classes insert into the Employee table is large, more than 50,000 records, so how can we maintain performance?

1. I considered using the synchronized keyword on the method that inserts the data, but that would degrade performance.
2. Is there any other way, e.g. stored procedures or batch updates, that could help?

Any suggestions or solutions someone might have used?
 
Nitesh Kant
Bartender
Posts: 1638
Originally posted by sorabh jaiswal:

The problem is that if they are all started at the same time, how can we avoid table locking? The data these classes insert into the Employee table is large, more than 50,000 records, so how can we maintain performance?

Are you inserting new records or updating existing ones?
If you are only inserting new records, you generally do not have to do anything special: the database handles concurrent inserts for you.
If you are updating, can you use SELECT ... FOR UPDATE? That locks only the rows that are about to be updated.
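
For illustration, here is a minimal sketch of the SELECT ... FOR UPDATE pattern mentioned above. It assumes an EMPLOYEE table with EMP_ID and SALARY columns and an open JDBC Connection; the table and column names are illustrative, not from this thread:

```java
import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.SQLException;

public class SelectForUpdateSketch {
    // FOR UPDATE asks the database to lock just the matching rows,
    // not the whole table.
    static final String LOCK_SQL =
            "SELECT SALARY FROM EMPLOYEE WHERE EMP_ID = ? FOR UPDATE";

    static void giveRaise(Connection con, int empId, double amount)
            throws SQLException {
        con.setAutoCommit(false);
        PreparedStatement sel = con.prepareStatement(LOCK_SQL);
        PreparedStatement upd = con.prepareStatement(
                "UPDATE EMPLOYEE SET SALARY = ? WHERE EMP_ID = ?");
        try {
            sel.setInt(1, empId);
            ResultSet rs = sel.executeQuery(); // the row lock is taken here
            if (rs.next()) {
                upd.setDouble(1, rs.getDouble(1) + amount);
                upd.setInt(2, empId);
                upd.executeUpdate();
            }
            rs.close();
            con.commit();                      // commit releases the row lock
        } finally {
            sel.close();
            upd.close();
        }
    }
}
```

A second program touching a different EMP_ID is never blocked, because only the selected row is locked between the query and the commit.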

Originally posted by sorabh jaiswal:


1. I considered using the synchronized keyword on the method that inserts the data, but that would degrade performance.
2. Is there any other way, e.g. stored procedures or batch updates, that could help?

Any suggestions or solutions someone might have used?


Synchronization will not help, because the programs inserting the records run in different JVMs and so share no monitors.
Batch updates will definitely make the load perform better, but they are in no way related to table locking and concurrent inserts.
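
For reference, a minimal sketch of a batched insert with periodic commits, assuming an EMPLOYEE table with EMP_ID and EMP_NAME columns and an open JDBC Connection. The table and column names are illustrative, and the batch size of 1,000 is just a starting point to tune, not a recommendation from this thread:

```java
import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.SQLException;

public class BatchInsertSketch {
    static final String INSERT_SQL =
            "INSERT INTO EMPLOYEE (EMP_ID, EMP_NAME) VALUES (?, ?)";
    static final int BATCH_SIZE = 1000;

    // Pure helper: how many executeBatch() round trips a load of this size needs.
    static int batchCount(int totalRows) {
        return (totalRows + BATCH_SIZE - 1) / BATCH_SIZE;
    }

    static void insertAll(Connection con, String[][] rows) throws SQLException {
        con.setAutoCommit(false);
        PreparedStatement ps = con.prepareStatement(INSERT_SQL);
        try {
            for (int i = 0; i < rows.length; i++) {
                ps.setString(1, rows[i][0]);
                ps.setString(2, rows[i][1]);
                ps.addBatch();
                if ((i + 1) % BATCH_SIZE == 0) {
                    ps.executeBatch(); // one round trip per 1,000 rows
                    con.commit();      // releases row locks, limits escalation
                }
            }
            ps.executeBatch();         // flush the final partial batch
            con.commit();
        } finally {
            ps.close();
        }
    }
}
```

With roughly 50,000 rows per file this makes about 50 round trips instead of 50,000, and committing after each batch bounds the number of row locks any one program holds at a time.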
 
Yelamuri Chandu
Greenhorn
Posts: 15
The reply posted above by Nitesh Kant is spot on. Please follow it.
 
Gabriel Claramunt
Ranch Hand
Posts: 375
I would serialize the inserts (i.e., run one program after the other): after N rows are locked, DB2 escalates the lock and locks the entire table, which can lead to deadlock errors from the database. You could probably get a more specific answer in a DB2 forum.
 
Nitesh Kant
Bartender
Posts: 1638
Originally posted by Gabriel Claramunt:
I would serialize the inserts (i.e., run one program after the other): after N rows are locked, DB2 escalates the lock and locks the entire table, which can lead to deadlock errors from the database. You could probably get a more specific answer in a DB2 forum.


I have not worked on DB2, so I cannot really comment, but the above looks a bit heavy-handed. Escalating a lock to the table level is done by the DB2 server, not explicitly by the user. Deadlocks occur when two programs take nested locks in mutually reverse order, and I do not see such a condition here (I may be missing something).
 
Jeanne Boyarsky
author & internet detective
Marshal
Posts: 35279
If you don't need the ability to roll back the whole load, you could commit more often rather than waiting until the end. This releases the locks more frequently, which reduces contention. If you do need that rollback, I second the recommendation to run the programs serially; the contention would be extremely high, so you are better off avoiding it.
 
Raees Uzhunnan
Ranch Hand
Posts: 126
sorabh,

Yes, batch updates and stored procedures will help, but what will make the big difference is:

1. Partition the table on the back end if you can. This opens up more insert points, and the inserts will be faster.

2. Check whether a table-level locking scheme is set for the table. If so, change it to row-level locking; this reduces contention in a big way.

3. Since you said the data is different in each file, I don't see any possibility of the programs updating the same rows, and hence no need for an index to avoid table scans.
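
As a sketch of the row-level locking suggested in point 2: on DB2 for Linux/Unix/Windows the lock granularity is a table attribute set with a one-line DDL change (table name illustrative; row locking is already the default there, so this only matters if it was changed). On DB2 for z/OS the equivalent setting lives on the tablespace instead, via ALTER TABLESPACE ... LOCKSIZE ROW.

```sql
-- Ask DB2 LUW to take row locks rather than a table lock for this table.
ALTER TABLE EMPLOYEE LOCKSIZE ROW;
```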

Thanks
Raees
[ January 20, 2008: Message edited by: Raees Uzhunnan ]
 