Suppose user A selects records 1, 2 and 3, then updates records 2 and 3.
Before user A's updates are applied, user B also selects records 1, 2 and 3; later, after some criteria are met, user B updates records 1 and 2. User B's update is based on stale data, so the change user A made to record 2 gets overwritten.
How do I avoid this situation? Is there any way to control simultaneous updates without degrading performance?
Hope this helps.
[ April 24, 2007: Message edited by: Shailesh Chandra ]
Originally posted by Gaurav Chikara:
I am not sure how a counter will help in this scenario.
The best approach is to define row-level locks on the table, if your database supports them.
Or set the transaction's isolation level.
Any lock on the database may have consequences, especially in a big application. Still, I would go with a last-update column.
An update counter works the same way as a timestamp; the only difference is that it holds a count of update operations rather than a timestamp.
Both approaches have their own pros and cons.
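To make the version-counter idea concrete, here is a minimal sketch using Python's built-in `sqlite3` module. The table and column names (`record`, `data`, `version`) are made up for illustration; the same pattern works with a last-update timestamp column instead of a counter.

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE record (id INTEGER PRIMARY KEY, data TEXT, version INTEGER)")
con.execute("INSERT INTO record VALUES (2, 'original', 1)")

def update_record(con, rec_id, expected_version, new_data):
    # The WHERE clause re-checks the version we read when we first
    # selected the row. If another user has updated the row in the
    # meantime, no row matches and the update is rejected instead of
    # silently overwriting their change.
    cur = con.execute(
        "UPDATE record SET data = ?, version = version + 1 "
        "WHERE id = ? AND version = ?",
        (new_data, rec_id, expected_version))
    return cur.rowcount == 1

# Both users read the row at version 1. User A's update succeeds and
# bumps the version; user B's later update is then detected as stale.
print(update_record(con, 2, 1, "user A's change"))  # True
print(update_record(con, 2, 1, "user B's change"))  # False: conflict detected
```

Note that the version check and the update happen in a single statement, so detecting the conflict costs no extra round trip to the database.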
Any lock on the database may have consequences, especially in a big application.
Still, defining isolation levels on transactions, or locks on rows, is better than making two hits on the database to maintain data consistency.
If we have 1000 rows, we would be hitting the table 2000 times to maintain data integrity, and I don't believe we want our application to be slow.
Besides this, ROW-level locks apply only to the row, so they are not escalated to table level and do not impact performance.
Also, we have a clustered database; I guess a lock would have to be replicated in this case as well.
So it totally depends on the actual environment.
[ April 25, 2007: Message edited by: Shailesh Chandra ]
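For comparison, here is a sketch of the pessimistic (lock-based) alternative discussed in this thread, again using Python's `sqlite3`. SQLite does not support row-level locks, so `BEGIN IMMEDIATE` takes a database-level write lock; in a database that does support them (e.g. Oracle, or MySQL/InnoDB via `SELECT ... FOR UPDATE`) the same read-then-write pattern would lock only the selected rows. Table and column names are assumptions for illustration.

```python
import sqlite3

# isolation_level=None puts sqlite3 in autocommit mode so we can manage
# the transaction boundaries explicitly.
con = sqlite3.connect(":memory:", isolation_level=None)
con.execute("CREATE TABLE record (id INTEGER PRIMARY KEY, data TEXT)")
con.execute("INSERT INTO record VALUES (2, 'original')")

def locked_update(con, rec_id, new_data):
    con.execute("BEGIN IMMEDIATE")  # acquire the write lock before reading
    try:
        row = con.execute(
            "SELECT data FROM record WHERE id = ?", (rec_id,)).fetchone()
        # ... apply business rules to `row` here, knowing no other writer
        # can change it until we commit ...
        con.execute("UPDATE record SET data = ? WHERE id = ?",
                    (new_data, rec_id))
        con.execute("COMMIT")  # lock released; other writers may proceed
    except Exception:
        con.execute("ROLLBACK")
        raise

locked_update(con, 2, "updated under lock")
print(con.execute("SELECT data FROM record WHERE id = 2").fetchone()[0])
```

The trade-off matches the points made above: the lock guarantees no lost update with a single read-modify-write pass, but other writers block while it is held, and in a clustered setup the lock state itself has to be coordinated across nodes.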