hi mates,
I have a really stressed application, a kind of system where customers perform actions that trigger inserts/updates across about 6-8 tables. Now traffic has increased: the action is performed about 30 times per minute, and deadlocks have started to appear :/
I think that with my hardware, 8 CPUs at 2.4 GHz each and 16 GB of RAM, it should be possible to serve a lot more.
So I believe I can improve my code. Right now I use one datasource and one persistence unit for all reads and writes, created without explicitly specifying any isolation level. The deadlock occurs on a query where I update a single column of one table with the amount the customer used in the action. So it's basically:
update table_a set total_amount = total_amount + :amount_customer_used
where id = (select id from another_table where another_id = :some_id)

(This is a one-to-many relationship; perhaps the issue is simply that I acquire the ID with that subquery?)
This update is not that important, so I don't mind reading some uncommitted data or seeing small differences. What can I do about this? Would it be a good solution to create one more persistence unit with a READ_UNCOMMITTED isolation attribute?
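What I have in mind is roughly a second unit in persistence.xml like the sketch below; the unit name and datasource are made up, and hibernate.connection.isolation is a Hibernate-specific property (only honored when the provider manages the connection; the value 1 is java.sql.Connection.TRANSACTION_READ_UNCOMMITTED):

```xml
<!-- sketch only: names are invented, property is Hibernate-specific -->
<persistence-unit name="dirtyReadPU" transaction-type="JTA">
  <jta-data-source>jdbc/myDS</jta-data-source>
  <properties>
    <!-- 1 = java.sql.Connection.TRANSACTION_READ_UNCOMMITTED -->
    <property name="hibernate.connection.isolation" value="1"/>
  </properties>
</persistence-unit>
```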
With my knowledge, the only solution I see is to create a message-driven bean and a queue where those actions are performed one after another. But that could become a problem if traffic grows even higher...
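To show what I mean by serializing the actions, here is a minimal sketch using a plain single-threaded executor in place of a real MDB/JMS queue (class and method names are made up, and the Map stands in for the database row):

```java
import java.util.ArrayList;
import java.util.List;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

public class SerializedUpdates {
    // Single-threaded executor: updates run strictly one after another,
    // so no two "transactions" can ever hold locks in opposite order.
    // Daemon thread so the JVM can exit without an explicit shutdown.
    static final ExecutorService updateQueue = Executors.newSingleThreadExecutor(r -> {
        Thread t = new Thread(r);
        t.setDaemon(true);
        return t;
    });

    // Stand-in for the database row: account id -> total_amount.
    static final Map<Long, Long> totals = new ConcurrentHashMap<>();

    // Enqueue "total_amount = total_amount + amount" for one account.
    static Future<?> addAmount(long accountId, long amount) {
        return updateQueue.submit(() -> totals.merge(accountId, amount, Long::sum));
    }

    public static void main(String[] args) throws Exception {
        // 30 concurrent customer actions against the same account.
        List<Future<?>> pending = new ArrayList<>();
        for (int i = 0; i < 30; i++) {
            pending.add(addAmount(1L, 10L));
        }
        for (Future<?> f : pending) {
            f.get(); // wait until the queue has drained
        }
        System.out.println(totals.get(1L)); // prints 300
    }
}
```

The trade-off is exactly the one I worry about: the queue caps throughput at one update at a time, so it only helps as long as a single consumer can keep up with the traffic.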
Could you help me with some advice?
Regards,
Michal