We are trying to avoid processing similar requests that arrive at the same time, and instead return the result of the first successfully processed request to all of them. Currently we simply ignore duplicates: the first request makes an entry in the cache under a known key, and any later request that finds that key is dropped. This breaks down when several threads all complete their put before any of the gets run, or when the first request fails after claiming the key.
Take the example of two threads, thread1 and thread2, arriving at the same time, with thread1 reaching the put slightly ahead of thread2:
1. If both threads attempt to make an entry in Ehcache at the same time, how can we force Ehcache to accept only one entry and fail the rest, using putIfAbsent or some other means? How do we enable row-level locking with a dirty read? Is there an annotation for this explicit write/read locking (http://ehcache.org/documentation/2.8/apis/explicitlocking), or does putIfAbsent do it by default? I understand the default replace/putIfAbsent behavior is atomic, but I'd like to confirm this; and if it is atomic, why would we ever need explicit read/write locks?
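For reference, this is the contract I believe putIfAbsent gives us: it is atomic, returns null to exactly one caller per key, and returns the existing value to everyone else. A minimal sketch of that semantics using `ConcurrentHashMap` (which follows the same `ConcurrentMap.putIfAbsent` contract; the key and value strings are just placeholders):

```java
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ConcurrentMap;

public class FirstWriterWins {
    public static void main(String[] args) {
        ConcurrentMap<String, String> cache = new ConcurrentHashMap<>();

        // putIfAbsent is atomic: exactly one caller per key sees null back,
        // every other caller gets the already-stored value -- no explicit
        // read/write lock is needed for this check-then-act step.
        String prev1 = cache.putIfAbsent("request-key", "IN_PROGRESS");
        String prev2 = cache.putIfAbsent("request-key", "IN_PROGRESS");

        System.out.println(prev1); // null -> this caller "won" and should process
        System.out.println(prev2); // IN_PROGRESS -> duplicate, should be rejected
    }
}
```

If Ehcache's `Cache.putIfAbsent(Element)` honors the same contract, the losing threads can be failed simply by checking whether the returned previous value is non-null.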
2. Is it possible for the remaining threads to wait on the put, as if acquiring a lock? Or, say, could we open a transaction on the first thread that is processing the request, so that no value is published until that processing succeeds?
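One way the waiting behavior could be sketched without transactions is the "compute once, others wait" pattern: the winner installs a `FutureTask` under the key before processing, duplicates block on it, and a failure removes the entry so a later request can retry. This uses plain `java.util.concurrent` as a stand-in for the cache; `process` is a hypothetical placeholder for the real request handling:

```java
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ConcurrentMap;
import java.util.concurrent.FutureTask;

public class RequestDeduper {
    private final ConcurrentMap<String, FutureTask<String>> inFlight =
            new ConcurrentHashMap<>();

    // Hypothetical processing step; replace with the real request handling.
    private String process(String key) {
        return "result-for-" + key;
    }

    public String handle(String key) throws Exception {
        FutureTask<String> task = new FutureTask<>(() -> process(key));
        FutureTask<String> existing = inFlight.putIfAbsent(key, task);
        if (existing == null) {
            try {
                task.run();                  // only the winning thread executes
                return task.get();
            } catch (Exception e) {
                inFlight.remove(key, task);  // failed: let a later request retry
                throw e;
            }
        }
        return existing.get();               // duplicates block until the winner finishes
    }
}
```

Because the entry holds a future rather than the final value, nothing is "published" to duplicates until the first thread succeeds, which approximates the transactional behavior asked about.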