We need to process the records on a first-come, first-served basis, or at the very least each record should be processed by the end of the day it is received. So random seed generation may not help us.
Not sure why this precludes using a seed (this is, after all, how some JMS implementations work); maybe I'm missing something. Can you explain?
1. Use Hibernate with a distributed cache. Maintain a list of accessed records in the cache. Whenever the EJB reads a record, Hibernate checks the distributed cache and, if it finds a match, returns null/an error.
A distributed cache makes sense, insofar as you are forcing all data updates through the cache so your application can control them. It doesn't work if the records are updated by something else. Caches can spill over to disk, so you presumably don't have to worry too much about its size? Maybe clear the cache daily?
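For option (1), here is a minimal sketch of the record-claiming idea, assuming a JSR-107 (javax.cache) provider backed by a distributed cache; the cache name "processedRecords", the helper class, and the use of a Long record id are all illustrative assumptions, not anything from the actual system:

import javax.cache.Cache;
import javax.cache.CacheManager;
import javax.cache.Caching;
import javax.cache.configuration.MutableConfiguration;

// Hypothetical record-claiming helper, sketched against the JSR-107 API.
public class RecordClaimService {

    private final Cache<Long, Boolean> processedRecords;

    public RecordClaimService() {
        CacheManager manager = Caching.getCachingProvider().getCacheManager();
        // "processedRecords" is an assumed cache name; in practice this would be
        // the distributed cache shared by all EJB instances.
        processedRecords = manager.createCache(
                "processedRecords", new MutableConfiguration<Long, Boolean>());
    }

    /**
     * Returns true if this instance successfully claimed the record,
     * false if another instance has already seen it.
     */
    public boolean claim(long recordId) {
        // putIfAbsent is atomic, so for a distributed provider only one
        // caller can claim a given record id.
        return processedRecords.putIfAbsent(recordId, Boolean.TRUE);
    }
}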
2. Have a sequence ID on the first table.
Use a separate, simple table that holds the sequence number of the latest record accessed, and take a pessimistic lock on it. The EJB goes to this table to get the sequence number of the next record to fetch. But having a second table with a pessimistic lock on it may be a performance hit when the incoming orders are 250,000/day.
That works. The pessimistic lock might be an issue, but not that big an issue I'd guess. The lock would be there and gone in a very brief time. Might be worth prototyping and gathering some performance metrics?
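For option (2), a rough sketch of the pessimistically locked sequence table using JPA's LockModeType.PESSIMISTIC_WRITE (issued as SELECT ... FOR UPDATE on most databases); the single-row ProcessingCursor entity and its primary key of 1 are assumptions made purely for illustration:

import javax.persistence.Entity;
import javax.persistence.EntityManager;
import javax.persistence.Id;
import javax.persistence.LockModeType;

// Hypothetical single-row table holding the last sequence number handed out.
@Entity
class ProcessingCursor {
    @Id
    Long id;           // always 1; there is only one row
    long lastSequence; // sequence number of the last record fetched
}

class NextRecordFetcher {

    /**
     * Claims the next sequence number inside the caller's transaction.
     * The pessimistic write lock serialises concurrent EJB instances,
     * so each sequence number is handed out exactly once.
     */
    static long claimNextSequence(EntityManager em) {
        // SELECT ... FOR UPDATE on the single cursor row (assumed key 1)
        ProcessingCursor cursor =
                em.find(ProcessingCursor.class, 1L, LockModeType.PESSIMISTIC_WRITE);
        cursor.lastSequence++;
        return cursor.lastSequence; // lock is released when the transaction commits
    }
}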
3. The sequence ID could be placed in a distributed cache if we have one. But we can't configure a new distributed cache just to hold this single key if one is not already in place.
Not sure I see how this works. Won't the cache be updated so frequently that it isn't really acting as a cache?
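If an existing distributed cache does support atomic per-entry operations, option (3) could look roughly like the sketch below, written against the JSR-107 EntryProcessor API; the counter key name is an assumption, and for a real clustered provider the processor would normally also need to be Serializable:

import javax.cache.Cache;
import javax.cache.processor.EntryProcessor;
import javax.cache.processor.MutableEntry;

// Hypothetical "next sequence" counter kept in an existing distributed cache.
class CacheSequence {

    static final String COUNTER_KEY = "order-sequence"; // assumed key name

    static long next(Cache<String, Long> cache) {
        // invoke() runs the processor atomically on the entry, so concurrent
        // EJB instances cannot obtain the same sequence number.
        return cache.invoke(COUNTER_KEY, new EntryProcessor<String, Long, Long>() {
            @Override
            public Long process(MutableEntry<String, Long> entry, Object... args) {
                long next = (entry.exists() ? entry.getValue() : 0L) + 1;
                entry.setValue(next);
                return next;
            }
        });
    }
}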