It looks to me as if record-level locking is unnecessary here, since what I am actually doing is file-level locking.
Logical record locking can help you avoid this problem - can you see how?
- I'm doing a rich client. I started with a thin client, but the requirements for a traditional client-server design and an easily enhanced client drove me toward a rich client. With a thin client the system is a bit easier to implement, but enhancements require server changes that would impact all clients.
- I lock arbitrary numeric resources; this is easy to test and separates the locking activity from the I/O activity. I use a Map of numbers to Locks, where a Lock contains a cookie and a Data instance. In the thin client you could just use a Long containing the cookie.
- I synchronize all access to the lock Map, and the locking code uses wait/notifyAll to block and wake clients.
- I synchronize all I/O at the lowest point possible on the RandomAccessFile to provide thread safe data access.
- like you, I have a singleton I/O manager between the Data instances and the file I/O routines. The file-level locking is done in the readRaw and writeRaw methods of this class.
- I don't use a queue of database requests, although I have used that technique in real world applications using Oracle.
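To make the lock-Map idea above concrete, here is a minimal sketch of a logical lock manager. It uses a plain Map of record numbers to cookies rather than the full Lock-with-Data-instance structure described above, and the nanoTime cookie scheme is my own illustrative assumption; only the synchronized-Map-plus-wait/notifyAll shape comes from the post.

```java
import java.util.HashMap;
import java.util.Map;

// Sketch of the logical lock map: record numbers map to the cookie
// of the client that currently holds them. lock() blocks in wait()
// until the record is free; unlock() verifies the cookie and wakes
// waiters with notifyAll().
public class LockManager {
    private final Map<Long, Long> locks = new HashMap<>();

    public synchronized long lock(long recNo) throws InterruptedException {
        while (locks.containsKey(recNo)) {
            wait();                       // another client holds this record
        }
        long cookie = System.nanoTime();  // placeholder cookie scheme (assumption)
        locks.put(recNo, cookie);
        return cookie;
    }

    public synchronized void unlock(long recNo, long cookie) {
        Long owner = locks.get(recNo);
        if (owner == null || owner != cookie) {
            throw new IllegalStateException("caller does not hold the lock");
        }
        locks.remove(recNo);
        notifyAll();                      // wake threads waiting in lock()
    }
}
```

Note the `while` loop around `wait()`: after `notifyAll()` every waiter re-checks the map, so only one of them acquires the freed record and the rest go back to waiting.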
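And here is one way the singleton I/O manager with synchronized readRaw/writeRaw could look. The record length, file name handling, and fixed-offset layout are illustrative assumptions; the point from the post is only that every seek+read or seek+write pair on the shared RandomAccessFile happens atomically.

```java
import java.io.IOException;
import java.io.RandomAccessFile;

// Sketch of a singleton I/O manager. All access goes through one
// RandomAccessFile; each seek+read or seek+write pair is synchronized
// on the file so interleaved threads cannot corrupt the file pointer.
public class FileAccess {
    private static final int RECORD_LENGTH = 64;   // assumed fixed-size records
    private static FileAccess instance;
    private final RandomAccessFile file;

    private FileAccess(String path) throws IOException {
        file = new RandomAccessFile(path, "rw");
    }

    public static synchronized FileAccess getInstance(String path) throws IOException {
        if (instance == null) {
            instance = new FileAccess(path);
        }
        return instance;
    }

    // Lowest-level read: seek and read happen as one atomic step.
    public byte[] readRaw(long recNo) throws IOException {
        byte[] buffer = new byte[RECORD_LENGTH];
        synchronized (file) {
            file.seek(recNo * RECORD_LENGTH);
            file.readFully(buffer);
        }
        return buffer;
    }

    // Lowest-level write: seek and write happen as one atomic step.
    public void writeRaw(long recNo, byte[] data) throws IOException {
        synchronized (file) {
            file.seek(recNo * RECORD_LENGTH);
            file.write(data, 0, RECORD_LENGTH);
        }
    }
}
```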
1. I am not going to argue with you about whether a thin client or a rich client is better, but I think that when an enhancement is needed, a server-side change will always impact the client side. IMO, it is usually the rich client that has to change more.
2. Are you suggesting that you cache all the locked records? So when a client requests a lock on a record, it first checks the cache to see whether that record is already locked, and if the record is locked by another thread, the requesting thread goes into a waiting state. Please correct me if I am wrong.
3. May I know how you schedule your file I/O operations, instead of using the Task Pool pattern? Of course, it's fine if that is inconvenient to explain.
Lastly, are there any problems with my locking and file I/O design? I am going to start my coding phase next week.