If you mean "dead clients", i.e. when a client crashes while a record is locked, then it is better to handle such a situation. In my design I handled it and got 80/80 on the locking mechanism. [ June 04, 2004: Message edited by: Denis Spirin ]
I think that to max out in the locking area you MUST have a mechanism that prevents a record from staying locked if its client crashes. I did this by using a thin client design, with the unlock performed on the server in a finally block.
If you implement a thin client (so the client calls a "book" method on the server, which does the whole "lock - verify - update - unlock" process), then you should never need to deal with dead clients.
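To make that concrete, here is a minimal sketch of the thin-client idea. The class and method names (BookingService, book, ownerOf) and the record layout are my own assumptions, not part of any particular assignment - the point is only that the client calls one method, and the unlock lives in a server-side finally block so a client dying mid-call can never leave a record locked.

```java
import java.util.HashMap;
import java.util.Map;

public class BookingService {

    private final Map<Integer, Object> locks = new HashMap<>();
    private final Map<Integer, String> records = new HashMap<>(); // recNo -> owner id

    public synchronized void lock(int recNo) throws InterruptedException {
        while (locks.containsKey(recNo)) {
            wait();                  // block until the record is free
        }
        locks.put(recNo, Thread.currentThread());
    }

    public synchronized void unlock(int recNo) {
        locks.remove(recNo);
        notifyAll();                 // wake any callers waiting on this record
    }

    // The client calls only this method; the whole lock - verify - update -
    // unlock cycle happens on the server, with unlock guaranteed by finally.
    public void book(int recNo, String customerId) throws InterruptedException {
        lock(recNo);
        try {
            String owner = records.get(recNo);
            if (owner != null) {
                throw new IllegalStateException("Record " + recNo + " already booked");
            }
            records.put(recNo, customerId);
        } finally {
            unlock(recNo);           // always released, even if verification fails
        }
    }

    public String ownerOf(int recNo) {
        return records.get(recNo);
    }
}
```

In a real assignment this class would sit behind an RMI interface, but the locking logic itself does not depend on RMI.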
However if, like me, you think the client should be handling the booking process, then there is the potential for a client to die before disconnecting. (For more on the whole fat client / thin client issue, see the long thread "Should lock methods be callable by the client").
The easiest ways to handle this involve setting up a connection factory on the server, so that each client gets its own instance of the remote class that provides all the functionality. You then have two ways of handling client death:
Implement the Unreferenced interface, so that the instance of your remote class will eventually be notified if the client dies; you can then perform any clean-up operations.
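A sketch of that option, assuming the connection-factory design where each client owns one instance of the remote class. A real version would also implement a Remote interface and extend UnicastRemoteObject; the shared locks map and the lock method here are hypothetical stand-ins, stripped down so the clean-up logic is visible.

```java
import java.rmi.server.Unreferenced;
import java.util.Map;

public class ClientConnection implements Unreferenced {

    // record number -> connection holding the lock; shared by all connections
    private final Map<Integer, ClientConnection> locks;

    public ClientConnection(Map<Integer, ClientConnection> locks) {
        this.locks = locks;
    }

    public void lock(int recNo) {
        // a real implementation would block while the record is held elsewhere
        locks.put(recNo, this);
    }

    // Called by the RMI runtime once no live client holds a reference to this
    // remote object - i.e. the "dead client" case. Release everything we hold.
    @Override
    public void unreferenced() {
        locks.values().removeIf(owner -> owner == this);
    }
}
```

Note that the RMI distributed garbage collector only calls unreferenced() after its lease on the stub expires, so the clean-up is eventual, not immediate.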
Use the instance of the remote object as the key in a WeakHashMap (with the object in the map being the lock), so that when the client dies, the lock will automagically be removed from the map (you may want to have an extra thread monitoring the status of the map).
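The WeakHashMap option can be demonstrated without any RMI at all. In this sketch a plain Object stands in for the per-client remote object; because WeakHashMap holds only weak references to its keys, the entry (and so the lock) vanishes once the last strong reference to the key is gone and the garbage collector has run. That delay is exactly why the post suggests an extra monitoring thread rather than relying on instant removal.

```java
import java.util.Map;
import java.util.WeakHashMap;

public class WeakLockDemo {

    static class Lock {
        final int recNo;
        Lock(int recNo) { this.recNo = recNo; }
    }

    public static void main(String[] args) throws InterruptedException {
        Map<Object, Lock> locks = new WeakHashMap<>();

        Object clientConnection = new Object(); // stands in for the remote object
        locks.put(clientConnection, new Lock(42));
        System.out.println("locks held: " + locks.size());

        clientConnection = null; // "client dies": last strong reference dropped
        // The entry only disappears after a GC cycle, so nudge the collector.
        for (int i = 0; i < 50 && !locks.isEmpty(); i++) {
            System.gc();
            Thread.sleep(10);
        }
        System.out.println("locks held after GC: " + locks.size());
    }
}
```

One caveat: the RMI runtime itself keeps a reference to an exported remote object, so in a real server you would need to unexport a connection before its map entry can be collected.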
I have deliberately not gone into detail on any of these - have a think about them, and see if you can work out how to move forward with them.