[Philippe]: OK, I missed that you synchronized on this in lock() and unlock(). But I think that, given your design, you don't need it. Synchronizing on the record to be locked, calling wait() on it, and being notified by a simple notify() in unlock() (instead of a notifyAll()) should be OK IMO. Just in case it's still unclear: Vlad syncs on the Data instance throughout the operation; I sync first on the cache to get the Record, then I drop that sync and sync on the Record for the remainder of the operation. I agree Vlad wouldn't need the Data sync if he held a Record sync at the same time, but since he isn't syncing on a Record at all, he does need to sync on something, since Data is shared amongst multiple threads.
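Philippe's suggestion above could be sketched roughly as follows. This is only an illustrative sketch, not anyone's actual assignment code; the class name LockableRecord and the boolean flag are assumptions made for the example. Each Record acts as its own monitor: lock() waits on it, and unlock() uses a single notify(), which is safe here because only one waiter can win the lock anyway.

```java
// Hypothetical sketch of per-record locking via wait()/notify().
// LockableRecord and the "locked" flag are illustrative names, not
// taken from the actual solution under discussion.
class LockableRecord {
    private boolean locked = false;

    synchronized void lock() throws InterruptedException {
        while (locked) {
            wait();      // block until the current holder calls unlock()
        }
        locked = true;
    }

    synchronized void unlock() {
        locked = false;
        notify();        // a single notify() suffices: only one thread can win the lock
    }

    synchronized boolean isLocked() {
        return locked;
    }
}
```

The while loop (rather than an if) guards against spurious wakeups, which is why a plain notify() is enough here.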
[Vlad]: Since I have a Data singleton and file connection, allowing concurrent reads/writes on different records is dangerous. I know, nio allows atomic reads/writes (there was a real fight with Max about the issue), but Sun (in my opinion) failed to specify in which cases a read/write is atomic and in which it is not. As you know, I agree with you that atomicity is not guaranteed; however, in my solution it's not necessary. Different threads can read or write to different parts of the FileChannel concurrently; I have no problem with that. I never use the implicit-position methods of FileChannel; I always supply the position explicitly in a read or write, so different threads can have different explicit positions concurrently; no problem. The only problem would be if two threads were reading and writing (or both writing) to the same record at the same time. That, I prevent by syncing on the Record. Actually, thanks to caching there's never any file reading after startup either, so it's just concurrent writes to the same position that are a potential problem.
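For anyone unfamiliar with the distinction being drawn here, a minimal sketch of the explicit-position style might look like this. The class name, RECORD_LENGTH, and the record layout are assumptions for illustration; the point is just that every call passes its own offset, so no thread ever touches the channel's shared implicit position.

```java
import java.io.IOException;
import java.nio.ByteBuffer;
import java.nio.channels.FileChannel;

// Illustrative sketch: positional FileChannel I/O. RECORD_LENGTH and the
// fixed-length record layout are assumptions made for this example.
public class PositionalIo {
    static final int RECORD_LENGTH = 64;

    static void writeRecord(FileChannel channel, int recNo, byte[] data)
            throws IOException {
        ByteBuffer buf = ByteBuffer.wrap(data);
        long pos = (long) recNo * RECORD_LENGTH;
        while (buf.hasRemaining()) {
            // Explicit position: does not read or move the channel's
            // implicit position, so concurrent calls don't interfere.
            channel.write(buf, pos + buf.position());
        }
    }

    static byte[] readRecord(FileChannel channel, int recNo)
            throws IOException {
        ByteBuffer buf = ByteBuffer.allocate(RECORD_LENGTH);
        long pos = (long) recNo * RECORD_LENGTH;
        while (buf.hasRemaining()) {
            if (channel.read(buf, pos + buf.position()) < 0) {
                break;  // EOF: stop, remaining bytes stay zero
            }
        }
        return buf.array();
    }
}
```

With this style, two threads working on different record numbers never overlap in the file, which is why only same-record access needs to be serialized.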
Alternately, it's possible to ensure atomicity of FileChannel's writes and reads by wrapping them in sync blocks on, say, the FileChannel. I posted code in the past showing how that works, though it's not part of my current solution since the additional sync is unnecessary. And it's one of those "evil" nested syncs, though it's done in such a way that it can't possibly create deadlock: the inner sync block doesn't hold a reference to anything other than the FileChannel (and no one else syncs on that), so it can't deadlock by requesting sync on something that's already synced by another thread.
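The "nested but deadlock-free" shape described above could be sketched like this (again only a sketch; the method and parameter names are invented for the example, not the previously posted code). The outer lock is the per-record monitor; the inner lock is the FileChannel itself, and the inner block calls nothing but the channel, so no lock cycle is possible.

```java
import java.io.IOException;
import java.nio.ByteBuffer;
import java.nio.channels.FileChannel;

// Illustrative sketch of the nested-sync variant. Names are hypothetical.
public class GuardedWriter {
    static void writeLocked(Object record, FileChannel channel,
                            ByteBuffer buf, long pos) throws IOException {
        synchronized (record) {        // outer: serialize access to this record
            synchronized (channel) {   // inner: make the file write atomic
                // The inner block touches nothing but the channel, and no
                // other code syncs on the channel, so no deadlock cycle
                // can form.
                while (buf.hasRemaining()) {
                    channel.write(buf, pos + buf.position());
                }
            }
        }
    }
}
```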
[Max]: 1) You're locking on two objects (Data.this and the static cache structure) when you don't need to for architectural reasons. I'm a threading purist, so I hate, hate, hate nested locks. They are sometimes necessary, but I don't think this is one of those cases. There's no nesting of these two syncs. I sync on the cache to get the Record; then I exit that sync and sync on the Record instead. Much as if my caching object were a Vector:
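The Vector analogy in code might look like the following sketch (names are illustrative). The key point is that the two locks are taken sequentially, never simultaneously: Vector's get() is internally synchronized (lock #1), and that lock is already released by the time the Record is locked (lock #2).

```java
import java.util.Vector;

// Illustrative sketch of the two-step, non-nested locking pattern.
public class CacheLookup {
    // Vector's methods are internally synchronized: get() is lock #1.
    private final Vector<Object> cache = new Vector<>();

    void add(Object record) {
        cache.add(record);
    }

    Object process(int recNo) {
        Object record = cache.get(recNo);  // lock #1: held only inside get()
        synchronized (record) {            // lock #2: taken after lock #1 is gone
            // ... read or update this record's data here ...
            return record;
        }
    }
}
```

Since the cache lock and the record lock are never held at the same time, there is no nesting and no deadlock risk from this pattern.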
[Max]: 2) My second reason is a weaker, aesthetic one. The design of locking on a given record doesn't obviously allow for the addition or deletion of records. That is, between the time you synchronize on Data.this and the time you call the cache method (which is synchronized), the nature of the cache could change. To me, it just seems messy. It's true that there are some subtle and potentially messy issues for create() and delete(); my final code is pretty clean and takes care of these concerns, but there were several places I might have gone astray (and which a junior programmer might later screw up if they're not careful). The key is to ensure that there's only ever one unique Record object corresponding to a given record number. If you delete a record, the Record is still there, with isDeleted = true, until you recycle it later, set isDeleted = false, and put new data in the fields. For create(), you request a new Record from the cache, and it either recycles an old one or creates a new one. This process needs to be synced so that any concurrent create() request doesn't use the same Record, or create a new Record at the same position. Again, the final code is not complex-looking, but it is a bit fragile if someone mucks with it without understanding.
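The invariant described above - exactly one Record object per record number, ever - might be sketched like this. This is a hypothetical reconstruction, not the actual final code; RecordCache and its members are invented names. delete() only flips a flag, and create() recycles or appends inside a single synchronized block, so two concurrent create() calls can never claim the same slot.

```java
import java.util.ArrayList;
import java.util.List;

// Illustrative sketch of the one-Record-per-record-number invariant.
class RecordCache {
    static class Record {
        boolean isDeleted;
        String[] fields;
    }

    private final List<Record> records = new ArrayList<>();

    synchronized Record create(String[] fields) {
        for (Record r : records) {
            if (r.isDeleted) {          // recycle the first deleted slot
                r.isDeleted = false;
                r.fields = fields;
                return r;
            }
        }
        Record r = new Record();        // no deleted slot: append a new one
        r.fields = fields;
        records.add(r);
        return r;
    }

    synchronized void delete(Record r) {
        r.isDeleted = true;             // the Record object itself stays in place
    }
}
```

Because the same Record object survives deletion and recycling, any thread that already holds a reference to it is still syncing on the right monitor.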
[Max]: Also, you're not really buying any efficiency here. To wit, a client C_1 who wants record R_1 still has to lock the entire cache for time x in order to acquire record R_1. Another client C_2 who wants record R_1 still has to lock the cache in order to see if anyone has R_1. This is true in either implementation, whether you lock on Data.this first or not. Well, if both clients are competing for the same record R_1, there's no performance benefit at all; that's true. Much more commonly, if two clients are competing for different records, then they do still both have to compete for the same cache lock first. So there's still a bit of contention. But once C_1 has R_1, and C_2 has R_2, they are independent. This is beneficial (performance-wise) as long as whatever processing needs to be done on a Record takes significantly more time than the initial acquisition from the cache. Now if all the field data is in the cache too, and we're talking about reads, the difference is not that big. The only processing that needs to be done is to check the isDeleted flag to make sure the record is valid, then clone the String[] array of fields, and return the clone. OK, that's not a lot more complex than the initial Record acquisition, so we don't get a big win here. However, for any update(), create(), or delete(), the record processing also involves waiting for a write to the file. Even with FileChannel, this is going to take longer than the simple sync lock + ArrayList get() which the cache lookup entails - it's I/O after all. So for anything involving writing to the file, there is a benefit to allowing different writes to proceed concurrently (after the initial cache lookup). I'm not saying it's a big difference necessarily, considering that I've argued elsewhere that reads are likely much more common than writes (thanks to find()). But there is a difference.
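The cheap cached read described above - a deleted-check plus a defensive clone - could be sketched as follows (names are illustrative, not from the actual solution). The clone matters: returning the internal array would let callers mutate the cache behind the lock's back.

```java
// Illustrative sketch of a fully cached read: no I/O, just a flag check
// and a defensive copy. Names are hypothetical.
class CachedRecord {
    private boolean isDeleted;
    private String[] fields;

    CachedRecord(String[] fields) {
        this.fields = fields;
    }

    synchronized String[] read() {
        if (isDeleted) {
            throw new IllegalStateException("record deleted");
        }
        return fields.clone();  // clone so callers cannot mutate the cache
    }
}
```

This is why, for reads, record-level locking buys little: the work inside the lock is barely more than the cache lookup itself. Only the write path, which still touches the file, holds the lock long enough for concurrency to pay off.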
It's sort of odd - I originally put in record-level locking without caching. (That is, I had an ArrayList of tiny RecordLock objects containing locking info, but no fields.) In that case, the fact that read() required I/O meant that Record-level sync achieved a notable performance boost, which I directly observed when I first moved from Data-level sync to Record-level sync. In turn, the existence of this ArrayList of RecordLocks made it really, really easy to transition to a full-cache solution in which the RecordLocks became Records which had field data as well as locking info. Now, with full caching, reading from the file is not so common, and so there's significantly less benefit to Record-level sync than there was when I first put it in. So I've gone from
1) sync on Data, no field data in memory
2) sync on Records, no field data in memory
3) sync on Records, all field data in memory
and am now contemplating
4) sync on Data, all field data in memory.
Which is what Vlad has, and Max says he prefers. That said, I don't think I'll actually do it, because given that I've already solved the issues of Record-level sync, there's not much compelling reason to take it out now. Except for those pesky junior programmers who just might do something really stupid later, and muck things up...

They can do that in any design really, but I'll concede they have
some extra opportunities for mischief in my design that they wouldn't in Vlad's.
So, I'm not strongly opposed to Vlad's solution; I just wanted to clear up some of the confusion about how my current design actually works.
[ September 12, 2003: Message edited by: Jim Yingst ]