"I'm not back." - Bill Harding, Twister
Except that at this point Max and I are talking about the possibility of two clients sharing one [i]connection[/i].
I don't mind leaving out deadlock protection for now
"I'm not back." - Bill Harding, Twister
Originally posted by Jim Yingst:
So, does that mean you have decided to bypass deadlock handling entirely?
Yes. That is, my DB and network code don't do any deadlock prevention, and my client doesn't do anything that could possibly cause a deadlock.
The only thing I disagree with: deadlock prevention is not the business of the client application...
Well, I think it needs to be done by something which can uniquely identify clients. The API for DB does not allow anything inside DB to be able to identify a client, IMO.
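For what it's worth, the usual way a client can guarantee it "doesn't do anything that could possibly cause a deadlock" is to acquire its record locks in one global order (e.g. ascending record number), so no cycle of waiters can ever form. A minimal sketch, with a hypothetical `Db` callback interface standing in for the assignment's lock/unlock methods:

```java
import java.util.Arrays;

// Sketch (not from the assignment's interface): a client that always
// locks records in ascending order can never be part of a wait cycle,
// so no deadlock is possible among clients following this rule.
public class OrderedLocker {

    // Hypothetical stand-in for Data.lock(int) / Data.unlock(int)
    public interface Db {
        void lock(int recNo);
        void unlock(int recNo);
    }

    public static void withLocks(Db db, int[] recNos, Runnable action) {
        int[] sorted = recNos.clone();
        Arrays.sort(sorted);                 // ascending order = global lock order
        for (int recNo : sorted) db.lock(recNo);
        try {
            action.run();
        } finally {
            // release in reverse order of acquisition
            for (int i = sorted.length - 1; i >= 0; i--) db.unlock(sorted[i]);
        }
    }
}
```

The catch, as noted above, is that this only works if every client is well behaved; nothing on the server side enforces it.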
Originally posted by Vlad Rabkin:
Hi Jim,
Hi Max,
First, as I said, I agree with Jim's point that it is out of the scope of this assignment to handle such a deadlock (I just preferred to implement it as an extra).
Max - your last idea is OK, but I have a singleton Data: I didn't want to implement pooling, but with no pool we can get a problem with file connections. I've tested it under a standard configuration of Windows 2000: after 2010 connections to the file, the server crashed with an exception: too many open files. I have read that in the standard configuration of Solaris 5 the maximum number of connections to a file per process is 256.
So, I decided to use a singleton...
Jim, Max, could you please take a look at my new topic about java.rmi.Connection.
Vlad,
Thanx
Originally posted by Vlad Rabkin:
Hi Max,
With a singleton I don't have concurrent connections to the file: one singleton Data has only one connection!
I didn't want to allow multiple instances of Data precisely so as not to have many connections.
If I had a Data instance per client, I would have to hold a file connection per client.
In order to close each connection I would have to add a new method to Data: close(), which would close the file connection (I decided not to extend the interface provided by the assignment, which doesn't have such a method).
Second: we cannot rely on the client to correctly close the connection. So, before Data was garbage collected we could get a problem.
Normally the best solution is providing a close() method plus pooling.
To avoid all these problems and complicated enhanced patterns, I use a single Data object which has a single connection to the file.
Access to the Data object I coordinate through a read-write lock manager (it has nothing to do with LockManager).
It works perfectly!
Vlad
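The single-Data-plus-read-write-lock arrangement described here could be sketched as below. Class names are illustrative, and ReentrantReadWriteLock is from J2SE 5's java.util.concurrent - these posts predate it, so the original would have used a hand-rolled equivalent:

```java
import java.util.concurrent.locks.ReentrantReadWriteLock;

// Sketch: one Data instance, one file connection, with all access
// funneled through a read-write lock. Readers proceed concurrently;
// a writer gets exclusive access. The String[] field stands in for
// the real file I/O.
public class GuardedData {
    private static final GuardedData INSTANCE = new GuardedData();

    private final ReentrantReadWriteLock rw = new ReentrantReadWriteLock();
    private String[] record = { "initial" };   // stand-in for the db file

    private GuardedData() { }                  // singleton: no outside instances

    public static GuardedData getInstance() { return INSTANCE; }

    public String[] read() {
        rw.readLock().lock();
        try { return record.clone(); }
        finally { rw.readLock().unlock(); }
    }

    public void write(String[] data) {
        rw.writeLock().lock();
        try { record = data.clone(); }
        finally { rw.writeLock().unlock(); }
    }
}
```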
"I'm not back." - Bill Harding, Twister
It still seems as though a great deal of trouble will be avoided if the clients are written to behave themselves in a nice orderly fashion, rather than depending on Connection (or Data or whatever) to sort out their problems.
An ideal solution would have well-behaved clients and a central authority capable of resolving problems if they do occur, IMO.
The Sun Certified Java Developer Exam with J2SE 5: paper version from Amazon, PDF from Apress, Online reference: Books 24x7 Personal blog
"I'm not back." - Bill Harding, Twister
a server that will detect problems and resolve them by simply disconnecting any client who violates the spec
All you'd have to do is make a connection when you need one: not the entire time. Of course, there's a tradeoff here. You're giving up cycles for memory. Still, it's the way I might do it in a real application.
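A sketch of that connect-on-demand idea (file name, record layout, and class name are made up): open the file for the duration of one operation and close it again immediately, so no long-lived connection is held per client.

```java
import java.io.IOException;
import java.io.RandomAccessFile;
import java.nio.ByteBuffer;
import java.nio.channels.FileChannel;
import java.util.Arrays;

// Sketch of "make a connection when you need one, not the entire time":
// the trade-off is paying the open/close cost per operation in exchange
// for never holding hundreds of file handles open.
public class OnDemandReader {

    public static byte[] readRecord(String dbFile, long offset, int length)
            throws IOException {
        // try-with-resources closes the file as soon as the read is done
        try (RandomAccessFile raf = new RandomAccessFile(dbFile, "r")) {
            FileChannel ch = raf.getChannel();
            ByteBuffer buf = ByteBuffer.allocate(length);
            while (buf.hasRemaining()) {
                int n = ch.read(buf, offset + buf.position()); // positional read
                if (n < 0) break;                              // hit end of file
            }
            return Arrays.copyOf(buf.array(), buf.position());
        }
    }
}
```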
Including the 3-way deadlock which defeats Philippe's algorithm, as pointed out by Andrew on the previous page.
Wait, how many locks do you have?
File channels are safe for use by multiple concurrent threads. The close method may be invoked at any time, as specified by the Channel interface. Only one operation that involves the channel's position or can change its file's size may be in progress at any given time; attempts to initiate a second such operation while the first is still in progress will block until the first operation completes. Other operations, in particular those that take an explicit position, may proceed concurrently; whether they in fact do so is dependent upon the underlying implementation and is therefore unspecified.
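In other words, reads that pass an explicit position may safely run in parallel on one shared channel, and they never disturb the channel's own position. A small illustration of that guarantee (the helper name is made up for this example):

```java
import java.io.IOException;
import java.nio.ByteBuffer;
import java.nio.charset.StandardCharsets;
import java.nio.channels.FileChannel;

// Illustration of the quoted guarantee: FileChannel.read(ByteBuffer, long)
// takes an explicit position, so several threads may issue such reads
// against one shared channel without external synchronization.
public class PositionalRead {

    public static String readAt(FileChannel ch, long pos, int len)
            throws IOException {
        ByteBuffer buf = ByteBuffer.allocate(len);
        while (buf.hasRemaining()) {
            int n = ch.read(buf, pos + buf.position()); // explicit position
            if (n < 0) break;                           // end of file
        }
        return new String(buf.array(), 0, buf.position(), StandardCharsets.US_ASCII);
    }
}
```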
Originally posted by Jim Yingst:
It still seems as though a great deal of trouble will be avoided if the clients are written to behave themselves in a nice orderly fashion, rather than depending on Connection (or Data or whatever) to sort out their problems. An ideal solution would have well-behaved clients and a central authority capable of resolving problems if they do occur, IMO.
I still think deadlocks are out of scope for this assignment, but I might handle them anyway, as I'm starting to warm to this solution (letting Connection enforce locking order). I'll give it some more thought - along with possibly letting Data have multiple instances (which may mean I don't even need Connection anymore, if Data now tracks this info - but maybe that's just moving code from one class to another without accomplishing anything).
Originally posted by Vlad Rabkin:
Hi,
I didn't use FileChannel in my assignment, because when I started the assignment I was using JDK 1.3.1. I used read-write lock synchronization.
Now, as I've seen in your mails, most of you use the FileChannel class from JDK 1.4.1 (at least I see that Phil does). I have just read the description of this class:
It seems that this read-write lock mechanism makes no sense anymore. Do I understand it correctly?
Thanx,
Vlad
Your design looks OK, if a little overly complex. Just to stir up the pot a bit:
Did you ever consider caching the records as you read them, and going to your cache instead of the file (for reads)?
Just one more word about caching: I cache records in their converted form (array of field values) in such a way that I optimize reads twice: if a given record is read three times from the cache, it's read from the file only once and converted only once.
"I'm not back." - Bill Harding, Twister
increasing the number of locks, you're increasing complexity of the app, and increasing the opportunity for deadlock
Effectively speaking then, you read the File once, and never again?
Originally posted by Vlad Rabkin:
Hi Max,
Yeap, but I still need to synchronize it:
I shouldn't pick up any data from the cache while somebody is writing to the file...
Is my assumption correct that FileChannel is thread safe, so I don't need any read-write synchronization?
Vlad
Originally posted by Vlad Rabkin:
Hi Max,
Thanx a lot for explaining FileChannel!
I will use it to write to the file. (I have to read the file header with a DataInputStream anyway, because that is required by the assignment.)
I think your idea about opening a FileChannel is great, but a bit weird:
I always thought that opening and closing a connection to the file was a pretty slow operation. If I understand your idea correctly, in your case we don't open a connection every time, but just a FileChannel on an already-open connection (FileOutputStream or RandomAccessFile).
I will try to test it.
Thanx again for your advice!
Vlad
Some of my private students have done a fairly clever thing, in that their Map has a record number as a key, and a Record as a value. Thus, for locking, reading, et al., they simply synchronize on the map.
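That map-as-the-single-lock idea might look like this (class and method names are illustrative, not the students' actual code):

```java
import java.util.HashMap;
import java.util.Map;

// Sketch: one map from record number to record, with every operation
// synchronized on the map itself, so the map serves as both the cache
// and the single lock for the whole database.
public class RecordStore {
    private final Map<Integer, String[]> records = new HashMap<>();

    public void put(int recNo, String[] fields) {
        synchronized (records) {
            records.put(recNo, fields.clone()); // defensive copy in
        }
    }

    public String[] read(int recNo) {
        synchronized (records) {
            String[] r = records.get(recNo);
            return (r == null) ? null : r.clone(); // defensive copy out
        }
    }
}
```

One lock keeps the design simple, at the cost of serializing every operation - which is exactly the find() bottleneck Vlad worries about later in the thread.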
By increasing the number of locks, you're increasing the complexity of the app, and increasing the opportunity for deadlock, regardless of how you're using those locks. I suggest as few locks as possible. It sounds like you have more than you absolutely need.
About your caching: my apologies, I must not have understood you. Effectively speaking then, you read the file once, and never again? Or do you cache records as you read them, so you never have to follow up? If it's the latter (which it sounds like it is), why not do the former?
I'm not really advocating it, mind you: I'm just trying to get a sense of what factors you're weighing.
I did not understand your question till I read your later message to Vlad:
Thus, on startup, load all the records. Then you simply work with those loaded records - no more IO to read and search, unless you want to, say, modify a record.
That's what I wanted to avoid, because the feasibility of loading (and keeping) all records in memory depends on database size. What if we have 10000 records? Or 100000?
So, on startup I read all records once, not to load them, but to:
- build in-memory indexes (if any) for other fields,
- build a TreeSet of distinct values,
- make an inventory of deleted records.
During that startup process, I don't feed my LRU cache. It would slow down the startup process without even improving the performance of the first "real" reads:
If the db size is 100000 and the DB admin set the cache size to 1024 (just an example), it's probably better to get an empty cache after startup than one filled with the last 1024 end-of-file records. Right?
Now I could consider this slight optimization: if the cache size is bigger than the db size, or even if its size is significant in comparison with the latter (an arbitrary ratio), I could decide to feed the cache at startup. OK.
Or (new idea): accept a special value like CACHE_ALL (-1?) for my cacheSize property, and if this value is used, I would:
- feed the cache at startup,
- never clean it up.
It should be a good thing for small tables.
Cheers,
Phil.
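Phil's cacheSize/CACHE_ALL idea maps naturally onto a LinkedHashMap in access order, which can evict the least-recently-used entry on insert. This is a hypothetical stand-in, not his actual code:

```java
import java.util.LinkedHashMap;
import java.util.Map;

// Sketch of an LRU record cache where a special size of CACHE_ALL (-1)
// disables eviction, so small tables can be kept entirely in memory.
public class RecordCache {
    public static final int CACHE_ALL = -1;

    private final int maxSize;
    private final Map<Integer, String[]> map;

    public RecordCache(int maxSize) {
        this.maxSize = maxSize;
        // accessOrder = true gives least-recently-used iteration order
        this.map = new LinkedHashMap<Integer, String[]>(16, 0.75f, true) {
            protected boolean removeEldestEntry(Map.Entry<Integer, String[]> eldest) {
                // evict only when bounded and over capacity
                return RecordCache.this.maxSize != CACHE_ALL
                        && size() > RecordCache.this.maxSize;
            }
        };
    }

    public void put(int recNo, String[] rec) { map.put(recNo, rec); }

    public String[] get(int recNo) { return map.get(recNo); }

    public int size() { return map.size(); }
}
```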
[ July 17, 2003: Message edited by: Philippe Maquet ]
That's what I wanted to avoid, because the feasibility of loading (and keeping) all records in memory depends on database size. What if we have 10000 records? Or 100000?
Originally posted by Vlad Rabkin:
Hi Max,
I have completely rejected both ideas:
LockManager,
ReadWriteLock.
I have synchronized all read, write, find, lock/unlock and so on on the map of records (since I have this map anyway - it is my cached database).
It works perfectly.
I have now only two concerns:
since I synchronized all methods on the map (not to have dirty reads):
- the find method (it doesn't read from the database, only from the cache) can be a bottleneck.
- what is the real advantage of using FileChannel with this design (all methods are synchronized, so we don't profit from FileChannel's synchronization and the atomicity of write operations)?
I don't really know what Sun expects from me!?
Could you let me know your opinion:
Is it OK that find() is synchronized as well?
Many thanx,
Vlad