
Record read/write, unlock/write synchronization

 
Nikolay Petrov
Greenhorn
Posts: 9
Hi All,

I'm working on the URLyBird 1.2.1 assignment. I'm curious how you deal with these two types of synchronization (if they are applicable to your assignment, of course):
1. Should we support dirty reads? In some cases a record read could execute simultaneously with an update/delete, so the client would receive inconsistent data. The cure could be to make the read/update/delete methods synchronized, so a record cannot be read and modified at the same time. (I have a separate record object that handles these operations, so there is no worry when reading/modifying different records.)
2. Another issue is that it is possible to modify and unlock a record simultaneously. Again, the cure here is to make all of these methods synchronized.
 
Inuka Vincit
Ranch Hand
Posts: 175
When I did my program I thought about #1 but never directly addressed it. A read returned either the old record or the new one; which one wasn't guaranteed.

#2 should not happen. If you designed it right, it cannot happen: a record needs to be locked in order to be modified, and the same user (or object) then has to unlock the record before it can be modified again.
I used caching with individual record locks.
 
Frans Janssen
Ranch Hand
Posts: 357
Originally posted by Nikolay Petrov:
Hi All,
1. Should we support dirty reads? In some cases a record read could execute simultaneously with an update/delete, so the client would receive inconsistent data. The cure could be to make the read/update/delete methods synchronized, so a record cannot be read and modified at the same time. (I have a separate record object that handles these operations, so there is no worry when reading/modifying different records.)


I am assuming dirty reads are unacceptable, if by that you mean that a record could be read while one half of it has already been modified and the other half hasn't.

My first attempt at a locking mechanism made a backup copy (in memory) of all locked records, so that clients who read a locked record would get the record as it was at the moment it had been locked.

But I abandoned this mechanism for the simpler one where all database access is synchronized. With this mechanism a reading client could see the state of a record in between two updates, if the client holding the lock were to do two or more updates before unlocking the record (but why would a client do that?).
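That fully synchronized scheme can be sketched roughly like this (a minimal illustration only; the class, method names, and record layout are made up, not taken from the assignment):

```java
import java.util.HashMap;
import java.util.Map;

// Rough sketch of the "synchronize all database access" scheme.
// Names are illustrative, not from the assignment interface.
class Data {
    private final Map<Integer, String[]> records = new HashMap<>();

    // read and update synchronize on the same monitor, so a reader
    // can never observe a half-written record.
    public synchronized String[] read(int recNo) {
        String[] rec = records.get(recNo);
        return rec == null ? null : rec.clone(); // defensive copy
    }

    public synchronized void update(int recNo, String[] data) {
        records.put(recNo, data.clone());
    }
}
```

The defensive copies keep callers from mutating the cached array outside the monitor.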

Frans.
 
Nikolay Petrov
Greenhorn
Posts: 9
I have synchronization at the DB file level. This means it is impossible to write and read simultaneously, and therefore impossible to corrupt the data at that level. The issue I was raising is a little outside the scope of the assignment: it concerns the order in which the other layers access the DB.

The problem arises when you write an application that lets the client execute operations from different threads. It could then be possible to lock a record from Thread1, modify it from Thread2, and unlock it from Thread3.
In that case the two situations I described are likely to happen.
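One common way to rule out that lock-from-Thread1/unlock-from-Thread3 scenario is a lock cookie: lock() hands back a token that must be presented again to unlock (or modify). A minimal sketch, with hypothetical names not taken from either assignment:

```java
import java.util.HashMap;
import java.util.Map;
import java.util.Random;

// Hypothetical sketch: the cookie ties lock and unlock together, so a
// record cannot be unlocked by a caller that does not hold its cookie.
class LockManager {
    private final Map<Integer, Long> locks = new HashMap<>();
    private final Random random = new Random();

    // Blocks until the record is free, then returns the caller's cookie.
    public synchronized long lock(int recNo) {
        while (locks.containsKey(recNo)) {
            try {
                wait();
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
        }
        long cookie = random.nextLong();
        locks.put(recNo, cookie);
        return cookie;
    }

    // Rejects unlock attempts that present the wrong (or no) cookie.
    public synchronized void unlock(int recNo, long cookie) {
        if (!Long.valueOf(cookie).equals(locks.get(recNo))) {
            throw new SecurityException("Record " + recNo
                    + " is not locked with this cookie");
        }
        locks.remove(recNo);
        notifyAll();
    }
}
```

An update method would validate the cookie the same way, so only the locking client can modify the record, whichever thread it happens to call from.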
 
Josh Allen
Ranch Hand
Posts: 37
I'm figuring out the same issue (URLyBird 1.1.3).
The problem with not locking the memory cache in read()/find() is: how are you going to perform a create()/delete()? I'm storing mine in a java.util.Map, and if I add/remove entries in create()/delete() then I can't iterate over the map concurrently in find().

Ideally I would like to implement multi-version read consistency, but I wonder if that would be overkill. Does everyone just lock down the entire database when reading/searching? That seems like a lot of unnecessary contention, but the book I have does just that.
 
Nikolay Petrov
Greenhorn
Posts: 9
I don't keep the DB in memory. I have a record object which alone is responsible for performing all operations. That gives me the opportunity not to lock the entire DB for every operation.

After a little brainstorming on this I came up with the idea of a read/write lock (gate). As I picture it, this means I have an object with the following methods:
<CODE>
lockRead
unlockRead
lockWrite
unlockWrite
</CODE>
I am targeting the following behaviour:
1. When you perform lockRead, all other lockReads pass, but lockWrite waits until the matching unlockReads have executed.
2. When you perform lockWrite, all other threads wait until unlockWrite executes (this includes all lockReads and lockWrites).
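A hand-rolled gate with exactly those four methods could be sketched with wait/notifyAll. This assumes the semantics described above (readers may overlap, a writer excludes everyone); it does nothing about writer starvation under a steady stream of readers:

```java
// Sketch of the read/write gate: readers count up and down,
// a single writer flag excludes everyone else.
class ReadWriteGate {
    private int readers = 0;
    private boolean writing = false;

    public synchronized void lockRead() {
        while (writing) {
            try {
                wait();
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
        }
        readers++;
    }

    public synchronized void unlockRead() {
        readers--;
        if (readers == 0) {
            notifyAll(); // last reader out wakes any waiting writer
        }
    }

    public synchronized void lockWrite() {
        while (writing || readers > 0) {
            try {
                wait();
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
        }
        writing = true;
    }

    public synchronized void unlockWrite() {
        writing = false;
        notifyAll(); // wake all waiting readers and writers
    }
}
```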
 
Josh Allen
Ranch Hand
Posts: 37
Originally posted by Nikolay Petrov:

After a little brainstorming on this I came up with the idea of a read/write lock (gate). As I picture it, this means I have an object with the following methods:
<CODE>
lockRead
unlockRead
lockWrite
unlockWrite
</CODE>
I am targeting the following behaviour:
1. When you perform lockRead, all other lockReads pass, but lockWrite waits until the matching unlockReads have executed.
2. When you perform lockWrite, all other threads wait until unlockWrite executes (this includes all lockReads and lockWrites).


Were you aware of the JDK 1.5 java.util.concurrent.locks package? This work has already been done for you (gotta love Java!).

http://www.javadoconline.com/search.jsp?from=main&class=ReentrantReadWriteLock&action=search

ReadWriteLock implementation javadoc

The options I'm considering are:
  • Use a synchronized snapshot of the map's key set
  • Use CopyOnWriteArrayList
  • Use a ReadWrite lock
  • Any other ideas?
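For the read/write-lock option, a cache guarded by java.util.concurrent.locks.ReentrantReadWriteLock might look roughly like this (the RecordCache class and its methods are made up for illustration): read()/find() take the shared read lock so searches can overlap, while create()/delete() take the exclusive write lock.

```java
import java.util.HashMap;
import java.util.Map;
import java.util.concurrent.locks.ReentrantReadWriteLock;

// Illustrative cache: many concurrent readers, one exclusive writer.
class RecordCache {
    private final ReentrantReadWriteLock gate = new ReentrantReadWriteLock();
    private final Map<Integer, String[]> records = new HashMap<>();

    public String[] read(int recNo) {
        gate.readLock().lock(); // shared: other readers may proceed
        try {
            return records.get(recNo);
        } finally {
            gate.readLock().unlock();
        }
    }

    public void create(int recNo, String[] data) {
        gate.writeLock().lock(); // exclusive: blocks readers and writers
        try {
            records.put(recNo, data);
        } finally {
            gate.writeLock().unlock();
        }
    }
}
```

The try/finally pairs matter: unlike synchronized, an explicit Lock is not released automatically when an exception escapes.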
     
Nikolay Petrov
Greenhorn
Posts: 9
I know that package very well, but I'm not sure it is OK to use it in my assignment, and that is why I don't. On the other hand, until you know how to build something yourself I don't advise using it: even though the package is there, I should know how it works.

You could also use synchronized on your record storage; all operations like find/delete could be placed inside a synchronized block.
     
Josh Allen
Ranch Hand
Posts: 37
Originally posted by Nikolay Petrov:
I know that package very well, but I'm not sure it is OK to use it in my assignment, and that is why I don't. On the other hand, until you know how to build something yourself I don't advise using it: even though the package is there, I should know how it works.

You could also use synchronized on your record storage; all operations like find/delete could be placed inside a synchronized block.


It's not included in the forbidden packages:
http://www.sun.com/training/catalog/courses/CX-310-252A.xml

Make sure you replace your implementation with the standard one before submitting:
Use of functionality provided by the core Java classes will be preferred to your own implementation of that functionality, unless there is a specific advantage to providing your own implementation.


Using a regular synchronized block like that would have the same effect as locking down the entire database, which is what I'm trying to avoid.
     