Concurrency issue with create(...)/createRecord(...) [URLyBird]

 
Chris Be (Ranch Hand, Posts: 36):
Ranchers,

I have the following interface, which would be similar to that of many other assignments/versions:
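A sketch of such an interface (method names, the long lock cookie, and the exception classes are assumptions; actual assignment versions vary):

```java
// Sketch of a typical URLyBird/B&S data-access interface. Method names,
// the long lock cookie and the exception classes are assumptions; the
// actual signatures differ between assignment versions.
class RecordNotFoundException extends Exception { }
class DuplicateKeyException extends Exception { }

interface DBAccess {
    // Reads a record and returns its field values.
    String[] readRecord(long recNo) throws RecordNotFoundException;

    // Modifies a record; requires the cookie returned by lockRecord.
    void updateRecord(long recNo, String[] data, long lockCookie)
            throws RecordNotFoundException, SecurityException;

    // Deletes a record, making its file area reusable.
    void deleteRecord(long recNo, long lockCookie)
            throws RecordNotFoundException, SecurityException;

    // Returns the record numbers of all records matching the criteria.
    long[] findByCriteria(String[] criteria);

    // Creates a new record and returns its record number.
    long createRecord(String[] data) throws DuplicateKeyException;

    // Locks a record; the returned cookie must be passed to
    // update/delete/unlock.
    long lockRecord(long recNo) throws RecordNotFoundException;

    void unlock(long recNo, long cookie) throws SecurityException;
}
```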



The interface implies a usage sequence lock->update/delete->unlock, which is reasonable.

The locking calls also imply that the interface user must know the record number of the record to be locked. This can only be the case if the create method is called, or if the find method is called and provides record numbers as part of the returned data.

To illustrate: without those calls, the interface user could only guess record numbers from Long.MIN_VALUE to Long.MAX_VALUE and poke around in a black hole. That would be hopelessly inefficient: in a data file with only one valid record left (e.g. number 284938), we could poke around forever with not much success at all.

Now the concurrency issue:
The interface implies that creating records has no concurrency issues, since no lock cookie/record number can be passed as an argument. The interface also does not provide any calls to determine record numbers (=file areas) available for record creation.

A concurrency problem occurs if a locking strategy is implemented which allows concurrent record manipulation (deletes, updates) on different records, as follows:
  • Within the lock method, each thread can obtain a reference to a record-specific lock object (being/extending ReentrantLock) in an unsynchronized manner. Note: the record is not yet locked at this stage, which ensures that threads do not wait for unrelated record locks to be released.
  • Upon calling the lock method on that lock object, all other threads have to wait only for the lock on that particular record to be released.


An example scenario:
  • Client thread T1 enters create and starts to write the record bytes (e.g. immediately after the last record in the file), without that file section being locked.
  • Client thread T2 enters create and also starts to write a record immediately after the last record, in the same unlocked file section. It overwrites/interferes with whatever T1 does.


I found only one solution:
    1. Synchronize the create method altogether. Only one thread can create at any one time in an unlocked file section. This works particularly well if deleted records are not reused (i.e. all threads create a record immediately after the last completely written record). It does mandate, though, that only the create method ever accesses that unlocked section.

    Implicit locking within create somewhat breaks the clean lock->record manipulation->unlock sequence stipulated by the interface. It also raises the problem of ensuring that two create calls do not overwrite each other through this sequence:
  • T1 and T2 obtain a lock reference for an identical file area (say the free record at the end of the file, to become record 32)
  • T1 locks that section, writes data, unlocks
  • T2 locks the section that now holds data, overwrites it, unlocks


Has anyone come across this issue? Any thoughts?

    ChrisBe
    [ September 26, 2007: Message edited by: Chris Be ]
     
Tomas Klubal (Greenhorn, Posts: 20):
    Chris,
First, I think it is necessary to synchronize on the RandomAccessFile instance regardless of logical locking.

    For instance, if you want to read a record:


This prevents another thread from moving the file pointer while this thread is sliced out; we still want to read from position 180.
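A sketch of that idiom, with an assumed 50-byte record length and class name:

```java
import java.io.IOException;
import java.io.RandomAccessFile;

class DataFile {
    private static final int RECORD_LENGTH = 50; // assumed fixed record size
    private final RandomAccessFile dbFile;       // single shared file handle

    DataFile(RandomAccessFile dbFile) {
        this.dbFile = dbFile;
    }

    byte[] readRecord(long recNo) throws IOException {
        byte[] record = new byte[RECORD_LENGTH];
        // seek + read must be atomic: another thread must not move the
        // shared file pointer between the two calls.
        synchronized (dbFile) {
            dbFile.seek(recNo * RECORD_LENGTH);
            dbFile.readFully(record);
        }
        return record;
    }
}
```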

Now (leaving aside reuse of deleted records, which is more complex but achievable), it is possible to leave the allocation of the record to this lowest level and return the record number after the record has been written to the file.
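A sketch of that approach (class name and record length are assumptions): create synchronizes on the same RandomAccessFile, so the length check and the write are one atomic step, and the record number is only known and returned afterwards.

```java
import java.io.IOException;
import java.io.RandomAccessFile;

class DataFileCreator {
    private static final int RECORD_LENGTH = 50; // assumed fixed record size
    private final RandomAccessFile dbFile;

    DataFileCreator(RandomAccessFile dbFile) {
        this.dbFile = dbFile;
    }

    // Allocates the new record at the current end of file. Because the
    // length check and the write form one synchronized block, two threads
    // can never claim the same file area.
    long createRecord(byte[] record) throws IOException {
        if (record.length != RECORD_LENGTH) {
            throw new IllegalArgumentException(
                    "record must be " + RECORD_LENGTH + " bytes");
        }
        synchronized (dbFile) {
            long recNo = dbFile.length() / RECORD_LENGTH;
            dbFile.seek(recNo * RECORD_LENGTH);
            dbFile.write(record);
            return recNo; // record number known only after allocation
        }
    }
}
```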




    T.
     
Ranch Hand, Posts: 231:
    Hi,

    Monkhouse's book solves this issue by using a static ReentrantReadWriteLock object:

Whenever a change is made to the data file (that includes update, delete and create), you need to acquire the writeLock of the ReentrantReadWriteLock object. Once done with the changes, unlock the writeLock.

    So even if a thread is put to sleep in the middle of making its change, other threads will have to wait on that writeLock.

    Of course, when you do a read you still need to acquire the readLock of the ReentrantReadWriteLock.
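A sketch of that idiom (this is not Monkhouse's actual code; the in-memory map stands in for the real file access and is an assumption of this sketch):

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.locks.ReentrantReadWriteLock;

class Data {
    // One static read/write lock guarding all changes to the data,
    // as described above.
    private static final ReentrantReadWriteLock dbLock =
            new ReentrantReadWriteLock();
    private static final Map<Long, String[]> records =
            new ConcurrentHashMap<>();

    String[] read(long recNo) {
        dbLock.readLock().lock();   // many readers may hold this at once
        try {
            return records.get(recNo);
        } finally {
            dbLock.readLock().unlock();
        }
    }

    void update(long recNo, String[] data) {
        dbLock.writeLock().lock();  // exclusive: even a thread put to sleep
        try {                       // mid-change blocks all other threads
            records.put(recNo, data);
        } finally {
            dbLock.writeLock().unlock();
        }
    }
}
```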
     
Alex (Ranch Hand, Posts: 516):
    Hi,

    Originally posted by Chris Be:
    1. synchronize the create method altogether. Only one thread can create at any one time in an unlocked file section.


That is what I do. Inside my create method, the sensitive code is synchronized.

    Originally posted by Tomas Klubal:
Chris,
    First, I think it is necessary to synchronize on the RandomAccessFile instance regardless of logical locking.

    For instance, if you want to read a record:



I don't agree, and I believe this goes against a "must": "Data shall allow any number of threads to read records concurrently".
    I strongly believe you need to handle multiple (concurrent) pointers to your database. (Personally I use cached data, and the Map.get(index) can be done concurrently by any number of threads.)
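The cached-data approach mentioned here might look like this (a sketch; the class name and record layout are made up):

```java
import java.util.List;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

class CachedData {
    // The whole data file is read into memory once at startup; afterwards
    // any number of threads can call read() concurrently without blocking
    // each other, because ConcurrentHashMap.get() needs no external lock.
    private final Map<Integer, String[]> cache = new ConcurrentHashMap<>();

    void load(List<String[]> recordsFromFile) {
        for (int i = 0; i < recordsFromFile.size(); i++) {
            cache.put(i, recordsFromFile.get(i));
        }
    }

    String[] read(int recNo) {
        return cache.get(recNo); // concurrent readers never wait here
    }
}
```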



Now (leaving aside reuse of deleted records, which is more complex but achievable), it is possible to leave the allocation of the record to this lowest level and return the record number after the record has been written to the file.




Again, this would go against the "must" in my opinion. Two threads, each owning their own record, cannot perform concurrent writes under that scheme. This should/must be allowed, and it can be done by keeping multiple pointers. Also, since the records are fixed length, concurrent writes do not interfere with each other.

    Regards,
    Alex
     
Tomas Klubal (Greenhorn, Posts: 20):
    Hi,
I have to say I have a different assignment (B&S), but I see the requirement "Data shall allow any number of threads to read records concurrently"
    as applicable to the higher level of the code (the lock manager and logical locking) rather than to file access. Otherwise, multiple concurrent reads from disk would have to be accomplished by using a separate RandomAccessFile instance for each thread (and I am not sure the OS allows that).

1) Monkhouse's book uses this idiom (synchronization on the RandomAccessFile) at the lowest level.

2) The seek and read operations on the RandomAccessFile must together be atomic, otherwise you can end up reading a different record than you wanted:

  • First, thread T1 sets the file pointer to position 200 and is then sliced out.
  • Second, thread T2 sets the file pointer to position 250 and reads a whole record (let's say 50 chars). This operation moves the file pointer to 300!
  • Thread T1 is then woken up and calls dbFile.read(), but the read starts at 300! This thread reads a completely different record than it was supposed to.
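As an aside (an addition to the thread, not something either poster proposed): FileChannel offers positional reads that never touch a shared file pointer, so the interleaving above cannot happen and readers need not block each other. A sketch with an assumed 50-byte record:

```java
import java.io.IOException;
import java.io.RandomAccessFile;
import java.nio.ByteBuffer;
import java.nio.channels.FileChannel;

class PositionalReader {
    private static final int RECORD_LENGTH = 50; // assumed fixed record size
    private final FileChannel channel;

    PositionalReader(RandomAccessFile dbFile) {
        this.channel = dbFile.getChannel();
    }

    byte[] readRecord(long recNo) throws IOException {
        ByteBuffer buf = ByteBuffer.allocate(RECORD_LENGTH);
        long start = recNo * RECORD_LENGTH;
        // read(dst, position) does not use (or move) the channel's own
        // file pointer, so the T1/T2 interleaving above cannot occur.
        while (buf.hasRemaining()) {
            int n = channel.read(buf, start + buf.position());
            if (n < 0) {
                throw new IOException("unexpected end of file");
            }
        }
        return buf.array();
    }
}
```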


Regards,
    Tomas
     