nx: All of URLy Bird 1.1.3 read/write lock(2)

 
liqun chang
Ranch Hand
Posts: 90
Hi Andrew and George, following your suggestions I will sum up my design for the read/write lock. If I have made mistakes, please give me some comments.
I will create a static RandomAccessFile variable (shared by all Data instances) as a mutex. In the DB interface there are really two kinds of actions concerning the data file: read (covering read and find) and write (covering update, delete and create). All actual file access in each method (read, find, update, delete and create) is put into a block synchronized on raf.
That way I can guarantee valid data (though sometimes stale) without a corrupted state. If I want the latest value of a record, I can use lock(), read() and unlock() to accomplish the task.
Am I right?
From today I will ask you no more than 4 questions a day, because you are busy with your own jobs.
 
Ranch Hand
Posts: 83
Hi liqun,
I'm afraid you have to build the locking system yourself. If you do what you described, only one client can access the database at a time. Don't you think?
Regards
James
 
liqun chang
Ranch Hand
Posts: 90
Hi George, where did you go?
Hi Andrew and George, about the read() method the instructions say:


// Reads a record from the file. Returns an array where each
// element is a record value.
public String[] read(int recNo) throws RecordNotFoundException


I will implement read as below:


public String[] read(int recNo) throws RecordNotFoundException
{
    String[] data;
    // check whether recNo is valid:
    // isValidated throws RecordNotFoundException if it is not,
    // else we go on
    isValidated(recNo);
    synchronized (raf)
    {
        raf.seek(position);
        raf.read();
    }
    // convert the byte[] to the String[] data
    return data;
}



because my isValidated(recNo) method is as below:


isValidated(recNo)
{
    synchronized (raf)
    {
        raf.seek(position);
        raf.read();
    }
    // check recNo validity; can throw RecordNotFoundException if recNo is invalid
}


As you can see, if I implement read as above, the following questions occur:
1: In the read method there are two raf.seek calls. How could I avoid this? Although the read method has no potential deadlock, it is inefficient. Could you give me some suggestions about this situation and share your own design?
2: My other idea is to remove isValidated(recNo) from each method that throws RecordNotFoundException. Instead, I provide low-level read/write (direct access to the data file), judge there whether the recNo is valid, and if so do the reading/writing, otherwise throw RecordNotFoundException. That way I only need one synchronized block. Is this approach right? Please help me with your experience.
Hi George, please help me, because I don't know whether you have time.
[ February 16, 2004: Message edited by: liqun chang ]
 
George Marinkovich
Ranch Hand
Posts: 619
Hi Liqun,

Originally posted by liqun chang:
I will create a static RandomAccessFile variable (shared by all Data instances) as a mutex. In the DB interface there are really two kinds of actions concerning the data file: read (covering read and find) and write (covering update, delete and create). All actual file access in each method (read, find, update, delete and create) is put into a block synchronized on raf.
That way I can guarantee valid data (though sometimes stale) without a corrupted state. If I want the latest value of a record, I can use lock(), read() and unlock() to accomplish the task.
Am I right?


Yes, sounds good to me.
Hope this helps,
George
 
liqun chang
Ranch Hand
Posts: 90
Hi George, you can see my read method above. Because I was in a hurry, I wrote find incorrectly; I have already corrected it. Could you give me some comments?
 
liqun chang
Ranch Hand
Posts: 90
Hi George, thanks for your reply.
From the instructions:


Locking
Your server must be capable of handling multiple concurrent requests, and as part of this capability, must provide locking functionality as specified in the interface provided above. You may assume that at any moment, at most one program is accessing the database file; therefore your locking system only needs to be concerned with multiple concurrent clients of your server. Any attempt to lock a resource that is already locked should cause the current thread to give up the CPU, consuming no CPU cycles until the desired resource becomes available.


From the instructions I find that the read(), update() and delete() methods can be implemented easily, but I am confused about the create method, because I must do the following:
1: get the length of the data file
2: subtract the header length
3: seek to the start of the record data
4: search sequentially for a deleted flag; if one is found, write the new record there
5: if no deleted flag is found, write the new record to the end of the data file

Could you give me some suggestions about this method, or do you have another implementation? The instructions tell me to assume that at any moment at most one program is accessing the data file, so I want to make the five steps atomic (i.e. put them in a synchronized block), but this seems inefficient. What should I do? Please give me a general suggestion. Or should I lock the whole data file? And if the data file is very, very large, how could I manage that?
[ February 17, 2004: Message edited by: liqun chang ]
 
liqun chang
Ranch Hand
Posts: 90
Hi George,
From the instructions:


It must allow the user to search the data for all records, or for records where the name and/or location fields exactly match values specified by the user.


Does this mean three situations?
1: all records: criteria[] = {null, null, null, null, null, null} (there are six fields, not counting owner)
2: records where the name and location fields exactly match values:
(criteria[] = {name, null, null, null, null, null}) && (criteria[] = {null, location, null, null, null, null})
3: records where the name or location fields exactly match values:
(criteria[] = {name, null, null, null, null, null}) || (criteria[] = {null, location, null, null, null, null})
I don't know whether I understand the logic of the quoted sentence.
Another question: does exactly match mean case sensitive? Please give me your comments.
 
George Marinkovich
Ranch Hand
Posts: 619
Hi Liqun,

Originally posted by liqun chang:

1: In the read method there are two raf.seek calls. How could I avoid this? Although the read method has no potential deadlock, it is inefficient. Could you give me some suggestions about this situation and share your own design?


The read method you present is logically correct, but as you say it does have two raf.seek calls. You can avoid this if you want as follows:
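The snippet itself is not shown above; a guess at the kind of code George means, where the recNo check costs no file access and the synchronized block holds exactly one seek and one read (RECORD_LENGTH, DATA_OFFSET and the cached recordCount are illustrative names of mine, not from the assignment):

```java
import java.io.IOException;
import java.io.RandomAccessFile;

// Sketch: validate against a cached record count, then seek and read
// inside ONE short synchronized block; parsing happens outside it.
public class SingleSeekRead {
    static final int RECORD_LENGTH = 8;   // toy record size for the sketch
    static final int DATA_OFFSET = 0;     // header length would go here

    private final RandomAccessFile raf;
    private final long recordCount;       // cached when the file is opened

    public SingleSeekRead(RandomAccessFile raf) throws IOException {
        this.raf = raf;
        this.recordCount = (raf.length() - DATA_OFFSET) / RECORD_LENGTH;
    }

    public String read(int recNo) throws IOException {
        if (recNo < 0 || recNo >= recordCount) {
            // stand-in for RecordNotFoundException
            throw new IOException("No record " + recNo);
        }
        byte[] buffer = new byte[RECORD_LENGTH];
        synchronized (raf) {                                       // short-lived block:
            raf.seek(DATA_OFFSET + (long) recNo * RECORD_LENGTH);  // one seek ...
            raf.readFully(buffer);                                 // ... one read
        }
        return new String(buffer, "US-ASCII").trim();  // conversion outside the block
    }
}
```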

The advantages of this are:
1) synchronized block is short in duration, only seek and read are synchronized all other work occurs outside of this block
2) only one seek and one read are needed
 
George Marinkovich
Ranch Hand
Posts: 619
Hi Liqun,

Originally posted by liqun chang:
From the instructions I find that the read(), update() and delete() methods can be implemented easily, but I am confused about the create method, because I must do the following:
1: get the length of the data file
2: subtract the header length
3: seek to the start of the record data
4: search sequentially for a deleted flag; if one is found, write the new record there
5: if no deleted flag is found, write the new record to the end of the data file
Could you give me some suggestions about this method, or do you have another implementation?


Here are some suggestions about the create method:
0) You should calculate the start location of the record data when you finish reading in the header and database schema. This doesn't change once the database file has been opened.
1) I don't think you need to be concerned with locking in the create method. You're only interested in deleted records which shouldn't be locked.
2) Check to see if the new record will cause a duplicate key exception, and if so throw the exception.
3) Get the first record in the database that has been deleted, and if you find one, mark the record as valid, and write the new record to this location.
4) If you don't find a deleted record to reuse, then append the new record to the end of the database file.
 
George Marinkovich
Ranch Hand
Posts: 619
Hi Liqun,

Originally posted by liqun chang:
Does this mean three situations?
1: all records: criteria[] = {null, null, null, null, null, null} (there are six fields, not counting owner)
2: records where the name and location fields exactly match values:
(criteria[] = {name, null, null, null, null, null}) && (criteria[] = {null, location, null, null, null, null})
3: records where the name or location fields exactly match values:
(criteria[] = {name, null, null, null, null, null}) || (criteria[] = {null, location, null, null, null, null})
Another question: does exactly match mean case sensitive? Please give me your comments.


Your understanding of the logic of the requirement is technically correct. 1 and 2 are relatively easy to implement. Implementing 3 is difficult to do in a single search. Many people (maybe most) do not support 3 in a single search; instead they support it as two independent searches:
3a: records where the name field exactly matches values:
(criteria[] name null null null null null), and
3b: records where the location field exactly matches values:
(criteria[] null, location, null, null, null, null)
I should also point out that while it is not wrong to allow the user to search on any of the database fields, the assignment instructions only require the search capability for the name and location fields. Providing search capability for fields other than name and location is an enhanced capability not required by the instructions. In other words, you will not lose any points if you decide not to provide this enhanced capability.
Most people have interpreted exactly match to include case. So, "this" is not considered an exact match of "This".
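The matching rule described above (a null criterion matches any value; a non-null criterion must match exactly, case included) can be sketched like this; the class and method names are mine:

```java
// Sketch of case-sensitive exact matching of one record against criteria.
// A null criteria element matches any value in that field, so an
// all-null array matches every record.
public class CriteriaMatcher {
    public static boolean matches(String[] record, String[] criteria) {
        for (int i = 0; i < criteria.length; i++) {
            if (criteria[i] != null && !criteria[i].equals(record[i])) {
                return false;   // exact match required, case included
            }
        }
        return true;
    }
}
```

The "name or location" case (3) then becomes two independent calls, one with only name set and one with only location set, with the two result sets merged.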
 
Satish Avadhanam
Ranch Hand
Posts: 697

Originally posted by liqun chang:

public String[] read(int recNo) throws RecordNotFoundException
{
    String[] data;
    // check whether recNo is valid:
    // isValidated throws RecordNotFoundException if it is not,
    // else we go on
    isValidated(recNo);
    synchronized (raf)
    {
        raf.seek(position);
        raf.read();
    }
    // convert the byte[] to the String[] data
    return data;
}
isValidated(recNo)
{
    synchronized (raf)
    {
        raf.seek(position);
        raf.read();
    }
    // check recNo validity; can throw RecordNotFoundException if recNo is invalid
}
As you can see, if I implement read as above, the following questions occur:
1: In the read method there are two raf.seek calls. How could I avoid this? Although the read method has no potential deadlock, it is inefficient. Could you give me some suggestions about this situation and share your own design?


Hi liqun, I'm just curious. The isValidated() function checks the physical location of the start of each record against the recNo, right? And that's why you are using seek. I'm also trying to implement along the same lines, so I'm eager to know. I understood what George explained, but I want to know your interpretation.
Appreciate your help, thanks.
 
Satish Avadhanam
Ranch Hand
Posts: 697

Originally posted by liqun chang:
Hi Andrew and George, following your suggestions I will sum up my design for the read/write lock. If I have made mistakes, please give me some comments.
I will create a static RandomAccessFile variable (shared by all Data instances) as a mutex. In the DB interface there are really two kinds of actions concerning the data file: read (covering read and find) and write (covering update, delete and create). All actual file access in each method (read, find, update, delete and create) is put into a block synchronized on raf.
That way I can guarantee valid data (though sometimes stale) without a corrupted state. If I want the latest value of a record, I can use lock(), read() and unlock() to accomplish the task.
Am I right?
From today I will ask you no more than 4 questions a day, because you are busy with your own jobs.


Hi liqun/George, from what you wrote above, I understood the approach. But what is a "mutex"? Does it refer to the way the RandomAccessFile variable is used as static, or is it something different? Can you please explain what the term "mutex" refers to?
Appreciate your help, thanks.
[ February 17, 2004: Message edited by: Satish Avadhanam ]
 
Satish Avadhanam
Ranch Hand
Posts: 697

Originally posted by George Marinkovich:

Here are some suggestions about the create method:
0) You should calculate the start location of the record data when you finish reading in the header and database schema. This doesn't change once the database file has been opened.
1) I don't think you need to be concerned with locking in the create method. You're only interested in deleted records which shouldn't be locked.
2) Check to see if new record will cause a duplicate key exception, and if so throw the exception.
3) Get the first record in the database that has been deleted, and if you find one, mark the record as valid, and write the new record to this location.
4) If you don't find a deleted record to reuse, then append the new record to the end of the database file.


Hi George, I am not able to understand some points above. I will write down what I understood and what I think. Please correct me if I'm wrong.
In point 2) you said to check if the new record will cause a duplicate key exception, and if so throw the exception. Mine is the URLyBird assignment. The signature of the create method provided by Sun is:
// Creates a new record in the database (possibly reusing a
// deleted entry). Inserts the given data, and returns the record
// number of the new record.
public long createRecord(String [] data)
throws DuplicateKeyException;
I mean, all we are doing is appending a String[] to the end of the data file and returning the record number (say, the physical start of the record in the data file). I do not understand how we can check for a DuplicateKeyException. I have gone through some of the threads regarding this issue. It seems there are a couple of solutions. One is to ignore it, as stated by Phil once, and document it. I also remember that Mark is the only one to stick to the idea that the create method will throw DuplicateKeyException. Can you throw some light on how to check for the DuplicateKeyException?
In point 3) you said "Get the first record in the database that has been deleted, and if you find one, mark the record as valid, and write the new record to this location". Each record in the database file starts with a deleted flag. So what you meant is to check whether any deleted flag is 1 (implies deleted), and if so, just insert the record passed to the create method there and return that recNo? What I was thinking was: as there is a deleted flag, we keep all the records in the database file as they are, and any records to be added are appended at the bottom of the file. If we invoke the deleteRecord method, all we need to do is set the deleted flag to 1. I think I'm missing something big time. Please correct me if I'm wrong.
Appreciate your help, thanks.
 
Philippe Maquet
Bartender
Posts: 1872
Hi Liqun, Satish and George,
Mmh... let's optimize a bit that first sentence: Hi everybody,

Satish, about DuplicateKeyException:
It seems there are a couple of solutions. One is to ignore it, as stated by Phil once, and document it. I also remember that Mark is the only one to stick to the idea that the create method will throw DuplicateKeyException. Can you throw some light on how to check for the DuplicateKeyException?


You're right to say "as stated by Phil once", because my own design decision is/was quite different.
In summary, here are the problems:
  • In URLyBird, there is no field (or combination of fields) that you can use to build a primary/unique key.
  • In any assignment, there is no definition of how such a primary/unique key should be defined: a single field? Possibly a compound key? We don't know. There is not even a reference to any possible primary/unique key, and the conditions under which DuplicateKeyException should be thrown are not described at all.
  • updateRecord() doesn't throw any exception in case a primary/unique key is altered. A primary key should never be altered, while a unique key could be as long as its unicity is kept. But as we have no definition of such keys, there is nothing to be implemented for sure.


So, the possible solutions are:
  1. ignore the DuplicateKeyException completely and justify your choice;
  2. implement a solution which throws it: a solution based on an optional primary key (which cannot be defined on the provided URLyBird file) that, if defined, you check in createRecord() and updateRecord(). In other words, if there is a primary key defined, you check for a DuplicateKeyException case in createRecord(), and you ensure that the key is never changed in updateRecord() (throwing some IllegalArgumentException in case it is).
Don't hesitate between solutions 1 and 2... 1 is very defendable IMO and only requires 3 lines in choices.txt, while 2 is quite complex (I know, I did it), requires indexes to be efficient (indexes that you probably didn't implement either) and goes a bit against the YAGNI principle.
    Now about the recNo validation and how to find a reusable deleted record in createRecord().
    For once, my solution is simple, though efficient:
    - When I open the file (once), I read it all, store all deleted recNos in a TreeSet deletedRecords, and remember the records count (recCount).
    - When I want to check if a recNo is valid, it's a simple range check against recCount.
    - To check if a record is deleted, it's a simple lookup in deletedRecords.
    - Checking whether a RecordNotFoundException must be thrown now becomes simple and efficient: you throw it if !isRecordValid(recNo) || isRecordDeleted(recNo).
    - In createRecord(), the recNo to be chosen is either the first entry in deletedRecords if there is one, or recCount + 1. Simple and efficient too, because you don't need to access the file.
    Now the deletedRecords TreeSet and the recCount variable must of course be kept up to date, but that's not the hardest job to be done in Data, right?
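The code snippets themselves are not shown above; the bookkeeping Phil describes might be sketched like this (class and method names are mine, and recNos are assumed to run from 1 to recCount):

```java
import java.util.TreeSet;

// Sketch of Phil's scheme: deleted recNos live in a TreeSet and the
// records count in recCount, so neither record validation nor the
// search for a reusable slot needs to touch the file.
public class RecordIndex {
    private final TreeSet<Integer> deletedRecords = new TreeSet<Integer>();
    private int recCount;

    public RecordIndex(int recCount) { this.recCount = recCount; }

    public boolean isRecordValid(int recNo) {
        return recNo >= 1 && recNo <= recCount;        // simple range check
    }

    public boolean isRecordDeleted(int recNo) {
        return deletedRecords.contains(recNo);          // simple lookup
    }

    // RecordNotFoundException case: !isRecordValid(recNo) || isRecordDeleted(recNo)
    public boolean shouldThrowNotFound(int recNo) {
        return !isRecordValid(recNo) || isRecordDeleted(recNo);
    }

    // createRecord() chooses the first deleted slot if there is one,
    // otherwise recCount + 1 (append) - no file access needed.
    public int nextRecNo() {
        return deletedRecords.isEmpty() ? recCount + 1 : deletedRecords.first();
    }

    // Keep the TreeSet and recCount up to date as records change:
    public void markDeleted(int recNo) { deletedRecords.add(recNo); }
    public void markReused(int recNo)  { deletedRecords.remove(recNo); }
    public void recordAppended()       { recCount++; }
}
```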
    Regards,
    Phil.
     
    Satish Avadhanam
    Ranch Hand
    Posts: 697

    Originally posted by Philippe Maquet:
    Hi Liqun, Satish and George,
    Mmh... let's optimize a bit that first sentence: Hi everybody,
    - In createRecord(), the recNo to be chosen is either the first entry in deletedRecords if there is one, or recCount + 1. Simple and efficient too, because you don't need to access the file.
    Regards,
    Phil.


    Hey Phil, thanks man. Your input is greatly valued and appreciated.
    Hi Liqun, Satish and George,
    Mmh... let's optimize a bit that first sentence: Hi everybody,

    Whoa baby!! If you optimize even a greeting like that, I don't want to think about how you optimized your code
    And Phil, one more thing. In createRecord(), I do not understand the purpose of the deleted flag. If we have a deleted flag on each record to tell that the record is deleted, then there should be some purpose in keeping the record for future use, right? I mean, probably in the future we need to reuse that record and just need to invert the deleted flag. Is it essential, or in Sun's terms a "must", to delete the record physically even though we have a way of marking a record as deleted (using the deleted flag)?
    Phil, you already threw some light. But I think I am still in the dark
    Appreciate your help. Thanks.
    [ February 17, 2004: Message edited by: Satish Avadhanam ]
     
    Philippe Maquet
    Bartender
    Posts: 1872
    Hi Satish,


    And Phil, one more thing. In createRecord(), I do not understand the purpose of the deleted flag. If we have a deleted flag on each record to tell that the record is deleted, then there should be some purpose in keeping the record for future use, right? I mean, probably in the future we need to reuse that record and just need to invert the deleted flag. Is it essential, or in Sun's terms a "must", to delete the record physically even though we have a way of marking a record as deleted (using the deleted flag)?


    A record to be deleted is just flagged as deleted but remains in the file for future reuse by a createRecord() call. In the meantime, its position in the file is like "forbidden", meaning that any readRecord() or updateRecord() call on its recNo will throw a RecordNotFoundException.
    What I just tried to explain above is a simple way to optimize (a lot!) both createRecord() and the record validity check, which shouldn't (IMO) require any physical read in the file.
    Regards,
    Phil.
     
    Satish Avadhanam
    Ranch Hand
    Posts: 697
    Hey Phil, thanks for being patient and explaining in detail. I just wanted to make sure. I think I understood it now.
    Thanks once again.
     
    George Marinkovich
    Ranch Hand
    Posts: 619
    Hi Satish,

    Originally posted by Satish Avadhanam:

    Can you please explain me what the term "mutex" refers to?


    Sure, "mutex" is short for "mutually exclusive." In this context it refers to the fact that we have a static variable (raf) in a class (Data) that needs to be thread-safe because multiple threads can access the class at any one time. In order to protect the raf variable from simultaneous access we implement the Data class so that any code that accesses the raf variable does so only in a block of code synchronized on the raf variable. Synchronizing on raf is what provides the mutex (the guarantee that only one thread will have mutually exclusive access to the raf at any one time). So given the code:
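(The snippet itself is not shown; presumably it was along these lines, wrapped here in a small self-contained class with names of my own:)

```java
import java.io.IOException;
import java.io.RandomAccessFile;

// Every seek/read pair is guarded by one block synchronized on raf,
// so no other thread can move the file pointer between the seek and
// the read.
public class MutexDemo {
    private final RandomAccessFile raf;   // the shared "mutex" object

    public MutexDemo(RandomAccessFile raf) { this.raf = raf; }

    public byte[] readAt(long position, int length) throws IOException {
        byte[] buffer = new byte[length];
        synchronized (raf) {       // acquire the raf mutex
            raf.seek(position);    // nobody else can seek until we release it
            raf.readFully(buffer);
        }                          // release the mutex
        return buffer;
    }
}
```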
    we know that only one thread will be given the raf mutex (all the other threads that want to access the raf will have to wait for it to be released). So, the same thread will do the seek and then the read, without any interference from another thread. This is a very good thing because the variable raf maintains state data (namely the position of the file pointer) and so we would not want thread 1 to seek to position A, then have thread 2 seek to position B, and then have thread 1 read from position B when it really wants to read at position A. If the mutex is granted to thread 1 then thread 2 can't sneak in and seek to position B until thread 1 releases the mutex.
    This mutex makes the raf thread-safe only if all the code that accesses the raf in the class occurs in code blocks that are synchronized on the raf.
     
    George Marinkovich
    Ranch Hand
    Posts: 619
    Hi Satish,

    Originally posted by Satish Avadhanam:

    In point 2) you said to check if the new record will cause a duplicate key exception, and if so throw the exception. Mine is the URLyBird assignment.


    I forgot you guys have URLyBird (I had B&S Contractors, and I would argue that there you can discern a primary key - namely, name and location). From what I understand there really isn't a primary key in URLyBird, so the duplicate key exception is sort of moot. I agree with Phil that you can simply document that it's impossible for the DuplicateKeyException to be thrown and then ignore the issue.


    The signature of the create method provided by Sun is like this

    In point 3) you said "Get the first record in the database that has been deleted, and if you find one, mark the record as valid, and write the new record to this location". Each record in the database file starts with a deleted flag. So what you meant is to check whether any deleted flag is 1 (implies deleted), and if so, just insert the record passed to the create method there and return that recNo? What I was thinking was: as there is a deleted flag, we keep all the records in the database file as they are, and any records to be added are appended at the bottom of the file. If we invoke the deleteRecord method, all we need to do is set the deleted flag to 1. I think I'm missing something big time. Please correct me if I'm wrong.


    I was simply trying to follow the method comment that says: "Creates a new record in the database (possibly reusing a deleted entry)." I took that to mean that we should check to see if the database file contained any records that had been deleted (marked for deletion) and use that space before deciding we had to append the new record at the end of the database file.
    I think what you say about the delete method is correct: all we need to do is set the deleted flag to 1. From that time on a record so marked is considered deleted and is not used by any of the database methods with one exception. That exception is the create method which says it can "possibly reuse a deleted entry".
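The delete behaviour described above, flipping only the flag byte and leaving the record data in place for later reuse, can be sketched like this (the 1-byte flag and the toy record layout are assumptions of mine):

```java
import java.io.IOException;
import java.io.RandomAccessFile;

// Sketch: deleteRecord only sets the deleted flag to 1; the record
// bytes stay in the file so create can reuse the slot later.
public class DeleteSketch {
    static final int RECORD_LENGTH = 16;  // toy size: 1 flag byte + 15 data bytes
    static final int DATA_OFFSET = 0;     // header length would go here
    static final byte DELETED = 1;

    private final RandomAccessFile raf;

    public DeleteSketch(RandomAccessFile raf) { this.raf = raf; }

    public void delete(int recNo) throws IOException {
        synchronized (raf) {
            raf.seek(DATA_OFFSET + (long) recNo * RECORD_LENGTH);
            raf.writeByte(DELETED);       // flip the flag, leave the data
        }
    }

    public boolean isDeleted(int recNo) throws IOException {
        synchronized (raf) {
            raf.seek(DATA_OFFSET + (long) recNo * RECORD_LENGTH);
            return raf.readByte() == DELETED;
        }
    }
}
```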

     
    Satish Avadhanam
    Ranch Hand
    Posts: 697
    Thanks George for clarifying the meaning of "mutex" and also reassuring me about the createRecord() method.
    Appreciate your patience in explaining, thanks.
     
    Don Wood
    Ranch Hand
    Posts: 65
    George,
    I just finished my SCJP (did very well, thanks for asking). I have been quietly reading this forum and Max's book, enough that I think I understand most of the issues. I still need practice with RMI and Swing before I feel I have enough understanding to sign up for the assignment.
    Here is my question. You said,


    1) I don't think you need to be concerned with locking in the create method. You're only interested in deleted records which shouldn't be locked.


    Don't you still need some mechanism to prevent two threads running in the create method from trying to overwrite the same deleted record? The raf lock is only for seeking/reading/writing so it does not prevent this conflict. It seems that the other options are a database lock (sounds too heavyweight to me) or a lock on the deleted record to prevent "simultaneous" overwrites.
    Perhaps I just don't understand enough about the assignments yet.
    [ February 17, 2004: Message edited by: Don Wood ]
     
    Satish Avadhanam
    Ranch Hand
    Posts: 697

    Originally posted by Don Wood:

    Don't you still need some mechanism to prevent two threads running in the create method from trying to overwrite the same deleted record? The raf lock is only for seeking/reading/writing so it does not prevent this conflict. It seems that the other options are a database lock (sounds too heavyweight to me) or a lock on the deleted record to prevent "simultaneous" overwrites.
    Perhaps I just don't understand enough about the assignments yet.
    [ February 17, 2004: Message edited by: Don Wood ]


    Hey Don, we are in the same boat, man. First, congrats on passing the SCJP. I will try to explain, but I'm not sure if it's correct.
    As you said, let's assume two threads T_1 and T_2 are trying to access the createRecord() method. If the createRecord() method is not atomic, then the following sequence of operations might take place.
    T_1 checks the deleted flag of a record, sees it is 1 (deleted), and so wants to write the new record at this position (say offSet).
    Meanwhile, the CPU brings T_2 into the picture. Now T_2 also sees the deleted flag of the record as 1 and wants to write the new record at the same position, offSet.
    T_2 writes the new record at offSet.
    And now if the CPU brings T_1 back to life and gives it control, then T_1 also writes its record at the same position, offSet, right?
    If I'm right, and as Don thought, there might be a chance of data corruption.
    If that's the case, I think we need to call the createRecord method in a block of code synchronized on the RandomAccessFile, like:
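(The snippet itself is not shown; the idea can be sketched with an in-memory stand-in for the file, where the List plays the role of the records and null the role of the deleted flag. The point is that the whole check-then-write sequence sits inside one synchronized block, so two threads can never pick the same deleted slot:)

```java
import java.util.ArrayList;
import java.util.List;

// Sketch: the search for a deleted slot and the write are one atomic
// unit, guarded by a single lock object (standing in for the raf).
public class AtomicCreate {
    private final Object mutex = new Object();
    private final List<String> records = new ArrayList<String>();

    public int createRecord(String data) {
        synchronized (mutex) {                 // whole sequence is atomic
            for (int i = 0; i < records.size(); i++) {
                if (records.get(i) == null) {  // "deleted flag" check
                    records.set(i, data);      // reuse the slot
                    return i;
                }
            }
            records.add(data);                 // no deleted slot: append
            return records.size() - 1;
        }
    }

    public void deleteRecord(int recNo) {
        synchronized (mutex) { records.set(recNo, null); }
    }
}
```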

    Please correct me if I'm wrong. Appreciate your help, thanks.
     
    Satish Avadhanam
    Ranch Hand
    Posts: 697
    Hi Don,
    I was thinking of what I wrote above. I think what George meant by

    1) I don't think you need to be concerned with locking in the create method. You're only interested in deleted records which shouldn't be locked.


    is that in other methods like readRecord(long recNo) and updateRecord(recNo) we do record locking and then read/manipulate the data. We cannot do this record-level locking in the createRecord() method. But two threads accessing createRecord() simultaneously can be handled by obtaining a lock on the RandomAccessFile itself. I understand that we call this createRecord() method in an adapter class (i.e. Data.java or another adapter), right? If we call createRecord() there in a synchronized block, then it should be safe.
    I think while George is explaining record-level locking, we are talking about a whole different issue. I'm not sure if I'm right; please correct me.
    Appreciate it, thanks.
    [ February 17, 2004: Message edited by: Satish Avadhanam ]
     
    George Marinkovich
    Ranch Hand
    Posts: 619
    Hi Don,
    First, congratulations on passing your SCJP and welcome to this forum.
    Second, sign up for the assignment now. You don't need to know everything before you get the assignment. Get the assignment and work on the things you already know, you can pick up the other things as you go along. There's no time limit on the assignment either so there's no reason not to jump in. You'll also get to read a complete set of the assignment instructions instead of just the pieces that people post here on the forum.
    The first thing people usually work on is the database. Then they usually add the GUI for the standalone (local) mode. Then they add support for remote mode.
    Third, you are correct that there is a problem with what I posted about not needing to worry about locking in the create method. My design is a little different from Liqun's in that he encapsulates code that accesses the raf in a block synchronized on the raf, while I decided to synchronize the methods in Data that access the raf. I didn't take this difference into consideration when I gave him suggestions about the create method. So good catch (see, you are ready to sign up for the SCJD).
    So, you do need to be concerned with synchronization in the create method. One way to do this:
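    A minimal sketch of that kind of synchronized create (the record layout, one flag byte where 1 means deleted followed by a fixed-length data field, and the field names are assumptions):

```java
import java.io.File;
import java.io.IOException;
import java.io.RandomAccessFile;
import java.util.Arrays;

// Sketch only: the whole scan-and-write is synchronized on raf, so no other
// thread can claim the slot, or move the file pointer, between the flag
// check and the write.
public class Data {
    private static final int FIELD_LEN = 8;
    private static final int REC_LEN = 1 + FIELD_LEN; // flag byte + data

    private final RandomAccessFile raf;

    public Data(File f) throws IOException {
        raf = new RandomAccessFile(f, "rw");
    }

    public long createRecord(byte[] data) throws IOException {
        synchronized (raf) {
            long offset = 0;
            while (offset < raf.length()) {     // scan for a deleted record
                raf.seek(offset);
                if (raf.readByte() == 1) break; // deleted slot found: reuse it
                offset += REC_LEN;
            }
            raf.seek(offset);                   // deleted slot, or end of file
            raf.writeByte(0);                   // mark the record valid
            raf.write(Arrays.copyOf(data, FIELD_LEN));
            return offset / REC_LEN;            // the record number
        }
    }
}
```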

    I think this takes care of the simultaneous overwrite problem. Any time the database file is accessed, the code is placed in a block synchronized on the raf.
     
    Don Wood
    Ranch Hand
    Posts: 65
    Satish and George,
    I understand what you are describing. It seems to me that the disadvantage is that it is effectively a database lock: all reading, updating, and deleting will have to wait for the create to complete before they can continue.
    Read, update, and delete are concurrent operations; that is, multiple threads can do these operations simultaneously as long as the records are different. It seems to me that create can be concurrent too. My thinking when I asked the question was that we could lock the deleted record.
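    The pseudocode itself did not survive; a reconstruction of the idea as described, including the retry loop for records added at the end of the file, might look like:

```
createRecord(data):
    loop:
        recNo = first deleted record, or (if none) the record number
                just past the current end of the file
        lock(recNo)                      // the normal record lock
        if recNo is still deleted (or still past the end of the file):
            write data at recNo          // brief sync on raf for the write only
            unlock(recNo)
            return recNo
        unlock(recNo)                    // another thread claimed it; retry
```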

    The advantage to this approach is that concurrent operations with other records can proceed.
    [ February 17, 2004: Message edited by: Don Wood ]
    Modified the pseudocode because I realized that adding a record at the end of the file had to be done in a loop. Another thread may have added one so it is necessary to check after locking and retry if required.
    [ February 18, 2004: Message edited by: Don Wood ]
     
    Satish Avadhanam
    Ranch Hand
    Posts: 697
    Hey George, am I right about what I said to Don in the last two posts regarding what you meant? Please clarify.
    Appreciate your help, thanks.
     
    Don Wood
    Ranch Hand
    Posts: 65
    I thought I should put in one more note before you tell me what crazy idea this is.
    PRO
    I am convinced the approach would work correctly and increase concurrency. It would take a new lock method that would perform different checks from the ones that are in the regular lock method.
    CON
    The increased concurrency would not be a major benefit since adding a new record is a relatively rare event compared to updating records.
    Finally, I would not do it this way on the exam because I would worry that the reviewer would not understand. The locking of deleted records is not intuitive, and locking a not-yet-existing record (adding to the end of the file) would probably put the reviewer over the edge.
    In short, I will probably do something along the lines of what you have described: synchronizing on the raf object.
    Sometimes it is better not to get too fancy.
    [ February 18, 2004: Message edited by: Don Wood ]
     
    liqun chang
    Ranch Hand
    Posts: 90
    Hi George, Philippe, and guys, this topic is very big, so I will open another thread: nx: All of URLy Bird 1.1.3 read/write lock and create() method.
    [ February 18, 2004: Message edited by: liqun chang ]
     
    Satish Avadhanam
    Ranch Hand
    Posts: 697

    Originally posted by liqun chang:
    Hi George, Philippe, and guys, this topic is very big, so I will open another thread: nx: All of URLy Bird 1.1.3 read/write lock and create() method.
    [ February 18, 2004: Message edited by: liqun chang ]


    Hey Liqun, I will try to answer Don's last two posts and then, if we still have to discuss, we will do it in the new thread.
    Thanks.
     
    Satish Avadhanam
    Ranch Hand
    Posts: 697
    Hi Don,

    Originally posted by Don Wood:
    Satish and George,
    I understand what you are describing. It seems to me that the disadvantage is that it is effectively a database lock: all reading, updating, and deleting will have to wait for the create to complete before they can continue.
    Yes, we have to effectively lock the database, i.e., get a lock on "raf", in order to perform the createRecord() operation. If we get a lock on "raf", all reading, updating, and deleting have to wait. That's true.

    Read, update, and delete are concurrent operations; that is, multiple threads can do these operations simultaneously as long as the records are different. It seems to me that create can be concurrent too. My thinking when I asked the question was that we could lock the deleted record.

    The advantage to this approach is that concurrent operations with other records can proceed.
    [ February 17, 2004: Message edited by: Don Wood ]
    Modified the pseudocode because I realized that adding a record at the end of the file had to be done in a loop. Another thread may have added one so it is necessary to check after locking and retry if required.
    [ February 18, 2004: Message edited by: Don Wood ]


    Don, I will try to explain. Reading, updating, and deleting operations are performed concurrently, as you said, because we use record-level locking there. Record-level locking works as follows: whenever someone needs to update/delete/read a record, we place the recNo in a lockedRecords Vector (ArrayList...) and then update/delete/read accordingly. After the operation we remove the recNo from lockedRecords. We also hold all the recNos, say in existingRecords (a Vector/ArrayList), which are pointers to all the "existing records" in the database. So, when someone tries to update/read/delete a deleted record, we check existingRecords, and if the recNo is not there, we throw a RecordNotFoundException. This is one approach.
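    A minimal sketch of that record-level lock manager (names are assumptions; a Set stands in for the Vector of locked recNos, and wait/notifyAll makes a thread block until the record it wants is free):

```java
import java.util.HashSet;
import java.util.Set;

// Sketch only: holds the recNos that are currently locked by some client.
public class LockManager {
    private final Set<Long> lockedRecords = new HashSet<>();

    // Blocks until recNo is free, then claims it for the caller.
    public synchronized void lock(long recNo) throws InterruptedException {
        while (lockedRecords.contains(recNo)) {
            wait();                      // another client holds this record
        }
        lockedRecords.add(recNo);
    }

    // Releases recNo and wakes any waiting threads so they can re-check.
    public synchronized void unlock(long recNo) {
        lockedRecords.remove(recNo);
        notifyAll();
    }
}
```

    Two clients locking different recNos never block each other, which is why read/update/delete stay concurrent under this scheme.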
    As far as I understood, when you say lock a deleted record, you mean to lock the recNo, right? If so, we need to hold all the deleted recNos in a different variable, say deletedRecords (Vector/ArrayList). So when you want to overwrite a deleted record, you can lock this recNo and return it. Is this what you meant?
    If this is what you are thinking, then I think it's a valid approach too. It also increases performance by not blocking the database whenever you create new records. I think I understand now what you are saying. Performance is increased quite a bit, and I don't think there is anything too complex in this approach either.
    Hey guys, I tried to interpret what Don is saying. I'm not sure if it's right; please correct me.
    Appreciate your help, thanks.
     
    George Marinkovich
    Ranch Hand
    Posts: 619
    Hi Satish and Don,
    The real issue is the prevention of corruption of the database file due to simultaneous access of the database file by two different threads. This can be done by two different mechanisms. One involves synchronization on the database file and the other involves record-level locking. Either one can satisfy the goal of preventing the corruption of the database file. Don is right to point out that record-level locking could be used to prevent database corruption in the create method. This is almost certainly a more efficient solution than using synchronization on the database file. It may also be a slightly more complicated solution. Like lots of decisions in software design it's a tradeoff. I think either solution can be justified given our assignment instructions. I think both can be well implemented. The choice is yours to make.
    [ February 18, 2004: Message edited by: George Marinkovich ]
     
    Satish Avadhanam
    Ranch Hand
    Posts: 697
    Great. Thanks George.
     
    Ranch Hand
    Posts: 286
    Hi,

    I understand what you are describing. It seems to me that the disadvantage is that it is effectively a database lock: all reading, updating, and deleting will have to wait for the create to complete before they can continue.
    Read, update, and delete are concurrent operations; that is, multiple threads can do these operations simultaneously as long as the records are different. It seems to me that create can be concurrent too. My thinking when I asked the question was that we could lock the deleted record.

    Hi,
    I may be adding a point which you have already seen, if so, I apologize.
    At some point, at some lower level, namely the physical file level, assuming your physical DB file changes as the application is used, things do not occur simultaneously. The shared object, in this case the DB file, must be accessed with a lock which "in effect locks down the complete database." You minimize this effect by synchronizing just enough to do the job safely. The reason for this is that you are dealing with a shared file pointer.
    So, my point, if correct, is that the file pointer being shared is a bottleneck, and the correct sharing of this file pointer cannot be ignored regardless of the design chosen, assuming that your DB file changes (i.e., you don't keep the file resident in memory).
    This also assumes that your file is a RandomAccessFile and that you don't use NIO techniques, which might change things somewhat (I have not studied NIO, as I think studying threads in more depth is more relevant).
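    A minimal sketch of "synchronizing just enough" around the shared file pointer (record length and names are assumptions): if two threads interleave their seek() and readFully() calls on the same RandomAccessFile, one thread's read uses the other thread's pointer position, so the pair must stay together.

```java
import java.io.File;
import java.io.IOException;
import java.io.RandomAccessFile;

// Sketch only: keeps each seek/read pair atomic on the shared raf.
public class ReadRecord {
    private static final int REC_LEN = 8;

    private final RandomAccessFile raf;

    public ReadRecord(File f) throws IOException {
        raf = new RandomAccessFile(f, "rw");
    }

    public byte[] read(long recNo) throws IOException {
        byte[] buf = new byte[REC_LEN];
        synchronized (raf) {             // nobody moves the pointer between
            raf.seek(recNo * REC_LEN);   // the seek and the read
            raf.readFully(buf);
        }
        return buf;
    }
}
```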
    Thanks,
    Javini Javono
    [ February 18, 2004: Message edited by: Javini Javono ]
     
    Don Wood
    Ranch Hand
    Posts: 65
    Javini,
    You are correct that the file level access has to be synchronized to prevent one thread from moving the file pointer while another thread is trying to read or write.
    The discussion we were having concerned whether or not to use this low level lock to synchronize the higher level function of create record. It is simpler to use the low level lock but it means that all reading, updating and deleting have to wait for create to finish. Since this is the only higher level function that holds the raf lock for more than a simple read or write, an alternate method was proposed.
    At the cost of a bit of complexity, the alternate method increases concurrent operation. This is a typical software trade off that we are asked to decide in our jobs and on this assignment.
    Max and others on this forum keep telling us not to jazz up the assignment. In my professional work, I would typically implement this kind of optimization. On this assignment, I might choose not to do it.
     
    Javini Javono
    Ranch Hand
    Posts: 286
    Hi,
    I'll bookmark this link and come back and study it later when I am fine tuning
    my system.
    In this project, certainly when things grow, and these things are multi-threaded, growing them in size poses problems. For instance, it is unsafe to grow a multi-threaded ArrayList unless you have a logical lock on it and no other threads are trying to use it; the DB file is similar, I suspect.
    An initial solution of having a "free" low-level lock on the entire database to create, say, 100 or 1,000 inactive records at a time, is how I will attempt it first. Since the DB is a shared object, when it grows, it pretty much needs to be totally locked down (at least for the first horizon of my design).
    If you can anticipate the average number of new records that the DB may need to create per day, you can create them all at once during low-usage periods (of course, I'm hardly going to code for this).
    The "cost" of adding 100 to 1,000 inactive records is then shared across all future needs to actually create active, new records.
    The create() method is then a little simpler: it just changes the flag to active and adds some valid data to one record.
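    A sketch of this batch-growth idea under assumed parameters (one flag byte per record where 1 means inactive, a fixed field length, a batch of 100): the file is extended by a whole batch of inactive records under the full lock, so create() usually only flips one flag.

```java
import java.io.File;
import java.io.IOException;
import java.io.RandomAccessFile;
import java.util.Arrays;

// Sketch only: the expensive "grow the database" step happens once per BATCH.
public class BatchGrowData {
    private static final int FIELD_LEN = 8;
    private static final int REC_LEN = 1 + FIELD_LEN; // flag byte + data
    private static final int BATCH = 100;

    private final RandomAccessFile raf;

    public BatchGrowData(File f) throws IOException {
        raf = new RandomAccessFile(f, "rw");
    }

    public long createRecord(byte[] data) throws IOException {
        synchronized (raf) {
            long records = raf.length() / REC_LEN;
            for (long recNo = 0; recNo < records; recNo++) {
                raf.seek(recNo * REC_LEN);
                if (raf.readByte() == 1) {       // inactive slot found
                    return activate(recNo, data);
                }
            }
            grow(records);                       // no free slot: add a batch
            return activate(records, data);
        }
    }

    // Appends BATCH inactive records; must be called with the raf lock held.
    private void grow(long firstNew) throws IOException {
        byte[] blank = new byte[REC_LEN];
        blank[0] = 1;                            // flag byte: 1 == inactive
        raf.seek(firstNew * REC_LEN);
        for (int i = 0; i < BATCH; i++) {
            raf.write(blank);
        }
    }

    // Flips one record to active and fills in its data.
    private long activate(long recNo, byte[] data) throws IOException {
        raf.seek(recNo * REC_LEN);
        raf.writeByte(0);                        // mark active
        raf.write(Arrays.copyOf(data, FIELD_LEN));
        return recNo;
    }
}
```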
    Thanks,
    Javini Javono
     