
Synchronization of public methods in DVDDatabase

 
Ranch Hand
Posts: 266
Hi Andrew,
thank you, oh you great guy, I will buy some Cascades and drink them in your, Bharat's and John's honour


Just to give you something to think about: if you read an entire record as one big byte array, then you could check whether the record was deleted and convert the byte array into individual Strings outside of the synchronized block.


Great idea. That's what I will do
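A rough sketch of the quoted suggestion - read the whole record inside the synchronized block, then do the deleted-flag check and String conversion outside it. The field widths and the deleted-flag convention here are invented for illustration, not the assignment's actual schema:

```java
import java.io.IOException;
import java.io.RandomAccessFile;
import java.nio.charset.StandardCharsets;

class RecordReader {
    // Invented schema: 4 fields plus a one-byte deleted flag at offset 0.
    static final int[] FIELD_LENGTHS = {32, 64, 6, 8};
    static final int RECORD_LENGTH = 1 + 32 + 64 + 6 + 8;

    private final RandomAccessFile raf;

    RecordReader(RandomAccessFile raf) {
        this.raf = raf;
    }

    String[] read(int recNo) throws IOException {
        byte[] record = new byte[RECORD_LENGTH];
        // Only the seek + read pair needs the lock on the file pointer.
        synchronized (raf) {
            raf.seek((long) recNo * RECORD_LENGTH);
            raf.readFully(record);
        }
        // Deleted-flag check and String conversion happen outside the lock.
        if (record[0] == 1) {
            throw new IOException("record " + recNo + " is deleted");
        }
        return split(record);
    }

    // Converts one raw record into trimmed field Strings.
    static String[] split(byte[] record) {
        String[] fields = new String[FIELD_LENGTHS.length];
        int offset = 1; // skip the deleted flag
        for (int i = 0; i < FIELD_LENGTHS.length; i++) {
            fields[i] = new String(record, offset, FIELD_LENGTHS[i],
                    StandardCharsets.US_ASCII).trim();
            offset += FIELD_LENGTHS[i];
        }
        return fields;
    }
}
```

This keeps the synchronized block down to the two calls that actually touch the shared file pointer.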


read() is not guaranteed to read the requested number of bytes. Take a look at readFully().


readFully() throws an EOFException. How should I handle it?
You may remember that in another thread we were talking about the validation function:

and our following discussion:


Ulrich:
1. Concerning the expression "(allRecLen % getRecordLength()) != 0": I suppose you Gurus will criticize it because it isn't very elegant, or?
Andrew:
I think that this is perfect. It is simple to read, and does exactly what you want.
quote:
Ulrich:
2. "throw new IOException("the db-file might be corrupted")"
When the length of the part of the file which is reserved for storing the records can't be evenly divided by the record length, I have to suppose that the db-file is corrupted, and in that case IOException is the appropriate exception.
Andrew:
Yep, the db-file would have to be corrupt in that case


Following the logic of this discussion the EOFException should not occur, but of course we still have to throw or catch it.
So, following the logic above, it would make sense to catch it and rethrow it as an IOException. What do you think about it?
Regards
Ulrich
 
Ranch Hand
Posts: 493
Hello Andrew,
You wrote:


The Data.find() method will do "starts with" matches, whereas the GUI must only display "exact" matches.
So you already have to have a method (in the GUI (IMHO)) that will verify whether the data returned by read() is an exact match or not. This routine can automatically discard any records that have been modified.


I compare the criteria strings using "starts with" in the Data.find() method, but don't do anything at the client end with the returned records. I am not sure what you mean when you say that the GUI must only display the exact matches?
"This routine can automatically discard any records that have been modified." - I am afraid I don't understand this either.
Help!
Bharat
 
Greenhorn
Posts: 29
Hi Bharat,


John, I think that you are doing what I mention above. But I wanted to clarify just to make sure that we all are on the same page.


Yes, your explanation in this post was very good! The methodology you described is the one I am using, i.e. for record locking I am synchronizing on the WeakHashMap in the lock(), unlock() and isLocked() methods; to deal with the file pointer being moved around I am synchronizing on the randomAccessFile in the other relevant methods.
I need a little help regarding my unlock method (see below). Essentially all I do is check whether the value recNumber and the key instanceRef are in lockedRecords and, if they are, I remove instanceRef from the static WeakHashMap. Here I am assuming that if recNumber is in lockedRecords then instanceRef is the correct key to remove. I presume I should be doing more checks/lookups here, e.g. should I be coding something like "lockedRecords.remove(the key that contains the record number)" rather than lockedRecords.remove(instanceRef)? I am not overly familiar with Maps but will do some reading up on them soon.

Regards,
John
 
John Canavan
Greenhorn
Posts: 29
Hi all,
Looks like this thread will never end!!!


Personally I think you could go one of two ways here: either have the validation inside the synchronized block which will limit the race condition, or read the entire record as one long buffer within the synchronized block and then run your validation and conversion routines on that read buffer.


Yes, similar to what Ulrich said, I think I will go this way as well. Anyway, it should be less expensive to write/read a record to/from the disk in one go rather than doing it in 6 goes.
Also, to combat the race condition you mentioned above, Andrew, I will change my code accordingly. However, there may be an issue here with the lock() and unlock() functions. Let me explain: I will remove any synchronization from the inValidRecord() method, as I am now going to read the record in one go into an array of bytes and check the first byte in the array outside of the synchronized block - all okay so far. However, a similar type of race condition now exists in the lock() and unlock() functions. Here I synchronize on the randomAccessFile at the start of the function and call inValidRecord(); if this returns true I throw the exception, but if it returns false I continue on through the method.
What if Thread A is updating record 6 and is in the lock() method? Record 6 is valid, so it continues on; however, before it synchronizes on the static WeakHashMap, the record is deleted by Thread B (thus making record 6 invalid). Thread A then gets the lock for record 6 and continues on to update the (now invalid) record 6. I could synchronize all the code in these functions (lock() and unlock()) on the randomAccessFile, but this would now be a nested lock. How are you guys (Bharat and Ulrich) dealing with this - I think you use a similar mechanism also? Maybe it is okay to let this scenario happen? (I haven't even mentioned the trouble the unlock() method may cause, as this record is deleted, so a RecordNotFoundException would be thrown there!)


The Data.find() method will do "starts with" matches, whereas the GUI must only display "exact" matches.
So you already have to have a method (in the GUI (IMHO)) that will verify whether the data returned by read() is an exact match or not. This routine can automatically discard any records that have been modified.


I’m not sure I understand you either Andrew.
The find() method returns an array of ints, with each int representing a record number that matches the search criteria. In my case, for example, I had begun to set up the GUI side of things similar to the way Max has in his book, so in my GUIController.java, as part of the find function, I have the following code (I probably shouldn't be calling the read() method at this high level, but I had only just started the GUI side of things and will change things around; the principle is still the same):

So, between getting the results back (i.e. the call to find()) and before read() is called, all or some of these records may be changed (I documented an example in a post I made yesterday) - so this is obviously a problem, hence Andrew's reply


Nope - not acceptable.


Andrew, are you suggesting that we process the resulting records again to double-check that they match the criteria, or have you (or anyone else) another way of doing this?


But this brings up another issue: when you go to book the record, if the details have changed, what do you do? What can you do if you are running fat client? What can you do if you are running thin client? Do you go back to my comment above about just documenting this as an issue?


I don't really know what you mean by "fat client" and "thin client", Andrew, but I guess when the user tries to book a record you may have to compare what is displayed on the screen with what is physically saved in the database. If anything does not match, inform the user that the record has changed and get him/her to refresh the screen (i.e. perform the search again), and do not allow the record to be booked. This is similar to the scenario where a user tries to book a record that has already been booked by another client after the results were printed to the screen. Again, what do you guys think?
Thanks,
John
 
Bharat Ruparel
Ranch Hand
Posts: 493
Hello John,
I think I can help you with your unlock method. In the following if block within your code:

You are trying to see if the record is there in the WeakHashMap to be unlocked, right? Instead, what I have is the following block:

Same thing really, but uses the isLocked method.
You may want to move it up before your assignment statements for recNumber and instanceRef, so that it comes right after you validate records.
Your code for removing the record from the WeakHashMap looks OK to me except I have added the following check just to be on the safe side:

I am not sure if we do need this check, but I am a paranoid soul.
I will post soon on my reworked RAF locking strategy.
Regards.
Bharat
 
Bharat Ruparel
Ranch Hand
Posts: 493
Hi Andrew,
I am following your advice on reading/writing the entire record within a synchronized block to make the RandomAccessFile (RAF) pointer thread-safe. To read, I read the record size plus one byte for the deleted flag into a byte array using the readFully method of the RAF.
I am struggling with an efficient way of writing to the RAF. The thing is, both the update and create methods take the record data as an array of Strings ("String[] data") as a parameter. This means that I have to:
add all the individual field Strings into one big record String, plus a deleted-flag byte before it. Then I have to convert it into a byte array, which can be written back to the RAF with the following statement:
String tempStr = new String(tempArray,"US-ASCII");
RAF.writeBytes(tempStr);
where the tempArray is the byteArray. Is there an easier and more elegant way of doing the same thing?
I am particularly annoyed with my code for converting the big string to a byte array. Currently, it is as follows:

Where p_size is the size of the String.
Please help!
Bharat
[ October 06, 2003: Message edited by: Bharat Ruparel ]
 
author and jackaroo
Posts: 12199
Hi everyone,

Ulrich readFully() throws an EOFException. How should I handle it? .... So following the logic [that EOFException should not occur] it would make sense to catch it and to throw it as IOException. What do you think about it?


Sounds well reasoned to me.
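The catch-and-rethrow Ulrich describes could look roughly like this (a sketch, not the actual assignment code; method and message names are placeholders):

```java
import java.io.EOFException;
import java.io.IOException;
import java.io.RandomAccessFile;

class SafeReader {
    static byte[] readRecord(RandomAccessFile raf, long offset, int length)
            throws IOException {
        byte[] buffer = new byte[length];
        try {
            raf.seek(offset);
            raf.readFully(buffer);
        } catch (EOFException eof) {
            // Per the validation argument above, a short read can only mean
            // the file is shorter than the layout promises, i.e. corrupt.
            throw new IOException("the db-file might be corrupted", eof);
        }
        return buffer;
    }
}
```

Callers then only ever see the IOException the interface already declares.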

Bharat I compare the criteria strings using "starts with" in the Data.find() method, but don't do anything at the client end with the returned records. I am not sure what you mean when you say that the GUI must only display the exact matches?
"This routine can automatically discard any records that have been modified." - I am afraid I don't understand again.


Check your instructions. Do you have a comment (possibly in the "User Interface" section) stating "It must allow the user to search the data for all records, or for records where the name and/or location fields exactly match values specified by the user"?
That "exactly match" criteria gives you a basic discrepancy between the requirements of the server and the requirements of the client interface. On the client side you can only show "exact matches" but the server can provide "starts with matches".
Therefore you have to have code on the client side that will reduce the results from the server to just exact matches. And if one of the criteria fields has been modified between the time of the search and the time of the retrieval of the record, then it will automatically be discarded as it will not be an exact match.
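A minimal client-side filter along these lines (the record layout, the criteria shape, and the null-means-any convention are assumptions for illustration) might look like:

```java
import java.util.ArrayList;
import java.util.List;

class ExactMatchFilter {
    // Keeps only records whose fields exactly equal the non-null criteria.
    static List<String[]> filter(List<String[]> records, String[] criteria) {
        List<String[]> exact = new ArrayList<>();
        for (String[] record : records) {
            if (matchesExactly(record, criteria)) {
                exact.add(record);
            }
        }
        return exact;
    }

    static boolean matchesExactly(String[] record, String[] criteria) {
        for (int i = 0; i < criteria.length; i++) {
            // equals(), not startsWith(): this is the stricter client-side
            // test; a null criterion matches any value.
            if (criteria[i] != null && !criteria[i].equals(record[i])) {
                return false;
            }
        }
        return true;
    }
}
```

A record whose name field was changed between find() and read() simply fails equals() and drops out, which is the automatic discarding described above.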

John I need a little help regarding my unlock method (see below). ... I am not overly familiar with Maps but will do some reading up on them soon.


Your unlock code is faulty at present - it will allow a client to unlock any record as long as the client has one record locked. I won't go into it any further at present - better to wait till you have read up on Maps.

John a similar type of race condition now exists in the lock() and unlock() functions.


Sounds like you may have to check the validity of the record both before and after you have gained the lock.
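Checking validity both before and after acquiring the lock could look roughly like this. The map-based lock, the owner tokens, and the validRecord() bookkeeping are all placeholders for the posters' own code (a real implementation would read the deleted flag from the file):

```java
import java.util.HashMap;
import java.util.Map;

class LockManager {
    private final Map<Integer, Object> lockedRecords = new HashMap<>();
    // Stand-in for the db file's deleted flags; not itself thread-safe.
    private final Map<Integer, Boolean> deleted = new HashMap<>();

    void lock(int recNo, Object owner) throws InterruptedException {
        if (!validRecord(recNo)) {          // cheap early check
            throw new IllegalArgumentException("record " + recNo + " not found");
        }
        synchronized (lockedRecords) {
            while (lockedRecords.containsKey(recNo)) {
                lockedRecords.wait();
            }
            lockedRecords.put(recNo, owner);
        }
        // Re-validate AFTER the lock is held: another client may have
        // deleted the record while this thread was waiting.
        if (!validRecord(recNo)) {
            unlock(recNo, owner);
            throw new IllegalArgumentException("record " + recNo + " deleted while waiting");
        }
    }

    void unlock(int recNo, Object owner) {
        synchronized (lockedRecords) {
            // Only the client that owns the lock may release it.
            if (owner.equals(lockedRecords.get(recNo))) {
                lockedRecords.remove(recNo);
                lockedRecords.notifyAll();
            }
        }
    }

    boolean validRecord(int recNo) {
        return deleted.containsKey(recNo) && !deleted.get(recNo);
    }

    void createRecord(int recNo) { deleted.put(recNo, false); }
    void deleteRecord(int recNo) { deleted.put(recNo, true); }
}
```

The second check closes the window John describes without nesting the file lock inside the map lock.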

John The find() method returns an array of int with each int representing a record number that matches the search criteria.


But is this a record that exactly matches the data the user entered, or a record that starts with the data the user entered?

John Andrew, are you suggesting that we process the resulting records again to double check that they match the criteria or have you (or anyone) another way of suggesting we do this?


I think you have to validate the records a second time - see my comment to Bharat above.

John I don't really know what you mean by "fat client" and "thin client"


In another thread we were discussing "two tier" vs "three tier" (having clients call the lock method vs having clients call a booking method on the server that does locking internally). If the clients are calling all the methods of the database directly then they are "fat" clients (there is more code on the client side). If they are calling a booking method, then they are "thin" clients - all the code is on the server side.

Bharat I am struggling with an efficient way of writing to the RAF. .... I have to: Add all individual field Strings into a big record String plus add a deleted flag byte before this big string.


There are lots of ways of doing this. Have you looked at what System.arraycopy() can do for you?
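One hypothetical way to use System.arraycopy() here - pad each field to a fixed width and copy it straight into one byte array, skipping the big-String detour entirely. The field widths and flag convention are invented for illustration:

```java
import java.nio.charset.StandardCharsets;
import java.util.Arrays;

class RecordWriter {
    // Builds one fixed-width record: deleted flag, then space-padded fields.
    static byte[] toRecord(String[] data, int[] fieldLengths, byte deletedFlag) {
        int recordLength = 1;
        for (int len : fieldLengths) {
            recordLength += len;
        }
        byte[] record = new byte[recordLength];
        Arrays.fill(record, (byte) ' ');      // pad everything with spaces
        record[0] = deletedFlag;
        int offset = 1;
        for (int i = 0; i < data.length; i++) {
            byte[] field = data[i].getBytes(StandardCharsets.US_ASCII);
            // copy at most fieldLengths[i] bytes, truncating long values
            System.arraycopy(field, 0, record, offset,
                    Math.min(field.length, fieldLengths[i]));
            offset += fieldLengths[i];
        }
        return record;
    }
}
```

The resulting array can go straight to raf.write(record), replacing the tempStr/writeBytes round trip.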
Regards, Andrew
 
John Canavan
Greenhorn
Posts: 29
Hi Andrew, Bharat and Ulrich,
Thanks for all your help.
Regards,
John
 
Ulrich Heeger
Ranch Hand
Posts: 266
Hi you three,
thank you also very much,
Ulrich
 
Bharat Ruparel
Ranch Hand
Posts: 493
Andrew,
You wrote:


Check your instructions. Do you have a comment (possibly in the "User Interface" section) stating "It must allow the user to search the data for all records, or for records where the name and/or location fields exactly match values specified by the user"?
That "exactly match" criteria gives you a basic discrepancy between the requirements of the server and the requirements of the client interface. On the client side you can only show "exact matches" but the server can provide "starts with matches".
Therefore you have to have code on the client side that will reduce the results from the server to just exact matches. And if one of the criteria fields has been modified between the time of the search and the time of the retrieval of the record, then it will automatically be discarded as it will not be an exact match.


Thanks. I understand now and have coded accordingly on the server and the client side.
Regards.
Bharat
 
Ulrich Heeger
Ranch Hand
Posts: 266
Hi you guys,
just one last minuscule question.
I'm posting this question here because John and Bharat, you have the same approach.
I don't know if I should have a synchronized block within the update-method:

I wonder whether I should synchronize this block. I suggest no, because even if another client puts lrecNo into the HashMap just after we have checked it, the SecurityException will be thrown anyway:
What do you think about it?
Regards
Ulrich
 
Bartender
Posts: 1872
Hi Ulrich,
I would synchronize it, even for a simple call to containsKey() as you show, because we are in the cases covered by the following excerpt from the HashMap doc:

Note that this implementation is not synchronized. If multiple threads access this map concurrently, and at least one of the threads modifies the map structurally, it must be synchronized externally. (A structural modification is any operation that adds or deletes one or more mappings; merely changing the value associated with a key that an instance already contains is not a structural modification.)


You are probably OK, but I would synchronize the whole block for safety purposes.
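A sketch of what Phil suggests - even the read-only containsKey()/get() check goes inside the same monitor that lock() and unlock() use; all names here are placeholders:

```java
import java.util.HashMap;
import java.util.Map;

class UpdateGuard {
    private static final Map<Integer, Object> lockedRecords = new HashMap<>();

    static void lock(int recNo, Object owner) {
        synchronized (lockedRecords) {
            lockedRecords.put(recNo, owner);
        }
    }

    // Called at the top of update(): verifies this client holds the lock.
    static void checkOwnership(int recNo, Object owner) {
        // Same monitor as lock()/unlock(), so the check can never run while
        // another thread is structurally modifying the map.
        synchronized (lockedRecords) {
            Object current = lockedRecords.get(recNo);
            if (current == null || !current.equals(owner)) {
                throw new SecurityException("record " + recNo
                        + " is not locked by this client");
            }
        }
    }
}
```

The synchronized block costs almost nothing here and removes any dependence on HashMap's behaviour under concurrent structural modification.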
Best,
Phil.
 
Ulrich Heeger
Ranch Hand
Posts: 266
Hi Phil,
thank you for your help, so I will synchronize it
Greetings to Brussels
Ulrich
 