Vlad Djordan

Greenhorn
since Jan 15, 2010

Recent posts by Vlad Djordan

Hi Pieter,

I need to check from the Locking class whether a record exists in that data file, as well as whether the record is deleted or not. Personally I feel it should be checked within the FileAccess class and not within the Locking class - am I right in saying this?



Delegating the workload of your Data class to a FileAccess and Lock class is good design; however, I don't believe the Lock class should know anything about the FileAccess class, and vice versa.

In my implementation, much like yours, I used a form of FileAccess to load the data from the file, but all of my implementation, including locking/unlocking, is done within the Data class. I certainly could have taken the approach of using a LockManager, but decided against it.

Either solution is fine, but I think if your LockManager knows about the FileAccess and vice versa, that wouldn't be considered the best design. You should try to figure out a way to keep the locking/unlocking in the LockManager/Data and leave the FileAccess to do just what the name suggests: file access.
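Just to illustrate what I mean, here is a very rough sketch of a Data facade that coordinates the two collaborators without either one knowing about the other. The class and method names (FileAccess, LockManager, checkExists and so on) are made up for the example and assume supporting classes along the lines we've been discussing, including a RecordNotFoundException.

public class Data {

    private final FileAccess fileAccess = new FileAccess();    // file I/O only
    private final LockManager lockManager = new LockManager(); // locking only

    public String[] read(int recNo) throws RecordNotFoundException {
        // existence / deleted-flag checks live in the file layer
        return fileAccess.readRecord(recNo);
    }

    public long lock(int recNo) throws RecordNotFoundException {
        fileAccess.checkExists(recNo);   // Data asks the file layer first...
        return lockManager.lock(recNo);  // ...then the lock layer; they never call each other
    }

    public void unlock(int recNo, long cookie) throws SecurityException {
        lockManager.unlock(recNo, cookie);
    }
}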

Cheers,

Vlad
Hi Mikael,

Now I'm wondering... will this really be safe, since the LockManager class is not a singleton? I believe it should be, since the only reference to LockManager is inside Data, which is a singleton. The LockManager object is initialized only once, when Data is initialized.



I would go a step further myself and declare the LockManager with default access so that it is only visible from the package your Data class is in. Essentially, your design is still geared towards instantiating the LockManager once, and it just seems logical to me to make it a Singleton as well. I don't believe there is anything wrong with your approach, but you should test the locking mechanism thoroughly.

I only method-synchronized the create() and find() methods, while all the others were block-synchronized; it was all implemented in the Data class, and I managed to provide concurrent record access to my database structure.
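For what it's worth, the arrangement I'm describing looks roughly like this (a sketch only, with illustrative names): the LockManager has default access so nothing outside the package can touch it, and the singleton Data creates it exactly once.

class LockManager {            // default (package-private) access on purpose
    // record-level locking state and wait/notify logic would live here
}

public class Data {

    private static final Data INSTANCE = new Data();   // eager singleton

    // created exactly once, when the Data singleton is initialized
    private final LockManager lockManager = new LockManager();

    private Data() { }

    public static Data getInstance() {
        return INSTANCE;
    }
}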

Cheers,

Vlad
Hi Roel,

If you were referring to that specific situation, my remark was not needed (and completely useless). But because that was not clear to me (and maybe to some other ranchers in the near future), I made my remark.



Your remark certainly wasn't 'useless', and I will try to do a better job of being clear in the future, both when asking questions and providing feedback.

Cheers,

Vlad
Hi Roel,

In my opinion you will only be safe if data.lock(1) is called from the same client! Otherwise 2 different clients will have the same lockCookie value and might end up overwriting each other's updates. And with a lockCookie that is the whole problem: how do you determine that data.lock(1) is called from the same client application? If you could determine that, you would be able to handle the deadlock situation within your code.



First of all, I was referring to a situation where there is a clear mapping between clients (not cookies) and record numbers, and Bably suggests in one of his previous posts that such a structure exists. Therefore, I will explain this again, using your examples of IllegalStateException:

Scenario 1:
Client A tries to lock record 5.
Client A locks record 5.
Client A tries to lock record 5 again. ---> IllegalStateException is thrown

Scenario 2:
Client A tries to lock record 5.
Client A locks record 5.
Client A tries to lock record 6. ---> IllegalStateException is thrown

You are able to do this because you map client IDs to record numbers, or you have a structure which keeps track of which client has a lock on which record. Now then:

Scenario 1:
Client A tries to lock record 5.
Client A locks record 5.
Client A tries to lock record 5 again. ---> No need for the exception! Why? Because you already keep track of which client has a lock on record 5, and that is client A. If client B tried to lock record 5, you would not return at this point and would make client B wait until client A is finished with the record.

I was only trying to explain to Bably that an IllegalStateException can be thrown in this scenario, but that it can also be handled by another approach.
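To make the idea concrete, here is a minimal sketch of a lock method that tracks which client owns which record: the same client asking again simply returns, while a different client waits. The class and method names are mine, purely for illustration, and the existence/deleted checks are left out.

import java.util.HashMap;
import java.util.Map;

public class LockRegistry {

    // record number -> id of the client that currently holds the lock
    private final Map<Integer, Long> owners = new HashMap<Integer, Long>();

    public synchronized void lock(int recNo, long clientId) {
        // another client owns the record: wait until it is released
        while (owners.containsKey(recNo) && owners.get(recNo) != clientId) {
            try {
                wait();
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
                return;
            }
        }
        // either the record was free, or this client already held it (no exception)
        owners.put(recNo, clientId);
    }

    public synchronized void unlock(int recNo, long clientId) {
        if (owners.containsKey(recNo) && owners.get(recNo) == clientId) {
            owners.remove(recNo);
            notifyAll();   // wake up any clients waiting for a record
        }
    }
}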


Your server must be capable of handling multiple concurrent requests, and as part of this capability, must provide locking functionality as specified in the interface provided above.



Of course it does. I would not have been able to run successful tests if it didn't. There is no deadlock in my code regarding different clients accessing sensitive data, but that wasn't your question. Your question was:

Have you already thought about the scenario "a thread/client locks (or tries to lock) more than 1 record"?



In your code you handle this with IllegalStateExceptions, and while currently my code may deadlock (in this situation only), I have clearly documented why that is. I even mentioned that I may change this portion later and that I make lock/unlock available to the client, i.e. I'm implementing a thick client.

My only point to Bably was that since there is a structure that maps clients to record numbers, he can:
1. Throw IllegalStateException in both cases, as you do.
2. Throw IllegalStateException in only one case, and in the other 'allow' it.
3. Document only that this is a possibility for deadlock, but the only deadlock possibility, i.e. his locking ensures client A and client B aren't deadlocked for some other reason.


Cheers,

Vlad
Hi,

I just wanted to add one more thing.

Bably, if your instructions specify that you must handle the situation where the same client tries to lock the same record twice, or tries to lock another record while holding a lock on a different record, then you must handle this scenario. My instructions don't appear to state this as a 'must' requirement; therefore, as long as it's documented, I think it should be fine.

Furthermore, I think another approach to handling the same client trying to get a lock on a record it already has a lock on is to simply return the previously generated cookie value. Since we already know it has locked this record, I think it is safe to return that cookie value again instead of throwing an exception. I haven't looked through the forum, but I don't see why this would be an invalid approach. An exception could then be thrown only if the same client tries to lock a record while already holding a lock on another record.

For example: client A locks record 5 and then calls lock on record 5 again, or client A locks record 5 and then calls lock on record 6 while still holding the first lock.
I think in the above two scenarios Roel's code might throw an exception, possibly IllegalStateException, in each case, but I think both variations are fine. Also, if your instructions don't clearly say that you must handle the situation, allowing the deadlock but explaining it is OK as well.
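A small sketch of that 'return the existing cookie' variation. The names and the client-tracking structure are purely illustrative, and blocking other clients is left out to keep it short:

import java.util.HashMap;
import java.util.Map;

public class CookieReuseLocker {

    private final Map<Long, Integer> recordHeldByClient = new HashMap<Long, Integer>();
    private final Map<Long, Long> cookieForClient = new HashMap<Long, Long>();

    public synchronized long lock(long clientId, int recNo) {
        Integer held = recordHeldByClient.get(clientId);
        if (held != null && held == recNo) {
            // same client, same record: just hand back the cookie it already has
            return cookieForClient.get(clientId);
        }
        if (held != null) {
            // same client, different record: the one case where an exception
            // could still make sense
            throw new IllegalStateException(
                    "client already holds a lock on record " + held);
        }
        long cookie = System.nanoTime();
        recordHeldByClient.put(clientId, recNo);
        cookieForClient.put(clientId, cookie);
        return cookie;
    }

    public synchronized void unlock(long clientId) {
        recordHeldByClient.remove(clientId);
        cookieForClient.remove(clientId);
    }
}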

Cheers,

Vlad
Hi,

Have you already thought about the scenario "a thread/client locks (or tries to lock) more than 1 record"?



Personally, my locking currently deadlocks in this scenario. I know why it deadlocks and I know how to fix it; however, for the time being I've decided to document why I haven't 'fixed' it. My lock/unlock have been specified with a cookie value, and I have decided to make the lock/unlock methods available to the client, i.e. I'm implementing the thick client approach. (Many threads on this forum discuss whether a thin client or a thick client is more appropriate for the assignment, so I won't go into it here. Needless to say, I will document my decision.)

Now, why am I allowing deadlock?

Two reasons:

1. My lock method is defined with these comments:



The above requirement doesn't mention anything about what should happen if a lock already exists on a record and the lock is assigned to the same client, or whether this particular client holds a lock on any other records; it only mentions a different client.

2. The deadlock can occur in the following situation:



However, this can easily be remedied by ensuring the JavaDoc and the choices.txt clearly define that it is the responsibility of the client to ensure lock/update/unlock is one atomic operation, i.e. that unlock is called on a locked record prior to locking the same record again:
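Something along these lines, as a sketch only. The identifiers (data, recNo, newValues) are illustrative and stand for whatever exposes your lock/update/unlock to the client:

// the documented contract: lock, update, unlock as one atomic unit
long cookie = data.lock(recNo);
try {
    data.update(recNo, newValues, cookie);
} finally {
    data.unlock(recNo, cookie);   // without this, a second data.lock(recNo)
                                  // by the same client would block forever
}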


When I designed my approach I thought about this scenario, documented it, and believe that as long as this scenario is explained, no marks should be deducted. In fact, I've found others who have done the same thing here.

I may go back and revise this (currently working on the business/rmi/gui portion), but for now I will leave this as is.

Bably, I believe you should be OK with either approach, as long as you document your process clearly, in both the JavaDoc and the choices.txt.


Cheers,

Vlad
Hi,

And also how to add client-side cookie-tracking to DataClassTest (a significant use case which is required for the 1.1.1 interface but seems to be entirely unnecessary for 1.3.1).



The client-side cookie tracking is actually necessary for both; it is just that, given the different interfaces, developers will have to deal with this situation differently depending on the version of the assignment they received. I got version 1.1.1 myself as well, so you should have these methods defined:



I believe for version 1.3.1, the method declarations are as follows (someone else can confirm):
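Very roughly, and from memory rather than copied from the downloads (so check the interface file in your own instructions for the exact declarations and exception lists; RecordNotFoundException comes from the assignment), the difference between the two styles looks something like this:

interface CookieStyle {                 // roughly the 1.1.1 flavour: cookie in the signatures
    long lock(int recNo) throws RecordNotFoundException;
    void unlock(int recNo, long cookie)
            throws RecordNotFoundException, SecurityException;
    void update(int recNo, String[] data, long lockCookie)
            throws RecordNotFoundException, SecurityException;
}

interface PlainStyle {                  // roughly the 1.3.1 flavour: no cookie, so you track clients yourself
    void lock(int recNo) throws RecordNotFoundException;
    void unlock(int recNo) throws RecordNotFoundException;
    void update(int recNo, String[] data) throws RecordNotFoundException;
}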



What this means is that whoever received version 1.3.1 had to build a structure which supports maintaining client information, whereas developers who received version 1.1.1 have it somewhat easier, since the cookie value is part of the method signatures. Either way, you need a way of knowing who the clients are. With respect to Roberto's test class, instead of writing these lines:



You would write something similar to:
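The exact calls depend on your own interface; this is just the shape of it, with illustrative names:

long cookie = data.lock(recNo);           // keep the cookie lock() returns
try {
    data.update(recNo, record, cookie);   // pass it back on update/delete
} finally {
    data.unlock(recNo, cookie);           // and again when releasing the lock
}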



they will need to modify Roberto's classes significantly.



I didn't have to update Roberto's class significantly. I just changed maybe 10 lines, and really it wasn't so much changing as retrofitting certain method calls to match my interface, and it still took less than a minute. Think of this class as a helper test. You should still try to build some of your own unit tests, but Roberto's class is perfectly valid for all developers working on, I believe, any version of the SCJD assignment. He even mentions somewhere that you may have to modify where exactly unlock is called from, for example in the try{} or the finally{}, but his test case is very good and perfectly valid, and I certainly didn't have issues modifying it.

Cheers,

Vlad
Hi,

I understand that I need to generate a unique value by doing System.nanoTime()



You don't HAVE to use this approach, but for the purposes of the assignment it is good enough. Remember to justify the usage in your choices.txt.

I am not so clear about the lock cookie concept.



The lock cookie is used to uniquely identify which client currently holds the lock on a specific record, so that only the client holding the lock can modify/update it. This ensures that no other client can modify/update a record while another client holds a lock on it.

In short, if client X holds a lock on record 5, then client Y cannot modify/update/delete record 5, until client X is finished using it. The lock cookie enables you to identify the clients in this case.
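Here is a stripped-down sketch of that check. The class and method names are made up, and contention between clients is ignored to keep it short; the point is only that update succeeds when the caller presents the cookie that lock() handed out for that record.

import java.util.HashMap;
import java.util.Map;

public class CookieCheckExample {

    private final Map<Integer, Long> lockCookies = new HashMap<Integer, Long>();

    public synchronized long lock(int recNo) {
        long cookie = System.nanoTime();   // unique enough for the assignment
        lockCookies.put(recNo, cookie);
        return cookie;
    }

    public synchronized void update(int recNo, String[] data, long cookie) {
        Long owner = lockCookies.get(recNo);
        if (owner == null || owner != cookie) {
            // a client that never locked this record (or presents the wrong
            // cookie) is rejected
            throw new SecurityException("record " + recNo
                    + " is not locked with the given cookie");
        }
        // ... write the new field values here ...
    }
}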

What role will it have in standalone mode... then how do I get the clientId value for the standalone mode?



I don't believe you need to generate the client ID using System.nanoTime() in standalone mode; you can use the current thread ID instead. I think you would only generate the client ID in networked mode. There's a good discussion here.

Cheers,

Vlad
Hi,

I'm caching the file into an ArrayList, and to delete a record the first field is set from 1 to 0. However, at some point there will be a call to readRecord when the JTable needs to be populated, and if it comes across a record position that has been set to 0 it will throw a RecordNotFoundException. So to get around this problem I'll create a service class that will query Data for a list of active records, and the JTable will traverse this list, getting an active record each time from readRecord.



You can read about some drawbacks of using a List instead of a Map to cache records here. I'm not sure exactly what you mean by setting the first field from 1 to 0. What happens if you delete 10 records? How is this handled by your list? My map contains record numbers as keys and Room objects as values, which are built using RoomColumn data. Then when I 'delete', I set the value of a key-value pair to null, and later I can reuse that space when creating new records.
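In code, the cache I'm describing looks roughly like this (a sketch with String[] values instead of Room objects, so it stands on its own; the real thing obviously does more):

import java.util.HashMap;
import java.util.Map;

public class RecordCache {

    private final Map<Integer, String[]> records = new HashMap<Integer, String[]>();

    public synchronized void delete(int recNo) {
        records.put(recNo, null);            // keep the slot, drop the data
    }

    public synchronized boolean isDeleted(int recNo) {
        return records.containsKey(recNo) && records.get(recNo) == null;
    }

    public synchronized int create(String[] data) {
        for (Map.Entry<Integer, String[]> entry : records.entrySet()) {
            if (entry.getValue() == null) {  // reuse a deleted record's number
                entry.setValue(data);
                return entry.getKey();
            }
        }
        int recNo = records.size();          // keys stay contiguous, so size() is the next number
        records.put(recNo, data);
        return recNo;
    }
}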

If it fails to create a new record then maybe it could return -1 to denote a failed attempt.



I would strongly advise against doing this. In fact, I don't believe it is necessary given the scope of the assignment; I don't handle this situation. The only exceptional circumstance that should occur (if your assignment is similar to mine, where the create method is declared with DuplicateKeyException) is throwing the DuplicateKeyException. By the way, I don't handle this scenario either, and I will explain why in my choices.txt.

Should all methods in Data be synchronized, especially if Data is to be a Singleton?



Many have taken this approach, and it is perfectly valid; if done right, you will not get penalized and can receive maximum marks on the locking section. Personally, I had a different approach in mind, and as soon as I finished the code in my Data class, the locking worked. I method-synchronize only the create and find methods, while my lock, unlock, read, update and delete are block-synchronized.

There are some very good tests available for your locking mechanism, once you are finished, here and here.

Cheers,

Vlad
Hi,

And I guess you have misunderstood Vlad a bit, because he is saying exactly the same thing as you



I certainly am. Harry, I believe your approach is correct. My response was originally just an addendum to the answer provided to Andriy: that you can't ignore records marked with the deleted flag, why you can't ignore them, and that every record marked as deleted has to find its way back to the file one way or another.

Vlad
Hi Roel,

I think I may have misunderstood your original post.

My create and search methods are method-synchronized, while my lock, unlock, update, delete and read are block-synchronized.

Of course when I update I have to first lock the record, update, then unlock; the same goes for delete. I guess I went for performance by synchronizing just blocks of code. My answer is yes and yes to both 1 and 2. The only two points where I didn't follow the provided interface are throwing a DuplicateKeyException in create and a RecordNotFoundException in unlock, and I'll explain why in the choices.txt, but the rest of the interface is implemented as is.

I've run a test available in the SCJD FAQ and all my runs completed successfully, including incrementing the counter to the limit of an int. I feel pretty confident about the design.

Thanks for checking up on me,

Vlad
Hi,

I can't use a Room-object in my Data class (so just worked with String[])



I did the exact same thing.

I chose the singleton pattern applied to the Data class (with a record cache) and marked every method synchronized.



Interesting. I suppose you then synchronized on the actual Data class when providing concurrent access, i.e. locking on the entire database?



I made my Data class a Singleton as well; however, I lock on record numbers. I think even the Sun interface defines lock as: "Locks a record so that it can only be updated or deleted by this client."

Thus I've only synchronized my search and create methods, while the others lock on the record number. Both approaches work, and the tests I've run, including some found here, all execute properly.
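To show the difference in a few lines (names are illustrative, and the cookie/ownership bookkeeping is trimmed down): synchronizing every method serializes access to the whole database, whereas keying the locks on the record number only blocks clients that want the same record.

import java.util.HashMap;
import java.util.Map;

public class RecordLocker {

    private final Map<Integer, Long> lockedRecords = new HashMap<Integer, Long>();

    public long lock(int recNo) {
        synchronized (lockedRecords) {
            while (lockedRecords.containsKey(recNo)) {   // only this record is contended
                try {
                    lockedRecords.wait();
                } catch (InterruptedException e) {
                    Thread.currentThread().interrupt();
                    throw new IllegalStateException(
                            "interrupted while waiting for record " + recNo);
                }
            }
            long cookie = System.nanoTime();
            lockedRecords.put(recNo, cookie);
            return cookie;
        }
    }

    public void unlock(int recNo, long cookie) {
        synchronized (lockedRecords) {
            Long owner = lockedRecords.get(recNo);
            if (owner != null && owner == cookie) {
                lockedRecords.remove(recNo);
                lockedRecords.notifyAll();               // wake up waiting clients
            }
        }
    }
}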

I used Room objects in my RoomBusinessService and in the GUI.



I plan to do the same, but have just finished the back end.


Vlad

Hi,

I would have to agree with Roel,

But you can't skip them in numeration



A deleted record in the database file provided by Sun is not the same as a deleted record in an actual RDBMS.

What this means is that if you were to execute the delete method on your database file, you shouldn't completely remove the record information and the record number. Either the space which the previous record used to occupy will be available for reuse and will be reused when creating a new record, or you can opt not to reuse the record space.

Either way, you have to write that deleted record (or possibly a new record in its place) back to the file.
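In practice that just means seeking to the record's fixed position and rewriting its flag, something like this. The header length, record length and flag value below are placeholders, not the real schema values:

import java.io.IOException;
import java.io.RandomAccessFile;

public class DeleteFlagWriter {

    private static final int HEADER_LENGTH = 70;     // placeholder
    private static final int RECORD_LENGTH = 160;    // placeholder
    private static final byte DELETED = (byte) 0xFF; // placeholder flag value

    public void markDeleted(String dbPath, int recNo) throws IOException {
        try (RandomAccessFile file = new RandomAccessFile(dbPath, "rw")) {
            long offset = HEADER_LENGTH + (long) recNo * RECORD_LENGTH;
            file.seek(offset);        // jump to the start of that record
            file.writeByte(DELETED);  // flip the deleted flag; the data bytes stay put
        }
    }
}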

Vlad
Hi,

1. The record's location in the db file. (I chose this to implement the DBAccess. Am I wrong?)



I don't think you are wrong, because I chose the same approach. Essentially, every time I come across a record in the database file, I assign it a record number in my cache, and I store the record information.
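As a sketch, 'assign it a record number' is nothing more than counting records in file order while loading the cache. The header size and record length below are placeholders, not the real schema values:

import java.io.DataInputStream;
import java.io.FileInputStream;
import java.io.IOException;
import java.util.HashMap;
import java.util.Map;

public class RecordLoader {

    private static final int HEADER_LENGTH = 70;    // placeholder
    private static final int RECORD_LENGTH = 160;   // placeholder

    public Map<Integer, byte[]> load(String dbPath) throws IOException {
        Map<Integer, byte[]> cache = new HashMap<Integer, byte[]>();
        try (DataInputStream in = new DataInputStream(new FileInputStream(dbPath))) {
            in.skipBytes(HEADER_LENGTH);             // skip the file header
            byte[] record = new byte[RECORD_LENGTH];
            int recNo = 0;
            while (in.available() >= RECORD_LENGTH) {
                in.readFully(record);
                cache.put(recNo++, record.clone());  // record number = position in the file
            }
        }
        return cache;
    }
}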

2. The order of the record in the db file.



I'm not quite sure what you mean by this. If you are talking about sequential order, isn't this the same as option 1?

3. Others?



I'm not aware of any others, but if some developers found different ways of handling records, I'd like to see their approach. I think almost everyone will use approach 1.


Vlad
Hi,

While I personally haven't read it, the book mentioned by Allan is very good (according to many posters on this forum).

Just remember that there is errata for that book, and keep in mind that if you were to take the code snippets as they are, you may not pass the SCJD. There's a good thread on this: Copying Monkhouse.

Also, some other certification books of use can be found here: Certification Books.

As for the GUI, if I were you, I'd first focus on the back end, i.e. database reading/writing, locking and so on. Having a well-thought-out implementation, a well-tested locking mechanism and solid database access routines makes your life easier, imho.

Vlad