Don Wood

Ranch Hand
since Dec 05, 2003

Recent posts by Don Wood


Philippe said:
I use "rwd".


It seems to me that "rws" is a better choice because it also updates the file's metadata. Since the length of the file changes when a record is added to the end, it seems like a good idea to keep the metadata as current as the data.
This seems even more important if you choose to close the RandomAccessFile in a finalize method. If you use "rwd" and close the raf in finalize, I think you run the risk of losing records appended to the end of the file if the finalize method never gets called.
As an aside, I think closing the raf in a finalize method is not a good idea, but I have seen a number of people say they are doing this. For those who are closing in the finalize method, though, it seems to me that "rws" is a must.
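For reference, the mode is picked when the RandomAccessFile is opened; a minimal sketch (the file name is made up, not from the assignment):

```java
import java.io.File;
import java.io.IOException;
import java.io.RandomAccessFile;

public class RwsDemo {
    public static void main(String[] args) throws IOException {
        // "rws" flushes both file content and metadata (e.g. the file
        // length) to the device on every write; "rwd" flushes content only.
        RandomAccessFile raf = new RandomAccessFile("db.bin", "rws");
        try {
            raf.seek(raf.length());          // append a record at the end
            raf.writeBytes("new record");    // the length change is metadata
        } finally {
            raf.close();                     // close explicitly, not in finalize()
        }
        System.out.println("file is now " + new File("db.bin").length() + " bytes");
    }
}
```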
[ April 17, 2004: Message edited by: Don Wood ]
Hi Maria,
Please remove one of these methods from this post since we should not be posting this much code. The locking portion of the evaluation is worth too many points to be putting it all out for everyone to see.
I suspect that the setLockCookie and getLockCookie methods are the problem. The cookie is already in lockedRecords. Why is it also being put somewhere else? How does getLockCookie know which value to return when it takes no parameters? My suggestions are:
  • In the lock method, remove the line
  • In the unlock method, replace if(cookie == getLockCookie()){ with something like this:
  • Remove the getLockCookie() and setLockCookie() methods from your code.


  • In the future, please use the UBB buttons to place your code in code blocks to make it more readable.
    Also, I think you should take a look at the Sun coding conventions. I know that some people disagree with some of the coding convention choices that Sun has made, but for the purposes of this exam it is probably a good idea to follow them closely. The General Considerations portion of the score will be affected by your coding style.
    [ April 04, 2004: Message edited by: Don Wood ]
    Hi Eric,
    I've been thinking about your question for a while:

    I did not find server shutdown as a requirement; perhaps I am missing something here?


    I agree that it is not listed as a requirement. But it seems to me that there is a locking issue that should not be ignored.
    We lock records to ensure that the data remains consistent. Of course, you know this. But what if someone shuts down the server while one or more threads are in the middle of updating/creating records? If the server is not safely shut down (that is, it terminates while locks are held), isn't there a risk that the data becomes inconsistent because records are partially written?
    It seems to me, that safe server shutdown is needed to preserve the ability of the locking mechanism to keep data consistent.
    I wonder if not implementing a safe server shutdown contributes to some of the 44/80 locking scores we have been seeing.
    In short, I view safe server shutdown as a required piece of locking functionality even though it is not explicitly stated as such in the instructions.


    If this is indeed the problem, you need to use a different file name, since you should not have a class with the same name as one in the SDK. Whatever the file name is, the name of the public class or interface must match it exactly.
    Really, I just laughed out loud for some time when reading this. Sorry about that. Yeah, I know that we should not use any class names that match the SDK. I know you just want to make sure that I am sticking to this rule. Thanks for the concern.


    So you laughed about my comment not to use SDK-supplied names, and that turned out to be the problem? Hmmmmm.
    Glad you got the humor.
    Now, the name of the file is someThing.java because that is the name of the public class. Is that the file name you are using?
    I put your four lines in a file by this name and it compiles fine. If your file name is correct then I'm stumped too.
    Hi Satish,
    There's not enough information in the post for me to be sure, but judging from the error message, bad class file: .\WeakHashMap.java, it looks to me like the name of the file is WeakHashMap.java but the name of the public class is different (someThing).
    If this is indeed the problem, you need to use a different file name, since you should not have a class with the same name as one in the SDK. Whatever the file name is, the name of the public class or interface must match it exactly.
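    As a tiny illustration of the rule (the class name here is invented, not from the assignment or the SDK):

```java
// This source must live in a file named AppWindow.java, because the file
// name has to match the name of the public class exactly. AppWindow is a
// made-up example name that does not clash with anything in the SDK.
public class AppWindow {
    public static void main(String[] args) {
        System.out.println("compiled and ran");
    }
}
```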
    Javini,
    Your point is well taken. Perhaps it wasn't said very well above but the idea is to let any clients that have locks complete their requests. So a client that held a record lock would continue with that request and subsequent requests until it unlocked the record. No new locks would be accepted.
    I haven't thought far enough ahead to decide whether or not I will allow a client that does not have a lock to complete requests that do not require one (such as a read). I think I would like to allow only clients with locks to continue but I'll have to decide that when I get to the implementation.

    I thought if NO new clients are coming in, then obviously all new requests are stopped, right? However, we want to let all the currently working threads complete their operations.
    Oh, I get it. Suppose 3-4 threads are waiting on a record and we shut down the server; we just want the thread currently operating on the record to complete, and no new threads to do any further operations. This might be the reason to lock the complete db and then shut down the JVM.


    Yes, that's the reason. That way all clients get responses to their requests. Some responses are completions and some are "server shutting down". But no client has its request lost because the server just went away.


    I still need to work this out (basically, think a lot about it) and act accordingly. The thing is, although this fancy server shutdown and handling of client crashes is not required, I think doing them is good not only from an exam point of view, but personally as well.


    Actually, locking the db for server shutdown is pretty simple as I see it. If you think you need to do something fancy, let's talk it through.
    Hi Satish,
    You are so far ahead of me. I'm still ramping up. Some things are clear and some are not. When my code is more complete, I'll have a slew of questions. Hope you are still around to help straighten me out.


    Safe server shutdown
    I will try to explain my understanding of the things we need to do for a safe server shutdown. Please comment on it.
    1. While the server is running, if the CSR presses Ctrl-C or the HaltServer button on the GUI, then the server should shut down gracefully.
    2. What I understood by the term "gracefully" is: (a) No new clients are allowed to get a reference to the Adapter from the ConnectionFactory. (b) Allow some time (maybe sleep for a couple of seconds to let the current threads complete their work) and then exit using System.exit(0).
    3. This is achieved by Runtime.addShutdownHook() [i.e. do the sleep there and do not allow new clients].
    4. I have gone through many of the past posts but could not understand the reason why they are locking the complete database. I mean, if you're not giving access to any new clients, then it's OK, right? What is the point of locking down the complete database when you're shutting down the JVM anyway?
    5. I also saw some posts discussing the lock(-1) call. What is it about? I am guessing it relates to the old assignment.


    I agree with 1. and 2a.
    My preference is different from 2b and 3 (see below). I just don't like idle timeouts that hope the timing is right. Most of the time the timeout is wasteful because nothing is going on, and when you do need it, how do you know that 1 or 2 seconds is enough?
    As for question 4, the reason some people are locking the complete database is to ensure no transaction is cut off midstream. No new requests can come in, but the issue then is: when will the threads that are already in be done? One thorough but slow approach is to lock every record. This is probably a fine solution for this assignment.
    The answer to question 5 is yes, the lock(-1) is for an old assignment but the concept is relevant for server shutdown.
    A Different Approach
    I had my idea mostly worked out when I found the Lock Db thread, which is pretty close to what my plan is. See Robin's post. There are not a lot of details, but you don't need all that much to make it work.
    What I like about this approach is that we can make the shutdown event driven. Once we know all the threads are out, we can shut down. Most of the time this will be immediate. When it is not, we can still shut down as soon as the last record is unlocked.
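    A minimal sketch of that event-driven idea (the class, method, and field names here are my own invention, not from the assignment): refuse new locks once shutdown begins, and let the thread that releases the last lock wake the shutdown thread, so there is no sleeping and no guessing at timeouts.

```java
import java.util.HashSet;
import java.util.Set;

// Sketch only: a lock manager that drains itself for shutdown.
public class LockManager {
    private final Set<Integer> lockedRecords = new HashSet<Integer>();
    private boolean shuttingDown = false;

    public synchronized void lock(int recNo) throws InterruptedException {
        if (shuttingDown) {
            throw new IllegalStateException("server is shutting down");
        }
        while (lockedRecords.contains(recNo)) {
            wait();                     // wait for this record to free up
        }
        lockedRecords.add(recNo);
    }

    public synchronized void unlock(int recNo) {
        lockedRecords.remove(recNo);
        notifyAll();                    // wakes record waiters AND the shutdown thread
    }

    // Called from the shutdown hook: returns once the last lock is released.
    public synchronized void shutdown() throws InterruptedException {
        shuttingDown = true;
        while (!lockedRecords.isEmpty()) {
            wait();
        }
    }
}
```

    The shutdown hook just calls shutdown() and it returns as soon as the last record is unlocked, which most of the time is immediately.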
    Here's what I'm doing. I think it's ok.
    Looks like I invented a new term above; "dealock safe" should have been "thread safe".
    Javini said,


    To answer your question: what happens to concurrency if the inner lock is not available? I'd say not much, if the lock is not available, the dataCore object is already busy reading, writing, or both.


    When the inner lock is not available, this thread sleeps holding the guard lock. Isn't it true that any other thread that wants the guard lock must now wait, even if the work it is doing is for a different record? Lock and unlock should be fast operations, but with nested locks the guard lock risks becoming a bottleneck. This is what I mean by reduced concurrency: threads doing unrelated work on other records must now wait for this thread, which is asleep with the guard lock.
    Hi Javini,
    You said:


    Finally, concerning the creation of new records. If the policy that allows a new record to over-write a previously deleted record is active, then the database does not care at all about the contents of a deleted record; if the record is deleted, then it is locked, and then it is written over (and its contents are not important). If no re-usable records can be found (that is, the database contains no deleted records), then the new record is created at the end of the file.


    This is the point I was making so it seems we agree.


    So, here is what I will probably do. I will state that allowing new records to over-write previously existing records may not be logically sound, and that I don't know the philosophical or logical answer to this question. It is the customer's responsibility to configure the database in a safe manner, and the customer does this by setting up the server and stating what the policy will be by choosing one of the following choices from the preferences panel:
    1. The database will allow the creation of new records to over-write previously deleted records.
    2. The database will only write new records at the end of the file.


    I'm not sure how a reviewer will take this. It sounds like you have implemented a database that you are not sure is safe. You are then asking a customer to make a decision about safety when the customer does not know anything about the implementation of the database. He certainly does not know as much about the implementation as the developer knows.
    We are asked to make decisions as part of this process. As presented, this seems to be a decision not to decide but instead put the responsibility on the customer.
    But there is another interesting point here. I'm guessing that the unstated philosophical issue is whether or not data that is deleted should actually be destroyed. If you don't write over it, then it could be mined/restored in some useful form in the future. I think you can present the configuration option as a way to recover historical information. The reviewer will take a much kinder view of that than if it is presented in terms of safety.
    Hi Javini,
    Whenever you are dealing with nested locks, you must always acquire them in the same order. Apparently you know this, as you said the following:


    Dead-locks are avoided by following the locking order rules: always lock LockManager, then lock MicroData.


    Since you are using nested locks correctly (always in the same order) you are using one of the classic deadlock avoidance mechanisms. You still have several issues to be evaluated.
  • Is the flow of the code obvious enough that a junior programmer will not miss the significance of lock ordering? If I understand the code right, you have nested locks that are acquired in different methods. This is much harder for anyone (much less a junior programmer) to pick up on than if both synchronized blocks were in one method.
  • What happens to the concurrency when the inner lock is not available? The thread goes to sleep still owning the outer lock. The sleeping thread can block other threads from doing any useful work and things come to a screeching halt until the locks are cleared.
  • When you say in step 1, "Gets a lock on the LockManager.", does this mean a lock on the LockManager object? If so, when the inner lock is busy, the thread goes to sleep owning the LockManager lock. I don't see how the thread that wants to release the inner lock will be able to do so, since the LockManager is locked and the owner is asleep. This can't be right, so I must not be understanding what the LockManager lock is.
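    For contrast, the single-method form that makes the ordering hard to miss might look like this (the two monitor objects are placeholders standing in for your LockManager and MicroData, not your actual classes):

```java
// Sketch: both synchronized blocks in one method, so the acquisition
// order -- outer lock first, inner lock second, always -- is visible
// in one place instead of being spread across methods.
public class OrderedLocks {
    private static final Object LOCK_MANAGER = new Object(); // stand-in
    private static final Object MICRO_DATA = new Object();   // stand-in

    static String readRecord(int recNo) {
        synchronized (LOCK_MANAGER) {       // outer lock, always first
            synchronized (MICRO_DATA) {     // inner lock, always second
                return "record " + recNo;   // stand-in for the real I/O
            }
        }
    }
}
```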
    Perhaps it's just my reading of this that is faulty, but when you say


    Usually, of course, clients are operating outside of MicroData, first using the lock manager, then calling Data.someMethod() which in turn calls the appropriate methods in MicroData.


    Do you mean that the clients have the lock manager locked when they call MicroData? If so, you could be holding the lock manager lock for the duration of the I/O. That would suggest the system is overlocking and concurrency is reduced.
    The alternative interpretation that I see is that the clients use the lock manager and release its lock before calling Data.someMethod(), which in turn calls the appropriate methods in MicroData. But since MicroData can then call the LockManager, that would violate your contract that says the locks are always acquired in the same order.
    Nested locks can be used in a deadlock-safe manner. However, when you do use them you have to answer a lot of questions. Does the design truly avoid deadlock? What are the performance implications, especially when a thread goes to sleep holding the outer lock while waiting for the inner lock to become free? How hard is it for someone else maintaining the code to understand the nested locks?
    My understanding of the avoid nested locks rule is that it can be done, but it will probably save you time and a lot of headaches to redesign the code to avoid nested locks.
    [ March 24, 2004: Message edited by: Don Wood ]
    Hi Satish,
    The pseudocode for locking seems to have some serious issues.
    Firstly, the code uses nested synchronized blocks. The literature I have seen says that if your design has nested synchronized blocks, you are doing something wrong and should redesign it.
    Secondly, the code sleeps in the inner synchronized block, which means the thread is still holding the outer lock. Sleeping while holding a lock is pretty much a no-no in any application, but in this case things are worse yet. The outer synchronized block holds the lock on lockedRecords! No other thread can get or release record locks, including the thread that holds the record lock you are waiting on. In other words, the first time you have contention on a record lock, the application will deadlock.
    I like the goal of waking up only the threads that are waiting for the specific record lock being released. I haven't started on the locking code yet but you definitely have me thinking about it.
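    To make the contrast concrete, here is a sketch using wait() instead of sleep(); wait() atomically releases the lockedRecords monitor while the thread waits, so the holder of the record lock can still get in and unlock (the names are my guesses at your variables, not your actual code):

```java
import java.util.HashMap;
import java.util.Map;

// Sketch: record locking that waits without holding the guard lock.
public class RecordLocks {
    private final Map<Integer, Long> lockedRecords = new HashMap<Integer, Long>();

    public void lock(int recNo, long cookie) throws InterruptedException {
        synchronized (lockedRecords) {
            while (lockedRecords.containsKey(recNo)) {
                // sleep() here would keep the monitor and deadlock the
                // application on first contention; wait() releases it.
                lockedRecords.wait();
            }
            lockedRecords.put(recNo, cookie);
        }
    }

    public void unlock(int recNo) {
        synchronized (lockedRecords) {
            lockedRecords.remove(recNo);
            lockedRecords.notifyAll();  // wake the waiting threads
        }
    }
}
```

    Note that notifyAll() wakes every waiter, not just the threads waiting on the released record; waking only the waiters for one specific record needs a per-record condition object, which is the refinement your pseudocode was aiming for.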
    [ March 22, 2004: Message edited by: Don Wood ]
    Javini,


    In general, and keeping in mind that I don't know yet that my
    code is free of dead-locks, I'd give the hint that update()
    need not call lock() and unlock() at all.


    Sounds risky at the very least.
    So if you are not locking the record on update, how do you know:
  • that no other client has modified the record,
  • that no other client has deleted the record,
  • that no other client has deleted the record and added a new record back into the same slot.


    It seems likely that you will have one thread destroy changes that another thread has made. This is not a deadlock issue; it is a question of how you keep the data consistent.
    And if you have a magic bullet that allows updates without locking, are you confident that you can convince the reviewer that a lock-less update is thread safe? If the reviewer is not convinced, your locking score will suffer.
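    A contrived single-threaded sketch of the hazard (all names and values are invented; a real race would involve two clients, but the interleaving is the same):

```java
import java.util.ArrayList;
import java.util.List;

// Sketch of the lost-update hazard when update() does no locking.
public class LostUpdateDemo {
    public static void main(String[] args) {
        List<String> records = new ArrayList<String>();
        records.add("room 101, $50");

        // Client A reads record 0, planning to change the price.
        String seenByA = records.get(0);

        // Meanwhile client B deletes record 0 and reuses the slot.
        records.set(0, "room 202, $90");

        // Client A writes back without holding a lock, silently
        // clobbering the record B just created in the same slot.
        records.set(0, seenByA.replace("$50", "$55"));

        System.out.println(records.get(0)); // room 101, $55 -- B's record is gone
    }
}
```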
    [ March 19, 2004: Message edited by: Don Wood ]