
Lock-verify-act-unlock operations on client side (Monkhouse and Camerlengo)

 
Alexandru Dragoi
Ranch Hand
Hello,

The Monkhouse and Camerlengo solution is based on a thick client, where the logical record-locking operations are exposed to the client.

I am puzzled by the design decision presented in the book. In the following quote the authors describe the other option (lock-verify-act-unlock operations on the server side, i.e. a thin client) and explain the disadvantages of that solution.


One way to solve this problem would be to make the retrieval-verification-rental operations atomic. However, this could only be done on the server side since two separate clients working in their own JVMs would be unaware of any synchronized blocks operating in other JVMs. Having this code operate on the server side makes building a thin client very simple, but we already know that we have to build a Swing client, so we know that the client computers can support thick clients. A bigger problem is that having this code within an atomic block will reduce concurrency.


I do not understand these:
1/
we already know that we have to build a Swing client, so we know that the client computers can support thick clients

If we have to implement the client using Swing, why does that imply that the client hardware can support thick clients?
Swing can also run on older hardware...

2/
A bigger problem is that having this code within an atomic block will reduce concurrency.

But concurrency is also reduced in the Monkhouse and Camerlengo solution, which uses the thick client. When a client rents a DVD, it locks the record, which means other clients cannot rent the same DVD; they must wait for the first client to unlock it. That also reduces concurrency.
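
To make the comparison concrete, this is roughly how I picture the two approaches (just my own sketch; the DBAccess interface below is a simplified stand-in I made up, not the real assignment interface):

// Sketch only: my own simplified data-access interface, not the real assignment one.
interface DBAccess {
    String[] read(long recNo) throws Exception;
    void update(long recNo, String[] data) throws Exception;
    long lock(long recNo) throws Exception;     // returns a lock cookie
    void unlock(long recNo, long cookie) throws Exception;
}

public class BookingSketch {

    private static final int OWNER_FIELD = 5;   // made-up column index

    private final DBAccess db;

    public BookingSketch(DBAccess db) {
        this.db = db;
    }

    // Thin-client style: verify-and-act is one atomic call on the server.
    // The synchronized keyword makes it atomic, but it serialises ALL bookings,
    // even bookings of completely different records.
    public synchronized void reserveOnServer(long recNo, String customerId) throws Exception {
        String[] record = db.read(recNo);       // verify
        record[OWNER_FIELD] = customerId;       // act
        db.update(recNo, record);
    }

    // Thick-client style: the client drives lock-verify-act-unlock itself,
    // so only clients interested in this particular record have to wait.
    public void reserveFromClient(long recNo, String customerId) throws Exception {
        long cookie = db.lock(recNo);           // blocks only clients wanting this record
        try {
            String[] record = db.read(recNo);   // verify
            record[OWNER_FIELD] = customerId;   // act
            db.update(recNo, record);
        } finally {
            db.unlock(recNo, cookie);
        }
    }
}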

What I want to say is that I do not really understand the advantages of a thick client over using a thin one.


Best regards,
Alex
 
Roel De Nijs
Sheriff
Alexandru Dragoi wrote:What I want to say is that I do not really understand the advantages of a thick client over using a thin one.

Then you simply implement a thin client. There is no requirement to implement a thick one.
 
Paweł Baczyński
Bartender
Whatever you choose, make sure you describe your choice in choices.txt ;).
 
Alexandru Dragoi
Ranch Hand
Pawel Pawlowicz wrote:Whatever you choose, make sure you describe your choice in choices.txt ;).


I will do that, thanks!

How did you implement access to the database? Did you load the database into an in-memory cache on start-up, or did you just read from and write to the database file every time it was needed?
I find the first solution easier to implement, and I can also justify it in choices.txt, because we have a local database and there is only one client for the database.
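
For the first option, I have something like this in mind (a rough sketch; the fixed record length and the parsing are made up just to keep it short):

import java.io.*;
import java.util.*;
import java.util.concurrent.*;

// Sketch of the "cache everything on start-up" approach.
// The file layout (one fixed-length record of 100 bytes) is made up just
// for the example; the real parsing would follow the assignment's schema.
public class RecordCache {

    private static final int RECORD_LENGTH = 100;  // made-up record size

    private final Map<Long, String[]> cache = new ConcurrentHashMap<>();

    public void load(File databaseFile) throws IOException {
        try (RandomAccessFile raf = new RandomAccessFile(databaseFile, "r")) {
            long recNo = 0;
            byte[] buffer = new byte[RECORD_LENGTH];
            while (raf.read(buffer) == RECORD_LENGTH) {
                cache.put(recNo++, parse(buffer));
            }
        }
    }

    public String[] read(long recNo) {
        // All reads are served from memory; the file is only touched again
        // when the records are written back (for example on shutdown).
        return cache.get(recNo);
    }

    private String[] parse(byte[] rawRecord) {
        // Placeholder: split the fixed-length byte record into its fields.
        return new String[] { new String(rawRecord).trim() };
    }
}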
 
Paweł Baczyński
Bartender
Alexandru Dragoi wrote:
Pawel Pawlowicz wrote:Whatever you choose, make sure you describe your choice in choices.txt ;).


I will do that, thanks!

How did you implement access to the database? Did you load the database into an in-memory cache on start-up, or did you just read from and write to the database file every time it was needed?
I find the first solution easier to implement, and I can also justify it in choices.txt, because we have a local database and there is only one client for the database.


I listed some of my choices here.
I noticed that I forgot to mention that I cached the database in memory.
 
Roel De Nijs
Sheriff
Pawel Pawlowicz wrote:I noticed that I forgot to mention that I cached the database in memory.

That's what I did too.
 
Alexandru Dragoi
Ranch Hand
Pawel Pawlowicz wrote:I listed some of my choices here.
I noticed that I forgot to mention that I cached the database in memory.

Thanks Paweł, and congratulations on your achievement!

I noticed that you already specified that you use a shutdown hook
I used a shutdown hook in server mode to save data when the application quits.

so this would imply that you cache the database records in memory.
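
I was planning the same thing; registering the hook itself is just one call (a minimal sketch, where the Runnable passed in is a placeholder for whatever writes the cached records back to the file):

public class ShutdownHookSketch {

    public static void registerSaveOnExit(Runnable saveDatabase) {
        // The hook runs when the JVM exits normally (or on Ctrl-C / SIGTERM),
        // so the cached records get written back to the database file.
        Runtime.getRuntime().addShutdownHook(new Thread(saveDatabase));
    }

    public static void main(String[] args) {
        // Usage: pass whatever method writes the in-memory cache back to disk.
        registerSaveOnExit(() -> System.out.println("saving records to file..."));
    }
}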
 
Paweł Baczyński
Bartender
Alexandru Dragoi wrote:I noticed that you already specified that you use a shutdown hook so this would imply that you cache the database records in memory.

Yes, you are right. I just wanted to be specific.
 
Alexandru Dragoi
Ranch Hand
Alexandru Dragoi wrote:Hello,
2/
A bigger problem is that having this code within an atomic block will reduce concurrency.

But concurrency is also reduced in the Monkhouse and Camerlengo solution, which uses the thick client. When a client rents a DVD, it locks the record, which means other clients cannot rent the same DVD; they must wait for the first client to unlock it. That also reduces concurrency.

What I want to say is that I do not really understand the advantages of a thick client over using a thin one.
Best regards,
Alex


I guess implementing a fat client will increase concurrency because, in order to reserve something, the client must make multiple requests to the server: lock, read, modify and unlock.
Concurrency is increased in the case where the server passes these requests (lock, read, modify, unlock) to a thread pool so that they are handled separately.
This is compared to the case where the whole operation (reserve = lock + read + modify + unlock) is performed by a single thread (for example, by the same server-side thread that handles the client connection). Note: in this latter case, lock + unlock can be replaced with a synchronized block.

This could be compared with the RMI case, in which each method invocation is handled separately by a thread pool on the server side, rather than by a single thread.

By the way, did any of you implement the socket solution in this way, with each request handled by a thread pool on the server side?
For sockets you have a thread per connection, but my question is about what happens in the next step, when the server receives a request and unpacks it. Does it pass the request to a thread pool?
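
Something like this is what I mean (a bare-bones sketch; handleRequest() is a made-up placeholder for decoding the command and calling the matching database method):

import java.io.*;
import java.net.*;
import java.util.concurrent.*;

public class SocketServerSketch {

    // Worker pool that processes the unpacked requests.
    private final ExecutorService workers = Executors.newFixedThreadPool(10);

    public void serve(int port) throws IOException {
        try (ServerSocket serverSocket = new ServerSocket(port)) {
            while (true) {
                Socket client = serverSocket.accept();
                // One thread per connection, as in the usual socket solution.
                new Thread(() -> handleConnection(client)).start();
            }
        }
    }

    private void handleConnection(Socket client) {
        try (Socket s = client;
             ObjectOutputStream out = new ObjectOutputStream(s.getOutputStream());
             ObjectInputStream in = new ObjectInputStream(s.getInputStream())) {
            while (true) {
                Object request = in.readObject();   // EOFException once the client disconnects
                // Each unpacked request is handed off to the worker pool...
                Future<Object> result = workers.submit(() -> handleRequest(request));
                // ...and the connection thread waits for the result and sends it back.
                out.writeObject(result.get());
                out.flush();
            }
        } catch (EOFException clientClosed) {
            // client disconnected; nothing more to do for this connection
        } catch (Exception e) {
            e.printStackTrace();
        }
    }

    private Object handleRequest(Object request) {
        // Made-up stub: a real server would decode the command here and call
        // the matching lock/read/update/unlock method on the data layer.
        return "OK";
    }
}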
 
Roel De Nijs
Sheriff
Alexandru Dragoi wrote:I guess implementing a fat client will increase concurrency because in order to reserve something, the client must make multiple requests to the server: lock, read, modify and unlock.

This topic has a great (and lengthy) discussion about whether not exposing the lock methods to the client meets the requirement (started by the author you mentioned in the title).

Alexandru Dragoi wrote:By the way, did any of you implemented the socket solution in this way: each request to be treated by a thread pool on the server side?

Since I have been following this forum closely (for 5 years, I believe), I think the number of people who implemented the socket solution can be counted on one hand...
 