
JDO and concurrency

 
Rama Raghavan
Ranch Hand
Posts: 116
I've used some home-grown object persistence, JDBC, and EJB, but not JDO (yet).
Could someone share the gotchas with JDO with respect to concurrency?
Rama
 
David Jordan
Author
Ranch Hand
Posts: 66
JDO implementations are required to support standard database pessimistic concurrency control. All of the locking is done in the underlying database and you can usually set the transaction isolation level in a property file so the JDO implementation will use an appropriate level.
JDO also optionally supports optimistic transactions.
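To make that concrete, here is a rough sketch of selecting optimistic transactions through PersistenceManagerFactory properties. Only javax.jdo.option.Optimistic is a standard option here; the factory class, connection URL, and the isolation-level property name are illustrative assumptions, and vendor support for isolation settings varies.

import java.util.Properties;
import javax.jdo.JDOHelper;
import javax.jdo.PersistenceManager;
import javax.jdo.PersistenceManagerFactory;
import javax.jdo.Transaction;

public class JdoOptimisticSetup {
    public static void main(String[] args) {
        Properties props = new Properties();
        // Vendor-specific factory class and connection URL (placeholders):
        props.setProperty("javax.jdo.PersistenceManagerFactoryClass",
                "com.example.jdo.VendorPersistenceManagerFactory");
        props.setProperty("javax.jdo.option.ConnectionURL", "jdbc:somedb://localhost/demo");
        // Standard option: ask for optimistic transactions.
        props.setProperty("javax.jdo.option.Optimistic", "true");
        // Isolation level: the property name and accepted values vary by vendor
        // and spec revision; shown only as an example.
        props.setProperty("javax.jdo.option.TransactionIsolationLevel", "read-committed");

        PersistenceManagerFactory pmf = JDOHelper.getPersistenceManagerFactory(props);
        PersistenceManager pm = pmf.getPersistenceManager();
        Transaction tx = pm.currentTransaction();
        try {
            tx.begin();
            // ... read and modify persistent objects here ...
            tx.commit();   // under optimistic mode, conflict checks happen at commit
        } finally {
            if (tx.isActive()) {
                tx.rollback();
            }
            pm.close();
        }
    }
}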
What kind of gotchas are you looking for?
 
Rama Raghavan
Ranch Hand
Posts: 116
Traditionally, with JDBC for instance, we can explicitly obtain a read lock when required and then update the record. With EJBs, I believe there is a single instance of an entity bean for a given key, and the container single-threads transactions on that bean instance, unless the bean is read-only, in which case this may be relaxed.
I wasn't sure how JDO deals with objects accessed from multiple threads (or sessions). It's not the gotchas so much as the behavior I guess I was looking for.
Rama
 
David Jordan
Author
Ranch Hand
Posts: 66
With JDO, using datastore (pessimistic) transactions you get the following:
When you read an object from the database, a read lock is obtained automatically in the database. When you modify a field of your object, a write lock is acquired. The cache maintains a single instance for each object in the database with a given unique identity. JDO allows multiple threads to access the same objects in a transaction, but each transaction has its own cache of the objects it has accessed, providing transaction isolation.
Using JDO optimistic concurrency reduces the locking overhead considerably. JDO vendors also allow you to set transaction isolation levels.
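As a rough illustration of the optimistic case (Account is a hypothetical persistence-capable class, not something from this thread), the commit is where a conflicting update by another transaction shows up; under datastore transactions the same conflict would instead appear as lock contention in the database while the first transaction holds its locks.

import javax.jdo.JDOException;
import javax.jdo.PersistenceManager;
import javax.jdo.PersistenceManagerFactory;
import javax.jdo.Transaction;

public class OptimisticConflictSketch {

    // Hypothetical persistent class; in a real application it would be declared
    // in the JDO metadata and enhanced.
    public static class Account {
        private double balance;
        public double getBalance() { return balance; }
        public void setBalance(double balance) { this.balance = balance; }
    }

    public static void debit(PersistenceManagerFactory pmf, Object accountId, double amount) {
        PersistenceManager pm = pmf.getPersistenceManager();
        Transaction tx = pm.currentTransaction();
        tx.setOptimistic(true);              // no database locks held while we work
        try {
            tx.begin();
            Account acct = (Account) pm.getObjectById(accountId, true);
            acct.setBalance(acct.getBalance() - amount);   // field write tracked by the enhancer
            tx.commit();                     // conflict check: fails if another tx changed acct
        } catch (JDOException conflict) {
            // Another transaction updated the same data since we read it;
            // the usual strategy is to retry the whole unit of work.
            throw conflict;
        } finally {
            if (tx.isActive()) {
                tx.rollback();
            }
            pm.close();
        }
    }
}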
 
Lasse Koskela
author
Sheriff
Posts: 11962
Originally posted by David Jordan:
When you modify a field of your object, a write lock is acquired.

How does the PersistenceManager/StateManager detect that the "user" modifies a persisted field? Does the enhancer add method calls to each method that refers to a persisted field?
 
David Jordan
Author
Ranch Hand
Posts: 66
Yes, exactly. The putfield bytecode instruction is replaced with a call to a static method in the class that handles the update; this static method is also generated by the enhancer. It turned out that the stack signature for a putfield is identical to the stack signature needed to call that method (thanks to the JVM's stack-oriented instruction set), so a single bytecode instruction is the only thing that needs to change, and the surrounding bytecodes that set things up for the putfield do not have to change at all. Because of this, the replacement results in zero expansion of the bytecode in your methods, which also helps debugging continue to work properly. But, as I have said, additional methods and some static data are added to your classes, which increases the overall size of your class files and bytecode.
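A simplified, source-level model of that rewrite is sketched below. It is not real generated code: actual enhanced classes implement javax.jdo.spi.PersistenceCapable and talk to javax.jdo.spi.StateManager, and the generated names shown here are only indicative of the delegation shape.

public class EnhancedAccountSketch {

    // Stand-in for javax.jdo.spi.StateManager, just to show the delegation shape.
    interface FieldTracker {
        void setDoubleField(EnhancedAccountSketch pc, int fieldIndex,
                            double currentValue, double newValue);
    }

    FieldTracker jdoStateManager;   // null while the object is not managed by a PersistenceManager
    private double balance;

    // Original source:   this.balance = balance - amount;       (compiles to a putfield)
    // Enhanced form:     jdoSetbalance(this, balance - amount); (same stack shape: objectref, value)
    public void withdraw(double amount) {
        jdoSetbalance(this, balance - amount);
    }

    // In the real case this method is generated by the enhancer: it notifies the
    // state manager, which can mark the object dirty (and trigger a write lock)
    // before the new value is applied.
    static void jdoSetbalance(EnhancedAccountSketch pc, double newValue) {
        if (pc.jdoStateManager == null) {
            pc.balance = newValue;   // unmanaged: plain store
        } else {
            // managed: the state manager takes responsibility for applying the value
            pc.jdoStateManager.setDoubleField(pc, 0, pc.balance, newValue);
        }
    }
}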
 
Lasse Koskela
author
Sheriff
Posts: 11962
Then I suppose JDO implementations implement only fine-grained disk sync (every state-modifying method call leads to a disk write)? Does this show in performance?
 
David Jordan
Author
Ranch Hand
Posts: 66
How locking is done, and at what granularity, is strictly up to the implementation; I am giving examples to illustrate and explain what happens. JDO implementations have some freedom as to when this locking occurs. One point, though: when using JDO with a relational database, the locking is typically done by the database itself, not by the JDO implementation. Across implementations the granularity of locking can vary, but the level at which the application notifies a JDO implementation of a change is the field level. The implementation may capture and manage this at the object level, and the database may lock at the row, block, or table level. It may be simplest to think of it as object-level locking, keeping in mind that this varies across implementations.
 
Craig Russell
Author
Greenhorn
Posts: 28
Originally posted by Lasse Koskela:
Then I suppose JDO implementations implement only fine-grained disk sync (every state-modifying method call leads to a disk write)? Does this show in performance?

JDO actually requires a cache, so not every insert, delete, or update gets written through to the datastore. There are also flags the application sets to indicate the required visibility of changes when querying or iterating an extent.
This caching is part of the JDO architecture, so it is quite well developed. Most JDO implementations do a very good job of avoiding writes until commit, and some even use the JDBC batching features automatically when they are available. On the other hand, there is nothing wrong with an implementation writing through every change and fetching data on demand.
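One of the visibility flags mentioned above is the ignoreCache setting on the PersistenceManager (and on Query). A small sketch, where Employee is a hypothetical persistent class: with ignoreCache set to false, queries and extent iteration must reflect changes made earlier in the same transaction; set to true, the implementation may skip that work for speed.

import java.util.Iterator;
import javax.jdo.Extent;
import javax.jdo.PersistenceManager;

public class ExtentVisibilitySketch {
    public static void listEmployees(PersistenceManager pm) {
        pm.setIgnoreCache(false);   // results must include this transaction's pending changes
        Extent extent = pm.getExtent(Employee.class, /* include subclasses = */ true);
        Iterator it = extent.iterator();
        while (it.hasNext()) {
            Employee e = (Employee) it.next();
            System.out.println(e);
        }
        extent.closeAll();          // release resources held by the iteration
    }

    // Hypothetical persistent class; in a real application it would be listed
    // in the JDO metadata and enhanced.
    public static class Employee { }
}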
 