Mike Keith

Recent posts by Mike Keith

Augusto Sellhorn wrote:

Richard S. Hall wrote:

Augusto Sellhorn wrote: The only time I'm aware I'm using Blueprint is when extending shell commands (OSGI-INF/blueprint). I'm assuming every time I have a Spring context (META-INF/spring) in OSGi that it's Spring DM, but you seem to imply Blueprint has integration with Spring ...



It seems you are getting confused between Spring and Spring DM. Spring != Spring DM.

It is more accurate to say Spring DM is basically equivalent to Blueprint. Spring DM provided the starting point for creating the OSGi Blueprint spec, so they are not identical, but very similar.

What is Blueprint (or Spring DM, for that matter)? It integrates the Spring bean model with the OSGi platform. It allows you to package Spring beans inside bundles, to publish their interfaces into the OSGi service registry, and to have them injected with services from that registry. That's pretty much it.



That's what I'm doing; in my Spring context I'm registering OSGi services like this ...



Is this Spring DM or Blueprint? I'm thinking it's Spring DM, since the state of this bundle under the "spring" column shows as failed if there's an error in it.



It could be either, since the elements are the same (but it is likely Spring DM, since Blueprint namespaces are not typically named "osgi"). Note that Spring DM no longer really exists, since it was donated to Eclipse in the form of the Gemini Blueprint project. See the statement made by SpringSource for more details.
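To make the namespace distinction concrete, here is a rough sketch of the two styles (the bean and interface names are invented for illustration):

<!-- Spring DM style, typically in META-INF/spring/*.xml; the service
     elements come from a namespace conventionally prefixed "osgi" -->
<beans xmlns="http://www.springframework.org/schema/beans"
       xmlns:osgi="http://www.springframework.org/schema/osgi">
    <bean id="greeter" class="example.GreeterImpl"/>
    <osgi:service ref="greeter" interface="example.Greeter"/>
</beans>

<!-- Blueprint style, typically in OSGI-INF/blueprint/*.xml; service is
     a first-class element of the Blueprint namespace itself -->
<blueprint xmlns="http://www.osgi.org/xmlns/blueprint/v1.0.0">
    <bean id="greeter" class="example.GreeterImpl"/>
    <service ref="greeter" interface="example.Greeter"/>
</blueprint>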

-Mike
Gemini Project Lead
Allow me to add a few extra words around Gemini.

Gemini is an Eclipse project with a goal of enabling modularization in standard enterprise technologies. It is composed of a number of subprojects, each devoted to a specific technology. As was already mentioned, Blueprint is one subproject; Web (servlets), JPA and JDBC are others. You can read more about it here and you can ask questions on the Gemini Forum if you want to know more.

-Mike
Gemini Project Lead
Weird. I posted an answer, but then James's answer from an hour earlier showed up.
If you have db cascades then you should not use cascading in your annotations. Of course, you still need the annotations (or XML) to map the state/relationships to the correct db columns, etc. However, be warned that having db cascades enabled can sometimes throw a wrench in the JPA provider, since it is not expecting the records to be deleted underneath it. For example, it might be caching the deleted objects. You might want to consider:

a) ending the persistence context immediately after a delete, or ensuring that you do not reference the objects that are being deleted by the db cascades

b) evicting the objects that are delete-cascaded in the db from the second-level cache using the JPA 2 Cache API (see the sketch after this list)

c) making sure that you are not concurrently doing anything else that might be using the deleted objects
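For (b), a minimal sketch using the JPA 2 Cache API; the Order class, the id variable and the persistence unit name are invented for illustration:

import javax.persistence.EntityManagerFactory;
import javax.persistence.Persistence;

EntityManagerFactory emf = Persistence.createEntityManagerFactory("orderPU");

// Evict the one instance whose row the db cascade deleted underneath the provider...
emf.getCache().evict(Order.class, deletedOrderId);

// ...or, more bluntly, evict every cached instance of the class
emf.getCache().evict(Order.class);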
First of all, flush mode COMMIT is really more of a hint that the provider should not flush until commit; there are no guarantees.
Second, setting the flush mode is not going to guarantee that you get a single trip to the database. Once flushing occurs (for example, at commit time) there may be any number of database trips, depending upon what the provider needs to do, and I highly doubt any provider would actually batch multiple persist calls of the same object type into a single SQL statement.
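For reference, the flush mode hint can be set at either level; a small sketch (the Widget entity is invented):

// Applies to all queries executed through this EntityManager
em.setFlushMode(FlushModeType.COMMIT);

// Or as a per-query setting
Query q = em.createQuery("SELECT w FROM Widget w");
q.setFlushMode(FlushModeType.COMMIT);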

zb cong wrote: Thanks for the answer. I have tried every possibility; all fail. I conclude the first problem is the usage of the JOIN keyword:


That is correct. You can't use JOIN unless you have a relationship attribute between the two entities.

So I conclude that it is MANDATORY that there MUST be a reference between the two entities if I want to join them. Is my understanding correct?


That is incorrect. As I said above, you can join without using the JOIN keyword. The solution I gave you above (SELECT d FROM Doggy d, Catty c WHERE d.age = c.age) should work. If you get an error then it is a bug in OpenJPA. Plug in another provider and try it.
There is no "ON" in JP QL, but you can do an implicit join and use the WHERE clause:

SELECT d FROM Doggy d, Catty c WHERE d.age = c.age
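For completeness, a sketch of what the two unrelated entities might look like; the entity names match the query and the rest is invented:

@Entity
public class Doggy {
    @Id private long id;
    private int age;
}

@Entity
public class Catty {
    @Id private long id;
    private int age;
}

// No relationship attribute exists, so the join condition goes in the WHERE clause
List<Doggy> sameAge = em.createQuery(
        "SELECT d FROM Doggy d, Catty c WHERE d.age = c.age", Doggy.class)
    .getResultList();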
Wha, this forum isn't real life?
Now I'm wondering if I should take off these six-shooters I've been wearing around the house.
Since you don't seem to be ignoring me I'm assuming that it wasn't me ;-)
We have to be careful not to generalize, but in your case it sounds like taking the optimistic approach and handling a possible exception (much as optimistic locking needs to handle a possible exception) might actually be the preferred approach, since your alternative of going to the database for every validation seemed a little unwieldy.
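By way of illustration, a rough sketch of the optimistic style, assuming a uniqueness constraint in the database (the newUser entity instance is invented):

try {
    em.getTransaction().begin();
    em.persist(newUser);           // may violate the db uniqueness constraint
    em.getTransaction().commit();
} catch (PersistenceException e) {
    // The constraint violation surfaces here, instead of us pre-querying
    // the database on every validation
    if (em.getTransaction().isActive()) {
        em.getTransaction().rollback();
    }
    // report the duplicate to the caller
}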
We didn't really add any new type of identifier in JPA 2.0, since an id class and an embedded id pretty much cover anything you could ever reasonably want to do, I think, but we did add a lot of options for more easily mapping these types of identifiers. See this thread for some examples. There are lots more, particularly when you get to mixing embedded ids with id classes, and multiple composite PKs. I spend a fair bit of time in the book going over the gory details of these and other cases.
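For anyone who hasn't seen them, a minimal sketch of the two composite-key flavors (the class and field names are invented):

// Embedded id: the key is an embeddable type stored in the entity
@Embeddable
public class DeptId implements Serializable {
    private String country;
    private int code;
    // a key class should also implement equals() and hashCode()
}

@Entity
public class Department {
    @EmbeddedId private DeptId id;
    private String name;
}

// Id class: the key fields live in the entity, mirrored by a plain class
public class EmpId implements Serializable {
    private String country;
    private int number;
}

@Entity
@IdClass(EmpId.class)
public class Employee {
    @Id private String country;
    @Id private int number;
}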

In terms of the best strategy, well, that is mostly a matter of personal taste and application use. Paul mentioned some of the commonly held opinions about the synthetic or provider-generated PKs. He is right that they can indeed impose a simple uniqueness that is easy to manage and efficient to use. I thought I should add a few more perspectives, though, for your consideration.

It turns out that there are almost always one or more attributes that are unique in your data set. If two records could have the same data but a different generated key, then from an application domain perspective they are the same thing, and there would be no reason for there to be two records in the first place. So you have to be careful that you are not just covering up the uniqueness problem. You usually do need to know what is unique about the data in any case, and often additional database constraints ensuring that uniqueness will need to be in place.

One of the problems with generated primary keys is that from the client perspective, an artificial PK has no domain relevance. If you are looking for a particular domain object and you know the domain-specific unique aspect of it, you can't use the prototypical PK lookup operation (find) because you have no idea what its generated PK is. You know the practical domain key, but since you are using a generated PK you are forced to do a query instead of a simple cache lookup by PK. What you end up doing is putting indexes on the actual unique domain fields in the database anyway, to make these types of queries more efficient. At that point you might just as well have used the application attribute as the key. Of course, when the key is composed of multiple fields you get back to the complexity, management and efficiency arguments again...
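To illustrate the lookup difference (the Account entity and its accountNumber field are invented):

// Generated PK: the client only knows the domain key, so a query is needed
Account a = em.createQuery(
        "SELECT a FROM Account a WHERE a.accountNumber = :num", Account.class)
    .setParameter("num", "A-12345")
    .getSingleResult();

// Domain key as PK: a simple find, which can often be served from the cache
Account b = em.find(Account.class, "A-12345");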

So generated keys can really be helpful, but you should understand what you are giving up, and not use them if you don't need them.

I should also mention that although it is possible to keep these generated keys out of the user view, having the unique id in the entity itself is a real advantage when the entity is mobile, or simply for client-side management and entity differentiation.

Again, as is commonly the case, the answer is "it depends on the application".
Well, yes, you *could* store some of your data in an XML file, but I wouldn't recommend you split your entity data across an RDB and an XML file. Although something like EclipseLink can support mapping a single class to the database and to XML, I do not believe that Hibernate supports this, and I don't think it would be wise anyway.
The fact that Spring offers integration with JPA is a good indicator that JPA works without EJB.

JPA was initially developed within the EJB 3.0 expert group and released as a separate document, but as part of the EJB 3.0 spec (that is why the previous book was called "Pro EJB 3: Java Persistence API"). Although JPA could always be used independently of EJB, when JPA 2.0 work began it was in a completely separate expert group, making its independence from EJB more obvious. Think of it as a portable persistence API equivalent to Hibernate, but one that has additional integration support built right into all of the existing Java EE components, plus Spring, Google App Engine, and any other popular Java container/framework.

Every popular persistence implementation implements it, with three major open source products among them: Hibernate, EclipseLink and OpenJPA. EclipseLink is the only one that is currently compliant with JPA 2.0, but the others will definitely follow. The key advantage of using JPA is that your application is not bound to a particular implementation, because the JPA spec defines a way for implementations to be transparently plugged in under the covers.
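As a sketch of that pluggability: the provider is normally named in the <provider> element of persistence.xml, but the standard "javax.persistence.provider" property can also select it programmatically (the persistence unit name is invented):

Map<String, String> props = new HashMap<String, String>();
// EclipseLink's provider class; Hibernate and OpenJPA ship their own
props.put("javax.persistence.provider",
          "org.eclipse.persistence.jpa.PersistenceProvider");
EntityManagerFactory emf =
    Persistence.createEntityManagerFactory("myPU", props);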
Well, if you asked a doctor he/she would probably tell you not to add instance variables...

But since IANAD (I am not a doctor) I'll play along. Let's assume for a second that you really do need to add an instance variable, but you don't want to add a new column to the database. Where would you expect the new variable state to be persistently stored? You can't share an existing column that you are already storing existing instance variable state in (okay, you can, but it becomes trickier and you generally don't really want to do that). If you are renaming a variable "foo" to "bar" then you could certainly override the new default of "bar" by using a @Column(name="FOO") annotation on your new "bar" variable, and you would be reusing the column from the previous (now renamed) instance variable. Of course you would end up with a schema that was rather mismatched with your object model, something that is quite common when the db already exists but not really necessary when you are creating the db for the application.
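A quick sketch of that renaming case (the class and fields are invented):

@Entity
public class Thing {
    @Id private long id;

    // field renamed from "foo" to "bar", but still mapped to the old FOO column
    @Column(name = "FOO")
    private String bar;
}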

If you have the luxury of writing a new application with a new database schema then you might as well just use schema generation and not give much thought to what the schema looks like while you are in development and testing. Then, when you get closer to the end of the dev cycle and you aren't making design changes to your classes, you get to the stage of fine-tuning your db schema and mappings, and every data schema change matters (see comments in the Advantages of ORM thread).
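For example, with EclipseLink the "eclipselink.ddl-generation" property turns schema generation on during development; the persistence unit name is invented, and other providers have their own equivalents (such as Hibernate's hbm2ddl setting):

Map<String, String> props = new HashMap<String, String>();
props.put("eclipselink.ddl-generation", "drop-and-create-tables");
EntityManagerFactory emf =
    Persistence.createEntityManagerFactory("devPU", props);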

On the other hand, if you are mapping to an existing database and can't change the schema at all (be it adding a column or otherwise) then you kind of need to just use the mapping flexibility to map your changing object model with the RDB as you go along. Not much choice in that case.