
Bogdan Baraila

Ranch Hand since May 23, 2011

Recent posts by Bogdan Baraila

Thanks Bill. Indeed, ROLE_ANONYMOUS should work, but unfortunately I'm using Spring Security with Flex and I wasn't able to set up the anonymous filtering.
9 years ago
Hello to all,

Is there a way to unsecure just a single method of a Spring service that is annotated with the @Secured annotation at class level? I know that I could annotate only the methods I want secured and leave the unsecured one without an annotation, but since all methods except one are secured, I would prefer to keep the annotation at class level.
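For illustration, a minimal sketch of the pattern being asked about, assuming a recent Spring Security version (the service, method, and role names here are made up, not from the post). A method-level @Secured annotation overrides the class-level one, so a single method can be opened up while the class-level annotation stays in place:

```java
import org.springframework.security.access.annotation.Secured;
import org.springframework.stereotype.Service;

// The class-level @Secured applies to every method that does not
// declare its own annotation.
@Secured("ROLE_USER")
@Service
public class ReportService {

    // Inherits the class-level restriction: requires ROLE_USER.
    public String monthlyReport() {
        return "restricted report";
    }

    // A method-level annotation overrides the class-level one, so this
    // single method is also reachable by anonymous callers
    // (ROLE_ANONYMOUS is granted by Spring Security's anonymous filter).
    @Secured({ "ROLE_ANONYMOUS", "ROLE_USER" })
    public String publicSummary() {
        return "public summary";
    }
}
```

Note that this relies on the anonymous authentication filter being active, which is exactly the part the poster reports trouble with in the Flex setup.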

9 years ago
Using multithreading and StatelessSession I have now reached a time of 35 seconds for 1 million records. It could go even lower if I used more threads, but for now this is enough.
Yes, it's in memory. I'm just instantiating the objects in a for loop, setting some of their values based on the loop index, and saving them to the database.
I know that the file processing will add some extra time, but for now I'm more concerned about the database-saving process (I have lots of ideas for speeding up the file processing, but for the database saving this is all I have so far).
Yes, 70 seconds is just the time for creating the objects in memory and saving them in my database.
As the title says, it's just about inserting 1 million entities into the database.

William P O'Sullivan wrote:Is this just the parsing process?

For now I have just created one million objects similar to what I will get from the files and saved them. I'm only concerned about the inserting-into-database process. Reading from the file will be fast, and if there are any problems I can use multithreading for the parsing, but the database insert (the Java object -> database row step) is usually the bottleneck.
Hello all,

My application will need to parse some files, which will generate more than 1 million records in the database.
Using StatelessSession from Hibernate and PostgreSQL, I've reached a time of 70 seconds for 1 million records (the entity has four character columns and a bigint primary key).

I have 2 questions:
1) Since I haven't worked with this much data before, do you think this is a good time?
2) Do you have any suggestions for improving this time?
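As a rough sketch of the setup being described (the entity and column names are invented, not from the post), a StatelessSession insert loop looks roughly like this. StatelessSession bypasses the first-level cache and dirty checking, which is what makes it suitable for bulk inserts:

```java
import org.hibernate.SessionFactory;
import org.hibernate.StatelessSession;
import org.hibernate.Transaction;

public class BulkInsert {

    // Inserts `count` generated entities in a single transaction.
    // Record is a hypothetical entity standing in for the poster's
    // 4-column mapped class.
    public static void run(SessionFactory factory, int count) {
        StatelessSession session = factory.openStatelessSession();
        Transaction tx = session.beginTransaction();
        try {
            for (int i = 0; i < count; i++) {
                Record r = new Record();
                r.setCode("code-" + i);  // value derived from the loop index
                session.insert(r);       // issues the INSERT immediately,
            }                            // no session-level caching
            tx.commit();
        } finally {
            session.close();
        }
    }
}
```

Batching the JDBC statements (hibernate.jdbc.batch_size) and splitting the loop across threads, as the poster later did, are the usual next steps.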

In Oracle you will need to use a sequence:

create sequence hibernate_sequence
start with 1
increment by 1
nomaxvalue;

and instead of IDENTITY you can use SEQUENCE or native.
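On the Hibernate side, the sequence above can be wired to the id with standard JPA annotations; a sketch (the entity name is invented):

```java
import javax.persistence.Entity;
import javax.persistence.GeneratedValue;
import javax.persistence.GenerationType;
import javax.persistence.Id;
import javax.persistence.SequenceGenerator;

@Entity
public class Record {

    // Draws ids from the hibernate_sequence created above instead of
    // relying on IDENTITY, which classic Oracle versions do not support.
    @Id
    @GeneratedValue(strategy = GenerationType.SEQUENCE, generator = "recSeq")
    @SequenceGenerator(name = "recSeq",
                       sequenceName = "hibernate_sequence",
                       allocationSize = 1)
    private Long id;
}
```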
No problem.
Actually, for each database you will have a SessionFactory configured that will handle your sessions. As for the mapping, since you have so many tables, don't forget that you can use annotations directly in the POJO classes.
No, you should not open 10 sessions to make 10 updates. Creating a session is expensive. This is why Hibernate's SessionFactory offers the getCurrentSession method, which returns the currently open session, or opens a new one if there is no open session.
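A small sketch of what that looks like in code (the class and method names are illustrative), assuming a transaction context is configured so that getCurrentSession() has something to bind to:

```java
import org.hibernate.Session;
import org.hibernate.SessionFactory;

public class UpdateService {

    private final SessionFactory sessionFactory;

    public UpdateService(SessionFactory sessionFactory) {
        this.sessionFactory = sessionFactory;
    }

    // Ten calls to this method within one transaction reuse a single
    // open session instead of paying the cost of opening ten sessions.
    public void update(Object entity) {
        Session session = sessionFactory.getCurrentSession();
        session.saveOrUpdate(entity);
    }
}
```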
You could start with the latest Hibernate version. Please take a look at a basic tutorial:
Also, if you don't want null values, you can set a default value for every field in the mapping files, or you can use the dynamic-insert and dynamic-update features (they generate insert/update queries containing only the modified columns). I know this doesn't tell you much yet, but once you get more familiar with Hibernate you will know what to look for.
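With annotation mappings, the same features mentioned above are available as class-level annotations; a sketch, assuming a Hibernate version that provides @DynamicInsert/@DynamicUpdate (the entity and column names are invented):

```java
import javax.persistence.Column;
import javax.persistence.Entity;
import javax.persistence.Id;
import org.hibernate.annotations.DynamicInsert;
import org.hibernate.annotations.DynamicUpdate;

@Entity
@DynamicInsert   // generated INSERT omits columns whose value is null
@DynamicUpdate   // generated UPDATE contains only the changed columns
public class Customer {

    @Id
    private Long id;

    // columnDefinition can supply a database-side default instead of null.
    @Column(columnDefinition = "varchar(255) default 'unknown'")
    private String name;
}
```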
The Hibernate vs. JDBC debate is very long, but Hibernate usually wins. In your example, if you already have more than 100 queries, Hibernate would prove itself the better solution. But your Hibernate example isn't really good (maybe you need to study Hibernate more before forming your own opinion). For example, there is no reason to update an object with createQuery: Hibernate already has its own methods for saving, deleting, updating, loading, etc. (session.saveOrUpdate(object), ...), or, depending on your architecture, you can use an EntityManager (the JPA way) or HibernateDaoSupport (if you use Spring).
Regarding speed, your example is also not very good. If you make, for example, 4 consecutive updates on the same object, Hibernate will do this in a single transaction (thanks to its session cache), and it will surely be faster than JDBC.
And regarding the number of queries: in one of my projects I made a GenericHibernateDAO with generic methods for all the basic operations (save, update, delete, find, searchByCriteria with or without pagination, etc.), and if I need to add a new entity I just extend that class, with no extra code needed for the database operations. And these are just a few of Hibernate's advantages.
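A hedged sketch of the generic-DAO idea described above (not the author's actual code): the base class owns the CRUD methods, and each entity DAO only supplies its type.

```java
import java.io.Serializable;
import org.hibernate.Session;
import org.hibernate.SessionFactory;

public abstract class GenericHibernateDAO<T, ID extends Serializable> {

    private final Class<T> type;
    private final SessionFactory sessionFactory;

    protected GenericHibernateDAO(Class<T> type, SessionFactory sessionFactory) {
        this.type = type;
        this.sessionFactory = sessionFactory;
    }

    // Reuses the session bound to the current transaction context.
    protected Session session() {
        return sessionFactory.getCurrentSession();
    }

    public void saveOrUpdate(T entity) { session().saveOrUpdate(entity); }

    public void delete(T entity) { session().delete(entity); }

    @SuppressWarnings("unchecked")
    public T findById(ID id) { return (T) session().get(type, id); }
}

// A hypothetical mapped entity, shown only to complete the example.
class Person { }

// Adding a new entity then needs no extra persistence code:
class PersonDAO extends GenericHibernateDAO<Person, Long> {
    PersonDAO(SessionFactory f) { super(Person.class, f); }
}
```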
I think that the problem may be in how you are using the NUMBER data type. If you are using Oracle, then you should use something like NUMBER(p) or INTEGER or LONG. Here is the description of NUMBER. As you can see, if it's used as in your function, it will basically expect a list of floats.
The NUMBER datatype
Stores zero, positive, and negative fixed- or floating-point numbers.

Fixed-point NUMBER
precision p = length of the number in digits
scale s = places after the decimal point, or (for negative scale values) significant places before the decimal point

Integer NUMBER
A fixed-point number with precision p and scale 0; equivalent to NUMBER(p,0).

Floating-point NUMBER
A floating-point number with decimal precision 38.
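In a Hibernate mapping, precision and scale translate directly to @Column attributes; a sketch (the entity and column names are invented), assuming an Oracle NUMBER column as the target:

```java
import java.math.BigDecimal;
import javax.persistence.Column;
import javax.persistence.Entity;
import javax.persistence.Id;

@Entity
public class Invoice {

    @Id
    private Long id;

    // Maps to NUMBER(10,0): an integer NUMBER (scale 0).
    @Column(precision = 10, scale = 0)
    private BigDecimal quantity;

    // Maps to NUMBER(10,2): a fixed-point NUMBER with two decimal places.
    @Column(precision = 10, scale = 2)
    private BigDecimal amount;
}
```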
Is this the whole stack trace of the error? From the looks of it, you are missing a class (probably you are missing a jar, or you have different Spring/Hibernate jar versions).