
Dinesh Sundrani

Ranch Hand
since Mar 21, 2006

Recent posts by Dinesh Sundrani


I have a text file called class_content_decider.config and I would like to develop a simple Java application that would read this one-line config file and create a DYNAMIC class (an object) on the fly.

Say for example - class_content_decider.config
firstname, lastname, age, dob, company, salary

And the dynamic class/object should look like:

class temp1_21 {
    String firstname;
    String lastname;
    String age;
    String dob;
    String company;
    String salary;
}

Would Reflection help in such a (specific) case?

Please suggest!
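Not a definitive answer, but worth noting: reflection can inspect and instantiate existing classes, yet it cannot define a brand-new class from a config line by itself; that needs bytecode generation (e.g. Javassist or CGLIB) or runtime compilation. Often an ordered map keyed by the configured field names is enough to act as the "dynamic class". A minimal sketch, with class and method names of my own invention:

```java
import java.util.LinkedHashMap;
import java.util.Map;

public class DynamicRecord {
    // Parse the one-line config ("firstname, lastname, ...") into an
    // ordered field-name -> value map that stands in for the dynamic class.
    public static Map<String, String> fromConfig(String configLine) {
        Map<String, String> record = new LinkedHashMap<>();
        for (String field : configLine.split(",")) {
            record.put(field.trim(), null); // every configured field starts unset
        }
        return record;
    }

    public static void main(String[] args) {
        Map<String, String> rec =
                fromConfig("firstname, lastname, age, dob, company, salary");
        rec.put("company", "Acme");
        System.out.println(rec.keySet());
    }
}
```

If a real class type is genuinely required (e.g. for a framework that reflects over fields), a bytecode library is the usual route; the map approach avoids that machinery entirely.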

12 years ago

I have a particular business case and wanted to know if Pentaho Reporting would help me achieve this functionality. It's a J2EE-based project dealing with financial data that we get from a Bloomberg Terminal in XML format. We apply some mathematical models to forecast the company's stock price 3, 5, and 7 years down the line, generating dynamic spreadsheets that include pie charts and bar graphs, on which we further run (time series) regression models to generate the speculated risk and returns of the firm as compared against the benchmarks used.

Let me know if Pentaho has any such solutions for this.

Dinesh Sundrani
Do you think singleton scope is a good idea in multithreaded environments? How do we maintain synchronization if multiple threads are trying to modify the same single bean object?

Do you think using prototype scope is better in such a case, or does having a separate copy of the bean for each client make it a dead tortoise speed-wise?

I am especially interested in high-throughput, ultra-low-latency trading applications where timely execution is of prime essence. Losing even a moment (in creating a copy of the bean) could cause the trade to go stale and enter a dead queue, eventually losing out on $$.
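Not speaking for the authors, but the usual answer to this dilemma is a third option: keep singleton-scoped beans stateless, or confine the mutable state to atomic/concurrent types so the hot path needs neither a synchronized block nor per-request bean creation. A minimal sketch, with names that are purely illustrative:

```java
import java.util.concurrent.atomic.AtomicLong;

// A singleton-scoped service that stays thread-safe without locks:
// the only mutable state lives in an AtomicLong, so concurrent callers
// never need to synchronize on the shared bean instance.
public class OrderIdService {
    private final AtomicLong counter = new AtomicLong();

    public long nextOrderId() {
        return counter.incrementAndGet(); // lock-free, safe under contention
    }
}
```

For latency-sensitive paths this sidesteps both costs at once: no prototype-scope instantiation per client, and no lock contention on the shared singleton.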

Hi Dhanji,

So, from what I understand about DI from my past experience, the dependency you inject is FIXED; so if the object moves from one VM to another, you are left in a pretty messed-up state trying to rework the dependencies and their injection.

Comparing it to a JNDI service: JNDI will be able to locate the object irrespective of where the object lives on the subnet.

Do you think this is one of the biggest weaknesses of the DI pattern, or are there bigger issues, and workarounds if any?

All the Ranchers welcome you here to discuss more about your book. Welcome!
Hello Mr Dhanji Prasanna,

We are using Spring extensively in our projects, with a lot of dependency injection and AOP for modularizing cross-cutting concerns. I would really like to know something about the SealedAdaper class that uses the @Inject annotation to inject the dependency: how is it different from the usual DI we perform by reading beans from an XML configuration?
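Not from the book, but for context: whichever style is used, both XML-driven and @Inject-annotation-driven injection reduce to the same runtime result, namely the container calling a constructor (or setter) with the resolved dependency. A framework-free sketch of that end result, with all names hypothetical:

```java
// Plain-Java illustration of what both wiring styles produce at runtime.
interface Quotes {
    String quote(String ticker);
}

class FixedQuotes implements Quotes {
    public String quote(String ticker) {
        return ticker + "=100.0"; // canned quote for illustration
    }
}

class PricingService {
    private final Quotes quotes;

    // The injected dependency: an XML <bean> definition or an @Inject
    // annotation both end up driving exactly this constructor call.
    public PricingService(Quotes quotes) {
        this.quotes = quotes;
    }

    public String price(String ticker) {
        return quotes.quote(ticker);
    }
}

public class WiringDemo {
    public static void main(String[] args) {
        // Done manually here; a container automates this line.
        PricingService svc = new PricingService(new FixedQuotes());
        System.out.println(svc.price("IBM"));
    }
}
```

The practical difference between the styles is therefore where the wiring is declared (annotations in code vs. an external XML file), not what happens at runtime.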

Bingo!!! That's exactly what I was looking for.


So I have a trade order execution system where, once a stock order is executed and confirmed, the complete order.xml is stored in an Oracle DB as a BLOB field. Now I have two fields I need to index and search on: 1. the numeric OrderId field, and 2. the BLOB order.xml field.

Would a Hibernate Search custom field bridge and a Lucene Fieldable lazy-field implementation help in such a case to read the XML tags, extract the CDATA text, and index those words, making the orders searchable by Broker Name, Customer Name, CUSIP Id, Stock Ticker, etc.?
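Not an authoritative answer, but whatever bridge mechanism is chosen, its core job reduces to flattening the BLOB's XML into plain text that Lucene can tokenize. A stdlib sketch of just that extraction step, with the tag names and class name made up for illustration:

```java
import java.io.ByteArrayInputStream;
import javax.xml.parsers.DocumentBuilderFactory;
import org.w3c.dom.Document;

public class OrderXmlExtractor {
    // Flatten the order XML's text content so an indexer can tokenize it.
    // getTextContent() concatenates all text nodes under the root element.
    public static String extractText(byte[] orderXmlBlob) throws Exception {
        Document doc = DocumentBuilderFactory.newInstance()
                .newDocumentBuilder()
                .parse(new ByteArrayInputStream(orderXmlBlob));
        return doc.getDocumentElement().getTextContent().trim();
    }

    public static void main(String[] args) throws Exception {
        String xml = "<order><broker>Acme</broker><cusip>037833100</cusip></order>";
        System.out.println(extractText(xml.getBytes()));
    }
}
```

A custom bridge would wrap logic like this so that the BLOB column contributes a searchable string field to the index alongside the numeric OrderId.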
Thanks Emmanuel and John

I wish we had a shorter version of the XML configuration, something like this, rather than repeating the redundant information, each tag, every time.
Awesome! Thanks Paul and Emmanuel!

That's exactly what I wanted to know: a way to avoid burdening the master index copy every time an update is made. Rather, work on the local copies; if there are any updates, send them to the MasterAwaited queue and let the master do the sync work when it has time to do so.

This very much addresses my concern (posted in the other thread) about having an ultra-fast search engine rather than a very precise one.
Thanks Emmanuel, I got the idea of an inverted index!

But doesn't updating the index via those listeners each and every time slow down the search? I would prefer a search engine that is quick and efficient rather than super-precise in getting me the result list for iteration.

Do we also have a batch-based update system where, say, we can run it to sync up the index data structure (the file-system directory) every night from 3 AM to 4 AM, when the server is in zombie mode with seldom any hits or activity?
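I can't say what Hibernate Search ships for batching, but a nightly window like this can also be scheduled by hand: compute the delay until the next 3 AM and hand the sync job to a ScheduledExecutorService with a 24-hour period. A sketch of the delay computation (the sync job itself is left hypothetical):

```java
import java.time.Duration;
import java.time.LocalDateTime;
import java.time.LocalTime;

public class NightlyIndexSync {
    // Delay from `now` until the next occurrence of `target` (e.g. 3 AM).
    // Feed this into ScheduledExecutorService.scheduleAtFixedRate with a
    // 24-hour period to get a recurring nightly batch window.
    public static Duration untilNext(LocalTime target, LocalDateTime now) {
        LocalDateTime next = now.toLocalDate().atTime(target);
        if (!next.isAfter(now)) {
            next = next.plusDays(1); // already past today's window
        }
        return Duration.between(now, next);
    }

    public static void main(String[] args) {
        Duration d = untilNext(LocalTime.of(3, 0),
                               LocalDateTime.of(2024, 1, 1, 23, 0));
        System.out.println(d.toHours()); // 11 PM to 3 AM is 4 hours
    }
}
```

The trade-off matches the concern above: batch syncing keeps daytime searches fast at the cost of the index being up to a day stale.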
Can I have a MOM-based architecture, using JMS to configure queues that accept index-update commands, store them in order of receipt, and process them to update the index data structure?

Is there a chance of the index getting corrupted due to multiple writes, and how do we avoid that and maintain coherency?
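Not speaking for Lucene's internals, but the standard MOM-style answer to write corruption is exactly the single-writer design sketched in the question: many producers enqueue commands, and one consumer applies them in arrival order, so the index never sees two concurrent writes. An in-JVM sketch using a BlockingQueue as a stand-in for a real JMS queue (all names are mine):

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;

public class IndexUpdateQueue {
    // Thread-safe FIFO: many threads may submit concurrently,
    // but ordering of accepted commands is preserved.
    private final BlockingQueue<String> commands = new LinkedBlockingQueue<>();

    public void submit(String command) {
        commands.add(command); // producers: any number of threads
    }

    // A single consumer drains in arrival order; one writer means the
    // index can never see two concurrent mutations.
    public List<String> drain() {
        List<String> batch = new ArrayList<>();
        commands.drainTo(batch);
        return batch;
    }

    public static void main(String[] args) {
        IndexUpdateQueue q = new IndexUpdateQueue();
        q.submit("ADD doc-1");
        q.submit("UPDATE doc-1");
        System.out.println(q.drain());
    }
}
```

With JMS the same shape holds: a queue (not a topic) with a single consumer serializes the index writes, and persistence on the queue covers the commands awaiting the next sync.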
Does Hibernate Search aid in creating new indexes in existing directories?


Is it always done at startup, when a complete B-tree-like structure is generated, maintained, and kept in cache for cache hits and quick access?

How is the index kept in sync? And how are index conflicts handled?
Is this the correct way to configure search listeners in the config file, or do we have to configure them manually if we decide not to use Hibernate Annotations?
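For what it's worth, in the Hibernate Search 3.x era (an assumption about the version in use here) the event listeners could be registered by hand in hibernate.cfg.xml roughly like this; treat the listener class name as a sketch to verify against the docs for your exact version:

```xml
<hibernate-configuration>
  <session-factory>
    <!-- ... mappings and properties ... -->
    <event type="post-insert">
      <listener class="org.hibernate.search.event.FullTextIndexEventListener"/>
    </event>
    <event type="post-update">
      <listener class="org.hibernate.search.event.FullTextIndexEventListener"/>
    </event>
    <event type="post-delete">
      <listener class="org.hibernate.search.event.FullTextIndexEventListener"/>
    </event>
  </session-factory>
</hibernate-configuration>
```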

[ Edited to use code tags - Paul Sturrock ]
[ December 09, 2008: Message edited by: Paul Sturrock ]
A warm welcome to the authors of the book.

Am I correct in saying that Hibernate Search is, so to speak, a wrapper built with Apache Lucene as the core kernel doing the heavy lifting, taking care of the nitty-gritty details and leaving the business functionality to the developers?

Well, in that case, I would be really curious to know how the indexing works, and what happens in the peculiar scenario where multiple users are trying to add documents and it gets stuck in an index-corruption mode?

Does it internally use HQL to map the simple/complex data structures to maintain the indexes and their synchronization, reconciling differences if any? Or do we have to use Lucene querying?

Dinesh S