I have a text file called class_content_decider.config and I would like to develop a simple Java application (TestDynamicClass.java) that reads this one-liner config file and creates a DYNAMIC class (an object built on the fly).
Say, for example, class_content_decider.config contains:
firstname, lastname, age, dob, company, salary
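A true on-the-fly class would need bytecode generation (e.g. Javassist or ByteBuddy), but the idea can be sketched with the JDK alone by backing the "dynamic class" with a Map keyed by the configured field names. This is a minimal illustration, not the only way to do it:

```java
import java.util.LinkedHashMap;
import java.util.Map;

// Minimal sketch: a "dynamic class" backed by a Map. A real on-the-fly
// class would require bytecode generation (Javassist/ByteBuddy); this
// JDK-only version just captures the idea for illustration.
public class TestDynamicClass {

    // Parse the one-liner config ("firstname, lastname, ...") into field
    // names and return an empty record whose keys are exactly those fields.
    static Map<String, Object> newDynamicInstance(String configLine) {
        Map<String, Object> instance = new LinkedHashMap<>();
        for (String field : configLine.split(",")) {
            instance.put(field.trim(), null); // every configured field starts unset
        }
        return instance;
    }

    public static void main(String[] args) {
        // In the real app this line would be read from class_content_decider.config
        String configLine = "firstname, lastname, age, dob, company, salary";
        Map<String, Object> person = newDynamicInstance(configLine);
        person.put("firstname", "Jane");
        System.out.println(person.keySet());
        // prints [firstname, lastname, age, dob, company, salary]
    }
}
```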
I have a particular business case and wanted to know whether Pentaho Reporting would help me achieve this functionality. It's a J2EE-based project dealing with financial data that we receive from a Bloomberg Terminal in XML format. We apply mathematical models to forecast the company's stock price 3, 5, and 7 years down the line, generating dynamic spreadsheets that include pie charts and bar graphs, on which we further run (time-series) regression models to produce the speculated risk and returns of the firm as juxtaposed with the benchmarks used.
Let me know if Pentaho has some such solutions for this.
Do you think singleton scope is a good idea in multithreaded environments? How do we maintain synchronization if multiple threads are trying to modify the same singleton bean object?
Do you think using prototype scope is better in such a case, or does having a separate copy of the bean for each client make it a dead tortoise speed-wise?
I am especially interested in high-throughput, ultra-low-latency trading applications where timed execution is of prime essence. Losing a minute (in creating a copy of the bean) could cause the trade to go stale and enter a dead queue, eventually losing out on $$.
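For what it's worth, a singleton bean needs no locking at all if it is stateless; when it does hold mutable state, java.util.concurrent primitives keep it thread-safe without the cost of a per-client copy. A hypothetical sketch (the bean name and counter are illustrative, not from the original post):

```java
import java.util.concurrent.atomic.AtomicLong;

// Hypothetical sketch: a singleton-scoped bean kept thread-safe by holding
// its only mutable state in an AtomicLong instead of using synchronized
// blocks. A fully stateless singleton would need no guarding at all, which
// is why singleton scope is usually fine (and fast) under heavy concurrency.
public class OrderCounterBean {
    private final AtomicLong executedOrders = new AtomicLong();

    // Many threads may call this concurrently; lock-free increment.
    public long recordExecution() {
        return executedOrders.incrementAndGet();
    }

    public long total() {
        return executedOrders.get();
    }

    public static void main(String[] args) {
        OrderCounterBean bean = new OrderCounterBean();
        bean.recordExecution();
        bean.recordExecution();
        System.out.println(bean.total()); // prints 2
    }
}
```

Prototype scope trades the locking away for a per-client allocation (and GC pressure); on a latency-critical path an immutable or stateless singleton usually beats both.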
So from what I understand about DI from my past experience, the dependency you inject is FIXED, so if the object moves from one VM to another you are in a pretty much messed-up state when trying to re-work the dependencies and their injection.
Comparing it to a JNDI service: JNDI will be able to locate the object irrespective of where on the subnet the object lives.
Do you think this is one of the biggest weaknesses of the DI pattern, or are there bigger issues, and workarounds if any?
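One common workaround for the "fixed reference" concern is to inject a provider rather than the instance itself, so resolution happens at call time, much like a JNDI lookup. A minimal sketch assuming a hypothetical PricingClient and a registry lookup (none of these names come from the original posts):

```java
import java.util.function.Supplier;

// Sketch: instead of injecting a FIXED object reference, inject a Supplier
// (a provider / lookup handle). The target is re-resolved on every call, so
// it can be re-located after it moves, similar to a JNDI lookup. All names
// here are hypothetical.
public class PricingClient {
    private final Supplier<String> serviceLocator; // stands in for a JNDI/registry lookup

    public PricingClient(Supplier<String> serviceLocator) {
        this.serviceLocator = serviceLocator;
    }

    public String callService() {
        // Re-resolve instead of holding a stale reference.
        return "connected to " + serviceLocator.get();
    }

    public static void main(String[] args) {
        // The container would inject this lambda; here we wire it by hand.
        PricingClient client = new PricingClient(() -> "vm-2:9001");
        System.out.println(client.callService()); // prints "connected to vm-2:9001"
    }
}
```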
We are extensively using Spring in our projects and make heavy use of Dependency Injection and AOP for cross-cutting-concern modularization. I would really like to know something about the SealedAdapter class that uses the @Inject annotation to inject the dependency: how is it different from the usual DI we perform by reading beans from an XML configuration?
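Mechanically, annotation-driven injection replaces the XML &lt;property&gt; entry with a field-level marker that the container discovers by reflection. A toy sketch of what happens under the hood; the Inject annotation and SealedAdapter below are stand-ins defined locally, not Spring's or JSR-330's real classes:

```java
import java.lang.annotation.ElementType;
import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;
import java.lang.annotation.Target;
import java.lang.reflect.Field;

// Toy illustration of annotation-driven DI: the "container" scans fields
// for @Inject and populates them reflectively, instead of the wiring being
// listed in an XML file. Inject and SealedAdapter are local stand-ins.
public class AnnotationDiDemo {

    @Retention(RetentionPolicy.RUNTIME)
    @Target(ElementType.FIELD)
    @interface Inject {}

    static class Dependency {
        String name() { return "real dependency"; }
    }

    static class SealedAdapter {
        @Inject Dependency dependency;   // no XML entry needed for this wiring
    }

    // A toy "container": find @Inject fields and instantiate their types.
    static void wire(Object bean) throws Exception {
        for (Field f : bean.getClass().getDeclaredFields()) {
            if (f.isAnnotationPresent(Inject.class)) {
                f.setAccessible(true);
                f.set(bean, f.getType().getDeclaredConstructor().newInstance());
            }
        }
    }

    public static void main(String[] args) throws Exception {
        SealedAdapter adapter = new SealedAdapter();
        wire(adapter);
        System.out.println(adapter.dependency.name()); // prints "real dependency"
    }
}
```

The functional difference is mostly where the wiring metadata lives (in the class vs. in an external XML file); the container's resolution logic is the same in both cases.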
So I have a trade order execution system where, once a stock order is executed and confirmed, the complete order.xml is stored in an Oracle DB as a BLOB field. Now I have 2 fields which I need to index and search on: 1. the numeric OrderId field, and 2. the BLOB order.xml field.
Would a Hibernate Search custom field bridge and a Lucene Fieldable lazy-field implementation help in such a case to read the XML tags, extract the CDATA text, and index those words, so the orders become searchable by Broker Name, Customer Name, CUSIP Id, Stock Ticker, etc.?
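The extraction step such a custom field bridge would perform on the BLOB can be shown with the JDK's own XML parser. The tag names below are assumptions about the order.xml layout; in a real bridge, each extracted value would then be handed to Lucene as an indexed field:

```java
import java.io.ByteArrayInputStream;
import java.nio.charset.StandardCharsets;
import javax.xml.parsers.DocumentBuilderFactory;
import org.w3c.dom.Document;

// Sketch of the extraction a custom field bridge would do on the order.xml
// BLOB: parse the XML and pull the text (CDATA included) out of the tags
// that should be searchable. Tag names here are assumed, not from the post.
public class OrderXmlExtractor {

    static String extractTag(byte[] orderXmlBlob, String tagName) throws Exception {
        Document doc = DocumentBuilderFactory.newInstance()
                .newDocumentBuilder()
                .parse(new ByteArrayInputStream(orderXmlBlob));
        // getTextContent() merges plain text and CDATA sections alike.
        return doc.getElementsByTagName(tagName).item(0).getTextContent().trim();
    }

    public static void main(String[] args) throws Exception {
        byte[] blob = ("<order><brokerName><![CDATA[ACME Securities]]></brokerName>"
                + "<ticker>IBM</ticker></order>").getBytes(StandardCharsets.UTF_8);
        System.out.println(extractTag(blob, "brokerName")); // prints "ACME Securities"
        System.out.println(extractTag(blob, "ticker"));     // prints "IBM"
    }
}
```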
That's exactly what I wanted to know: a way to avoid burdening the master index copy every time an update is made. Rather, work on the local copies and, if there are any updates, send them to the MasterAwaited queue and let the master do the sync work when it has time to do so.
This very much addresses my concern (posted in the other thread) about having an ultra-fast search engine rather than a very precise one.
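The queue pattern described above can be sketched with a BlockingQueue: writers drop index updates into a pending queue instead of touching the master, and the master drains the queue in bulk whenever it gets to it. Names are illustrative (Hibernate Search's JMS master/slave backend follows a similar shape, though its API differs):

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;

// Sketch of the "MasterAwaited queue" idea: the hot path only enqueues,
// and the master applies queued updates in bulk at its leisure.
public class MasterAwaitedQueueDemo {

    static final BlockingQueue<String> masterAwaited = new LinkedBlockingQueue<>();

    // Called on the hot path: cheap, non-blocking enqueue, no index I/O.
    static void submitUpdate(String update) {
        masterAwaited.offer(update);
    }

    // Called by the master when it has time: take everything queued so far.
    static List<String> drainToMaster() {
        List<String> batch = new ArrayList<>();
        masterAwaited.drainTo(batch);
        return batch; // in real life: apply each update to the master index here
    }

    public static void main(String[] args) {
        submitUpdate("order 101 indexed fields");
        submitUpdate("order 102 indexed fields");
        System.out.println(drainToMaster().size()); // prints 2
    }
}
```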
Thanks Emmanuel, I got the idea of an inverted index!
But doesn't updating the index via those listeners every single time slow down the search? Because I would prefer a search engine that is very quick and efficient rather than 'very-super' precise in getting me the return list for iteration.
Do we have a batch-based update system as well, where, say, we can run it to sync up the index data structure (the file-system directory) every night from 3 AM to 4 AM, when the server is in zombie mode with seldom any hits or activity?
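The scheduling half of that nightly window can be sketched with the JDK alone; only the delay computation is shown, and the actual re-index call is left as a placeholder for whatever batch rebuild the engine provides:

```java
import java.time.Duration;
import java.time.LocalDateTime;
import java.time.LocalTime;

// Sketch of scheduling a nightly 3 AM batch re-sync: compute the delay
// until the next 3 AM, then hand it to a ScheduledExecutorService that
// repeats every 24 hours. The re-index body itself is a placeholder.
public class NightlySyncScheduler {

    // How long to wait from 'now' until the next occurrence of 'target'.
    static Duration delayUntil(LocalDateTime now, LocalTime target) {
        LocalDateTime next = now.toLocalDate().atTime(target);
        if (!next.isAfter(now)) {
            next = next.plusDays(1); // already past 3 AM today; aim for tomorrow
        }
        return Duration.between(now, next);
    }

    public static void main(String[] args) {
        Duration d = delayUntil(LocalDateTime.now(), LocalTime.of(3, 0));
        System.out.println("next index sync in " + d.toMinutes() + " minutes");
        // In the real app: scheduler.scheduleAtFixedRate(reindexTask,
        //     d.toMillis(), Duration.ofDays(1).toMillis(), TimeUnit.MILLISECONDS);
    }
}
```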
Am I correct in saying that Hibernate Search, so to speak, is a wrapper built with Apache Lucene as the core kernel doing the heavy lifting, taking care of the nitty-gritty details and leaving the business functionality to the developers?
Well, in that case, I would be really curious to know how the indexing would work, and what happens in the peculiar scenario where multiple users are trying to add documents and it gets stuck in an index-corruption state?
Does it internally use HQL to map the simple/complex data structures, maintaining the indexes and their synchronization and reconciling differences, if any? Or do we have to use Lucene querying?