Kumar Raja

Ranch Hand
since Mar 18, 2010
Kumar likes: Hibernate, Spring, Java

Recent posts by Kumar Raja

I took the Person class as an example here, but the actual entity looks like the one below.



A unique constraint is defined on PERSON_DT and PERSON_ID, and a sequence is used for ID generation.
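For illustration, an entity of that shape might look like the following sketch (everything except the PERSON_DT/PERSON_ID constraint is an assumption):

   import java.util.Date;
   import javax.persistence.*;

   @Entity
   @Table(name = "PERSON",
          uniqueConstraints = @UniqueConstraint(columnNames = {"PERSON_DT", "PERSON_ID"}))
   public class Person {

       @Id
       @GeneratedValue(strategy = GenerationType.SEQUENCE, generator = "personSeq")
       @SequenceGenerator(name = "personSeq", sequenceName = "PERSON_SEQ")
       private Long id;

       @Column(name = "PERSON_ID")
       private String personId;

       @Column(name = "PERSON_DT")
       @Temporal(TemporalType.DATE)
       private Date personDt;

       // getters and setters omitted
   }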
5 years ago
Hello All,

       I have a requirement to perform a clean insert (delete + insert) of a huge number of records (close to 100K) per request. For testing purposes, I'm running my code with 10K records. Even with 10K, the operation runs for 30 seconds, which is not acceptable. I'm doing some level of batch inserts as provided by Spring Data JPA, but the results are not satisfactory.

My code looks like this:
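In outline, it is a chunked save loop along these lines (a minimal sketch; the repository, entity, and chunk size are placeholders, not the original code):

   import java.util.ArrayList;
   import java.util.List;
   import java.util.stream.Stream;

   import org.springframework.stereotype.Service;
   import org.springframework.transaction.annotation.Transactional;

   @Service
   public class PersonBatchService {

       private static final int CHUNK_SIZE = 2000;

       // Hypothetical Spring Data repository:
       // interface PersonRepository extends JpaRepository<Person, Long> {}
       private final PersonRepository personRepository;

       public PersonBatchService(PersonRepository personRepository) {
           this.personRepository = personRepository;
       }

       @Transactional
       public void cleanInsert(Stream<Person> records) {
           personRepository.deleteAllInBatch();     // the "delete" part of the clean insert

           List<Person> chunk = new ArrayList<>(CHUNK_SIZE);
           records.forEach(p -> {
               chunk.add(p);
               if (chunk.size() == CHUNK_SIZE) {
                   personRepository.saveAll(chunk); // Hibernate flushes these as JDBC batches
                   chunk.clear();
               }
           });
           if (!chunk.isEmpty()) {
               personRepository.saveAll(chunk);     // remaining tail
           }
       }
   }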


I have also set the following properties:
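These are the usual Hibernate JDBC batching settings; in application.properties they look something like this (the batch size of 40 matches what the logs below show, the rest are illustrative):

   spring.jpa.properties.hibernate.jdbc.batch_size=40
   spring.jpa.properties.hibernate.order_inserts=true
   spring.jpa.properties.hibernate.order_updates=true
   # also batch entities that carry a @Version column
   spring.jpa.properties.hibernate.jdbc.batch_versioned_data=true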


   Looking at the logs, I noticed that the insert operation takes around 3-4 seconds to save 2000 records, while iterating the stream itself takes very little time. So I believe reading through the stream is not the bottleneck, but the inserts are. I also checked the logs and confirmed that Spring issues batches of 40 inserts, as per the property set.

I'm trying to see if I can improve performance by using multiple threads (say 2) that read from a blocking queue and, once they have accumulated say 2000 records, call save. In theory, this may give better results. But the problem is that, from what I've read, Spring manages transactions at the thread level, and a transaction cannot propagate across threads, whereas I need the whole operation (delete + insert) to be atomic. I looked into a few posts about Spring transaction management but could not find the right direction.

Is there a way I can achieve this kind of parallelism using Spring transactions? If Spring transactions are not the answer, are there any other techniques that can be used?

Thanks
Kumar  
5 years ago
I agree with the option of returning the result set as a single row. But I'm trying to understand whether MyBatis has a solution to such problems. If modifying the query is not an option, I can probably let MyBatis build two objects, one per row, and merge them into one at the service layer. I'm just trying to avoid that overhead, hoping MyBatis has an implicit way of handling this.
I have a result set from the DB that returns, say, two records as below:

   TRANID Type Name Amount
   1      B    ABC  100.00
   1      S    XYZ  -100.00

The above data represents a transaction where a seller and buyer are involved.

Now I need to map the above result set in MyBatis such that it returns me a transaction object with the structure below:

   Transaction : {
       id : 1,
       buyer : {
          name : "ABC",
          amt : "100.00"
       },
       seller : {
          name : "XYZ",
          amt : "-100.00"
       }
   }
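For reference, the target beans would be plain POJOs along these lines (a sketch; field types are assumptions):

   public class Transaction {
       private long id;
       private User buyer;
       private User seller;
       // getters and setters omitted
   }

   public class User {
       private String name;
       private String amt;
       // getters and setters omitted
   }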

If the DB had returned the data in one row, with both buyer and seller data flattened into one dimension, like

   TRANID  BNAME  BAMOUNT  SNAME  SAMOUNT
   1       ABC    100.00   XYZ    -100.00

then I could use a resultMap something like below:

   <resultMap id="transactionRM" type="Transaction">
    <id property="id" column="TRANID"/>
    <association property="buyer" type="User">
      <result property="name" column="BNAME"/>
      <result propert="amt" column="BAMT"/>
    </association>
    <association property="seller" type="User">
      <result property="name" column="SNAME"/>
      <result propert="amt" column="SAMT"/>
    </association>
   </resultMap>

and I would be able to achieve what I want, because the buyer and seller have unique aliases/column names.

But how can I achieve the same result if the data comes back in two rows, where one row is the buyer and the other the seller, and Type is a discriminator that determines whether a row belongs to the buyer or the seller?

I tried to define a resultMap for User, like

   <resultMap id ="buyerRM" type ="User">
      <result property="name" column="Name"/>
      <result property="amt" column="Amount"/>
   </resultMap>

   <resultMap id ="sellerRM" type ="User">
      <result property="name" column="Name"/>
      <result property="amt" column="Amount"/>
   </resultMap>

   <resultMap id="transacionRM" type="Transaction">
     <association property="buyer" resultMap="buyerRM" type="User"/>
     <association property="seller" resultMap="sellerRM" type="User">
   </resultMap>

The above resultMap will not work, because the same column names are defined for both buyer and seller, so the data will be duplicated.

Any suggestions?

Thanks in advance.
Any advice, please.

Does MyBatis implicitly support such a mapping? If not, I will probably have to handle the merging explicitly after getting the two result sets back.
I referred to the MyBatis documentation (http://www.mybatis.org/mybatis-3/), but I could not find a decent solution for mapping multiple cursors returned from a stored procedure in MyBatis.

For example, assume that my object is defined as
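something like the sketch below (the field names are assumptions):

   import java.util.List;

   public class Department {
       private long id;
       private String name;
       private List<Employee> employees;
       // getters and setters omitted
   }

   public class Employee {
       private long id;
       private String name;
       // getters and setters omitted
   }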


and my stored procedure takes a department id as input and returns the details of that department along with all the associated employees.


I know how to deal with a single refcursor being returned; I typically use
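a CALLABLE statement along these lines (a sketch assuming Oracle-style ref cursors; the procedure and parameter names are assumptions):

   <select id="getDepartment" statementType="CALLABLE" parameterType="map">
     { call get_department(
         #{deptId, mode=IN, jdbcType=INTEGER},
         #{dept, mode=OUT, jdbcType=CURSOR, javaType=java.sql.ResultSet, resultMap=departmentRM}
     ) }
   </select>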


and my resultMap may look like
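this (again a sketch; the column names are assumptions):

   <resultMap id="departmentRM" type="Department">
     <id property="id" column="DEPT_ID"/>
     <result property="name" column="DEPT_NAME"/>
   </resultMap>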



But I'm not sure what changes are needed in my mapping (resultMap and select) so that the stored procedure can return two cursors and MyBatis implicitly creates the Department bean with its Employees populated.
Hello All,

      Recently we started noticing a WARN-level message in our ActiveMQ logs:


WARN  | Transport Connection to: tcp://xx.xx.xx.xx:xxxxxx failed: java.net.SocketException: Connection reset | org.apache.activemq.broker.TransportConnection.Transport | ActiveMQ Transport: tcp:///xx.xx.xx.xx:xxxxxx@61616




Our network engineers determined that the IP address belongs to a subnet of one of our subscribers. So, in order to pinpoint the issue, we asked all our subscribers to shut down their listeners, and we ensured that there were no open connections on OpenWire. However, these WARN messages continued to be logged, even though there were no active subscribers.

What could the issue be here? What else can be done to pinpoint it? Even though our network team confirmed that the IP address listed in the WARN message comes from the same subnet as one of the subscribers, we were still getting the warning even after all subscribers were down.

Thanks
Kumar
7 years ago
Thank you Tim.

Yes, I recently heard about Tomcat Valves and will read more about them. Thanks for mentioning them here.
8 years ago
But I'm still curious to know whether there is a standard way to influence the order of filters declared across WEB-INF/web.xml and conf/web.xml.
8 years ago
To fix my original problem, I removed the audit-logging filter from the application's web.xml and used AOP for that purpose instead. It seems to be working fine.
8 years ago

Hello All,

As per my understanding, filters execute in the order they are defined in web.xml for the same url-mapping. But I could not find any reference on how this behaves when multiple filters are defined for the same url-mapping across the container's web.xml and an individual application's web.xml.

My assumption was that, since an application is deployed inside a container, say Tomcat, which has a web.xml of its own, any request targeted at such a deployed app would have to go through the filter chain defined in tomcat/conf/web.xml before going through the application's filters. But that understanding appears to be wrong.

I have a simple web app with two filters defined in web.xml as follows:
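For illustration, declarations of this shape (the filter classes are assumptions):

   <filter>
     <filter-name>AppFilterOne</filter-name>
     <filter-class>com.myapp.filters.AppFilterOne</filter-class>
   </filter>
   <filter>
     <filter-name>AppFilterTwo</filter-name>
     <filter-class>com.myapp.filters.AppFilterTwo</filter-class>
   </filter>

   <filter-mapping>
     <filter-name>AppFilterOne</filter-name>
     <url-pattern>/*</url-pattern>
   </filter-mapping>
   <filter-mapping>
     <filter-name>AppFilterTwo</filter-name>
     <url-pattern>/*</url-pattern>
   </filter-mapping>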



Without any further change, if I hit my web URL http://localhost:8080/myapp, I see AppFilterOne and AppFilterTwo being hit, in that order.

Now, I added two more filters in my tomcat/conf/web.xml (my Tomcat version is 7.0.27, which supports Servlet 3.0):
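Again as a sketch (the filter classes are assumptions):

   <filter>
     <filter-name>TomWebFilterOne</filter-name>
     <filter-class>com.mycompany.filters.TomWebFilterOne</filter-class>
   </filter>
   <filter>
     <filter-name>TomWebFilterTwo</filter-name>
     <filter-class>com.mycompany.filters.TomWebFilterTwo</filter-class>
   </filter>

   <filter-mapping>
     <filter-name>TomWebFilterOne</filter-name>
     <url-pattern>/*</url-pattern>
   </filter-mapping>
   <filter-mapping>
     <filter-name>TomWebFilterTwo</filter-name>
     <url-pattern>/*</url-pattern>
   </filter-mapping>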



Now, if I access http://localhost:8080/myapp, the filters execute in the following order:
AppFilterOne
AppFilterTwo
TomWebFilterOne
TomWebFilterTwo

My initial assumption was that the TomWebFilters would intercept first, followed by the application-specific filters.

If the results I see are actually correct and that is how filters work, is there a way I can influence their execution order? I have heard about <absolute-ordering>, but I'm not sure whether that is the correct approach here.


A little background on my actual problem:
We have a bunch of web applications deployed on different VMs, each in a Tomcat 7.x instance. Each of these web applications has an audit filter that audits and logs every incoming request. However, in tomcat/conf/web.xml a filter is defined to support NTLM authentication (the JCIFS flavor). Because of this setup, every incoming request is first logged (by the audit filter) and then filtered for NTLM. We want NTLM to happen first, before anything else.

There are a couple of approaches I'm considering here:
A) Instead of defining the filter in tomcat/conf/web.xml, define it as the first filter in each application.
B) Have the NTLM filter set a request attribute stating the status of the NTLM process, and have our audit filter check it (i.e., for the two 401 HTTP status codes of the handshake) and return early.

I'm not particularly happy with either approach, hence I'm wondering what else can be done.

Thanks

8 years ago
Hi Tim,

As I mentioned, the whole connection handling is done by MyBatis, and the application code has little hold on the actual connection object. Also, I do not see any option in MyBatis that explicitly closes connections.

You mentioned:

There is a pool option that can cause abandoned connections to be tracked.

Is this an option in the Tomcat JDBC pool library?
8 years ago
Thank you Paul. Our code is based on the MyBatis framework, where acquiring/closing connections is entirely handled by the framework; we do little in terms of handling connections directly. However, I will take a look to see if anything in our code is leading to that problem.

However, I did notice an Out of Memory error in the logs. I did not bother to mention it because it was logged a few minutes after the first abandoned connection. So I thought the Out of Memory error could not have caused the connections being lost, but I think I might be wrong.
8 years ago
Hi All,

My question is somewhat related to another post of mine from a while back. I have the following settings in my context.xml:
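For illustration, a Resource definition of this shape (the values shown are assumptions; removeAbandoned is the property in question below):

   <Resource name="jdbc/myDS"
             auth="Container"
             type="javax.sql.DataSource"
             factory="org.apache.tomcat.jdbc.pool.DataSourceFactory"
             driverClassName="oracle.jdbc.OracleDriver"
             url="jdbc:oracle:thin:@//dbhost:1521/mydb"
             username="appuser"
             password="secret"
             maxActive="40"
             maxIdle="10"
             maxWait="30000"
             removeAbandoned="true"
             removeAbandonedTimeout="60"
             logAbandoned="true"/>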



and we are accessing the datasource concurrently. I have tried 10 to 50 concurrent threads on some particularly slow queries; connection pooling works without any issue for 40 concurrent accesses. However, it fails quite often with 50 threads, with





I could not understand why the pool needs to abandon a connection. The removeAbandoned property is set to true; could that have caused the issue?

Is there any way I can log, at any point in time, the number of connections in use out of the maximum active connections? I looked into JDBC interceptors, but could not find a way to print/log active versus currently-in-use connections.
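For illustration, one way to read those numbers is to get hold of the underlying Tomcat pool and log its counters (a sketch; how the DataSource reference is obtained depends on the setup):

   import org.apache.tomcat.jdbc.pool.ConnectionPool;
   import org.apache.tomcat.jdbc.pool.DataSource;

   public class PoolStatsLogger {

       public static void log(DataSource ds) {
           ConnectionPool pool = ds.getPool(); // null until the pool is first used
           if (pool != null) {
               System.out.printf("busy: %d, idle: %d, total: %d, maxActive: %d%n",
                       pool.getActive(),    // connections currently handed out
                       pool.getIdle(),      // connections sitting in the pool
                       pool.getSize(),      // busy + idle
                       ds.getMaxActive());  // configured ceiling
           }
       }
   }

Setting jmxEnabled="true" on the pool exposes the same counters over JMX as well.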

Thanks
8 years ago