Pierre-Hugues Charbonneau
Greenhorn
since Feb 05, 2011
Java EE - Enterprise consultant

Recent posts by Pierre-Hugues Charbonneau

Hi Brendon,

If you start WebLogic 8.1 from MyEclipse, you will need to locate the section where you can add/modify the JVM start-up arguments.
-XX:+HeapDumpOnOutOfMemoryError can be added before or after the memory arguments such as -Xms and -Xmx.

If you can't figure it out in Eclipse, you can still edit the WebLogic commEnv.cmd script from your WebLogic 8.1 binary installation and add the parameter there.

This script file is located under:

<WL8 HOME>/weblogic81/common/bin/commEnv.cmd (for Windows)
<WL8 HOME>/weblogic81/common/bin/commEnv.sh (for UNIX)
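For illustration, the resulting memory arguments in commEnv might look like the line below (the heap sizes are placeholders; keep your own values):

```shell
# Illustrative only: heap sizes are placeholders, keep your own tuned values.
# -XX:+HeapDumpOnOutOfMemoryError writes an HPROF snapshot when the Java Heap is exhausted.
MEM_ARGS="-Xms512m -Xmx512m -XX:+HeapDumpOnOutOfMemoryError"
```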
12 years ago
Hi Brendon,

It depends on which Sun JVM 1.4 version you are running with your WebLogic 8.1 server.

Heap Dump generation following an OutOfMemoryError (HPROF format) was first added in JDK 1.4.2_12 and can be enabled by adding -XX:+HeapDumpOnOutOfMemoryError to your JVM start-up arguments.

Regards,
P-H
12 years ago
Hi Chetan,

Can you please also look at your WAS server log? Heap Dump generation normally indicates that your application triggered an OutOfMemoryError (Java Heap or native memory depletion) condition.

Please provide a snapshot of your server log showing the error and the stack trace of the first OOM occurrence, if any.

Can you also tell us which JDK vendor you are using with WAS 6: Sun HotSpot VM or IBM JDK on AIX?
12 years ago
Hi Dennis,

This normally means that your Tomcat pool is running out of Connections. At that point the Tomcat ObjectPool code will start to wait until a Connection becomes available.

Do you see other Threads involved in actual database transactions, or just waiting to acquire a Connection? Also, what are your current Tomcat settings for this particular data source?

Possible scenarios:

- Database hanging scenario causing rapid depletion of your Tomcat JDBC Data Source (long running query, database slowdown condition, too much load vs. tuning etc.)
- JDBC DataSource/Connection leak

At this point, please provide more detail on the other Threads from your captured Thread Dump so we can pinpoint the root cause, as per the process below:

- Break down the Thread Dump into logical silos:
- What is the total # of active Threads (executing a request)?
- What is the total # of Threads waiting to acquire a Connection from your JDBC DataSource?
- What is the total # of Threads involved in an actual transaction with your JDBC DataSource?
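As a rough sketch of that breakdown, assuming the dump was saved to a file: the patterns below are examples only (commons-pool's borrowObject and an Oracle JDBC driver), so match them against the stack traces you actually see.

```shell
# Sketch: split a saved thread dump into logical silos by pattern matching.
# The sample dump and patterns are illustrative; adapt them to your own stack traces.
cat > threads.txt <<'EOF'
"http-8080-1" java.lang.Thread.State: RUNNABLE
        at oracle.jdbc.driver.OracleStatement.executeQuery
"http-8080-2" java.lang.Thread.State: WAITING
        at org.apache.commons.pool.impl.GenericObjectPool.borrowObject
EOF
echo "Active (executing a request):      $(grep -c 'Thread.State: RUNNABLE' threads.txt)"
echo "Waiting to acquire a Connection:   $(grep -c 'borrowObject' threads.txt)"
echo "In an actual database transaction: $(grep -c 'oracle.jdbc' threads.txt)"
```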
12 years ago
Hi Suresh,

Jeanne is correct.

In summary:

* Default behaviour for application server *

For 1 EAR file containing 2 WAR files within the same JVM, the class loader tree looks like this:

1. System class loader (JDK and application server libraries)
2. Parent class loader (EAR file)
2.1 Child class loader 1 (WAR file #1)
2.2 Child class loader 2 (WAR file #2)

- A static variable defined in code on the system class path is visible to everyone
- A static variable defined within the EAR itself (e.g. via a JAR file) is visible to both the EAR code and WAR files #1 and #2
- A static variable defined within a WAR file is only visible within that WAR file's scope

** Note that each application server provides tuning to override the default behaviour, e.g. a WAR file can override/prefer any class found within the parent class loader **
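The delegation chain above can be observed directly from code. A minimal sketch (the class name is illustrative): inside a WAR, the walk would start at the WAR's child loader and climb through the EAR loader up to the system loader.

```java
// Minimal sketch: walk the class loader delegation chain of a class.
// The bootstrap loader sits at the top and is reported as null.
public class ClassLoaderWalk {
    public static void main(String[] args) {
        ClassLoader cl = ClassLoaderWalk.class.getClassLoader();
        while (cl != null) {
            System.out.println(cl.getClass().getName());
            cl = cl.getParent(); // null once we reach the bootstrap loader
        }
    }
}
```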

Regards,
Hi Aman,

Is there any problem preventing you from gathering/generating a Thread Dump from your production environment?

Regards.
Thanks Aman,

Happy to help. Please just let me know if you need help with the Thread Dump analysis.

Regards.
Hi Aman,

In order to troubleshoot these kinds of problems, you will need to generate a few JVM Thread Dumps during your business process execution.

Following and analysing each snapshot will allow you to pinpoint the bottleneck and root cause so you can implement corrective measures.

What are your environment specifications? The Thread Dump generation technique depends on your JDK vendor and version.

Once you have a few snapshots, have a look at the Thread executing your EJB method processBatch and post the Thread snippet to this thread.
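For reference, two common generation techniques (replace <pid> with your Java process id):

```shell
# Sun HotSpot JDK 5+: jstack ships with the JDK
jstack <pid> > threaddump.txt

# UNIX/Linux, HotSpot and IBM JDK: SIGQUIT triggers a Thread Dump
# (HotSpot writes to stdout; the IBM JDK writes a javacore file)
kill -3 <pid>
```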
Hi M Kulkarni,

What are the specifications of your environment, e.g. which version of Java (Sun/IBM JDK etc.)?

Find below one approach we have used successfully in many production environments with Sun JDK 1.5/1.6 and IBM JVM 1.5/1.6:

- Perform a load test with a proper load ratio of the targeted production load
- Generate a Java Heap Dump (the latest JDK versions allow you to generate a Heap Dump on the fly, which is extremely useful for analysing your application's memory footprint)
- Analyse the Heap Dump using the Eclipse Memory Analyzer (MAT)

The Heap Dump analysis will give you a breakdown of your objects and allow you to perform searches, so you can come up with a proper Java Heap breakdown and a percentage for each silo you are interested in.
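As an illustration of the on-the-fly generation step on a Sun HotSpot JDK 6 (replace <pid> with your Java process id; the file name and /tmp path are arbitrary):

```shell
# Sun HotSpot JDK 6: dump the live Java Heap of a running JVM on the fly
jmap -dump:live,format=b,file=heap.hprof <pid>

# Alternatively, have the JVM write a dump automatically on OutOfMemoryError
# by adding these start-up arguments:
# -XX:+HeapDumpOnOutOfMemoryError -XX:HeapDumpPath=/tmp
```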



13 years ago