Recent posts by Jim terry



There are 9 types of java.lang.OutOfMemoryError, each signaling a unique memory-related issue within Java applications. Among these, ‘java.lang.OutOfMemoryError: Metaspace’ is a challenging error to diagnose. In this post, we’ll delve into the root causes behind this error, explore potential solutions, and discuss effective diagnostic methods to troubleshoot it. Let’s equip ourselves with the knowledge and tools to conquer this common adversary.

JVM Memory Regions
To better understand OutOfMemoryError, we first need to understand the different JVM memory regions. Here is a video clip that gives a good introduction to the different JVM memory regions. But in a nutshell, the JVM has the following memory regions:



Fig: JVM Memory Regions

1. Young Generation: Newly created application objects are stored in this region.
2. Old Generation: Application objects that live for a longer duration are promoted from the Young Generation to the Old Generation. Basically, this region holds long-lived objects.
3. Metaspace: Class definitions, method definitions and other metadata required to execute your program are stored in the Metaspace region. This region was added in Java 8. Before that, metadata definitions were stored in PermGen. Since Java 8, PermGen has been replaced by Metaspace.
4. Threads: Each application thread requires a thread stack. The space allocated for thread stacks, which contain method call information and local variables, is accounted for in this region.
5. Code Cache: The compiled native code (machine code) of methods is stored in this region for efficient execution.
6. Direct Buffer: ByteBuffer objects are used by modern frameworks (e.g., Spring WebClient) for efficient I/O operations. They are stored in this region.
7. GC (Garbage Collection): Memory required for automatic garbage collection to work is stored in this region.
8. JNI (Java Native Interface): Memory for interacting with native libraries and code written in other languages is stored in this region.
9. misc: Areas specific to certain JVM implementations or configurations, such as internal JVM structures or reserved memory spaces, are classified as ‘misc’ regions.
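
Several of these regions can be sized explicitly with JVM startup flags. Below is a hedged illustration; the sizes are arbitrary examples rather than recommendations, and ‘{app_name}’ is a placeholder for your application:

java -Xmx2g -XX:MaxMetaspaceSize=512m -Xss1m -XX:ReservedCodeCacheSize=240m -XX:MaxDirectMemorySize=256m {app_name}

Here -Xmx sizes the heap (Young + Old Generations), -XX:MaxMetaspaceSize caps Metaspace, -Xss sets the per-thread stack size, -XX:ReservedCodeCacheSize bounds the Code Cache, and -XX:MaxDirectMemorySize limits Direct Buffer memory.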

What is ‘java.lang.OutOfMemoryError: Metaspace’?


Fig: ‘java.lang.OutOfMemoryError: Metaspace’
When more class definitions and method definitions are created in the ‘Metaspace’ region than the allocated Metaspace memory limit (i.e., ‘-XX:MaxMetaspaceSize’) allows, the JVM will throw ‘java.lang.OutOfMemoryError: Metaspace’.

What causes ‘java.lang.OutOfMemoryError: Metaspace’?
‘java.lang.OutOfMemoryError: Metaspace’ is triggered by the JVM under the following circumstances:

1. Creating a large number of dynamic classes: Your application uses a scripting language such as Groovy, or Java Reflection, to create new classes at runtime.
2. Loading a large number of classes: Either your application itself has a lot of classes, or it uses a lot of 3rd party libraries/frameworks which contain a lot of classes.
3. Loading a large number of class loaders: Your application is loading a lot of class loaders.

Solutions for ‘OutOfMemoryError: Metaspace’
Following are the potential solutions to fix this error:

1. Increase Metaspace Size: If the OutOfMemoryError surfaced due to an increase in the number of classes loaded, then increase the JVM’s Metaspace size (-XX:MetaspaceSize and -XX:MaxMetaspaceSize), as shown in the example after this list. This solution is sufficient to fix most ‘OutOfMemoryError: Metaspace’ errors, because memory leaks rarely happen in the Metaspace region.
2. Fix Memory Leak: Analyze memory leaks in your application using the approach given in this post. Ensure that class definitions are properly dereferenced when they are no longer needed, so that they can be garbage collected.
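
For example, a hedged illustration of option 1 (the sizes are arbitrary examples, not tuning recommendations; ‘{app_name}’ is a placeholder for your application):

java -XX:MetaspaceSize=256m -XX:MaxMetaspaceSize=512m {app_name}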

Sample Program that generates ‘OutOfMemoryError: Metaspace’
To better understand ‘java.lang.OutOfMemoryError: Metaspace’, let’s try to simulate it. We’ll leverage BuggyApp, a simple open-source chaos engineering project. BuggyApp can generate various sorts of performance problems such as memory leaks, thread leaks, deadlocks, multiple BLOCKED threads, and more. Below is the Java program from the BuggyApp project that simulates ‘java.lang.OutOfMemoryError: Metaspace’ when executed.
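
A minimal sketch of such a program is shown below. It is an illustrative reconstruction based on the description that follows, assuming the open-source javassist library is on the classpath; the actual BuggyApp source may differ in its details.

package com.buggyapp;

import java.util.UUID;

import javassist.ClassPool;
import javassist.CtClass;

public class OOMMetaspace {

    public static void main(String[] args) throws Exception {
        ClassPool classPool = ClassPool.getDefault();
        while (true) {
            // Build a unique class name, e.g. com.buggyapp.MetaspaceObject<random-uuid>
            String className = "com.buggyapp.MetaspaceObject" + UUID.randomUUID();
            // Create a new dynamic class definition with javassist
            CtClass ctClass = classPool.makeClass(className);
            // Loading the generated class places its metadata in the Metaspace region
            ctClass.toClass();
        }
    }
}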

In the above program, the ‘OOMMetaspace’ class’s ‘main()’ method contains an infinite ‘while (true)’ loop. Within the loop, the thread uses the open-source library javassist to create dynamic classes whose names start with ‘com.buggyapp.MetaspaceObject’. Class names generated by this program will look something like ‘com.buggyapp.MetaspaceObjectb7a02000-ff51-4ef8-9433-3f16b92bba78’. When a large number of such dynamic classes are created, the Metaspace memory region reaches its limit and the JVM throws ‘java.lang.OutOfMemoryError: Metaspace’.

How to troubleshoot ‘OutOfMemoryError: Metaspace’?
To diagnose ‘OutOfMemoryError: Metaspace’, we need to inspect the contents of the Metaspace region. By inspecting its contents, you can figure out the leaking area of the application code. Here is a blog post that describes a few different approaches to inspect the contents of the Metaspace region. You can choose the approach that suits your requirements. My favorite options are:

1. -verbose:class: If you are running on Java version 8 or below, you can use this option. When you pass the ‘-verbose:class’ option to your application during startup, it will print all the classes that are loaded into memory. Loaded classes are printed to the standard error stream (i.e., the console, if you aren’t routing your error stream to a log file). Example:

java -verbose:class {app_name}
When we passed the ‘-verbose:class’ flag to the above program, the console started printing lines like the ones shown next.
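
The lines below are an illustrative, hedged example; the exact format and the ‘from’ source shown depend on the JVM version and on how the class was defined:

[Loaded com.buggyapp.MetaspaceObjectb7a02000-ff51-4ef8-9433-3f16b92bba78 from __JVM_DefineClass__]
[Loaded com.buggyapp.MetaspaceObject<another-random-uuid> from __JVM_DefineClass__]
...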

This is a clear indication that classes with the ‘com.buggyapp.MetaspaceObject’ prefix are being loaded into memory very frequently. This is a great clue/hint to let you know where in the application the leak is happening.

2. -Xlog:class+load: If you are running on Java version 9 or above, you can use this option. When you pass the ‘-Xlog:class+load’ option to your application during startup, it will print all the classes that are loaded into memory. Loaded classes are printed to the file path you have configured. Example:

java -Xlog:class+load=info:/opt/log/loadedClasses.txt {app_name}

If you are still unable to determine the origin of the leak based on the class names, then you can do a deep dive by taking a heap dump from the application. You can capture a heap dump using one of the 8 options discussed in this post; choose the option that fits your needs. Once a heap dump is captured, you need to use a tool like HeapHero, JHat, … to analyze it.
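
As one hedged example (jmap ships with the JDK; replace <pid> with your Java process ID, and the output path is just an illustration):

jmap -dump:live,format=b,file=/opt/tmp/heap-dump.hprof <pid>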

What is Heap Dump?
A Heap Dump is basically a snapshot of your application’s memory. It contains detailed information about the objects and data structures present in memory. It tells you what objects are present in memory, whom they are referencing, who is referencing them, what actual customer data is stored in them, how much space they occupy, and whether they are eligible for garbage collection. Heap dumps provide valuable insights into the memory usage patterns of an application, helping developers identify and resolve memory-related issues.

How to analyze Metaspace Memory leak through Heap Dump?
HeapHero is available in two modes:

1. Cloud: You can upload the dump to the HeapHero cloud and see the results.
2. On-Prem: You can register here, get HeapHero installed on your local machine, and then do the analysis.

Note: I prefer using the on-prem installation of the tool instead of the cloud edition, because heap dumps tend to contain sensitive information (such as SSNs, credit card numbers, VAT numbers, …) and I don’t want the dump to be analyzed in external locations.

Once the heap dump was captured from the above program, we uploaded it to the HeapHero tool. The tool analyzed the dump and generated a report. In the report, go to the ‘Histogram’ view. This view shows all the classes that are loaded into memory. In this view you will notice the classes with the prefix ‘com.buggyapp.MetaspaceObject’. Right-click on the ‘…’ that is next to the class name, then click on ‘List Object(s) with > incoming references’, as shown in the figure below.


Fig: Histogram view of showing all the loaded classes in memory

Once you do that, the tool will display all the incoming references of this particular class, showing the origin point of these classes as shown in the figure below. It clearly shows which part of the code is creating these class definitions. Once we know which part of the code is creating them, it is easy to fix the problem.


Fig: Incoming References of the class

Conclusion
In this post, we’ve covered a range of topics, from understanding JVM memory regions to diagnosing and resolving ‘java.lang.OutOfMemoryError: Metaspace’. We hope you’ve found the information useful and insightful. But our conversation doesn’t end here. Your experiences and insights are invaluable to us and to your fellow readers. We encourage you to share your encounters with ‘java.lang.OutOfMemoryError: Metaspace’ in the comments below. Whether it’s a unique solution you’ve discovered, a best practice you swear by, or even just a personal anecdote, your contributions can enrich the learning experience for everyone.



1 month ago


Garbage Collection (GC) plays an important role in Java’s memory management. It helps reclaim memory that is no longer in use. The garbage collector uses its own set of threads to reclaim memory; these threads are called GC threads. Sometimes the JVM can end up with either too many or too few GC threads. In this post, we will discuss why the JVM can end up having too many or too few GC threads, the consequences of it, and potential solutions to address them.

How to Find Your Application’s GC Thread Count
You can determine your application’s GC thread count by doing thread dump analysis, as outlined below:

1. Capture a thread dump from your production server (an example command is shown after this list).
2. Analyze the dump using a thread dump analysis tool like fastThread.
3. The tool will immediately report the GC thread count, as shown in the figure below.
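
As a hedged example of step 1 (jstack ships with the JDK; replace <pid> with your Java process ID, and the output path is just an illustration):

jstack -l <pid> > /opt/tmp/thread-dump.txt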



Fig: fastThread tool reporting GC Thread count

How to Set GC Thread Count
You can manually adjust the number of GC threads by setting the following two JVM arguments (an example is shown after the list):

-XX:ParallelGCThreads=n: Sets the number of threads used in the parallel phases of the garbage collector.
-XX:ConcGCThreads=n: Controls the number of threads used in the concurrent phases of the garbage collector.
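
For example, a startup command that pins both values might look like this (the numbers are arbitrary illustrations, not recommendations; ‘{app_name}’ is a placeholder for your application):

java -XX:ParallelGCThreads=8 -XX:ConcGCThreads=2 {app_name}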

What Is the Default GC Thread Count?
If you don’t explicitly set the GC thread count using the above two JVM arguments, then the default GC thread count is derived from the number of CPUs in the server/container.

-XX:ParallelGCThreads Default: On a Linux/x86 machine, the default is derived from the following formula:

if (num of processors <= 8) {
  return num of processors;
} else {
  return 8 + (num of processors - 8) * (5/8);
}
So if your JVM is running on a server with 32 processors, then the ParallelGCThreads value is going to be 23 (i.e., 8 + (32 - 8) * (5/8)).

-XX:ConcGCThreads Default: It’s derived from the following formula:

max((ParallelGCThreads+2)/4, 1)
So if your JVM is running on a server with 32 processors, then:

ParallelGCThreads value is going to be 23 (i.e., 8 + (32 - 8) * (5/8))
ConcGCThreads value is going to be 6 (i.e., max(25/4, 1), using integer division)
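
To make the arithmetic easy to reproduce, here is a small, hedged Java sketch of the two default-sizing formulas described above. The class and method names are made up for illustration; integer arithmetic is assumed, which matches the worked example.

public class GcThreadDefaults {

    // Default -XX:ParallelGCThreads on Linux/x86, per the formula above
    static int defaultParallelGCThreads(int cpus) {
        return (cpus <= 8) ? cpus : 8 + (cpus - 8) * 5 / 8;
    }

    // Default -XX:ConcGCThreads = max((ParallelGCThreads + 2) / 4, 1)
    static int defaultConcGCThreads(int parallelGCThreads) {
        return Math.max((parallelGCThreads + 2) / 4, 1);
    }

    public static void main(String[] args) {
        int parallel = defaultParallelGCThreads(32);      // 23
        int concurrent = defaultConcGCThreads(parallel);  // 6
        System.out.println(parallel + " / " + concurrent);
    }
}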

How JVM Can End Up with Too Many GC Threads
It’s possible for your JVM to unintentionally have too many GC threads, often without your awareness. This typically happens because the default number of GC threads is automatically determined based on the number of CPUs in your server or container.

For example, on a machine with 128 CPUs, the JVM might allocate around 80 threads for the parallel phase of garbage collection and about 20 threads for the concurrent phase, resulting in a total of approximately 100 GC threads.

If you’re running multiple JVMs on this 128-CPU machine, each JVM could end up with around 100 GC threads. This can lead to excessive resource usage because all these threads compete for the same CPU resources. The problem is particularly noticeable in containerized environments, where multiple applications share the same CPU cores: the JVM allocates more GC threads than necessary, which can degrade overall performance.

Why Is Having Too Many GC Threads a Problem?
While GC threads are essential for efficient memory management, having too many of them can lead to significant performance challenges in your Java application.

1. Increased Context Switching: When the number of GC threads is too high, the operating system must frequently switch between these threads. This leads to increased context switching overhead, where more CPU cycles are spent managing threads rather than executing your application’s code. As a result, your application may slow down significantly.

2. CPU Overhead: Each GC thread consumes CPU resources. If too many threads are active simultaneously, they can compete for CPU time, leaving less processing power available for your application’s primary tasks. This competition can degrade your application’s performance, especially in environments with limited CPU resources.

3. Memory Contention: With an excessive number of GC threads, there can be increased contention for memory resources. Multiple threads trying to access and modify memory simultaneously can lead to lock contention, which further slows down your application and can cause performance bottlenecks.

4. Increased GC Pause Times and Lower Throughput: When too many GC threads are active, the garbage collection process can become less efficient, leading to longer GC pause times where the application is temporarily halted. These extended pauses can cause noticeable delays or stutters in your application. Additionally, as more time is spent on garbage collection rather than processing requests, your application’s overall throughput may decrease, handling fewer transactions or requests per second and affecting its ability to scale and perform under load.

5. Higher Latency: Increased GC activity due to an excessive number of threads can lead to higher latency in responding to user requests or processing tasks. This is particularly problematic for applications that require low latency, such as real-time systems or high-frequency trading platforms, where even slight delays can have significant consequences.

6. Diminishing Returns: Beyond a certain point, adding more GC threads does not improve performance. Instead, it leads to diminishing returns, where the overhead of managing these threads outweighs the benefits of faster garbage collection. This can result in degraded application performance, rather than the intended optimization.

Why Is Having Too Few GC Threads a Problem?
While having too many GC threads can create performance issues, having too few GC threads can be equally problematic for your Java application. Here’s why:

1. Longer Garbage Collection Times: With fewer GC threads, the garbage collection process may take significantly longer to complete. Since fewer threads are available to handle the workload, the time required to reclaim memory increases, leading to extended GC pause times.

2. Increased Application Latency: Longer garbage collection times result in increased latency, particularly for applications that require low-latency operations. Users might experience delays, as the application becomes unresponsive while waiting for garbage collection to finish.

3. Reduced Throughput: A lower number of GC threads means the garbage collector can’t work as efficiently, leading to reduced overall throughput. Your application may process fewer requests or transactions per second, affecting its ability to scale under load.

4. Inefficient CPU Utilization: With too few GC threads, the CPU cores may not be fully utilized during garbage collection. This can lead to inefficient use of available resources, as some cores remain idle while others are overburdened.

5. Increased Risk of OutOfMemoryErrors and Memory Leaks: If the garbage collector is unable to keep up with the rate of memory allocation due to too few threads, it may not be able to reclaim memory quickly enough. This increases the risk of your application running out of memory, resulting in OutOfMemoryErrors and potential crashes. Additionally, insufficient GC threads can exacerbate memory leaks by slowing down the garbage collection process, allowing more unused objects to accumulate in memory. Over time, this can lead to excessive memory usage and further degrade application performance.

Solutions to Optimize GC Thread Count
If your application is suffering from performance issues due to an excessive or insufficient number of GC threads, consider manually setting the GC thread count using the above-mentioned JVM arguments, i.e.,

-XX:ParallelGCThreads=n
-XX:ConcGCThreads=n

Before making these changes in production, it’s essential to study your application’s GC behavior. Start by collecting and analyzing GC logs using tools like GCeasy. This analysis will help you identify whether the current thread count is causing performance bottlenecks. Based on these insights, you can make informed adjustments to the GC thread count without introducing new issues.
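
As a hedged example of how GC logging can be enabled (the log path is just an illustration and ‘{app_name}’ is a placeholder; pick whichever variant matches your Java version):

Java 9 and above: java -Xlog:gc*:file=/opt/log/gc.log {app_name}
Java 8 and below: java -XX:+PrintGCDetails -Xloggc:/opt/log/gc.log {app_name}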

Note: Always test changes in a controlled environment first to confirm that they improve performance before rolling them out to production.

Conclusion
Balancing the number of GC threads is key to ensuring your Java application runs smoothly. By carefully monitoring and adjusting these settings, you can avoid potential performance issues and keep your application operating efficiently.

1 month ago