Fix “java.lang.OutOfMemoryError: Java heap space” in Java: Causes, Heap Basics, and Practical Solutions

Table of Contents

1. Introduction

When you develop in Java, have you ever had your application suddenly crash and the console shows:

java.lang.OutOfMemoryError: Java heap space

This error means “Java has run out of usable memory (the heap).”
However, from the error message alone, it’s not immediately obvious:

  • What caused the heap to run out
  • What you should adjust, and how
  • Whether the problem is in the code or in the configuration

As a result, people often resort to “quick fixes” like “just increase -Xmx” or “add more server memory.”

But increasing heap size without understanding the root cause is not only not a real fix—it can also trigger other problems.

  • GC (garbage collection) becomes heavier and response times degrade
  • Overall server memory gets tight and affects other processes
  • A real memory leak remains, and OutOfMemoryError occurs again

That’s why “java heap space” is not just “low memory.”
You should treat it as a sign of a compound issue involving application design, implementation, and infrastructure settings.

1-1. Intended Audience

This article is intended for readers who:

  • Understand Java basics (classes, methods, collections, etc.)
  • But don’t fully understand how memory is managed inside the JVM
  • Have encountered “java heap space” or OutOfMemoryError in development/test/production—or want to be prepared
  • Run Java on Docker/containers/cloud and feel slightly unsure about memory settings

It doesn’t matter how many years of Java experience you have.
If you want to properly understand the error and learn to isolate the cause by yourself, this guide aims to be directly useful in real work.

1-2. What You’ll Learn in This Article

In this article, we explain the “java heap space” error from the mechanism upward—not just a list of fixes.

Key topics include:

  • What the Java heap is
    • How it differs from the stack
    • Where objects are allocated
  • Common patterns that cause “java heap space”
    • Bulk loading large data
    • Overgrowing collections and caches
    • Memory leaks (code that keeps references alive)
  • How to check and increase heap size
    • Command-line options (-Xms, -Xmx)
    • IDE settings (Eclipse / IntelliJ, etc.)
    • Application server configuration points (Tomcat, etc.)
  • Memory-saving techniques in code
    • Revisiting how you use collections
    • Pitfalls when using streams and lambdas
    • Chunking strategies for large data
  • The relationship between GC and the heap
    • How GC basically works
    • How to read GC logs at a basic level
  • Detecting memory leaks and using tools
    • Getting a heap dump
    • Getting started with analysis using VisualVM or Eclipse MAT
  • Things to watch out for in container environments (Docker / Kubernetes)
    • The relationship between containers and -Xmx
    • Memory limits via cgroups and the OOM Killer

In the latter half of the article, we also answer common questions in an FAQ format, such as:

  • “Should I just increase the heap for now?”
  • “How far can I safely increase the heap?”
  • “How can I roughly tell if it’s a memory leak?”

1-3. How to Read This Article

The “java heap space” error is important both for people who:

  • Need to fix a production incident right now
  • Want to prevent issues before they happen

If you need an immediate fix, you can jump ahead to practical sections such as:

  • How to change heap size
  • How to check for memory leaks

On the other hand, if you want a thorough understanding, read in this order:

  1. The basics: “What the Java heap is”
  2. Typical causes
  3. Then solutions and tuning steps

This flow will help you cleanly understand the mechanism behind the error.

2. What Is the Java Heap?

To properly understand the “java heap space” error, you first need to know how Java manages memory.
In Java, memory is divided into multiple areas by purpose, and among them the heap plays a critical role as the memory space for objects.

2-1. Big Picture of Java Memory Areas

Java applications run on the JVM (Java Virtual Machine).
The JVM has multiple memory areas to handle different kinds of data. The most common three are:

■ Types of Memory Areas

  • Heap: The area where objects created by the application are stored. If this runs out, you get the “java heap space” error.
  • Stack: The area for method calls, local variables, references, and more. If this overflows, you get a StackOverflowError.
  • Method Area / Metaspace: Stores class information, constants, metadata, and JIT compilation results.

In Java, all objects created with new are placed on the heap.

2-2. The Role of the Heap

The Java heap is where things like the following are stored:

  • Objects created with new
  • Arrays (including the contents of List/Map, etc.)
  • Objects generated internally by lambdas
  • Strings and buffers used by StringBuilder
  • Data structures used within the collection framework

In other words, when Java needs to “keep something in memory,” it is almost always stored on the heap.

2-3. What Happens When the Heap Runs Out?

If the heap is too small—or the application creates too many objects—Java runs GC (garbage collection) to reclaim memory by removing unused objects.

But if repeated GC still can’t free enough memory, and the JVM can no longer allocate memory, you’ll get:

java.lang.OutOfMemoryError: Java heap space

and the application will be forced to stop.

2-4. “Just Increase the Heap” Is Half Right and Half Wrong

If the heap is simply too small, increasing it can solve the issue—for example:

-Xms1024m -Xmx2048m

However, if the root cause is a memory leak or inefficient processing of huge data in code, increasing the heap only buys time and does not fix the underlying problem.

In short, understanding “why the heap is running out” is the most important thing.

2-5. Heap Layout (Eden / Survivor / Old)

The Java heap is broadly split into two parts:

  • Young generation (newly created objects)
    • Eden
    • Survivor (S0, S1)
  • Old generation (long-lived objects)

GC works differently depending on the area.

Young generation

Objects are first placed in Eden, and short-lived objects are quickly removed.
GC runs frequently here, but it is relatively lightweight.

Old generation

Objects that survive long enough are promoted from Young to Old.
GC in Old is more expensive, so if this area keeps growing, it can cause latency or pauses.

In many cases, a “heap space” error ultimately happens because the Old generation fills up.

2-6. Why Heap Shortage Is Common for Beginners and Intermediate Devs

Because Java performs garbage collection automatically, people often assume “the JVM handles all memory management.”

In reality, there are many ways to run out of heap, such as:

  • Code that keeps creating large numbers of objects
  • References kept alive inside collections
  • Streams/lambdas unintentionally generating huge data
  • Overgrown caches
  • Misunderstanding heap limits in Docker containers
  • Incorrect heap configuration in an IDE

That’s why learning how the heap itself works is the shortest path to a reliable fix.

3. Common Causes of the “java heap space” Error

Heap shortage is a frequent issue in many real-world environments, but its causes can largely be grouped into three categories: data volume, code/design, and misconfiguration.
In this section, we organize typical patterns and explain why they lead to the error.

3-1. Memory Pressure from Loading Large Data

The most common pattern is when the data itself is so large that the heap gets exhausted.

■ Common Examples

  • Loading a huge CSV/JSON/XML all at once into memory
  • Fetching a massive number of database records in one shot
  • A Web API returns a very large response (images, logs, etc.)

A particularly dangerous scenario is:

When the “raw string before parsing” and the “objects after parsing” exist in memory at the same time.

For example, if you load a 500MB JSON as a single string and then deserialize it with Jackson, the total memory usage can easily exceed 1GB.

■ Direction for Mitigation

  • Introduce chunked reading (streaming processing)
  • Use paging for database access
  • Avoid keeping intermediate data longer than necessary

Following the rule “handle large data in chunks” goes a long way in preventing heap exhaustion.
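As a minimal illustration of chunked reading (the file path, threshold, and class name here are only for demonstration), Files.lines streams a text file lazily instead of materializing the whole thing, unlike Files.readString or Files.readAllLines:

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.stream.Stream;

public class ChunkedRead {
    public static long countLongLines(Path file) throws IOException {
        // Files.lines streams the file lazily, so only one line (plus I/O
        // buffers) is on the heap at a time, instead of the entire file.
        try (Stream<String> lines = Files.lines(file)) {
            return lines.filter(l -> l.length() > 100).count();
        }
    }

    public static void main(String[] args) throws IOException {
        Path tmp = Files.createTempFile("demo", ".txt");
        Files.write(tmp, java.util.List.of("short", "x".repeat(200)));
        System.out.println(countLongLines(tmp)); // prints 1
        Files.delete(tmp);
    }
}
```

The same idea applies to database access (paging) and JSON (streaming parsers): keep a bounded window of data in memory, never the whole input.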

3-2. Over-Accumulating Data in Collections

This is extremely common for beginner to intermediate developers.

■ Typical Mistakes

  • Continuously adding logs or temporary data to a List → it grows without ever being cleared
  • Using a Map as a cache (but never evicting entries)
  • Creating new objects continuously inside loops
  • Generating huge numbers of temporary objects via Streams or lambdas

In Java, as long as a reference remains, GC cannot remove the object.
In many cases, developers unintentionally keep references alive.

■ Direction for Mitigation

  • Define a lifecycle for caches
  • Set capacity limits for collections
  • For large-data mechanisms, clear periodically

For reference, even if it doesn’t “look” like a memory leak:

List<String> list = new ArrayList<>();
for (...) {
    list.add(heavyData);  // ← grows forever
}

This kind of code is very dangerous.
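One common remedy is to cap the structure yourself. Below is a minimal JDK-only sketch (the class and method names are illustrative) that drops the oldest entry once a limit is reached, so the heap usage stays bounded no matter how long the loop runs:

```java
import java.util.ArrayDeque;
import java.util.Deque;

public class BoundedLogBuffer {
    private final int maxSize;
    private final Deque<String> entries = new ArrayDeque<>();

    public BoundedLogBuffer(int maxSize) {
        this.maxSize = maxSize;
    }

    public void add(String entry) {
        if (entries.size() == maxSize) {
            entries.removeFirst(); // drop the oldest so the buffer stays bounded
        }
        entries.addLast(entry);
    }

    public int size() {
        return entries.size();
    }
}
```

Even if you add entries forever, memory usage is capped at maxSize entries.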

3-3. Memory Leaks (Unintended Object Retention)

Because Java has GC, people often think “memory leaks don’t happen in Java.”
In practice, memory leaks absolutely do happen in Java.

■ Common Leak Hotspots

  • Keeping objects in static variables
  • Forgetting to unregister listeners or callbacks
  • Leaving references alive inside Streams/Lambdas
  • Objects accumulating in long-running batch jobs
  • Storing large data in ThreadLocal and the thread gets reused

Memory leaks are not something you can completely avoid in Java.

■ Direction for Mitigation

  • Revisit how you use static variables
  • Ensure removeListener() and close() are always called
  • For long-running processes, take a heap dump and investigate
  • Avoid ThreadLocal unless truly necessary

Because memory leaks will recur even if you increase the heap,
root-cause investigation is essential.
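For the ThreadLocal case specifically, the usual fix is to call remove() in a finally block; a minimal sketch (the class and method names are illustrative):

```java
public class ThreadLocalCleanup {
    private static final ThreadLocal<StringBuilder> BUFFER =
            ThreadLocal.withInitial(() -> new StringBuilder(1024));

    public static String render(String name) {
        StringBuilder sb = BUFFER.get();
        try {
            sb.setLength(0);
            return sb.append("Hello, ").append(name).toString();
        } finally {
            // In a thread pool, the thread outlives the request, so remove()
            // is what stops the value from being retained indefinitely.
            BUFFER.remove();
        }
    }
}
```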

3-4. JVM Heap Size Too Small (Defaults Are Small)

Sometimes the application is fine, but the heap itself is simply too small.

The default heap size varies by OS and Java version.
In Java 8, for example, the initial heap is commonly about 1/64 of physical memory and the maximum heap about 1/4.

A dangerous setup often seen in production is:

No -Xmx specified, while the app processes large data

■ Common Scenarios

  • Only production has larger data volume, and default heap is not enough
  • Running on Docker without setting -Xmx
  • Spring Boot started as a fat JAR with default values

■ Direction for Mitigation

  • Set -Xms and -Xmx to appropriate values
  • In containers, understand physical memory vs cgroup limits and configure accordingly

3-5. Long-Running Patterns Where Objects Keep Accumulating

Applications like the following tend to accumulate memory pressure over time:

  • Long-running Spring Boot applications
  • Memory-intensive batch jobs
  • Web applications with large user traffic

Batch jobs in particular often show this pattern:

  • Memory is consumed
  • GC barely recovers enough
  • Some accumulation remains, and the next run hits OOM

This leads to many delayed-onset heap space errors.

3-6. Misunderstanding Limits in Containers (Docker / Kubernetes)

There’s a common pitfall in Docker/Kubernetes:

■ Pitfall

  • Not setting -Xmx → the JVM sizes its heap from the host’s physical memory rather than the container limit → it uses too much → the process is killed by the OOM Killer

This is one of the most common production incidents.

■ Mitigation

  • Set -XX:MaxRAMPercentage appropriately
  • Align -Xmx with the container memory limit
  • Understand “UseContainerSupport” (enabled by default since Java 10, backported to 8u191)

4. How to Check Heap Size

When you see a “java heap space” error, the first thing you should do is confirm how much heap is currently allocated.
In many cases, the heap is simply smaller than expected—so checking is a critical first step.

In this section, we cover ways to check heap size from the command line, inside the program, IDEs, and application servers.

4-1. Check Heap Size from the Command Line

Java provides several options to check JVM configuration values at startup.

■ Using -XX:+PrintFlagsFinal

This is the most reliable way to confirm heap size:

java -XX:+PrintFlagsFinal -version | grep HeapSize

You’ll see output like:

  • InitialHeapSize … the initial heap size specified by -Xms
  • MaxHeapSize … the maximum heap size specified by -Xmx

Example:

   uintx InitialHeapSize                          = 268435456
   uintx MaxHeapSize                              = 4294967296

This means:

  • Initial heap: 256MB
  • Max heap: 4GB

■ Concrete Example

java -Xms512m -Xmx2g -XX:+PrintFlagsFinal -version | grep HeapSize

This is also useful after changing settings, making it a dependable confirmation method.

4-2. Check Heap Size from Within a Running Program

Sometimes you want to check heap amounts from inside the running application.

Java makes this easy using the Runtime class:

long max = Runtime.getRuntime().maxMemory();
long total = Runtime.getRuntime().totalMemory();
long free = Runtime.getRuntime().freeMemory();

System.out.println("Max Heap:    " + (max / 1024 / 1024) + " MB");
System.out.println("Total Heap:  " + (total / 1024 / 1024) + " MB");
System.out.println("Free Heap:   " + (free / 1024 / 1024) + " MB");

  • maxMemory() … the maximum heap size (-Xmx)
  • totalMemory() … the heap currently allocated by the JVM
  • freeMemory() … the currently available space within that heap

For web apps or long-running processes, logging these values can help during incident investigation.

4-3. Check Using Tools Like VisualVM or Mission Control

You can also visually inspect heap usage with GUI tools.

■ VisualVM

  • Real-time heap usage display
  • GC timing
  • Heap dump capture

It’s a classic, commonly used tool in Java development.

■ Java Mission Control (JMC)

  • Enables more detailed profiling
  • Especially useful for operations on Java 11+

These tools help you visualize issues such as only the Old generation growing.
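If you prefer the command line, jstat (bundled with the JDK) gives a similar live view. For example, to sample GC utilization of a running JVM every second (substitute the real process id for <PID>):

```shell
# S0/S1/E = Survivor and Eden occupancy (%), O = Old generation occupancy (%),
# YGC/FGC = counts of Young and Full GCs so far.
jstat -gcutil <PID> 1000
```

A steadily climbing O column with a rising FGC count is the classic signature of Old-generation pressure.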

4-4. Check in IDEs (Eclipse / IntelliJ)

If you run your app from an IDE, the IDE’s settings can affect heap size.

■ Eclipse

Window → Preferences → Java → Installed JREs  

Or set -Xms / -Xmx under:
Run Configuration → VM arguments

■ IntelliJ IDEA

Help → Change Memory Settings  

Or add -Xmx under VM options in Run/Debug Configuration.

Be careful—sometimes the IDE itself imposes a heap limit.

4-5. Check in Application Servers (Tomcat / Jetty)

For web applications, heap size is often specified in the server startup scripts.

■ Tomcat Example (Linux)

CATALINA_OPTS="-Xms512m -Xmx2g"

■ Tomcat Example (Windows)

set JAVA_OPTS=-Xms512m -Xmx2g

In production, leaving this at defaults is common—and it often leads to heap space errors after the service has been running for a while.

4-6. Checking Heap in Docker / Kubernetes (Important)

In containers, physical memory, cgroups, and Java settings interact in complicated ways.

In Java 10+ (and 8u191+), “UseContainerSupport” is enabled by default and can adjust the heap automatically, but behavior may still be unexpected depending on:

  • The container memory limit (e.g., --memory=512m)
  • Whether -Xmx is explicitly set

For example, if you only set a container memory limit:

docker run --memory=512m ...

and don’t set -Xmx, you can run into:

  • Java references host memory and tries to allocate too much
  • cgroups enforce the limit
  • The process is killed by the OOM Killer

This is a very common production issue.

4-7. Summary: Heap Checking Is the First Mandatory Step

Heap shortages require very different fixes depending on the cause.
Start by understanding, as a set:

  • The current heap size
  • The actual usage
  • Visualization via tools

5. Solution #1: Increase the Heap Size

The most direct response to a “java heap space” error is increasing the heap size.
If the cause is simple memory shortage, increasing the heap appropriately can restore normal behavior.

However, when you increase the heap, it’s important to understand both
the correct configuration methods and the key precautions.
Incorrect settings can lead to performance degradation or other OOM (Out Of Memory) issues.

5-1. Increase Heap Size from the Command Line

If you start a Java application as a JAR, the most basic method is specifying -Xms and -Xmx:

■ Example: Initial 512MB, Max 2GB

java -Xms512m -Xmx2g -jar app.jar

  • -Xms … the initial heap size reserved at JVM startup
  • -Xmx … the maximum heap size the JVM can use

In many cases, setting -Xms and -Xmx to the same value helps reduce overhead from heap resizing.

Example:

java -Xms2g -Xmx2g -jar app.jar

5-2. Configuration for Resident Server Apps (Tomcat / Jetty, etc.)

For web applications, set these options in the application server startup scripts.

■ Tomcat (Linux)

Set in setenv.sh:

export CATALINA_OPTS="$CATALINA_OPTS -Xms512m -Xmx2048m"

■ Tomcat (Windows)

Set in setenv.bat:

set CATALINA_OPTS=-Xms512m -Xmx2048m

■ Jetty

Add the following to start.ini or jetty.conf:

--exec
-Xms512m
-Xmx2048m

Because web apps can spike in memory usage depending on traffic, production should generally have more headroom than test environments.

5-3. Heap Settings for Spring Boot Apps

If you run Spring Boot as a fat JAR, the basics are the same:

java -Xms1g -Xmx2g -jar spring-app.jar

Spring Boot tends to use more memory than a simple Java program because it loads many classes and configurations at startup, so give it more headroom than you would a plain CLI tool.

5-4. Heap Settings in Docker / Kubernetes (Important)

For Java in containers, you must be careful because container limits and JVM heap calculation interact.

■ Recommended Example (Docker)

docker run --memory=1g \
  -e JAVA_OPTS="-Xms512m -Xmx800m" \
  my-java-app

■ Why You Must Explicitly Set -Xmx

If you don’t specify -Xmx in Docker:

  • The JVM decides heap size based on the host machine’s physical memory, not the container
  • It may try to allocate more memory than the container allows
  • It hits the cgroup memory limit and the process is killed by the OOM Killer

Because this is a very common production issue,
you should always set -Xmx in container environments.
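As a sketch of an alternative (reusing the my-java-app image from above): instead of a fixed -Xmx, Java 10+ (and 8u191+) can size the heap as a percentage of the cgroup limit via -XX:MaxRAMPercentage. JAVA_TOOL_OPTIONS is read by the JVM itself, so it works even when the image’s launch script ignores custom variables:

```shell
# Let the JVM take up to 75% of the 1 GiB cgroup limit as heap,
# leaving the rest for Metaspace, thread stacks, and native memory.
docker run --memory=1g \
  -e JAVA_TOOL_OPTIONS="-XX:MaxRAMPercentage=75.0" \
  my-java-app
```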

5-5. Heap Setting Examples for CI/CD and Cloud Environments

In cloud-based Java environments, a common rule of thumb is to set heap based on available memory:

Total Memory    Recommended Heap (Approx.)
1GB             512–800MB
2GB             1.2–1.6GB
4GB             2–3GB
8GB             4–6GB

※ Leave the remaining memory for the OS, GC overhead, and thread stacks.

In cloud environments, total memory can be limited. If you increase the heap without planning, the entire application can become unstable.

5-6. Does Increasing the Heap Always Fix It? → There Are Limits

Increasing heap size can temporarily eliminate the error, but it does not solve cases like:

  • A memory leak exists
  • A collection keeps growing forever
  • Huge data is processed in bulk
  • The app has an incorrect design

So treat increasing the heap as an emergency measure, and be sure to follow up with code optimization and revisiting your data-processing design, which we will cover next.

6. Solution #2: Optimize Your Code

Increasing the heap size can be an effective mitigation, but if the root cause lies in your code structure or the way you process data, the “java heap space” error will come back sooner or later.

In this section, we’ll cover common real-world coding patterns that waste memory, and concrete approaches to improve them.

6-1. Rethink How You Use Collections

Java collections (List, Map, Set, etc.) are convenient, but careless usage can easily become the primary cause of memory growth.

■ Pattern ①: List / Map Grows Without Bound

A common example:

List<String> logs = new ArrayList<>();

while (true) {
    logs.add(fetchLog());   // ← grows forever
}

Collections with no clear termination condition or upper bound will reliably squeeze the heap in long-running environments.

● Improvements
  • Use a bounded collection (e.g., cap the size and discard old entries)
  • Periodically clear values you no longer need
  • If you use a Map as a cache, adopt a cache with eviction → Guava Cache or Caffeine are good options

■ Pattern ②: Not Setting Initial Capacity

ArrayList and HashMap automatically grow when they exceed capacity, but that growth involves:
allocating a new array → copying → discarding the old array.

When handling large datasets, omitting initial capacity is inefficient and can waste memory.

● Improvement Example

List<String> items = new ArrayList<>(10000);

If you can estimate the size, it’s better to set it up front.

6-2. Avoid Bulk Processing of Large Data (Process in Chunks)

If you process massive data all at once, it’s easy to fall into the worst-case scenario:
everything ends up on the heap → OOM.

■ Bad Example (Read a Huge File All at Once)

String json = Files.readString(Paths.get("large.json"));
Object data = new ObjectMapper().readValue(json, Data.class);

■ Improvements

  • Use streaming processing (e.g., Jackson Streaming API)
  • Read in smaller portions (batch paging)
  • Process streams sequentially and do not retain the whole dataset

● Example: Process Huge JSON with Jackson Streaming

JsonFactory factory = new JsonFactory();
try (JsonParser parser = factory.createParser(new File("large.json"))) {
    while (!parser.isClosed()) {
        JsonToken token = parser.nextToken();
        // Perform only what you need, and do not retain it in memory
    }
}

6-3. Avoid Unnecessary Object Creation

Streams and lambdas are convenient, but they may generate large numbers of temporary objects internally.

■ Bad Example (Creating a Huge Intermediate List with Streams)

List<Result> results = items.stream()
        .map(this::toResult)
        .collect(Collectors.toList());

If items is huge, a large number of temporary objects are created, and the heap balloons.

● Improvements
  • Process sequentially with a for loop
  • Write out only what you need immediately (do not keep everything)
  • Avoid collect(), or control it manually
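The “write out only what you need immediately” idea can be sketched as follows (the class name and the Consumer-based sink are illustrative, not from the original): each transformed object is handed off right away instead of being accumulated in one big result list.

```java
import java.util.List;
import java.util.function.Consumer;

public class SequentialProcessing {
    // Instead of collecting every mapped result into one large List,
    // hand each result to a sink (e.g. a file writer) immediately,
    // so only one transformed object is alive at a time.
    public static void process(List<String> items, Consumer<String> sink) {
        for (String item : items) {
            sink.accept(item.toUpperCase()); // transform, emit, release
        }
    }
}
```

With a file- or network-backed sink, peak heap usage no longer scales with the size of items.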

6-4. Be Careful with String Concatenation

Java Strings are immutable, so each concatenation creates a new object.

■ Improvements

  • Use StringBuilder for heavy concatenation
  • Avoid unnecessary concatenation when generating logs

StringBuilder sb = new StringBuilder();
for (String s : items) {
    sb.append(s);
}

6-5. Don’t Overbuild Caches

This is a common situation in web apps and batch processing:

  • “We added a cache for speed.”
  • → but forgot to clear it
  • → the cache keeps growing
  • → heap shortage → OOM

■ Improvements

  • Set TTL (time-based expiration) and a maximum size
  • Using ConcurrentHashMap as a cache substitute is risky
  • Use a well-managed cache like Caffeine that controls memory properly
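If you want to stay JDK-only, LinkedHashMap can serve as a simple size-bounded LRU cache (no TTL; a library like Caffeine adds that and more). A minimal sketch:

```java
import java.util.LinkedHashMap;
import java.util.Map;

public class LruCache<K, V> extends LinkedHashMap<K, V> {
    private final int maxEntries;

    public LruCache(int maxEntries) {
        super(16, 0.75f, true); // accessOrder = true → iteration order is LRU
        this.maxEntries = maxEntries;
    }

    @Override
    protected boolean removeEldestEntry(Map.Entry<K, V> eldest) {
        // Called after each put: evict the least-recently-used entry once
        // the cap is exceeded, so the cache can never grow without bound.
        return size() > maxEntries;
    }
}
```

This guarantees an upper bound on entry count, though not on total bytes; for byte-level limits and expiration, a dedicated cache library is the better choice.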

6-6. Don’t Recreate Objects Inside Large Loops

■ Bad Example

for (...) {
    StringBuilder sb = new StringBuilder(); // created every iteration
    ...
}

This creates more temporary objects than necessary.

● Improvement

StringBuilder sb = new StringBuilder();
for (...) {
    sb.setLength(0);  // reuse
}

6-7. Split Memory-Heavy Work into Separate Processes

When you handle truly massive data in Java, you may need to revisit the architecture itself.

  • Separate ETL into a dedicated batch job
  • Delegate to distributed processing (Spark or Hadoop)
  • Split services to avoid heap contention

6-8. Code Optimization Is a Key Step to Prevent Recurrence

If you only increase heap, you will eventually hit the next “limit,” and the same error will occur again.

To fundamentally prevent “java heap space” errors, you must do:

  • Understand your data volume
  • Review object creation patterns
  • Improve collection design

7. Solution #3: Tune GC (Garbage Collection)

The “java heap space” error can happen not only when the heap is too small, but also when GC cannot reclaim memory effectively and the heap gradually becomes saturated.

Without understanding GC, you can easily misdiagnose symptoms like:
“Memory should be available, but we still get errors,” or “The system becomes extremely slow.”

This section explains the basic GC mechanism in Java and practical tuning points that help in real operations.

7-1. What Is GC (Garbage Collection)?

GC is Java’s mechanism for automatically discarding objects that are no longer needed.
The Java heap is broadly split into two generations, and GC behaves differently in each.

● Young generation (short-lived objects)

  • Eden / Survivor (S0, S1)
  • Temporary data created locally, etc.
  • GC happens frequently, but it’s lightweight

● Old generation (long-lived objects)

  • Objects promoted from Young
  • GC is heavier; if it happens often, the app can “freeze”

In many cases, “java heap space” ultimately happens when the Old generation fills up.

7-2. GC Types and Characteristics (How to Choose)

Java provides multiple GC algorithms.
Choosing the right one for your workload can significantly improve performance.

● ① G1GC (Default since Java 9)

  • Splits the heap into small regions and reclaims them incrementally
  • Can keep stop-the-world pauses shorter
  • Great for web apps and business systems

→ In general, G1GC is a safe default choice

● ② Parallel GC (Good for throughput-heavy batch jobs)

  • Parallelized and fast
  • But pause times can become longer
  • Often beneficial for CPU-heavy batch processing

● ③ ZGC (Low-latency GC with millisecond-level pauses)

  • Available in Java 11+ (experimental until it became production-ready in Java 15)
  • For latency-sensitive apps (game servers, HFT)
  • Effective even with large heaps (tens of GB)

● ④ Shenandoah (Low-latency GC)

  • Often associated with Red Hat distributions
  • Can minimize pause times aggressively
  • Also available in some builds such as AWS Corretto

7-3. How to Explicitly Switch GC

G1GC is the default in many setups, but you can specify a GC algorithm depending on your goal:

# G1GC
java -XX:+UseG1GC -jar app.jar

# Parallel GC
java -XX:+UseParallelGC -jar app.jar

# ZGC (production-ready in Java 15+; on Java 11–14 also add -XX:+UnlockExperimentalVMOptions)
java -XX:+UseZGC -jar app.jar

Because the GC algorithm can drastically change heap behavior and pause time, production systems often set this explicitly.

7-4. Output GC Logs and Visually Inspect Problems

It’s crucial to understand how much memory GC is reclaiming and how often stop-the-world pauses occur.

● Basic GC Logging Configuration (Java 8 flags)

java \
  -Xms1g -Xmx1g \
  -XX:+PrintGCDetails \
  -XX:+PrintGCDateStamps \
  -Xloggc:gc.log \
  -jar app.jar
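The flags above are the classic Java 8 form; from Java 9 onward they were removed in favor of unified logging (-Xlog). A roughly equivalent invocation (assuming the same app.jar) looks like:

```shell
# Java 9+: log all GC events with timestamps to gc.log via unified logging.
java -Xms1g -Xmx1g -Xlog:gc*:file=gc.log:time -jar app.jar
```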

By examining gc.log, you can identify clear signs of heap pressure, such as:

  • Too many Young GCs
  • The Old generation never decreases
  • Full GC occurs frequently
  • Each GC reclaims an unusually small amount

7-5. Cases Where GC Latency Triggers “java heap space”

If heap pressure is caused by patterns like the following, GC behavior becomes a decisive clue.

● Symptoms

  • The application suddenly freezes
  • GC runs for seconds to tens of seconds
  • The Old generation keeps growing
  • Full GC increases, and finally OOM occurs

This indicates a state where GC is trying hard, but cannot reclaim enough memory before hitting the limit.

■ Common Root Causes

  • Memory leaks
  • Collections retained permanently
  • Objects living too long
  • Old generation bloat

In these cases, analyzing GC logs can help you pinpoint leak signals or load spikes at specific times.

7-6. Key Points When Tuning G1GC

G1GC is strong by default, but tuning can make it even more stable.

● Common Parameters

-XX:MaxGCPauseMillis=200
-XX:G1HeapRegionSize=8m
-XX:InitiatingHeapOccupancyPercent=45

  • MaxGCPauseMillis → Target pause time (e.g., 200ms)
  • G1HeapRegionSize → Region size used to partition the heap
  • InitiatingHeapOccupancyPercent → The Old-gen occupancy percentage that triggers a GC cycle

However, in many cases defaults are fine, so only change these when you have a clear need.

7-7. Summary of GC Tuning

GC improvements help you visualize factors that are not obvious from just increasing heap size:

  • Object lifetimes
  • Collection usage patterns
  • Whether a memory leak exists
  • Where heap pressure concentrates

That’s why GC tuning is a highly important process for “java heap space” mitigation.

8. Solution #4: Detect Memory Leaks

If the error still recurs even after increasing heap and optimizing code, the most likely suspect is a memory leak.

People often assume Java is resistant to memory leaks because GC exists, but in practice, memory leaks are one of the most troublesome and recurrence-prone causes in real environments.

Here, we focus on practical steps you can use immediately, from understanding leaks to using analysis tools such as VisualVM and Eclipse MAT.

8-1. What Is a Memory Leak? (Yes, It Happens in Java)

A Java memory leak is:

A state where references to unnecessary objects remain, preventing GC from reclaiming them.

Even with garbage collection, leaks commonly occur when:

  • Objects are kept in static fields
  • Dynamically registered listeners are never unregistered
  • Collections keep growing and retain references
  • ThreadLocal values persist unexpectedly
  • Framework lifecycles do not match your object lifecycle

So leaks are absolutely a normal possibility.

8-2. Typical Memory Leak Patterns

● ① Collection Growth (Most Common)

Continuously adding to List/Map/Set without removing entries.
In business Java systems, a large portion of OOM incidents come from this pattern.

● ② Holding Objects in static Variables

private static List<User> cache = new ArrayList<>();

This often becomes the starting point of a leak.

● ③ Forgetting to Unregister Listeners / Callbacks

References remain in the background via GUI, observers, event listeners, etc.

● ④ Misusing ThreadLocal

In thread-pool environments, ThreadLocal values can persist longer than intended.

● ⑤ References Retained by External Libraries

Some “hidden memory” is difficult to manage from application code, making tool-based analysis essential.
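The listener case (③ above) can be sketched with a hypothetical in-house EventBus (the class and its methods are illustrative): the publisher’s list keeps every registered listener reachable, so GC cannot collect them until removeListener() runs.

```java
import java.util.ArrayList;
import java.util.List;

public class EventBus {
    // The bus holds strong references: every registered listener stays
    // reachable (and un-collectable) until it is explicitly removed.
    private final List<Runnable> listeners = new ArrayList<>();

    public void addListener(Runnable l)    { listeners.add(l); }
    public void removeListener(Runnable l) { listeners.remove(l); }
    public int listenerCount()             { return listeners.size(); }

    public static void useOnce(EventBus bus) {
        Runnable l = () -> { /* react to an event */ };
        bus.addListener(l);
        try {
            // ... handle one unit of work ...
        } finally {
            bus.removeListener(l); // without this, every call leaks one listener
        }
    }
}
```

In a long-running process, forgetting the finally block turns each call into a small permanent allocation, which is exactly the slow-growth pattern MAT surfaces later.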

8-3. Checkpoints to Spot “Signs” of a Memory Leak

If you see the following, you should strongly suspect a memory leak:

  • Only the Old generation steadily increases
  • Full GC becomes more frequent
  • Memory barely decreases even after Full GC
  • Heap usage increases with uptime
  • Only production crashes after long runtimes

These are much easier to understand once visualized with tools.

8-4. Tool #1: Visually Check Leaks with VisualVM

VisualVM is often bundled with the JDK in some setups and is very approachable as a first tool.

● What You Can Do with VisualVM

  • Real-time monitoring of memory usage
  • Confirm Old generation growth
  • GC frequency
  • Thread monitoring
  • Capture heap dumps

● How to Capture a Heap Dump

In VisualVM, open the “Monitor” tab and click the “Heap Dump” button.

You can then pass the captured heap dump directly into Eclipse MAT for deeper analysis.

8-5. Tool #2: Deep Analysis with Eclipse MAT (Memory Analyzer Tool)

If there’s one industry-standard tool for Java memory leak analysis, it’s Eclipse MAT.

● What MAT Can Show You

  • Which objects consume the most memory
  • Which reference paths keep objects alive
  • Why objects are not being released
  • Collection bloat
  • Automatic “Leak Suspects” reports

● Basic Analysis Steps

  1. Open the heap dump (*.hprof)
  2. Run the “Leak Suspects Report”
  3. Find collections retaining large amounts of memory
  4. Check the Dominator Tree to identify “parent” objects
  5. Follow the reference path (“Path to GC Root”)

8-6. If You Understand the Dominator Tree, Analysis Speeds Up Dramatically

The Dominator Tree helps you identify objects that dominate (control) large portions of memory usage.

Examples include:

  • A massive ArrayList
  • A HashMap with an enormous number of keys
  • A cache that is never released
  • A singleton held by static

Finding these can drastically reduce time to locate the leak.
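If you want to practice reading a Dominator Tree, a deliberately leaky program is useful. The sketch below (the `LeakDemo` name is made up) retains memory via a static list; capture a heap dump while it runs and the list should dominate the tree, with a clear “Path to GC Root”:

```java
import java.util.ArrayList;
import java.util.List;

// Deliberately leaky sketch for practicing heap-dump analysis.
// GC root (static field) -> ArrayList -> byte[] chunks: a clear "Path to GC Root".
public class LeakDemo {

    static final List<byte[]> RETAINED = new ArrayList<>();

    static void leakOnce() {
        RETAINED.add(new byte[1024 * 1024]); // retain 1MB per call, never released
    }

    public static void main(String[] args) {
        for (int i = 0; i < 50; i++) {
            leakOnce();
        }
        System.out.println("retained chunks: " + RETAINED.size());
    }
}
```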

8-7. How to Capture a Heap Dump (Command-Line)

You can also capture a heap dump using jmap:

jmap -dump:format=b,file=heap.hprof <PID>

You can also configure the JVM to automatically dump the heap when OOM occurs:

-XX:+HeapDumpOnOutOfMemoryError
-XX:HeapDumpPath=/var/log/heapdump.hprof

This is essential for production incident investigation.
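On HotSpot JVMs you can also trigger a dump programmatically, for example from an admin-only endpoint, instead of shelling out to jmap. A sketch using the `com.sun.management.HotSpotDiagnosticMXBean` API (the `DumpHelper` name and temp-file location are illustrative):

```java
import com.sun.management.HotSpotDiagnosticMXBean;
import java.lang.management.ManagementFactory;
import java.nio.file.Files;
import java.nio.file.Path;

// Sketch: trigger a heap dump from inside the application (HotSpot JVMs only).
public class DumpHelper {

    static Path dumpHeap() {
        try {
            Path file = Files.createTempDirectory("dumps").resolve("heap.hprof");
            HotSpotDiagnosticMXBean bean =
                    ManagementFactory.getPlatformMXBean(HotSpotDiagnosticMXBean.class);
            bean.dumpHeap(file.toString(), true); // true = dump only live objects
            return file;
        } catch (Exception e) {
            throw new RuntimeException("heap dump failed", e);
        }
    }

    public static void main(String[] args) {
        System.out.println("dump written to " + dumpHeap());
    }
}
```

The resulting *.hprof file can be opened in VisualVM or Eclipse MAT like any other dump.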

8-8. True Leak Fix Requires Code Changes

If a leak exists, measures like:

  • Increasing heap size
  • Tuning GC

are merely temporary life-support measures.

Ultimately, you need design changes such as:

  • Fixing the part that holds references indefinitely
  • Revisiting collection design
  • Avoiding excessive use of static
  • Implementing cache eviction and cleanup

8-9. How to Distinguish “Heap Shortage” vs “Memory Leak”

● In a Heap-Shortage Case

  • OOM happens quickly as data volume increases
  • It scales with workload
  • Increasing heap stabilizes the system

● In a Memory-Leak Case

  • OOM happens after long uptime
  • As requests increase, performance gradually worsens
  • Memory barely decreases even after Full GC
  • Increasing heap does not solve it

8-10. Summary: If Heap Tuning Doesn’t Fix OOM, Suspect a Leak

Among “java heap space” issues, the most time-consuming root cause to identify is often a memory leak.

But with VisualVM + Eclipse MAT, it’s often possible to discover within minutes:

  • Objects consuming the most memory
  • The root references keeping them alive
  • The source of collection bloat

9. “java heap space” Issues in Docker / Kubernetes and How to Fix Them

Modern Java applications increasingly run not only on on-prem environments but also on Docker and Kubernetes (K8s).
However, because container environments use a different memory model than the host, there are many easy-to-misunderstand points for Java developers, and “java heap space” errors or OOMKilled (forced container termination) can happen frequently.

This section summarizes container-specific memory management and the settings you must know in real operations.

9-1. Why Heap Space Errors Are So Common in Containers

The reason is simple:

Java may not always recognize container memory limits correctly.

● A Common Misconception

“Since I set a Docker memory limit --memory=512m, Java should run within 512MB.”

→ In practice, that assumption can be wrong.

When deciding heap size, Java may reference the host’s physical memory rather than the container’s limits.

As a result:

  • Java decides “the host has plenty of memory”
  • It attempts to allocate a larger heap
  • Once it exceeds the container limit, the OOM Killer runs and the process is forcibly terminated

9-2. Improvements in Java 8u191+ and Java 11+

From Java 8u191 onward, and by default in Java 10 and later, the JVM option UseContainerSupport makes Java container-aware.

● Behavior in Containers

  • Can recognize cgroup-based limits
  • Automatically calculates heap size within those limits

However, behavior still varies by version, so explicit configuration is recommended in production.

9-3. Explicitly Setting Heap Size in Containers (Required)

● Recommended Startup Pattern

docker run \
  --memory=1g \
  -e JAVA_OPTS="-Xms512m -Xmx800m" \
  my-java-app

Key points:

  • Container memory: 1GB
  • Java heap: keep it within 800MB
  • The rest is used by thread stacks and native memory

● Bad Example (Very Common)

docker run --memory=1g my-java-app   # no -Xmx

→ Java may allocate heap based on host memory, and once it crosses 1GB, you get OOMKilled.
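An easy way to verify which heap ceiling the JVM actually chose is to run a one-liner inside the container. A tiny sketch (the `MaxHeapCheck` name is made up):

```java
// Quick sanity check: print the heap ceiling the JVM actually chose.
// Run this inside the container to confirm your -Xmx / limit settings took effect.
public class MaxHeapCheck {

    static long maxHeapMb() {
        return Runtime.getRuntime().maxMemory() / (1024 * 1024);
    }

    public static void main(String[] args) {
        System.out.println("JVM max heap: " + maxHeapMb() + " MB");
    }
}
```

If the printed value is far above your container limit, the JVM is sizing the heap from host memory and you are at risk of OOMKilled.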

9-4. Memory Settings Pitfalls in Kubernetes (K8s)

In Kubernetes, resources.limits.memory is critical.

● Pod Example

resources:
  limits:
    memory: "1024Mi"
  requests:
    memory: "512Mi"

In this case, keeping Java -Xmx at around 800MB to 900MB is typically safer.

● Why Set It Lower Than the Limit?

Because Java uses more than heap:

  • Native memory
  • Thread stacks (hundreds of KB × number of threads)
  • Metaspace
  • GC worker overhead
  • JIT-compiled code
  • Library loading

Together, these can easily consume 100–300MB.

In practice, a common rule is:

If limit = X, set -Xmx to about X × 0.7 to 0.8 for safety.
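The rule above is easy to encode in a deploy script or startup wrapper. A sketch (the `HeapAdvisor` name is made up, and the 0.75 factor is an assumption within the 0.7–0.8 range, not a standard):

```java
// Sketch of the "limit × 0.7–0.8" rule: given a container memory limit,
// compute a conservative -Xmx value, leaving ~25% for native memory,
// thread stacks, Metaspace, and GC overhead.
public class HeapAdvisor {

    static long recommendedXmxMb(long containerLimitMb) {
        return (long) (containerLimitMb * 0.75);
    }

    public static void main(String[] args) {
        System.out.println("limit 1024MB -> -Xmx" + recommendedXmxMb(1024) + "m");
    }
}
```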

9-5. Automatic Heap Percentage in Java 11+ (MaxRAMPercentage)

In Java 11+, heap size can also be auto-calculated as a percentage of available memory:

● Default Settings

-XX:MaxRAMPercentage=25
-XX:MinRAMPercentage=50

Meaning:

  • By default, the heap is capped at 25% of available memory
  • In very small-memory environments, MinRAMPercentage (50%) governs the heap size instead

● Recommended Setting

In containers, it’s often safer to set MaxRAMPercentage explicitly:

JAVA_OPTS="-XX:MaxRAMPercentage=70"

9-6. Why OOMKilled Happens So Often in Containers (Real-World Pattern)

A common production pattern:

  1. K8s memory limit = 1GB
  2. No -Xmx configured
  3. Java references host memory and tries to allocate more than 1GB heap
  4. The container is forcibly terminated → OOMKilled

Note that this is not necessarily a java heap space (OutOfMemoryError) event—this is a container-level OOM termination.

9-7. Container-Specific Checkpoints Using GC Logs and Metrics

In container environments, focus especially on:

  • Whether pod restarts are increasing
  • Whether OOMKilled events are recorded
  • Whether the Old generation keeps growing
  • Whether GC reclaim drops sharply at certain times
  • Whether native (non-heap) memory is running out

Prometheus + Grafana makes this far easier to visualize.

9-8. Summary: “Explicit Settings” Are the Default in Containers

  • --memory alone may not lead Java to calculate heap correctly
  • Always set -Xmx
  • Leave headroom for native memory and thread stacks
  • Set values lower than Kubernetes memory limits
  • On Java 11+, MaxRAMPercentage can be useful

10. Anti-Patterns to Avoid (Bad Code / Bad Settings)

The “java heap space” error happens not only when heap is truly insufficient, but also when certain dangerous coding patterns or incorrect configurations are used.

Here we summarize common anti-patterns seen frequently in real work.

10-1. Leaving Unbounded Collections to Grow Forever

One of the most frequent problems is collection bloat.

● Bad Example: Adding to a List Without Any Limit

List<String> logs = new ArrayList<>();
while (true) {
    logs.add(getMessage());  // ← grows forever
}

With long uptime, this alone can easily push you into OOM.

● Why It’s Dangerous

  • GC cannot reclaim memory, and the Old generation bloats
  • Full GC becomes frequent, making the app more likely to freeze
  • Copying massive numbers of objects increases CPU load

● How to Avoid It

  • Set a size limit (e.g., an LRU cache)
  • Clear periodically
  • Do not retain data unnecessarily
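A size limit does not require an external library; the JDK's own `LinkedHashMap` can act as a simple LRU cache via `removeEldestEntry`. A minimal sketch (the `BoundedCache` name is made up):

```java
import java.util.LinkedHashMap;
import java.util.Map;

// Minimal bounded cache using only the JDK: LinkedHashMap in access order
// evicts the least-recently-used entry once maxEntries is exceeded.
public class BoundedCache<K, V> extends LinkedHashMap<K, V> {

    private final int maxEntries;

    public BoundedCache(int maxEntries) {
        super(16, 0.75f, true); // true = access-order iteration (LRU)
        this.maxEntries = maxEntries;
    }

    @Override
    protected boolean removeEldestEntry(Map.Entry<K, V> eldest) {
        return size() > maxEntries; // evict instead of growing forever
    }

    public static void main(String[] args) {
        BoundedCache<String, String> cache = new BoundedCache<>(2);
        cache.put("a", "1");
        cache.put("b", "2");
        cache.put("c", "3"); // evicts "a"
        System.out.println(cache.keySet()); // [b, c]
    }
}
```

For production caches with TTL and statistics, a dedicated library such as Caffeine is usually the better choice, but this pattern already prevents unbounded growth.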

10-2. Loading Huge Files or Data All at Once

This is a common mistake in batch and server-side processing.

● Bad Example: Reading a Huge JSON in One Shot

String json = Files.readString(Paths.get("large.json"));
Data d = mapper.readValue(json, Data.class);

● What Goes Wrong

  • You retain both the pre-parse string and post-parse objects in memory
  • A 500MB file can consume well over double that in memory
  • Additional intermediate objects are created, and the heap gets exhausted

● How to Avoid It

  • Use streaming (sequential processing)
  • Read in chunks rather than bulk load
  • Do not retain the full dataset in memory

10-3. Continuing to Hold Data in static Variables

● Bad Example

public class UserCache {
    private static Map<String, User> cache = new HashMap<>();
}

● Why It’s Dangerous

  • static lives as long as the JVM is running
  • If used as a cache, entries may never be released
  • References remain, becoming a breeding ground for memory leaks

● How to Avoid It

  • Keep static usage to a minimum
  • Use a dedicated cache framework (e.g., Caffeine)
  • Set TTL and a maximum size limit

10-4. Overusing Stream / Lambda and Generating Huge Intermediate Lists

The Stream API is convenient, but it can create intermediate objects internally and put pressure on memory.

● Bad Example (collect creates a massive intermediate list)

List<Item> result = items.stream()
        .map(this::convert)
        .collect(Collectors.toList());

● How to Avoid It

  • Process sequentially with a for-loop
  • Avoid generating unnecessary intermediate lists
  • If the dataset is large, reconsider using Stream in that part

10-5. Doing Massive String Concatenation with the + Operator

Since Strings are immutable, every concatenation creates a new String object.

● Bad Example

String result = "";
for (String s : list) {
    result += s;
}

● What’s Wrong

  • A new String is created every iteration
  • A huge number of instances are produced, pressuring memory

● How to Avoid It

StringBuilder sb = new StringBuilder();
for (String s : list) {
    sb.append(s);
}
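When the goal is simply joining strings, the JDK also offers `String.join`, which builds the result in a single pass without a manual loop. A tiny sketch (the `JoinDemo` name is made up):

```java
import java.util.List;

// Alternative to the manual StringBuilder loop: for plain concatenation,
// String.join avoids creating an intermediate String per element.
public class JoinDemo {

    static String joinAll(List<String> parts) {
        return String.join("", parts);
    }

    public static void main(String[] args) {
        System.out.println(joinAll(List.of("ja", "va"))); // prints "java"
    }
}
```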

10-6. Creating Too Many Caches and Not Managing Them

● Bad Examples

  • Storing API responses in a Map indefinitely
  • Continuously caching images or file data
  • No control mechanism like LRU

● Why It’s Risky

  • The cache grows over time
  • Non-reclaimable memory increases
  • It will almost always become a production issue

● How to Avoid It

  • Use Caffeine / Guava Cache
  • Set a maximum size
  • Configure TTL (expiration)

10-7. Keeping Logs or Statistics in Memory Continuously

● Bad Example

List<String> debugLogs = new ArrayList<>();
debugLogs.add(message);

In production, logs should be written to files or log systems. Keeping them in memory is risky.

10-8. Not Specifying -Xmx in Docker Containers

This accounts for a large portion of modern heap-related incidents.

● Bad Example

docker run --memory=1g my-app

● What’s Wrong

  • Java may auto-size heap based on host memory
  • Once it exceeds the container limit, you get OOMKilled

● How to Avoid It

docker run --memory=1g -e JAVA_OPTS="-Xmx700m" my-app

10-9. Over-Tuning GC Settings

Incorrect tuning can backfire.

● Bad Example

-XX:MaxGCPauseMillis=10
-XX:G1HeapRegionSize=1m

Extreme parameters can make GC overly aggressive or prevent it from keeping up.

● How to Avoid It

  • In most cases, default settings are sufficient
  • Only tune minimally when there is a specific, measured problem

10-10. Summary: Most Anti-Patterns Come from “Storing Too Much”

What all these anti-patterns have in common is:

“Accumulating more objects than necessary.”

  • Unbounded collections
  • Unnecessary retention
  • Bulk loading
  • static-heavy designs
  • Cache runaway
  • Exploding intermediate objects

Avoiding these alone can dramatically reduce “java heap space” errors.

11. Real Examples: This Code Is Dangerous (Typical Memory Problem Patterns)

This section introduces dangerous code examples frequently encountered in real projects that often lead to “java heap space” errors, and explains for each:
“Why it’s dangerous” and “How to fix it.”

In practice, these patterns often occur together, so this chapter is extremely useful for code reviews and incident investigations.

11-1. Bulk-Loading Huge Data

● Bad Example: Reading All Lines of a Huge CSV

List<String> lines = Files.readAllLines(Paths.get("big.csv"));

● Why It’s Dangerous

  • The larger the file, the more memory pressure
  • Even a 100MB CSV can consume more than double memory before/after parsing
  • Retaining massive records can exhaust the Old generation

● Improvement: Read via Stream (Sequential Processing)

try (Stream<String> stream = Files.lines(Paths.get("big.csv"))) {
    stream.forEach(line -> process(line));
}

→ Only one line is held in memory at a time, making this very safe.

11-2. Collection Bloat Pattern

● Bad Example: Continuously Accumulating Heavy Objects in a List

List<Order> orders = new ArrayList<>();
while (hasNext()) {
    orders.add(fetchNextOrder());
}

● Why It’s Dangerous

  • Every growth step reallocates the internal array
  • If you don’t need to keep everything, it’s pure waste
  • Long runtimes can consume huge Old-generation space

● Improvement: Process Sequentially + Batch When Needed

while (hasNext()) {
    Order order = fetchNextOrder();
    process(order);      // process without retaining
}

Or batch it:

List<Order> batch = new ArrayList<>(1000);
while (hasNext()) {
    batch.add(fetchNextOrder());
    if (batch.size() == 1000) {
        processBatch(batch);
        batch.clear();
    }
}

11-3. Generating Too Many Intermediate Objects via Stream API

● Bad Example: Repeated intermediate lists via map → filter → collect

List<Data> result = list.stream()
        .map(this::convert)
        .filter(d -> d.isValid())
        .collect(Collectors.toList());

● Why It’s Dangerous

  • Creates many temporary objects internally
  • Especially risky with huge lists
  • The deeper the pipeline, the higher the risk

● Improvement: Use a for-loop or sequential processing

List<Data> result = new ArrayList<>();
for (Item item : list) {
    Data d = convert(item);
    if (d.isValid()) {
        result.add(d);
    }
}

11-4. Parsing JSON or XML All at Once

● Bad Example

String json = Files.readString(Paths.get("large.json"));
Data data = mapper.readValue(json, Data.class);

● Why It’s Dangerous

  • Both the raw JSON string and the deserialized objects remain in memory
  • With 100MB-class files, the heap can fill instantly
  • Similar issues can occur even when using Stream APIs, depending on usage

● Improvement: Use a Streaming API

JsonFactory factory = new JsonFactory();
try (JsonParser parser = factory.createParser(new File("large.json"))) {
    while (!parser.isClosed()) {
        JsonToken token = parser.nextToken();
        // Process only when needed and do not retain data
    }
}

11-5. Loading All Images / Binary Data into Memory

● Bad Example

byte[] image = Files.readAllBytes(Paths.get("large.png"));

● Why It’s Dangerous

  • Binary data can be large and “heavy” by nature
  • In image-processing apps, this is a top cause of OOM

● Improvements

  • Use buffering
  • Process as a stream without retaining the whole file in memory

Note that bulk-reading multi-million-line logs is just as dangerous as images.

11-6. Infinite Retention via static Cache

● Bad Example

private static final List<Session> sessions = new ArrayList<>();

● What’s Wrong

  • sessions won’t be released until the JVM exits
  • It grows with connections and eventually leads to OOM

● Improvements

  • Use a size-managed cache (Caffeine, Guava Cache, etc.)
  • Clearly manage the session lifecycle

11-7. Misuse of ThreadLocal

● Bad Example

private static final ThreadLocal<SimpleDateFormat> formatter =
        ThreadLocal.withInitial(() -> new SimpleDateFormat("yyyy-MM-dd"));

ThreadLocal is useful, but with thread pools it can keep values alive and cause leaks.

● Improvements

  • Keep ThreadLocal short-lived
  • Avoid using it unless truly necessary
  • Call remove() to clear it
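The `remove()` advice is safest when enforced with try/finally, so pooled threads never carry stale values into the next task. A sketch (the `ThreadLocalCleanup` name and `handleRequest` method are made up):

```java
// Sketch: clearing a ThreadLocal in finally so pooled threads do not carry
// stale values into the next request they serve.
public class ThreadLocalCleanup {

    static final ThreadLocal<String> CONTEXT = new ThreadLocal<>();

    static String handleRequest(String user) {
        CONTEXT.set(user);
        try {
            return "handled for " + CONTEXT.get();
        } finally {
            CONTEXT.remove(); // essential with thread pools
        }
    }

    public static void main(String[] args) {
        System.out.println(handleRequest("alice"));
        System.out.println("after: " + CONTEXT.get()); // prints "after: null"
    }
}
```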

11-8. Creating Too Many Exceptions

This is often overlooked, but Exceptions are very heavy objects due to stack trace generation.

● Bad Example

for (String input : inputs) {
    try {
        doSomething(input);   // throws for every invalid input
    } catch (Exception e) {
        // log only; a full stack trace is captured each time
    }
}

→ Flooding exceptions can pressure memory.

● Improvements

  • Do not use exceptions for normal control flow
  • Reject invalid input via validation
  • Avoid throwing exceptions unless necessary
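The validation approach can look like the following sketch: instead of letting parsing throw per bad input, check first and return an empty `Optional` (the `SafeParse` class name is made up):

```java
import java.util.Optional;

// Sketch: validate first instead of using exceptions for control flow.
// Bad input yields Optional.empty() with no stack-trace cost.
public class SafeParse {

    static Optional<Integer> parseIntSafe(String s) {
        // up to 9 digits keeps the value safely within int range
        if (s == null || !s.matches("-?\\d{1,9}")) {
            return Optional.empty();
        }
        return Optional.of(Integer.parseInt(s));
    }

    public static void main(String[] args) {
        System.out.println(parseIntSafe("42"));   // Optional[42]
        System.out.println(parseIntSafe("oops")); // Optional.empty
    }
}
```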

11-9. Summary: Dangerous Code “Quietly” Eats Your Heap

The common theme is:
“structures that gradually squeeze the heap, stacked on top of each other.”

  • Bulk loading
  • Infinite collections
  • Forgetting to unregister/clear
  • Intermediate object creation
  • Exception flooding
  • static retention
  • ThreadLocal leftovers

In all cases, the impact becomes obvious during long runtimes.

12. Java Memory Management Best Practices (Essential to Prevent Recurrence)

So far, we’ve covered the causes of “java heap space” errors and countermeasures such as heap expansion, code improvements, GC tuning, and leak investigation.

This section summarizes best practices that reliably prevent recurrence in real operations.
Think of these as the minimum rules to keep Java applications stable.

12-1. Set Heap Size Explicitly (Especially in Production)

Running production workloads on defaults is risky.

● Best Practices

  • Explicitly set -Xms and -Xmx
  • Do not run production on defaults
  • Keep heap sizes consistent between dev and prod (avoid unexpected differences)

Example:

-Xms1g -Xmx1g

In Docker / Kubernetes, you must set heap smaller to match container limits.

12-2. Monitor Properly (GC, Memory Usage, OOM)

Heap problems are often preventable if you catch early warning signs.

● What to Monitor

  • Old generation usage
  • Young generation growth trends
  • Full GC frequency
  • GC pause time
  • Container OOMKilled events
  • Pod restart count (K8s)

● Recommended Tools

  • VisualVM
  • JDK Mission Control
  • Prometheus + Grafana
  • Cloud provider metrics (e.g., CloudWatch)

A gradual increase in memory usage over long runtimes is a classic leak sign.

12-3. Use “Controlled Caches”

Cache runaway is one of the most common causes of OOM in production.

● Best Practices

  • Use Caffeine / Guava Cache
  • Always configure TTL (expiration)
  • Set a maximum size (e.g., 1,000 entries)
  • Avoid static caches as much as possible

Example:

Caffeine.newBuilder()
    .maximumSize(1000)
    .expireAfterWrite(10, TimeUnit.MINUTES)
    .build();

12-4. Be Careful with Overusing Stream API and Lambdas

For large datasets, chaining Stream operations increases intermediate objects.

● Best Practices

  • Don’t chain map/filter/collect more than necessary
  • Process huge datasets sequentially with a for-loop
  • When using collect, be conscious of data volume

Streams are convenient, but they are not always memory-friendly.

12-5. Switch Huge Files / Huge Data to Streaming

Bulk processing is a major root cause of heap issues.

● Best Practices

  • CSV → Files.lines()
  • JSON → Jackson Streaming
  • DB → paging
  • API → chunked fetch (cursor/pagination)

If you enforce “don’t load everything into memory,” many heap-space problems disappear.

12-6. Treat ThreadLocal with Care

ThreadLocal is powerful, but misuse can cause severe memory leaks.

● Best Practices

  • Be especially careful when combined with thread pools
  • Call remove() after use
  • Do not store long-lived data
  • Avoid static ThreadLocal whenever possible

12-7. Periodically Capture Heap Dumps to Detect Leaks Early

For long-running systems (web apps, batch systems, IoT), capturing heap dumps regularly and comparing them helps detect early leak signs.

● Options

  • VisualVM
  • jmap
  • JVM flags for automatic dumps on OOM

Example:

-XX:+HeapDumpOnOutOfMemoryError
-XX:HeapDumpPath=/var/log/heapdump.hprof

Auto-dumping on OOM is a must-have production setting.

12-8. Keep GC Tuning Minimal

The idea “tuning GC will automatically boost performance” can be dangerous.

● Best Practices

  • Start with default settings
  • Make minimal changes only when a measured problem exists
  • Use G1GC as the default choice
  • In many cases, increasing heap is more effective than micro-tuning

12-9. Consider Splitting the Architecture

If data volumes become too large or the app becomes too monolithic and demands a massive heap, you may need architectural improvements:

  • Microservices
  • Splitting batch data processing
  • Decoupling with message queues (Kafka, etc.)
  • Distributed processing (Spark, etc.)

If “no matter how much heap you add, it’s never enough,” suspect an architectural issue.

12-10. Summary: Java Memory Management Is About Layered Optimization

Heap-space problems are rarely solved by a single setting or a single code fix.

● Key Takeaways

  • Always set heap explicitly
  • Monitoring matters most
  • Never allow collection bloat
  • Use streaming for large data
  • Manage caches properly
  • Use ThreadLocal carefully
  • Analyze leaks with tools when needed
  • Containers require a different mindset

Following these points will prevent most “java heap space” errors with high certainty.

13. Summary: Key Points to Prevent “java heap space” Errors

In this article, we covered “java heap space” errors from root causes to mitigation and recurrence prevention.

Here we distill the essentials into a practical recap.

13-1. The Real Issue Isn’t “Heap Is Too Small” but “Why Is It Running Out?”

“java heap space” is not just a simple memory shortage.

● The root cause is typically one of the following

  • Heap size is too small (insufficient configuration)
  • Bulk processing of huge data (design issue)
  • Collection bloat (lack of deletion/design)
  • Memory leak (references remain)
  • Misconfiguration in containers (Docker/K8s-specific)

Start with: “Why did the heap run out?”

13-2. First Steps to Investigate

① Confirm heap sizing

→ Explicitly set -Xms / -Xmx

② Understand runtime memory constraints

→ In Docker/Kubernetes, align limits and heap sizing
→ Also check -XX:MaxRAMPercentage

③ Capture and inspect GC logs

→ Old-gen growth and frequent Full GC are warning signs

④ Capture and analyze heap dumps

→ Use VisualVM / MAT to establish evidence for leaks

13-3. High-Risk Patterns Common in Production

As shown throughout this article, the following patterns frequently lead to incidents:

  • Bulk processing of huge files
  • Adding to List/Map without a bound
  • Cache runaway
  • Accumulating data in static
  • Explosive intermediate objects via Stream chains
  • Misusing ThreadLocal
  • Not setting -Xmx in Docker

If you see these in code or settings, investigate first.

13-4. Fundamental Fixes Are About System Design and Data Processing

● What to review at the system level

  • Switch large data handling to streaming processing
  • Use caches with TTL, size limits, and eviction
  • Perform regular memory monitoring for long-running apps
  • Analyze early leak signs with tools

● If it’s still difficult

  • Separate batch vs online processing
  • Microservices
  • Adopt distributed processing platforms (Spark, Flink, etc.)

Architectural improvements may be required.

13-5. The Three Most Important Messages

If you remember only three things:

✔ Always set heap explicitly

✔ Never bulk-process huge data

✔ You can’t confirm leaks without heap dumps

Just these three can greatly reduce critical production incidents caused by “java heap space.”

13-6. Java Memory Management Is a Skill That Creates a Real Advantage

Java memory management may feel hard, but if you understand it:

  • Incident investigation becomes dramatically faster
  • High-load systems can be run stably
  • Performance tuning becomes more accurate
  • You become an engineer who understands both app and infrastructure

It’s no exaggeration to say system quality is proportional to memory understanding.

14. FAQ

Finally, here is a practical Q&A section covering common questions people search for around “java heap space.”

This complements the article and helps capture a wider range of user intent.

Q1. What’s the difference between java.lang.OutOfMemoryError: Java heap space and GC overhead limit exceeded?

● java heap space

  • Occurs when the heap is physically exhausted
  • Often caused by huge data, collection bloat, or insufficient settings

● GC overhead limit exceeded

  • GC is working hard but reclaiming almost nothing
  • A sign that GC can’t recover due to too many live objects
  • Often suggests a memory leak or lingering references

A useful mental model:
heap space = already crossed the limit,
GC overhead = right before the limit.

Q2. If I simply increase heap, will it be solved?

✔ It may help temporarily

✘ It does not fix the root cause

  • If heap is genuinely too small for your workload → it helps
  • If collections or leaks are the cause → it will recur

If the cause is a leak, doubling heap only delays the next OOM.

Q3. How much can I increase Java heap?

● Typically: 50%–70% of physical memory

Because you must reserve memory for:

  • Native memory
  • Thread stacks
  • Metaspace
  • GC workers
  • OS processes

Especially in Docker/K8s, it’s common practice to set:
-Xmx = 70%–80% of the container limit.

Q4. Why does Java get OOMKilled in containers (Docker/K8s)?

● In many cases, because -Xmx is not set

Older JVMs do not read container (cgroup) limits, so Java sizes the heap based on host memory → exceeds the container limit → OOMKilled.

✔ Fix

docker run --memory=1g -e JAVA_OPTS="-Xmx800m" my-java-app

Q5. Is there an easy way to tell if it’s a memory leak?

✔ If these are true, it’s very likely a leak

  • Heap usage keeps increasing with uptime
  • Memory barely decreases even after Full GC
  • Old-gen grows in a “stair-step” pattern
  • OOM happens after hours or days
  • Short runs look fine

However, final confirmation requires heap dump analysis (Eclipse MAT).

Q6. Heap settings in Eclipse / IntelliJ are not applied

● Common causes

  • You didn’t edit the Run Configuration
  • The IDE’s default settings are taking precedence
  • Another startup script’s JAVA_OPTS overrides your settings
  • You forgot to restart the process

IDE settings differ, so always check the “VM options” field in Run/Debug Configuration.

Q7. Is it true that Spring Boot uses a lot of memory?

Yes. Spring Boot often consumes more memory due to:

  • Auto configuration
  • Many Beans
  • Class loading in fat JARs
  • Embedded web server (Tomcat, etc.)

Compared to a plain Java program, it can use an extra ~200–300MB in some cases.

Q8. Which GC should I use?

In most cases, G1GC is the safe default.

● Recommendations by workload

  • Web apps → G1GC
  • Throughput-heavy batch jobs → Parallel GC
  • Ultra-low latency needs → ZGC / Shenandoah

Without a strong reason, choose G1GC.

Q9. How should I handle heap in serverless environments (Cloud Run / Lambda)?

Serverless environments have tight memory limits, so you should explicitly configure heap.

Example (Java 11):

-XX:MaxRAMPercentage=70

Also note that memory can spike during cold starts, so leave headroom in your heap configuration.

Q10. How can I prevent Java heap issues from recurring?

If you strictly follow these three rules, recurrence drops dramatically:

✔ Set heap explicitly

✔ Process huge data via streaming

✔ Regularly review GC logs and heap dumps

Summary: Use the FAQ to Remove Doubts and Apply Practical Memory Countermeasures

This FAQ covered common search-driven questions about “java heap space” with practical answers.

Together with the main article, it should help you become strong at handling Java memory problems and significantly improve system stability in production.