Hive query runs out of heap memory during in-memory shuffle

Document created by Hao Zhu on Feb 18, 2016

Author: Hao Zhu

Original Publication Date: February 12, 2015

 

Symptom:

A Hive query fails with an out-of-memory error during "shuffleInMemory":

 

Error: java.lang.OutOfMemoryError: Java heap space
        at org.apache.hadoop.mapred.ReduceTask$ReduceCopier$MapOutputCopier.shuffleInMemory(ReduceTask.java:1774)
        at org.apache.hadoop.mapred.ReduceTask$ReduceCopier$MapOutputCopier.getMapOutputFromFile(ReduceTask.java:1487)
        at org.apache.hadoop.mapred.ReduceTask$ReduceCopier$MapOutputCopier.copyOutput(ReduceTask.java:1361)
        at org.apache.hadoop.mapred.ReduceTask$ReduceCopier$MapOutputCopier.run(ReduceTask.java:1278)

Root Cause:

During the shuffle phase of a MapReduce job, the reducer copies map outputs into memory.
The heap used for this in-memory shuffle is:

mapred.job.shuffle.input.buffer.percent (default 0.70) * maximum reducer heap size (the -Xmx value in mapred.reduce.child.java.opts).

Currently the code does not check whether enough heap is available before shuffling in memory, so the copy can exhaust the reducer's heap.

One common trigger is a Hive query that selects many columns.
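The arithmetic above can be sketched as follows (illustrative Python, not Hadoop code; the heap sizes are hypothetical examples):

```python
# A minimal sketch of the formula above: the heap the reducer may fill
# with copied map outputs is roughly
# mapred.job.shuffle.input.buffer.percent * the -Xmx heap size.
def shuffle_buffer_mb(reduce_heap_mb, input_buffer_percent=0.70):
    """Approximate heap (MB) reserved for the in-memory shuffle."""
    return reduce_heap_mb * input_buffer_percent

# Hypothetical 1024 MB reducer heap (-Xmx1024m):
print(round(shuffle_buffer_mb(1024)))        # default 0.70 -> 717 MB
print(round(shuffle_buffer_mb(1024, 0.20)))  # lowered to 0.20 -> 205 MB
```

Either growing the heap or lowering the buffer percent changes this budget, which is exactly what the two solutions below do.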

Solution:

1. Increase mapred.reduce.child.java.opts, but only after fully understanding memory usage across the whole cluster.
Refer to the article "Five Steps to Avoiding Java Heap Space Errors" for details.
Do not blindly increase this setting, since it may cause other services or jobs to run out of memory.
In hive shell:

set mapred.reduce.child.java.opts=-Xmx8192m;

2. Decrease mapred.job.shuffle.input.buffer.percent from the default 0.70 to a lower value, for example 0.20.

In hive shell:

set mapred.job.shuffle.input.buffer.percent=0.20;
