Spark Troubleshooting Guide: Memory Management: How do I troubleshoot typical out-of-memory (OOM) issues on the Spark driver?

Document created by hdevanath on Jun 19, 2017

The driver memory may need to be reconfigured for optimal performance. The following are example scenarios in which a memory reconfiguration may be required or worth evaluating. By default, the Spark driver is allocated 1 GB of memory. Owing to the high data volume and variability of Spark SQL workloads and the high velocity of Spark Streaming workloads, driver memory is frequently constrained and in high demand.
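To make the allocation explicit rather than relying on the default, the value can be recorded in spark-defaults.conf. A minimal sketch, where 1g matches the stock default mentioned above:

# conf/spark-defaults.conf
spark.driver.memory 1g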
 
If driver memory is not adequate, it leads to frequent full garbage collection (GC). A full GC normally releases memory that is no longer referenced. If the amount of memory released after each full GC cycle is less than 2% over the last 5 consecutive full GCs, the JVM throws an OutOfMemoryError. The stack trace looks similar to the following:

17/06/07 11:25:00 ERROR akka.ErrorMonitor: Uncaught fatal error from thread [sparkDriver-akka.actor.default-dispatcher-29] shutting down ActorSystem [sparkDriver]
java.lang.OutOfMemoryError: Java heap space
Exception in thread "task-result-getter-0" java.lang.OutOfMemoryError: Java heap space
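To confirm that the driver is spending most of its time in full GC before the error is thrown, GC logging can be enabled on the driver. The flags below are an illustrative example for a Java 8 JVM; other Java versions use different GC logging options:

--conf "spark.driver.extraJavaOptions=-verbose:gc -XX:+PrintGCDetails -XX:+PrintGCTimeStamps"

The resulting GC log shows how often full GC cycles run and how little heap each one releases, which is the pattern described above.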


Increasing the Spark driver memory should fix the issue. The setting can be passed on the spark-submit command line or set in the spark-defaults.conf file:

--conf "spark.driver.memory=4g"
or
--driver-memory 4g


 
