
Spark + YARN cluster: how can I configure each physical node to run only one executor/task at a time?

Question asked by oransh on Feb 23, 2016
Latest reply on Mar 18, 2016 by Hao Zhu
Hi,

I have an environment of 4 physical nodes, each with a small amount of RAM and 8 CPU cores.
I noticed that Spark automatically splits the node's RAM among the CPU cores, and the result is a memory error.
I'm working with large data structures, and I want each executor to get the entire RAM of its physical node (otherwise I'll get a memory error).
I tried setting 'yarn.nodemanager.resource.cpu-vcores' to 1 in yarn-site.xml, and 'spark.executor.cores 1' in spark-defaults.conf, without any success.
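Concretely, the settings I tried looked roughly like this (a sketch of the two attempts described above, in each file's own format):

In yarn-site.xml (advertise only 1 vcore per NodeManager):

<property>
  <name>yarn.nodemanager.resource.cpu-vcores</name>
  <value>1</value>
</property>

In spark-defaults.conf (limit each executor to 1 core):

spark.executor.cores 1

Neither change gave me one executor with the whole node's memory; the containers still get only a slice of the RAM.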
