I have 3 nodes; each node has 200 GB of memory and 28 CPU cores on YARN.
Sometimes YARN loses one node's memory and CPU,
but soon afterwards the lost memory and CPU come back.
Has anybody run into the same problem?
Thanks, Cathy Liu.
I found the reason: it was caused by insufficient disk space.
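For anyone hitting the same symptom: when a node's disk fills up past the NodeManager's disk health-checker threshold, YARN marks the node unhealthy and temporarily removes its memory and CPU from the cluster total; once space is freed, the resources reappear. A minimal sketch of the relevant yarn-site.xml knobs (the values shown are illustrative, not taken from this thread):

```xml
<configuration>
  <!-- A disk is considered full (and the node unhealthy) once its
       utilization exceeds this percentage; 90.0 is the YARN default. -->
  <property>
    <name>yarn.nodemanager.disk-health-checker.max-disk-utilization-per-disk-percentage</name>
    <value>90.0</value>
  </property>
  <!-- Minimum free space (MB) a disk must keep to stay healthy;
       illustrative value, tune for your disks. -->
  <property>
    <name>yarn.nodemanager.disk-health-checker.min-free-space-per-disk-mb</name>
    <value>1024</value>
  </property>
  <!-- Fraction of local dirs that must be healthy for the
       NodeManager itself to be considered healthy. -->
  <property>
    <name>yarn.nodemanager.disk-health-checker.min-healthy-disks</name>
    <value>0.25</value>
  </property>
</configuration>
```

Raising the utilization threshold only hides the problem; cleaning up local dirs and logs (or adding disk) is the real fix.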
Hi zou haiyang,
Are you running Spark on YARN? Under what scenarios do you observe that "YARN loses one node's memory and CPU"? I found a good suggestion on avoiding the loss of CPU: https://stackoverflow.com/questions/30713666/spark-resources-not-fully-allocated-on-amazon-emr
Let me know if that resolution helps.
Great! Thank you for sharing your finding.