
What is recommended size for /tmp (local storage in general)

Question asked by nirav on Sep 20, 2016
Latest reply on Sep 21, 2016 by nirav

Is there any recommended /tmp (local storage) size for YARN/MapReduce- and Spark-heavy workloads on a cluster? A few blogs recommend 20-25% of a node's total disk capacity. Is that true for a MapR cluster as well? It seems like too much to me.
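To put the 20-25% figure in concrete terms, a quick sketch of the arithmetic on a node (the mount point / is just an example; substitute whatever local disk holds your temp directories):

```shell
# Read total capacity (in KB) of the filesystem backing the temp dirs,
# then print what a 20% and 25% allocation would come to.
total_kb=$(df -k / | awk 'NR==2 {print $2}')
echo "Total capacity : ${total_kb} KB"
echo "20% allocation : $(( total_kb / 5 )) KB"
echo "25% allocation : $(( total_kb / 4 )) KB"
```

On a node with 4 TB of local disk, 20-25% would mean reserving roughly 800 GB to 1 TB for temp space, which is why the figure can look excessive.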

 

PS - Some people may be using a directory other than /tmp to store temporary files from MapReduce, Spark, etc., but the point is that it all ends up on your local disk (as recommended).
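For reference, these are the standard Hadoop/Spark properties that control where that temporary data lands (the property names are the stock Hadoop/Spark keys; the paths shown are purely illustrative):

```
<!-- yarn-site.xml: NodeManager intermediate/shuffle data -->
<property>
  <name>yarn.nodemanager.local-dirs</name>
  <value>/data1/yarn/local,/data2/yarn/local</value>
</property>

# spark-defaults.conf: Spark scratch space (shuffle spill, RDD spill)
spark.local.dir    /data1/spark/tmp,/data2/spark/tmp
```

Whatever paths these point at, the sizing question in this thread applies to the disks behind them, not to /tmp specifically.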

 

Here are the general Hadoop docs that discuss recommended sizing:

Formula to Calculate HDFS nodes storage - Hadoop Online Tutorials  
