Is there any recommendation for /tmp storage sizing on a cluster running heavy YARN/MapReduce and Spark workloads? A few blogs recommend 20-25% of a node's total disk capacity — does that hold for a MapR cluster as well? That seems like too much to me.
PS - Some people may use a directory other than /tmp to store temporary files from MapReduce, Spark, etc., but the point is that it all ends up on local disk (as recommended).
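For anyone wanting to redirect those temp files away from /tmp, a minimal sketch of the standard properties involved (the paths below are hypothetical examples; yarn.nodemanager.local-dirs and spark.local.dir are the stock Hadoop/Spark settings and may differ under a vendor distribution like MapR):

```xml
<!-- yarn-site.xml: where NodeManager writes shuffle/intermediate data.
     Comma-separated list; spreading across disks spreads the I/O. -->
<property>
  <name>yarn.nodemanager.local-dirs</name>
  <value>/data1/yarn/local,/data2/yarn/local</value>
</property>
```

```
# spark-defaults.conf: scratch space for Spark shuffle spill
# (ignored on YARN, where Spark inherits the NodeManager local dirs)
spark.local.dir  /data1/spark/tmp,/data2/spark/tmp
```

Whatever directories you choose, the sizing question above still applies to the disks backing them.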
Here are the general Hadoop docs, which discuss the recommended size.