How to manage the size of map-reduce task attempt logs

Document created by jbubier Employee on Feb 7, 2016

Author: Jonathan Bubier


Original Publication Date: February 12, 2015



Map-reduce tasks typically write log output to up to three files - syslog, stdout and stderr.  By default these files are found under /opt/mapr/hadoop/hadoop-0.20.2/logs/userlogs/&lt;job id&gt;/&lt;attempt id&gt;/ on the node that ran the task attempt.  Ex:


$ ls -l /opt/mapr/hadoop/hadoop-0.20.2/logs/userlogs/job_201410010922_0006/

total 12

drwxr-s--- 2 root mapr 4096 Dec 5 10:41 attempt_201410010922_0006_m_000001_0

drwxr-s--- 2 root mapr 4096 Dec 5 10:41 attempt_201410010922_0006_r_000000_0

-rw-r----- 1 root mapr 499 Dec 5 10:41 job-acls.xml


$ ls -l /opt/mapr/hadoop/hadoop-0.20.2/logs/userlogs/job_201410010922_0006/attempt_201410010922_0006_m_000001_0/

total 8

-rw-r----- 1 root mapr 155 Dec 5 10:41 log.index

-rw-r----- 1 root mapr 0 Dec 5 10:41 stderr

-rw-r----- 1 root mapr 0 Dec 5 10:41 stdout

-rw-r----- 1 root mapr 1228 Dec 5 10:41 syslog

Note the job ID is job_201410010922_0006 and there are two task attempts - one map task and one reduce task.  With MapR's Central Logging feature, once a task attempt completes the corresponding logs are found on MapR-FS in a local volume path unique to the node that ran the attempt.  Specifically /var/mapr/local/&lt;hostname&gt;/logs/mapred/userlogs/&lt;job id&gt;/&lt;attempt id&gt;/ where &lt;hostname&gt; is the node's fully qualified domain name.  Ex:


# hadoop fs -ls /var/mapr/local/host1.domain.prv/logs/mapred/userlogs/job_201410010922_0005/

Found 4 items

drwxr-xr-x - root root 4 2014-12-05 10:33 /var/mapr/local/host1.domain.prv/logs/mapred/userlogs/job_201410010922_0005/attempt_201410010922_0005_m_000000_0

drwxr-xr-x - root root 4 2014-12-05 10:33 /var/mapr/local/host1.domain.prv/logs/mapred/userlogs/job_201410010922_0005/attempt_201410010922_0005_m_000001_0

drwxr-xr-x - root root 4 2014-12-05 10:33 /var/mapr/local/host1.domain.prv/logs/mapred/userlogs/job_201410010922_0005/attempt_201410010922_0005_m_000002_0

drwxr-xr-x - root root 4 2014-12-05 10:33 /var/mapr/local/host1.domain.prv/logs/mapred/userlogs/job_201410010922_0005/attempt_201410010922_0005_r_000000_0

If the log output from map-reduce task attempts grows significantly and occupies a large amount of space, either on local storage or on MapR-FS, it may be necessary to restrict the size of the task logs.  This article describes several methods to limit and control the space consumed by task attempt logs.
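Before tuning retention settings it helps to know which jobs are actually consuming the space. The following is a minimal Python sketch (the helper name and the hard-coded path are illustrative, not part of the MapR tooling) that sums per-job log usage under a TaskTracker's local userlogs directory:

```python
import os

def log_usage_by_job(userlogs_dir):
    """Return {job_id: total_bytes} for a userlogs directory tree."""
    usage = {}
    for job_id in os.listdir(userlogs_dir):
        job_path = os.path.join(userlogs_dir, job_id)
        if not os.path.isdir(job_path):
            continue
        total = 0
        for root, _dirs, files in os.walk(job_path):
            for name in files:
                total += os.path.getsize(os.path.join(root, name))
        usage[job_id] = total
    return usage

if __name__ == "__main__":
    # Point this at the TaskTracker's local userlogs directory.
    path = "/opt/mapr/hadoop/hadoop-0.20.2/logs/userlogs"
    if os.path.isdir(path):
        for job, size in sorted(log_usage_by_job(path).items()):
            print(f"{job}\t{size} bytes")
```

The same idea applies to the Central Logging paths on MapR-FS, e.g. by running `hadoop fs -du` against /var/mapr/local/&lt;hostname&gt;/logs/mapred/userlogs/.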


Restrict log retention time

It is important to determine how long the output from map-reduce tasks should be retained.  The individual task logs may not be particularly large, yet if they are retained indefinitely they can accumulate significantly. By default the logs are retained for 24 hours after task completion, as defined by the property 'mapred.userlog.retain.hours'.  Because users can override this setting on a per-job basis, a cluster-wide cap is defined by the property 'mapred.userlog.retain.hours.max', which defaults to 168 hours (1 week). This allows logs to be retained for longer than 24 hours, but never longer than 168 hours, even if a user sets a higher value for mapred.userlog.retain.hours.
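The interaction between the default, the per-job override, and the cluster-wide maximum can be sketched as the following clamp (the function name is illustrative; the defaults match the values described above):

```python
def effective_retention_hours(job_setting, default_hours=24, max_hours=168):
    """Resolve the retention time for one job's task logs.

    If the job does not set mapred.userlog.retain.hours, the cluster
    default applies; if it requests more than
    mapred.userlog.retain.hours.max, the maximum wins.
    """
    if job_setting is None:
        return default_hours
    return min(job_setting, max_hours)
```

For example, a job requesting 240 hours of retention would still have its logs removed after 168 hours.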


Limit the number of hours for which task attempt logs are retained on a server by setting the 'mapred.userlog.retain.hours.max' parameter in the /opt/mapr/hadoop/hadoop-0.20.2/conf/mapred-site.xml file. This change should be made on every node running the TaskTracker process, and each TaskTracker service must be restarted for the change to take effect. Ex:


<property>
  <name>mapred.userlog.retain.hours.max</name>
  <value>96</value>
</property>

The above reduces the maximum retention time from 168 hours to 96 hours.  Similarly, if the default retention of 24 hours is too long, 'mapred.userlog.retain.hours' can be reduced in the same manner.


Restrict log retention size

To restrict the size of task attempt output there are corresponding retention-size properties for map tasks and reduce tasks.  These properties are 'mapreduce.cluster.map.userlog.retain-size' and 'mapreduce.cluster.reduce.userlog.retain-size', and they define the retention size in bytes. By default these properties are undefined and there is no limit on the size of the map or reduce task logs.  When these properties are defined, the TaskLogsTruncater in the TaskTracker truncates any task log file larger than the defined retention size after task completion.  Note that the truncater retains the last 'retain-size' bytes of the log, not the first 'retain-size' bytes.
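The tail-keeping behavior described above can be illustrated with a short Python sketch (this is a simplified model of the truncation semantics, not MapR's actual TaskLogsTruncater implementation):

```python
import os

def truncate_keep_tail(path, retain_size):
    """Shrink a log file to its last `retain_size` bytes, in place.

    Models the behavior described above: the end of the log survives
    and the beginning is discarded. Files at or under the limit are
    left untouched.
    """
    size = os.path.getsize(path)
    if size <= retain_size:
        return
    with open(path, "rb") as f:
        f.seek(size - retain_size)
        tail = f.read()
    with open(path, "wb") as f:
        f.write(tail)
```

The practical consequence is that after truncation a log shows the most recent output - typically the messages closest to the task's completion or failure - rather than its startup output.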


If the size of the task logs needs to be restricted, set the properties 'mapreduce.cluster.map.userlog.retain-size' and 'mapreduce.cluster.reduce.userlog.retain-size' to a value appropriate for your environment.  This is done by defining the properties in /opt/mapr/hadoop/hadoop-0.20.2/conf/mapred-site.xml on all TaskTracker nodes and restarting all TaskTrackers.  Ex:

<property>
  <name>mapreduce.cluster.map.userlog.retain-size</name>
  <value>1048576</value>
</property>

<property>
  <name>mapreduce.cluster.reduce.userlog.retain-size</name>
  <value>1048576</value>
</property>

The above sets a retention size of 1MB (1048576 bytes) for each map task log file and reduce task log file. As mentioned above, each task attempt typically has a separate log file for stdout, stderr and syslog, so this example allows up to 3MB per attempt - 1MB for each of the three logs. Also note that truncation does not occur until the task completes, so task logs may grow beyond the retain size while the task is still in progress.  After task completion, however, no log file should be larger than 'retain-size'.
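This per-attempt bound makes it easy to estimate worst-case log usage for capacity planning. A trivial sketch of the arithmetic (the function name and the three-files-per-attempt assumption follow the description above):

```python
def worst_case_log_bytes(attempts, retain_size, files_per_attempt=3):
    """Upper bound on post-completion log usage for completed attempts.

    Assumes each attempt keeps up to `files_per_attempt` log files
    (stdout, stderr, syslog), each truncated to `retain_size` bytes.
    """
    return attempts * files_per_attempt * retain_size
```

For example, with the 1MB retain-size above, 1000 retained attempts would consume at most about 3GB of log space before retention-time cleanup removes them.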