
Changing the default memory allocations

Question asked by Karthee on Mar 9, 2017
Latest reply on Apr 3, 2017 by maprcommunity

Hi There,

 

I have a 3-node cluster running the M5 Community Edition. Each node has 256 GB of RAM, but I am running into issues when a big job runs, so I want to change the memory allocations. I know these configs from Apache Hadoop clusters, but as I am new to MapR, could you help me change these default memory settings?

 

 

set yarn.scheduler.minimum-allocation-mb = 1024;

set yarn.scheduler.maximum-allocation-mb = 8192;

set mapreduce.map.memory.mb = 4096;

set mapreduce.reduce.memory.mb = 8192;

set mapreduce.map.java.opts.max.heap = 3072;

set mapreduce.reduce.java.opts.max.heap = 6144;

set yarn.app.mapreduce.am.resource.mb = 1024;
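For reference, in a stock Apache Hadoop/YARN setup the values above would normally go into yarn-site.xml (scheduler limits) and mapred-site.xml (per-task sizes) rather than being set at a session prompt. A sketch of what that might look like is below; note that the heap settings use the stock Apache property names `mapreduce.{map,reduce}.java.opts` with an `-Xmx` flag (the `.max.heap` names in my list may be distribution-specific), and the config file locations on a MapR cluster are an assumption on my part:

```xml
<!-- yarn-site.xml: smallest and largest container YARN will grant -->
<property>
  <name>yarn.scheduler.minimum-allocation-mb</name>
  <value>1024</value>
</property>
<property>
  <name>yarn.scheduler.maximum-allocation-mb</name>
  <value>8192</value>
</property>

<!-- mapred-site.xml: per-task container sizes and JVM heaps.
     Heap (-Xmx) is kept at roughly 75% of the container size
     to leave headroom for JVM overhead. -->
<property>
  <name>mapreduce.map.memory.mb</name>
  <value>4096</value>
</property>
<property>
  <name>mapreduce.map.java.opts</name>
  <value>-Xmx3072m</value>
</property>
<property>
  <name>mapreduce.reduce.memory.mb</name>
  <value>8192</value>
</property>
<property>
  <name>mapreduce.reduce.java.opts</name>
  <value>-Xmx6144m</value>
</property>
<property>
  <name>yarn.app.mapreduce.am.resource.mb</name>
  <value>1024</value>
</property>
```

After editing, the NodeManager and ResourceManager services would need a restart for the scheduler settings to take effect; the mapred-site.xml values are read per job submission.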

 

i would really appreciate your help!
