
Running MR jobs with a lot of RAM

Question asked by fozziethebeat on Jul 26, 2011
Latest reply on Jul 26, 2011 by lohit
Hi,

I'm currently trying to parse a large number of sentences, and this requires a parsing model that uses about 2 GB-4 GB of RAM per parser.  Right now, I've extended the heap size for each MapReduce task by setting mapred.map.child.java.opts, but I'm seeing a lot of jobs crash due to a lack of available physical RAM.
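For reference, this is roughly how I'm setting it in mapred-site.xml (the 4 GB value is just an example of what I've been trying):

    <property>
      <name>mapred.map.child.java.opts</name>
      <value>-Xmx4096m</value>
    </property>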

After inspecting the MapR UI, I see that most of my nodes are using 60%-80% of their RAM.  Does that percentage include the RAM available to MapReduce tasks, or am I limited to the remaining RAM for my tasks?  If the memory MapR is using does include the memory for each task, which parameter in warden.conf should I change?  Would it be one of the service.command.* options?
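For context, the entries I'm wondering about look something like this (I'm copying the names from memory, so they may not match my warden.conf exactly):

    service.command.mfs.heapsize.percent=20
    service.command.mfs.heapsize.min=512
    service.command.os.heapsize.percent=3

Should I be shrinking one of these percentages to leave more physical RAM for my map tasks?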

Thanks!
