We have noticed that on our MapR 6.0 cluster, memory usage is higher than the typical production case described in the MapR documentation (and very high in general given our current level of usage), and we would like to fix this somehow.
The docs (https://maprdocs.mapr.com/home/AdvancedInstallation/PreparingEachNode-memory.html) say that "typical MapR production nodes have 32 GB or more". Some of our nodes have twice that (64 GB), yet we are still seeing 87%+ memory utilization with only around 12 active users.
Does mfs automatically try to consume up to the service.command.mfs.heapsize.maxpercent=85 value specified in /opt/mapr/conf/warden.conf, or something along those lines?
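For anyone else digging into this: a quick, hedged sketch of how one might inspect the warden memory settings on a node and do the back-of-the-envelope math. Note the exact allocation behavior is internal to warden; this only illustrates why ~85%+ utilization on a 64 GB node would be roughly consistent with maxpercent=85, and the grep pattern assumes the settings live in warden.conf as shown in the question.

```shell
# Show the mfs heapsize knobs on this node (path taken from the question above)
grep 'service.command.mfs.heapsize' /opt/mapr/conf/warden.conf

# Rough ceiling implied by maxpercent=85 on a 64 GB node (illustrative arithmetic only)
total_gb=64
maxpercent=85
ceiling=$(( total_gb * maxpercent / 100 ))
echo "mfs could claim up to roughly ${ceiling} GB of ${total_gb} GB"
```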
What would happen if memory utilization reached 100% on any given node?
Basically, I am curious what could be happening here, since it seems abnormal.
Please note that I opened another post in the past (https://community.mapr.com/thread/22707-best-practices-way-to-investigate-mapr-resource-utilization) where I mention a known reporting bug in the MCS. I do not think that is the problem here, because these MCS memory utilization values match the utilization reported by the hypervisor management interface we use to inspect the VMs our cluster is installed across.
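In case it helps others reproduce the comparison: the MCS and hypervisor numbers can also be cross-checked against the node itself with standard Linux tools (nothing MapR-specific here). The interpretation caveat is that a high "used" figure may include page cache that the kernel can reclaim, so raw utilization alone does not prove memory pressure.

```shell
# Overall memory picture, with the buffers/cache column broken out
free -g

# Top resident-memory consumers; if mfs is claiming its configured share,
# it should appear at or near the top of this list
ps aux --sort=-rss | head -n 5
```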