
NFS Performance

Question asked by peppert on Apr 27, 2012
Latest reply on May 2, 2012 by peppert
I have a single-node Dell R720xd host with 12 3TB 7.2k SAS disks behind a PERC H710 with 1GB write cache, running Ubuntu 12.04 beta 2 amd64 and the latest Sun JDK. All data disks are configured as individual RAID0 virtual disks (13 virtual disks total, including a RAID1 OS volume on 2 dedicated drives outside the storage pool). CPUs are 2 six-core Xeon E5-2620s, with 16GB RAM.

When using a non-MapR software RAID solution like mdraid 0 or ZFS, I'm able to sustain sequential writes and reads to the combined volume at between 1.1GB/sec and 1.5GB/sec. I get similar performance when accessing that software RAID volume over NFS via localhost.
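For reference, this is roughly how I'm measuring the sequential numbers (the mount point and file size are placeholders; adjust MNT to the volume under test, and for real runs use a file larger than RAM so the page cache can't inflate the result):

```shell
#!/bin/sh
# Placeholder mount point -- substitute the mdraid/ZFS or NFS mount under test.
MNT=${MNT:-/tmp}
SIZE_MB=${SIZE_MB:-256}   # bump well past RAM (16GB here) for honest numbers

# Sequential write; conv=fsync forces the data to stable storage before dd
# reports throughput, so the figure isn't just write-cache speed.
dd if=/dev/zero of="$MNT/seqwrite.bin" bs=1M count="$SIZE_MB" conv=fsync

# Sequential read back; uncomment iflag=direct on filesystems that support
# O_DIRECT to bypass the page cache and actually hit the disks/server.
dd if="$MNT/seqwrite.bin" of=/dev/null bs=1M  # iflag=direct

rm -f "$MNT/seqwrite.bin"
```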

When accessing the same disks via MapR NFS (configured to use the raw /dev/sd_ devices individually), performance drops to about 200MB/sec, and parallel access to the nfsserver process degrades throughput roughly linearly as the process count increases, to the point where 10 concurrent writes of a 1GB file cached entirely in RAM total only 20MB/sec. Disabling compression has no effect.
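The concurrent-writer case looks like the sketch below (again, MNT, the writer count, and the file size are placeholders; each stream writes its own file so the writers don't serialize on a single inode):

```shell
#!/bin/sh
# Placeholder mount point -- substitute the MapR NFS mount.
MNT=${MNT:-/tmp}
N=${N:-10}                # number of concurrent writers
SIZE_MB=${SIZE_MB:-64}    # per-writer file size in MB

start=$(date +%s)
i=1
while [ "$i" -le "$N" ]; do
  # Each writer gets its own file; conv=fsync so completion means durable data.
  dd if=/dev/zero of="$MNT/par.$i" bs=1M count="$SIZE_MB" conv=fsync 2>/dev/null &
  i=$((i + 1))
done
wait                      # block until every writer finishes
end=$(date +%s)

elapsed=$((end - start))
[ "$elapsed" -gt 0 ] || elapsed=1   # avoid divide-by-zero on fast runs
echo "aggregate: $((N * SIZE_MB / elapsed)) MB/s"

rm -f "$MNT"/par.*
```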

I've got two 10GbE adapters in this host (and in the other hosts I'm planning to put into an eventual M5 cluster), and was hoping to see performance closer to 600-800MB/sec for large sequential reads and writes via NFS. What can I do to help things along?

Thanks in advance.