
Is there a way for me to use RDMA interconnects with the M3?

Question asked by mmohr on Aug 15, 2012
Latest reply on Aug 15, 2012 by srivas
The plugin in question is made by Mellanox and is distributed pre-integrated into an otherwise vanilla Hadoop distribution. I've had no luck mating it with MapR's Hadoop, but we've stayed with MapR because the performance hit we take from running IP over InfiniBand is more than made up for by the performance boost we get from MFS. I've had zero success finding anything relating to InfiniBand and MapR.

We're running an M3 test grid of four hardware nodes: one CLDB node, one job node, and two dedicated computation nodes, with three ZooKeepers among them, plus a VM external to the cluster that handles user aggregation, the web presence, and NFS. The four hardware nodes are connected with Ethernet (used to NFS-boot their operating systems) and InfiniBand as the high-speed interconnect. M3 presently seems to load-balance between the two NICs.

InfiniBand will carry standard IP via IPoIB, but with some pretty painful performance penalties; it's much better to go native using RDMA. If there were a stable, production-ready way for us to use RDMA for the interconnect, we'd find that very useful.
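For context on what "going native" means here: native RDMA bypasses the kernel IP stack entirely and talks to the HCA through the verbs API (libibverbs), which is the layer a plugin like Mellanox's targets. Below is a minimal sketch, assuming libibverbs is installed (compile with -libverbs), that just enumerates RDMA-capable devices and queries their attributes; it's an illustration of the verbs layer, not a MapR integration.

```c
/* Minimal sketch: enumerate RDMA-capable devices via libibverbs.
 * Illustrates the native verbs layer that RDMA transports use,
 * as opposed to IPoIB, which goes through the kernel IP stack.
 * Not a MapR integration; purely illustrative. */
#include <stdio.h>
#include <infiniband/verbs.h>

int main(void)
{
    int num_devices = 0;
    struct ibv_device **devices = ibv_get_device_list(&num_devices);
    if (!devices) {
        perror("ibv_get_device_list");
        return 1;
    }

    for (int i = 0; i < num_devices; i++) {
        struct ibv_context *ctx = ibv_open_device(devices[i]);
        if (!ctx)
            continue;

        struct ibv_device_attr attr;
        if (ibv_query_device(ctx, &attr) == 0)
            printf("%s: %d port(s), max_qp=%d\n",
                   ibv_get_device_name(devices[i]),
                   attr.phys_port_cnt, attr.max_qp);

        ibv_close_device(ctx);
    }

    ibv_free_device_list(devices);
    return 0;
}
```

If this prints your HCA (e.g. a Mellanox ConnectX adapter) with at least one port, the node can do native RDMA; the open question is whether anything in the MapR stack can be made to use it.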
