
What are the steps to configure a Spark shell on an edge node in MapR?

Question asked by brett_ on Nov 16, 2016
Latest reply on Nov 17, 2016 by MichaelSegel

Hi all, this is a pretty straightforward question. I'm a Scala developer who has developed models in Spark, but I'm still learning the MapR configuration/provisioning process, so bear with me.


We have a ~10-13 node Hadoop cluster and an edge node that different people use as a gateway to access the cluster, or as a workbench to run applications that use the cluster. I just want the Spark shell and driver to run on the edge node and submit work to the cluster. Spark has built-in cluster managers and supports YARN, but I'm a bit confused by (a) what from the documentation below applies, and (b) what configuration I have to do in addition to the MapR documentation to get a workable Spark instance on the edge node that can run jobs on the cluster workers.
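For context, here is a sketch of the kind of invocation I'd expect to work from the edge node: launching the Spark shell against YARN in client mode, so the driver (and the REPL) stays on the edge node while executors run on the cluster workers. The paths and resource values below are assumptions, not verified against our install.

```shell
# Sketch only: Spark shell on the edge node, executors on the YARN cluster.
# Paths below follow the usual MapR package layout but are assumptions here.

# Point Spark at the cluster's Hadoop/YARN config files on the edge node
export HADOOP_CONF_DIR=/opt/mapr/hadoop/hadoop-2.7.0/etc/hadoop

# yarn-client mode keeps the driver local to the edge node (the Spark shell
# only supports client deploy mode); executor counts/memory are placeholders
/opt/mapr/spark/spark-1.6.1/bin/spark-shell \
  --master yarn-client \
  --num-executors 4 \
  --executor-memory 2g
```

Is this roughly the right shape, or does MapR require additional client configuration on the edge node first?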


Managing the MapReduce Mode for Ecosystem Components - MapR 5.0 Documentation


Any answers or feedback would be helpful, thanks.