
Installer failure on Centos 6/EC2: Unable to execute command: hadoop fs -put -f /opt/mapr/spark/spark-1.5.2/lib/spark-assembly-1.5.2-mapr-1512-hadoop2.7.0-mapr-1509.jar /installer/spark-1.5.2/. Returned: 1

Question asked by rrusk on Feb 3, 2016
Latest reply on May 5, 2016 by rrusk
I am getting a consistent failure when using the MapR Installer to install MapR on an EC2 cluster as described at https://www.mapr.com/blog/spinning-hadoop-cluster-cloud.  I have tried various combinations of cluster specs, Linux distros, additional MapR components, etc.  The goal is to get core MapR plus Spark running under load for evaluation and prototyping purposes.

Currently, I am getting the error: Unable to execute command: hadoop fs -put -f /opt/mapr/spark/spark-1.5.2/lib/spark-assembly-1.5.2-mapr-1512-hadoop2.7.0-mapr-1509.jar /installer/spark-1.5.2/. Returned: 1

Any help in getting past this would be much appreciated.  I see this error on both Ubuntu 14.04 and CentOS 6 (HVM with Updates) AMIs.  Attached are the log files from the latest installation attempt, installing onto a cluster of three d2.xlarge instances, each with 2 GB swap, an 8 GB / partition, and a 6 GB /opt partition.  In this attempt, the error appears on node 10.0.1.38, and the graphical installer reports that "3 nodes did not install correctly".

[logs.zip][1]

  [1]: /storage/temp/374-installer-processlog.txt
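In case it helps anyone reproduce this, below is roughly what I plan to run by hand on the failing node (10.0.1.38) to see whether MapR-FS is reachable there and whether the copy fails outside the installer as well. The Warden and maprcli checks are my own guesses at useful diagnostics, not steps the installer itself performs:

```bash
# On the failing node (10.0.1.38), as a user with MapR-FS access.

# Assumed diagnostics -- not part of the installer's own steps:
sudo service mapr-warden status      # is Warden running on this node?
maprcli node list -columns svc       # which MapR services report as up?
df -h / /opt                         # enough room on the 8 GB / and 6 GB /opt?

# Re-run the exact copy the installer attempts:
hadoop fs -mkdir -p /installer/spark-1.5.2
hadoop fs -put -f /opt/mapr/spark/spark-1.5.2/lib/spark-assembly-1.5.2-mapr-1512-hadoop2.7.0-mapr-1509.jar /installer/spark-1.5.2/
echo "put exit status: $?"
```

If the manual `hadoop fs -put` also returns 1, whatever it prints to stderr should say more than the installer's "Returned: 1" does.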
