
Hadoop CLDB Java stack size error

Question asked by hvhove on May 6, 2014
Latest reply on Jun 17, 2014 by thomasr
I'm trying to set up a 3-node M3 cluster on Google Compute Engine.
But when configure-mapr-instance.sh runs, it keeps waiting for the Hadoop filesystem to come online, which it never does.
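To double-check, I've also been probing the filesystem manually from one of the nodes while the script polls (just a basic sanity check); it never succeeds either:
<code>
# Run on any cluster node; for me this fails instead of listing the root volume
hadoop fs -ls /
</code>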

I've found that the following error keeps recurring in cldb.log:
<pre>
The stack size specified is too small, Specify at least 228k
Error: Could not create the Java Virtual Machine.
Error: A fatal exception has occurred. Program will exit.
</pre>
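For reference, this looks like the JVM's own minimum-stack-size check rather than anything MapR-specific; the same message can be reproduced by starting the JVM directly with a too-small stack (the exact minimum it reports varies by JVM version and platform):
<code>
# Reproducing the check directly with the same JVM (assuming it is on the PATH)
java -Xss100k -version
</code>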

After some searching I found mapr/initscripts/mapr-cldb, which specifies the Java stack size with:
<code>
-XX:ThreadStackSize=160 \
</code>
I've changed this value to 256, 256k, 1024, 1024k, and 4m, and after each change ran "sudo service mapr-cldb start", which gives back:
<pre>
0.20.2
Error: cldb can not be started. See /opt/mapr/logs/cldb.log for details
</pre>
The same stack size error is appended to the log file each time.
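For what it's worth, the flag can also be tested in isolation, independent of the init script. My understanding (which may be wrong) is that -XX:ThreadStackSize takes a plain number interpreted in kilobytes, while the k/m suffixes act as byte multipliers, so suffixed values like 256k or 4m may not mean what I intended:
<code>
# Plain number is interpreted in kilobytes, so this asks for 256 KB stacks
java -XX:ThreadStackSize=256 -version

# The standard -Xss flag takes an explicit unit suffix instead
java -Xss256k -version
</code>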

Does anyone have any clues as to why this error is occurring and how I can solve it?
Thanks in advance.

P.S.: I'm starting up my cluster with the following command:
<code>
launch-mapr-cluster.sh --project <project> --cluster hadoop --mapr-version 3.0.1 --image projects/centos-cloud/global/images/centos-6-v20140415 --machine-type n1-standard-2 --zone europe-west1-b --config-file node3m3.lst --persistent-disks 1x20 --license-file licensem3.txt
</code>
(The actual running cluster would have more nodes and better machine types.)
