What is the Safe Mode problem in HDFS, and how does a user come out of Safe Mode?
Safe Mode is a state in Hadoop where the HDFS cluster goes into read-only mode: no data can be written to the blocks, and no deletion or replication of blocks can happen. During this state, the NameNode is effectively in maintenance mode.

The NameNode implicitly enters Safe Mode at startup of the HDFS cluster. At startup, the NameNode gives the DataNodes some time to report their blocks, so that it does not start the replication process without knowing whether sufficient replicas are already present. Once the NameNode finishes these validations, Safe Mode is implicitly disabled.

Sometimes, however, the NameNode is not able to come out of Safe Mode on its own. For example, the NameNode may allocate a block and then be killed before the HDFS client receives the addBlock response; after restarting, the NameNode cannot exit Safe Mode because it is waiting for a block that was never created. In this situation, no data can be written to HDFS, since Safe Mode is read-only. To resolve this, we need to manually exit Safe Mode by running the following command: sudo -u hdfs hadoop dfsadmin -safemode leave (in newer Hadoop releases, the equivalent is hdfs dfsadmin -safemode leave).
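For reference, here is a short sketch of the dfsadmin safemode subcommands mentioned above. This assumes you are running the commands as a user with HDFS superuser privileges on a live cluster (the sudo -u hdfs prefix may or may not be needed depending on your setup):

```shell
# Check whether the NameNode is currently in Safe Mode
hdfs dfsadmin -safemode get

# Block until the NameNode leaves Safe Mode on its own
# (useful in startup scripts that must wait for HDFS to be writable)
hdfs dfsadmin -safemode wait

# Force the NameNode out of Safe Mode manually
hdfs dfsadmin -safemode leave

# Put the NameNode into Safe Mode manually, e.g. before maintenance
hdfs dfsadmin -safemode enter
```

Prefer "wait" over "leave" during normal startup: "leave" overrides the NameNode's replica checks, so it should be reserved for cases like the stuck-block scenario described above.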
This is an Apache Hadoop issue that doesn't impact MapR.
Why are you asking this question here?