
Unknown Space used by Hadoop cluster

Question asked by prachi on Oct 15, 2012
Latest reply on Oct 21, 2012 by mandoskippy
We have set up a 9-node Seismic Hadoop cluster and loaded a 4 GB SEG-Y file onto it. To load a file into HDFS (Hadoop Distributed File System) we use the following command:
./suhdp load -input /usr/share/dumphere/seismic_4GB.segy -output /seismicdata/segy_source/seismic_4GB.su -cwproot /home/hd/seismicunix

So the file should occupy space in HDFS (under /seismicdata/segy_source/), and that is indeed what we observe. But in addition to the HDFS space, it is also consuming space on the NameNode (that is, about 4 GB of the NameNode VM's local disk is being used up).
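For reference, this is roughly how we are comparing what HDFS reports against what the local VM disk reports (assuming the standard hadoop CLI is on the PATH; the local paths below are only examples, not our exact layout):

# Space the loaded file occupies according to HDFS
hadoop fs -du /seismicdata/segy_source/

# Replication factor and block placement of the loaded file
hadoop fsck /seismicdata/segy_source/seismic_4GB.su -files -blocks -locations

# Local disk usage on the NameNode VM (example paths only)
du -sh /usr/share/dumphere
du -sh /tmp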

So does the load command also create a copy of the file on the NameNode?
