
Volume Issues: End Of File Exception, "File is corrupt!" and maprfs:Filename.txt not a SequenceFile

Question asked by bmis2014 on Mar 3, 2015
Hi MapR Team,

We have a job that writes to an HDFS user directory, and when the mapper finishes it commits the file from that user directory to a volume-based directory (the final destination).

We use the following to achieve this (the source is the user's HDFS directory and the target is the volume-based directory):
FileSystem.get(job.getConfiguration()).rename(source, target);
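One detail worth checking here (a hedged sketch, not the original job's code; the class and method names below are illustrative assumptions): `FileSystem.rename()` reports many failure modes, such as a missing destination parent directory, by returning `false` rather than throwing, so a silent failure can leave readers pointed at an incomplete or missing file.

```java
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class CommitOutput {
    // Rename with an explicit result check, so a silently failed
    // rename surfaces as an exception instead of a missing file.
    public static void commit(Configuration conf, Path source, Path target)
            throws Exception {
        FileSystem fs = FileSystem.get(conf);
        fs.mkdirs(target.getParent());      // ensure the destination parent exists
        if (!fs.rename(source, target)) {   // rename() returns false on many failures
            throw new RuntimeException("rename failed: " + source + " -> " + target);
        }
    }
}
```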

While this runs, we have noticed that a dependent Hive job that reads this volume data periodically encounters:

1) Caused by: maprfs:somefile.txt not a SequenceFile

2) Caused by: Unterminated or unclosed files.

    hadoop fs -text maprfs:File.txt | wc -l
    2015-02-22 19:05:04 INFO: - Successfully loaded & initialized native-zlib library
    2015-02-22 19:05:04 INFO: - Got brand-new decompressor
    text: null

3) Caused by: File is corrupt!

        at <…>$Reader.readBlock(<…>)
        (remaining stack frames were garbled by the forum's formatting)

Can you please suggest the best way to move data from HDFS into a volume so that we do not run into #1, #2, and #3?

Thanks in advance for your help!

An in-house MapR expert suggested the following:

HDFS directory/file ---copy--> Volume/tmp/file ---rename--> Volume/final/target/dir
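The suggested copy-then-rename pattern can be sketched as below. This is a minimal illustration under my own assumptions (paths, class name, and error handling are hypothetical, not from the original job); the key idea is that the copy lands in a temp directory on the destination volume, and the final rename happens within that volume, so readers never see a half-written file.

```java
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.FileUtil;
import org.apache.hadoop.fs.Path;

public class CopyThenRename {
    public static void publish(Configuration conf, Path src, Path tmp, Path dst)
            throws Exception {
        FileSystem fs = FileSystem.get(conf);
        // 1) Copy into a temp dir on the destination volume; readers
        //    of the final directory never see this in-flight copy.
        if (!FileUtil.copy(fs, src, fs, tmp, /*deleteSource=*/false, conf)) {
            throw new RuntimeException("copy failed: " + src + " -> " + tmp);
        }
        // 2) Rename within the same volume, so the file appears at the
        //    final path only once it is complete.
        if (!fs.rename(tmp, dst)) {
            throw new RuntimeException("rename failed: " + tmp + " -> " + dst);
        }
        // 3) Remove the original only after a successful publish.
        fs.delete(src, true);
    }
}
```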

Another use case is a volume-to-volume transfer:
/user/volume1/tmp   ---copy--> /user/volume2/archival/yyyy-mm-dd/tmp --->

Let me know if there is a MapR-FS API, or anything else, that will work besides the above. If there is no such API, please consider adding one that does this work behind the API call so it is seamless for the user.