I am trying to bulk-load a CSV file from MapR-FS into HBase. Unfortunately, the job stalls here:
2017-07-26 23:40:38,721 INFO [main] mapreduce.Job: map 0% reduce 0%
2017-07-26 23:51:08,259 INFO [main] mapreduce.Job: Task Id : attempt_1501121937105_0006_m_000000_0, Status : FAILED AttemptID:attempt_1501121937105_0006_m_000000_0 Timed out after 600 secs
I have created an HBase table 't2' with a column family 'n', and my CSV file contains 3 columns (RowID, FirstName, LastName). I placed the CSV file in MapR-FS at the path hdfs://my.cluster.com/tmp/customers2.csv
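For reference, here is a tiny sample of what the file is assumed to look like (the names below are made-up placeholders; the real file follows the same RowID,FirstName,LastName layout, comma-separated, and is assumed to have no header row):

```shell
# Create a two-row sample in the same shape as customers2.csv
printf '%s\n' \
  '1,John,Smith' \
  '2,Jane,Doe' > customers2.csv
cat customers2.csv
```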
I ran the bulk load command (ImportTsv) like this:
hbase org.apache.hadoop.hbase.mapreduce.ImportTsv -Dimporttsv.separator=',' -Dimporttsv.columns=HBASE_ROW_KEY,FirstName:n,LastName:n t2 hdfs://my.cluster.com/tmp/customers2.csv
Any idea what went wrong here? Any solution would be much appreciated. Thanks.
Also, is there another effective way to load data from MapR-FS into HBase?