
Sqoop incremental import for Hive

Question asked by yamini on May 19, 2017
Latest reply on May 24, 2017 by maprcommunity

I am trying a Sqoop incremental import to load updated/added records into HDFS in append mode. The MapReduce job picks up the expected rows (the new rows in the table), and the MapReduce output also reports the expected row count. However, it always shows 0 bytes transferred and inserts a null record into the Hive table (it probably creates an empty file for the table). I can also see the message below:


fs.MapRFileSystem: Cannot rename across volumes, falling back on copy/delete semantics
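
To confirm whether the job is really writing empty files, the target directory can be listed and its sizes checked (the path below matches the --target-dir in the command further down; the part-file naming assumed here is the usual Sqoop convention):

hadoop fs -ls /user/hive/sqp_contact        # list the part files and their sizes
hadoop fs -du -s /user/hive/sqp_contact     # total bytes under the target directory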

sqoop import -Dmapred.job.queue.name=************ \
--connect jdbc:oracle:thin: \
--username=****** \
--password=******* \
--table source_table \
--target-dir /user/hive/sqp_contact \
-m 10 \
--append \
--check-column date \
--fields-terminated-by '\0001' \
--lines-terminated-by '\n' \
--incremental lastmodified \
--last-value "2017-02-11 00:00:00"
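
Since null rows in Hive are often a symptom of a delimiter mismatch between the data files and the table definition, the table's declared location and delimiters are also worth comparing against the import settings (the table name sqp_contact below is a placeholder for the actual Hive table):

hive -e "DESCRIBE FORMATTED sqp_contact"                   # compare the location and field.delim with the import
hadoop fs -cat '/user/hive/sqp_contact/part-m-*' | head    # inspect the raw file contents, if any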
