
Large ingest to MapR-DB via Pig often fails

Question asked by cleranceroberts on Jun 27, 2015
Latest reply on Jun 27, 2015 by cleranceroberts
I have a MapR cluster running on Amazon EC2 with (very) large instance types (d2.8xlarge).

I have a simple Pig script that loads data from CSV files in Amazon S3 and writes it into a MapR-DB table.

    REGISTER /opt/mapr/lib/mapr-hbase-4.1.0-mapr.jar;
    SET fs.s3.awsAccessKeyId '{}';
    SET fs.s3.awsSecretAccessKey '{}';
    A = LOAD 's3://mydata' USING PigStorage(',') AS (col1:chararray, col2:chararray, rk:chararray, col3:int, col5:int, col4:int);
    B = FOREACH A GENERATE rk, TOMAP(col1, '-1');
    STORE B INTO '/user/mapr/mytable' USING org.apache.pig.backend.hadoop.hbase.HBaseStorage('col2:*', '-loadKey true -noWAL true');

This script is run periodically and ingests many millions of CSV lines into my MapR-DB table. In many cases rows get quite wide, with many distinct col2 column qualifiers.
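For context, the periodic runs are driven by a scheduler; a cron entry roughly equivalent to what I use (the script path and log path are illustrative, not my exact setup) looks like:

```shell
# Run the ingest every 15 minutes (4x/hr); paths are illustrative
*/15 * * * * /usr/bin/pig -f /home/mapr/ingest.pig >> /var/log/pig-ingest.log 2>&1
```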

This script works well at first, but after a few days of periodic runs (4x/hr), jobs begin failing or taking a very long time to complete. The logs aren't particularly useful for the failing Map jobs:

    2015-06-27 13:35:53,650 INFO mapred.Task [communication thread]: Communication exception: org.apache.hadoop.ipc.RemoteException(java.io.IOException): JvmValidate Failed. Ignoring request from task: attempt_201506261047_0058_m_000002_5002, with JvmId: jvm_201506261047_0058_m_1900709183
        at org.apache.hadoop.mapred.TaskTracker.validateJVM(TaskTracker.java:4859)
        at org.apache.hadoop.mapred.TaskTracker.ping(TaskTracker.java:5014)
        at sun.reflect.GeneratedMethodAccessor4.invoke(Unknown Source)
        at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
        at java.lang.reflect.Method.invoke(Method.java:497)
        at org.apache.hadoop.ipc.WritableRpcEngine$Server$WritableRpcInvoker.call(WritableRpcEngine.java:481)
        at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:928)
        at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2000)
        at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1996)
        at java.security.AccessController.doPrivileged(Native Method)
        at javax.security.auth.Subject.doAs(Subject.java:422)
        at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1566)
        at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1994)

        at org.apache.hadoop.ipc.Client.call(Client.java:1413)
        at org.apache.hadoop.ipc.Client.call(Client.java:1366)
        at org.apache.hadoop.ipc.WritableRpcEngine$Invoker.invoke(WritableRpcEngine.java:231)
        at com.sun.proxy.$Proxy4.ping(Unknown Source)
        at org.apache.hadoop.mapred.Task$TaskReporter.run(Task.java:689)
        at java.lang.Thread.run(Thread.java:745)

    2015-06-27 13:35:56,657 WARN mapred.Task [communication thread]: Parent died.  Exiting attempt_201506261047_0058_m_000002_5002

What is happening here, and why do my jobs keep failing or taking many hours (when they should take minutes) to complete? Is there a better way to do this, or are there other optimizations I should consider?

My cluster is a MapR 4.1 Community Edition cluster with four d2.8xlarge nodes (36 cores, 244 GB RAM, 24x2 TB drives, 10 Gigabit Ethernet).
