AnsweredAssumed Answered

IOException: Pass a Delete or a Put

Question asked by gowri_shankar on Feb 10, 2014
I was writing a reducer which would do some aggregration and finally write the aggregated result to a hbase table, when i try to run this on our cluster it  gives the following exception
java.io.IOException: Pass a Delete or a Put

My Reducer code is as follows
public class ExperimentalReducer
   extends
   TableReducer<ImmutableBytesWritable, IntWritable, ImmutableBytesWritable> {

  final byte[] DEST_COLUMN_FAMILY = "x".getBytes();
  final byte[] DEST_COLUMN = "y".getBytes();

  @Override
  protected void reduce(
    ImmutableBytesWritable arg0,
    Iterable<IntWritable> arg1,
    org.apache.hadoop.mapreduce.Reducer<ImmutableBytesWritable, IntWritable, ImmutableBytesWritable, Writable>.Context arg2)
    throws IOException, InterruptedException {
   int count = 0;
   Iterator<IntWritable> iterator = arg1.iterator();
   while (iterator.hasNext()) {
    count++;
   }

   Put put = new Put(arg0.get());
   put.add(DEST_COLUMN_FAMILY, DEST_COLUMN, Bytes.toBytes(count));
   arg2.write(arg0, put);
  }

}

This is how the reducers and mappers were initialized

public int run(String[] args) throws Exception {
  Configuration conf = super.getConf();
  Job job = new Job(conf, "ExpirementalMapReduce");
  job.setJarByClass(ExpirementalJob.class);

  Scan scan = new Scan();
  scan.addColumn(COLUMN_FAMILY, COLUMN);

  TableMapReduceUtil.initTableMapperJob(SOURCE_TABLE, scan,
    MyMapper.class, Text.class, IntWritable.class, job);
  TableMapReduceUtil
    .initTableReducerJob(DEST_TABLE, MyReducer.class, job);
  job.setNumReduceTasks(3);

  boolean waitForCompletion = job.waitForCompletion(true);
  return 0;
}

Can someone throw some light on this issue?

Thanks
Gowri

Outcomes