
NullPointerException at the end of the reducer (at org.apache.hadoop.mapred.IFileInputStream.read(IFileInputStream.java:101))

Question asked by omirandette on Aug 16, 2013
Latest reply on Oct 7, 2013 by jerdavis
I'm currently using M5 2.1.3. At the end of the reducer I receive this exception:
<pre>
java.lang.NullPointerException
at org.apache.hadoop.mapred.IFileInputStream.doRead(IFileInputStream.java:149)
at org.apache.hadoop.mapred.IFileInputStream.read(IFileInputStream.java:101)
at org.apache.hadoop.mapred.IFileInputStream.close(IFileInputStream.java:68)
at org.apache.hadoop.mapred.IFile$Reader.close(IFile.java:516)
at org.apache.hadoop.mapred.Merger$Segment.close(Merger.java:244)
at org.apache.hadoop.mapred.Merger$MergeQueue.adjustPriorityQueue(Merger.java:359)
at org.apache.hadoop.mapred.Merger$MergeQueue.next(Merger.java:371)
at org.apache.hadoop.mapred.ReduceTask$4.next(ReduceTask.java:576)
at org.apache.hadoop.mapreduce.ReduceContext.nextKeyValue(ReduceContext.java:117)
at org.apache.hadoop.mapreduce.ReduceContext$ValueIterator.next(ReduceContext.java:163)
at DailyAggregationRawComparator$Reduce.reduce(DailyAggregationRawComparator.java:246)
at DailyAggregationRawComparator$Reduce.reduce(DailyAggregationRawComparator.java:238)
</pre>
The same exception doesn't happen when my input file is smaller (28 GB instead of 56 GB).



Here is the code at line 246 of the reducer:
<pre>
set.clear();
for (IntWritable val : values) { // line 246: the NPE surfaces while advancing this values iterator
    set.add(val.get());
}
</pre>
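For context, here is a minimal self-contained sketch of this reduce pattern. Only the three-line loop body appears above; the class name, the key type (Text), and the emitted distinct-count output are assumptions for illustration.
<pre>
import java.io.IOException;
import java.util.HashSet;
import java.util.Set;

import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Reducer;

// Hypothetical reconstruction: only the set-building loop is taken from the post.
public class DistinctCountReducer
        extends Reducer&lt;Text, IntWritable, Text, IntWritable&gt; {

    // Reused across reduce() calls, matching the set.clear() pattern in the post.
    private final Set&lt;Integer&gt; set = new HashSet&lt;Integer&gt;();

    @Override
    protected void reduce(Text key, Iterable&lt;IntWritable&gt; values, Context context)
            throws IOException, InterruptedException {
        set.clear();
        for (IntWritable val : values) { // corresponds to line 246 in the post
            // val.get() copies the primitive int, so Hadoop's reuse of the
            // IntWritable instance across iterations is not a problem here.
            set.add(val.get());
        }
        context.write(key, new IntWritable(set.size()));
    }
}
</pre>
Note that the loop body itself looks fine: per the stack trace, the NullPointerException is thrown inside Hadoop's merge machinery (Merger/IFileInputStream) while the values iterator fetches the next record, not in the user code.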
