
HFileOutputFormat trying to create partitions under root

Question asked by vinod_singh on Jan 19, 2012
Latest reply on Jan 22, 2012 by yufeldman
I am trying to create an HFile using HFileOutputFormat. The job configuration looks like this:

    job.setOutputFormatClass(HFileOutputFormat.class);
    Configuration hConf = HBaseConfiguration.create(conf);
    hConf.set("hbase.zookeeper.quorum", "x.x.x.x");
    hConf.setInt("hbase.zookeeper.property.clientPort", 5181);
    FileInputFormat.addInputPath(job, new Path("/some/input"));
    HFileOutputFormat.setOutputPath(job, new Path("/some/output"));
    HTable table = new HTable(hConf, "myTable");
    HFileOutputFormat.configureIncrementalLoad(job, table);

Execution of the above job leads to the following error:

    12/01/20 05:10:16 INFO mapreduce.HFileOutputFormat: Looking up current regions for table org.apache.hadoop.hbase.client.HTable@661736e
    12/01/20 05:10:16 INFO mapreduce.HFileOutputFormat: Configuring 1 reduce partitions to match current region count
    12/01/20 05:10:16 INFO mapreduce.HFileOutputFormat: Writing partition information to /partitions_1327061416218
    2012-01-20 05:10:16,2355 ERROR Client fs/client/fileclient/cc/client.cc:409 Thread: 140300181067520 Create failed for file partitions_1327061416218, error Permission denied(13)
    Exception in thread "main" java.io.IOException: Create failed for file: /partitions_1327061416218, error: Permission denied (13)
            at com.mapr.fs.MapRClient.create(MapRClient.java:175)
            at com.mapr.fs.MapRFileSystem.create(MapRFileSystem.java:214)
            at com.mapr.fs.MapRFileSystem.create(MapRFileSystem.java:223)
            at org.apache.hadoop.fs.FileSystem.create(FileSystem.java:588)
            at org.apache.hadoop.io.SequenceFile$RecordCompressWriter.<init>(SequenceFile.java:1082)
            at org.apache.hadoop.io.SequenceFile.createWriter(SequenceFile.java:398)
            at org.apache.hadoop.io.SequenceFile.createWriter(SequenceFile.java:285)
            at org.apache.hadoop.io.SequenceFile.createWriter(SequenceFile.java:266)
            at org.apache.hadoop.hbase.mapreduce.HFileOutputFormat.writePartitions(HFileOutputFormat.java:210)
            at org.apache.hadoop.hbase.mapreduce.HFileOutputFormat.configureIncrementalLoad(HFileOutputFormat.java:265)
            at com.vinodsingh.hadoop.HFileMapper.main(HFileMapper.java:44)
            at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
            at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
            at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
            at java.lang.reflect.Method.invoke(Method.java:597)
            at org.apache.hadoop.util.RunJar.main(RunJar.java:186)

Why is it trying to create the partitions file directly under the root directory?
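My guess (not verified) is that configureIncrementalLoad() builds the partitions file path from the job's working directory, and in my setup that working directory resolves to "/", where my user cannot write. If that is the case, would something like the sketch below be a reasonable workaround? The path /user/vinod/tmp is just a placeholder for a directory I can write to.

    // Hypothetical workaround sketch (my assumption, not verified): point the job's
    // working directory at a writable location before configureIncrementalLoad()
    // decides where to write the partitions_* file.
    job.setWorkingDirectory(new Path("/user/vinod/tmp"));   // placeholder writable path
    HFileOutputFormat.configureIncrementalLoad(job, table);

Or is the proper fix simply to make sure the user running the job has a writable default directory on the cluster? Any pointers appreciated.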
