
HBase LZO fatal error

Question asked by khan on Dec 15, 2011
Latest reply on Dec 15, 2011 by lohit

We are heavy HBase users and have been evaluating the M3 version of MapR for a week before deciding whether to move our current Hadoop-based cluster to MapR M5. We have a few questions to better understand a problem we ran into.

We are now testing a 3-server M3 cluster. The installation went well and we can import our HBase tables without problems. However, we can't use LZO in HBase, even though we installed it as explained here.

We imported a table into M3 HBase, then changed the COMPRESSION attribute of the column family to LZO. When we then ran major_compact to compress the columns, the region server gave:
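For reference, the steps above were roughly the following in the HBase shell (the table and column family names here are placeholders, not our actual names):

    hbase shell
    > disable 'mytable'
    > alter 'mytable', {NAME => 'cf', COMPRESSION => 'LZO'}
    > enable 'mytable'
    > major_compact 'mytable'

The alter only changes the table metadata; the existing store files are rewritten with LZO during the major compaction, which is when the crash happens.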

FATAL org.apache.hadoop.hbase.regionserver.HRegionServer: ABORTING region server,60020,1323942299576, load=(requests=0, regions=213, usedHeap=1759, maxHeap=3976): Uncaught exception in service thread regionserver60020.compactor
java.lang.AbstractMethodError: com.hadoop.compression.lzo.LzoCompressor.reinit(Lorg/apache/hadoop/conf/Configuration;)V
    at
    at
    at $Algorithm.getCompressor(
    at $Writer.getCompressingStream(
    at $Writer.newBlock(
    at $Writer.checkBlockBoundary(
    at $Writer.append(
    at $Writer.append(
    at org.apache.hadoop.hbase.regionserver.StoreFile$Writer.append(
    at org.apache.hadoop.hbase.regionserver.Store.compact(
    at org.apache.hadoop.hbase.regionserver.Store.compact(
    at org.apache.hadoop.hbase.regionserver.HRegion.compactStores(
    at org.apache.hadoop.hbase.regionserver.HRegion.compactStores(
    at

After this error, all region servers die and keep restarting.

So we couldn't use LZO in M3. Are there any suggestions?

Secondly, can we also use Snappy with M3 HBase?

This compression is important to us.

One other thing: if we use the internal compression of M3 (enabling compression on the directory), can we get similar compression ratio and speed?
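If we understand the docs correctly, MapR-FS compression is enabled per directory rather than per column family, with something like the following (the path here is just a placeholder for wherever the HBase data lives):

    hadoop mfs -setcompression on /hbase

Compression would then apply transparently at the filesystem level, below HBase, which is why we are wondering how it compares with LZO inside HBase.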

Best Regards