Enabling an HBase table fails with error: "java.io.IOException: Compression algorithm 'lzo' previously failed test."

Document created by najmuddin_chirammal (Employee) on Feb 7, 2016

Author: Najmuddin Chirammal

 

Original Publication Date: November 19, 2014

 


Issue:

While trying to enable a table, the command hangs and the region server logs the following error:

 

INFO org.apache.hadoop.hbase.regionserver.HRegion: Setting up tabledescriptor config now ...
ERROR org.apache.hadoop.hbase.regionserver.handler.OpenRegionHandler: Failed open of region=tsdb-uid,,1415280696456.8020447b0317d4671167b435cf5322de., starting to roll back the global memstore size.
java.io.IOException: Compression algorithm 'lzo' previously failed test.
  at org.apache.hadoop.hbase.util.CompressionTest.testCompression(CompressionTest.java:78)
  at org.apache.hadoop.hbase.regionserver.HRegion.checkCompressionCodecs(HRegion.java:4644)
  at org.apache.hadoop.hbase.regionserver.HRegion.openHRegion(HRegion.java:4633)
  at org.apache.hadoop.hbase.regionserver.HRegion.openHRegion(HRegion.java:4583)
  at org.apache.hadoop.hbase.regionserver.handler.OpenRegionHandler.openRegion(OpenRegionHandler.java:335)
  at org.apache.hadoop.hbase.regionserver.handler.OpenRegionHandler.process(OpenRegionHandler.java:101)
  at org.apache.hadoop.hbase.executor.EventHandler.run(EventHandler.java:175)
  at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
  at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
  at java.lang.Thread.run(Thread.java:744)

 

Resolution:

The error indicates that the compression algorithm (LZO) is not available on that region server. Copy a test file to the cluster, then use the following command to check whether a compression algorithm is available on a region server:

 

hbase org.apache.hadoop.hbase.util.CompressionTest hdfs://<cluster_name>/<test_file_name> <compression_algorithm>

// The following output shows a failed test; here the algorithm name was mistyped as "lz0", which the test reports as unsupported:

 

# hbase org.apache.hadoop.hbase.util.CompressionTest hdfs://ncCluster1/nc/foobar lz0
INFO util.NativeCodeLoader: Loaded the native-hadoop library
INFO security.JniBasedUnixGroupsMapping: Using JniBasedUnixGroupsMapping for Group resolution
INFO util.ChecksumType: Checksum using org.apache.hadoop.util.PureJavaCrc32
Exception in thread "main" java.lang.IllegalArgumentException: Unsupported compression algorithm name: lz0
  at org.apache.hadoop.hbase.io.hfile.Compression.getCompressionAlgorithmByName(Compression.java:378)
  at org.apache.hadoop.hbase.io.hfile.AbstractHFileWriter.compressionByName(AbstractHFileWriter.java:263)
  at org.apache.hadoop.hbase.io.hfile.HFile$WriterFactory.withCompression(HFile.java:360)
  at org.apache.hadoop.hbase.util.CompressionTest.doSmokeTest(CompressionTest.java:108)
  at org.apache.hadoop.hbase.util.CompressionTest.main(CompressionTest.java:138)

 

// An available compression algorithm returns "SUCCESS":

# hbase org.apache.hadoop.hbase.util.CompressionTest hdfs://ncCluster1/nc/passwd snappy
14/12/11 02:31:36 INFO util.NativeCodeLoader: Loaded the native-hadoop library
14/12/11 02:31:36 INFO security.JniBasedUnixGroupsMapping: Using JniBasedUnixGroupsMapping for Group resolution
14/12/11 02:31:36 INFO util.ChecksumType: Checksum using org.apache.hadoop.util.PureJavaCrc32
14/12/11 02:31:36 WARN snappy.LoadSnappy: Snappy native library is available
14/12/11 02:31:36 INFO snappy.LoadSnappy: Snappy native library loaded
14/12/11 02:31:36 INFO compress.CodecPool: Got brand-new compressor
14/12/11 02:31:36 INFO compress.CodecPool: Got brand-new decompressor
SUCCESS

Note: Run the test against a copy of the file, since by default the compression test removes the file when it finishes.
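The check above can be scripted so that several codecs are smoke-tested in one pass on a region server. A minimal sketch, assuming the standard `hbase` launcher is on the PATH; the `HBASE_CMD` override, the codec list, and the file paths are illustrative placeholders, not part of the original article:

```shell
#!/bin/sh
# Sketch: wrap HBase's CompressionTest so several codecs can be checked in one
# pass. HBASE_CMD, the codec list, and the paths are illustrative assumptions.

# Run the smoke test for one codec against a test file (HDFS or local path).
check_compression() {
  file="$1"
  algo="$2"
  "${HBASE_CMD:-hbase}" org.apache.hadoop.hbase.util.CompressionTest "$file" "$algo"
}

# Example usage (guarded so the function can be sourced without side effects).
# Each run gets a fresh copy of the file, since the test removes it by default.
if [ "${RUN_CHECKS:-0}" = "1" ]; then
  for algo in lzo snappy gz; do
    cp /etc/hosts /tmp/comp-test-file        # fresh scratch copy per run
    if check_compression /tmp/comp-test-file "$algo"; then
      echo "$algo: available"
    else
      echo "$algo: FAILED on $(hostname)"
    fi
  done
fi
```

Running the loop on each region server (for example via ssh) would show which host is missing the native LZO libraries.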

 

Refer to http://doc.mapr.com/display/MapR/HBase#HBase-compression for more about HBase compression and the steps to enable LZO on region servers.
