
Loading data into HBase from Spark

Question asked by suhaibahmed2 on Apr 30, 2018
Latest reply on May 23, 2018 by cathy

I am trying to create a table in HBase from a Spark DataFrame. The simplified code I am trying is this:

 

import org.apache.spark.sql.datasources.hbase.HBaseTableCatalog

 

// Create a DataFrame with two columns and two rows.

val df = sc.parallelize(Seq(("1", "a"), ("2", "b"))).toDF("col1", "col2")

 

 

// Create a table catalog mapping DataFrame columns to HBase column families.

val cat = s"""{
  |"table":{"namespace":"default", "name":"nameSpaceTesting"},
  |"rowkey":"col1",
  |"columns":{
  |"col1":{"cf":"rowkey", "col":"col1", "type":"string"},
  |"col2":{"cf":"cf1", "col":"col2", "type":"boolean"}
  |}
  |}""".stripMargin

 

// Save the DataFrame using df.write, passing the catalog and the HBase data source.
df.write.options(Map(HBaseTableCatalog.tableCatalog -> cat)).format("org.apache.hadoop.hbase.spark").save()

(I even tried adding the option : )
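For example, a variant along these lines (the newTable option here is an assumption on my part; I am not certain this connector version supports it, but it is supposed to tell the connector to create the table with that many regions if it does not already exist):

// Sketch only: HBaseTableCatalog.newTable is assumed to be available in this connector build.
// The value is the number of regions to create the table with if it is missing.
df.write
  .options(Map(HBaseTableCatalog.tableCatalog -> cat, HBaseTableCatalog.newTable -> "5"))
  .format("org.apache.hadoop.hbase.spark")
  .save()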

 

I am not sure why this isn't working; the error I am getting is this:

 

18/04/30 04:29:01 ERROR TaskSetManager: Task 2 in stage 0.0 failed 1 times; aborting job
org.apache.spark.SparkException: Job aborted due to stage failure: Task 2 in stage 0.0 failed 1 times, most recent failure: Lost task 2.0 in stage 0.0 (TID 2, localhost, executor driver): java.lang.RuntimeException: Error occurred while instantiating com.mapr.fs.hbase.HTableImpl11.
==> java.lang.NullPointerException.

 

I have attached the full stack trace of the problem.

Could anyone please help me out with why this is happening?
