Spark Troubleshooting Guide: Running Spark: How to connect to Spark Thrift Server?

When a user connects to Spark from an external client such as Beeline or Tableau, the connection goes through the Spark Thrift Server. This article is a step-by-step guide to connecting to the Spark Thrift Server.

 

 

Step 1) Start the Spark Thrift Server on any port other than 10000, since HiveServer2 already listens on port 10000.
The example below starts the Thrift Server on port 10001.

/opt/mapr/spark/spark-2.1.0/sbin/start-thriftserver.sh --hiveconf hive.server2.thrift.port=10001
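
Note that start-thriftserver.sh accepts the same options as spark-submit, so cluster and resource settings can be passed on the same command line. The variant below is a minimal sketch, assuming a YARN cluster; the executor count and memory values are illustrative placeholders, not recommendations.

# Example only: adjust the master and resources for your cluster
/opt/mapr/spark/spark-2.1.0/sbin/start-thriftserver.sh \
  --master yarn \
  --num-executors 2 \
  --executor-memory 2g \
  --hiveconf hive.server2.thrift.port=10001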

 
Step 2) Optional: export LD_LIBRARY_PATH so that Spark can load the native Hadoop libraries.

export LD_LIBRARY_PATH=/opt/mapr/hadoop/hadoop-2.7.0/lib/native/
 
Step 3) Use the Beeline client to connect to the Spark Thrift Server JDBC URL.

[mapr@vn4 lib]$ /opt/mapr/spark/spark-2.1.0/bin/beeline  
Beeline version 1.6.1-mapr-1611 by Apache Hive 
beeline> !connect jdbc:hive2://localhost:10001 
Connecting to jdbc:hive2://localhost:10001 
Enter username for jdbc:hive2://localhost:10001: mapr 
Enter password for jdbc:hive2://localhost:10001: **** 
17/03/07 16:00:17 INFO Utils: Supplied authorities: localhost:10001 
17/03/07 16:00:17 INFO Utils: Resolved authority: localhost:10001 
17/03/07 16:00:17 INFO HiveConnection: Will try to open client transport  with JDBC Uri: jdbc:hive2://localhost:10001 
Connected to: Spark SQL (version 1.6.1) 
Driver: Spark Project Core (version 1.6.1-mapr-1611) 
Transaction isolation: TRANSACTION_REPEATABLE_READ 
0: jdbc:hive2://localhost:10001>
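
Beeline can also connect non-interactively by supplying the JDBC URL, user, and query on the command line. The command below is a minimal sketch of such a one-shot connectivity check, assuming the same host, port, and mapr user as in the example above; replace the query with whatever validation suits your environment.

# Example only: one-shot connectivity check
/opt/mapr/spark/spark-2.1.0/bin/beeline -u jdbc:hive2://localhost:10001 -n mapr -e "show databases;"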

 

 
Step 4) Stop the Spark Thrift Server with the following command.

/opt/mapr/spark/spark-2.1.0/sbin/stop-thriftserver.sh
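
To confirm the Thrift Server has stopped, one quick check is whether anything is still listening on the Thrift port. The command below is an illustrative check, assuming port 10001 from the example above; no output means the port is free.

# Example only: verify nothing is still listening on the Thrift port
netstat -tlnp | grep 10001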

 
What to expect
Once the above steps are executed, review the Thrift Server output logs to validate the setup.
For the above example, the default directory where the logs are written is /opt/mapr/spark/spark-2.1.0/logs.
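
The Thrift Server daemon log file name in that directory includes the submitting user, the HiveThriftServer2 class name, and the host name, so the exact name varies by environment; a wildcard is a convenient way to follow it. This is a sketch, assuming the default log directory above.

# Example only: follow the Thrift Server log (exact file name varies by user and host)
tail -f /opt/mapr/spark/spark-2.1.0/logs/spark-*HiveThriftServer2*.out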
 
 
