
PySpark code to load data into a DataFrame from MapR-FS

Question asked by srini14171417 on May 31, 2018
Latest reply on May 31, 2018 by maprcommunity


I am trying to load JSON data from a MapR-FS directory. When I load data from the local Unix file system it works fine, but when I try to load data from MapR-FS with the code below, it throws an error:


[*************]$ python
Python 2.7.5 (default, Aug 4 2017, 00:39:18)
Type "help", "copyright", "credits" or "license" for more information.
>>> import json
>>> from pyspark import SparkContext
>>> from pyspark.sql import SQLContext
>>> from pyspark.sql.functions import udf
>>> from pyspark.sql.functions import *
>>> import os
>>> sc = SparkContext("local", "First App")
>>> sqlContext = SQLContext(sc)

WARN  FileStreamSink:66 - Error while looking for metadata directory.
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "/usr/lib/python2.7/site-packages/pyspark/sql/", line 166, in load
    return self._df(self._jreader.load(path))
  File "/usr/lib/python2.7/site-packages/py4j/", line 1160, in __call__
    answer, self.gateway_client, self.target_id,
  File "/usr/lib/python2.7/site-packages/pyspark/sql/", line 63, in deco
    return f(*a, **kw)
  File "/usr/lib/python2.7/site-packages/py4j/", line 320, in get_return_value
    format(target_id, ".", name), value)
py4j.protocol.Py4JJavaError: An error occurred while calling o675.load.
: No FileSystem for scheme: maprfs
        at org.apache.hadoop.fs.FileSystem.getFileSystemClass(
        at org.apache.hadoop.fs.FileSystem.createFileSystem(
        at org.apache.hadoop.fs.FileSystem.access$200(
        at org.apache.hadoop.fs.FileSystem$Cache.getInternal(
        at org.apache.hadoop.fs.FileSystem$Cache.get(
        at org.apache.hadoop.fs.FileSystem.get(
        at org.apache.hadoop.fs.Path.getFileSystem(
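The `No FileSystem for scheme: maprfs` error generally means the JVM that Spark started cannot find a Hadoop `FileSystem` implementation registered for the `maprfs://` scheme — typically because a pip-installed PySpark is being used instead of the Spark build shipped with the MapR client, so the MapR filesystem jars are not on the classpath. A minimal sketch of a workaround, assuming the MapR client is installed and that `com.mapr.fs.MapRFileSystem` is the filesystem class it provides (the path in the `read.json` call is hypothetical):

```python
from pyspark.sql import SparkSession

# Sketch only: registering the maprfs:// scheme explicitly via Hadoop
# configuration. The implementation class below is an assumption based
# on the MapR client packaging; verify it against your installation.
spark = (SparkSession.builder
         .appName("First App")
         .config("spark.hadoop.fs.maprfs.impl",
                 "com.mapr.fs.MapRFileSystem")  # assumed class name
         .getOrCreate())

# Hypothetical MapR-FS path; replace with your actual directory.
df = spark.read.json("maprfs:///path/to/data")
df.printSchema()
```

This only works if the MapR jars are actually on Spark's driver and executor classpaths; the simpler route is usually to launch `pyspark` from the MapR-packaged Spark distribution, which registers the scheme automatically.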