
dp
I am trying to run Spark with MapR-DB and am running into a few issues: 1. "Mismatch found for java and native libraries: java build version 5.2.1.42646.GA, native build version BUILD_VERSION=5.2.1.42385 RELEASE_APP=GA" with the following code (a tidied-up sketch follows this item): SparkConf sparkConf = new SparkConf(); JavaSparkContext sc = new JavaSparkContext(sparkConf); Configuration…
in Answers
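For context, a minimal, self-contained version of the setup dp describes might look like the sketch below. It assumes Spark's Java API plus the HBase client API that MapR-DB binary tables expose; the app name and the table path /tables/mytable are placeholders, and the "mismatch for java and native libraries" message usually points at client jars and the installed MapR native client being on different patch levels rather than at the code itself.

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;
import org.apache.hadoop.hbase.client.Table;
import org.apache.spark.SparkConf;
import org.apache.spark.api.java.JavaSparkContext;

public class SparkMapRDBSketch {
    public static void main(String[] args) throws Exception {
        SparkConf sparkConf = new SparkConf().setAppName("spark-maprdb-sketch");
        JavaSparkContext sc = new JavaSparkContext(sparkConf);

        // HBaseConfiguration.create() picks up hbase-site.xml / core-site.xml from the classpath.
        Configuration conf = HBaseConfiguration.create();

        // Path-style table names ("/tables/mytable") are a MapR-DB convention and
        // rely on MapR's build of the HBase client.
        try (Connection connection = ConnectionFactory.createConnection(conf);
             Table table = connection.getTable(TableName.valueOf("/tables/mytable"))) {
            // ... read from / write to the MapR-DB binary table here ...
        } finally {
            sc.stop();
        }
    }
}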
milindtakawale
Hi, we have Drill on YARN configured on our MapR cluster, and we use Drill's JDBC interface to fetch data for a DB client. To improve query performance I am using connection pooling, so that a new connection is not created for each query request (see the sketch after this item). If we have 1000 such DB clients, each maintaining a pool of 20 connections,…
in Answers
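A sketch of the pooling arrangement described above, using HikariCP as one possible pool implementation (not necessarily what milindtakawale uses); the ZooKeeper quorum and cluster id in the JDBC URL are placeholders. The sizing question behind the post is really about aggregate load: 1000 clients with 20 connections each would mean up to 20,000 open connections against the Drillbits.

import java.sql.Connection;
import java.sql.ResultSet;
import java.sql.Statement;
import com.zaxxer.hikari.HikariConfig;
import com.zaxxer.hikari.HikariDataSource;

public class DrillPoolSketch {
    public static void main(String[] args) throws Exception {
        HikariConfig config = new HikariConfig();
        config.setDriverClassName("org.apache.drill.jdbc.Driver");
        config.setJdbcUrl("jdbc:drill:zk=zk1:5181,zk2:5181,zk3:5181/drill/mycluster-drillbits");
        config.setMaximumPoolSize(20);   // 20 connections per DB client, as in the question

        try (HikariDataSource pool = new HikariDataSource(config);
             Connection conn = pool.getConnection();        // borrowed from the pool, not newly created
             Statement stmt = conn.createStatement();
             ResultSet rs = stmt.executeQuery("SELECT * FROM sys.version")) {
            while (rs.next()) {
                System.out.println(rs.getString(1));
            }
        } // closing the Connection returns it to the pool; closing the DataSource tears the pool down
    }
}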
mapr_test
Hi, I am trying to mount the cluster via NFS from the machine where I installed MapR, but I see the following error: root@<machine>:/etc/init.d# mount <machine>:/mapr /mapr returns "mount.nfs: access denied by server while mounting <machine>:/mapr". Also, I don't see a /mapr directory on this node. How can I make this work? Thanks,
in Answers
dzndrx
Hi Community, can I know what the difference is between views and tables in Drill? (A short sketch follows this item.)
in Answers
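Roughly: a Drill table is the data itself (files in a workspace, or a Hive/HBase/MapR-DB table), while a view is only a stored query definition that is re-run against the underlying data each time it is queried. A hedged JDBC sketch of that distinction, with placeholder workspace, file, and column names:

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.Statement;

public class DrillViewVsTableSketch {
    public static void main(String[] args) throws Exception {
        try (Connection conn = DriverManager.getConnection("jdbc:drill:drillbit=node1:31010");
             Statement stmt = conn.createStatement()) {

            // CTAS creates a table: it physically writes new data files
            // into the writable dfs.tmp workspace.
            stmt.execute("CREATE TABLE dfs.tmp.orders_copy AS " +
                         "SELECT * FROM dfs.`/data/orders.parquet`");

            // CREATE VIEW only stores the query definition (a small .view.drill file
            // in the workspace); it holds no data of its own.
            stmt.execute("CREATE VIEW dfs.tmp.big_orders AS " +
                         "SELECT * FROM dfs.tmp.orders_copy WHERE order_total > 1000");
        }
    }
}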
Karthee
Hi Team, environment: MapR 5.2, Hive 2.1, Tez 0.8, Abinitio GDE 3.2.7. We are trying to write Abinitio multifile jobs into a Hive table in ORC and Parquet formats, but the Abinitio jobs are failing with this error: Error from Component 'Write_Hive_Table.Write_Data_to_HDFS', Partition 0 [B1] terminate called after throwing an…
in Answers
Karthee
Hi Team, my environment: MapR 5.2, Hive 2.1, Tez 0.8, Abinitio GDE 3.2.7. We have a problem writing a text-format Hive table into an ORC table as a CTAS (see the sketch after this item). The data set is around 370 GB. We can't write directly into the Hive table in ORC format, so we write into a text Hive table from Abinitio and then we are…
in Answers
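For reference, the text-to-ORC step described above is, in SQL terms, a single CTAS run against HiveServer2; a minimal JDBC sketch under that assumption, with placeholder host, user, and table names (the Abinitio side of the pipeline is out of scope here):

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.Statement;

public class TextToOrcCtasSketch {
    public static void main(String[] args) throws Exception {
        Class.forName("org.apache.hive.jdbc.HiveDriver");
        String url = "jdbc:hive2://hs2node:10000/default";
        try (Connection conn = DriverManager.getConnection(url, "mapruser", "");
             Statement stmt = conn.createStatement()) {
            // Rewrites the text-format staging table loaded from Abinitio into ORC.
            // For a ~370 GB input this runs as a full Tez job, so container and
            // ApplicationMaster memory settings matter at least as much as the SQL.
            stmt.execute("CREATE TABLE sales_orc STORED AS ORC AS SELECT * FROM sales_text");
        }
    }
}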
anshul09013
We have a long-running Spark application on our MapR cluster which reads data from a MapR stream, performs some processing, and writes into MapR-DB. We want to send alerts as emails if the Spark application fails or is re-run (one possible approach is sketched after this item). Is there any way in MapR to send such alerts in 2 cases: 1. The Spark application fails. 2. The Spark application is…
in Answers
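No purpose-built alerting hook is named in the post, so the sketch below is just one hedged, self-contained approach: plain Spark driver code plus JavaMail, with placeholder SMTP host and addresses. It only covers failures the driver itself can observe; a driver that is killed outright needs external monitoring (for example of the YARN application state).

import java.util.Properties;
import javax.mail.Message;
import javax.mail.Session;
import javax.mail.Transport;
import javax.mail.internet.InternetAddress;
import javax.mail.internet.MimeMessage;

public class StreamingJobWithAlert {

    static void sendAlert(String subject, String body) throws Exception {
        Properties props = new Properties();
        props.put("mail.smtp.host", "smtp.example.com");   // placeholder SMTP relay
        Session session = Session.getInstance(props);
        MimeMessage msg = new MimeMessage(session);
        msg.setFrom(new InternetAddress("spark-alerts@example.com"));
        msg.addRecipient(Message.RecipientType.TO, new InternetAddress("oncall@example.com"));
        msg.setSubject(subject);
        msg.setText(body);
        Transport.send(msg);
    }

    public static void main(String[] args) throws Exception {
        // A mail at startup also covers the "application was re-run" case,
        // since a restart means the driver's main() runs again.
        sendAlert("Spark streaming app (re)started", "Driver main() entered.");
        try {
            runStreamingPipeline();   // the MapR Streams -> processing -> MapR-DB job
        } catch (Exception e) {
            sendAlert("Spark streaming app FAILED", e.toString());
            throw e;
        }
    }

    static void runStreamingPipeline() throws Exception {
        // ... build the StreamingContext, subscribe to the MapR stream, write to MapR-DB ...
    }
}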
mvince
Hi, I'm trying to configure custom logging in my Spark jobs via log4j.properties files. I updated ../spark-1.6.1/conf/log4j.properties, and logging now works when I run my Spark jobs with the spark-submit command. However, if I try to run the Spark action in Oozie workflows, my log4j.properties is not applied. I tried to copy/link…
in Answers
cf
CentOS recently released its point release (7.4). I think the installer doesn't support this yet. Can anyone confirm this? Is there a new installer coming out and when? Thanks!
in Answers
sgudavalliR
Hello, I have a quick question about writing to MapR-FS directly using a REST endpoint, without publishing to a queuing system (a sketch follows this item). 1) Is it mandatory to set up a MapR client on my web server to communicate with the MapR cluster? (Or) I see there is a Configuration object; what is the minimum configuration that I have to provide to connect and…
in Answers
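A hedged sketch of the FileSystem-API half of the question: writing a file into MapR-FS from a Java web process. It assumes the MapR hadoop/maprfs client jars (and their native library) are available to the web server's JVM, which is essentially what the first part of the question is asking about; the configuration keys shown are an assumption about the minimum needed when no core-site.xml is on the classpath, and the target path is a placeholder.

import java.nio.charset.StandardCharsets;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FSDataOutputStream;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class MapRFsWriteSketch {
    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        // Assumed minimal settings when no core-site.xml is on the classpath:
        conf.set("fs.defaultFS", "maprfs:///");
        conf.set("fs.maprfs.impl", "com.mapr.fs.MapRFileSystem");

        try (FileSystem fs = FileSystem.get(conf);
             FSDataOutputStream out = fs.create(new Path("/apps/ingest/from-rest.json"), true)) {
            out.write("{\"posted\":\"via REST endpoint\"}".getBytes(StandardCharsets.UTF_8));
        }
    }
}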
mandoskippy
Hello, I am trying to connect to a MapR stream. This already works on my cluster, but I am getting this error (a small probe is sketched after this item): KAFKA_41 - Could not get partition count for topic '/zeta/brewpot/apps/prodstream:mystream': com.streamsets.pipeline.api.StageException: KAFKA_41 - Could not get partition count for topic…
in Answers
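A hedged probe for the same thing the StreamSets KAFKA_41 check does: asking the Kafka consumer client that MapR Streams implements for the topic's partitions. The stream:topic path is taken from the error above; the group id and deserializer choices are arbitrary, and the note about bootstrap.servers assumes the MapR build of the Kafka client.

import java.util.List;
import java.util.Properties;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.common.PartitionInfo;

public class PartitionCountProbe {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("group.id", "partition-count-probe");
        props.put("key.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
        props.put("value.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
        // With the MapR Streams build of the Kafka client, the full path identifies the
        // stream, so bootstrap.servers can be left unset (the stock Apache client requires it).

        try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
            List<PartitionInfo> partitions =
                consumer.partitionsFor("/zeta/brewpot/apps/prodstream:mystream");
            System.out.println("partition count = " + (partitions == null ? 0 : partitions.size()));
        }
    }
}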
ankitdh7
Hi All, I am having a problem mounting my cluster via NFS. Whenever I try to mount it, the command just hangs with no output. The command I am trying: [root@node06 logs]# mount -t nfs -o nfsvers=3 -o hard,nolock node06:/mapr /mapr ^C. Server name = node06, cluster name = MAPR_CLUSTER. The mount points are visible for export:…
in Answers
ANIKADOS
Hello, we have a volume where we get this alarm: VOLUME_ALARM_INODES_EXCEEDED - Number of files in volume talend has exceeded the threshold. Number of files: 51040699, Threshold: 50000000. I searched for this problem, and as a solution I'm thinking of moving the folders contained in the folder where the volume is mounted, to create a new…
in Answers