
ANIKADOS
Hello, we have a volume where we get this alarm: VOLUME_ALARM_INODES_EXCEEDED. Number of files in volume talend has exceeded the threshold. Number of files: 51040699, Threshold: 50000000. I searched for this problem, and as a solution I'm thinking of moving the folders contained in the folder where the volume is mounted, to create a new…
in Answers
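A hedged sketch of the split described above, using hypothetical volume and path names; note that moving data across volumes copies it, so plan for the I/O:

maprcli volume create -name talend_split -path /talend/split   # new volume under the existing tree (names are assumptions)
mv /mapr/<cluster>/talend/subdir/* /mapr/<cluster>/talend/split/   # move part of the file population via the cluster NFS mount
maprcli volume modify -name talend -maxinodesalarmthreshold 100000000   # alternative: raise the alarm threshold (verify the option in your MapR release)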
ANIKADOS
We have a cluster of 4 nodes with the characteristics above. Spark jobs take a long time to process; how could we optimize this time, knowing that our jobs run from RStudio and we still have a lot of memory left unused?
in Answers
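One hedged starting point for the unused-memory symptom above: Spark's defaults are conservative, so executors may be much smaller than the nodes allow. The values below are illustrative assumptions, not tuned numbers:

# /opt/mapr/spark/spark-<version>/conf/spark-defaults.conf (path per the MapR layout)
spark.executor.memory     8g
spark.executor.cores      4
spark.executor.instances  8
# the same settings can also be passed from the RStudio side when the Spark session is created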
Elena_L
Hi, can anyone answer this question: which companies are using Disco for MapR? Thanks in advance! Thank you, Elena Lauren
in Answers
ANIKADOS
Hello, please, what could be the reason for these errors that I find in the YARN logs, for both the resourcemanager and the nodemanager?   yarn-mapr-resourcemanager-hdpcalprd6.out: 2017-09-22 14:49:57,7739 ERROR JniCommon fs/client/fileclient/cc/jni_MapRClient.cc:4008 Thread: 88451 removeFid failed for 18116.34.131350, error 2 2017-09-22 14:49:57,7742…
in Answers
dzndrx
Hi Community, can I know what the difference is between views and tables in Drill?
in Answers
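In brief, and as a hedged sketch: a Drill view stores only a query definition (a small .view.drill JSON file in the workspace) and re-runs it on each reference, while CREATE TABLE AS materializes the result as data files once. Workspace, path, and table names below are assumptions:

/opt/mapr/drill/drill-*/bin/sqlline -u "jdbc:drill:zk=<zk-hosts>" <<'SQL'
CREATE VIEW dfs.tmp.`orders_v` AS SELECT * FROM dfs.`/data/orders`;   -- definition only
CREATE TABLE dfs.tmp.`orders_t` AS SELECT * FROM dfs.`/data/orders`;  -- materialized data
SQL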
john.humphreys
Are the table sizes in the MapR web console's "MapR Tables" view inclusive of replication? I'm not sure whether this 454.9 GB is before or after replication. It would be nice to qualify it in the UI, as ~1.5 TB vs ~500 GB is a big difference when planning cluster sizes, data history, etc.
in Answers
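A hedged way to cross-check from the CLI, assuming the table lives in its own volume (the volume name is hypothetical); which reported field includes replication should be verified against the docs for your release:

maprcli volume info -name my_table_volume -json
# compare fields such as logicalUsed, used, and totalused in the output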
john.humphreys
I'm pretty sure there's a bug in the topic monitoring APIs. When you hit the /rest/stream/topic/info endpoint, it returns a date in this format: { "timestamp": 1505940698061, "timeofday": "2017-09-20 04:51:38.061 GMT-0400", "status": "OK", "total": 20, "data": [ { "partitionid": 0, ... "maxtimestamp": "2017-09-20T04:47:49.050-0400",…
in Answers
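A minimal reproduction sketch of the call above; host, credentials, and stream/topic names are assumptions:

curl -s -u <user>:<password> \
  "https://<webserver>:8443/rest/stream/topic/info?path=/streams/mystream&topic=mytopic"
# compare the top-level "timeofday" format with the per-partition "maxtimestamp" format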
vinaypatel
Hello, I am trying to access the Hive CLI. However, it is failing to start with the following AccessControl issue. Strangely enough, I am able to query Hive data from Hue without the AccessControl issue, but the Hive CLI is not working. Any help is much appreciated.   [<user_name>@<edge_node> ~]$ hive SLF4J: Class path contains multiple SLF4J…
in Answers
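Since the error above is truncated, a hedged first check is whether the CLI user can reach Hive's scratch and warehouse directories; the paths below are the usual Hive 2.x defaults, adjust to your site:

hadoop fs -ls -d /user/<user_name>   # the user's home directory must exist and be writable
hadoop fs -ls -d /tmp/hive           # default hive.exec.scratchdir; needs world-writable permissions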
dp
I am trying to run Spark with MapR-DB and am running into a few issues: 1. Mismatch found for java and native libraries: java build version 5.2.1.42646.GA, native build version BUILD_VERSION=5.2.1.42385 RELEASE_APP=GA, with the following code: SparkConf sparkConf = new SparkConf(); JavaSparkContext sc = new JavaSparkContext(sparkConf); Configuration…
in Answers
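The version-mismatch message above usually means the maprfs JAR on the application classpath was built against a different client than the node's native library. A hedged way to check, using standard MapR paths:

cat /opt/mapr/MapRBuildVersion               # native client build installed on this node
mapr classpath | tr ':' '\n' | grep maprfs   # which maprfs JAR the job actually picks up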
milindtakawale
Hi, we have Drill on YARN configured on our MapR cluster and we are using Drill's JDBC interface to fetch data from a DB client. To improve query performance I am using connection pooling, so that a new connection is not created for each query request. If we have 1000 such DB clients, each maintaining a connection pool of 20 connections (up to 20,000 pooled connections in total),…
in Answers
mapr_test
Hi, I am trying to mount the cluster via NFS from the machine where I installed MapR, but I see the following error: root@<machine>:/etc/init.d# mount <machine>:/mapr /mapr mount.nfs: access denied by server while mounting <machine>:/mapr. Also, I don't see the mapr directory on this node. How do I make this work? Thanks,
in Answers
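Hedged checks for the access-denied error above, assuming the MapR NFS gateway is meant to run on that node:

showmount -e <machine>                                           # is /mapr actually exported?
maprcli node services -name nfs -action start -nodes <machine>   # start the MapR NFS service if it isn't running
maprcli node list -columns svc | grep nfs                        # confirm nfs appears among the running services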
Karthee
Hi Team,   Environment: MapR-5.2, Hive-2.1, Tez-0.8, Abinitio GDE-3.2.7.   We are trying to write Abinitio multifile jobs into a Hive table with ORC and PARQUET formats, but the Abinitio jobs are failing with this error:   Error from Component 'Write_Hive_Table.Write_Data_to_HDFS', Partition 0 [B1] terminate called after throwing an…
in Answers
Karthee
Hi Team,   My Environment: MapR-5.2, Hive-2.1, Tez-0.8, Abinitio GDE-3.2.7.   We have a problem with writing a Text-format Hive table into an ORC table as CTAS. The data set is around 370 GB. We can't directly write into the Hive table (ORC format), so we write into a Text Hive table from Abinitio and then we are…
in Answers
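For reference, a minimal CTAS of the kind described above, run through the hive CLI; the table names are hypothetical:

hive -e "CREATE TABLE orders_orc STORED AS ORC AS SELECT * FROM orders_text;"
# for a ~370 GB set, watching the failing Tez stage's logs is the usual way to narrow the problem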