Hi, I have a 5-node MapR cluster running MapR version 5.2. Each server has 250 GB of memory and 15 TB of disk space. I noticed that a considerable amount of /tmp space is used, and it keeps growing as more users run their applications/jobs. In /tmp there are user caches created for each user (user id) that runs jobs. …
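If those per-user directories are YARN NodeManager "usercache" directories (a common cause when yarn.nodemanager.local-dirs points under /tmp), the localizer cache size and cleanup interval can be tuned in yarn-site.xml. A hedged sketch with assumed values; the property names are standard Hadoop YARN settings, but the right sizes depend on your workload:

```xml
<!-- yarn-site.xml: assumed tuning sketch, adjust values to your cluster -->
<property>
  <!-- Target size of the NodeManager's localized-file cache per local dir (MB) -->
  <name>yarn.nodemanager.localizer.cache.target-size-mb</name>
  <value>10240</value>
</property>
<property>
  <!-- How often the deletion service checks the cache against the target (ms) -->
  <name>yarn.nodemanager.localizer.cache.cleanup.interval-ms</name>
  <value>600000</value>
</property>
```

Moving yarn.nodemanager.local-dirs off /tmp onto a dedicated filesystem is another common fix, since many OS housekeeping jobs also touch /tmp.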
Hi All, we installed and configured Grafana in our UAT cluster; the UI comes up with three default dashboards: CLDB, Node, and Volume. I see a red exclamation mark near the Node dashboard. I tried to fix it by adding a parameter in opentsdb.conf and then restarted OpenTSDB and Grafana, which fixed the CLDB dashboard, but the issue is …
Hi guys, I recently installed Impala on a 3-node MapR cluster. When I run a simple query, the performance is not as good as Impala+HDFS. Here is the query: SELECT * FROM ft_test, ft_wafer WHERE ft_test_parquet.id = ft_wafer_parquet.id AND month = 1 AND day = 8 AND param = 2913; It took about 3 s. But when using the same query with HDFS, it …
This could be because the HDFS scan on node 0 is taking most of the time: HDFS_SCAN_NODE (id=0): (Total: 2s973ms, non-child: 2s973ms, % non-child: 100.00%), File Formats: PARQUET/SNAPPY:8. Does "invalidate metadata" improve the performance? It could be related to [IMPALA-2400] Unpredictable locality behavior for reading Parquet files (ASF JIRA).
Error while running spark-shell or an Oozie Spark Hive action: java.lang.NoClassDefFoundError: org/apache/tez/dag/api/SessionNotRunning at org.apache.hadoop.hive.ql.session.SessionState.start(SessionState.java:532) at org.apache.spark
Issue: Sometimes you may come across an error like this while running spark-shell or an Oozie Spark action: java.lang.NoClassDefFoundError: org/apache/tez/dag/api/SessionNotRunning at org.apache.hadoop.hive.ql.session.SessionState.start(SessionState.java:532) at …
Can you please give me a reference so that I can access CSV files in MapR-FS using Spark in Eclipse (installed on a MapR client)?
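A minimal sketch of reading a CSV from MapR-FS with PySpark, assuming a configured MapR client (where MapR-FS is the default filesystem, so a maprfs:// path resolves directly). The path and app name are hypothetical; this needs a Spark runtime and is not a definitive recipe:

```python
# Hypothetical sketch: read a CSV from MapR-FS with PySpark.
# Requires a Spark installation on a configured MapR client node.
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("csv-from-maprfs").getOrCreate()

df = (spark.read
      .option("header", "true")       # first line holds column names
      .option("inferSchema", "true")  # derive column types from the data
      .csv("maprfs:///user/alice/data/input.csv"))  # hypothetical path

df.show()
```

From Eclipse, the same code works as a submitted application as long as the MapR client libraries are on the classpath; the spark.read.csv API itself is standard Spark.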
Hi Team, I am trying to connect the MapR Drill driver to a Hadoop cluster. The connection works successfully (tested OK) in direct mode, but it gives an error in ZooKeeper quorum mode. Error: [MapR][Drill] (1010) Error occurred while trying to connect: [MapR][Drill] (10) Failure occurred while trying to connect to …
Hello Team, we have a new requirement where everyone is asking us to configure killing of user jobs (YARN/Spark/Hive, etc.) that run longer than x hours. Is there a parameter we can set in MapR? Monitoring the cluster 24x7 and killing jobs manually is a lot of effort. Apparently the users claim this is possible in Hortonworks using Ambari. …
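There is no single "max runtime" parameter in stock YARN, so a common workaround is a scheduled script that lists running applications and kills any that exceed a threshold. A hedged sketch under that assumption; the threshold and the way you obtain start times (e.g. the ResourceManager REST API) are placeholders:

```python
# Hypothetical sketch, not a built-in MapR feature: flag and kill YARN
# applications that have been running longer than a threshold.
import subprocess
from datetime import datetime, timezone

MAX_HOURS = 4  # assumed threshold; tune to your policy

def overdue_apps(rows, now, max_hours=MAX_HOURS):
    """rows: (app_id, start_time) pairs for RUNNING applications.
    Returns the ids of apps that have run longer than max_hours."""
    limit_s = max_hours * 3600
    return [app_id for app_id, started in rows
            if (now - started).total_seconds() > limit_s]

def kill(app_id):
    # `yarn application -kill <id>` is the standard YARN CLI call.
    subprocess.run(["yarn", "application", "-kill", app_id], check=True)
```

Fed from `yarn application -list -appStates RUNNING` (plus each app's start time) and run from cron, this approximates what Ambari-managed clusters script the same way.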
Hi, the current classpath (# hadoop classpath) does not include the path to the AWS jar file (/opt/mapr/hadoop/hadoop-2.7.0/share/hadoop/tools/lib/hadoop-aws-2.7.0-mapr-1506.jar). How can I add it to the existing classpath in the MapR cluster? Please advise. Here is what I tried, which did not work: 1. I tried to copy the jar file (above …
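One standard Hadoop approach (not MapR-specific, so treat it as an assumption for this layout) is to append the jar to HADOOP_CLASSPATH in hadoop-env.sh on each node, so every `hadoop` invocation picks it up:

```shell
# Config fragment: /opt/mapr/hadoop/hadoop-2.7.0/etc/hadoop/hadoop-env.sh
# Appends the AWS jar named in the question to the Hadoop classpath.
export HADOOP_CLASSPATH="$HADOOP_CLASSPATH:/opt/mapr/hadoop/hadoop-2.7.0/share/hadoop/tools/lib/hadoop-aws-2.7.0-mapr-1506.jar"
```

Afterwards, `hadoop classpath | tr ':' '\n' | grep hadoop-aws` should show the entry; the change must be made on every node that runs the jobs.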
I'd like to install a MapR cluster that has two node types: master nodes and data nodes. Is it possible to have MapR nodes without any MapR-FS disks?
My question is: what is the outcome when two clients try to connect to the same VIP pool? According to the MapR documentation: I just wonder why we need two VIP pools. What if Client 2 tries to connect to VIP Pool A?
I am trying to query a MapR-DB table by id (in JSON format) through the OJAI Java API, but I am getting only the ids of all the documents present in the table. I am not able to get all the details of a particular user. Please help me in this regard.
I am trying to install MapR on a single-node VM for basic development and testing. I have created a separate partition of 500 GB. The partition can be accessed, it is unmounted, and even mkfs.ext3 runs smoothly on it. But during installation I am not able to select that partition; it's grayed out. What are the other conditions for a disk to be …
Hi All, in my three-node cluster I have tuned all the required parameters for performance, but it is not helping much in my case. All our Hive tables are created in Parquet format. When my team tries to load from an external table to an internal table (please find the script below): ksh -c 'hadoop fs -rm -R …