Answers and Ideas

qulino
Dear MapR Community, I am seeing the 'Volume data unavailable' and 'Volume low data replication' alarms on the volume containing metrics on one node: /var/mapr/local/mycluster_serv1/metrics. One of the volume's containers is offline (a diagnostic sketch follows below):
maprcli dump containerinfo -ids 2083 -json
{ "timestamp":1534350208488, "timeofday":"2018-08-15 07:23:28.488 GMT+0300",…
in Answers
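A minimal diagnostic sketch for this kind of alarm, assuming MapR's usual local-volume naming; the volume name below is inferred from the path in the question and may differ on your cluster:

    # Check the volume's replication state and usage
    maprcli volume info -name mapr.mycluster_serv1.local.metrics -json

    # Inspect the offline container reported in the alarm
    maprcli dump containerinfo -ids 2083 -json

    # See how the replication manager views the volume's containers
    maprcli dump replicationmanagerinfo -volumename mapr.mycluster_serv1.local.metrics -json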
john.humphreys
How can I monitor MapR-DB region splits and packing? I'd like to know when they start and stop, to see whether they are responsible for some intermittent slowness we see when using MapR-DB and/or MapR Streams. If the answer is Spyglass... is there an alternative way? We had to disable Spyglass due to performance bugs in MapR 5.2.0. (A polling-based alternative is sketched below.)
in Answers
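Not from the thread, but one non-Spyglass approach is to poll the region layout and diff snapshots over time; splits and packs show up as regions appearing and disappearing. A sketch, with a placeholder table path:

    # Snapshot the region layout of a table
    maprcli table region list -path /tables/mytable -json > regions_$(date +%s).json

    # Diff two snapshots to spot splits (new regions) and packs (merged regions)
    diff regions_1534350000.json regions_1534353600.json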
RSS
I have created huge files in MapR-FS; however, the file-system space used has not increased. (A sketch for checking logical versus consumed space follows below.)
in Answers
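A sketch for narrowing down the discrepancy (the paths and volume name are placeholders); note that MapR-FS compression and sparse files can make the space consumed much smaller than the logical file size:

    # Logical size of the files just written
    hadoop fs -du -s -h /user/mapr/bigfiles

    # Space the enclosing volume actually consumes
    maprcli volume info -name users -json | grep -i used

    # Cluster-wide capacity as MapR-FS reports it
    hadoop fs -df -h /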
da
Currently I am able to replicate data from Oracle DB to MapR-DB with MapR Streams: Oracle -> Oracle GoldenGate (MapR Streams client) -> MapR-FS (MapR Streams data) -> Ab Initio -> MapR-DB. But here I am looking for a solution that can replace Ab Initio. Any suggestions? Can Apache Spark on MapR serve the purpose? How? Any other option ...…
in Answers
rbh
The MapR Installer is giving me the following error: "RAM": {"required": "64 GB", "state": "ERROR", "value": "8.0 GB"}. It gives the same error for versions 6.0.1, 6.0.0, 5.2.0, 5.1.0, 5.0.0, and 4.0.1. For basic development and testing purposes, 64 GB of RAM is expensive, and I am only using MapR-DB. The documentation on the website says the minimum RAM for…
in Answers
MichaelSegel
With respect to storage tiering, MapR currently only allows you to segregate machines into groups, and all disks/devices on a node must be homogeneous. So if I wanted to offer 'faster storage' in a cluster, all of the devices on a specific node would have to be of the same type. That means adding a machine or set of machines that only have…
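For context, the usual workaround is node topologies: put the homogeneous 'fast' nodes under their own topology and pin volumes to it. A sketch with placeholder ids and names:

    # Move the faster machines into their own topology
    maprcli node move -serverids 1234567890123456789 -topology /data/ssd

    # Place a volume on those nodes only
    maprcli volume move -name fastvolume -topology /data/ssd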
maprcommunity
Feature requested by Sanjeev Kumar in 'Truncate mapr json table from Shell'. Tug Grall replied: MapR-DB JSON does not yet have a truncate operation, so you have to delete and recreate the table. That said, it could be an interesting feature to add to the product.
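The delete-and-recreate workaround described above, as a shell sketch (the table path is a placeholder, and any non-default table settings would have to be re-applied after the create):

    # 'Truncate' a JSON table by dropping and recreating it
    maprcli table delete -path /apps/mytable
    maprcli table create -path /apps/mytable -tabletype json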
dhinakaran
What is the real use of a service impersonation ticket? I tried it as shown below (a usage sketch follows the steps).
1) There are two users in the cluster: mapr (uid 15002) and a test user (uid 1000).
2) I generated a service impersonation ticket for mapr with impersonation uid 1000:
maprlogin generateticket -type servicewithimpersonation -user mapr -out /home/test/impticket…
in Answers
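A sketch of how such a ticket is typically consumed, assuming the service process runs as mapr; HADOOP_PROXY_USER is a standard Hadoop client mechanism rather than anything MapR-specific, so treat this as an illustration, not the only way to impersonate:

    # Point the MapR client at the impersonation ticket generated above
    export MAPR_TICKETFILE_LOCATION=/home/test/impticket

    # Ask the Hadoop client to act as the impersonated user (uid 1000 = test)
    export HADOOP_PROXY_USER=test
    hadoop fs -ls /user/test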
john.humphreys
Chaitanya Yalamanchili - quick question. I saw a new issue which I believe is associated with the same bug. My consumer group hung again (all partitions this time). But in this case, the stream/assign/list monitoring endpoint actually says nothing is using the consumer group; it just returns an empty array. This is still true even…
in Answers
etd45
Hi, I'm testing Spark job submission with the SparkPi application. When I run:
/opt/mapr/spark/spark-2.2.1/bin/spark-submit --class org.apache.spark.examples.SparkPi --master yarn --deploy-mode cluster maprfs:///user/mapr/spark-examples_2.11-2.2.1-mapr-1803.jar 100
I expect to get the ApplicationId, but I only get this stdout: Warning:… (A sketch for recovering the id from YARN follows below.)
in Answers
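In cluster deploy mode the application id is normally printed by the YARN client at INFO level, so if stdout shows only warnings, the log level may be suppressing it. Either way, the id can be recovered from YARN directly (the application id below is a placeholder):

    # List applications to find the id of the submitted job
    yarn application -list -appStates RUNNING,ACCEPTED,FINISHED

    # Pull the driver logs once the id is known
    yarn logs -applicationId application_1534350208488_0001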
petemayall
Hi, I have been using the MapR Sandbox successfully, apart from Drill Explorer. I have downloaded Simba's Drill ODBC driver but am unable to configure it to access the Sandbox. First, am I using the right product, and if so, how do I configure it, please? (A DSN sketch follows below.) Pete
in Answers
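Drill Explorer is bundled with the Drill ODBC driver and talks to whatever DSN you define for it. A minimal DSN sketch for a Linux/macOS client connecting directly to the Sandbox's Drillbit; the driver path, host, and DSN name are assumptions, and on Windows the same keys go into the ODBC Data Source Administrator instead:

    # Append a DSN for the Sandbox to the user's ODBC config
    cat >> ~/.odbc.ini <<'EOF'
    [DrillSandbox]
    Driver=/opt/mapr/drillodbc/lib/64/libdrillodbc_sb64.so
    ConnectionType=Direct
    HOST=localhost
    PORT=31010
    EOF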
Kashivishwanath
Hello all, I am trying to stream data from a MapR Streams topic to MapR-DB. I am using the MapR Sandbox 5.2.1. I am attaching the code snippet and the error I am getting; please tell me if I am missing anything and help me fix this issue.
CODE SNIPPET 1 (saving an RDD to MapR-DB): ConsumerConfig.AUTO_OFFSET_RESET_CONFIG -> "earliest",…
in Answers