
News

Announcement: The Converge Community turns 6 months old! View top posts and take our poll

sjgx
I have a large set of data, split over hundreds of files, that I am copying over to my cluster via ssh. I have 9 nodes, all but one set up as data nodes; the remaining node (hadoop-node2) has most of the MapR processes running on it. On hadoop-node2 I run hadoop dfs -mkdir /data and then copy over my data via ssh
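One way to land the files (a sketch only; user@hadoop-node2 and /local/dataset are placeholder names here, and hadoop fs is the current form of the deprecated hadoop dfs):

# On hadoop-node2: create the target directory once
hadoop fs -mkdir /data

# From the source machine: stream each file over ssh straight into the
# cluster filesystem (hadoop fs -put reads from stdin when the source is "-")
for f in /local/dataset/*; do
  cat "$f" | ssh user@hadoop-node2 "hadoop fs -put - /data/$(basename "$f")"
done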
2

bgajjela
Hi, is there any possibility of MapR Hive and Apache Hive being in sync? What is the timeline, or the scope, for MapR Hive to be in sync with new releases of Apache Hive? I believe we are a long way from Apache Hive, even though some backporting of bug fixes into the current Hive release is happening. Thanks, Bharath
0
Top & Trending
onelson
Log into the MapR Community to add to course discussions. Connect with and learn from other students as well as MapR subject matter experts. Course Discussions: Essentials (ESS 100) Lesson 1 – Describe your big data problem; Essentials (ESS 102) Lesson 6 – What features are important for your big data projects? Find
28
Vinayak Meghraj
Steps to execute R from the /opt/mapr/spark/spark-1.6.1/bin/sparkR interactive shell

Example 1)
1) people <- read.df(sqlContext, "file:///opt/mapr/spark/spark-1.6.1/examples/src/main/resources/people.json", "json")
2) head(people)

Example 2)
1) sc <- sparkR.init()
2) sqlContext <- sparkRSQL.init(sc)
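A possible continuation of Example 2 (a sketch, not part of the original post): once sc and sqlContext are initialized, the same sample JSON file from Example 1 can be loaded and queried; the age filter below is illustrative only.

1) df <- read.df(sqlContext, "file:///opt/mapr/spark/spark-1.6.1/examples/src/main/resources/people.json", "json")  # load the sample file with the sqlContext from Example 2
2) printSchema(df)              # inspect the inferred schema
3) head(filter(df, df$age > 20))  # illustrative query: rows where age > 20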
2