
MapR CLI 101 Tutorial

Blog Post created by vgutta on Dec 14, 2016

This is a tutorial on the MapR command-line interface (CLI). It was inspired by a previous post in the Converge Blog.

 

Before we start, let's review the MapR architecture:

 

MapR Architecture

The MapR architecture consists of the following services or daemons:

  • CLDB (Container Location Database)
  • MapR Fileserver
  • Resource Manager
  • Node Manager
  • ZooKeeper
  • NFS
  • Webserver
  • Warden

Warden is a daemon that runs on every cluster node to manage and monitor the other services on that node; think of it as a watchdog. The Warden will not start any services unless ZooKeeper is reachable and more than half of the configured ZooKeeper nodes are alive, that is, unless a quorum exists.
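To illustrate the quorum rule, here is a small Python sketch (my own helper, not part of MapR) that checks whether enough ZooKeeper nodes are alive:

```python
def zk_has_quorum(configured: int, alive: int) -> bool:
    """ZooKeeper needs strictly more than half of the configured
    ensemble alive to elect a leader and serve requests."""
    return alive > configured // 2

# A 3-node ensemble survives one failure but not two:
print(zk_has_quorum(3, 2))  # True
print(zk_has_quorum(3, 1))  # False
# A 5-node ensemble tolerates two failures:
print(zk_has_quorum(5, 3))  # True
```

This is why production ZooKeeper ensembles use an odd number of nodes: a 4-node ensemble tolerates no more failures than a 3-node one.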

 

Cluster Management

  • Bringing up the cluster

1. Start ZooKeeper on all of the nodes where it is installed by running the command:

  root@ip-10-245-12-77:~# service mapr-zookeeper start

2. Check the status of the ZooKeeper by running the command:

  root@ip-10-245-12-77:~# service mapr-zookeeper qstatus
  After running qstatus, all ZooKeeper nodes should report "follower", with the exception of a single "leader."
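If you script this check, you can collect the qstatus result from each node and assert that exactly one reports "leader". A minimal Python sketch (the status strings are assumed to have been gathered already, e.g. over ssh):

```python
def check_zk_roles(statuses):
    """Expect exactly one 'leader'; every other node must be a 'follower'."""
    leaders = [s for s in statuses if s == "leader"]
    followers = [s for s in statuses if s == "follower"]
    return len(leaders) == 1 and len(followers) == len(statuses) - 1

print(check_zk_roles(["follower", "leader", "follower"]))  # True
print(check_zk_roles(["leader", "leader", "follower"]))    # False
```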

3. On the nodes running the CLDB, start the warden by running the command:

  root@ip-10-245-12-77:~# service mapr-warden start

4. Verify that the CLDB master is running by issuing the following command on all nodes where CLDB is installed:

  root@ip-10-245-12-77:~# maprcli node cldbmaster

5. On the remaining nodes, start the warden by running the command:

  root@ip-10-245-8-52:~# service mapr-warden start

6. After waiting a couple of minutes, verify cluster status by running the command:

  root@ip-10-245-12-77:~#  maprcli node list -columns service

For more information, see Troubleshooting Initialization.

 

  • Stopping the cluster

Verify that no MapReduce or HBase processes are active, and that no data is being loaded into the cluster or persisted within it.

When you shut down a cluster, follow this sequence to preserve your data and replication:

  1. Verify that recent data has finished processing.
  2. Shut down any NFS servers.
  3. Shut down any ecosystem components that are running.
  4. Shut down the job and task trackers.
  5. Shut down ResourceManager and NodeManager services if you are using YARN.
  6. Shut down Warden on all nodes that are not running CLDB.
  7. Shut down Warden on the CLDB nodes.
  8. Shut down ZooKeeper on the ZooKeeper nodes.
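The ordering above can be captured in a small Python sketch that, given node lists gathered beforehand, emits the shutdown commands in the correct sequence (the node names and the use of ssh are placeholders; this is illustrative, not an official tool):

```python
def shutdown_plan(nfs_nodes, cldb_nodes, other_nodes, zk_nodes):
    """Build the ordered command list to stop a MapR cluster:
    NFS first, then Warden on non-CLDB nodes, then Warden on
    CLDB nodes, and ZooKeeper last."""
    plan = []
    if nfs_nodes:
        plan.append("maprcli node services -nfs stop -nodes " + " ".join(nfs_nodes))
    for n in other_nodes:
        plan.append(f"ssh {n} service mapr-warden stop")
    for n in cldb_nodes:
        plan.append(f"ssh {n} service mapr-warden stop")
    for n in zk_nodes:
        plan.append(f"ssh {n} service mapr-zookeeper stop")
    return plan

for cmd in shutdown_plan(["nodeA", "nodeB"], ["nodeA"], ["nodeB"], ["nodeA"]):
    print(cmd)
```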

Complete the following steps to shut down the cluster:

Change to the root user (or use sudo for the following commands).


Before shutting down the cluster, you will need a list of the NFS nodes, the CLDB nodes, and all remaining nodes. Once the CLDB is shut down, you can no longer retrieve a list of nodes, so it is important to gather this information at the beginning of the process. Use the node list command as follows:

1. Determine which nodes are running the NFS service

root@ip-10-245-12-77:~# maprcli node list -filter "[rp==/*]and[svc==nfs]" -columns id,h,hn,svc,rp
id                   racktopo                                                         service                                              hostname                                      health  ip
8707346954164511835  /data/default-rack/ip-10-244-129-15.us-west-2.compute.internal   fileserver,tasktracker,hbregionserver,nfs,hoststats  ip-10-244-129-15.us-west-2.compute.internal   0       10.244.129.15
453989218842577487   /data/default-rack/ip-10-244-131-141.us-west-2.compute.internal  fileserver,tasktracker,hbregionserver,nfs,hoststats  ip-10-244-131-141.us-west-2.compute.internal  0       10.244.131.141
2892638075826172151  /data/default-rack/ip-10-244-164-169.us-west-2.compute.internal  fileserver,tasktracker,hbregionserver,nfs,hoststats  ip-10-244-164-169.us-west-2.compute.internal  0       10.244.164.169
8833235307193396272  /data/default-rack/ip-10-244-165-232.us-west-2.compute.internal  webserver,cldb,fileserver,nfs,hoststats              ip-10-244-165-232.us-west-2.compute.internal  0       10.244.165.232
6559272699074389504  /data/default-rack/ip-10-244-45-60.us-west-2.compute.internal    fileserver,tasktracker,hbmaster,nfs,hoststats        ip-10-244-45-60.us-west-2.compute.internal    0       10.244.45.60
9137989191756045555  /data/default-rack/ip-10-245-12-77.us-west-2.compute.internal    webserver,cldb,fileserver,nfs,hoststats              ip-10-245-12-77.us-west-2.compute.internal    0       10.245.12.77
7791110135751846418  /data/default-rack/ip-10-245-14-103.us-west-2.compute.internal   fileserver,tasktracker,hbmaster,nfs,hoststats        ip-10-245-14-103.us-west-2.compute.internal   0       10.245.14.103
1152291012558508871  /data/default-rack/ip-10-245-7-200.us-west-2.compute.internal    fileserver,tasktracker,hbregionserver,nfs,hoststats  ip-10-245-7-200.us-west-2.compute.internal    0       10.245.7.200
7482334955545014043  /data/default-rack/ip-10-245-8-49.us-west-2.compute.internal     fileserver,tasktracker,hbregionserver,nfs,hoststats  ip-10-245-8-49.us-west-2.compute.internal     0       10.245.8.49
4127302514082488703  /data/default-rack/ip-10-245-8-52.us-west-2.compute.internal     webserver,cldb,fileserver,nfs,hoststats,jobtracker   ip-10-245-8-52.us-west-2.compute.internal     0       10.245.8.52
root@ip-10-245-12-77:~#

2. Determine which nodes are running the CLDB service

root@ip-10-245-12-77:~# maprcli node list -filter "[rp==/*]and[svc==cldb]" -columns id,h,hn,svc,rp
id                   racktopo                                                         service                                             hostname                                      health  ip
8833235307193396272  /data/default-rack/ip-10-244-165-232.us-west-2.compute.internal  webserver,cldb,fileserver,nfs,hoststats             ip-10-244-165-232.us-west-2.compute.internal  0       10.244.165.232
9137989191756045555  /data/default-rack/ip-10-245-12-77.us-west-2.compute.internal    webserver,cldb,fileserver,nfs,hoststats             ip-10-245-12-77.us-west-2.compute.internal    0       10.245.12.77
4127302514082488703  /data/default-rack/ip-10-245-8-52.us-west-2.compute.internal     webserver,cldb,fileserver,nfs,hoststats,jobtracker  ip-10-245-8-52.us-west-2.compute.internal     0       10.245.8.52
root@ip-10-245-12-77:~#

3. List all non-CLDB nodes

root@ip-10-245-12-77:~# maprcli node list -filter "[rp==/*]and[svc!=cldb]" -columns id,h,hn,svc,rp
id                   racktopo                                                         service                                              hostname                                      health  ip
8707346954164511835  /data/default-rack/ip-10-244-129-15.us-west-2.compute.internal   fileserver,tasktracker,hbregionserver,nfs,hoststats  ip-10-244-129-15.us-west-2.compute.internal   0       10.244.129.15
453989218842577487   /data/default-rack/ip-10-244-131-141.us-west-2.compute.internal  fileserver,tasktracker,hbregionserver,nfs,hoststats  ip-10-244-131-141.us-west-2.compute.internal  0       10.244.131.141
2892638075826172151  /data/default-rack/ip-10-244-164-169.us-west-2.compute.internal  fileserver,tasktracker,hbregionserver,nfs,hoststats  ip-10-244-164-169.us-west-2.compute.internal  0       10.244.164.169
6559272699074389504  /data/default-rack/ip-10-244-45-60.us-west-2.compute.internal    fileserver,tasktracker,hbmaster,nfs,hoststats        ip-10-244-45-60.us-west-2.compute.internal    0       10.244.45.60
7791110135751846418  /data/default-rack/ip-10-245-14-103.us-west-2.compute.internal   fileserver,tasktracker,hbmaster,nfs,hoststats        ip-10-245-14-103.us-west-2.compute.internal   0       10.245.14.103
1152291012558508871  /data/default-rack/ip-10-245-7-200.us-west-2.compute.internal    fileserver,tasktracker,hbregionserver,nfs,hoststats  ip-10-245-7-200.us-west-2.compute.internal    0       10.245.7.200
7482334955545014043  /data/default-rack/ip-10-245-8-49.us-west-2.compute.internal     fileserver,tasktracker,hbregionserver,nfs,hoststats  ip-10-245-8-49.us-west-2.compute.internal     0       10.245.8.49
root@ip-10-245-12-77:~#
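If you automate these inventory steps, the tabular output can be parsed. A rough Python sketch that pulls the hostname column out of rows like the ones above (it assumes the column order shown here: id, racktopo, service, hostname, health, ip):

```python
def hostnames(node_list_rows):
    """Extract the hostname (4th column) from data rows of
    'maprcli node list -columns id,h,hn,svc,rp' output."""
    return [row.split()[3] for row in node_list_rows if row.strip()]

rows = [
    "8707346954164511835  /data/default-rack/ip-10-244-129-15.us-west-2.compute.internal  fileserver,nfs  ip-10-244-129-15.us-west-2.compute.internal  0  10.244.129.15",
]
print(hostnames(rows))
```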

4. Shut down all NFS instances

root@ip-10-245-12-77:~# maprcli node services -nfs stop -nodes ip-10-244-129-15.us-west-2.compute.internal ip-10-244-131-141.us-west-2.compute.internal ip-10-244-164-169.us-west-2.compute.internal ip-10-244-165-232.us-west-2.compute.internal ip-10-244-45-60.us-west-2.compute.internal ip-10-245-12-77.us-west-2.compute.internal ip-10-245-14-103.us-west-2.compute.internal ip-10-245-7-200.us-west-2.compute.internal ip-10-245-8-49.us-west-2.compute.internal ip-10-245-8-52.us-west-2.compute.internal
root@ip-10-245-12-77:~#

5. Stop the warden on each CLDB node

root@ip-10-245-12-77:~# service mapr-warden stop

6. Stop the warden on the remaining cluster nodes

root@ip-10-245-12-77:~# service mapr-warden stop

7. Stop ZooKeeper on each ZooKeeper node

root@ip-10-245-12-77:~# service mapr-zookeeper stop

Starting, Stopping, or Restarting Services on Nodes

Starts, stops, or restarts services on one or more server nodes. Permissions required: ss, fc, or a.

To start or stop services, you must specify the service name, the action (start, stop, or restart), and the nodes on which to perform the action. You can specify the nodes in either of two ways:

  1. Use the nodes parameter to specify a space-delimited list of node names.
  2. Use the filter parameter to specify all nodes that match a certain pattern. See Filters for more information.
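The two addressing modes can be sketched as a tiny Python command builder (purely illustrative; a hypothetical helper, not part of maprcli):

```python
def node_services_cmd(service, action, nodes=None, filter_expr=None):
    """Build a 'maprcli node services' command line using either an
    explicit node list or a filter expression (exactly one of the two)."""
    if (nodes is None) == (filter_expr is None):
        raise ValueError("specify exactly one of nodes or filter_expr")
    cmd = f"maprcli node services -name {service} -action {action}"
    if nodes:
        cmd += " -nodes " + " ".join(nodes)
    else:
        cmd += f' -filter "{filter_expr}"'
    return cmd

print(node_services_cmd("nfs", "restart", nodes=["hadoop1"]))
print(node_services_cmd("nfs", "restart", filter_expr="[csvc==nfs]"))
```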

Syntax:

CLI

maprcli node services
    [ -action start|stop|restart ]
    [ -cldb start|stop|restart ]
    [ -cluster <cluster> ]
    [ -fileserver start|stop|restart ]
    [ -filter <filter> ]
    [ -hbmaster start|stop|restart ]
    [ -hbregionserver start|stop|restart ]
    [ -jobtracker start|stop|restart ]
    [ -name <service> ]
    [ -nfs start|stop|restart ]
    [ -nodes <node names> ]
    [ -tasktracker start|stop|restart ]
    [ -webserver start|stop|restart ]
    [ -zkconnect <ZooKeeper Connect String> ]

Examples

Start the NodeManager Service

   maprcli node services -name nodemanager -action start

Stop the ResourceManager Service

   maprcli node services -name resourcemanager -action stop

Restart the ResourceManager Service

   maprcli node services -name resourcemanager -action restart

Restart the NFS Service

   maprcli node services -nodes hadoop1 -nfs restart

Restart the NFS Service Using a Filter

Using a filter is common, especially in HBase environments, where full restarts of region and master servers are needed.

   maprcli node services -filter "[csvc==nfs]" -nfs restart

 

Giving Full Permissions to the MapR Administrator User

root@ip-10-245-12-77:~# id mapr
uid=2147483632(mapr) gid=2147483632(mapr) groups=2147483632(mapr),42(shadow)
root@ip-10-245-12-77:~# maprcli acl edit -type cluster -user mapr:fc
root@ip-10-245-12-77:~#

Setting Up Alarm Email Notifications

root@ip-10-245-12-77:~# maprcli alarm config save -values "AE_ALARM_AEQUOTA_EXCEEDED,1,Carlos.Morillo@maprtech.com"
root@ip-10-245-12-77:~# maprcli alarm config save -values "NODE_ALARM_CORE_PRESENT,1,Carlos.Morillo@maprtech.com"
root@ip-10-245-12-77:~#
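Each -values argument is a comma-separated triple: the alarm name, a flag (1/0) enabling individual email notification, and the recipient address. A small hypothetical Python helper (not part of MapR) that assembles it:

```python
def alarm_value(alarm, email, enabled=True):
    """Format the triple passed to 'maprcli alarm config save -values':
    <alarm name>,<enable flag>,<email address>."""
    return f"{alarm},{1 if enabled else 0},{email}"

print(alarm_value("NODE_ALARM_CORE_PRESENT", "admin@example.com"))
# NODE_ALARM_CORE_PRESENT,1,admin@example.com
```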

Listing Alarms

root@ip-10-245-12-77:~# maprcli alarm list -type cluster
root@ip-10-245-12-77:~# maprcli alarm list -type node
root@ip-10-245-12-77:~#

Listing Nodes

root@ip-10-245-12-77:~# maprcli node list -columns id,h,hn,br,da,dtotal,dused,davail,fs-heartbeat
id                   davail  dused  bytesReceived  hostname                                      dtotal  health  fs-heartbeat  ip
8707346954164511835  116     170    1741           ip-10-244-129-15.us-west-2.compute.internal   287     0       0             10.244.129.15
453989218842577487   38      249    2465           ip-10-244-131-141.us-west-2.compute.internal  287     0       0             10.244.131.141
2892638075826172151  106     180    764            ip-10-244-164-169.us-west-2.compute.internal  287     0       0             10.244.164.169
8833235307193396272  109     173    2168           ip-10-244-165-232.us-west-2.compute.internal  283     0       0             10.244.165.232
6559272699074389504  99      187    2605           ip-10-244-45-60.us-west-2.compute.internal    287     0       0             10.244.45.60
9137989191756045555  107     175    1606           ip-10-245-12-77.us-west-2.compute.internal    283     0       0             10.245.12.77
7791110135751846418  39      247    1627           ip-10-245-14-103.us-west-2.compute.internal   287     0       0             10.245.14.103
1152291012558508871  101     185    764            ip-10-245-7-200.us-west-2.compute.internal    287     0       0             10.245.7.200
7482334955545014043  39      248    904            ip-10-245-8-49.us-west-2.compute.internal     287     0       0             10.245.8.49
4127302514082488703  86      197    19685          ip-10-245-8-52.us-west-2.compute.internal     283     0       0             10.245.8.52
root@ip-10-245-12-77:~# maprcli node list -columns id,br,fs-heartbeat,jt-heartbeat
id                   bytesReceived  hostname                                      jt-heartbeat  fs-heartbeat  ip
8707346954164511835  832            ip-10-244-129-15.us-west-2.compute.internal   2             0             10.244.129.15
453989218842577487   1897           ip-10-244-131-141.us-west-2.compute.internal  2             0             10.244.131.141
2892638075826172151  1749           ip-10-244-164-169.us-west-2.compute.internal  2             0             10.244.164.169
8833235307193396272  1521           ip-10-244-165-232.us-west-2.compute.internal  2             0             10.244.165.232
6559272699074389504  1812           ip-10-244-45-60.us-west-2.compute.internal    2             0             10.244.45.60
9137989191756045555  1038           ip-10-245-12-77.us-west-2.compute.internal    2             0             10.245.12.77
7791110135751846418  1084           ip-10-245-14-103.us-west-2.compute.internal   2             0             10.245.14.103
1152291012558508871  836            ip-10-245-7-200.us-west-2.compute.internal    2             0             10.245.7.200
7482334955545014043  1869           ip-10-245-8-49.us-west-2.compute.internal     2             0             10.245.8.49
4127302514082488703  19243          ip-10-245-8-52.us-west-2.compute.internal     2             0             10.245.8.52
root@ip-10-245-12-77:~#

Determining which nodes are running the ZooKeeper service

Note that there is only one ZooKeeper leader and the remaining ZooKeeper nodes are followers.


   root@ip-10-245-12-77:~# maprcli node listzookeepers

Adding a node (in this cluster, the nodes running the ZooKeeper service are also running the CLDB service)

root@newnode:~# /opt/mapr/server/configure.sh -Z ip-10-244-165-232.us-west-2.compute.internal,ip-10-245-12-77.us-west-2.compute.internal,ip-10-245-8-52.us-west-2.compute.internal -C ip-10-244-165-232.us-west-2.compute.internal,ip-10-245-12-77.us-west-2.compute.internal,ip-10-245-8-52.us-west-2.compute.internal
root@newnode:~# /opt/mapr/server/disksetup -F /tmp/disks.txt
root@newnode:~# service mapr-warden start
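The -Z and -C arguments are simply comma-joined lists of the ZooKeeper and CLDB hosts. A throwaway Python sketch that assembles the configure.sh invocation (the hostnames are placeholders):

```python
def configure_cmd(zk_nodes, cldb_nodes):
    """Build the configure.sh call for a new node: -Z takes the
    ZooKeeper ensemble, -C the CLDB nodes, both comma-separated."""
    return ("/opt/mapr/server/configure.sh"
            f" -Z {','.join(zk_nodes)}"
            f" -C {','.join(cldb_nodes)}")

print(configure_cmd(["zk1", "zk2", "zk3"], ["cldb1", "cldb2"]))
```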

  • Volumes

Creating a Volume

root@ip-10-245-12-77:~# maprcli volume create -name carlosvolume -path /carlosvolume -quota 1G -advisoryquota 200M
root@ip-10-245-12-77:~#
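The quota strings accept size suffixes such as M and G, and the advisory quota should stay below the hard quota. A small sketch (my own helper, not MapR code) that converts these strings to bytes for a sanity check:

```python
_UNITS = {"K": 1024, "M": 1024**2, "G": 1024**3, "T": 1024**4}

def quota_bytes(s):
    """Convert a quota string such as '1G' or '200M' to bytes;
    a bare number is taken as bytes."""
    if s[-1].upper() in _UNITS:
        return int(s[:-1]) * _UNITS[s[-1].upper()]
    return int(s)

hard, advisory = quota_bytes("1G"), quota_bytes("200M")
print(advisory < hard)  # True
```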

Creating a Mirror Volume

root@ip-10-245-12-77:~# maprcli volume create -name carlosvolume_mirror -source carlosvolume@my.cluster.com -path /carlosvolume_mirror -type 1
root@ip-10-245-12-77:~#

Listing Volumes

root@ip-10-245-12-77:~# maprcli volume list -columns volumeid,volumetype,volumename,mountdir,mounted,aename,quota,used,totalused,actualreplication,rackpath
quota  mountdir                                                              actualreplication  volumeid   aename  rackpath  used    mounted  volumename                                                       volumetype  totalused  aetype
1024   /carlosvolume                                                         ...                48119723   root    /data     0       1        carlosvolume                                                     0           0          0
0      /carlosvolume_mirror                                                  ...                219729268  root    /data     0       1        carlosvolume_mirror                                              1           0          0
0      /user/mapr                                                            ...                161101264  mapr    /data     597899  1        mapr                                                             0           597899     0
0                                                                            ...                1          mapr    /data     0       0        mapr.cldb.internal                                               0           0          0
0      /                                                                     ...                104597444  mapr    /data     0       1        mapr.cluster.root                                                0           0          0
0      /var/mapr/configuration                                               ...                121877052  mapr    /data     0       1        mapr.configuration                                               0           0          0
0      /hbase                                                                ...                234741384  mapr    /data     0       1        mapr.hbase                                                       0           0          0
0      /var/mapr/local/ip-10-244-129-15.us-west-2.compute.internal/logs      ...                206969086  mapr    /data     5       1        mapr.ip-10-244-129-15.us-west-2.compute.internal.local.logs      0           5          0
0      /var/mapr/local/ip-10-244-129-15.us-west-2.compute.internal/mapred    ...                243798871  mapr    /data     1       1        mapr.ip-10-244-129-15.us-west-2.compute.internal.local.mapred    0           1          0
0      /var/mapr/local/ip-10-244-129-15.us-west-2.compute.internal/metrics   ...                69263110   mapr    /data     96      1        mapr.ip-10-244-129-15.us-west-2.compute.internal.local.metrics   0           96         0
0      /var/mapr/local/ip-10-244-131-141.us-west-2.compute.internal/logs     ...                23205174   mapr    /data     2       1        mapr.ip-10-244-131-141.us-west-2.compute.internal.local.logs     0           2          0
0      /var/mapr/local/ip-10-244-131-141.us-west-2.compute.internal/mapred   ...                105260920  mapr    /data     1       1        mapr.ip-10-244-131-141.us-west-2.compute.internal.local.mapred   0           1          0
0      /var/mapr/local/ip-10-244-131-141.us-west-2.compute.internal/metrics  ...                19860697   mapr    /data     97      1        mapr.ip-10-244-131-141.us-west-2.compute.internal.local.metrics  0           97         0
0      /var/mapr/local/ip-10-244-164-169.us-west-2.compute.internal/logs     ...                212161365  mapr    /data     5       1        mapr.ip-10-244-164-169.us-west-2.compute.internal.local.logs     0           5          0
0      /var/mapr/local/ip-10-244-164-169.us-west-2.compute.internal/mapred   ...                162207671  mapr    /data     1       1        mapr.ip-10-244-164-169.us-west-2.compute.internal.local.mapred   0           1          0
0      /var/mapr/local/ip-10-244-164-169.us-west-2.compute.internal/metrics  ...                251008000  mapr    /data     99      1        mapr.ip-10-244-164-169.us-west-2.compute.internal.local.metrics  0           99         0
0      /var/mapr/local/ip-10-244-165-232.us-west-2.compute.internal/logs     ...                254163265  mapr    /data     0       1        mapr.ip-10-244-165-232.us-west-2.compute.internal.local.logs     0           0          0
0      /var/mapr/local/ip-10-244-165-232.us-west-2.compute.internal/metrics  ...                252158411  mapr    /data     97      1        mapr.ip-10-244-165-232.us-west-2.compute.internal.local.metrics  0           97         0
0      /var/mapr/local/ip-10-244-45-60.us-west-2.compute.internal/logs       ...                185745772  mapr    /data     5       1        mapr.ip-10-244-45-60.us-west-2.compute.internal.local.logs       0           5          0
0      /var/mapr/local/ip-10-244-45-60.us-west-2.compute.internal/mapred     ...                213209407  mapr    /data     1       1        mapr.ip-10-244-45-60.us-west-2.compute.internal.local.mapred     0           1          0
0      /var/mapr/local/ip-10-244-45-60.us-west-2.compute.internal/metrics    ...                211996945  mapr    /data     97      1        mapr.ip-10-244-45-60.us-west-2.compute.internal.local.metrics    0           97         0
0      /var/mapr/local/ip-10-245-12-77.us-west-2.compute.internal/logs       ...                111775179  mapr    /data     0       1        mapr.ip-10-245-12-77.us-west-2.compute.internal.local.logs       0           0          0
0      /var/mapr/local/ip-10-245-12-77.us-west-2.compute.internal/metrics    ...                233931728  mapr    /data     97      1        mapr.ip-10-245-12-77.us-west-2.compute.internal.local.metrics    0           97         0
0      /var/mapr/local/ip-10-245-14-103.us-west-2.compute.internal/logs      ...                251542201  mapr    /data     2       1        mapr.ip-10-245-14-103.us-west-2.compute.internal.local.logs      0           2          0
0      /var/mapr/local/ip-10-245-14-103.us-west-2.compute.internal/mapred    ...                160008303  mapr    /data     1       1        mapr.ip-10-245-14-103.us-west-2.compute.internal.local.mapred    0           1          0
0      /var/mapr/local/ip-10-245-14-103.us-west-2.compute.internal/metrics   ...                73005604   mapr    /data     96      1        mapr.ip-10-245-14-103.us-west-2.compute.internal.local.metrics   0           96         0
0      /var/mapr/local/ip-10-245-7-200.us-west-2.compute.internal/logs       ...                66440508   mapr    /data     5       1        mapr.ip-10-245-7-200.us-west-2.compute.internal.local.logs       0           5          0
0      /var/mapr/local/ip-10-245-7-200.us-west-2.compute.internal/mapred     ...                87429862   mapr    /data     1       1        mapr.ip-10-245-7-200.us-west-2.compute.internal.local.mapred     0           1          0
0      /var/mapr/local/ip-10-245-7-200.us-west-2.compute.internal/metrics    ...                247661324  mapr    /data     97      1        mapr.ip-10-245-7-200.us-west-2.compute.internal.local.metrics    0           97         0
0      /var/mapr/local/ip-10-245-8-49.us-west-2.compute.internal/logs        ...                159900756  mapr    /data     5       1        mapr.ip-10-245-8-49.us-west-2.compute.internal.local.logs        0           5          0
0      /var/mapr/local/ip-10-245-8-49.us-west-2.compute.internal/mapred      ...                141734370  mapr    /data     1       1        mapr.ip-10-245-8-49.us-west-2.compute.internal.local.mapred      0           1          0
0      /var/mapr/local/ip-10-245-8-49.us-west-2.compute.internal/metrics     ...                59237315   mapr    /data     97      1        mapr.ip-10-245-8-49.us-west-2.compute.internal.local.metrics     0           97         0
0      /var/mapr/local/ip-10-245-8-52.us-west-2.compute.internal/logs        ...                69920939   mapr    /data     0       1        mapr.ip-10-245-8-52.us-west-2.compute.internal.local.logs        0           0          0
0      /var/mapr/local/ip-10-245-8-52.us-west-2.compute.internal/metrics     ...                26655377   mapr    /data     96      1        mapr.ip-10-245-8-52.us-west-2.compute.internal.local.metrics     0           96         0
0      /var/mapr/cluster/mapred/jobTracker                                   ...                162832045  mapr    /data     0       1        mapr.jobtracker.volume                                           0           0          0
0      /var/mapr/metrics                                                     ...                157755141  mapr    /data     0       1        mapr.metrics                                                     0           0          0
0      /var/mapr                                                             ...                129618546  mapr    /data     0       1        mapr.var                                                         0           0          0
0      /tera.in                                                              ...                225962280  root    /data     262621  1        tera.in                                                          0           262621     0
0      /tera.out                                                             ...                62181860   root    /data     1       1        tera.out                                                         0           1          0
0      /user                                                                 ...                250577554  mapr    /data     0       1        users                                                            0           0          0
root@ip-10-245-12-77:~#

Volume Properties

root@ip-10-245-12-77:~# maprcli volume info -name carlosvolume
numreplicas  schedulename  volumeid  rackpath  volumename    used  volumetype  aetype  creator  advisoryquota  snapshotcount  quota  mountdir       scheduleid  snapshotused  nameContainerSizeMB  replicationtype  maxinodesalarmthreshold  minreplicas  acl                                                                                  actualreplication  aename  needsGfsck  partlyOutOfTopology  mounted  logicalUsed  readonly  totalused
3                          48119723  /data     carlosvolume  0     0           0       root     200            0              1024   /carlosvolume  0           0             0                    high_throughput  0                        2            {"acl":{"Principal":"User root","Allowed actions":["dump","restore","m","d","fc"]}}  ...                root    false       0                    1        0            0         0
root@ip-10-245-12-77:~# maprcli volume info -output terse -name carlosvolume
qta   rp     ro  mrf  aqt  dsu  nfsck  id        ssu  arf  drf  tsu  on    miath  aen   sid  sn  acl                                                    mt  n             sc  poot  dcr              dlu  t  p              ncsmb  aet
1024  /data  0   2    200  0    false  48119723  0    ...  3    0    root  0      root  0        {"acl":{"User root":["dump","restore","m","d","fc"]}}  1   carlosvolume  0   0     high_throughput  0    0  /carlosvolume  0      0
root@ip-10-245-12-77:~#

Mount/Unmount Volume

root@ip-10-245-12-77:~# hadoop fs -ls maprfs:///
Found 7 items
drwxr-xr-x   - root root          0 2013-03-27 17:33 /carlosvolume
drwxrwxrwx   - root root          0 1970-01-01 00:00 /carlosvolume_mirror
drwxr-xr-x   - mapr mapr          6 2013-02-21 17:50 /hbase
drwxr-xr-x   - root root          1 2013-03-11 14:07 /tera.in
drwxr-xr-x   - root root          1 2013-03-11 14:54 /tera.out
drwxr-xr-x   - mapr mapr          1 2013-02-21 18:23 /user
drwxr-xr-x   - mapr mapr          1 2013-02-21 17:42 /var
root@ip-10-245-12-77:~# maprcli volume unmount -name carlosvolume
root@ip-10-245-12-77:~# hadoop fs -ls maprfs:///
Found 6 items
drwxrwxrwx   - root root          0 1970-01-01 00:00 /carlosvolume_mirror
drwxr-xr-x   - mapr mapr          6 2013-02-21 17:50 /hbase
drwxr-xr-x   - root root          1 2013-03-11 14:07 /tera.in
drwxr-xr-x   - root root          1 2013-03-11 14:54 /tera.out
drwxr-xr-x   - mapr mapr          1 2013-02-21 18:23 /user
drwxr-xr-x   - mapr mapr          1 2013-02-21 17:42 /var
root@ip-10-245-12-77:~# maprcli volume mount -name carlosvolume
root@ip-10-245-12-77:~# hadoop fs -ls maprfs:///
Found 7 items
drwxr-xr-x   - root root          0 2013-03-27 17:33 /carlosvolume
drwxrwxrwx   - root root          0 1970-01-01 00:00 /carlosvolume_mirror
drwxr-xr-x   - mapr mapr          6 2013-02-21 17:50 /hbase
drwxr-xr-x   - root root          1 2013-03-11 14:07 /tera.in
drwxr-xr-x   - root root          1 2013-03-11 14:54 /tera.out
drwxr-xr-x   - mapr mapr          1 2013-02-21 18:23 /user
drwxr-xr-x   - mapr mapr          1 2013-02-21 17:42 /var
root@ip-10-245-12-77:~#

Removing a Volume

root@ip-10-245-12-77:~# hadoop fs -ls maprfs:///
Found 8 items
drwxr-xr-x   - root root          0 2013-03-27 17:33 /carlosvolume
drwxrwxrwx   - root root          0 1970-01-01 00:00 /carlosvolume_mirror
drwxr-xr-x   - mapr mapr          6 2013-02-21 17:50 /hbase
drwxr-xr-x   - root root          1 2013-03-11 14:07 /tera.in
drwxr-xr-x   - root root          1 2013-03-11 14:54 /tera.out
drwxr-xr-x   - root root          0 2013-03-27 17:59 /testvolume
drwxr-xr-x   - mapr mapr          1 2013-02-21 18:23 /user
drwxr-xr-x   - mapr mapr          1 2013-02-21 17:42 /var
root@ip-10-245-12-77:~# maprcli volume remove -name testvolume
root@ip-10-245-12-77:~# hadoop fs -ls maprfs:///
Found 7 items
drwxr-xr-x   - root root          0 2013-03-27 17:33 /carlosvolume
drwxrwxrwx   - root root          0 1970-01-01 00:00 /carlosvolume_mirror
drwxr-xr-x   - mapr mapr          6 2013-02-21 17:50 /hbase
drwxr-xr-x   - root root          1 2013-03-11 14:07 /tera.in
drwxr-xr-x   - root root          1 2013-03-11 14:54 /tera.out
drwxr-xr-x   - mapr mapr          1 2013-02-21 18:23 /user
drwxr-xr-x   - mapr mapr          1 2013-02-21 17:42 /var
root@ip-10-245-12-77:~#

In a later post, I will cover more of the MapR CLI, including examples of how to use it with Mirrors, Schedules, and Snapshots.

I hope you enjoy it and find this information useful.

 

Related Resources

Getting Started with the MapR Command Line 

Accessing MapR FileSystem without installing the MapR Client 

Accessing MapR DB without installing the MapR Client 

More on maprcli
