
Error occurs when using label-based scheduling with FairScheduler

Question asked by allenzhang on Dec 21, 2015
Latest reply on Dec 24, 2015 by allenzhang
Hello All,

I know there is a feature in Apache Hadoop that supports label-based scheduling with the Capacity Scheduler,
and I was wondering whether label-based scheduling also works with the Fair Scheduler in MapR.

To answer my own question: YES,
since I found this page on your site:
http://doc.mapr.com/display/MapR/Label-based+Scheduling+for+YARN+Applications

I tried to follow that page and run a test on the MapR Sandbox VM.
We are considering enabling this key feature if it works well. I am only a trial user for now and hope to become a real customer, so please help with the error below. Thanks in advance.

**Finally, I got this error:**
15/12/19 07:10:38 INFO security.ExternalTokenManagerFactory: Initialized external token manager class - com.mapr.hadoop.yarn.security.MapRTicketManager
15/12/19 07:10:38 FATAL distributedshell.Client: Error running Client
org.apache.hadoop.yarn.exceptions.InvalidResourceRequestException: Invalid resource request, queue=root.test doesn't have permission to access all labels in resource request. labelExpression of resource request=fast. Queue labels=
        at org.apache.hadoop.yarn.server.resourcemanager.scheduler.SchedulerUtils.validateResourceRequest(SchedulerUtils.java:273)
        at org.apache.hadoop.yarn.server.resourcemanager.RMAppManager.validateAndCreateResourceRequest(RMAppManager.java:385)
        at org.apache.hadoop.yarn.server.resourcemanager.RMAppManager.createAndPopulateNewRMApp(RMAppManager.java:328)
        at org.apache.hadoop.yarn.server.resourcemanager.RMAppManager.submitApplication(RMAppManager.java:281)
        at org.apache.hadoop.yarn.server.resourcemanager.ClientRMService.submitApplication(ClientRMService.java:595)
        at org.apache.hadoop.yarn.api.impl.pb.service.ApplicationClientProtocolPBServiceImpl.submitApplication(ApplicationClientProtocolPBServiceImpl.java:230)
        at org.apache.hadoop.yarn.proto.ApplicationClientProtocol$ApplicationClientProtocolService$2.callBlockingMethod(ApplicationClientProtocol.java:451)
        at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:616)
        at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:969)
        at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2036)
        at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2032)
        at java.security.AccessController.doPrivileged(Native Method)
        at javax.security.auth.Subject.doAs(Subject.java:415)
        at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1595)
        at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2030)

        at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
        at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:57)
        at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
        at java.lang.reflect.Constructor.newInstance(Constructor.java:526)
        at org.apache.hadoop.yarn.ipc.RPCUtil.instantiateException(RPCUtil.java:53)
        at org.apache.hadoop.yarn.ipc.RPCUtil.unwrapAndThrowException(RPCUtil.java:101)
        at org.apache.hadoop.yarn.api.impl.pb.client.ApplicationClientProtocolPBClientImpl.submitApplication(ApplicationClientProtocolPBClientImpl.java:263)
        at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
        at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
        at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
        at java.lang.reflect.Method.invoke(Method.java:606)
        at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:187)
        at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:102)
        at com.sun.proxy.$Proxy8.submitApplication(Unknown Source)
        at org.apache.hadoop.yarn.client.api.impl.YarnClientImpl.submitApplication(YarnClientImpl.java:259)
        at org.apache.hadoop.yarn.applications.distributedshell.Client.run(Client.java:708)
        at org.apache.hadoop.yarn.applications.distributedshell.Client.main(Client.java:215)
        at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
        at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
        at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
        at java.lang.reflect.Method.invoke(Method.java:606)
        at org.apache.hadoop.util.RunJar.run(RunJar.java:221)
        at org.apache.hadoop.util.RunJar.main(RunJar.java:136)


----------


The command I submitted, from the /opt/mapr/hadoop/hadoop-2.7.0 directory, is:
    bin/hadoop jar share/hadoop/yarn/hadoop-yarn-applications-distributedshell-2.7.0-mapr-1506.jar \
        org.apache.hadoop.yarn.applications.distributedshell.Client \
        --jar share/hadoop/yarn/hadoop-yarn-applications-distributedshell-2.7.0-mapr-1506.jar \
        --shell_command ls \
        -node_label_expression fast \
        -queue test

The fair-scheduler.xml file contains:

    <queue name="test">
      <aclSubmitApps>root</aclSubmitApps>
      <label>fast</label>
    </queue>
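
For reference, the rest of fair-scheduler.xml is untouched, so the whole file should amount to the standard allocations wrapper around this one queue. Below is a minimal sketch of how I understand the complete file should look; everything other than the "test" queue is assumed to be the stock Sandbox default:

    <?xml version="1.0"?>
    <!-- Minimal sketch of fair-scheduler.xml as I understand it from the doc page above.
         Only the "test" queue is mine; the rest is assumed to be stock Sandbox config. -->
    <allocations>
      <queue name="test">
        <aclSubmitApps>root</aclSubmitApps>
        <!-- "fast" matches the label assigned to the node in /var/mapr/node.labels -->
        <label>fast</label>
      </queue>
    </allocations>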


**And maybe the following info can help:**

[root@maprdemo hadoop-2.7.0]# **yarn rmadmin -refreshQueues**
15/12/19 07:12:43 INFO client.MapRZKBasedRMFailoverProxyProvider: Updated RM address to maprdemo/192.168.247.129:8033
[root@maprdemo hadoop-2.7.0]# **yarn rmadmin -refreshLabels**
15/12/19 07:12:51 INFO client.MapRZKBasedRMFailoverProxyProvider: Updated RM address to maprdemo/192.168.247.129:8033
Refreshed labels for nodes in the cluster successfully.

[root@maprdemo hadoop-2.7.0]# **hadoop fs -cat /var/mapr/node.labels**

maprdemo fast
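
For completeness, the node labels file is wired into yarn-site.xml roughly as below. The property names are taken from my reading of the doc page above, so please treat them as my assumption rather than confirmed settings:

    <!-- yarn-site.xml additions for node labels, per my reading of the doc page above.
         The property names and the 120000 ms monitor interval are my assumptions. -->
    <property>
      <name>node.labels.file</name>
      <value>/var/mapr/node.labels</value>
    </property>
    <property>
      <name>node.labels.monitor.interval</name>
      <value>120000</value>
    </property>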

[root@maprdemo hadoop-2.7.0]# **yarn rmadmin -showLabels**           
15/12/19 07:13:33 INFO client.MapRZKBasedRMFailoverProxyProvider: Updated RM address to maprdemo/192.168.247.129:8033
                  Nodes     Labels
               maprdemo     [fast]
[root@maprdemo hadoop-2.7.0]#

Would you please give any feedback on this? Is there any step or configuration I missed?

Thanks,
Allen





