
mapreduce jobs on hive stuck after upgrade from 3.0 to 4.1

Question asked by 00gavin on Jul 8, 2015
Latest reply on Jul 27, 2015 by adamdiaz
Hi,

I attempted an upgrade from MapR 3.0 to 4.1 following the offline upgrade document on the MapR website, and then also followed the instructions to upgrade Hive to 1.0.

I can see all my previous data; however, any query that invokes MapReduce gets stuck right after printing the Kill Command line.



As a temporary workaround, I have pointed HADOOP_HOME to the old hadoop-0.20.2 installation, and this works fine.
However, if I point Hadoop to the newer version, Hive refuses to run any MapReduce query.
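For reference, the workaround above amounts to overriding which Hadoop install Hive picks up. On this cluster it would look roughly like the following; the variable names and the hive-env.sh location are assumptions, since MapR may manage the active Hadoop version through its own configuration instead:

```shell
# Sketch of the temporary workaround -- assumption: Hive resolves Hadoop
# through HADOOP_HOME/PATH. Added to hive-env.sh or exported in the shell
# before starting the Hive CLI.
export HADOOP_HOME=/opt/mapr/hadoop/hadoop-0.20.2
export PATH="$HADOOP_HOME/bin:$PATH"
```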

Please let me know which logs I can check to help troubleshoot this issue.
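For anyone else hitting this: on a Hadoop 2.x / YARN cluster, a job that hangs right after the Kill Command line can usually be inspected with the standard YARN CLI. The application id below is taken from the log further down; the daemon log directory is an assumption based on the install prefix seen in that log:

```shell
# Are any NodeManagers registered? A job submits but never starts if none are.
yarn node -list

# Current state of the stuck application (ACCEPTED vs RUNNING is telling:
# ACCEPTED forever usually means no containers can be allocated).
yarn application -status application_1435871991972_0287

# ResourceManager / NodeManager daemon logs -- assumed location under the
# Hadoop install prefix from the job output below.
ls /opt/mapr/hadoop/hadoop-2.5.1/logs/
```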

Thanks


---

P.S.

**Sample logs from the new Hadoop (2.5.1):**

```
hive> select count(*) from test;
Query ID = root_20150708150909_87a9fc79-3d7a-4010-a77c-8bbd0ec0806c
Total jobs = 1
Launching Job 1 out of 1
Number of reduce tasks determined at compile time: 1
In order to change the average load for a reducer (in bytes):
  set hive.exec.reducers.bytes.per.reducer=<number>
In order to limit the maximum number of reducers:
  set hive.exec.reducers.max=<number>
In order to set a constant number of reducers:
  set mapreduce.job.reduces=<number>
Starting Job = job_1435871991972_0287, Tracking URL = http://10.76.243.51:8088/proxy/application_1435871991972_0287/
Kill Command = /opt/mapr/hadoop/hadoop-2.5.1//bin/hadoop job  -kill job_1435871991972_0287
```



 
***And it's stuck at this point. When pointing Hadoop to the old version:***

**Sample logs from the old Hadoop (0.20.2):**

```
hive> select count(*) from test;
Query ID = root_20150708152020_30abfa8c-a054-49e1-bf8b-009f11824357
Total jobs = 1
Launching Job 1 out of 1
Number of reduce tasks determined at compile time: 1
In order to change the average load for a reducer (in bytes):
  set hive.exec.reducers.bytes.per.reducer=<number>
In order to limit the maximum number of reducers:
  set hive.exec.reducers.max=<number>
In order to set a constant number of reducers:
  set mapred.reduce.tasks=<number>
Starting Job = job_201507021519_0511, Tracking URL = http://inlteexp02.sats.corp:50030/jobdetails.jsp?jobid=job_201507021519_0511
Kill Command = /opt/mapr/hadoop/hadoop-0.20.2/bin/../bin/hadoop job  -kill job_201507021519_0511
Hadoop job information for Stage-1: number of mappers: 1; number of reducers: 1
2015-07-08 15:20:32,958 Stage-1 map = 0%,  reduce = 0%
2015-07-08 15:20:39,003 Stage-1 map = 100%,  reduce = 0%, Cumulative CPU 1.43 sec
2015-07-08 15:20:46,061 Stage-1 map = 100%,  reduce = 100%, Cumulative CPU 2.21 sec
MapReduce Total cumulative CPU time: 2 seconds 210 msec
Ended Job = job_201507021519_0511
MapReduce Jobs Launched:
Stage-Stage-1: Map: 1  Reduce: 1   Cumulative CPU: 2.21 sec   MAPRFS Read: 0 MAPRFS Write: 0 SUCCESS
Total MapReduce CPU Time Spent: 2 seconds 210 msec
OK
10000
Time taken: 15.331 seconds, Fetched: 1 row(s)
```
