
Copying error in the HDFS file system

Question asked by vladdv on Apr 27, 2013
Latest reply on Apr 28, 2013 by Ted Dunning
I wrote the following bash script:

#!/bin/bash
cd /export/hadoop-1.0.1/bin
# Re-format the namenode, then start all daemons.
./hadoop namenode -format
./start-all.sh
# Remove the input/output directories left over from a previous run.
./hadoop fs -rmr hdfs://192.168.1.8:7000/export/hadoop-1.0.1/bin/output
./hadoop fs -rmr hdfs://192.168.1.8:7000/export/hadoop-1.0.1/bin/input
# Note: this creates /export/hadoop-1.0.1/input, not the bin/input path used below.
./hadoop fs -mkdir hdfs://192.168.1.8:7000/export/hadoop-1.0.1/input
# Generate paths.txt locally, then upload it to HDFS.
./readwritepaths
./hadoop fs -put /export/hadoop-1.0.1/bin/input/paths.txt hdfs://192.168.1.8:7000/export/hadoop-1.0.1/bin/input/paths.txt
# Run the indexing job (note: -D generic options are normally parsed only
# when they appear before the positional arguments).
./hadoop jar /export/hadoop-1.0.1/bin/ParallelIndexation.jar org.myorg.ParallelIndexation /export/hadoop-1.0.1/bin/input /export/hadoop-1.0.1/bin/output -D mapred.map.tasks=1 1> resultofexecute.txt 2>&1
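
For reference, whether any datanodes have registered with the namenode can be checked before the put (a sketch using the stock dfsadmin command from Hadoop 1.0.1; the paths match my install):

#!/bin/bash
# Sketch: ask the namenode how many datanodes it currently sees.
# "Datanodes available: 0" in the report would be consistent with the
# replication error shown below.
cd /export/hadoop-1.0.1/bin
./hadoop dfsadmin -report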
When the script reached the command

./hadoop fs -put /export/hadoop-1.0.1/bin/input/paths.txt hdfs://192.168.1.8:7000/export/hadoop-1.0.1/bin/input/paths.txt
I received the following messages:

13/04/28 10:13:15 WARN hdfs.DFSClient: DataStreamer Exception: org.apache.hadoop.ipc.RemoteException: java.io.IOException: File /export/hadoop-1.0.1/bin/input/paths.txt could only be replicated to 0 nodes, instead of 1
  at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getAdditionalBlock(FSNamesystem.java:1556)
  at org.apache.hadoop.hdfs.server.namenode.NameNode.addBlock(NameNode.java:696)
  at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
  at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
  at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
  at java.lang.reflect.Method.invoke(Method.java:601)
  at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:563)
  at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1388)
  at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1384)
  at java.security.AccessController.doPrivileged(Native Method)
  at javax.security.auth.Subject.doAs(Subject.java:415)
  at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1093)
  at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1382)

  at org.apache.hadoop.ipc.Client.call(Client.java:1066)
  at org.apache.hadoop.ipc.RPC$Invoker.invoke(RPC.java:225)
  at $Proxy1.addBlock(Unknown Source)
  at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
  at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
  at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
  at java.lang.reflect.Method.invoke(Method.java:601)
  at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:82)
  at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:59)
  at $Proxy1.addBlock(Unknown Source)
  at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream.locateFollowingBlock(DFSClient.java:3507)
  at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream.nextBlockOutputStream(DFSClient.java:3370)
  at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream.access$2700(DFSClient.java:2586)
  at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream$DataStreamer.run(DFSClient.java:2826)

13/04/28 10:13:15 WARN hdfs.DFSClient: Error Recovery for block null bad datanode[0] nodes == null
13/04/28 10:13:15 WARN hdfs.DFSClient: Could not get block locations. Source file "/export/hadoop-1.0.1/bin/input/paths.txt" - Aborting...
put: java.io.IOException: File /export/hadoop-1.0.1/bin/input/paths.txt could only be replicated to 0 nodes, instead of 1
13/04/28 10:13:15 ERROR hdfs.DFSClient: Exception closing file /export/hadoop-1.0.1/bin/input/paths.txt : org.apache.hadoop.ipc.RemoteException: java.io.IOException: File /export/hadoop-1.0.1/bin/input/paths.txt could only be replicated to 0 nodes, instead of 1
  at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getAdditionalBlock(FSNamesystem.java:1556)
  at org.apache.hadoop.hdfs.server.namenode.NameNode.addBlock(NameNode.java:696)
  at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
  at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
  at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
  at java.lang.reflect.Method.invoke(Method.java:601)
  at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:563)
  at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1388)
  at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1384)
  at java.security.AccessController.doPrivileged(Native Method)
  at javax.security.auth.Subject.doAs(Subject.java:415)
  at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1093)
  at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1382)

org.apache.hadoop.ipc.RemoteException: java.io.IOException: File /export/hadoop-1.0.1/bin/input/paths.txt could only be replicated to 0 nodes, instead of 1
  at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getAdditionalBlock(FSNamesystem.java:1556)
  at org.apache.hadoop.hdfs.server.namenode.NameNode.addBlock(NameNode.java:696)
  at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
  at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
  at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
  at java.lang.reflect.Method.invoke(Method.java:601)
  at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:563)
  at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1388)
  at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1384)
  at java.security.AccessController.doPrivileged(Native Method)
  at javax.security.auth.Subject.doAs(Subject.java:415)
  at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1093)
  at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1382)

  at org.apache.hadoop.ipc.Client.call(Client.java:1066)
  at org.apache.hadoop.ipc.RPC$Invoker.invoke(RPC.java:225)
  at $Proxy1.addBlock(Unknown Source)
  at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
  at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
  at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
  at java.lang.reflect.Method.invoke(Method.java:601)
  at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:82)
  at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:59)
  at $Proxy1.addBlock(Unknown Source)
  at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream.locateFollowingBlock(DFSClient.java:3507)
  at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream.nextBlockOutputStream(DFSClient.java:3370)
  at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream.access$2700(DFSClient.java:2586)
  at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream$DataStreamer.run(DFSClient.java:2826)
I also include the datanode log from one of the slave nodes (the log on the second slave node contains a similar error):

2013-04-28 11:10:40,634 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: STARTUP_MSG:
/************************************************************
STARTUP_MSG: Starting DataNode
STARTUP_MSG:   host = myhost2/192.168.1.10
STARTUP_MSG:   args = []
STARTUP_MSG:   version = 1.0.1
STARTUP_MSG:   build = https://svn.apache.org/repos/asf/hadoop/common/branches/branch-1.0 -r 1243785; compiled by 'hortonfo' on Tue Feb 14 08:15:38 UTC 2012
************************************************************/
2013-04-28 11:10:40,948 INFO org.apache.hadoop.metrics2.impl.MetricsConfig: loaded properties from hadoop-metrics2.properties
2013-04-28 11:10:40,982 INFO org.apache.hadoop.metrics2.impl.MetricsSourceAdapter: MBean for source MetricsSystem,sub=Stats registered.
2013-04-28 11:10:40,983 INFO org.apache.hadoop.metrics2.impl.MetricsSystemImpl: Scheduled snapshot period at 10 second(s).
2013-04-28 11:10:40,983 INFO org.apache.hadoop.metrics2.impl.MetricsSystemImpl: DataNode metrics system started
2013-04-28 11:10:41,285 INFO org.apache.hadoop.metrics2.impl.MetricsSourceAdapter: MBean for source ugi registered.
2013-04-28 11:10:41,308 WARN org.apache.hadoop.metrics2.impl.MetricsSystemImpl: Source name ugi already exists!
2013-04-28 11:10:42,811 INFO org.apache.hadoop.ipc.Client: Retrying connect to server: 192.168.1.8/192.168.1.8:7000. Already tried 0 time(s).
2013-04-28 11:10:43,811 INFO org.apache.hadoop.ipc.Client: Retrying connect to server: 192.168.1.8/192.168.1.8:7000. Already tried 1 time(s).
2013-04-28 11:10:44,813 INFO org.apache.hadoop.ipc.Client: Retrying connect to server: 192.168.1.8/192.168.1.8:7000. Already tried 2 time(s).
2013-04-28 11:10:45,814 INFO org.apache.hadoop.ipc.Client: Retrying connect to server: 192.168.1.8/192.168.1.8:7000. Already tried 3 time(s).
2013-04-28 11:10:46,814 INFO org.apache.hadoop.ipc.Client: Retrying connect to server: 192.168.1.8/192.168.1.8:7000. Already tried 4 time(s).
2013-04-28 11:10:47,814 INFO org.apache.hadoop.ipc.Client: Retrying connect to server: 192.168.1.8/192.168.1.8:7000. Already tried 5 time(s).
2013-04-28 11:10:48,815 INFO org.apache.hadoop.ipc.Client: Retrying connect to server: 192.168.1.8/192.168.1.8:7000. Already tried 6 time(s).
2013-04-28 11:10:49,815 INFO org.apache.hadoop.ipc.Client: Retrying connect to server: 192.168.1.8/192.168.1.8:7000. Already tried 7 time(s).
2013-04-28 11:10:50,816 INFO org.apache.hadoop.ipc.Client: Retrying connect to server: 192.168.1.8/192.168.1.8:7000. Already tried 8 time(s).
2013-04-28 11:10:51,818 INFO org.apache.hadoop.ipc.Client: Retrying connect to server: 192.168.1.8/192.168.1.8:7000. Already tried 9 time(s).
2013-04-28 11:10:51,822 INFO org.apache.hadoop.ipc.RPC: Server at 192.168.1.8/192.168.1.8:7000 not available yet, Zzzzz...
2013-04-28 11:10:53,824 INFO org.apache.hadoop.ipc.Client: Retrying connect to server: 192.168.1.8/192.168.1.8:7000. Already tried 0 time(s).
2013-04-28 11:10:54,825 INFO org.apache.hadoop.ipc.Client: Retrying connect to server: 192.168.1.8/192.168.1.8:7000. Already tried 1 time(s).
2013-04-28 11:10:55,826 INFO org.apache.hadoop.ipc.Client: Retrying connect to server: 192.168.1.8/192.168.1.8:7000. Already tried 2 time(s).
2013-04-28 11:10:56,828 INFO org.apache.hadoop.ipc.Client: Retrying connect to server: 192.168.1.8/192.168.1.8:7000. Already tried 3 time(s).
2013-04-28 11:10:57,828 INFO org.apache.hadoop.ipc.Client: Retrying connect to server: 192.168.1.8/192.168.1.8:7000. Already tried 4 time(s).
2013-04-28 11:10:58,829 INFO org.apache.hadoop.ipc.Client: Retrying connect to server: 192.168.1.8/192.168.1.8:7000. Already tried 5 time(s).
2013-04-28 11:10:59,829 INFO org.apache.hadoop.ipc.Client: Retrying connect to server: 192.168.1.8/192.168.1.8:7000. Already tried 6 time(s).
2013-04-28 11:11:00,830 INFO org.apache.hadoop.ipc.Client: Retrying connect to server: 192.168.1.8/192.168.1.8:7000. Already tried 7 time(s).
2013-04-28 11:11:01,831 INFO org.apache.hadoop.ipc.Client: Retrying connect to server: 192.168.1.8/192.168.1.8:7000. Already tried 8 time(s).
2013-04-28 11:11:02,831 INFO org.apache.hadoop.ipc.Client: Retrying connect to server: 192.168.1.8/192.168.1.8:7000. Already tried 9 time(s).
2013-04-28 11:11:02,833 INFO org.apache.hadoop.ipc.RPC: Server at 192.168.1.8/192.168.1.8:7000 not available yet, Zzzzz...
2013-04-28 11:11:04,834 INFO org.apache.hadoop.ipc.Client: Retrying connect to server: 192.168.1.8/192.168.1.8:7000. Already tried 0 time(s).
2013-04-28 11:11:05,834 INFO org.apache.hadoop.ipc.Client: Retrying connect to server: 192.168.1.8/192.168.1.8:7000. Already tried 1 time(s).
2013-04-28 11:11:06,835 INFO org.apache.hadoop.ipc.Client: Retrying connect to server: 192.168.1.8/192.168.1.8:7000. Already tried 2 time(s).
2013-04-28 11:11:07,836 INFO org.apache.hadoop.ipc.Client: Retrying connect to server: 192.168.1.8/192.168.1.8:7000. Already tried 3 time(s).
2013-04-28 11:11:08,837 INFO org.apache.hadoop.ipc.Client: Retrying connect to server: 192.168.1.8/192.168.1.8:7000. Already tried 4 time(s).
2013-04-28 11:11:09,837 INFO org.apache.hadoop.ipc.Client: Retrying connect to server: 192.168.1.8/192.168.1.8:7000. Already tried 5 time(s).
2013-04-28 11:11:40,381 ERROR org.apache.hadoop.hdfs.server.datanode.DataNode: java.io.IOException: Incompatible namespaceIDs in /tmp/hadoop-hadoop/dfs/data: namenode namespaceID = 454531810; datanode namespaceID = 345408440
  at org.apache.hadoop.hdfs.server.datanode.DataStorage.doTransition(DataStorage.java:232)
  at org.apache.hadoop.hdfs.server.datanode.DataStorage.recoverTransitionRead(DataStorage.java:147)
  at org.apache.hadoop.hdfs.server.datanode.DataNode.startDataNode(DataNode.java:385)
  at org.apache.hadoop.hdfs.server.datanode.DataNode.<init>(DataNode.java:299)
  at org.apache.hadoop.hdfs.server.datanode.DataNode.makeInstance(DataNode.java:1582)
  at org.apache.hadoop.hdfs.server.datanode.DataNode.instantiateDataNode(DataNode.java:1521)
  at org.apache.hadoop.hdfs.server.datanode.DataNode.createDataNode(DataNode.java:1539)
  at org.apache.hadoop.hdfs.server.datanode.DataNode.secureMain(DataNode.java:1665)
  at org.apache.hadoop.hdfs.server.datanode.DataNode.main(DataNode.java:1682)

2013-04-28 11:11:40,383 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: SHUTDOWN_MSG:
/************************************************************
SHUTDOWN_MSG: Shutting down DataNode at myhost2/192.168.1.10
************************************************************/
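
My script re-formats the namenode on every run, and as far as I understand, formatting assigns a new namespaceID while the datanodes keep the old one in their storage directory, which would explain the "Incompatible namespaceIDs" message above. If that is the cause, would a cleanup like the following be correct (a sketch assuming the /tmp/hadoop-hadoop/dfs/data directory shown in the log; it deletes the blocks stored on each datanode)?

#!/bin/bash
# Sketch: run on each datanode after stopping the cluster from the
# master (stop-all.sh). WARNING: this erases the datanode's block
# data, so it is only safe when the HDFS contents are disposable,
# e.g. right after the namenode has been re-formatted anyway.
rm -rf /tmp/hadoop-hadoop/dfs/data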
Please help me eliminate this copying error.
