
How to delete directories from corrupt volume

Question asked by terryhealy on Jun 14, 2017
Latest reply on Jun 15, 2017 by mufeed

Version 5.2.0. Multiple system failures resulted in multiple non-recoverable disk errors, and fsck fails. These errors have apparently corrupted the only copies of some files within the volume. We have accepted the data loss, and the volume still contains readable data directories. When we attempt to delete some of the corrupt files, we get this error:

thealy@t2:~$ hadoop fs -rm -r /pcaps/5/6
17/06/14 13:01:59 INFO Configuration.deprecation: io.bytes.per.checksum is deprecated. Instead, use dfs.bytes-per-checksum
17/06/14 13:01:59 INFO fs.TrashPolicyDefault: Namenode trash configuration: Deletion interval = 0 minutes, Emptier interval = 0 minutes.
2017-06-14 13:02:00,0295 ERROR Client fs/client/fileclient/cc/ Thread: 30050 Rmdirs failed for dir 6,Readdirplus rpc error No data available(61) fid 670053.10059.613190
2017-06-14 13:02:00,0295 ERROR JniCommon fs/client/fileclient/cc/ Thread: 30050 remove: File /pcaps/5/6, rpc error, No data available(61)
17/06/14 13:02:00 ERROR fs.MapRFileSystem: Failed to delete path maprfs:///pcaps/5/6, error: No data available (61)
rm: `/pcaps/5/6': Input/output error


How can we "force" the delete to get rid of the corrupt files and directories?
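For reference, these are the workarounds we are considering; this is a sketch, not a confirmed fix. The `-skipTrash` flag is a standard `hadoop fs -rm` option. The `maprcli` commands assume the corrupt data lives in its own dedicated volume (hypothetically named `pcaps`, mounted at `/pcaps`) and that losing the entire volume is acceptable.

```shell
# 1. Retry the delete while bypassing trash (standard hadoop fs flag);
#    trash handling can sometimes obscure the underlying failure.
hadoop fs -rm -r -skipTrash /pcaps/5/6

# 2. If per-path deletes keep failing with rpc errors, a blunter option
#    is to drop and recreate the containing volume. ASSUMPTION: the data
#    is in a dedicated volume named "pcaps" mounted at /pcaps, and total
#    loss of that volume is acceptable.
maprcli volume remove -name pcaps
maprcli volume create -name pcaps -path /pcaps
```

The volume remove/create route destroys everything in the volume, not just the corrupt subtree, so it only applies after any readable directories have been copied out first.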