
gfsck failed - Remote I/O error when I access data in volume daily.db

Question asked by milagro on Jan 7, 2016
Latest reply on Mar 27, 2016 by mufeed

I got a "Remote I/O error" when accessing data in volume daily.db, then tried to run gfsck to repair it, but gfsck failed as shown below:


$/opt/mapr/bin/gfsck -d rwvolume=daily.db
Starting GlobalFsck:
  clear-mode            = false
  debug-mode            = true
  dbcheck-mode          = false
  repair-mode           = false
  assume-yes-mode       = false
  cluster               =
  rw-volume-name        = erose.db
  snapshot-name         = null
  snapshot-id           = 0
  user-id               = 0
  group-id              = 0


  get volume properties ...
    rwVolumeName = erose.db (volumeId = 127951978, rootContainerId = 60899, isMirror = false)


  put volume erose.db in global-fsck mode ...


  get snapshot list for volume erose.db ...


  starting phase one (get containers) for volume erose.db(127951978) ...
    container 60899 (latestEpoch=9, fixedByFsck=false)
    container 86037 (latestEpoch=3, fixedByFsck=false)
java.lang.Exception: SnapChainList RPC failed 22
        at com.mapr.fs.globalfsck.PhaseOne.SnapChainList(
        at com.mapr.fs.globalfsck.PhaseOne.Run(
        at com.mapr.fs.globalfsck.GlobalFsck.Run(
        at com.mapr.fs.globalfsck.GlobalFsckClient.main(
  failed phase one
java.lang.Exception: PhaseOne failed with status 22
        at com.mapr.fs.globalfsck.GlobalFsck.Run(
        at com.mapr.fs.globalfsck.GlobalFsckClient.main(
  remove volume erose.db from global-fsck mode (ret = 22) ...
GlobalFsck failed (error 22)


A local fsck shows all SPs look good; there are no INVALID containers, only 5 containers in RESYNC state.
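For anyone hitting the same thing: the header of the gfsck output above shows repair-mode = false, so that run was check-only. One hedged next step, assuming a standard MapR install at the default paths and assuming the 5 RESYNC containers have finished resyncing first (gfsck can fail with EINVAL-style errors while containers are still resyncing, and status 22 is EINVAL on Linux), would be to inspect the last container phase one printed before the RPC failed (86037 in the log above) with `maprcli dump containerinfo`, and only then re-run gfsck with its `-r` repair flag. The sketch below only *prints* the commands rather than executing them, since they must be run on a cluster node as the cluster admin:

```shell
#!/bin/sh
# Sketch of a diagnostic sequence for the gfsck "error 22" failure above.
# Container id and volume name are taken from the original post; run the
# printed commands on a MapR cluster node as the cluster admin user.

CID=86037   # last container gfsck listed before the SnapChainList RPC failed

# 1. Inspect the suspect container's epoch/replica state.
DUMP_CMD="maprcli dump containerinfo ids:${CID} -json"

# 2. Once the cause is understood (and any RESYNC has completed),
#    re-run gfsck in repair mode instead of check-only mode.
REPAIR_CMD="/opt/mapr/bin/gfsck -r rwvolume=daily.db"

echo "$DUMP_CMD"
echo "$REPAIR_CMD"
```

This is only a sketch of the usual check-then-repair order; it is not a confirmed fix for the SnapChainList RPC failure itself.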


Can you please suggest a solution? Thanks!