
MapRFS Disks on node after disk failure

Question asked by chriscurtin on Aug 1, 2012
Latest reply on Aug 2, 2012 by chriscurtin
Hi,

We had a disk failure on one of our MapR nodes (day 2 in production, too ;) ). The cluster detected it, sent an alert, and I'm guessing re-replicated the data elsewhere, since the CLDB shows 100% replication at 3 copies.

We removed the disk from that node's available disks (we're waiting on a replacement from the vendor) and restarted the services on the node. Everything except that one disk is 'green' in the UI (there are 4 other disks on that node).
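
For what it's worth, I believe the command-line equivalent of what we did is roughly the following (the hostname and device are placeholders, and I'm going from memory on the maprcli syntax, so it may need checking against the docs):

    maprcli disk remove -host <node-hostname> -disks /dev/sdX
    service mapr-warden restart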

The issue is that nothing is being stored on that node now. We load 20+ GB of data nightly, and this node still shows 0% usage on all of its disks. Mappers and reducers are running on the node.
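
In case it helps with diagnosis, the per-disk usage on that node can also be pulled from the command line with something like this (again, hostname is a placeholder and the syntax is from memory):

    maprcli disk list -host <node-hostname>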

No errors in mfs.log. No errors in other logs since we brought the node back up.

Thoughts on why nothing is going to this node?

Thanks,

Chris
