
Problem when removing failure disk

Question asked by valentinp on Jul 3, 2012
Latest reply on Oct 22, 2012 by nabeel
I'm experimenting with a 2-node configuration and I've run into a problem with disks.
At first I tried to use storage files as the data storage.
I created a file as shown in the example:

execute dd if=/dev/zero of=/root/storagefile bs=1G count=20

and then added the storage file's entry to disks.txt. Then I ran disksetup.
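For reference, the whole setup sequence on each node looked roughly like this (the location of `disksetup` and the name/path of the disks list file are assumptions based on a default MapR layout, not taken from my actual session):

```shell
# 1. Create a 20 GB backing file to stand in for a real disk
dd if=/dev/zero of=/root/storagefile bs=1G count=20

# 2. List the storage file as a "disk" (file name/path assumed)
echo /root/storagefile >> /tmp/disks.txt

# 3. Format and register everything in the list with MapR-FS
#    (install path assumed to be the default /opt/mapr)
/opt/mapr/server/disksetup -F /tmp/disks.txt
```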

I performed these operations on both nodes. I can't say exactly what went wrong, but now I have an unknown error with the storage file on the second node.

There is a failure message in faileddisk.log. The message recommends removing the disk "/dev/_data_storagefile", but "/dev/_data_storagefile" doesn't exist, so I can't remove it.

Moreover, the output of mrconfig disk list doesn't contain this disk, but it is still listed in conf/disktab.
I tried removing it from conf/disktab, but that didn't help.
I suspect there is another config file or something else that still holds information about that disk.
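What I was attempting, roughly (command paths and the `disk remove` subcommand are assumptions from a default MapR install, shown only to make the failure concrete):

```shell
# Check which disks MapR-FS currently knows about
# (the failed entry does not appear here)
/opt/mapr/server/mrconfig disk list

# Try to drop the entry the failed-disk log names;
# this fails because the device node no longer exists
/opt/mapr/server/mrconfig disk remove /dev/_data_storagefile
```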

Another strange thing: I had the same disk on the first node, but there it is offline (why it is offline on the first node and online on the second, I don't know), and there are no errors on that node.

Thanks a lot.