We were adding a couple of new nodes to the cluster. One of them had bad hardware, so it was live for a day and then failed; let's call it node 10. While we were waiting for replacement parts, we had a CLDB failure on node 1 (the node it was running on went down, and long story short, the CLDB had to be migrated to a new node). The CLDB was migrated to node 2 and life was good again. We then repaired the bad node, node 10, and brought it back in, but we forgot to run configure.sh on it with the new CLDB IP. Oddly enough, the CLDB running in its new home, node 2, decided to shut down, obviously not liking node 10. That seems like odd behavior: one "bad" worker node shouldn't be able to deep six the entire cluster. Or am I missing something?
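
For context, the step we skipped on node 10 would have been roughly the following (a sketch only; the hostnames and cluster name are placeholders, and I'm going from memory on the exact flags):

# re-point node 10 at the current CLDB and ZooKeeper nodes, then restart warden
/opt/mapr/server/configure.sh -C node2 -Z node3,node4,node5 -N our.cluster
service mapr-warden restart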