
Bug in HA NFS setup using multiple ViPs when starting the secondary network devices?

Question asked by chris_almond on Mar 12, 2012
Latest reply on Jul 18, 2012 by jbubier
I have a small M5 test cluster (4 nodes). Three nodes are running the NFS service. When I start the NFS service on all three nodes, only two of them successfully bring up the secondary interface on eth1:mapr0. I think the key to this problem is understanding what makes the **nfsmon_if_script.pl** script decide that the interface on the failing node is already "configured locally". I poked around in the networking config on that node (running RHEL 6.2), but I see nothing different between it and the other, good nodes.
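For context, a plausible way such a script could conclude an interface is "configured locally" is by scanning the existing interface list for the alias name before bringing it up. This is only a sketch of that kind of check (the alias name `eth1:mapr0` comes from the logs below; the parsing logic itself is my assumption, not MapR's actual implementation):

```shell
# Hypothetical "already configured?" check, run here against canned
# ifconfig-style output instead of the live system.
sample_output="eth1      Link encap:Ethernet  HWaddr 00:1e:c9:59:d3:b1
eth1:mapr0 Link encap:Ethernet  HWaddr 00:1e:c9:59:d3:b1"

# If a line already starts with the alias name, treat it as configured.
if echo "$sample_output" | grep -q '^eth1:mapr0'; then
  echo "configured locally"
else
  echo "not configured"
fi
```

If the script uses a check along these lines, anything that makes the alias appear to exist on the failing node (even though `ip addr` shows it down) would explain the behavior.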

Here is the ViP setup:

**maprcli virtualip list** output:
> 00:1e:c9:59:d3:b3  00:1e:c9:59:d3:b3
> 00:1e:c9:44:20:f3  00:1e:c9:44:20:f3
> 00:1e:c9:59:f7:93  00:1e:c9:59:f7:93

**nfsmon.log output from one of the successful nodes:**
> Mon Mar 12 15:08:43 2012 [INFO] starting /opt/mapr/server/nfsmon_if, ARGS = -s up -d eth1 -i -n
> **Configuring on eth1:mapr0**
> Running cmd: /sbin/ifconfig eth1:mapr0 netmask up
> 2012-03-12 15:08:49,2800 INFO nfsmon[1780] fs/nfsd/ Bringing up vIp: cmd=/opt/mapr/server/nfsmon_if -s up -d eth1 -i -n, ret=0

**nfsmon.log output for the node which fails:**
> Mon Mar 12 15:08:40 2012 [INFO] starting /opt/mapr/server/nfsmon_if, ARGS = -s up -d eth1 -i -n
> ** configured locally**
> 2012-03-12 15:08:40,9924 INFO nfsmon[2414] fs/nfsd/ Bringing up vIp: cmd=/opt/mapr/server/nfsmon_if -s up -d eth1 -i -n, ret=0

**ip addr output** on the failing node after startup (showing no secondary interface activated for eth1):
> 2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP qlen 1000
>     link/ether 00:1e:c9:59:d3:b1 brd ff:ff:ff:ff:ff:ff
>     inet brd scope global eth0
>     inet6 fe80::21e:c9ff:fe59:d3b1/64 scope link
>        valid_lft forever preferred_lft forever
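One thing that might be worth comparing between the failing node and a good one: on RHEL 6, per-interface configuration lives in `/etc/sysconfig/network-scripts/`, and a leftover `ifcfg-eth1:mapr0` file from an earlier run could plausibly make the alias look "configured locally" even though it is not up. That connection to nfsmon's check is a guess on my part, not confirmed behavior; this sketch just simulates the stale-file check against a scratch directory:

```shell
# Simulate checking for a stale alias config file (run against a temp
# directory, not the real /etc/sysconfig/network-scripts/).
scripts_dir=$(mktemp -d)
touch "$scripts_dir/ifcfg-eth1" "$scripts_dir/ifcfg-eth1:mapr0"  # pretend a stale alias file exists

# The actual check: does an ifcfg file for the alias already exist?
if ls "$scripts_dir" | grep -q '^ifcfg-eth1:mapr0$'; then
  echo "stale alias config found"
fi
rm -rf "$scripts_dir"
```

On the real node the equivalent would be `ls /etc/sysconfig/network-scripts/ | grep eth1`; if a stale `ifcfg-eth1:mapr0` shows up only on the failing node, removing it and restarting the NFS service would be a cheap experiment.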