When NFS mounts on ESXi go bad

Every once in a while you may reboot your NAS without rebooting the ESXi server. In my experience, you're in for some pain. Previously I found that you had to unmount the NFS share, remount it, remove all VMs, and then re-add each of them.

(Screenshot: all gone)


So how do you fix a datastore stuck as (inactive) with its VMs showing as (inaccessible)?


~ # esxcfg-nas -l
<volumeName> is /volume1/ESX from <IP> unmounted unavailable

~ # esxcfg-nas -d <volumeName>
NAS volume <volumeName> deleted.

~ # esxcfg-nas -a -o <IP> -s /volume1/ESX <volumeName>
Connecting to NAS volume: <volumeName>
<volumeName> created and connected.

After doing this, all of the (inaccessible) boxes should start popping back up under their real names.  It *may* require a reboot.  Older ESXi versions wouldn't re-add them automatically, and doing it one at a time was quite a pain.
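If several datastores go stale at once, the three commands above can be wrapped in a small loop. This is only a sketch: it assumes the `esxcfg-nas -l` output format shown above (`<name> is <share> from <host> unmounted unavailable`), and all volume, host, and share names here are placeholders, not values from my setup.

```shell
#!/bin/sh
# Sketch: remount every NAS volume that esxcfg-nas reports as dead.
# Assumes the listing format shown above:
#   <name> is <share> from <host> unmounted unavailable

# True if an esxcfg-nas -l line describes a dead mount.
is_unmounted() {
    case "$1" in
        *"unmounted unavailable"*) return 0 ;;
        *) return 1 ;;
    esac
}

# Pull the fields we need back out of a listing line.
vol_name()  { echo "$1" | awk '{print $1}'; }
vol_share() { echo "$1" | awk '{print $3}'; }
vol_host()  { echo "$1" | awk '{print $5}'; }

# Delete and re-add one volume -- the manual steps above, scripted.
remount_volume() {
    esxcfg-nas -d "$1"
    esxcfg-nas -a -o "$2" -s "$3" "$1"
}

# Only loop if we are actually on a host that has esxcfg-nas.
if command -v esxcfg-nas >/dev/null 2>&1; then
    esxcfg-nas -l | while read -r line; do
        if is_unmounted "$line"; then
            remount_volume "$(vol_name "$line")" \
                           "$(vol_host "$line")" \
                           "$(vol_share "$line")"
        fi
    done
fi
```

Worth noting: deleting and re-adding by the same name is what makes the VMs reappear without re-registering them, since the datastore comes back under the identity vCenter/ESXi already knows.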
