Changing the IP of your NFS Datastore

I ran into some issues today trying to change the IP address of my NFS Datastore. Our filer has a nice 10GbE interface that I’m not using, so my thought was to put my hosts into Maintenance Mode one at a time, remove the old datastore and connect the new one…which is essentially the same exact mount, just going over a different interface. Boy, was I wrong…

VMotion uses the datastore location path, not the datastore name, to do its migrations. Basically, VC thought this was a completely new datastore and even appended (1) after it. I verified this by looking at the datastores inventory: both were listed, each with a different path:

datastore = netfs://
datastore (1) = netfs://
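
For reference, the mount swap itself is just a remove and re-add from the ESX service console. The 10GbE address, export and datastore label below are hypothetical, and since esxcfg-nas only exists on an ESX host, this sketch just prints the command lines rather than running them:

```shell
#!/bin/sh
# Hypothetical values -- substitute your filer's 10GbE address, exported
# volume and datastore label.
NEW_IP=10.0.0.50
SHARE=/vol/vmware
LABEL=nfs_datastore

echo "esxcfg-nas -l"                              # list NFS mounts and their host/share paths
echo "esxcfg-nas -d $LABEL"                       # remove the old 1GbE mount
echo "esxcfg-nas -a -o $NEW_IP -s $SHARE $LABEL"  # re-add the same share over 10GbE
```

Even though the share is identical, the new host address gives the mount a new netfs:// path, which is why VC treats it as a brand-new datastore.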

So, it looks like I’ll be forced to do a SVMotion on all my data to be able to use this new 10GbE interface.

I also have a ticket open with VMware, just to make sure. I’ll let you know if I find out different.

Update: Well, if you’re looking to avoid any downtime, you will be forced to use SVMotion to migrate your data to a new datastore. There are some tricks you can use depending on your storage vendor; for example, with NetApp you can use FlexClones and then write a script to unregister and re-register your VMX files. For me, I just took a long weekend and moved everything over. I must tell you that SVMotion for NFS is NOT supported (yet), but we all know it works :)

Created on September 13, 2008 by Rick Scherer

Posted under ESX 3.5 Tips, ESXi 3.5 Tips, NetApp, Storage, VMware.


10 Comments so far

  1. skiser
    11:30 am on October 8th, 2008

    I ran into the same problem. We originally used iSCSI for our VMware infrastructure and had 2 NICs on the filer dedicated to it. But when we moved to NFS, we ended up putting the mounts over the filer’s public NIC team due to a limitation on addresses from our Network team on the iSCSI VLAN. We have since changed the IP that was for the filer’s iSCSI NIC team to be on a new non-routed NFS VLAN.

    I too planned on placing a host in maintenance mode, changing the VMkernel IP and hooking back up to the same datastores. I also opened a ticket with VMware and they told me to use Storage VMotion to move everything to a new NFS volume. Only I have a few terabytes of VMs and am looking for a better way to do this than to Storage VMotion all my VMs.

    If you hear of a better way, please let me know.

  2. Rick Scherer
    11:12 pm on October 13th, 2008

    There are really only two ways you can simplify this, but both would require the VMs in the datastore you want to move to be powered down. Bottom line: if you’re changing the location of your datastore (IP, physical, whatever), you’re going to have to mount a new datastore.

    First option is to utilize NetApp FlexClones (license required). You would simply power off all VMs on the source datastore, flex-clone to a new volume and mount that new volume on your ESX hosts. You’ll need to unregister and re-register the VMs to point to the new datastore. You can script this very easily.
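
    The unregister/re-register pass can be sketched as a small service-console script. The datastore paths below are hypothetical, and since vmware-cmd only exists on an ESX host, the sketch just echoes the command lines it would run, demoed against a throwaway directory standing in for the old datastore:

```shell
#!/bin/sh
# Print the vmware-cmd lines needed to move every VM registration from
# one datastore path to its flex-cloned copy. On a real ESX host you
# would drop the "echo" and run the commands directly.
reregister() {
    old_ds=$1    # e.g. /vmfs/volumes/old_datastore
    new_ds=$2    # e.g. /vmfs/volumes/cloned_datastore
    for vmx in "$old_ds"/*/*.vmx; do
        [ -e "$vmx" ] || continue        # skip if the glob matched nothing
        rel=${vmx#"$old_ds"/}            # e.g. myvm/myvm.vmx
        echo "vmware-cmd -s unregister $vmx"
        echo "vmware-cmd -s register $new_ds/$rel"
    done
}

# Dry-run demo against a temporary directory with one fake VM folder:
demo=$(mktemp -d)
mkdir -p "$demo/myvm" && touch "$demo/myvm/myvm.vmx"
reregister "$demo" /vmfs/volumes/cloned_datastore
rm -rf "$demo"
```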

    Second would be to use SnapRestore (license required) to copy the VM folders to the new destination…this is long and painful, and you might as well just use SVMotion instead.

    So I suppose if you have the flex clone license that would be the easiest and fastest way to migrate all your data to a new datastore. Perhaps I’ll write up something about this.

  3. pancamo
    5:43 pm on October 14th, 2008

    Yeah, this is a pain! It also causes major issues with clustered systems like Isilon that use DNS to balance storage… Hopefully VMware will find a solution to this someday…

    If your NetApp is clustered, you should be able to fail over your NetApp to the second head, change the 10GbE NIC to the same IP as your 1GbE NIC and fail back, all without bringing down any VMs. But you’d better talk to NetApp about this… I’m not sure if failover after changing to the 10GbE NIC is supported by NetApp…

    We have an R200 that we SnapMirror all of our VMware volumes to. If we have a complete data corruption issue on our 3070c, we can fail over to the R200 with an IP change.

    Virtual Center keys off IP address, so you can’t change IP address without confusing VC.

  4. Rick Scherer
    4:11 pm on October 15th, 2008

    Sounds like your plan would work, Dan, but the big issue is keeping the same IP. I suppose if you’re running over separate VLANs you can do this by creating a vif alias. I have two more datastores to move over to 10GbE, so perhaps I will give this a try.

    Also, good way to put that old R200 to work! Perfect for a disaster situation…too bad you don’t get A-SIS on that though!

  5. chafner
    9:53 am on November 25th, 2008

    Did anyone try the failover method mentioned by pancamo above?

    I’m looking at the same 10gbit migration and it sounds quicker than the alternatives.

  6. Rick Scherer
    11:05 am on November 25th, 2008

    pancamo’s method will work, but this is dependent on the fact that your 10GbE network would be using the same IP address space as your 1GbE network. My issue is that our 10GbE and 1GbE interfaces had different IP addresses.

  7. Leon Funnell
    3:05 am on July 31st, 2009

    I have successfully set up NFS datastores using hostnames registered in /etc/hosts. I did this in anticipation of changing our filers from 1GbE to 10GbE when the SFPs for my Nexus 5010 arrive. What I am not sure about is this: if I put my ESX host in maintenance mode and change the entry in the hosts file to point to the 10GbE NIC’s IP address, will that update the connection to the NFS datastore? Secondly, is using hostname mappings for an NFS datastore actually supported? ESX servers are 3.5U4 and filers are v6040 7.3.11L1.

  8. Rick Scherer
    9:04 am on August 1st, 2009

    Unfortunately, Leon, it will require either some downtime or an SVMotion to another datastore. vCenter assigns a UUID to the datastore, and it is based on the netfs (IP) address of the filer. Unless your 10GbE connection uses the same IP address as your 1GbE connection, you’ll either need to:

    1) Present a new datastore and use SVMotion for a zero-downtime migration, or
    2) Shut down each VM, then unregister and re-register it within vCenter.
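
    For option 1, the VI Remote CLI ships an svmotion command that can drive the move from a management box. The vCenter URL, datacenter, VM path and datastore names below are all hypothetical, and the sketch just prints the command line it would issue:

```shell
#!/bin/sh
# Hypothetical environment details -- substitute your own.
VC_URL="https://vcenter.example.com/sdk"
DC="DC1"
VM_SPEC="[old_datastore] myvm/myvm.vmx:new_datastore"

# The VM spec names the current datastore + VMX path, then the target
# datastore after the colon. Printed here, not executed, since svmotion
# only exists where the Remote CLI is installed.
echo "svmotion --url=$VC_URL --datacenter=$DC --vm='$VM_SPEC'"
```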

  9. that1guynick
    3:30 pm on September 24th, 2009

    Ran into this last night myself. I had all VMs powered off and unregistered.

    Could not for the life of me get the datastore name to come back without the (1) appended to it.

    Come to find out, I had a test Linux VM using the same datastore name in a test cluster that I had forgotten about. So I’ve got another round of SVMotions in my near future.

    We tried the NFS mount with both the IP address and the hostname. Had I found and removed that Linux test VM as well, it would have gone fine.


  10. julianwood
    8:10 am on March 18th, 2010

    Another issue we have had is that sometimes the same datastore, on the same storage, with the same IP address gets referenced by two different netfs:// locations: one with the hostname and one with the IP address.

    Still not sure how this happens. We originally had CNAME entries pointing to our filers, but now go directly to the hostname, as we thought this could be the issue. It has now appeared again and we’re not sure what is happening.
    All our (1) datastores which are wrong look like netfs://hostname rather than netfs://IP address.

