How to Decommission a VMware Host from a Dell (EMC) VxRail vSAN Cluster After Migrating to Azure
By Behan Venter, CTO, Hudson Cloud Systems

Migrating workloads to Azure is a major milestone, but what comes next is just as important. Once your VMs are safely in the cloud, it’s time to clean up your on-prem environment. This post walks through the full process of decommissioning a VMware host from a VxRail vSAN cluster after completing a cloud migration.
We recently completed this task in a production environment, and we’ve broken it down into clear, practical steps you can follow.
⚠️ Warning ⚠️: This procedure is a reference example only. Following it will modify your system and could lead to an outage or data loss if it is executed incorrectly or if your system is not compatible with it. Before proceeding, make sure you fully understand each command, how it applies to your system, and its potential impact. Have a rollback plan in place in case anything unexpected occurs. If you have questions about this procedure, reach out to me via our Contact Page.
Disclaimer: The procedures and information provided in this blog post are intended for educational and informational purposes only. Hudson Cloud Systems is not responsible for any system outages, data loss, or other issues resulting from the application of these instructions. Always consult with a qualified systems administrator and thoroughly review your environment before making changes to production infrastructure.
Prerequisites and notes
Make sure you’ve got the following in place before starting:
The host has no running VMs or templates.
You have admin access to both vCenter and the ESXi host.
The host has two active uplinks.
SSH is enabled.
Values used in this example that will differ in your environment:
The host's management IP and VMkernel addresses are on the 192.168.100.0/24 network
The management VLAN ID is 31
The distributed virtual switch is named "EVO:RAIL Distributed Switch"
The VMkernel interfaces are vmk0 through vmk4
Always document and double-check your configuration before starting.
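For reference, here is one way to snapshot the current network configuration from the host's shell before you begin (a minimal sketch; the /tmp file names are just an example, so save the output wherever suits you):
# record the current VMkernel interfaces and their IPv4 settings
esxcli network ip interface list > /tmp/pre-decom-vmk.txt
esxcli network ip interface ipv4 get >> /tmp/pre-decom-vmk.txt
# record the Distributed Switch uplink and DVPort layout
esxcli network vswitch dvs vmware list > /tmp/pre-decom-dvs.txt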
Step 1: Prepare the Host
Log into vCenter
Confirm vSAN cluster health
Use vMotion to migrate all VMs off the host
Place the host in Maintenance Mode (a CLI alternative is sketched after this list):
Right-click → Enter Maintenance Mode
Choose Full Data Migration
Run the Pre-Check
Confirm the message: "The host can enter maintenance mode. X TB of data will be moved."
Click Enter Maintenance Mode
Data evacuation may take several hours ⏳
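If you prefer to drive this step from the host's shell rather than the vCenter UI, esxcli offers an equivalent (a sketch, assuming a vSAN-enabled ESXi build; confirm the option names on your release before relying on it):
# enter maintenance mode, evacuating all vSAN data from the host
esxcli system maintenanceMode set --enable true --vsanmode evacuateAllData
# confirm the host reports maintenance mode as enabled
esxcli system maintenanceMode get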
Step 2: Remove the Host from the vSAN Cluster
A. SSH to the Host
Check vSAN status: esxcli vsan cluster get
You should see output including (exact field names can vary slightly between ESXi releases):
Local Node Health State: HEALTHY
Maintenance Mode State: ON
Leave the vSAN cluster: esxcli vsan cluster leave
Confirm the host has left: esxcli vsan cluster get
Expected output: vSAN Clustering is not enabled on this host
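As an extra sanity check, the vSAN datastore should no longer be mounted on the host once it has left the cluster:
# the cluster's vSAN datastore should no longer appear in this list
esxcli storage filesystem list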
B. Reconfigure Networking
Add a temporary standard switch:
esxcli network vswitch standard add --vswitch-name=vSwitch-mgmt-temp
Create a port group:
esxcli network vswitch standard portgroup add \
--portgroup-name=VMkernel_temp \
--vswitch-name=vSwitch-mgmt-temp
Tag VLAN 31:
esxcli network vswitch standard portgroup set \
--portgroup-name=VMkernel_temp \
--vlan-id=31
List the Distributed Switch uplinks and their DVPort IDs:
esxcli network vswitch dvs vmware list
Remove vmnic0 from the Distributed Switch, substituting the DVPort ID shown for vmnic0 in the previous output:
esxcfg-vswitch -Q vmnic0 -V <Port ID> "EVO:RAIL Distributed Switch"
Attach vmnic0 to the standard switch:
esxcli network vswitch standard uplink add --uplink-name=vmnic0 --vswitch-name=vSwitch-mgmt-temp
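Before moving on, it's worth verifying the temporary switch (a quick check using the same names created above):
# vmnic0 should be listed under Uplinks, and VMkernel_temp under Portgroups
esxcli network vswitch standard list --vswitch-name=vSwitch-mgmt-temp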
C. Reassign VMkernel Interface and Routes
Next we will perform several actions in one shell command:
Delete vmk2
Create and add a new vmk2 to the temporary port group
Assign IP address details to vmk2 (set the correct IP address in the command!)
Tag vmk2 for Management traffic
Add a default route to the default gateway
Restart VMware services (sometimes needed)
We perform all of these actions as a single one-liner because we lose SSH connectivity the moment vmk2 is removed; since the whole command is submitted to the shell before the first step runs, the remaining steps still execute after the connection drops.
(sh -c 'esxcli network ip interface remove --interface-name=vmk2 && \
esxcli network ip interface add --interface-name=vmk2 --portgroup-name=VMkernel_temp && \
esxcli network ip interface ipv4 set --interface-name=vmk2 --ipv4=192.168.100.xx --netmask=255.255.255.0 --type=static && \
esxcli network ip interface tag add --interface-name=vmk2 --tagname=Management && \
esxcli network ip route ipv4 add --network=default --gateway=192.168.100.1 && \
services.sh restart')
Replace 192.168.100.xx with the host’s actual IP address.
Verify the host is reachable at its management IP (192.168.100.102 in this example):
ping 192.168.100.102
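If the host does not respond, you can inspect the interface and routing table from the DCUI or iDRAC console to see where the one-liner stopped:
# confirm vmk2 carries the expected static IPv4 address
esxcli network ip interface ipv4 get --interface-name=vmk2
# confirm the default route points at the gateway (192.168.100.1 here)
esxcli network ip route ipv4 list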
D. Final Uplink and Interface Cleanup
Find and remove vmnic1 from the Distributed Switch:
esxcli network vswitch dvs vmware list
esxcfg-vswitch -Q vmnic1 -V <Port ID> "EVO:RAIL Distributed Switch"
Remove the VMkernel interfaces that are no longer needed (vmk2 stays, since it now carries management traffic):
esxcli network ip interface remove --interface-name=vmk0
esxcli network ip interface remove --interface-name=vmk3
esxcli network ip interface remove --interface-name=vmk4
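A final check that no stale interfaces remain:
# after cleanup, vmk2 (management) should be the only interface left in our example
esxcli network ip interface list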
Step 3: Remove the Host from Inventory
Open the Networking tab in the vCenter web GUI
Under EVO:RAIL Distributed Switch:
Click ACTIONS > Add and Manage Hosts > Remove Hosts
Add the host (still in Maintenance Mode) to the removal list
Complete the wizard
Then:
Go to Hosts and Clusters
Right-click the host → Remove from Inventory
The host is now cleanly removed from the cluster and vCenter. ✅
Step 4: Power Off the Host
From the iDRAC or remote console:
Select Graceful Shutdown
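If you still have a console or SSH session open, the same shutdown can be issued from the ESXi shell (the host must already be in maintenance mode, and the --reason flag is required):
# gracefully power off the host; the reason string is free-form
esxcli system shutdown poweroff --reason="decommissioned after Azure migration"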
Wrap-Up
This step-by-step process ensures your host is decommissioned without leaving residual configuration behind or destabilizing the remaining cluster. It's a clean, repeatable way to wrap up your on-prem presence after moving workloads to Azure.
Want this checklist as a PDF or automated via script? Let us know; we'd be happy to help.