The following example displays volumes on the “vs1” SVM and the “node0” controller:
cluster::> volume show -vserver vs1 -node node0
Vserver   Volume       Aggregate    State      Type Size       Available  Used%
--------- ------------ ------------ ---------- ---- ---------- ---------- -----
vs1       clone        aggr1        online     RW   40MB       37.87MB    5%
vs1       vol1         aggr1        online     RW   40MB       37.87MB    5%
vs1       vs1root      aggr1        online     RW   20MB       18.88MB    5%
3 entries were displayed.
Step 5. Determine an aggregate to which you can move a given volume: volume move target-aggr show -vserver svm_name -volume vol_name
Example
The following example shows that the “user_max” volume on the “vs2” SVM can be moved to any
of the listed aggregates:
cluster::> volume move target-aggr show -vserver vs2 -volume user_max
Aggregate Name Available Size Storage Type
-------------- -------------- ------------
aggr2          467.9GB        FCAL
node12a_aggr3  10.34GB        FCAL
node12a_aggr2  10.36GB        FCAL
node12a_aggr1  10.36GB        FCAL
node12a_aggr4  10.36GB        FCAL
5 entries were displayed.
Step 6. Run a validation check on each volume that you want to move to verify that it can be moved to the specified aggregate: volume move start -vserver svm_name -volume volume_name -destination-aggregate destination_aggregate_name -perform-validation-only true
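Example
The following example (reusing the "vs2" SVM, "user_max" volume, and "node12a_aggr3" aggregate from the Step 5 example) validates the move without performing it:
cluster::> volume move start -vserver vs2 -volume user_max -destination-aggregate node12a_aggr3 -perform-validation-only true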
Step 7. Move the volumes one at a time (advanced privilege level): volume move start -vserver svm_name -volume vol_name -destination-aggregate destination_aggr_name -cutover-window integer
You cannot move the controller root volume (vol0). Other volumes, including SVM root volumes,
can be moved.
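Example
The following example (values reused from the Step 5 example; the 45-second cutover window is illustrative) raises the privilege level and then starts the move:
cluster::> set -privilege advanced
cluster::*> volume move start -vserver vs2 -volume user_max -destination-aggregate node12a_aggr3 -cutover-window 45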
Step 8. Display the outcome of the volume move operation to verify that the volumes were moved successfully: volume move show -vserver svm_name -volume vol_name
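Example
The following example (values reused from the Step 5 example) displays the status of the move operation; the output columns vary by software version:
cluster::> volume move show -vserver vs2 -volume user_max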
Step 9. If the volume move operation does not complete the final phase after multiple attempts, force the move to finish: volume move trigger-cutover -vserver svm_name -volume vol_name -force true
Forcing the volume move operation to finish can disrupt client access to the volume that you are moving.
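Example
The following example (values reused from the Step 5 example) forces the cutover to finish:
cluster::> volume move trigger-cutover -vserver vs2 -volume user_max -force true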
Step 10. Verify that the volumes were moved successfully to the specified SVM and are in the correct
aggregate: volume show -vserver svm_name
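Example
The following example (SVM name reused from the Step 5 example) lists every volume on the SVM so that you can confirm the aggregate of each moved volume:
cluster::> volume show -vserver vs2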
Moving non-SAN data LIFs and cluster management LIFs to the new controllers
After you have moved the volumes from the original controllers, you need to migrate the non-SAN data LIFs
and cluster-management LIFs from the original controllers to the new controllers.
You cannot migrate a LIF that is used for copy-offload operations with VMware vStorage APIs for Array
Integration (VAAI).
Step 1. From the controller where the cluster LIF is hosted, change the home ports for the non-SAN data LIFs from the original controllers to the new controllers: network interface modify -vserver vserver_name -lif lif_name -home-node new_node_name -home-port {netport|ifgrp}
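Example
The following example (the "datalif1" LIF name, "new_node1" node name, and "e0c" port are illustrative placeholders) changes the home port of a data LIF:
cluster::> network interface modify -vserver vs2 -lif datalif1 -home-node new_node1 -home-port e0c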
Step 2. Take one of the following actions: