OpenStack Cinder Volume Migration: Moving Between Backends

In the previous posts I explored how the Cinder scheduler interprets capabilities and extra_specs, and how multiple backends can intentionally share the same volume_backend_name to present a unified storage tier. That abstraction is powerful — but it also introduces an operational question:

How do you move an existing volume between two backends when the scheduler treats them as equivalent?

This article focuses entirely on the operational mechanics and the safest patterns for controlled backend migration.


Why Moving Volumes Is Not Automatic

When multiple pools advertise the same volume_backend_name, the scheduler considers them functionally identical from a placement perspective. New volumes may land on any eligible backend based on filters and weighting.

Existing volumes, however, remain pinned to their original host unless an explicit migration or retype operation occurs. The scheduler does not rebalance or shuffle volumes automatically.

Operationally, that means administrators must target the destination backend directly.


Understanding the Target: Host Strings

Cinder identifies a backend using the host string, not the backend name alone. The format typically looks like:

cinder@backendA#pool0
cinder@backendB#pool0

Even if both backends advertise:

volume_backend_name = gold

they are still separate scheduler destinations.

You can discover valid hosts with:

openstack volume service list

or:

openstack volume backend pool list --long
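The three parts of the host string (service host, backend section name, pool) split apart cleanly with standard shell parameter expansion. A small sketch, using an illustrative host string rather than output from a real deployment:

```shell
# Split a Cinder host string of the form host@backend#pool.
# The value below is illustrative, not from a real deployment.
host_string="cinder@backendB#pool0"

service_host="${host_string%%@*}"   # everything before the '@'
backend="${host_string#*@}"         # drop the service host...
backend="${backend%%#*}"            # ...then drop the pool suffix
pool="${host_string##*#}"           # everything after the last '#'

echo "$service_host / $backend / $pool"
```

This kind of parsing is handy in drain scripts that need to group volumes by backend or pool.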

Method — Direct Migration (Operator-Controlled)

The supported operational method for moving a volume between backends is direct migration. Retype can trigger a migration as a side effect of a volume type change, but it leaves destination selection to the scheduler; there is no retype-based workflow for forcing placement onto a specific host.

Example:

openstack volume migrate \
    --host cinder@backendB#pool0 \
    --force-host-copy \
    --lock-volume \
    <volume-id>

Key points

  • The --host parameter explicitly selects the destination backend.
  • --force-host-copy forces the generic host-based copy path, so migration behaves predictably even when storage-assisted movement is not available between the backends.
  • --lock-volume prevents conflicting operations during migration and keeps the workflow deterministic.

This approach is typically used when:

  • Rebalancing capacity between arrays
  • Draining an array before maintenance
  • Performing backend evacuations
  • Moving volumes across heterogeneous drivers

Because the scheduler treats shared backend names as equivalent, migration is the only reliable way to override placement after a volume already exists.


Storage-Assisted vs Host-Assisted Migration

Migration behavior depends heavily on storage driver capabilities.

Storage-assisted migration

The storage platform performs the data move natively. This is usually faster and minimally disruptive. Most drivers, however, require the volume to be detached from any instance before a storage-assisted move.

It is important to interpret the Cinder feature matrix carefully here. Storage-assisted migration applies where the driver can relocate data through array-native mechanisms. In practice, that means it only occurs when a volume moves between pools or tiers on the same backend array; sharing a volume_backend_name does not imply native movement between independent backends.

Host-assisted migration

When native movement is not available, Cinder falls back to copying the data block by block through the cinder-volume host. Expect longer transfer times and potential performance impact on that host.

Passing --force-host-copy selects this generic path explicitly, even where the driver advertises an optimized one, which keeps behavior consistent across heterogeneous backends.

You can infer which path was used by inspecting the Cinder logs during the operation.
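Before planning a migration, it can help to see what a backend driver actually reports to the scheduler. The capabilities call below is admin-only, and the host string is an example, not a real deployment:

```shell
# Show the capabilities a backend driver reports to the scheduler
# (admin-only; the host string is an example).
openstack volume backend capability show cinder@backendB
```

The output includes driver-reported properties that feed the feature matrix discussion above.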


Common Operational Pitfalls

Same Backend Name ≠ Same Physical Location

Using a shared volume_backend_name creates a logical tier, not a physical cluster. Migration still requires explicit host selection.

Scheduler Weights Do Not Move Existing Volumes

Filters and goodness functions only apply at scheduling time. They do not trigger rebalance operations.

Multiattach and In-Use Volumes

Driver capabilities vary. Some platforms require volumes to be detached before migration. Others — including the Pure Storage driver — allow migration while volumes remain attached. Always validate driver behavior before planning operational moves.


Practical Workflow for Backend Rebalancing

A reliable pattern for operators is:

  1. Identify the backend that needs to be drained or rebalanced.
  2. List the volumes currently on that host.
  3. Run openstack volume migrate with an explicit destination host.
  4. Monitor progress using:

     openstack volume show <volume-id> -c migration_status -c status

This keeps automation predictable while avoiding surprises from implicit scheduler behavior.
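The steps above can be sketched as a small drain script. Treat this as a sketch under assumptions: the host strings are examples, and volume ownership is read through the admin-only os-vol-host-attr:host property, whose exact column name can vary between client versions:

```shell
#!/bin/sh
# Sketch of a backend drain loop; host strings are examples,
# adjust for your own deployment before running anything.
src="cinder@backendA#pool0"
dst="cinder@backendB#pool0"

# Walk all volumes (admin view) and migrate the ones pinned to the source.
for vol in $(openstack volume list --all-projects -f value -c ID); do
    host=$(openstack volume show "$vol" -f value -c os-vol-host-attr:host)
    if [ "$host" = "$src" ]; then
        openstack volume migrate --host "$dst" \
            --force-host-copy --lock-volume "$vol"
    fi
done
```

Migration is asynchronous, so in practice you would poll migration_status between iterations rather than firing every migration at once.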


Decommissioning a Backend in Cinder

When retiring or decommissioning a storage array, operators must proceed carefully to avoid service disruption. In Cinder, this is typically handled by removing the backend from enabled_backends, while leaving the backend stanza in place.

Key Steps

  1. Remove from enabled_backends
    Prevents new volumes from being scheduled to the array. Existing volumes remain pinned to the backend until migrated or deleted.
  2. Leave the backend stanza in cinder.conf
    Keeping the configuration present allows Cinder to continue managing existing volumes on the backend. This is important because:
    • Cinder still needs to report status for these volumes
    • Migration and deletion operations rely on the backend configuration
  3. Migrate or age out existing volumes
    Volumes can be handled in one of two ways:
    • Manual migration using openstack volume migrate to move volumes to a new backend
    • Natural aging: volumes are deleted or no longer used over time, eventually leaving the backend empty
  4. Remove the backend stanza
    Once all volumes are migrated or deleted, the backend stanza can safely be removed from cinder.conf and the service restarted, completing the decommission.
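As a concrete illustration, a decommission-in-progress cinder.conf might look like the fragment below. Section and backend names are examples, not a real configuration:

```ini
# cinder.conf during decommission (names are illustrative)
[DEFAULT]
# backendA removed from the list: no new volumes land there
enabled_backends = backendB

# Stanza kept so existing volumes on backendA remain manageable
[backendA]
volume_backend_name = gold
# volume_driver and array connection options stay in place

[backendB]
volume_backend_name = gold
```

Only after backendA holds no volumes is the [backendA] stanza itself removed.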

Operational Notes

  • During this period, the backend is effectively “read-only” for volume placement but fully functional for volume management operations.
  • Monitoring is essential to ensure no volumes are left stranded or overlooked.
  • This approach aligns with Cinder’s principle that backends are policy boundaries, and careful decommissioning preserves scheduler integrity while preventing accidental placement of new volumes.

Where This Fits in the Bigger Picture

The first two posts in this series focused on how the scheduler decides where new volumes land. Migration flips the perspective: you are overriding that decision after the fact.

The deeper lesson is that volume_backend_name is a policy abstraction, not an operational boundary. Understanding that distinction lets you design flexible tiers without sacrificing precise control when infrastructure needs to evolve.
