Migrating OpenStack Cinder Volumes from Ceph to Everpure

Your storage infrastructure needs to evolve. Your workloads shouldn’t have to notice.

OpenStack Cinder’s backend abstraction is one of its most powerful features — volumes remain just volumes, regardless of what’s sitting underneath them. But when you need to actually move data from one backend to another — say, from Ceph RBD to an Everpure FlashArray during a hardware refresh or storage modernization — “abstraction” can feel like a thin layer between you and a complex block copy operation.

This post cuts through that. Here’s exactly how cross-backend Cinder migrations work, what they cost operationally, and what you need to get right — especially when boot volumes are involved.


How Cinder Thinks About Backends

Cinder binds volumes to backends through volume types. Each type maps to a specific driver via the volume_backend_name extra spec. In practice:

  • rbd → Ceph RBD backend
  • pure-iscsi → Everpure FlashArray backend

Check what’s available in your environment:

bash

openstack volume type list
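
To confirm which backend a given type maps to, inspect its properties. A minimal check; the type name pure-iscsi follows the mapping above, so substitute whatever your deployment defines:

bash

openstack volume type show pure-iscsi -c properties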

Changing a volume’s type is how you tell Cinder to move it. The migration mechanics flow from there.


Two Migration Paths

1. Retype (the clean path)

The retype operation changes a volume’s type and, when the source and destination backends differ, triggers a migration:

bash

openstack volume set --type pure-iscsi --retype-policy on-demand <volume-id>

Under the hood, Cinder:

  1. Creates a new volume on the target backend
  2. Copies data through the Cinder host if the driver can’t perform a backend-side migration
  3. Updates metadata and removes the source volume

This host-assisted path is the common case for cross-vendor migrations. For Ceph → Everpure specifically, note that Ceph RBD uses file-based transfer mode rather than block-based (dd) during host-assisted migrations — set your performance expectations accordingly, as throughput will be bounded by the Cinder node’s disk I/O and network, not the FlashArray.
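
While a host-assisted migration runs, you can watch it from the admin side. A minimal sketch; the exact field name varies by release, with older API versions exposing it as os-vol-mig-status-attr:migstat rather than migration_status:

bash

# Poll volume state during a host-assisted migration (admin credentials)
watch -n 10 "openstack volume show <volume-id> -c status -c migration_status"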

2. Explicit Backend Migration (the surgical path)

Administrators can target a specific backend directly:

bash

openstack volume migrate \
  --host cinder@pure#flasharray \
  --force-host-copy \
  9d8c0d7e-...

--force-host-copy bypasses driver-level migration and pushes data through the Cinder node. Slower, but predictable. Useful when you need direct control over placement.
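
Host strings take the form service@backend#pool. If you aren't sure what to pass to --host, list the registered cinder-volume backends first; the pool segment (flasharray above) is driver-specific:

bash

# List cinder-volume backends known to the scheduler
openstack volume service list --service cinder-volume

# The legacy cinder client can also enumerate pools
cinder get-pools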


Boot Volumes: Plan for Downtime

Migrating a boot volume (the root disk attached to a running VM) from Ceph to Everpure requires stopping the instance. This isn’t a universal rule for all boot volume migrations; it’s specific to this direction. Cinder does support live migration of in-use volumes from an iSCSI backend to RBD, but the reverse path (RBD → iSCSI) is unreliable for in-use volumes because of how Ceph handles local attachment. Stop and detach is the safe path here.

bash

# Stop and detach
openstack server stop <instance-id>
openstack server remove volume <instance-id> <boot-volume-id>

# Migrate
openstack volume set --type pure-iscsi --retype-policy on-demand <boot-volume-id>

# Reattach and restart
openstack server add volume <instance-id> <boot-volume-id>
openstack server start <instance-id>

A few things that will block you if you don’t handle them first:

  • Snapshots: Existing snapshots must be removed or cloned before migration proceeds
  • Active attachments: The volume must be fully detached
  • Encryption: Destination backend must support the same encryption configuration
  • Size: Destination volume must be equal to or larger than the source

The good news: bootable=True metadata survives the migration intact.
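
Before touching the instance, a quick pre-flight pass over those blockers is cheap insurance. A minimal sketch, assuming <boot-volume-id> as above:

bash

# Snapshots must be deleted (or cloned away) first; expect an empty list
openstack volume snapshot list --volume <boot-volume-id>

# Confirm the volume is detached ("available") and note size/encryption for the target
openstack volume show <boot-volume-id> -c status -c size -c encrypted -c bootable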


Operational Constraints

  • Throughput for Ceph → Everpure migrations is bounded by the Cinder node (network bandwidth, disk I/O), and uses file-based transfer rather than direct block copy
  • Both the source and destination backends must be accessible from the same Cinder host
  • Destination volume must be equal to or larger than the source
  • Existing snapshots or active attachments will prevent migration
  • Volumes in a consistency group cannot be retyped or migrated — remove them from the group first
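
For that last constraint, removal looks like this (group and volume IDs are placeholders). Note that after migration the volume sits on a different backend, so it can’t simply rejoin a Ceph-backed group:

bash

openstack consistency group remove volume <group-id> <volume-id>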

The Bottom Line

Cross-backend Cinder migration is a host-orchestrated block copy wrapped in volume abstraction. It works well, but it isn’t magic — throughput is limited by your Cinder node and the file-based transfer path from Ceph, boot volumes require downtime when the destination is iSCSI, and snapshots and consistency group membership need to be cleared first.

Done right, it’s one of the cleaner ways to modernize storage infrastructure underneath a live OpenStack cloud without rebuilding workloads from scratch. Moving from Ceph to Everpure is a straightforward application of these primitives — understand the constraints, sequence the steps, and let Cinder do the heavy lifting.
