Managing multi-tenant OpenStack storage at scale requires enforcing predictable performance boundaries without creating administrative overhead. Traditional per-volume QoS doesn’t scale—you can’t effectively manage thousands of individual policies across hundreds of tenants.
Pure Storage FlashArray volume groups solve this by implementing tenant-level QoS at the storage layer. Combined with OpenStack quotas, they provide precise control over both performance and capacity without the complexity of per-volume management.
In my previous post, I detailed the limitations of Cinder’s per-volume QoS model. Volume groups eliminate those constraints by moving QoS enforcement to the tenant level where it belongs.
Volume Groups: Shared Performance Boundaries
A FlashArray volume group is a logical container that aggregates volumes under unified performance policies. The critical architectural principle: QoS limits apply to aggregate performance across all volumes in the group.
If you configure a volume group with 50,000 IOPS and it contains five volumes, those volumes share that IOPS budget. One volume might consume 30,000 IOPS while others use 5,000 each, but the array enforces the total limit.
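To make the sharing concrete, here is a toy model of proportional throttling within a group budget. This is illustrative arithmetic only, not FlashArray’s actual scheduling algorithm:

```python
def throttle(demands, group_limit):
    """Toy model of group-level QoS: if aggregate demand exceeds the
    group limit, scale each volume's grant down proportionally.
    Illustrative only -- not the array's actual scheduler."""
    total = sum(demands.values())
    if total <= group_limit:
        return dict(demands)  # everyone gets what they ask for
    factor = group_limit / total
    return {vol: d * factor for vol, d in demands.items()}

# One busy volume and four quiet ones against a 50,000 IOPS group budget
demands = {"vol1": 60_000, "vol2": 5_000, "vol3": 5_000,
           "vol4": 5_000, "vol5": 5_000}
granted = throttle(demands, 50_000)
for vol, iops in granted.items():
    print(f"{vol}: {iops:.0f} IOPS")
print(f"aggregate: {sum(granted.values()):.0f} IOPS")
```

With 80,000 IOPS of combined demand, each volume is scaled to 62.5% of its ask, so the busy volume gets 37,500 IOPS while the aggregate never exceeds the group’s 50,000.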
This delivers:
- Hard tenant isolation: Each tenant’s volume group has enforced performance boundaries. Tenant A’s runaway workload cannot impact tenant B’s guaranteed allocation.
- Dynamic performance distribution: Volumes burst up or down within the group’s total budget automatically.
- Atomic operations: Snapshots and clones execute at the group level, providing consistent point-in-time captures.
Layer this on top of OpenStack tenant quotas (volume count, storage capacity, snapshots) and you get comprehensive control over both performance and capacity dimensions.
Implementation: Extra Specs and Automatic Assignment
FlashArray volume group support landed in OpenStack 2025.1 (Cinder 26.x). Implementation uses vendor-specific extra specs on volume types:
volume_backend_name = flasharray_backend
flasharray:vg_name = tenant1-group
flasharray:vg_maxIOPS = 50000
flasharray:vg_max_BWS = 1000M
When a volume is created with this type, the Cinder driver automatically:
- Validates the volume group exists
- Creates the volume as a group member
- Applies QoS constraints with no additional configuration
Define the policy once in the volume type, and every volume automatically complies. No per-volume configuration, no manual assignments, no policy drift.
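The extra specs above live on the volume type; the backend itself still needs a standard Pure driver stanza in cinder.conf. A minimal sketch, assuming the iSCSI driver and placeholder values for the array address and token:

```ini
[DEFAULT]
enabled_backends = flasharray_backend

[flasharray_backend]
# Standard Cinder Pure driver settings; values are placeholders
volume_driver = cinder.volume.drivers.pure.PureISCSIDriver
volume_backend_name = flasharray_backend
# Array management address
san_ip = 192.0.2.10
# API token generated on the FlashArray
pure_api_token = <API_TOKEN>
```

Note that `volume_backend_name` here is what the volume type’s `volume_backend_name` extra spec matches against.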
Default Volume Types: Automatic Policy Enforcement
Set a volume type as the default for a tenant, and any volume they create without specifying a type uses that default. This enables zero-touch policy enforcement.
Example:
# Create volume type with group QoS
openstack volume type create tenant1-storage \
--property flasharray:vg_name=tenant1-group \
--property flasharray:vg_maxIOPS=50000 \
--property flasharray:vg_max_BWS=1000M
# Set as project default
openstack project set PROJECT_ID \
--default-volume-type tenant1-storage
# Configure quotas
openstack quota set PROJECT_ID \
--volumes 100 --gigabytes 10240 --snapshots 50
This tenant now operates within defined boundaries: 100 volumes max, 10TB capacity, 50 snapshots, with all volumes collectively capped at 50,000 IOPS and 1GB/s bandwidth. Performance and capacity enforced automatically.
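The capacity side of that envelope is simple arithmetic. A hypothetical admission check mirroring the quotas set above (Cinder’s real quota engine does this server-side; this sketch just shows the math):

```python
# Hypothetical check mirroring the quotas set above; Cinder's real quota
# engine enforces this server-side -- this sketch only shows the arithmetic.
QUOTA = {"volumes": 100, "gigabytes": 10_240, "snapshots": 50}

def fits_quota(used_gb, used_volumes, new_gb):
    """Would one more volume of new_gb GB fit inside this tenant's quota?"""
    return (used_volumes + 1 <= QUOTA["volumes"]
            and used_gb + new_gb <= QUOTA["gigabytes"])

print(fits_quota(used_gb=9_000, used_volumes=40, new_gb=1_000))  # within quota
print(fits_quota(used_gb=9_000, used_volumes=40, new_gb=2_000))  # exceeds 10,240 GB
```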
VMware vVol Integration
Volume groups enable direct ingestion of VMware vVols into OpenStack for hybrid deployments on shared FlashArray infrastructure:
- vVols already exist on the array
- Volume groups import them directly into Cinder
- Once imported, volumes are managed by OpenStack volume group QoS policies
- No data movement or conversion required
This builds on my three-part VMware-to-OpenStack migration series, making hybrid storage management seamless. After ingestion, the volumes operate under OpenStack’s QoS framework rather than their original VMware policies.
Why This Matters
Volume groups fundamentally change OpenStack storage operations:
- Reduced overhead: Manage 50 tenant policies instead of 10,000 volume policies
- Predictable performance: Hard isolation at the storage layer makes SLAs enforceable
- Simplified planning: Model tenant resources holistically (e.g., “5TB and 25,000 IOPS per tenant”)
- Automation-friendly: Integrates cleanly with Ansible, Terraform, and IaC tooling
- Hybrid enablement: Unified storage management across VMware and OpenStack workloads
Getting Started
- Define volume groups on FlashArray per tenant or workload
- Configure Cinder volume types with flasharray:vg_name, flasharray:vg_maxIOPS, and flasharray:vg_max_BWS
- Assign default volume types per tenant for automatic enforcement
- Set tenant quotas for volume count, capacity, and snapshots
- Configure vVol ingestion for hybrid deployments if needed
Once configured, all volumes automatically inherit tenant-level QoS and quota restrictions with no manual intervention.
Conclusion
FlashArray volume groups shift OpenStack storage QoS from per-volume to tenant-level enforcement. By implementing performance boundaries at the storage layer and combining them with native quotas, they solve the operational challenges of multi-tenant block storage at scale.
For production OpenStack infrastructure, especially multi-tenant or hybrid environments, volume groups are essential. They’re the difference between managing storage reactively—one volume at a time—and managing it proactively with scalable, enforceable policies.