Since the Mitaka release of OpenStack, the Pure Storage Cinder driver has supported Cinder replication, although this first iteration only supported asynchronous replication.
The Rocky release of OpenStack saw Pure’s Cinder driver support synchronous replication by integrating our ActiveCluster feature from the FlashArray.
This synchronous replication automatically created an ActiveCluster pod on the paired FlashArrays called cinder-pod. A pretty obvious name, I would say.
While this provided a seamless integration for OpenStack users to create a synchronously replicated volume using a correctly configured volume type, there was one minor limitation: an ActiveCluster pod was limited to 3000 volumes.
Now you might think that is more than enough volumes for any single ActiveCluster environment. I certainly did, until I received a request to support 6000 synchronously replicated volumes.
After some head-scratching, I remembered that since the OpenStack Stein release of the Pure Cinder driver there has been an undocumented (well, not very well documented) parameter that allows the name of the ActiveCluster pod to be customized, and that gave me an idea…
Can you configure Cinder to use the same backend in separate stanzas, each with different parameters, in the Cinder config file?
It turns out the answer is Yes.
So, here’s how to enable your Pure FlashArray Cinder driver to use a single ActiveCluster pair of FlashArrays to allow for 6000 synchronously replicated volumes.
First, we need to edit the cinder.conf file and create two different stanzas for the same array that is configured in an ActiveCluster pair, ensuring both of these backends are enabled:

enabled_backends = pure-1-1, pure-1-2

…

[pure-1-1]
volume_backend_name = pure
volume_driver = cinder.volume.drivers.pure.PureISCSIDriver
san_ip = 10.21.209.210
replication_device = backend_id:pure-2,san_ip:10.21.209.8,api_token:9c0b56bc-f941-f7a6-9f85-dcc3e9a8f6d6,type:sync
pure_api_token = bee464cc-24a9-f44c-615a-ae566082a1ae
pure_replication_pod_name = cinder-pod1
use_multipath_for_image_xfer = True
pure_eradicate_on_delete = true
image_volume_cache_enabled = True
volume_clear = none
[pure-1-2]
volume_backend_name = pure
volume_driver = cinder.volume.drivers.pure.PureISCSIDriver
replication_device = backend_id:pure-2,san_ip:10.21.209.8,api_token:9c0b56bc-f941-f7a6-9f85-dcc3e9a8f6d6,type:sync
pure_replication_pod_name = cinder-pod2
san_ip = 10.21.209.210
pure_api_token = bee464cc-24a9-f44c-615a-ae566082a1ae
use_multipath_for_image_xfer = True
pure_eradicate_on_delete = true
image_volume_cache_enabled = True
volume_clear = none
If we look at the two stanzas, the only difference between them is the pure_replication_pod_name. I have also set the volume_backend_name to be the same for both configurations; there is a reason for this that I will cover later.
After altering the configuration file, make sure to restart your Cinder Volume service to implement the changes.
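How you restart cinder-volume depends on how your OpenStack was deployed; on a systemd-based installation it is usually one of the following (the exact unit name varies by distribution, so treat these as examples rather than gospel):

sudo systemctl restart openstack-cinder-volume   # RDO / Red Hat-style packaging
sudo systemctl restart cinder-volume             # Ubuntu / Debian-style packaging
sudo systemctl restart devstack@c-vol            # DevStack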
After restarting the cinder-volume service, you will see on the FlashArray that two ActiveCluster pods now exist with the names defined in the configuration file.
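If you prefer the Purity command line to the GUI, the purepod list command will show both pods; the output will look something like this (the array names here are placeholders, and the exact columns vary between Purity versions):

pureuser@flasharray-1> purepod list
Name         Arrays
cinder-pod1  flasharray-1, flasharray-2
cinder-pod2  flasharray-1, flasharray-2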
This is the first step.
Now we need to enable volume types to use these pods and also to load-balance across the two pods. Why load-balance? There is no specific reason, other than it seems to make more sense for volumes to utilize the pods evenly; because both stanzas report the same volume_backend_name, the Cinder scheduler treats them as equivalent backends and distributes volumes between them. If you wanted to use each pod separately, you would need to set a different volume_backend_name in the Cinder configuration file for each array stanza.
When creating a volume type to use synchronous replication, you need to set some specific extra_specs in the type definition. These are the commands to use:

openstack volume type create pure-repl
openstack volume type set --property replication_type='<in> sync' pure-repl
openstack volume type set --property replication_enabled='<is> True' pure-repl
openstack volume type set --property volume_backend_name='pure' pure-repl
The final configuration of the volume type would now look something like this:
openstack volume type show pure-repl
+--------------------+-------------------------------------------------------------------------------------------+
| Field              | Value                                                                                     |
+--------------------+-------------------------------------------------------------------------------------------+
| access_project_ids | None                                                                                      |
| description        | None                                                                                      |
| id                 | 2b6fe658-5bbf-405c-a0b6-c9ac23801617                                                      |
| is_public          | True                                                                                      |
| name               | pure-repl                                                                                 |
| properties         | replication_enabled='<is> True', replication_type='<in> sync', volume_backend_name='pure' |
| qos_specs_id       | None                                                                                      |
+--------------------+-------------------------------------------------------------------------------------------+
Now, all we need to do is use the volume type when creating our Cinder volumes.
Let’s create two volumes and see how they appear on the FlashArray:
openstack volume create --type pure-repl --size 25 test_volume
+---------------------+--------------------------------------+
| Field               | Value                                |
+---------------------+--------------------------------------+
| attachments         | []                                   |
| availability_zone   | nova                                 |
| bootable            | false                                |
| consistencygroup_id | None                                 |
| created_at          | 2022-11-03T14:48:13.000000           |
| description         | None                                 |
| encrypted           | False                                |
| id                  | 64ef0e40-ce89-4f4d-8c89-42e3208a96c2 |
| migration_status    | None                                 |
| multiattach         | False                                |
| name                | test_volume                          |
| properties          |                                      |
| replication_status  | None                                 |
| size                | 25                                   |
| snapshot_id         | None                                 |
| source_volid        | None                                 |
| status              | creating                             |
| type                | pure-repl                            |
| updated_at          | None                                 |
| user_id             | eca55bb4cd8c490197d8b9d2cdce29f2     |
+---------------------+--------------------------------------+

openstack volume create --type pure-repl --size 25 test_volume2
+---------------------+--------------------------------------+
| Field               | Value                                |
+---------------------+--------------------------------------+
| attachments         | []                                   |
| availability_zone   | nova                                 |
| bootable            | false                                |
| consistencygroup_id | None                                 |
| created_at          | 2022-11-03T14:48:22.000000           |
| description         | None                                 |
| encrypted           | False                                |
| id                  | e494e233-b38a-4fb6-8f3d-0aab5c7c68ec |
| migration_status    | None                                 |
| multiattach         | False                                |
| name                | test_volume2                         |
| properties          |                                      |
| replication_status  | None                                 |
| size                | 25                                   |
| snapshot_id         | None                                 |
| source_volid        | None                                 |
| status              | creating                             |
| type                | pure-repl                            |
| updated_at          | None                                 |
| user_id             | eca55bb4cd8c490197d8b9d2cdce29f2     |
+---------------------+--------------------------------------+
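As an admin, you can also confirm which backend stanza the scheduler placed each volume on by checking the os-vol-host-attr:host field of each volume, which has the form host@backend#pool:

openstack volume show test_volume -c "os-vol-host-attr:host"
openstack volume show test_volume2 -c "os-vol-host-attr:host"

In this example one volume was scheduled to pure-1-1 and the other to pure-1-2, which is why they end up in different pods.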
Looking at the FlashArray, we can see the two volumes we just created (I am filtering the volume name on cinder so you only see the OpenStack-related volumes on this array).
The volume naming convention we use at Pure shows that these volumes are in a pod, indicated by the double colon (::) in the name, and that the pod name for each volume is cinder-pod1 and cinder-pod2 respectively.
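For example, assuming the Pure driver's usual volume-<UUID>-cinder naming convention, the two volumes created above would appear on the array as something like:

cinder-pod1::volume-64ef0e40-ce89-4f4d-8c89-42e3208a96c2-cinder
cinder-pod2::volume-e494e233-b38a-4fb6-8f3d-0aab5c7c68ec-cinder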
The view of each pod also shows only one volume in each.
If you didn’t want to load-balance across the pods and instead needed the flexibility to specify the pod a volume exists in, all you need do is set the volume_backend_name to be different in the configuration file array stanzas and then create two volume types, each pointing to a different volume_backend_name setting.
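As a sketch of that alternative (the backend and type names here are purely illustrative), the relevant differences in cinder.conf and the volume types would look like this:

[pure-1-1]
volume_backend_name = pure-pod1
pure_replication_pod_name = cinder-pod1
…

[pure-1-2]
volume_backend_name = pure-pod2
pure_replication_pod_name = cinder-pod2
…

openstack volume type create pure-repl-pod1
openstack volume type set --property replication_type='<in> sync' pure-repl-pod1
openstack volume type set --property replication_enabled='<is> True' pure-repl-pod1
openstack volume type set --property volume_backend_name='pure-pod1' pure-repl-pod1

A second type pointing at pure-pod2 would be created the same way, and each volume would then be pinned to a specific pod simply by choosing the appropriate type at creation time.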