Storage Provisioning on Pure with Ansible

As Ansible becomes increasingly popular as an automation toolset, it is starting to be used in corporate storage management environments.

Historically, storage admins would provision volumes, or LUNs, using either the array GUI or CLI commands, depending on the storage vendor they were provisioning against, with those commands sometimes wrapped into complex shell scripts.

Automation was not really an option, as storage vendors didn't provide comprehensive RESTful APIs for their storage backends that automation toolsets could consume. Those days are being relegated to the past as more and more storage vendors provide REST API endpoints for their product lines.

Currently, the vendor with the most comprehensive set of REST API endpoints for its platforms is Pure Storage. In this post I will show how to easily provision and tear down a storage volume presented from a Pure Storage FlashArray to a Linux host using iSCSI as the data-plane protocol.

The provisioning action will automatically create the volume on the FlashArray, map it to the host requesting the volume, and finally format and mount the volume ready for use by a user or application.

The teardown action will unmount the volume, delete the volume and associated host object on the FlashArray, and clean up both the iSCSI sessions and the multipath device on the host.

Prerequisites

The environment I will be using consists of the following:

  • Pure Storage FlashArray//X (with iSCSI targets configured)
    • It could be any model of FlashArray really
  • Baremetal server (with iSCSI initiators)
    • It could be a VM as long as it has an iSCSI network path
  • Red Hat Enterprise Linux 7.9
  • Ansible 5.0
  • The Pure Storage Python SDKs, purestorage and py-pure-client (install commands below)
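If any of these are missing, the SDKs and the FlashArray collection can be installed as follows (a minimal sketch; with the Ansible 5.0 community package the collection is already bundled, so the second command is only needed on a bare ansible-core install):

# pip install purestorage py-pure-client
# ansible-galaxy collection install purestorage.flasharray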

Don’t worry too much about the Linux flavour or version, as Ansible can be installed on most common Linux platforms, and even on some of the more esoteric Unix flavours such as Solaris and AIX.

The Ansible modules I will need to use for these examples are all included in the main Ansible package as they are either core modules, included in Community collections, or in fully certified vendor collections – in this case the Pure Storage FlashArray Collection.
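Once installed, you can confirm the collection is visible to Ansible and browse the documentation for any of its modules with ansible-doc:

# ansible-doc -l purestorage.flasharray
# ansible-doc purestorage.flasharray.purefa_volume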

It is important to understand the actual configuration of your FlashArray, as several values from its setup are needed as parameters in the Ansible playbooks performing the provisioning and teardown tasks.

The information required from the FlashArray is:

  • An API token for a user with Storage Admin privileges
  • The management IP address of the FlashArray
  • The name of one of the iSCSI target interfaces on the FlashArray
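Since the API token is a credential, it is worth keeping these values out of the playbooks themselves. A minimal sketch, assuming a hypothetical vars file named array_vars.yml that is then protected with Ansible Vault:

# cat array_vars.yml
array_ip: <FlashArray management IP>
array_api: <API token>
# ansible-vault encrypt array_vars.yml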

The final assumption is that the host's iSCSI initiators can reach the FlashArray iSCSI targets; a quick manual check is shown below.
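One way to verify this from the host is a manual sendtargets discovery against one of the array's iSCSI interface addresses (the IP here is a placeholder):

# iscsiadm -m discovery -t sendtargets -p 1.2.3.4:3260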

Provisioning

The following Ansible playbook performs the provisioning of the volume on the FlashArray, creates the host object on the array and then maps the volume to the host.

Next, the open_iscsi module is used to log the host in to the FlashArray over the iSCSI network, and device-mapper multipath presents the volume to the host as a single multipath device. Finally, the volume is formatted and mounted ready for use.

- name: Provisioning example
  hosts: localhost
  gather_facts: true
  vars:
    array_ip: <FlashArray management IP>
    array_api: <API token>
    test_volume: testvolume
    iscsi_port: ct1.eth4  # one of the FlashArray iSCSI target interfaces
    mount_path: /mnt/testvolume
  tasks:
  - name: Get FlashArray info
    purestorage.flasharray.purefa_info:
      gather_subset:
      - minimum
      - network
      - interfaces
      fa_url: "{{ array_ip }}"
      api_token: "{{ array_api }}"
    register: array_info

  - name: Create volume
    purestorage.flasharray.purefa_volume:
      name: "{{ test_volume }}"
      size: 10G
      fa_url: "{{ array_ip }}"
      api_token: "{{ array_api }}"
    register: volume_data

  # Save the volume serial number; the host's multipath device name embeds it
  - set_fact:
      volume_serial: "{{ volume_data.volume.serial }}"

  - name: Create host object on array and connect volume
    purestorage.flasharray.purefa_host:
      host: "{{ ansible_hostname }}"
      iqn: "{{ ansible_iscsi_iqn }}"
      volume: "{{ test_volume }}"
      fa_url: "{{ array_ip }}"
      api_token: "{{ array_api }}"

  - name: Discover FlashArray for iSCSI
    open_iscsi:
      show_nodes: yes
      discover: yes
      portal: "{{ array_info.purefa_info.network[iscsi_port].address }}"
    register: array_iscsi_iqn

  - name: Connect to FlashArray over iSCSI
    open_iscsi:
      target: "{{ array_iscsi_iqn.nodes[0] }}"
      login: yes

  - name: Force multipath rescan
    command: /usr/sbin/multipath -r

  - name: Get multipath device for volume
    shell:
      # The multipath WWID contains the volume serial, so grep for it
      # and take the dm device name from the second field
      cmd: /usr/sbin/multipath -ll | grep -i {{ volume_serial }} | awk '{print $2}'
    register: mpath_dev

  - name: Format volume
    filesystem:
      fstype: ext4
      dev: '/dev/{{ mpath_dev.stdout }}'

  - name: Mount volume
    mount:
      path: "{{ mount_path }}"
      fstype: ext4
      src: '/dev/{{ mpath_dev.stdout }}'
      state: mounted
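Saved as, say, provision.yml (a filename of your choosing), the playbook can then be run directly, here pulling in the Vault-protected vars file sketched earlier. Note that gather_facts must stay enabled, as the host object task relies on the ansible_hostname and ansible_iscsi_iqn facts:

# ansible-playbook provision.yml -e @array_vars.yml --ask-vault-pass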

After running the playbook, we can check the new volume on the host and the host's connectivity to the volume:

# df
Filesystem                                    1K-blocks     Used Available Use% Mounted on
devtmpfs                                       65979372        0  65979372   0% /dev
tmpfs                                          65991924        0  65991924   0% /dev/shm
tmpfs                                          65991924   484636  65507288   1% /run
tmpfs                                          65991924        0  65991924   0% /sys/fs/cgroup
/dev/mapper/rhel-root                          52403200 17717456  34685744  34% /
/dev/sda2                                       1038336   193820    844516  19% /boot
/dev/mapper/rhel-home                         220093440    33088 220060352   1% /home
tmpfs                                          13198388        0  13198388   0% /run/user/0
/dev/mapper/3624a937043be47c12334399b000189dc  10190100    36888   9612540   1% /mnt/testvolume

# multipath -ll
3624a937043be47c12334399b000189dc dm-3 PURE    ,FlashArray
size=10G features='0' hwhandler='1 alua' wp=rw
`-+- policy='queue-length 0' prio=50 status=active
  |- 60:0:0:1 sdc 8:32 active ready running
  |- 61:0:0:1 sdb 8:16 active ready running
  |- 62:0:0:1 sde 8:64 active ready running
  `- 63:0:0:1 sdd 8:48 active ready running

# iscsiadm -m session
tcp: [53] 1.2.3.4:3260,1 iqn.2010-06.com.purestorage:flasharray.2111b767484e4682 (non-flash)
tcp: [54] 1.2.3.5:3260,1 iqn.2010-06.com.purestorage:flasharray.2111b767484e4682 (non-flash)
tcp: [55] 1.2.3.6:3260,1 iqn.2010-06.com.purestorage:flasharray.2111b767484e4682 (non-flash)
tcp: [56] 1.2.3.7:3260,1 iqn.2010-06.com.purestorage:flasharray.2111b767484e4682 (non-flash)

Teardown

The following playbook will unmount the volume from the host and clean up both the FlashArray, deleting the volume and the host object, and the host itself, removing the stale multipath device and iSCSI sessions:

- name: Teardown example
  hosts: localhost
  gather_facts: true
  vars:
    array_ip: <FlashArray management IP>
    array_api: <API token>
    test_volume: testvolume
    iscsi_port: ct1.eth4
    mount_path: /mnt/testvolume
  tasks:
  - name: Get FlashArray info
    purestorage.flasharray.purefa_info:
      gather_subset:
      - minimum
      - network
      - interfaces
      fa_url: "{{ array_ip }}"
      api_token: "{{ array_api }}"
    register: array_info

  - name: Unmount filesystem
    mount:
      path: "{{ mount_path }}"
      state: absent  # unmounts the volume and removes its fstab entry

  - name: Disconnect volume from host on array
    purestorage.flasharray.purefa_host:
      volume: "{{ test_volume }}"
      host: "{{ ansible_hostname }}"
      state: absent
      fa_url: "{{ array_ip }}"
      api_token: "{{ array_api }}"

  - name: Delete volume
    purestorage.flasharray.purefa_volume:
      name: "{{ test_volume }}"
      state: absent
      eradicate: true
      fa_url: "{{ array_ip }}"
      api_token: "{{ array_api }}"

  - name: Delete host object on array
    purestorage.flasharray.purefa_host:
      host: "{{ ansible_hostname }}"
      iqn: "{{ ansible_iscsi_iqn }}"
      state: absent
      fa_url: "{{ array_ip }}"
      api_token: "{{ array_api }}"

  - name: Remove multipath links to array
    command: /usr/sbin/multipath -r

  - name: Get array IQN
    shell:
      cmd: /usr/sbin/iscsiadm -m node | grep {{ array_info.purefa_info.network[iscsi_port].address }} | awk '{print $2}'
    register: array_iqn

  - name: Log out iSCSI sessions to array
    shell:
      cmd: /usr/sbin/iscsiadm -m node -T {{ array_iqn.stdout }} -u

  - name: Delete iSCSI node records for array
    shell:
      cmd: /usr/sbin/iscsiadm -m node -o delete -T {{ array_iqn.stdout }}
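The teardown playbook is run the same way (again assuming a hypothetical filename, here teardown.yml). Be aware that eradicate: true permanently destroys the volume rather than leaving it in the array's 24-hour pending-eradication state, so there is no undo:

# ansible-playbook teardown.yml -e @array_vars.yml --ask-vault-pass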

More details

These are really simple provisioning tasks that storage admins perform frequently.

By utilising Ansible for storage automation, these daily tasks can be handed off, freeing admins for more productive and interesting work.

There is no reason why software packages such as ServiceNow couldn’t be used to create a user portal to allow automated storage provisioning on demand.

Pure Storage provides a wide range of Ansible modules covering every action for both FlashArray and FlashBlade that can be performed at either the GUI or CLI level. More details of the Ansible modules provided by Pure Storage can be found on the Ansible documentation website.
