---
title: Advanced Storage for OpenStack Training Homework
---

# Advanced Storage for OpenStack Training Homework

These are the procedures to complete the homework for the OpenStack Advanced Storage Training from Red Hat.

## Scenario
Acme Consolidated has engaged with Red Hat to implement an OpenStack environment. They have defined the following implementation criteria.

:::info
**A single RHOSP 13 Controller Node running the core RHOSP services with two RHOSP 13 Compute nodes**
:::

***Solution***
The OpenStack 13 Advanced Storage lab environment already provides this.

:::info
**The workstation node root user should have a properly configured keystonerc_admin file for administrative access to the OpenStack cluster**
:::
***Solution***
SSH to the workstation node and generate the SSH key:
```bash
[root@workstation-d4f5 ~]# ssh-keygen 
Generating public/private rsa key pair.
Enter file in which to save the key (/root/.ssh/id_rsa): 
/root/.ssh/id_rsa already exists.
Overwrite (y/n)? y
Enter passphrase (empty for no passphrase): 
Enter same passphrase again: 
Your identification has been saved in /root/.ssh/id_rsa.
Your public key has been saved in /root/.ssh/id_rsa.pub.
The key fingerprint is:
SHA256:JJtvyHTAjC0bD4fcuDU+oC6Knb9ZYVSF5sN9jz5vWyg root@workstation-d4f5.rhpds.opentlc.com
The key's randomart image is:
+---[RSA 2048]----+
|        .o.      |
|   . O .o        |
|    X %+..       |
|   . & B+ . .    |
|  . o X S. . o   |
| .   + *    . .. |
|. .   + o  .E . .|
|oo . o .    o... |
|o o.+.       +o. |
+----[SHA256]-----+
[root@workstation-d4f5 ~]#
```

Copy the SSH key to the control node with password ==r3dh4t1!==.
```bash
[root@workstation-d4f5 ~]# ssh-copy-id root@ctrl01
/usr/bin/ssh-copy-id: INFO: Source of key(s) to be installed: "/root/.ssh/id_rsa.pub"
/usr/bin/ssh-copy-id: INFO: attempting to log in with the new key(s), to filter out any that are already installed
/usr/bin/ssh-copy-id: INFO: 1 key(s) remain to be installed -- if you are prompted now it is to install the new keys
root@ctrl01's password: 

Number of key(s) added: 1

Now try logging into the machine, with:   "ssh 'root@ctrl01'"
and check to make sure that only the key(s) you wanted were added.

[root@workstation-d4f5 ~]# 
```
Copy the keystonerc_admin file from ctrl01 to the workstation.
```bash
[root@workstation-d4f5 ~]# scp ctrl01:~/keystonerc_admin .
keystonerc_admin                                                            100%  327   198.7KB/s   00:00  
```

:::info
**The workstation node should have the python-openstackclient package installed**
:::

***Solution***
Install the OpenStack client:
```bash
[root@workstation-d4f5 ~]# yum install -y python2-openstackclient
```

Test out the client and credentials.
```bash
[root@workstation-d4f5 ~]# source keystonerc_admin 
[root@workstation-d4f5 ~(keystone_admin)]# openstack service list
+----------------------------------+------------+--------------+
| ID                               | Name       | Type         |
+----------------------------------+------------+--------------+
| 202f5c7a826c4fc7b2d2b592559dd03e | swift      | object-store |
| 37704169b33b42b6896b68df92929f8b | cinder     | volume       |
| 3cb9980330ae49178357937e5905e33a | cinderv3   | volumev3     |
| 43ee8cd030e847a590fac6b38dfab0ed | cinderv2   | volumev2     |
| 57f4ff0a11b64af7b748c3f26e9f31e8 | gnocchi    | metric       |
| 5c518727354a4535a878223ed26e66ae | glance     | image        |
| 9c7705f7419e4d42b72ca4ea4d37d829 | aodh       | alarming     |
| a162bde594804b0dac271269093112e9 | keystone   | identity     |
| bb5ca8b4508a43d8b9cc631b390d7a0c | ceilometer | metering     |
| d6cff27922cb43fba287cd0932ce514f | neutron    | network      |
| e6c2a5ea1e654d2daf54b424286304f5 | nova       | compute      |
| e91eb168baed40a2a29ef940a03d3dbc | placement  | placement    |
+----------------------------------+------------+--------------+
[root@workstation-d4f5 ~(keystone_admin)]# 
```

:::info
**There should be a user named swift-user in a new project named swift-project created in OpenStack**
:::
***Solution***
Create the OpenStack project.
```bash
[root@workstation-d4f5 ~(keystone_admin)]# openstack project create --description "Swift Project" swift-project
+-------------+----------------------------------+
| Field       | Value                            |
+-------------+----------------------------------+
| description | Swift Project                    |
| domain_id   | default                          |
| enabled     | True                             |
| id          | bc0d92bdcf264f95b2be622e72e9eb8f |
| is_domain   | False                            |
| name        | swift-project                    |
| parent_id   | default                          |
| tags        | []                               |
+-------------+----------------------------------+
[root@workstation-d4f5 ~(keystone_admin)]# 
```

Create the OpenStack user.
```bash
[root@workstation-d4f5 ~(keystone_admin)]# openstack user create --project swift-project --password r3dh4t1! swift-user
+---------------------+----------------------------------+
| Field               | Value                            |
+---------------------+----------------------------------+
| default_project_id  | bc0d92bdcf264f95b2be622e72e9eb8f |
| domain_id           | default                          |
| enabled             | True                             |
| id                  | 8ad5de3b53cd473ebbd613df09f7af19 |
| name                | swift-user                       |
| options             | {}                               |
| password_expires_at | None                             |
+---------------------+----------------------------------+
[root@workstation-d4f5 ~(keystone_admin)]#
```

Add the `_member_` role to the user in the project.
```bash
[root@workstation-d4f5 ~(keystone_admin)]# openstack role add --user swift-user --project swift-project  _member_
```

:::info
**The workstation node root user should have a properly configured keystonerc_user file for access to the OpenStack cluster as swift-user**
:::
***Solution***
Create the keystonerc_user file with the following contents.
```bash
export OS_USERNAME=swift-user
export OS_PASSWORD=r3dh4t1!
export OS_AUTH_URL=http://172.16.7.50:5000/v3
export PS1='[\u@\h \W(swift-user)]\$ '
export OS_PROJECT_NAME=swift-project
export OS_REGION_NAME=RegionOne
export OS_USER_DOMAIN_NAME=Default
export OS_PROJECT_DOMAIN_NAME=Default
export OS_IDENTITY_API_VERSION=3
```

:::info
**A Ceph Storage cluster running on 3 nodes, each containing 2 disk devices
The 3 (three) Ceph MON nodes should be configured on different nodes than OSD nodes
Ceph should be configured so that your OPENTLC account has administrative access to ceph
A RADOS gateway running on MON nodes**
:::
***Solution***
**Install and configure docker-distribution**
Install docker-distribution, skopeo and jq:
```bash
[root@workstation ~]# yum -y install docker-distribution skopeo jq
```
Replace the contents of /etc/docker-distribution/registry/config.yml with those shown below:
```yaml=
version: 0.1
log:
  fields:
    service: registry
storage:
    cache:
        layerinfo: inmemory
    filesystem:
        rootdirectory: /var/lib/registry
http:
    addr: 0.0.0.0:5000
    host: https://workstation.example.com:5000
    tls:
      certificate: /etc/docker-distribution/my_self_signed_cert.crt
      key: /etc/docker-distribution/my_self_signed.key
```

Create a certificate for the Docker registry:
```bash
[root@workstation-d4f5 ~]# openssl req -x509 -nodes -days 365 -newkey rsa:2048 -keyout /etc/docker-distribution/my_self_signed.key -out /etc/docker-distribution/my_self_signed_cert.crt
```
```bash
Generating a 2048 bit RSA private key
.......+++
..........................................+++
writing new private key to '/etc/docker-distribution/my_self_signed.key'
-----
You are about to be asked to enter information that will be incorporated
into your certificate request.
What you are about to enter is what is called a Distinguished Name or a DN.
There are quite a few fields but you can leave some blank
For some fields there will be a default value,
If you enter '.', the field will be left blank.
-----
Country Name (2 letter code) [XX]:HK
State or Province Name (full name) []:Hong Kong
Locality Name (eg, city) [Default City]:HK
Organization Name (eg, company) [Default Company Ltd]:Red Hat
Organizational Unit Name (eg, section) []:Lab
Common Name (eg, your name or your server's hostname) []:workstation.example.com
Email Address []:root@workstation.example.com
```

Enable and start the Docker distribution service:
```bash
[root@workstation ~]# systemctl enable docker-distribution --now
```

Configure RHEL to trust the self-signed certificate. Using the openssl utility, create a PEM-formatted version of the my_self_signed_cert.crt file created earlier:
```bash
[root@workstation-d4f5 ~]# openssl x509 -in /etc/docker-distribution/my_self_signed_cert.crt -out /etc/pki/ca-trust/source/anchors/workstation.pem -outform PEM
```
Update the system’s Trust Store using update-ca-trust:
```bash
[root@workstation ~]# update-ca-trust
```

Copy the Ceph container image from the Red Hat registry to our local registry using skopeo:
```bash
[root@workstation ~]# skopeo copy \
docker://registry.access.redhat.com/rhceph/rhceph-3-rhel7:latest \
docker://workstation.example.com:5000/rhceph/rhceph-3-rhel7:latest
```



Install the ceph-ansible and ceph-common package on the workstation host:
```bash
[root@workstation ~]# yum -y install ceph-ansible ceph-common
```

Create an Ansible hosts inventory file called ~/ceph-nodes.
```bash
[root@workstation-d4f5 ~]# vi ceph-nodes
```
```bash
[ceph]
ceph-node01
ceph-node02
ceph-node03
ceph-mon01
ceph-mon02
ceph-mon03
```
Copy the SSH key to all nodes for authentication, using password r3dh4t1!
```bash
[root@workstation-d4f5 ~]# sed 1d ceph-nodes | xargs -i ssh-copy-id root@{}
```
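The `sed 1d` strips the first line of the inventory file (the `[ceph]` group header) so that only hostnames reach `ssh-copy-id`. A minimal local sketch of that filtering, using a hypothetical temporary file rather than the real inventory:

```shell
# Hypothetical illustration only: build a tiny inventory-style file,
# then drop its first line (the group header) the same way the
# ssh-copy-id pipeline above does.
printf '[ceph]\nceph-node01\nceph-node02\n' > /tmp/demo-nodes
sed 1d /tmp/demo-nodes   # leaves only the hostnames
```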
Create an Ansible Playbook called ceph-preqs.yml with the following content.
```bash
[root@workstation-d4f5 ~]# vi ceph-preqs.yml
```
```yaml=
---
- name: Ceph Installation Pre-requisites
  hosts: all
  gather_facts: no
  remote_user: root
  ignore_errors: yes
  tasks:

  - name: Create the admin user
    user:
      name: admin
      generate_ssh_key: yes
      ssh_key_bits: 2048
      ssh_key_file: .ssh/id_rsa
      password: $6$mZKDrweZ5e04Hcus$97I..Zb0Ywh1lQefdCRxGh2PJ/abNU/LIN7zp8d2E.uYUSmx1RLokyzYS3mUTpipvToZbYKyfMqdP6My7yYJW1

  - name: Create sudo file for admin user
    lineinfile:
      path: /etc/sudoers.d/admin
      state: present
      create: yes
      line: "admin ALL=(root) NOPASSWD:ALL"
      owner: root
      group: root
      mode: 0440

  - name: Push ssh key to hosts
    authorized_key:
      user: admin
      key: "{{ lookup('file', '/root/.ssh/id_rsa.pub') }}"
      state: present

  - name: Copy certificate file from local registry
    copy:
      src: /etc/pki/ca-trust/source/anchors/workstation.pem
      dest: /etc/pki/ca-trust/source/anchors/workstation.pem

  - name: Update system's trust store with new certificate
    shell:
      /usr/bin/update-ca-trust

  - name: Get registry catalog to validate certificate trust
    get_url:
      url: https://workstation.example.com:5000/v2/_catalog
      dest: /tmp/registry-catalog.json
```
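The long `password:` value in the playbook is a SHA-512 crypt hash, not a plain-text password. A hash like it can be generated with `openssl passwd` (shown here with the lab password r3dh4t1!; the salt is random, so the output will differ from the hash above):

```shell
# Generate a SHA-512 crypt ($6$...) hash suitable for the Ansible
# user module's `password` field. The salt is random on each run.
openssl passwd -6 'r3dh4t1!'
```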



Run the ceph-preqs.yml playbook:
```bash
[root@workstation ~]# ansible-playbook -i ceph-nodes ceph-preqs.yml
```
Update the root user's SSH config file with new hosts.
```bash
[root@workstation-d4f5 ~]# vi ~/.ssh/config
```

```bash
Host ceph-node01
  Hostname ceph-node01
  User admin
Host ceph-node02
  Hostname ceph-node02
  User admin
Host ceph-node03
  Hostname ceph-node03
  User admin
Host ceph-mon01
  Hostname ceph-mon01
  User admin
Host ceph-mon02
  Hostname ceph-mon02
  User admin
Host ceph-mon03
  Hostname ceph-mon03
  User admin
```
Change the working directory to the ceph-ansible installation directory:
```bash
[root@workstation ~]# cd /usr/share/ceph-ansible/
```
Create the Ansible inventory file:
```bash
[root@workstation ceph-ansible]# vi hosts
```
```bash
[mons]
ceph-mon0[1:3]

[mgrs]
ceph-mon0[1:3]

[osds]
ceph-node0[1:3]

[rgws]
ceph-mon0[1:3]
```
Create the ansible.cfg file for Ansible:
```bash
[root@workstation ceph-ansible]# mv ansible.cfg ansible.cfg.orig
[root@workstation ceph-ansible]# vi ansible.cfg
```
```bash
[defaults]
inventory     = /usr/share/ceph-ansible/hosts
action_plugins = /usr/share/ceph-ansible/plugins/actions
roles_path = /usr/share/ceph-ansible/roles
log_path = /var/log/ansible.log
timeout = 60
host_key_checking = False
retry_files_enabled = False
retry_files_save_path = /usr/share/ceph-ansible/ansible-retry
[privilege_escalation]
become=True
become_method=sudo
become_user=root
become_ask_pass=False
[ssh_connection]
# see: https://github.com/ansible/ansible/issues/11536
control_path = %(directory)s/%%h-%%r-%%p
```
Create the all.yml file:
```bash
[root@workstation ceph-ansible]# vi group_vars/all.yml
```
```yaml=
---
###
# General Options
###

cluster: ceph
fetch_directory: /usr/share/ceph-ansible/ceph-ansible-keys
ntp_service_enabled: true

###
# Ceph Configuration Overrides
###

ceph_conf_overrides:
  global:
    mon_osd_allow_primary_affinity: true
    osd_pool_default_size: 2
    osd_pool_default_min_size: 1
    mon_pg_warn_min_per_osd: 0
    mon_pg_warn_max_per_osd: 0
    mon_pg_warn_max_object_skew: 0
  client:
    rbd_default_features: 1
    rbd_default_format: 2


###
# Client Options
###

rbd_cache: "true"
rbd_cache_writethrough_until_flush: "false"

###
# OSD Options
###

journal_size: 1024
public_network: 192.168.99.0/24
cluster_network: 192.168.56.0/24
osd_scenario: non-collocated

###
# Monitor Options
###

monitor_address_block: 192.168.99.0/24

###
# RADOSGW Options
###

radosgw_address_block: 192.168.99.0/24

###
# Docker
###

ceph_docker_image: rhceph/rhceph-3-rhel7
ceph_docker_image_tag: latest
ceph_docker_registry: workstation.example.com:5000
containerized_deployment: true
```
Create the osds.yml file:
```bash
[root@workstation ceph-ansible]# vi group_vars/osds.yml
```
```yaml=
---
copy_admin_key: true
devices:
  - /dev/vdb
  - /dev/vdc
  
dedicated_devices:
  - /dev/vdd
  - /dev/vdd

ceph_osd_docker_memory_limit: 1g
ceph_osd_docker_cpu_limit: 1
```
Create the mons.yml file:
```bash
[root@workstation ceph-ansible]# vi group_vars/mons.yml
```
```yaml=
---
ceph_mon_docker_memory_limit: 1g
ceph_mon_docker_cpu_limit: 1
```
Create the mgrs.yml file:
```bash
[root@workstation ceph-ansible]# vi group_vars/mgrs.yml
```
```yaml=
---
ceph_mgr_docker_memory_limit: 1g
ceph_mgr_docker_cpu_limit: 1
```
Create the rgws.yml file:
```bash
[root@workstation-d4f5 ceph-ansible]# vi group_vars/rgws.yml
```
```yaml=
---
dummy:

ceph_rgw_docker_memory_limit: 1g
ceph_rgw_docker_cpu_limit: 1
```

Create the site-docker.yml playbook:
```bash
[root@workstation ceph-ansible]# cp site-docker.yml.sample site-docker.yml
```
Invoke the site-docker.yml playbook to deploy the Ceph cluster:
```bash
[root@workstation ceph-ansible]# ansible-playbook -i hosts site-docker.yml
```


Copy the Ceph configuration and the administrative keyring to the workstation host:
```bash
[root@workstation ~]# scp root@ceph-node01:/etc/ceph/ceph.conf /etc/ceph/
[root@workstation ~]# scp root@ceph-node01:/etc/ceph/ceph.client.admin.keyring /etc/ceph/
```

Examine the Ceph cluster status and verify access to the cluster:
```bash
[root@workstation ~]# ceph -s
```

:::info
**A Swift Storage cluster running on different nodes than Ceph, each also configured with 2 disk devices**

**An additional storage policy named FOUR should be created, requiring 4 replicas of all objects and using 6 filesystems on 3 nodes.**
:::
***Solution***
Create the ~/rsyncd.conf file.
```bash
[root@workstation-d4f5 ~]# vi ~/rsyncd.conf
```
Ensure the global section contains these entries:
```bash
##assumes 'swift' has been used as the Object Storage user/group
uid = swift
gid = swift
log file = /var/log/rsyncd.log
pid file = /var/run/rsyncd.pid
##address on which the rsync daemon listens
address = <IP>

[account]
max connections = 2
path = /srv/node/
read only = false
write only = no
list = yes
incoming chmod = Du=rwx,g=rx,o=rx,Fu=rw,g=r,o=r
outgoing chmod = Du=rwx,g=rx,o=rx,Fu=rw,g=r,o=r
lock file = /var/lock/account.lock

[container]
max connections = 2
path = /srv/node/
read only = false
write only = no
list = yes
incoming chmod = Du=rwx,g=rx,o=rx,Fu=rw,g=r,o=r
outgoing chmod = Du=rwx,g=rx,o=rx,Fu=rw,g=r,o=r
lock file = /var/lock/container.lock

[object]
max connections = 2
path = /srv/node/
read only = false
write only = no
list = yes
incoming chmod = Du=rwx,g=rx,o=rx,Fu=rw,g=r,o=r
outgoing chmod = Du=rwx,g=rx,o=rx,Fu=rw,g=r,o=r
lock file = /var/lock/object.lock
```
Copy the rsyncd.conf to all storage nodes.
```bash
[root@workstation-d4f5 ~]# grep stor swift-nodes  | xargs -i scp rsyncd.conf root@{}:/etc
```

Replace the `<IP>` placeholder in rsyncd.conf with each storage node's own address.
```bash
[root@workstation-d4f5 ~]# ssh root@stor01 "sed -i 's/<IP>/172.16.7.21/' /etc/rsyncd.conf && cat /etc/rsyncd.conf"
[root@workstation-d4f5 ~]# ssh root@stor02 "sed -i 's/<IP>/172.16.7.22/' /etc/rsyncd.conf && cat /etc/rsyncd.conf"
[root@workstation-d4f5 ~]# ssh root@stor03 "sed -i 's/<IP>/172.16.7.23/' /etc/rsyncd.conf && cat /etc/rsyncd.conf"
```
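A minimal local sketch of what each remote `sed` invocation does to the `<IP>` placeholder (hypothetical temporary file, not the deployed config):

```shell
# Hypothetical illustration: substitute the <IP> placeholder in place,
# exactly as the per-node sed commands above do on the real file.
printf 'address = <IP>\n' > /tmp/demo-rsyncd.conf
sed -i 's/<IP>/172.16.7.21/' /tmp/demo-rsyncd.conf
cat /tmp/demo-rsyncd.conf
```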

Enable and start the rsyncd system service on each Swift storage node:
```bash
[root@workstation-d4f5 ~]# grep stor swift-nodes  | xargs -i ssh root@{} "systemctl enable rsyncd"
[root@workstation-d4f5 ~]# grep stor swift-nodes  | xargs -i ssh root@{} "systemctl start rsyncd"
```

Install the software packages on the Swift storage nodes:
```bash
[root@workstation-d4f5 ~]# grep stor swift-nodes  | xargs -i ssh root@{} "yum -y install openstack-swift-object openstack-swift-container openstack-swift-account  memcached"
```
Install the software on the Swift proxy nodes:
```bash
[root@workstation-d4f5 ~]# grep proxy swift-nodes  | xargs -i ssh root@{} "yum -y install openstack-swift-proxy memcached python-memcached"
```


Create random values for Swift’s hash_prefix and hash_suffix:
```bash
[root@workstation ~]$ openssl rand -hex 8 | tee hash_prefix 
[root@workstation ~]$ openssl rand -hex 8 | tee hash_suffix
```
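`openssl rand -hex 8` emits 8 random bytes as 16 hexadecimal characters; each invocation is independent, which is why the prefix and suffix are generated separately:

```shell
# 8 random bytes -> 16 hex characters; every run yields a new value.
openssl rand -hex 8
openssl rand -hex 8
```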

Create swift.conf.
```bash
[root@workstation-d4f5 ~]# vi swift.conf
```

Use the following content, replacing the prefix and suffix with your own values generated above:
```bash
[swift-hash]
swift_hash_path_prefix = e693e289a2e716a0
swift_hash_path_suffix = 47910b9fd8c8401f

[storage-policy:0]
name = policy-0
policy_type = replication
default = yes

[storage-policy:1]
name = FOUR
policy_type = replication

```
Copy the swift.conf file to /etc/swift/swift.conf at each node in the Swift cluster.
```bash
[root@workstation ~]# cat swift-nodes | xargs -I {} -P 6 scp swift.conf root@{}:/etc/swift/swift.conf
```
Change the bind_ip in container-server.conf, account-server.conf and object-server.conf on all storage nodes.
```bash
[root@workstation-d4f5 ~]# for n in container-server.conf account-server.conf object-server.conf;do ssh root@stor01 "sed -i 's/127.0.0.1/172.16.7.21/' /etc/swift/${n} && cat /etc/swift/${n}";done

[root@workstation-d4f5 ~]# for n in container-server.conf account-server.conf object-server.conf;do ssh root@stor02 "sed -i 's/127.0.0.1/172.16.7.22/' /etc/swift/${n} && cat /etc/swift/${n}";done

[root@workstation-d4f5 ~]# for n in container-server.conf account-server.conf object-server.conf;do ssh root@stor03 "sed -i 's/127.0.0.1/172.16.7.23/' /etc/swift/${n} && cat /etc/swift/${n}";done
```

Install the python-swift package:
```bash
[root@workstation ~]$ yum -y install python-swift
```

Create a directory for creating ring files:
```bash
[root@workstation ~]$ mkdir ~/swift-rings/
[root@workstation ~]$ cd ~/swift-rings/
```
Create the ring builder files: account, container and object, plus object-1 for the FOUR storage policy (4 replicas).
```bash
[root@workstation-d4f5 swift-rings]# swift-ring-builder ./account.builder create 10 3 0
[root@workstation-d4f5 swift-rings]# swift-ring-builder ./container.builder create 10 3 0
[root@workstation-d4f5 swift-rings]# swift-ring-builder ./object.builder create 10 3 0
[root@workstation-d4f5 swift-rings]# swift-ring-builder ./object-1.builder create 10 4 0
```
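In `swift-ring-builder <file> create <part_power> <replicas> <min_part_hours>`, the first argument is the partition power (10 gives 2^10 = 1024 partitions), the second is the replica count (3 for the default rings, 4 for the FOUR policy's object-1 ring), and the third is the minimum hours between partition moves (0 permits immediate rebalancing). The partition count can be sketched as:

```shell
# part_power determines the number of ring partitions: 2^part_power.
part_power=10
awk -v p="$part_power" 'BEGIN { print 2^p }'   # 1024 partitions
```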

Add devices to the account service ring, 2 disks for each storage node:
```bash
swift-ring-builder ./account.builder add z1-172.16.7.21:6202/b1 100
swift-ring-builder ./account.builder add z1-172.16.7.21:6202/c1 100
swift-ring-builder ./account.builder add z1-172.16.7.22:6202/b1 100
swift-ring-builder ./account.builder add z1-172.16.7.22:6202/c1 100
swift-ring-builder ./account.builder add z1-172.16.7.23:6202/b1 100
swift-ring-builder ./account.builder add z1-172.16.7.23:6202/c1 100
```
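Each `add` argument packs the placement information as `z<zone>-<IP>:<port>/<device>`, followed by a weight. A throwaway parse of one entry makes the fields visible (illustration only, not part of the procedure):

```shell
# Split one ring entry on '-', ':' and '/' to expose
# the zone, IP, port and device name.
echo 'z1-172.16.7.21:6202/b1' | awk -F'[-:/]' '{ print "zone="$1, "ip="$2, "port="$3, "dev="$4 }'
```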
Add devices to the container service ring, 2 disks for each storage node:
```bash
swift-ring-builder ./container.builder add z1-172.16.7.21:6201/b1 100
swift-ring-builder ./container.builder add z1-172.16.7.21:6201/c1 100
swift-ring-builder ./container.builder add z1-172.16.7.22:6201/b1 100
swift-ring-builder ./container.builder add z1-172.16.7.22:6201/c1 100
swift-ring-builder ./container.builder add z1-172.16.7.23:6201/b1 100
swift-ring-builder ./container.builder add z1-172.16.7.23:6201/c1 100
```
Add devices to the object service ring, 2 disks for each storage node:
```bash
swift-ring-builder ./object.builder add z1-172.16.7.21:6200/b1 100
swift-ring-builder ./object.builder add z1-172.16.7.21:6200/c1 100
swift-ring-builder ./object.builder add z1-172.16.7.22:6200/b1 100
swift-ring-builder ./object.builder add z1-172.16.7.22:6200/c1 100
swift-ring-builder ./object.builder add z1-172.16.7.23:6200/b1 100
swift-ring-builder ./object.builder add z1-172.16.7.23:6200/c1 100

swift-ring-builder ./object-1.builder add z1-172.16.7.21:6200/b1 100
swift-ring-builder ./object-1.builder add z1-172.16.7.21:6200/c1 100
swift-ring-builder ./object-1.builder add z1-172.16.7.22:6200/b1 100
swift-ring-builder ./object-1.builder add z1-172.16.7.22:6200/c1 100
swift-ring-builder ./object-1.builder add z1-172.16.7.23:6200/b1 100
swift-ring-builder ./object-1.builder add z1-172.16.7.23:6200/c1 100
```

Create the binary ring files by rebalancing each builder:
```bash
swift-ring-builder account.builder rebalance
swift-ring-builder container.builder rebalance
swift-ring-builder object.builder rebalance
swift-ring-builder object-1.builder rebalance
```

Distribute the ring files to all Swift storage nodes:
```bash
[root@workstation-d4f5 swift-rings]# grep stor ~/swift-nodes | xargs -i scp -r * root@{}:/etc/swift/
```

Enable the Swift account, container and object server services:
```bash
[root@workstation-d4f5 swift-rings]# grep stor ~/swift-nodes | xargs -i ssh root@{} systemctl enable openstack-swift-account openstack-swift-container openstack-swift-object
```
Start the Swift account, container and object server services:
```bash
[root@workstation-d4f5 swift-rings]# grep stor ~/swift-nodes | xargs -i ssh root@{} systemctl start openstack-swift-account openstack-swift-container openstack-swift-object
```

:::info
**Two Swift proxy servers running on nodes proxy01 & proxy02**
:::

***Solution***
Install the Swift client software on the workstation node:
```bash
[root@workstation-d4f5 swift-rings]$ yum -y install python-swiftclient
```

Distribute the ring files to the Swift proxy nodes:
```bash
[root@workstation swift-rings]$ scp -r * root@proxy01:/etc/swift/
[root@workstation swift-rings]$ scp -r * root@proxy02:/etc/swift/
```

We will modify proxy-server.conf on the proxy nodes using a patch. Create the proxy-server.conf.patch file:
```bash
[root@workstation-d4f5 ~]# vi proxy-server.conf.patch
```

Put the following contents.
```diff=
--- proxy-server.conf.orig	2019-11-25 21:04:24.068917360 -0500
+++ proxy-server.conf	2019-11-25 21:06:11.521459471 -0500
@@ -8,7 +8,7 @@
 # open to access by any client. This is almost always a very bad idea, and
 # it's overridden by OSP Director, so it is likely to go away some time
 # after Newton.
-pipeline = catch_errors gatekeeper healthcheck proxy-logging cache container_sync bulk tempurl ratelimit copy container-quotas account-quotas slo dlo versioned_writes proxy-logging proxy-server
+#pipeline = catch_errors gatekeeper healthcheck proxy-logging cache container_sync bulk tempurl ratelimit copy container-quotas account-quotas slo dlo versioned_writes proxy-logging proxy-server
 
 # This sample pipeline uses tempauth and is used for SAIO dev work and
 # testing. See below for a pipeline using keystone.
@@ -17,7 +17,7 @@
 # The following pipeline shows keystone integration. Comment out the one
 # above and uncomment this one. Additional steps for integrating keystone are
 # covered further below in the filter sections for authtoken and keystoneauth.
-#pipeline = catch_errors gatekeeper healthcheck proxy-logging cache container_sync bulk tempurl ratelimit authtoken keystone copy container-quotas account-quotas slo dlo versioned_writes proxy-logging proxy-server
+pipeline = catch_errors gatekeeper healthcheck proxy-logging cache container_sync bulk tempurl ratelimit authtoken keystone copy container-quotas account-quotas slo dlo versioned_writes proxy-logging proxy-server
 
 [app:proxy-server]
 use = egg:swift#proxy
@@ -39,7 +39,7 @@
 
 [filter:cache]
 use = egg:swift#memcache
-memcache_servers = 127.0.0.1:11211
+memcache_servers = 172.16.7.50:11211
 
 [filter:ratelimit]
 use = egg:swift#ratelimit
@@ -88,10 +88,13 @@
 
 [filter:authtoken]
 paste.filter_factory = keystonemiddleware.auth_token:filter_factory
-admin_tenant_name = %SERVICE_TENANT_NAME%
-admin_user = %SERVICE_USER%
-admin_password = %SERVICE_PASSWORD%
-auth_host = 127.0.0.1
-auth_port = 35357
-auth_protocol = http
-signing_dir = /tmp/keystone-signing-swift
+auth_uri = http://172.16.7.50:5000
+auth_url = http://172.16.7.50:35357
+memcached_servers = 172.16.7.50:11211
+auth_type = password
+project_domain_id = default
+user_domain_id = default
+project_name = services
+username = swift
+password = r3dh4t1!
+delay_auth_decision = True
```

Copy the patch to all proxy servers.
```bash
[root@workstation-d4f5 ~]# for n in proxy01 proxy02;do scp proxy-server.conf.patch ${n}:/etc/swift/;done
```

Install the patch command on both proxy servers.
```bash
[root@workstation-d4f5 ~]# for n in proxy01 proxy02;do ssh ${n} "yum -y install patch";done
```


Apply the patch to the proxy-server.conf file on both proxy servers.
```bash
[root@workstation-d4f5 ~]# for n in proxy01 proxy02;do ssh ${n} "patch /etc/swift/proxy-server.conf < /etc/swift/proxy-server.conf.patch";done
```
Enable memcached.service and openstack-swift-proxy.service.
```bash
[root@workstation-d4f5 ~]# for n in proxy01 proxy02;do ssh ${n} "systemctl enable memcached openstack-swift-proxy";done
```

Start both services.
```bash
[root@workstation-d4f5 ~]# for n in proxy01 proxy02;do ssh ${n} "systemctl start memcached openstack-swift-proxy";done
```

**Configure HA Proxy**
Connect to haproxy01
```bash
[root@workstation ~]# scp /etc/hosts haproxy01:/etc/hosts
[root@workstation ~]# ssh root@haproxy01
```
Install haproxy package
```bash
[root@haproxy01 ~]# yum install -y haproxy
```
Back up the original haproxy.cfg
```bash
[root@haproxy01 ~]# mv /etc/haproxy/haproxy.cfg /etc/haproxy/haproxy.cfg.bak
```

Configure the file /etc/haproxy/haproxy.cfg with the following content.
```bash
[root@haproxy01 ~]# vi /etc/haproxy/haproxy.cfg
```
```bash
global
    maxconn 4096
    user haproxy
    group haproxy
    daemon

defaults
    log     global
    mode    http
    option  httplog
    option  dontlognull
    retries 3
    option  redispatch
    maxconn 2000
    timeout connect 5000
    timeout client  50000
    timeout server  50000

listen swift-cluster
    bind    192.168.56.80:8080
    mode    http
    stats   enable
    stats   auth username:password
    balance roundrobin
    option  httpchk HEAD /healthcheck HTTP/1.0
    option  forwardfor
    option  http-server-close
    timeout http-keep-alive 500
    server  proxy01 192.168.56.24:8080 weight 5 check inter 2000
    server  proxy02 192.168.56.25:8080 weight 5 check inter 2000
```

Enable and start the haproxy service
```bash
[root@haproxy01 ~]# systemctl enable haproxy --now
```

Go back to the workstation and authenticate as swift-user.
```bash
[root@workstation-d4f5 ~]# source keystonerc_user 
[root@workstation-d4f5 ~(swift-user)]# 
```

Find out the Swift auth token for swift-user. Note that this is not the Swift that ships with the OpenStack controller. Copy the AUTH portion of OS_STORAGE_URL (yours will be different).
```bash
[root@workstation-d4f5 ~(swift-user)]# swift auth
export OS_STORAGE_URL=http://172.16.7.50:8080/v1/AUTH_bc0d92bdcf264f95b2be622e72e9eb8f
export OS_AUTH_TOKEN=69dc6688cddf49c49f682f4caece978a
[root@workstation-d4f5 ~(swift-user)]#
```

Edit keystonerc_user for swift-user.
```bash
[root@workstation-d4f5 ~(swift-user)]# vi keystonerc_user
```

Add an OS_STORAGE_URL line with your AUTH value to the end of the file as shown below. Note that the IP is different: it now points at the Swift cluster.
```bash
export OS_USERNAME=swift-user
export OS_PASSWORD=r3dh4t1!
export OS_AUTH_URL=http://172.16.7.50:5000/v3
export PS1='[\u@\h \W(swift-user)]\$ '
export OS_PROJECT_NAME=swift-project
export OS_REGION_NAME=RegionOne
export OS_USER_DOMAIN_NAME=Default
export OS_PROJECT_DOMAIN_NAME=Default
export OS_IDENTITY_API_VERSION=3
export OS_STORAGE_URL=http://192.168.56.80:8080/v1/AUTH_bc0d92bdcf264f95b2be622e72e9eb8f

```
Assign the SwiftOperator role to swift-user in the swift-project project:
```bash
[root@workstation-d4f5 ~(swift-user)]# source keystonerc_admin 
[root@workstation-d4f5 ~(keystone_admin)]# openstack role add --user swift-user --project swift-project SwiftOperator
```

Test that Swift is working properly.
```bash
[root@workstation-d4f5 ~(swift-user)]# swift auth
export OS_STORAGE_URL=http://192.168.56.80:8080/v1/AUTH_bc0d92bdcf264f95b2be622e72e9eb8f
export OS_AUTH_TOKEN=1a61712a8bdd482c8527ef5145d9f021
[root@workstation-d4f5 ~(swift-user)]# swift stat
               Account: AUTH_bc0d92bdcf264f95b2be622e72e9eb8f
            Containers: 0
               Objects: 0
                 Bytes: 0
       X-Put-Timestamp: 1574738975.77705
           X-Timestamp: 1574738975.77705
            X-Trans-Id: tx9bd7be26d03840c5b72c0-005ddc9c1e
          Content-Type: text/plain; charset=utf-8
X-Openstack-Request-Id: tx9bd7be26d03840c5b72c0-005ddc9c1e
[root@workstation-d4f5 ~(swift-user)]#
```

:::info
A container named OSPAS should be created that uses storage policy policy-0
:::
***Solution***
Authenticate as swift-user and issue the following command.
```bash
[root@workstation-d4f5 ~(swift-user)]# swift post OSPAS
```

Verify that it uses policy-0.
```bash
[root@workstation-d4f5 ~(swift-user)]# swift stat OSPAS
               Account: AUTH_bc0d92bdcf264f95b2be622e72e9eb8f
             Container: OSPAS
               Objects: 0
                 Bytes: 0
              Read ACL:
             Write ACL:
               Sync To:
              Sync Key:
         Accept-Ranges: bytes
      X-Storage-Policy: policy-0
         Last-Modified: Tue, 26 Nov 2019 03:32:29 GMT
           X-Timestamp: 1574739148.74637
            X-Trans-Id: txa2ca3f49cb7a4db8becce-005ddc9d76
          Content-Type: application/json; charset=utf-8
X-Openstack-Request-Id: txa2ca3f49cb7a4db8becce-005ddc9d76
[root@workstation-d4f5 ~(swift-user)]# 
```

:::info
A swiftrc file should be created in the root user’s home directory on the workstation node so that the swift-user OpenStack account is able to write objects into a container called DATA that uses the Swift storage policy FOUR
:::
***Solution***
Create the container DATA with policy FOUR.
```bash
[root@workstation-d4f5 ~(swift-user)]# source keystonerc_user 
[root@workstation-d4f5 ~(swift-user)]# swift post --header 'X-Storage-Policy:FOUR' DATA
```

Copy the keystonerc_user to swiftrc
```bash
[root@workstation-d4f5 ~(swift-user)]# cp keystonerc_user swiftrc
```

:::info
**Cinder should be configured with 3 backend storage pools, all of the type RBD**
**Volumes >10gb should Only be created in the cinder-large storage pool**
**Volumes <5gb should Only be created in the cinder-small storage pool**
**The third storage pool should be named cinder-med**
**Volumes >5gb & <10gb should be spread evenly across all 3 storage pools**
**Ensure that the grading script can create 5 volumes to evaluate the cinder scheduler configuration**
**Cinder should NOT be dependent on a local volume group on the controller node to start**
**Cinder backups should be configured use a NFS share from the server storage**
:::
***Solution***

Use RBD as the backend instead of LVM.

From the workstation node, create all three storage pools in the Ceph cluster using 128 placement groups (PGs) each. They are named large-vol, med-vol, and small-vol.
```bash
ceph osd pool create large-vol 128
ceph osd pool create med-vol 128
ceph osd pool create small-vol 128
```
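The 128-PG figure follows the common Ceph rule of thumb of roughly (number of OSDs × 100) / replica count, rounded up to the next power of two. A minimal local sketch, assuming a hypothetical cluster with 3 OSDs and 3 replicas (these counts are assumptions, not values taken from the lab):

```bash
# Rule-of-thumb PG count: (OSDs * 100) / replicas, rounded up to a power of two.
# The OSD and replica counts below are assumptions for illustration only.
osds=3
replicas=3
target=$(( osds * 100 / replicas ))   # 100 for these inputs
pg=1
while [ "$pg" -lt "$target" ]; do
    pg=$(( pg * 2 ))
done
echo "$pg"   # prints 128 for these inputs
```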

Create one CephX user to access each pool. They are cinder-l, cinder-m, and cinder-s.
```bash
ceph auth get-or-create client.cinder-l mon 'allow r' osd 'allow class-read object_prefix rbd_children, allow rwx pool=large-vol' -o /etc/ceph/ceph.client.cinder-l.keyring
ceph auth get-or-create client.cinder-m mon 'allow r' osd 'allow class-read object_prefix rbd_children, allow rwx pool=med-vol' -o /etc/ceph/ceph.client.cinder-m.keyring
ceph auth get-or-create client.cinder-s mon 'allow r' osd 'allow class-read object_prefix rbd_children, allow rwx pool=small-vol' -o /etc/ceph/ceph.client.cinder-s.keyring
```
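Each `-o` file written by `ceph auth get-or-create` is a small keyring fragment naming the client and its key. For example, `ceph.client.cinder-l.keyring` looks like this (the key value below is a placeholder, not a real key):

```ini
[client.cinder-l]
	key = <base64-cephx-key>
```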

Copy the cinder users’ keyring files to the /etc/ceph directory on ctrl01 to distribute the keyrings:
```bash
[root@workstation ~]# ssh root@ctrl01 yum -y install ceph-common
[root@workstation ~]# scp /etc/ceph/ceph.client.cinder-*.keyring root@ctrl01:/etc/ceph/
```

Retrieve each cinder user’s key and save it to a file on ctrl01.
```bash
ceph auth get-key client.cinder-l | ssh ctrl01 tee ./client.cinder-l.key
ceph auth get-key client.cinder-m | ssh ctrl01 tee ./client.cinder-m.key
ceph auth get-key client.cinder-s | ssh ctrl01 tee ./client.cinder-s.key
```

Distribute the /etc/ceph/ceph.conf file to the ctrl01 node:
```bash
[root@workstation ~]# scp /etc/ceph/ceph.conf root@ctrl01:/etc/ceph
```

From the ctrl01 server, view the Ceph status to verify that you can access the cluster as the cinder user:
```bash
[root@workstation ~]# ssh ctrl01
[root@ctrl01 ~]# ceph --id cinder-l -s
```

On ctrl01, set the ceph.client.cinder-*.keyring file group to cinder to enable Cinder access to the keyring, and update the file permissions to allow group read permissions:
```bash
[root@ctrl01 ~]# chgrp cinder /etc/ceph/ceph.client.cinder-*.keyring
[root@ctrl01 ~]# chmod 0640 /etc/ceph/ceph.client.cinder-*.keyring
```

Generate a UUID and save it in myuuid.txt:
```bash
[root@ctrl01 ~]# uuidgen | tee ~/myuuid.txt
```

Edit /etc/cinder/cinder.conf to define the three RBD backend sections, using the UUID previously generated for the rbd_secret_uuid parameter, as the following sample fragment shows:
```bash
[root@ctrl01 ~]# vi /etc/cinder/cinder.conf
```
Sample fragment ==(Note: replace the rbd_secret_uuid value with the UUID you generated earlier)==
```bash
[DEFAULT]
...
default_volume_type = cinder-large
enabled_backends = cinder-large,cinder-med,cinder-small
...
[cinder-large]
backend_host = rbd:cinder
volume_driver = cinder.volume.drivers.rbd.RBDDriver
volume_backend_name = ceph
rbd_ceph_conf = /etc/ceph/ceph.conf
rbd_pool = large-vol
rbd_user = cinder-l
rbd_secret_uuid = b3af1703-1e55-4139-a274-5d370ee35528
rbd_max_clone_depth = 5
rbd_store_chunk_size = 4
rbd_flatten_volume_from_snapshot = false
filter_function = "5 <= volume.size"

[cinder-med]
backend_host = rbd:cinder
volume_driver = cinder.volume.drivers.rbd.RBDDriver
volume_backend_name = ceph
rbd_ceph_conf = /etc/ceph/ceph.conf
rbd_pool = med-vol
rbd_user = cinder-m
rbd_secret_uuid = b3af1703-1e55-4139-a274-5d370ee35528
rbd_max_clone_depth = 5
rbd_store_chunk_size = 4
rbd_flatten_volume_from_snapshot = false
filter_function = "(5 <= volume.size) and (volume.size < 10)"

[cinder-small]
backend_host = rbd:cinder
volume_driver = cinder.volume.drivers.rbd.RBDDriver
volume_backend_name = ceph
rbd_ceph_conf = /etc/ceph/ceph.conf
rbd_pool = small-vol
rbd_user = cinder-s
rbd_secret_uuid = b3af1703-1e55-4139-a274-5d370ee35528
rbd_max_clone_depth = 5
rbd_store_chunk_size = 4
rbd_flatten_volume_from_snapshot = false
filter_function = "volume.size < 10"
```
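The three filter_function expressions above implement the size-routing requirement. A small local sketch (plain shell, not Cinder code) that mirrors those expressions and shows which backends accept a given volume size:

```bash
# Mirror the filter_function expressions from the cinder.conf sample above.
backends_for() {
    size=$1
    accepted=""
    [ "$size" -ge 5 ] && accepted="$accepted cinder-large"                   # 5 <= volume.size
    [ "$size" -ge 5 ] && [ "$size" -lt 10 ] && accepted="$accepted cinder-med"
    [ "$size" -lt 10 ] && accepted="$accepted cinder-small"                  # volume.size < 10
    echo $accepted   # unquoted so the leading space is dropped
}
backends_for 3    # prints: cinder-small
backends_for 7    # prints: cinder-large cinder-med cinder-small
backends_for 12   # prints: cinder-large
```

A 12 GB volume therefore lands only in cinder-large, a 3 GB volume only in cinder-small, and a 7 GB volume can be scheduled to any of the three pools, which matches the stated criteria.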

Go back to the workstation node, authenticate as the Keystone admin, then create the new volume types.
```bash
[root@workstation-d4f5 ~]# source keystonerc_admin 
[root@workstation-d4f5 ~(keystone_admin)]# openstack volume type create cinder-large
[root@workstation-d4f5 ~(keystone_admin)]# openstack volume type create cinder-med
[root@workstation-d4f5 ~(keystone_admin)]# openstack volume type create cinder-small
```

Restart both the cinder-api and cinder-volume services so that Cinder starts using the RBD back ends:
```bash
[root@workstation-d4f5 ~]# ssh ctrl01 systemctl restart openstack-cinder-api  openstack-cinder-volume
```


Create a file named ceph.xml for virsh to use to create a libvirt secret: ==(replace the UUID with the one you generated)==
```bash
[root@ctrl01 ~]# vi ceph.xml
```
```xml
<secret ephemeral="no" private="no">
  <uuid>bdbed7ec-9c95-42d9-8f52-bfc4b5baaa8e</uuid>
  <usage type="ceph">
    <name>cinder secret</name>
  </usage>
</secret>
```

Distribute ceph.xml to each compute node:
```bash
[root@ctrl01 ~]# scp ceph.xml root@compute01:~/
[root@ctrl01 ~]# scp ceph.xml root@compute02:~/
```

Use virsh to define a libvirt secret on each compute node from the ceph.xml file created above:
```bash
[root@ctrl01 ~]# ssh compute01 virsh secret-define --file ~/ceph.xml
[root@ctrl01 ~]# ssh compute02 virsh secret-define --file ~/ceph.xml
```

Copy the cinder clients’ key files, created in a previous step, to the compute nodes:
```bash
[root@ctrl01 ~]# scp client.cinder-*.key root@compute01:~/
[root@ctrl01 ~]# scp client.cinder-*.key root@compute02:~/
```

Using the UUID generated previously, assign the proper value to the libvirt secret:
```bash
ssh compute01 virsh secret-set-value --secret $(cat myuuid.txt) --base64 $(cat ~/client.cinder-l.key)
ssh compute01 virsh secret-set-value --secret $(cat myuuid.txt) --base64 $(cat ~/client.cinder-m.key)
ssh compute01 virsh secret-set-value --secret $(cat myuuid.txt) --base64 $(cat ~/client.cinder-s.key)
ssh compute02 virsh secret-set-value --secret $(cat myuuid.txt) --base64 $(cat ~/client.cinder-l.key)
ssh compute02 virsh secret-set-value --secret $(cat myuuid.txt) --base64 $(cat ~/client.cinder-m.key)
ssh compute02 virsh secret-set-value --secret $(cat myuuid.txt) --base64 $(cat ~/client.cinder-s.key)
```
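Because all three `secret-set-value` calls above target the same secret UUID, each call overwrites the previous key, so only the cinder-s key remains active in the end. A stricter variant would define one libvirt secret per cinder user, each with its own UUID matching the rbd_secret_uuid of the corresponding backend in cinder.conf. A sketch of generating the per-user secret XML files (the virsh secret-define/secret-set-value steps and the cinder.conf changes are omitted; UUIDs are generated fresh here):

```bash
# Sketch: one libvirt secret XML per cinder user (hypothetical variant,
# not the single-secret layout used in the solution above).
for user in cinder-l cinder-m cinder-s; do
    uuid=$(uuidgen)
    cat > /tmp/ceph-$user.xml <<EOF
<secret ephemeral="no" private="no">
  <uuid>$uuid</uuid>
  <usage type="ceph">
    <name>client.$user secret</name>
  </usage>
</secret>
EOF
    # Record this UUID: it must become rbd_secret_uuid in the matching
    # backend section of cinder.conf.
    echo "$user -> $uuid"
done
```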

Configure NFS for Cinder backups.

Connect to the storage node.
```bash
[root@ctrl01 ~(keystone_admin)]# ssh storage
```
Install the NFS server packages.
```bash
[root@storage ~]# yum install nfs-utils -y
```

Enable the NFS service and open the firewall.
```bash
systemctl enable nfs-server --now
firewall-cmd --add-service=nfs --permanent
firewall-cmd --add-service=rpc-bind --permanent
firewall-cmd --add-service=mountd --permanent
firewall-cmd --reload
```
Partition the disk, create the filesystem, and configure the export. Note that the /exports mount point must exist before `mount -a` runs.
```bash
parted -s -a optimal /dev/vdb mklabel gpt mkpart primary 0% 30G
mkfs.ext4 /dev/vdb1
mkdir -p /exports
echo "/dev/vdb1 /exports ext4 defaults 0 0" >> /etc/fstab
mount -a
echo "/exports 172.16.7.0/24(rw,no_root_squash)" >> /etc/exports
exportfs -a
```



Go back to ctrl01 and install crudini so the configuration files can be changed from the command line.
```bash
[root@ctrl01 ~]# yum install -y crudini
```

Configure Cinder to use the NFS share on the storage node for backups.
```bash
echo "storage:/exports" > /etc/cinder/nfs_shares
chown root:cinder /etc/cinder/nfs_shares
chmod 0640 /etc/cinder/nfs_shares
crudini --set /etc/cinder/cinder.conf DEFAULT nfs_shares_config /etc/cinder/nfs_shares
crudini --set /etc/cinder/cinder.conf DEFAULT backup_driver cinder.backup.drivers.nfs
# The NFS backup driver locates its share via backup_share
crudini --set /etc/cinder/cinder.conf DEFAULT backup_share storage:/exports
```
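For reference, the resulting [DEFAULT] entries in /etc/cinder/cinder.conf look like this sketch. Note that backup_share, not nfs_shares_config, is the option the NFS backup driver reads its export location from (nfs_shares_config belongs to the NFS volume driver):

```ini
[DEFAULT]
nfs_shares_config = /etc/cinder/nfs_shares
backup_driver = cinder.backup.drivers.nfs
backup_share = storage:/exports
```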

Enable the SELinux boolean that allows virtualization services to use NFS, on all nodes.
```bash
[root@ctrl01 cinder(keystone_admin)]# setsebool -P virt_use_nfs on
[root@ctrl01 cinder(keystone_admin)]# ssh compute01 setsebool -P virt_use_nfs on
[root@ctrl01 cinder(keystone_admin)]# ssh compute02 setsebool -P virt_use_nfs on
```

Restart the Cinder services and get the list of the storage pools.
```bash
[root@ctrl01 cinder(keystone_admin)]# systemctl restart openstack-cinder-volume.service openstack-cinder-backup.service
[root@ctrl01 cinder(keystone_admin)]# cinder get-pools
```

:::info
**Glance should be configured to use a Ceph RBD storage pool called "images"**
:::
***Solution***
Create a pool using the ceph utility.
```bash
[root@workstation ~]# ceph osd pool create images 128
```

Create a CephX user for glance
```bash
[root@workstation-d4f5 ~(keystone_admin)]# ceph auth get-or-create client.glance mon 'allow r' osd 'allow class-read object_prefix rbd_children, allow rwx pool=images' -o /etc/ceph/ceph.client.glance.keyring
```

Copy the keyring file to ctrl01 and set its ownership and permissions
```bash
[root@workstation ~]# scp /etc/ceph/ceph.client.glance.keyring root@ctrl01:/etc/ceph
[root@workstation ~]# ssh root@ctrl01 chown root:glance /etc/ceph/ceph.client.glance.keyring
[root@workstation ~]# ssh root@ctrl01 chmod 640 /etc/ceph/ceph.client.glance.keyring
```

Go to ctrl01 and configure Glance.
```bash
[root@workstation-d4f5 ~]# ssh ctrl01
```

Configure glance-api.conf to use RBD
```bash
crudini --set /etc/glance/glance-api.conf DEFAULT default_store rbd
crudini --set /etc/glance/glance-api.conf glance_store default_store rbd
crudini --set /etc/glance/glance-api.conf glance_store stores rbd
crudini --set /etc/glance/glance-api.conf glance_store rbd_store_pool images
crudini --set /etc/glance/glance-api.conf glance_store rbd_store_user glance
crudini --set /etc/glance/glance-api.conf glance_store rbd_store_ceph_conf /etc/ceph/ceph.conf
```
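For hand-editing, the crudini commands above correspond to this glance-api.conf fragment:

```ini
[DEFAULT]
default_store = rbd

[glance_store]
default_store = rbd
stores = rbd
rbd_store_pool = images
rbd_store_user = glance
rbd_store_ceph_conf = /etc/ceph/ceph.conf
```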

Restart openstack-glance-api service
```bash
[root@ctrl01 ~(keystone_admin)]# systemctl restart openstack-glance-api
```

:::info
**OpenStack should be configured to use the Swift Cluster running on stor01 - stor03, NOT a local Swift AIO on the controller node**
:::
***Solution***
Already configured via the keystonerc_user and swiftrc files.

:::info
**SELinux should be enabled on all nodes**
:::
***Solution***
It is enabled by default. To verify, run the following command from the workstation node.
```bash
[root@workstation-d4f5 ~]# for HOST in `grep 172 /etc/hosts | awk '{print $3}'`; do echo -e \\n----$HOST; ssh root@$HOST sestatus; done
```
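The loop above derives the host list from /etc/hosts: `grep 172` selects the lab’s 172.* entries and `awk '{print $3}'` takes the third field, the short host alias. A self-contained sketch of that extraction with a made-up hosts file (the addresses and names below are hypothetical; the real /etc/hosts differs):

```bash
# Demonstrate the host-list extraction on a sample hosts file.
cat > /tmp/hosts.sample <<'EOF'
127.0.0.1     localhost
172.16.7.10   ctrl01.example.com    ctrl01
172.16.7.21   compute01.example.com compute01
EOF
grep 172 /tmp/hosts.sample | awk '{print $3}'
# prints:
#   ctrl01
#   compute01
```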
