Adv. Storage Homework

Acme Consolidated has engaged Red Hat to implement an OpenStack environment. They have defined the following implementation criteria:

A single RHOSP 13 Controller Node running the core RHOSP services, along with two RHOSP 13 Compute nodes

  • The workstation node root user should have a properly configured keystonerc_admin file for administrative access to the OpenStack cluster
root@workstation-f8c3 ~(keystone_admin)$ cat keystonerc_admin
unset OS_SERVICE_TOKEN
export OS_USERNAME=admin
export OS_PASSWORD='r3dh4t1!'
export OS_AUTH_URL=http://172.16.7.50:5000/v3
export PS1='[\u@\h \W(keystone_admin)]\$ '
export OS_PROJECT_NAME=admin
export OS_USER_DOMAIN_NAME=Default
export OS_PROJECT_DOMAIN_NAME=Default
export OS_IDENTITY_API_VERSION=3
  • There should be a user named swift-user in a new project named swift-project created in OpenStack
# Create the OpenStack project and user, then grant the admin role
openstack project create --description "Homework Tenant" swift-project
openstack user create --project swift-project --password 'r3dh4t1!' swift-user
openstack role add --project swift-project --user swift-user admin
  • The workstation node root user should have a properly configured keystonerc_user file for access to the OpenStack cluster as swift-user
root@workstation-f8c3 ~(keystone_admin)$ cat rc_swift-user
export OS_USERNAME=swift-user
export OS_PASSWORD='r3dh4t1!'
export OS_AUTH_URL=http://172.16.7.50:5000/v3
export PS1='[\u@\h \W(swift-user)]\$ '
export OS_PROJECT_NAME=swift-project
export OS_REGION_NAME=RegionOne
export OS_USER_DOMAIN_NAME=Default
export OS_PROJECT_DOMAIN_NAME=Default
export OS_IDENTITY_API_VERSION=3

## Validation
root@workstation-f8c3 ~(keystone_admin)# source rc_swift-user
root@workstation-f8c3 ~(swift-user)# openstack server list
root@workstation-f8c3 ~(swift-user)# openstack service list
  • The workstation node should have the python-openstackclient package installed
yum install -y python-openstackclient

A Ceph Storage cluster running on 3 nodes, each containing 2 disk devices

  • The three Ceph MON nodes should be configured on different nodes from the OSD nodes
cat /usr/share/ceph-ansible/hosts
[mons]
ceph-mon0[1:3]

[mgrs]
ceph-mon0[1:3]

[osds]
ceph-node0[1:3]

[rgws]
ceph-mon0[1:3]

[mdss]
ceph-mon0[1:3]

[nfss]
ceph-mon0[1:3]
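
With the inventory above in place, the cluster can be deployed with ceph-ansible. A minimal sketch, assuming the group_vars files (all.yml, osds.yml, rgws.yml) already define the monitor/public network, the two OSD devices per node, and the RGW frontend:

cd /usr/share/ceph-ansible
cp site.yml.sample site.yml
ansible-playbook -i hosts site.yml

# Verify cluster health and that all 6 OSDs are up
ssh ceph-mon01 ceph -s
ssh ceph-mon01 ceph osd tree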
  • Ceph should be configured so that your OPENTLC account has administrative access to Ceph
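One way to meet this requirement (a sketch; the user name opentlc-user below is a placeholder for your actual OPENTLC account) is to create a Ceph client keyring with full capabilities and copy it to whichever node needs admin access:

# On a MON node
ceph auth get-or-create client.opentlc-user \
  mon 'allow *' osd 'allow *' mgr 'allow *' mds 'allow *' \
  -o /etc/ceph/ceph.client.opentlc-user.keyring

# After copying the keyring and /etc/ceph/ceph.conf to the target node
ceph --id opentlc-user -s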
  • A RADOS Gateway should be running on the MON nodes
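The [rgws] group in the inventory above places a RADOS Gateway on each MON node. A quick check, assuming the ceph-ansible default civetweb port of 8080:

curl -s http://ceph-mon01:8080
# An anonymous ListAllMyBucketsResult XML response shows the gateway is answering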
  • A Swift Storage cluster running on different nodes from the Ceph cluster, each also configured with 2 disk devices
    • An additional storage policy named FOUR should be created, requiring 4 replicas of all objects and using 6 filesystems on 3 nodes.
    • A container named OSPAS should be created that uses storage policy policy-0
    • Two Swift proxy servers running on nodes proxy01 & proxy02
    • A swiftrc file should be created in the root user's home directory on the workstation node so that the swift-user OpenStack account can write objects into a container called DATA that uses the Swift storage policy FOUR
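The following is a sketch of the Swift work, assuming the rings are built on proxy01, the six object devices are mounted as d1 and d2 under /srv/node on stor01-stor03, python-swiftclient is installed on workstation, and the default policy-0 rings already exist; the policy index, ports, device names, and weights are illustrative:

# /etc/swift/swift.conf - declare the extra policy (index 1 assumed free)
[storage-policy:1]
name = FOUR
policy_type = replication

# Build the object-1 ring with 4 replicas across the 6 devices, then rebalance
cd /etc/swift
swift-ring-builder object-1.builder create 10 4 1
swift-ring-builder object-1.builder add r1z1-stor01:6200/d1 100
swift-ring-builder object-1.builder add r1z1-stor01:6200/d2 100
swift-ring-builder object-1.builder add r1z1-stor02:6200/d1 100
swift-ring-builder object-1.builder add r1z1-stor02:6200/d2 100
swift-ring-builder object-1.builder add r1z1-stor03:6200/d1 100
swift-ring-builder object-1.builder add r1z1-stor03:6200/d2 100
swift-ring-builder object-1.builder rebalance
# Copy object-1.ring.gz and the updated swift.conf to both proxies and all storage nodes

# On workstation: the swiftrc can simply reuse the swift-user credentials shown earlier
cp ~/rc_swift-user ~/swiftrc
source ~/swiftrc

# Create the containers and test a write as swift-user
swift post -H "X-Storage-Policy: policy-0" OSPAS
swift post -H "X-Storage-Policy: FOUR" DATA
swift upload DATA /etc/hosts
swift stat DATA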
  • Cinder should be configured with 3 backend storage pools, all of type RBD
    • Volumes >10 GB should only be created in the cinder-large storage pool
    • Volumes <5 GB should only be created in the cinder-small storage pool
    • The third storage pool should be named cinder-med
    • Volumes between 5 GB and 10 GB should be spread evenly across all 3 storage pools
    • Ensure that the grading script can create 5 volumes to evaluate the Cinder scheduler configuration
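
One way to implement the size-based placement is three RBD backends plus the Cinder DriverFilter, steering volumes with per-backend filter_function expressions. This is a sketch only: the Ceph pool names, the cinder RBD user, and the behaviour at exactly 5 GB and 10 GB are assumptions. Excerpt from /etc/cinder/cinder.conf on the controller:

[DEFAULT]
enabled_backends = cinder-small,cinder-med,cinder-large
scheduler_default_filters = AvailabilityZoneFilter,CapacityFilter,CapabilitiesFilter,DriverFilter

[cinder-small]
volume_backend_name = cinder-small
volume_driver = cinder.volume.drivers.rbd.RBDDriver
rbd_pool = cinder-small
rbd_user = cinder
rbd_ceph_conf = /etc/ceph/ceph.conf
# refuse anything larger than 10 GB
filter_function = "volume.size <= 10"

[cinder-med]
volume_backend_name = cinder-med
volume_driver = cinder.volume.drivers.rbd.RBDDriver
rbd_pool = cinder-med
rbd_user = cinder
rbd_ceph_conf = /etc/ceph/ceph.conf
# only the 5-10 GB range lands here
filter_function = "volume.size >= 5 and volume.size <= 10"

[cinder-large]
volume_backend_name = cinder-large
volume_driver = cinder.volume.drivers.rbd.RBDDriver
rbd_pool = cinder-large
rbd_user = cinder
rbd_ceph_conf = /etc/ceph/ceph.conf
# refuse anything smaller than 5 GB
filter_function = "volume.size >= 5"

# Quick scheduler check as admin
openstack volume create --size 12 vol-size-test
openstack volume show vol-size-test -c os-vol-host-attr:host -f value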

Cinder backups should be configured to use an NFS share from the server storage
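
A sketch for the backup service, again in /etc/cinder/cinder.conf; the export path on the storage server is a placeholder:

[DEFAULT]
backup_driver = cinder.backup.drivers.nfs
backup_share = storage:/srv/nfs/cinder-backup

# Then enable the backup service and test it against an existing volume, e.g. the test volume above
systemctl enable --now openstack-cinder-backup
openstack volume backup create vol-size-test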

Cinder should NOT be dependent on a local volume group on the controller node to start
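
If the environment started as a Packstack-style install with an LVM backend, remove any leftover [lvm] stanza so cinder-volume no longer waits on the local cinder-volumes volume group (a sketch; crudini is assumed to be available on the controller):

crudini --del /etc/cinder/cinder.conf lvm
systemctl restart openstack-cinder-api openstack-cinder-scheduler openstack-cinder-volume
systemctl status openstack-cinder-volume   # should start cleanly with only the RBD backends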

Glance should be configured to use a Ceph RBD storage pool called "images"
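
A sketch of the Glance configuration, assuming a dedicated client.glance Ceph user whose keyring is copied to the controller and made readable by the glance user; the PG count and store list are assumptions:

# On a MON node: create the pool and a glance client key
ceph osd pool create images 64
ceph osd pool application enable images rbd
ceph auth get-or-create client.glance mon 'profile rbd' osd 'profile rbd pool=images' \
  -o /etc/ceph/ceph.client.glance.keyring

# /etc/glance/glance-api.conf on the controller
[glance_store]
stores = rbd,http
default_store = rbd
rbd_store_pool = images
rbd_store_user = glance
rbd_store_ceph_conf = /etc/ceph/ceph.conf

# Restart and verify that a new image lands in the pool
systemctl restart openstack-glance-api
openstack image create --disk-format qcow2 --container-format bare --file cirros.qcow2 cirros
rbd -p images ls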

OpenStack should be configured to use the Swift Cluster running on stor01 - stor03, NOT a local Swift AIO on the controller node
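
A sketch of repointing the object-store service at the external cluster: stop any all-in-one proxy left on the controller, then recreate the Keystone endpoints against the external proxies (proxy01 is used below; a load-balanced VIP in front of proxy01 and proxy02 would be the more robust choice):

systemctl disable --now openstack-swift-proxy

openstack endpoint list --service object-store
openstack endpoint delete <old-endpoint-id>     # repeat for each existing endpoint
openstack endpoint create --region RegionOne object-store public   'http://proxy01:8080/v1/AUTH_%(tenant_id)s'
openstack endpoint create --region RegionOne object-store internal 'http://proxy01:8080/v1/AUTH_%(tenant_id)s'
openstack endpoint create --region RegionOne object-store admin    'http://proxy01:8080/v1/AUTH_%(tenant_id)s'

# Verify as swift-user
source ~/swiftrc
swift stat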

[x] SELinux should be enabled on all nodes
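
A quick way to verify this across every node, reusing the ceph-ansible inventory as an example (substitute an inventory that also lists the controller, computes, proxies, and storage nodes):

ansible all -i /usr/share/ceph-ansible/hosts -m command -a getenforce
# For any node reporting Permissive or Disabled:
ssh <node> "setenforce 1 && sed -i 's/^SELINUX=.*/SELINUX=enforcing/' /etc/selinux/config"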
