# OpenStack
[OpenStack Installation Guide](https://docs.openstack.org/install-guide/)
## Host networking
### controller node
1. Edit /etc/network/interfaces
```
# The provider network interface
auto INTERFACE_NAME
iface INTERFACE_NAME inet manual
up ip link set dev $IFACE up
down ip link set dev $IFACE down
```
> Replace INTERFACE_NAME with the name of the second network interface
2. Reboot the system to activate the changes
3. Edit /etc/hosts
> Set the hostname of the node to controller
```
127.0.0.1 localhost
10.0.1.97 controller
10.0.1.98 compute1
# The following lines are desirable for IPv6 capable hosts
::1 ip6-localhost ip6-loopback
fe00::0 ip6-localnet
ff00::0 ip6-mcastprefix
ff02::1 ip6-allnodes
ff02::2 ip6-allrouters
ff02::3 ip6-allhosts
```
> Some distributions add an extraneous entry in the /etc/hosts file that resolves the actual hostname to another loopback IP address such as 127.0.1.1. You must comment out or remove this entry to prevent name resolution problems. Do not remove the 127.0.0.1 entry.
> This guide includes host entries for optional services in order to reduce complexity should you choose to deploy them
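The note above can be checked mechanically. A minimal sketch, run here against a sample file (a real check would target /etc/hosts; the path /tmp/hosts.sample is illustrative):

```shell
# Build a sample hosts file for illustration; a real check would inspect /etc/hosts
cat > /tmp/hosts.sample <<'EOF'
127.0.0.1 localhost
127.0.1.1 controller
10.0.1.97 controller
EOF

# Flag any extraneous 127.0.1.1 entry that should be commented out or removed
grep -n '^127\.0\.1\.1' /tmp/hosts.sample
```

If the grep prints a line, comment it out or delete it; leave the 127.0.0.1 entry alone.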
### compute node
1. Edit /etc/network/interfaces
```
# The provider network interface
auto INTERFACE_NAME
iface INTERFACE_NAME inet manual
up ip link set dev $IFACE up
down ip link set dev $IFACE down
```
> Replace INTERFACE_NAME with the name of the second network interface
2. Reboot the system to activate the changes
3. Edit /etc/hosts
> Set the hostname of the controller node to controller
> Set the hostname of the compute node to compute1
```
127.0.0.1 localhost
10.0.1.97 controller
10.0.1.98 compute1
# The following lines are desirable for IPv6 capable hosts
::1 ip6-localhost ip6-loopback
fe00::0 ip6-localnet
ff00::0 ip6-mcastprefix
ff02::1 ip6-allnodes
ff02::2 ip6-allrouters
ff02::3 ip6-allhosts
```
> Some distributions add an extraneous entry in the /etc/hosts file that resolves the actual hostname to another loopback IP address such as 127.0.1.1. You must comment out or remove this entry to prevent name resolution problems. Do not remove the 127.0.0.1 entry.
> This guide includes host entries for optional services in order to reduce complexity should you choose to deploy them
### Verify connectivity
1. From the controller node, test access to the Internet:
```
# ping -c 4 openstack.org
PING openstack.org (162.242.140.107) 56(84) bytes of data.
64 bytes from 162.242.140.107: icmp_seq=1 ttl=48 time=165 ms
64 bytes from 162.242.140.107: icmp_seq=2 ttl=48 time=165 ms
64 bytes from 162.242.140.107: icmp_seq=3 ttl=48 time=165 ms
64 bytes from 162.242.140.107: icmp_seq=4 ttl=48 time=165 ms
--- openstack.org ping statistics ---
4 packets transmitted, 4 received, 0% packet loss, time 12168ms
rtt min/avg/max/mdev = 165.045/165.362/165.601/0.216 ms
```
2. From the controller node, test access to the management interface on the compute node:
```
# ping -c 4 compute1
PING compute1 (10.0.1.98) 56(84) bytes of data.
64 bytes from compute1 (10.0.1.98): icmp_seq=1 ttl=64 time=0.660 ms
64 bytes from compute1 (10.0.1.98): icmp_seq=2 ttl=64 time=0.717 ms
64 bytes from compute1 (10.0.1.98): icmp_seq=3 ttl=64 time=0.651 ms
64 bytes from compute1 (10.0.1.98): icmp_seq=4 ttl=64 time=0.661 ms
--- compute1 ping statistics ---
4 packets transmitted, 4 received, 0% packet loss, time 2999ms
rtt min/avg/max/mdev = 0.651/0.672/0.717/0.031 ms
```
3. From the compute node, test access to the Internet:
```
# ping -c 4 openstack.org
PING openstack.org (162.242.140.107) 56(84) bytes of data.
64 bytes from 162.242.140.107: icmp_seq=1 ttl=48 time=165 ms
64 bytes from 162.242.140.107: icmp_seq=2 ttl=48 time=165 ms
64 bytes from 162.242.140.107: icmp_seq=3 ttl=48 time=165 ms
64 bytes from 162.242.140.107: icmp_seq=4 ttl=48 time=165 ms
--- openstack.org ping statistics ---
4 packets transmitted, 4 received, 0% packet loss, time 12169ms
rtt min/avg/max/mdev = 165.270/165.421/165.630/0.138 ms
```
4. From the compute node, test access to the management interface on the controller node:
```
# ping -c 4 controller
PING controller (10.0.1.97) 56(84) bytes of data.
64 bytes from controller (10.0.1.97): icmp_seq=1 ttl=64 time=0.730 ms
64 bytes from controller (10.0.1.97): icmp_seq=2 ttl=64 time=0.700 ms
64 bytes from controller (10.0.1.97): icmp_seq=3 ttl=64 time=0.760 ms
64 bytes from controller (10.0.1.97): icmp_seq=4 ttl=64 time=0.684 ms
--- controller ping statistics ---
4 packets transmitted, 4 received, 0% packet loss, time 2998ms
rtt min/avg/max/mdev = 0.684/0.718/0.760/0.039 ms
```
> RHEL, CentOS, and SUSE distributions enable a restrictive firewall by default. During the installation process, certain steps will fail unless you alter or disable the firewall. For more information about securing your environment, refer to the [OpenStack Security Guide](https://docs.openstack.org/security-guide/)
> Ubuntu does not enable a restrictive firewall by default. For more information about securing your environment, refer to the [OpenStack Security Guide](https://docs.openstack.org/security-guide/).
## OpenStack packages
### Enable the Ubuntu Cloud Archive pocket as needed
OpenStack Queens for Ubuntu 16.04 LTS:
```
# apt install software-properties-common
# add-apt-repository cloud-archive:queens
```
### Finalize the installation
1. Upgrade the packages on all nodes:
```
# apt update && apt dist-upgrade
```
2. Install the OpenStack client:
```
# apt install python-openstackclient
```
## SQL database
Most OpenStack services use an SQL database to store information. The database typically runs on the controller node. This guide uses MariaDB or MySQL; other SQL databases such as PostgreSQL are also supported
> Note
> If OpenStack services report `Too many connections` or `Too many open files` errors, verify that the maximum-connections setting is appropriate for your environment; with MariaDB you may also need to change the [open_files_limit](https://mariadb.com/kb/en/library/server-system-variables/#open_files_limit) configuration option
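MariaDB's effective open_files_limit is also bounded by the operating system's file-descriptor limit, so it is worth checking that limit first. A minimal sketch (the 4096 threshold is an illustrative choice, matching the max_connections value used later in this guide):

```shell
# Show the per-process open-file limit for the current shell
ulimit -n

# Illustrative sanity check: warn when the limit is below the 4096
# connections configured for MariaDB later in this guide
if [ "$(ulimit -n)" -lt 4096 ]; then
  echo "open-file limit may be too low for a busy database"
fi
```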
### Install and configure components
> [Common MariaDB and MySQL commands](https://www.jinnsblog.com/2017/08/mysql-mariadb-sample.html)
1. Install the packages:
```
# apt install mariadb-server python-pymysql
```
2. Create and edit the /etc/mysql/mariadb.conf.d/99-openstack.cnf file:
* Add a [mysqld] section
* Set bind-address to the management IP address of the controller node, to enable access by other nodes via the management network
```
[mysqld]
bind-address = 10.0.0.11
default-storage-engine = innodb
innodb_file_per_table = on
max_connections = 4096
collation-server = utf8_general_ci
character-set-server = utf8
```
### Finalize the installation
1. Restart the database service:
```
# service mysql restart
```
2. Secure the database service:
```
# mysql_secure_installation
1. Enter current password for root (enter for none): enter the root password; on a fresh install it is empty, so just press Enter. When then asked whether to set a root password, enter "N"
2. Remove anonymous users? [Y/n]: remove anonymous users? Enter "N"
3. Disallow root login remotely? [Y/n]: disallow remote root login? Decide according to your needs; this guide enters "N"
4. Remove test database and access to it? [Y/n]: remove the test database? Entering "Y" to remove it is recommended
5. Reload privilege tables now? [Y/n]: reload the privilege tables? Entering "Y" is recommended
```
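The two "Y" answers above also have a scriptable equivalent. A hedged sketch that only generates the corresponding SQL (steps 1-3 answered "N" and change nothing); apply it by piping the output into mysql:

```shell
# SQL equivalent of the "Y" answers above (steps 4 and 5)
sql='DROP DATABASE IF EXISTS test;
FLUSH PRIVILEGES;'

# Print the statements; pipe them into mysql to actually apply them
printf '%s\n' "$sql"
```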
## Message queue
OpenStack uses a message queue to coordinate operations and status information among services. The message queue service typically runs on the controller node. OpenStack supports several message queue services, including RabbitMQ, Qpid, and ZeroMQ; however, most distributions that package OpenStack support a particular message queue service. This guide uses the RabbitMQ message queue service because most distributions support it. If you prefer a different message queue service, consult the documentation associated with it.
> The message queue runs on the controller node
### Install and configure components
1. Install the package:
```
# apt install rabbitmq-server
```
2. Add the openstack user:
```
# rabbitmqctl add_user openstack RABBIT_PASS
Creating user "openstack" ...
```
> Replace RABBIT_PASS with a suitable password; note that later steps must use the same password
3. Permit configuration, write, and read access for the openstack user:
```
# rabbitmqctl set_permissions openstack ".*" ".*" ".*"
Setting permissions for user "openstack" in vhost "/" ...
```
## Memcached
The Identity service authentication mechanism uses Memcached to cache tokens. The memcached service typically runs on the controller node. For deployments, we recommend enabling a combination of firewalling, authentication, and encryption to secure it.
> Memcached typically runs on the controller node
### Install and configure components
1. Install the packages:
```
# apt install memcached python-memcache
```
2. Edit the /etc/memcached.conf file and configure the service to use the management IP address of the controller node, to enable access by other nodes via the management network:
```
-l 10.0.0.11
```
> Change the existing line -l 127.0.0.1
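The change can be made with a one-line sed. A sketch against a sample file (a real edit would target /etc/memcached.conf, and 10.0.0.11 is the management IP used in this guide):

```shell
# Sample fragment standing in for /etc/memcached.conf
cat > /tmp/memcached.conf.sample <<'EOF'
-m 64
-l 127.0.0.1
EOF

# Rewrite the listen address to the controller's management IP
sed -i 's/^-l 127\.0\.0\.1$/-l 10.0.0.11/' /tmp/memcached.conf.sample
grep '^-l' /tmp/memcached.conf.sample
```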
### Finalize installation
Restart the Memcached service:
```
# service memcached restart
```
## keystone (Identity service)
### Identity service overview:
The OpenStack Identity service provides a single point of integration for managing authentication, authorization, and a catalog of services.
The Identity service is typically the first service a user interacts with. Once authenticated, an end user can use their identity to access other OpenStack services. Likewise, other OpenStack services leverage the Identity service to verify which users may discover and use which services. The Identity service can also integrate with external user-management systems such as LDAP.
Users and services can locate other services by using the service catalog managed by the Identity service, which, as the name implies, is a collection of the services available in an OpenStack deployment. Each service can have one or more endpoints, and each endpoint is one of three types: admin, internal, or public. In a production environment, the different endpoint types may reside on separate networks serving different kinds of users. For instance, the public API network might be visible from the Internet so customers can manage their clouds, the admin API network might be restricted to cloud operators or administrators, and the internal API network might be restricted to the hosts that run OpenStack services. OpenStack also supports multiple regions for scalability. For simplicity, this guide uses the management network for all endpoint types and the default RegionOne region. Together, the regions, services, and endpoints created in the Identity service make up the service catalog of a deployment. Each OpenStack service in the deployment needs a service entry with corresponding endpoints stored in the Identity service; this can be done after the Identity service has been installed and configured.
The Identity service consists of three components:
* Server: a centralized server provides authentication and authorization services using a RESTful interface.
* Drivers: drivers, or service back ends, are integrated into the centralized server. They are used to access identity information in repositories external to OpenStack, which may already exist in the infrastructure where OpenStack is deployed (for example, an SQL database or an LDAP server).
* Modules: middleware modules run in the address space of the OpenStack components that use the Identity service. These modules intercept service requests, extract user credentials, and send them to the centralized server for authorization. The integration between the middleware modules and the OpenStack components uses the Python Web Server Gateway Interface (WSGI).
### Install and configure:
This section describes how to install and configure the OpenStack Identity service (keystone) on the controller node. For scalability purposes, this configuration deploys Fernet tokens and the Apache HTTP server to handle requests.
### Prerequisites:
Before you install the Identity service, you must create a database
1. Connect to the database server as root:
```
# mysql
```
2. Create the keystone database:
```
MariaDB [(none)]> CREATE DATABASE keystone;
```
3. Grant proper access to the keystone database:
```
MariaDB [(none)]> GRANT ALL PRIVILEGES ON keystone.* TO 'keystone'@'localhost' \
IDENTIFIED BY 'KEYSTONE_DBPASS';
MariaDB [(none)]> GRANT ALL PRIVILEGES ON keystone.* TO 'keystone'@'%' \
IDENTIFIED BY 'KEYSTONE_DBPASS';
```
> Replace KEYSTONE_DBPASS with a suitable password; note that later steps must use the same password
4. Exit the database connection:
```
MariaDB [(none)]> exit;
Bye
```
### Install and configure components
> Default configuration files vary by distribution. You might need to add these sections and options rather than modifying existing ones. In addition, an ellipsis (...) in configuration snippets indicates potential default configuration options that you should retain.
> This guide uses the Apache HTTP server with mod_wsgi to serve Identity service requests on port 5000. The keystone service still listens on this port by default. The package handles all of the Apache configuration for you (including activating the mod_wsgi apache2 module and the keystone configuration in Apache).
1. Install the packages:
```
# apt install keystone apache2 libapache2-mod-wsgi
```
2. Edit the /etc/keystone/keystone.conf file and complete the following actions:
* In the [database] section, configure database access:
```
[database]
# ...
connection = mysql+pymysql://keystone:KEYSTONE_DBPASS@controller/keystone
```
> Replace KEYSTONE_DBPASS with the password you chose for the database
> Comment out or remove any other connection options in the [database] section
* In the [token] section, configure the Fernet token provider:
```
[token]
# ...
provider = fernet
```
3. Populate the Identity service database:
```
# su -s /bin/sh -c "keystone-manage db_sync" keystone
```
4. Initialize the Fernet key repositories:
```
# keystone-manage fernet_setup --keystone-user keystone --keystone-group keystone
# keystone-manage credential_setup --keystone-user keystone --keystone-group keystone
```
5. Bootstrap the Identity service:
```
# keystone-manage bootstrap --bootstrap-password ADMIN_PASS \
--bootstrap-admin-url http://controller:5000/v3/ \
--bootstrap-internal-url http://controller:5000/v3/ \
--bootstrap-public-url http://controller:5000/v3/ \
--bootstrap-region-id RegionOne
```
> Replace ADMIN_PASS with a suitable password for the administrative user; note that later steps must use the same password
### Configure the Apache HTTP server:
1. Edit the /etc/apache2/apache2.conf file and configure the ServerName option to reference the controller node:
```
ServerName controller
```
> You can map the name to an IP address in /etc/hosts as follows:
> root@controller:/home/ubuntu# vim /etc/hosts
> 127.0.0.1 localhost
> 10.0.1.58 controller   (added line)
### Finalize the installation:
1. Restart the Apache service:
```
# service apache2 restart
```
2. Configure the administrative account:
```
$ export OS_USERNAME=admin
$ export OS_PASSWORD=ADMIN_PASS
$ export OS_PROJECT_NAME=admin
$ export OS_USER_DOMAIN_NAME=Default
$ export OS_PROJECT_DOMAIN_NAME=Default
$ export OS_AUTH_URL=http://controller:5000/v3
$ export OS_IDENTITY_API_VERSION=3
```
> ADMIN_PASS must match the password set earlier
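Before moving on, it can help to confirm the variables landed in the current shell. A minimal sketch (ADMIN_PASS stays a placeholder here):

```shell
# Set the administrative credentials in the current shell
export OS_USERNAME=admin
export OS_PASSWORD=ADMIN_PASS
export OS_PROJECT_NAME=admin
export OS_USER_DOMAIN_NAME=Default
export OS_PROJECT_DOMAIN_NAME=Default
export OS_AUTH_URL=http://controller:5000/v3
export OS_IDENTITY_API_VERSION=3

# List every OS_* variable to confirm nothing is missing
env | grep '^OS_' | sort
```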
### Create a domain, projects, users, and roles
The Identity service provides authentication services for each OpenStack service, using a combination of domains, projects, users, and roles.
1. Although a "default" domain already exists from the keystone-manage bootstrap step in this guide, the formal way to create a new domain is:
```
$ openstack domain create --description "An Example Domain" example
+-------------+----------------------------------+
| Field | Value |
+-------------+----------------------------------+
| description | An Example Domain |
| enabled | True |
| id | 509294ce37cc4a56bc2e1af4871ebcf0 |
| name | example |
| tags | [] |
+-------------+----------------------------------+
```
2. This guide uses a service project that contains a unique user for each service that you add to your environment. Create the service project:
```
$ openstack project create --domain default \
--description "Service Project" service
+-------------+----------------------------------+
| Field | Value |
+-------------+----------------------------------+
| description | Service Project |
| domain_id | default |
| enabled | True |
| id | 5efbc1269f634ac9bc3141ec12585ad2 |
| is_domain | False |
| name | service |
| parent_id | default |
| tags | [] |
+-------------+----------------------------------+
```
3. Regular (non-admin) tasks should use an unprivileged project and user. As an example, this guide creates the demo project and user.
* Create the demo project:
```
$ openstack project create --domain default \
--description "Demo Project" demo
+-------------+----------------------------------+
| Field | Value |
+-------------+----------------------------------+
| description | Demo Project |
| domain_id | default |
| enabled | True |
| id | 41608f5a24f846ad973eaf5de7b313a3 |
| is_domain | False |
| name | demo |
| parent_id | default |
| tags | [] |
+-------------+----------------------------------+
```
> Do not repeat this step when creating additional users for this project.
* Create the demo user:
```
$ openstack user create --domain default \
--password-prompt demo
User Password:
Repeat User Password:
+---------------------+----------------------------------+
| Field | Value |
+---------------------+----------------------------------+
| domain_id | default |
| enabled | True |
| id | a9b12ca3595b4ef0a2f670a8a9b9f38a |
| name | demo |
| options | {} |
| password_expires_at | None |
+---------------------+----------------------------------+
```
> User Password: sets the password for the demo user; this guide uses DEMO_PASS as the password
* Create the user role:
```
$ openstack role create user
+-----------+----------------------------------+
| Field | Value |
+-----------+----------------------------------+
| domain_id | None |
| id | a1898e23a2bc49938577be8fc64348ff |
| name | user |
+-----------+----------------------------------+
```
* Add the user role to the demo project and user:
```
$ openstack role add --project demo --user demo user
```
> This command provides no output
> You can repeat this procedure to create additional projects and users
### Create OpenStack client environment scripts
> The paths of the client environment scripts are unrestricted. For convenience, you can place the scripts in any location; however, ensure that they are accessible and reside in a secure location appropriate for your deployment, as they contain sensitive credentials
1. Create the admin-openrc file and add the following content, for use by the admin user:
```
export OS_PROJECT_DOMAIN_NAME=Default
export OS_USER_DOMAIN_NAME=Default
export OS_PROJECT_NAME=admin
export OS_USERNAME=admin
export OS_PASSWORD=ADMIN_PASS
export OS_AUTH_URL=http://controller:5000/v3
export OS_IDENTITY_API_VERSION=3
export OS_IMAGE_API_VERSION=2
```
> ADMIN_PASS must match the password set earlier
2. Create the demo-openrc file and add the following content, for use by the demo user:
```
export OS_PROJECT_DOMAIN_NAME=Default
export OS_USER_DOMAIN_NAME=Default
export OS_PROJECT_NAME=demo
export OS_USERNAME=demo
export OS_PASSWORD=DEMO_PASS
export OS_AUTH_URL=http://controller:5000/v3
export OS_IDENTITY_API_VERSION=3
export OS_IMAGE_API_VERSION=2
```
> DEMO_PASS must match the password set earlier
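Using the scripts is just a matter of sourcing them. A minimal sketch that writes demo-openrc to the current directory and loads it (DEMO_PASS remains a placeholder):

```shell
# Write the client environment script for the demo user (placeholder password)
cat > demo-openrc <<'EOF'
export OS_PROJECT_DOMAIN_NAME=Default
export OS_USER_DOMAIN_NAME=Default
export OS_PROJECT_NAME=demo
export OS_USERNAME=demo
export OS_PASSWORD=DEMO_PASS
export OS_AUTH_URL=http://controller:5000/v3
export OS_IDENTITY_API_VERSION=3
export OS_IMAGE_API_VERSION=2
EOF

# Source it and confirm the project name took effect
. ./demo-openrc
echo "$OS_PROJECT_NAME"
```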
### Verify operation
1. Unset the temporary OS_AUTH_URL and OS_PASSWORD environment variables:
```
$ unset OS_AUTH_URL OS_PASSWORD
```
2. As the admin user, request an authentication token:
* Method one:
```
$ openstack --os-auth-url http://controller:5000/v3 \
--os-project-domain-name Default --os-user-domain-name Default \
--os-project-name admin --os-username admin token issue
Password:
+------------+-----------------------------------------------------------------+
| Field | Value |
+------------+-----------------------------------------------------------------+
| expires | 2016-02-12T20:14:07.056119Z |
| id | gAAAAABWvi7_B8kKQD9wdXac8MoZiQldmjEO643d-e_j-XXq9AmIegIbA7UHGPv |
| | atnN21qtOMjCFWX7BReJEQnVOAj3nclRQgAYRsfSU_MrsuWb4EDtnjU7HEpoBb4 |
| | o6ozsA_NmFWEpLeKy0uNn_WeKbAhYygrsmQGA49dclHVnz-OMVLiyM9ws |
| project_id | 343d245e850143a096806dfaefa9afdc |
| user_id | ac3377633149401296f6c0d92d79dc16 |
+------------+-----------------------------------------------------------------+
```
* Method two:
```
# Load the admin-openrc file to populate environment variables with the location of the Identity service and the admin project and user credentials
$ . admin-openrc
$ openstack token issue
+------------+-----------------------------------------------------------------+
| Field | Value |
+------------+-----------------------------------------------------------------+
| expires | 2016-02-12T20:44:35.659723Z |
| id | gAAAAABWvjYj-Zjfg8WXFaQnUd1DMYTBVrKw4h3fIagi5NoEmh21U72SrRv2trl |
| | JWFYhLi2_uPR31Igf6A8mH2Rw9kv_bxNo1jbLNPLGzW_u5FC7InFqx0yYtTwa1e |
| | eq2b0f6-18KZyQhs7F3teAta143kJEWuNEYET-y7u29y0be1_64KYkM7E |
| project_id | 343d245e850143a096806dfaefa9afdc |
| user_id | ac3377633149401296f6c0d92d79dc16 |
+------------+-----------------------------------------------------------------+
```
> This command uses the password for the admin user
3. As the demo user, request an authentication token:
* Method one:
```
$ openstack --os-auth-url http://controller:5000/v3 \
--os-project-domain-name Default --os-user-domain-name Default \
--os-project-name demo --os-username demo token issue
Password:
+------------+-----------------------------------------------------------------+
| Field | Value |
+------------+-----------------------------------------------------------------+
| expires | 2016-02-12T20:15:39.014479Z |
| id | gAAAAABWvi9bsh7vkiby5BpCCnc-JkbGhm9wH3fabS_cY7uabOubesi-Me6IGWW |
| | yQqNegDDZ5jw7grI26vvgy1J5nCVwZ_zFRqPiz_qhbq29mgbQLglbkq6FQvzBRQ |
| | JcOzq3uwhzNxszJWmzGC7rJE_H0A_a3UFhqv8M4zMRYSbS2YF0MyFmp_U |
| project_id | ed0b60bf607743088218b0a533d5943f |
| user_id | 58126687cbcc4888bfa9ab73a2256f27 |
+------------+-----------------------------------------------------------------+
```
* Method two:
```
# Load the demo-openrc file to populate environment variables with the location of the Identity service and the demo project and user credentials
$ . demo-openrc
$ openstack token issue
+------------+-----------------------------------------------------------------+
| Field | Value |
+------------+-----------------------------------------------------------------+
| expires | 2016-02-12T20:15:39.014479Z |
| id | gAAAAABWvi9bsh7vkiby5BpCCnc-JkbGhm9wH3fabS_cY7uabOubesi-Me6IGWW |
| | yQqNegDDZ5jw7grI26vvgy1J5nCVwZ_zFRqPiz_qhbq29mgbQLglbkq6FQvzBRQ |
| | JcOzq3uwhzNxszJWmzGC7rJE_H0A_a3UFhqv8M4zMRYSbS2YF0MyFmp_U |
| project_id | ed0b60bf607743088218b0a533d5943f |
| user_id | 58126687cbcc4888bfa9ab73a2256f27 |
+------------+-----------------------------------------------------------------+
```
> This command uses the password for the demo user and API port 5000, which only allows regular (non-admin) access to the Identity service API
## glance (Image service)
### Image service overview
### Install and configure
This section describes how to install and configure the Image service, code-named glance, on the controller node.
### Prerequisites
Before you install and configure the Image service, you must create a database, service credentials, and API endpoints.
1. Connect to the database server as root using the database client:
```
$ mysql -u root -p
Enter password:
```
> The password is empty by default; just press Enter
2. Create the glance database:
```
MariaDB [(none)]> CREATE DATABASE glance;
```
3. Grant proper access to the glance database:
```
MariaDB [(none)]> GRANT ALL PRIVILEGES ON glance.* TO 'glance'@'localhost' \
IDENTIFIED BY 'GLANCE_DBPASS';
MariaDB [(none)]> GRANT ALL PRIVILEGES ON glance.* TO 'glance'@'%' \
IDENTIFIED BY 'GLANCE_DBPASS';
```
> Replace GLANCE_DBPASS with a suitable password; note that later steps must use the same password
4. Exit the database connection:
```
MariaDB [(none)]> exit;
Bye
```
### Source the admin credentials to gain access to admin-only CLI commands
```
$ . admin-openrc
```
### To create the service credentials, complete these steps
1. Create the glance user:
```
$ openstack user create --domain default --password-prompt glance
User Password:
Repeat User Password:
+---------------------+----------------------------------+
| Field | Value |
+---------------------+----------------------------------+
| domain_id | default |
| enabled | True |
| id | 2607a336f87f4b97b7501f76f1306552 |
| name | glance |
| options | {} |
| password_expires_at | None |
+---------------------+----------------------------------+
```
> User Password: sets the password for the glance user; this guide uses GLANCE_PASS as the password
2. Add the admin role to the glance user and the service project:
```
$ openstack role add --project service --user glance admin
```
> This command provides no output
3. Create the glance service entity:
```
$ openstack service create --name glance \
--description "OpenStack Image" image
+-------------+----------------------------------+
| Field | Value |
+-------------+----------------------------------+
| description | OpenStack Image |
| enabled | True |
| id | 2e7719c8220c46bd9b63a68f26267fcd |
| name | glance |
| type | image |
+-------------+----------------------------------+
```
### Create the Image service API endpoints
```
$ openstack endpoint create --region RegionOne \
image public http://controller:9292
+--------------+----------------------------------+
| Field | Value |
+--------------+----------------------------------+
| enabled | True |
| id | 340be3625e9b4239a6415d034e98aace |
| interface | public |
| region | RegionOne |
| region_id | RegionOne |
| service_id | 8c2c7f1b9b5049ea9e63757b5533e6d2 |
| service_name | glance |
| service_type | image |
| url | http://controller:9292 |
+--------------+----------------------------------+
$ openstack endpoint create --region RegionOne \
image internal http://controller:9292
+--------------+----------------------------------+
| Field | Value |
+--------------+----------------------------------+
| enabled | True |
| id | a6e4b153c2ae4c919eccfdbb7dceb5d2 |
| interface | internal |
| region | RegionOne |
| region_id | RegionOne |
| service_id | 8c2c7f1b9b5049ea9e63757b5533e6d2 |
| service_name | glance |
| service_type | image |
| url | http://controller:9292 |
+--------------+----------------------------------+
$ openstack endpoint create --region RegionOne \
image admin http://controller:9292
+--------------+----------------------------------+
| Field | Value |
+--------------+----------------------------------+
| enabled | True |
| id | 0c37ed58103f4300a84ff125a539032d |
| interface | admin |
| region | RegionOne |
| region_id | RegionOne |
| service_id | 8c2c7f1b9b5049ea9e63757b5533e6d2 |
| service_name | glance |
| service_type | image |
| url | http://controller:9292 |
+--------------+----------------------------------+
```
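The three endpoint-create calls above differ only in the interface type, so they can be driven by a loop. A sketch that only prints the commands (run them for real only against a configured cloud); the same pattern applies to the Compute and Placement endpoints later in this guide:

```shell
# Print one endpoint-create command per interface type instead of running it
for iface in public internal admin; do
  echo openstack endpoint create --region RegionOne image "$iface" http://controller:9292
done
```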
### Install and configure components
> Default configuration files vary by distribution. You might need to add these sections and options rather than modifying existing ones. In addition, an ellipsis (...) in configuration snippets indicates potential default configuration options that you should retain.
1. Install the packages:
```
# apt install glance
```
2. Edit the /etc/glance/glance-api.conf file and complete the following actions:
* In the [database] section, configure database access:
```
[database]
# ...
connection = mysql+pymysql://glance:GLANCE_DBPASS@controller/glance
```
> Replace GLANCE_DBPASS with the password you chose for the database
> Comment out or remove any other connection options in the [database] section
* In the [keystone_authtoken] and [paste_deploy] sections, configure Identity service access:
```
[keystone_authtoken]
# ...
auth_uri = http://controller:5000
auth_url = http://controller:5000
memcached_servers = controller:11211
auth_type = password
project_domain_name = Default
user_domain_name = Default
project_name = service
username = glance
password = GLANCE_PASS
[paste_deploy]
# ...
flavor = keystone
```
> Replace GLANCE_PASS with the password you chose
> Comment out or remove any other options in the [keystone_authtoken] section
* In the [glance_store] section, configure the local file system store and the location of image files:
```
[glance_store]
# ...
stores = file,http
default_store = file
filesystem_store_datadir = /var/lib/glance/images/
```
### Edit the /etc/glance/glance-registry.conf file and complete the following actions
* In the [database] section, configure database access:
```
[database]
# ...
connection = mysql+pymysql://glance:GLANCE_DBPASS@controller/glance
```
> Replace GLANCE_DBPASS with the password you chose for the database
> Comment out or remove any other connection options in the [database] section
* In the [keystone_authtoken] and [paste_deploy] sections, configure Identity service access:
```
[keystone_authtoken]
# ...
auth_uri = http://controller:5000
auth_url = http://controller:5000
memcached_servers = controller:11211
auth_type = password
project_domain_name = Default
user_domain_name = Default
project_name = service
username = glance
password = GLANCE_PASS
[paste_deploy]
# ...
flavor = keystone
```
> Replace GLANCE_PASS with the password you chose
> Comment out or remove any other options in the [keystone_authtoken] section
### Populate the Image service database
```
# su -s /bin/sh -c "glance-manage db_sync" glance
```
> Ignore any deprecation messages in this output
### Finalize installation
1. Restart the Image services:
```
# service glance-registry restart
# service glance-api restart
```
### Verify operation
Verify operation of the Image service using CirrOS, a small Linux image that helps you test your OpenStack deployment
1. Source the admin credentials to gain access to admin-only CLI commands:
```
$ . admin-openrc
```
2. Download the source image:
```
$ wget http://download.cirros-cloud.net/0.4.0/cirros-0.4.0-x86_64-disk.img
```
> Install wget if your distribution does not include it
3. Upload the image to the Image service using the QCOW2 disk format, bare container format, and public visibility so that all projects can access it:
```
$ openstack image create "cirros" \
--file cirros-0.4.0-x86_64-disk.img \
--disk-format qcow2 --container-format bare \
--public
+------------------+------------------------------------------------------+
| Field | Value |
+------------------+------------------------------------------------------+
| checksum | 133eae9fb1c98f45894a4e60d8736619 |
| container_format | bare |
| created_at | 2015-03-26T16:52:10Z |
| disk_format | qcow2 |
| file | /v2/images/cc5c6982-4910-471e-b864-1098015901b5/file |
| id | cc5c6982-4910-471e-b864-1098015901b5 |
| min_disk | 0 |
| min_ram | 0 |
| name | cirros |
| owner | ae7a98326b9c455588edd2656d723b9d |
| protected | False |
| schema | /v2/schemas/image |
| size | 13200896 |
| status | active |
| tags | |
| updated_at | 2015-03-26T16:52:10Z |
| virtual_size | None |
| visibility | public |
+------------------+------------------------------------------------------+
```
4. Confirm the upload of the image:
```
$ openstack image list
+--------------------------------------+--------+--------+
| ID | Name | Status |
+--------------------------------------+--------+--------+
| 38047887-61a7-41ea-9b49-27987d5e8bb9 | cirros | active |
+--------------------------------------+--------+--------+
```
## nova (Compute service)
### Overview
OpenStack is an open source cloud computing platform that supports all types of cloud environments. The project aims for simple implementation, massive scalability, and a rich set of features; cloud computing experts from around the world contribute to it.
OpenStack provides an Infrastructure-as-a-Service (IaaS) solution through a set of complementary services, each of which offers an application programming interface (API) to accomplish this.
This guide is intended for OpenStack beginners and deploys OpenStack step by step using examples.
After becoming familiar with basic installation, configuration, operation, and troubleshooting of these OpenStack services, you should consider the following steps toward deploying a production architecture:
* Determine and implement the necessary core and optional services to meet performance and redundancy requirements
* Increase security using methods such as firewalls, encryption, and service policies
* Implement a deployment tool such as Ansible, Chef, Puppet, or Salt to automate deployment and management of the environment
### Example architecture
The example architecture requires at least two nodes (hosts) to launch a basic virtual machine (VM) or instance. Optional services such as Block Storage and Object Storage require an additional node.
> Important
> The example in this guide is a minimum configuration and is not suitable for a production installation; it provides only the minimum proof of concept needed to learn about OpenStack. For information on specific installations and architectures, see the [Architecture Design Guide](https://docs.openstack.org/arch-design/)
This example architecture differs from a minimal production architecture as follows:
* Networking agents reside on the controller node instead of on one or more dedicated network nodes
* Overlay (tunnel) traffic for self-service networks traverses the management network instead of a dedicated network
> For more information, see the [Architecture Design Guide](https://docs.openstack.org/arch-design/), [OpenStack Operations Guide](https://wiki.openstack.org/wiki/OpsGuide), and [OpenStack Networking Guide](https://docs.openstack.org/ocata/networking-guide/)
#### Controller
The controller node runs the Identity service, Image service, the management portions of Compute and Networking, various Networking agents, the Dashboard, an SQL database, a message queue, and Network Time Protocol (NTP)
Optional: Block Storage, Object Storage, Orchestration, and Telemetry services
The controller node requires a minimum of two network interfaces
#### Compute
The compute node runs the hypervisor portion of Compute that operates cloud instances. By default, Compute uses the kernel-based VM (KVM) hypervisor. The compute node also runs a Networking agent that connects instances to virtual networks and provides firewalling services to instances via security groups. You can deploy more than one compute node; each node requires a minimum of two network interfaces.
#### Block Storage
This node holds the disks that the Block Storage and Shared File System services provision for instances. For simplicity, service traffic between compute nodes and this node uses the management network; a production environment should use a separate storage network to increase security. This node requires a minimum of one network interface
#### Object Storage
This node holds the disks that the Object Storage service uses for storing accounts, containers, and objects. For simplicity, service traffic between compute nodes and this node uses the management network; a production environment should use a separate storage network to increase security. This node requires a minimum of one network interface
### Install and configure controller node for Ubuntu
This section describes how to install and configure the nova Compute service on the controller node
### Prerequisites
Before installing and configuring the Compute service, you must create databases, service credentials, and API endpoints
1. To create the databases, complete the following steps:
* Connect to the database server as root:
```
# mysql
```
* Create the nova_api, nova, and nova_cell0 databases:
```
MariaDB [(none)]> CREATE DATABASE nova_api;
MariaDB [(none)]> CREATE DATABASE nova;
MariaDB [(none)]> CREATE DATABASE nova_cell0;
```
* Grant proper access to the databases:
```
MariaDB [(none)]> GRANT ALL PRIVILEGES ON nova_api.* TO 'nova'@'localhost' \
IDENTIFIED BY 'NOVA_DBPASS';
MariaDB [(none)]> GRANT ALL PRIVILEGES ON nova_api.* TO 'nova'@'%' \
IDENTIFIED BY 'NOVA_DBPASS';
MariaDB [(none)]> GRANT ALL PRIVILEGES ON nova.* TO 'nova'@'localhost' \
IDENTIFIED BY 'NOVA_DBPASS';
MariaDB [(none)]> GRANT ALL PRIVILEGES ON nova.* TO 'nova'@'%' \
IDENTIFIED BY 'NOVA_DBPASS';
MariaDB [(none)]> GRANT ALL PRIVILEGES ON nova_cell0.* TO 'nova'@'localhost' \
IDENTIFIED BY 'NOVA_DBPASS';
MariaDB [(none)]> GRANT ALL PRIVILEGES ON nova_cell0.* TO 'nova'@'%' \
IDENTIFIED BY 'NOVA_DBPASS';
```
> Replace NOVA_DBPASS with a suitable password; note that later steps must use the same password
* Exit the database connection:
```
MariaDB [(none)]> exit;
Bye
```
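The six GRANT statements in the step above follow one pattern (three databases, two hosts each). A sketch that generates them (NOVA_DBPASS stays a placeholder; pipe the output into mysql to apply):

```shell
# Generate the six GRANT statements for the nova databases
for db in nova_api nova nova_cell0; do
  for host in localhost '%'; do
    echo "GRANT ALL PRIVILEGES ON ${db}.* TO 'nova'@'${host}' IDENTIFIED BY 'NOVA_DBPASS';"
  done
done
```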
2. Source the admin credentials to gain access to admin-only CLI commands:
```
$ . admin-openrc
```
3. Create the Compute service credentials:
* Create the nova user:
```
$ openstack user create --domain default --password-prompt nova
User Password:
Repeat User Password:
+---------------------+----------------------------------+
| Field | Value |
+---------------------+----------------------------------+
| domain_id | default |
| enabled | True |
| id | f18782f2370d4fe7a14111e6301f7a00 |
| name | nova |
| options | {} |
| password_expires_at | None |
+---------------------+----------------------------------+
```
> This guide uses NOVA_PASS as the password
* Add the admin role to the nova user:
```
$ openstack role add --project service --user nova admin
```
> Note: this command provides no output
* Create the nova service entity:
```
$ openstack service create --name nova \
--description "OpenStack Compute" compute
+-------------+----------------------------------+
| Field | Value |
+-------------+----------------------------------+
| description | OpenStack Compute |
| enabled | True |
| id | 0e709788679148298e7721c8c7113d64 |
| name | nova |
| type | compute |
+-------------+----------------------------------+
```
4. Create the Compute API service endpoints:
```
$ openstack endpoint create --region RegionOne \
compute public http://controller:8774/v2.1
+--------------+----------------------------------+
| Field | Value |
+--------------+----------------------------------+
| enabled | True |
| id | 816e77b5b34c4c79bc08bb2b02bb3d5e |
| interface | public |
| region | RegionOne |
| region_id | RegionOne |
| service_id | 0e709788679148298e7721c8c7113d64 |
| service_name | nova |
| service_type | compute |
| url | http://controller:8774/v2.1 |
+--------------+----------------------------------+
$ openstack endpoint create --region RegionOne \
compute internal http://controller:8774/v2.1
+--------------+----------------------------------+
| Field | Value |
+--------------+----------------------------------+
| enabled | True |
| id | ad3bf9fc07b0477d8a1dd054e6479e21 |
| interface | internal |
| region | RegionOne |
| region_id | RegionOne |
| service_id | 0e709788679148298e7721c8c7113d64 |
| service_name | nova |
| service_type | compute |
| url | http://controller:8774/v2.1 |
+--------------+----------------------------------+
$ openstack endpoint create --region RegionOne \
compute admin http://controller:8774/v2.1
+--------------+----------------------------------+
| Field | Value |
+--------------+----------------------------------+
| enabled | True |
| id | 4f9ac44c7b914450b620d4bf3eeb0e40 |
| interface | admin |
| region | RegionOne |
| region_id | RegionOne |
| service_id | 0e709788679148298e7721c8c7113d64 |
| service_name | nova |
| service_type | compute |
| url | http://controller:8774/v2.1 |
+--------------+----------------------------------+
```
5. Create a Placement service user:
```
$ openstack user create --domain default --password-prompt placement
User Password:
Repeat User Password:
+---------------------+----------------------------------+
| Field | Value |
+---------------------+----------------------------------+
| domain_id | default |
| enabled | True |
| id | fba72cacb91b422ca6cd68f205a8518c |
| name | placement |
| options | {} |
| password_expires_at | None |
+---------------------+----------------------------------+
```
> This guide uses PLACEMENT_PASS as the password
6. Add the Placement user to the service project with the admin role:
```
$ openstack role add --project service --user placement admin
```
> Note: this command provides no output
7. Create the Placement API entry in the service catalog:
```
$ openstack service create --name placement --description "Placement API" placement
+-------------+----------------------------------+
| Field | Value |
+-------------+----------------------------------+
| description | Placement API |
| enabled | True |
| id | 19dff564915a4d33ba07ce5bc4095aa4 |
| name | placement |
| type | placement |
+-------------+----------------------------------+
```
8. Create the Placement API service endpoints:
```
$ openstack endpoint create --region RegionOne placement public http://controller:8778
+--------------+----------------------------------+
| Field | Value |
+--------------+----------------------------------+
| enabled | True |
| id | f29f4ff0c18f428d96867c63975ae5db |
| interface | public |
| region | RegionOne |
| region_id | RegionOne |
| service_id | 19dff564915a4d33ba07ce5bc4095aa4 |
| service_name | placement |
| service_type | placement |
| url | http://controller:8778 |
+--------------+----------------------------------+
$ openstack endpoint create --region RegionOne placement internal http://controller:8778
+--------------+----------------------------------+
| Field | Value |
+--------------+----------------------------------+
| enabled | True |
| id | 6158c7a58ae14c0b839d7a9adce8a3f6 |
| interface | internal |
| region | RegionOne |
| region_id | RegionOne |
| service_id | 19dff564915a4d33ba07ce5bc4095aa4 |
| service_name | placement |
| service_type | placement |
| url | http://controller:8778 |
+--------------+----------------------------------+
$ openstack endpoint create --region RegionOne placement admin http://controller:8778
+--------------+----------------------------------+
| Field | Value |
+--------------+----------------------------------+
| enabled | True |
| id | 8001a481b8ce4248be35a03e4d96d10e |
| interface | admin |
| region | RegionOne |
| region_id | RegionOne |
| service_id | 19dff564915a4d33ba07ce5bc4095aa4 |
| service_name | placement |
| service_type | placement |
| url | http://controller:8778 |
+--------------+----------------------------------+
```
### Install and configure components
> The default configuration files vary by distribution. You may need to add these sections and options rather than modifying existing ones. In addition, an ellipsis (...) in a configuration snippet indicates potential default options that you should retain.
1. Install the packages:
```
# apt install nova-api nova-conductor nova-consoleauth \
nova-novncproxy nova-scheduler nova-placement-api
```
2. Edit the /etc/nova/nova.conf file and complete the following actions:
* In the [api_database] and [database] sections, configure database access:
```
[api_database]
# ...
connection = mysql+pymysql://nova:NOVA_DBPASS@controller/nova_api
[database]
# ...
connection = mysql+pymysql://nova:NOVA_DBPASS@controller/nova
```
> Replace NOVA_DBPASS with the password you chose for the nova databases
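The connection values above follow SQLAlchemy's `dialect+driver://user:password@host/dbname` URI format. A minimal sketch (reusing the hostnames and database names from this guide; `NOVA_DBPASS` stays a placeholder) that assembles them:

```python
# Build the SQLAlchemy connection URIs used in [api_database] and [database].
# "NOVA_DBPASS" is a placeholder; substitute the password you chose.
def nova_db_uri(password: str, database: str) -> str:
    return f"mysql+pymysql://nova:{password}@controller/{database}"

print(nova_db_uri("NOVA_DBPASS", "nova_api"))
# mysql+pymysql://nova:NOVA_DBPASS@controller/nova_api
print(nova_db_uri("NOVA_DBPASS", "nova"))
# mysql+pymysql://nova:NOVA_DBPASS@controller/nova
```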
3. In the [DEFAULT] section, configure RabbitMQ message queue access:
```
[DEFAULT]
# ...
transport_url = rabbit://openstack:RABBIT_PASS@controller
```
> Replace RABBIT_PASS with the password you chose for the openstack account in RabbitMQ
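The transport_url is an ordinary URL, so a quick sanity check with Python's standard urllib.parse can confirm the account, password placeholder, and host are in the right positions (the value below is the one from the snippet above):

```python
from urllib.parse import urlparse

# Parse the RabbitMQ transport URL to verify its components.
url = urlparse("rabbit://openstack:RABBIT_PASS@controller")
print(url.scheme, url.username, url.hostname)  # rabbit openstack controller
```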
4. In the [api] and [keystone_authtoken] sections, configure Identity service access:
```
[api]
# ...
auth_strategy = keystone
[keystone_authtoken]
# ...
auth_url = http://controller:5000/v3
memcached_servers = controller:11211
auth_type = password
project_domain_name = default
user_domain_name = default
project_name = service
username = nova
password = NOVA_PASS
```
> Replace NOVA_PASS with the password you chose for the nova user in the Identity service
> Comment out or remove any other options in the [keystone_authtoken] section
5. In the [DEFAULT] section, configure the my_ip option to use the management interface IP address of the controller node:
```
[DEFAULT]
# ...
my_ip = 10.0.0.11
```
6. In the [DEFAULT] section, enable support for the Networking service:
```
[DEFAULT]
# ...
use_neutron = True
firewall_driver = nova.virt.firewall.NoopFirewallDriver
```
> By default, Compute uses an internal firewall driver. Since the Networking service includes a firewall driver, you must disable the Compute firewall driver by using the nova.virt.firewall.NoopFirewallDriver firewall driver
7. In the [vnc] section, configure the VNC proxy to use the management interface IP address of the controller node:
```
[vnc]
enabled = true
# ...
server_listen = $my_ip
server_proxyclient_address = $my_ip
```
8. In the [glance] section, configure the location of the Image service API:
```
[glance]
# ...
api_servers = http://controller:9292
```
9. In the [oslo_concurrency] section, configure the lock path:
```
[oslo_concurrency]
# ...
lock_path = /var/lib/nova/tmp
```
10. Due to a packaging bug, remove the log_dir option from the [DEFAULT] section:
```
[DEFAULT]
# log_dir = /var/log/nova
```
11. In the [placement] section, configure the Placement API:
```
[placement]
# ...
#os_region_name = openstack
os_region_name = RegionOne
project_domain_name = Default
project_name = service
auth_type = password
user_domain_name = Default
auth_url = http://controller:5000/v3
username = placement
password = PLACEMENT_PASS
```
> Replace PLACEMENT_PASS with the password you chose for the placement user in the Identity service
> Comment out any other options in the [placement] section
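nova.conf is standard INI, so the same options can be written and verified with Python's configparser; a sketch mirroring the [placement] values above (PLACEMENT_PASS is still a placeholder):

```python
import configparser

# Assemble the [placement] section exactly as shown in the guide.
cfg = configparser.ConfigParser()
cfg["placement"] = {
    "os_region_name": "RegionOne",
    "project_domain_name": "Default",
    "project_name": "service",
    "auth_type": "password",
    "user_domain_name": "Default",
    "auth_url": "http://controller:5000/v3",
    "username": "placement",
    "password": "PLACEMENT_PASS",  # placeholder for your chosen password
}
# Reading the values back confirms they round-trip through configparser.
print(cfg["placement"]["os_region_name"])  # RegionOne
```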
### Populate the nova-api database
```
# su -s /bin/sh -c "nova-manage api_db sync" nova
```
> Ignore any deprecation messages in this output
### Register the cell0 database
```
# su -s /bin/sh -c "nova-manage cell_v2 map_cell0" nova
```
### Create the cell1 cell
```
# su -s /bin/sh -c "nova-manage cell_v2 create_cell --name=cell1 --verbose" nova
4fe423de-6ed9-4e72-9157-9daef86ecacc
```
### Populate the nova database
```
# su -s /bin/sh -c "nova-manage db sync" nova
```
### Verify nova cell0 and cell1 are registered correctly
```
# nova-manage cell_v2 list_cells
+-------+--------------------------------------+------------------------------------+-------------------------------------------------+
| Name | UUID | Transport URL | Database Connection |
+-------+--------------------------------------+------------------------------------+-------------------------------------------------+
| cell0 | 00000000-0000-0000-0000-000000000000 | none:/ | mysql+pymysql://nova:****@controller/nova_cell0 |
| cell1 | 4fe423de-6ed9-4e72-9157-9daef86ecacc | rabbit://openstack:****@controller | mysql+pymysql://nova:****@controller/nova |
+-------+--------------------------------------+------------------------------------+-------------------------------------------------+
```
### Finalize installation
Restart the Compute services:
```
# service nova-api restart
# service nova-consoleauth restart
# service nova-scheduler restart
# service nova-conductor restart
# service nova-novncproxy restart
```
## nova(Compute service)
This section describes how to install and configure the Compute service on a compute node. The service supports several hypervisors to deploy instances or virtual machines (VMs). For simplicity, this configuration uses the Quick EMUlator (QEMU) hypervisor with the kernel-based VM (KVM) extension on compute nodes that support hardware acceleration for virtual machines. On legacy hardware, this configuration uses the generic QEMU hypervisor. You can follow these instructions with minor modifications to horizontally scale your environment with additional compute nodes.
> This section assumes that you are following the instructions in this guide step-by-step to configure the first compute node. If you want to configure additional compute nodes, prepare them in a similar fashion to the first compute node in the example architectures section. Each additional compute node requires a unique IP address.
#### Install and configure components
> The default configuration files vary by distribution. You may need to add these sections and options rather than modifying existing ones. In addition, an ellipsis (...) in a configuration snippet indicates potential default options that you should retain.
1. Install the packages:
```
# apt install nova-compute
```
2. Edit the /etc/nova/nova.conf file and complete the following actions
In the [DEFAULT] section, configure RabbitMQ message queue access:
```
[DEFAULT]
# ...
transport_url = rabbit://openstack:RABBIT_PASS@controller
```
> Replace RABBIT_PASS with the password you chose for the openstack account in RabbitMQ
3. In the [api] and [keystone_authtoken] sections, configure Identity service access:
```
[api]
# ...
auth_strategy = keystone
[keystone_authtoken]
# ...
auth_url = http://controller:5000/v3
memcached_servers = controller:11211
auth_type = password
project_domain_name = default
user_domain_name = default
project_name = service
username = nova
password = NOVA_PASS
```
> Replace NOVA_PASS with the password you chose for the nova user in the Identity service
> Comment out or remove any other options in the [keystone_authtoken] section
4. In the [DEFAULT] section, configure the my_ip option:
```
[DEFAULT]
# ...
my_ip = MANAGEMENT_INTERFACE_IP_ADDRESS
```
> Replace MANAGEMENT_INTERFACE_IP_ADDRESS with the IP address of the management network interface on the compute node
5. In the [DEFAULT] section, enable support for the Networking service:
```
[DEFAULT]
# ...
use_neutron = True
firewall_driver = nova.virt.firewall.NoopFirewallDriver
```
> By default, Compute uses an internal firewall service. Since Networking includes a firewall service, you must disable the Compute firewall service by using the nova.virt.firewall.NoopFirewallDriver firewall driver
6. In the [vnc] section, enable and configure remote console access:
> Set novncproxy_base_url to the controller node's IP address
```
[vnc]
# ...
enabled = True
server_listen = $my_ip
server_proxyclient_address = $my_ip
novncproxy_base_url = http://10.0.1.97:6080/vnc_auto.html
```
The server component listens on all IP addresses, and the proxy component only listens on the management interface IP address of the compute node. The base URL indicates the location where you can use a web browser to access remote consoles of instances on this compute node.
> If the web browser used to access remote consoles resides on a host that cannot resolve the controller hostname, you must replace controller with the management interface IP address of the controller node.
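The base URL is just the controller's reachable address plus the noVNC port and page; a small sketch that assembles it (10.0.1.97 is this guide's controller management IP):

```python
def novnc_base_url(controller_host: str, port: int = 6080) -> str:
    # The noVNC proxy serves vnc_auto.html on port 6080 of the controller node.
    return f"http://{controller_host}:{port}/vnc_auto.html"

print(novnc_base_url("10.0.1.97"))  # http://10.0.1.97:6080/vnc_auto.html
```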
7. In the [glance] section, configure the location of the Image service API:
```
[glance]
# ...
api_servers = http://controller:9292
```
8. In the [oslo_concurrency] section, configure the lock path:
```
[oslo_concurrency]
# ...
lock_path = /var/lib/nova/tmp
```
9. Due to a packaging bug, remove the log_dir option from the [DEFAULT] section:
```
# log_dir = /var/log/nova
```
10. In the [placement] section, configure the Placement API:
```
[placement]
# ...
# os_region_name = openstack
os_region_name = RegionOne
project_domain_name = Default
project_name = service
auth_type = password
user_domain_name = Default
auth_url = http://controller:5000/v3
username = placement
password = PLACEMENT_PASS
```
> Replace PLACEMENT_PASS with the password you chose for the placement user in the Identity service
> Comment out any other options in the [placement] section
### Finalize installation
1. Determine whether your compute node supports hardware acceleration for virtual machines:
```
$ egrep -c '(vmx|svm)' /proc/cpuinfo
```
> If this command returns a value of one or greater, the compute node supports hardware acceleration, which typically requires no additional configuration
> If this command returns a value of zero, the compute node does not support hardware acceleration, and you must configure libvirt to use QEMU instead of KVM
> * Edit the [libvirt] section in the /etc/nova/nova-compute.conf file as follows:
> ```
> [libvirt]
> # ...
> # virt_type=kvm
> virt_type = qemu
> ```
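The egrep check above looks for the vmx (Intel VT-x) or svm (AMD-V) CPU flags in /proc/cpuinfo. The same test in Python, run here against sample flag strings rather than a live /proc (note that egrep -c counts matching lines, but either way a nonzero result means hardware acceleration is available):

```python
import re

def hw_accel_flag_count(cpuinfo_text: str) -> int:
    # Count vmx/svm occurrences, analogous to: egrep -c '(vmx|svm)' /proc/cpuinfo
    return len(re.findall(r"vmx|svm", cpuinfo_text))

print(hw_accel_flag_count("flags : fpu vme de pse tsc msr pae vmx"))  # 1
print(hw_accel_flag_count("flags : fpu vme de pse tsc msr pae"))      # 0
```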
2. Restart the Compute service:
```
# service nova-compute restart
```
> If the nova-compute service fails to start, check /var/log/nova/nova-compute.log. The error message "AMQP server on controller:5672 is unreachable" likely indicates that the firewall on the controller node is blocking access to port 5672. Configure the firewall to open port 5672 on the controller node, then restart the nova-compute service on the compute node.
### Add the compute node to the cell database
> Run the following commands on the controller node
1. Source the admin credentials to enable admin-only CLI commands, then confirm that there are compute hosts in the database:
```
$ . admin-openrc
$ openstack compute service list --service nova-compute
+----+--------------+----------+------+---------+-------+----------------------------+
| ID | Binary | Host | Zone | Status | State | Updated At |
+----+--------------+----------+------+---------+-------+----------------------------+
| 12 | nova-compute | compute1 | nova | enabled | up | 2018-12-26T08:27:36.000000 |
+----+--------------+----------+------+---------+-------+----------------------------+
```
2. Discover compute hosts:
```
# su -s /bin/sh -c "nova-manage cell_v2 discover_hosts --verbose" nova
Found 2 cell mappings.
Skipping cell0 since it does not contain hosts.
Getting computes from cell 'cell1': 4fe423de-6ed9-4e72-9157-9daef86ecacc
Checking host mapping for compute host 'compute1': 2432a501-9157-4a94-9f93-63c9998b65ab
Creating host mapping for compute host 'compute1': 2432a501-9157-4a94-9f93-63c9998b65ab
Found 1 unmapped computes in cell: 4fe423de-6ed9-4e72-9157-9daef86ecacc
```
> When you add new compute nodes, you must run nova-manage cell_v2 discover_hosts on the controller node to register them. Alternatively, you can set an appropriate discovery interval in /etc/nova/nova.conf:
```
[scheduler]
discover_hosts_in_cells_interval = 300
```
### Verify operation
Verify operation of the Compute service
> Perform these commands on the controller node
1. Source the admin credentials to gain access to admin-only CLI commands:
```
$ . admin-openrc
```
2. List service components to verify successful launch and registration of each process:
```
$ openstack compute service list
+----+------------------+-------------+----------+---------+-------+----------------------------+
| ID | Binary | Host | Zone | Status | State | Updated At |
+----+------------------+-------------+----------+---------+-------+----------------------------+
| 9 | nova-consoleauth | controller1 | internal | enabled | up | 2018-12-26T08:39:51.000000 |
| 10 | nova-scheduler | controller1 | internal | enabled | up | 2018-12-26T08:39:55.000000 |
| 11 | nova-conductor | controller1 | internal | enabled | up | 2018-12-26T08:39:58.000000 |
| 12 | nova-compute | compute1 | nova | enabled | up | 2018-12-26T08:39:56.000000 |
+----+------------------+-------------+----------+---------+-------+----------------------------+
```
> This output should indicate three service components enabled on the controller node and one service component enabled on the compute node
3. List API endpoints in the Identity service to verify connectivity with the Identity service:
> The endpoint list may differ depending on the installation of OpenStack components
```
$ openstack catalog list
+-----------+-----------+-----------------------------------------+
| Name | Type | Endpoints |
+-----------+-----------+-----------------------------------------+
| nova | compute | RegionOne |
| | | admin: http://controller:8774/v2.1 |
| | | RegionOne |
| | | public: http://controller:8774/v2.1 |
| | | RegionOne |
| | | internal: http://controller:8774/v2.1 |
| | | |
| placement | placement | RegionOne |
| | | internal: http://controller:8778 |
| | | RegionOne |
| | | admin: http://controller:8778 |
| | | RegionOne |
| | | public: http://controller:8778 |
| | | |
| glance | image | RegionOne |
| | | admin: http://controller:9292 |
| | | RegionOne |
| | | public: http://controller:9292 |
| | | RegionOne |
| | | internal: http://controller:9292 |
| | | |
| keystone | identity | RegionOne |
| | | admin: http://controller:5000/v3/ |
| | | RegionOne |
| | | internal: http://controller:5000/v3/ |
| | | RegionOne |
| | | public: http://controller:5000/v3/ |
| | | |
+-----------+-----------+-----------------------------------------+
```
> Ignore any warnings in this output
4. List images in the Image service to verify connectivity with the Image service:
```
$ openstack image list
+--------------------------------------+--------+--------+
| ID | Name | Status |
+--------------------------------------+--------+--------+
| 84accc04-3ca7-438f-9f6f-85e726eaee19 | cirros | active |
+--------------------------------------+--------+--------+
```
5. Check that the cells and placement API are working successfully:
```
# nova-status upgrade check
+--------------------------------+
| Upgrade Check Results |
+--------------------------------+
| Check: Cells v2 |
| Result: Success |
| Details: None |
+--------------------------------+
| Check: Placement API |
| Result: Success |
| Details: None |
+--------------------------------+
| Check: Resource Providers |
| Result: Success |
| Details: None |
+--------------------------------+
| Check: Ironic Flavor Migration |
| Result: Success |
| Details: None |
+--------------------------------+
| Check: API Service Version |
| Result: Success |
| Details: None |
+--------------------------------+
```
## neutron(controller node)
### Prerequisites
Before you configure the OpenStack Networking (neutron) service, you must create a database, service credentials, and API endpoints.
1. To create the database, complete these steps:
* Use the database access client to connect to the database server as the root user:
```
$ mysql -u root -p
```
* Create the neutron database:
```
MariaDB [(none)]> CREATE DATABASE neutron;
```
* Grant proper access to the neutron database:
```
MariaDB [(none)]> GRANT ALL PRIVILEGES ON neutron.* TO 'neutron'@'localhost' \
IDENTIFIED BY 'NEUTRON_DBPASS';
MariaDB [(none)]> GRANT ALL PRIVILEGES ON neutron.* TO 'neutron'@'%' \
IDENTIFIED BY 'NEUTRON_DBPASS';
```
* Exit the database access client:
```
MariaDB [(none)]> exit;
Bye
```
2. Source the admin credentials to gain access to admin-only CLI commands:
```
$ . admin-openrc
```
3. To create the service credentials, complete these steps:
* Create the neutron user:
```
$ openstack user create --domain default --password-prompt neutron
User Password:
Repeat User Password:
+---------------------+----------------------------------+
| Field | Value |
+---------------------+----------------------------------+
| domain_id | default |
| enabled | True |
| id | e9813ea1e36f4a40be4023f77b36c932 |
| name | neutron |
| options | {} |
| password_expires_at | None |
+---------------------+----------------------------------+
```
> Using NEUTRON_PASS as the password is recommended
* Add the admin role to the neutron user:
```
$ openstack role add --project service --user neutron admin
```
> This command provides no output
* Create the neutron service entity:
```
$ openstack service create --name neutron \
--description "OpenStack Networking" network
+-------------+----------------------------------+
| Field | Value |
+-------------+----------------------------------+
| description | OpenStack Networking |
| enabled | True |
| id | 5a0ca7b395d2433b93852b9ed1a68694 |
| name | neutron |
| type | network |
+-------------+----------------------------------+
```
4. Create the Networking service API endpoints:
```
$ openstack endpoint create --region RegionOne \
network public http://controller:9696
+--------------+----------------------------------+
| Field | Value |
+--------------+----------------------------------+
| enabled | True |
| id | 555f2867443944bd8a14f25c3b2958b0 |
| interface | public |
| region | RegionOne |
| region_id | RegionOne |
| service_id | 5a0ca7b395d2433b93852b9ed1a68694 |
| service_name | neutron |
| service_type | network |
| url | http://controller:9696 |
+--------------+----------------------------------+
$ openstack endpoint create --region RegionOne \
network internal http://controller:9696
+--------------+----------------------------------+
| Field | Value |
+--------------+----------------------------------+
| enabled | True |
| id | c96f446eac3a4ee886d149b4c485d31c |
| interface | internal |
| region | RegionOne |
| region_id | RegionOne |
| service_id | 5a0ca7b395d2433b93852b9ed1a68694 |
| service_name | neutron |
| service_type | network |
| url | http://controller:9696 |
+--------------+----------------------------------+
$ openstack endpoint create --region RegionOne \
network admin http://controller:9696
+--------------+----------------------------------+
| Field | Value |
+--------------+----------------------------------+
| enabled | True |
| id | a9993e648f0b4862b3b62780a3d86ffe |
| interface | admin |
| region | RegionOne |
| region_id | RegionOne |
| service_id | 5a0ca7b395d2433b93852b9ed1a68694 |
| service_name | neutron |
| service_type | network |
| url | http://controller:9696 |
+--------------+----------------------------------+
```
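Every service registers the same three endpoint interfaces (public, internal, admin) against the same URL. A short sketch that generates the commands executed above, which can help when scripting this for several services:

```python
def endpoint_cmds(service: str, url: str, region: str = "RegionOne"):
    # One "openstack endpoint create" per interface, as run above.
    return [
        f"openstack endpoint create --region {region} {service} {iface} {url}"
        for iface in ("public", "internal", "admin")
    ]

for cmd in endpoint_cmds("network", "http://controller:9696"):
    print(cmd)
```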
### Configure networking options
You can deploy the Networking service using one of two architectures.
Option 1 is the simplest architecture: it only supports attaching instances to provider (external) networks. There are no self-service (private) networks, routers, or floating IP addresses. Only an administrator or other privileged user can manage provider networks.
Option 2 augments option 1 with layer-3 services that support self-service networks. Non-privileged users can manage self-service networks, including routers that provide connectivity between self-service and provider networks. Additionally, floating IP addresses provide connectivity to instances on self-service networks from external networks.
Self-service networks typically use overlay networks. Overlay network protocols such as VXLAN include additional headers that increase overhead and decrease space available for the payload or user data. Without knowledge of the virtual network infrastructure, instances attempt to send packets using the default Ethernet maximum transmission unit (MTU) of 1500 bytes. The Networking service automatically provides the correct MTU value to instances via DHCP. However, some cloud images do not use DHCP or ignore the DHCP MTU option and require configuration using metadata or a script.
> Option 2 also supports attaching instances to provider networks
Choose one of the following networking options to configure services specific to it. Then, return here and proceed with the remaining configuration.
* [Networking Option 1: Provider networks](https://docs.openstack.org/neutron/queens/install/controller-install-option1-ubuntu.html)
* [Networking Option 2: Self-service networks](https://docs.openstack.org/neutron/queens/install/controller-install-option2-ubuntu.html)
Complete the networking option configuration above first, then continue with the following sections.
### Configure the metadata agent
The metadata agent provides configuration information such as credentials to instances.
1. Edit the /etc/neutron/metadata_agent.ini file and complete the following actions:
* In the [DEFAULT] section, configure the metadata host and shared secret:
```
[DEFAULT]
# ...
nova_metadata_host = controller
metadata_proxy_shared_secret = METADATA_SECRET
```
Replace METADATA_SECRET with a suitable secret for the metadata proxy
### Configure the Compute service to use the Networking service
> The Nova compute service must be installed to complete this step. For more details, see the [docs website](https://docs.openstack.org/)
1. Edit the /etc/nova/nova.conf file and complete the following actions:
* In the [neutron] section, configure access parameters, enable the metadata proxy, and configure the secret:
```
[neutron]
# ...
url = http://controller:9696
auth_url = http://controller:5000
auth_type = password
project_domain_name = default
user_domain_name = default
region_name = RegionOne
project_name = service
username = neutron
password = NEUTRON_PASS
service_metadata_proxy = true
metadata_proxy_shared_secret = METADATA_SECRET
```
Replace NEUTRON_PASS with the password you chose for the neutron user in the Identity service
Replace METADATA_SECRET with the secret you chose for the metadata proxy
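METADATA_SECRET should be an unpredictable string shared only between neutron and nova. One convenient way to generate such a value (an illustrative choice using Python's standard secrets module, not something the guide mandates):

```python
import secrets

# Generate a 32-byte random secret (64 hex characters) suitable for
# metadata_proxy_shared_secret on both the neutron and nova sides.
secret = secrets.token_hex(32)
print(secret)
```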
### Finalize installation
1. Populate the database:
```
# su -s /bin/sh -c "neutron-db-manage --config-file /etc/neutron/neutron.conf \
--config-file /etc/neutron/plugins/ml2/ml2_conf.ini upgrade head" neutron
```
> Database population occurs later for Networking because the script requires complete server and plug-in configuration files
2. Restart the Compute API service:
```
# service nova-api restart
```
3. Restart the Networking services:
* For both networking options:
```
# service neutron-server restart
# service neutron-linuxbridge-agent restart
# service neutron-dhcp-agent restart
# service neutron-metadata-agent restart
```
* For networking option 2, also restart the layer-3 service:
```
# service neutron-l3-agent restart
```
## neutron(Compute node)
### Install and configure compute node
The compute node handles connectivity and security groups for instances.
### Install the components
```
# apt install neutron-linuxbridge-agent
```
### Configure the common component
The Networking common component configuration includes the authentication mechanism, message queue, and plug-in.
> The default configuration files vary by distribution. You may need to add these sections and options rather than modifying existing ones. In addition, an ellipsis (...) in a configuration snippet indicates potential default options that you should retain.
1. Edit the /etc/neutron/neutron.conf file and complete the following actions:
* In the [database] section, comment out any connection options because compute nodes do not directly access the database
* In the [DEFAULT] section, configure RabbitMQ message queue access:
```
[DEFAULT]
# ...
transport_url = rabbit://openstack:RABBIT_PASS@controller
```
Replace RABBIT_PASS with the password you chose for the openstack account in RabbitMQ
* In the [DEFAULT] and [keystone_authtoken] sections, configure Identity service access:
```
[DEFAULT]
# ...
auth_strategy = keystone
[keystone_authtoken]
# ...
auth_uri = http://controller:5000
auth_url = http://controller:5000
memcached_servers = controller:11211
auth_type = password
project_domain_name = default
user_domain_name = default
project_name = service
username = neutron
password = NEUTRON_PASS
```
Replace NEUTRON_PASS with the password you chose for the neutron user in the Identity service
> Comment out or remove any other options in the [keystone_authtoken] section
* In the [oslo_concurrency] section, configure the lock path:
```
[oslo_concurrency]
# ...
lock_path = /var/lib/neutron/tmp
```
### Configure networking options
Choose the same networking option that you chose for the controller node to configure services specific to it. Then, return here and proceed to [Configure the Compute service to use the Networking service](https://docs.openstack.org/neutron/queens/install/compute-install-ubuntu.html#neutron-compute-compute-ubuntu)
* [Networking Option 1: Provider networks](https://docs.openstack.org/neutron/queens/install/compute-install-option1-ubuntu.html)
* [Networking Option 2: Self-service networks](https://docs.openstack.org/neutron/queens/install/compute-install-option2-ubuntu.html)
### Configure the Compute service to use the Networking service
1. Edit the /etc/nova/nova.conf file and complete the following actions:
* In the [neutron] section, configure access parameters:
```
[neutron]
# ...
url = http://controller:9696
auth_url = http://controller:5000
auth_type = password
project_domain_name = default
user_domain_name = default
region_name = RegionOne
project_name = service
username = neutron
password = NEUTRON_PASS
```
Replace NEUTRON_PASS with the password you chose for the neutron user in the Identity service
### Finalize installation
1. Restart the Compute service:
```
# service nova-compute restart
```
2. Restart the Linux bridge agent:
```
# service neutron-linuxbridge-agent restart
```
### Verify operation
> Perform these commands on the controller node
1. Source the admin credentials to gain access to admin-only CLI commands:
```
$ . admin-openrc
```
2. List loaded extensions to verify successful launch of the neutron-server process:
```
$ openstack extension list --network
+----------------------------------------------------------------------------------------------+---------------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------+
| Name | Alias | Description |
+----------------------------------------------------------------------------------------------+---------------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------+
| Default Subnetpools | default-subnetpools | Provides ability to mark and use a subnetpool as the default. |
| Availability Zone | availability_zone | The availability zone extension. |
| Network Availability Zone | network_availability_zone | Availability zone support for network. |
| Auto Allocated Topology Services | auto-allocated-topology | Auto Allocated Topology Services. |
| Neutron L3 Configurable external gateway mode | ext-gw-mode | Extension of the router abstraction for specifying whether SNAT should occur on the external gateway |
| Port Binding | binding | Expose port bindings of a virtual port to external application |
| agent | agent | The agent management extension. |
| Subnet Allocation | subnet_allocation | Enables allocation of subnets from a subnet pool |
| L3 Agent Scheduler | l3_agent_scheduler | Schedule routers among l3 agents |
| Tag support | tag | Enables to set tag on resources. |
| Neutron external network | external-net | Adds external network attribute to network resource. |
| Tag support for resources with standard attribute: trunk, policy, security_group, floatingip | standard-attr-tag | Enables to set tag on resources with standard attribute. |
| Neutron Service Flavors | flavors | Flavor specification for Neutron advanced services. |
| Network MTU | net-mtu | Provides MTU attribute for a network resource. |
| Network IP Availability | network-ip-availability | Provides IP availability data for each network and subnet. |
| Quota management support | quotas | Expose functions for quotas management per tenant |
| If-Match constraints based on revision_number | revision-if-match | Extension indicating that If-Match based on revision_number is supported. |
| HA Router extension | l3-ha | Adds HA capability to routers. |
| Provider Network | provider | Expose mapping of virtual networks to physical networks |
| Multi Provider Network | multi-provider | Expose mapping of virtual networks to multiple physical networks |
| Quota details management support | quota_details | Expose functions for quotas usage statistics per project |
| Address scope | address-scope | Address scopes extension. |
| Neutron Extra Route | extraroute | Extra routes configuration for L3 router |
| Network MTU (writable) | net-mtu-writable | Provides a writable MTU attribute for a network resource. |
| Subnet service types | subnet-service-types | Provides ability to set the subnet service_types field |
| Resource timestamps | standard-attr-timestamp | Adds created_at and updated_at fields to all Neutron resources that have Neutron standard attributes. |
| Neutron Service Type Management | service-type | API for retrieving service providers for Neutron advanced services |
| Router Flavor Extension | l3-flavors | Flavor support for routers. |
| Port Security | port-security | Provides port security |
| Neutron Extra DHCP options | extra_dhcp_opt | Extra options configuration for DHCP. For example PXE boot options to DHCP clients can be specified (e.g. tftp-server, server-ip-address, bootfile-name) |
| Resource revision numbers | standard-attr-revisions | This extension will display the revision number of neutron resources. |
| Pagination support | pagination | Extension that indicates that pagination is enabled. |
| Sorting support | sorting | Extension that indicates that sorting is enabled. |
| security-group | security-group | The security groups extension. |
| DHCP Agent Scheduler | dhcp_agent_scheduler | Schedule networks among dhcp agents |
| Router Availability Zone | router_availability_zone | Availability zone support for router. |
| RBAC Policies | rbac-policies | Allows creation and modification of policies that control tenant access to resources. |
| Tag support for resources: subnet, subnetpool, port, router | tag-ext | Extends tag support to more L2 and L3 resources. |
| standard-attr-description | standard-attr-description | Extension to add descriptions to standard attributes |
| IP address substring filtering | ip-substring-filtering | Provides IP address substring filtering when listing ports |
| Neutron L3 Router | router | Router abstraction for basic L3 forwarding between L2 Neutron networks and access to external networks via a NAT gateway. |
| Allowed Address Pairs | allowed-address-pairs | Provides allowed address pairs |
| project_id field enabled | project-id | Extension that indicates that project_id field is enabled. |
| Distributed Virtual Router | dvr | Enables configuration of Distributed Virtual Routers. |
+----------------------------------------------------------------------------------------------+---------------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------+
```
3. Networking Option 2: Self-service networks
* List agents to verify successful launch of the neutron agents
```
$ openstack network agent list
+--------------------------------------+--------------------+-------------+-------------------+-------+-------+---------------------------+
| ID | Agent Type | Host | Availability Zone | Alive | State | Binary |
+--------------------------------------+--------------------+-------------+-------------------+-------+-------+---------------------------+
| 15e52339-0766-4f26-8bb8-20273d16c578 | Linux bridge agent | compute1 | None | :-) | UP | neutron-linuxbridge-agent |
| a003fae1-ddcd-4a8c-ba17-9b77931be7d2 | Linux bridge agent | controller1 | None | :-) | UP | neutron-linuxbridge-agent |
| ccf852b2-4cde-45c9-ad69-b19c18a8f865 | Metadata agent | controller1 | None | :-) | UP | neutron-metadata-agent |
| d3241bcd-3f3b-4a74-9b61-11e213c91658 | L3 agent | controller1 | nova | :-) | UP | neutron-l3-agent |
| e4efedf4-6929-48c3-b8cb-a95f66ec04ba | DHCP agent | controller1 | nova | :-) | UP | neutron-dhcp-agent |
+--------------------------------------+--------------------+-------------+-------------------+-------+-------+---------------------------+
```
> The output should indicate four agents on the controller node and one agent on each compute node
## Networking Option 2: Self-service networks(Controller node)
Install and configure the Networking components on the controller node.
### Install the components
```
# apt install neutron-server neutron-plugin-ml2 \
neutron-linuxbridge-agent neutron-l3-agent neutron-dhcp-agent \
neutron-metadata-agent
```
### Configure the server component
Edit the /etc/neutron/neutron.conf file and complete the following actions:
* In the [database] section, configure database access:
```
[database]
# ...
connection = mysql+pymysql://neutron:NEUTRON_DBPASS@controller/neutron
```
Replace NEUTRON_DBPASS with the password you chose for the database
> Comment out or remove any other connection options in the [database] section
* In the [DEFAULT] section, enable the Modular Layer 2 (ML2) plug-in, router service, and overlapping IP addresses:
```
[DEFAULT]
# ...
core_plugin = ml2
service_plugins = router
allow_overlapping_ips = true
```
* In the [DEFAULT] section, configure RabbitMQ message queue access:
```
[DEFAULT]
# ...
transport_url = rabbit://openstack:RABBIT_PASS@controller
```
Replace RABBIT_PASS with the password you chose for the openstack account in RabbitMQ
* In the [DEFAULT] and [keystone_authtoken] sections, configure Identity service access:
```
[DEFAULT]
# ...
auth_strategy = keystone
[keystone_authtoken]
# ...
auth_uri = http://controller:5000
auth_url = http://controller:5000
memcached_servers = controller:11211
auth_type = password
project_domain_name = default
user_domain_name = default
project_name = service
username = neutron
password = NEUTRON_PASS
```
Replace NEUTRON_PASS with the password you chose for the neutron user in the Identity service
> Comment out or remove any other options in the [keystone_authtoken] section
* In the [DEFAULT] and [nova] sections, configure Networking to notify Compute of network topology changes:
```
[DEFAULT]
# ...
notify_nova_on_port_status_changes = true
notify_nova_on_port_data_changes = true
[nova]
# ...
auth_url = http://controller:5000
auth_type = password
project_domain_name = default
user_domain_name = default
region_name = RegionOne
project_name = service
username = nova
password = NOVA_PASS
```
Replace NOVA_PASS with the password you chose for the nova user in the Identity service
### Configure the Modular Layer 2 (ML2) plug-in
> The ML2 plug-in uses the Linux bridge mechanism to build layer-2 (bridging and switching) virtual networking infrastructure for instances
Edit the /etc/neutron/plugins/ml2/ml2_conf.ini file and complete the following actions:
* In the [ml2] section, enable flat, VLAN, and VXLAN networks:
```
[ml2]
# ...
type_drivers = flat,vlan,vxlan
```
* In the [ml2] section, enable VXLAN self-service networks:
```
[ml2]
# ...
tenant_network_types = vxlan
```
* In the [ml2] section, enable the Linux bridge and layer-2 population mechanisms:
```
[ml2]
# ...
mechanism_drivers = linuxbridge,l2population
```
> After you configure the ML2 plug-in, removing values in the type_drivers option can lead to database inconsistency
> The Linux bridge agent only supports VXLAN overlay networks
* In the [ml2] section, enable the port security extension driver:
```
[ml2]
# ...
extension_drivers = port_security
```
* In the [ml2_type_flat] section, configure the provider virtual network as a flat network:
```
[ml2_type_flat]
# ...
flat_networks = provider
```
* In the [ml2_type_vxlan] section, configure the VXLAN network identifier range for self-service networks:
```
[ml2_type_vxlan]
# ...
vni_ranges = 1:1000
```
* In the [securitygroup] section, enable ipset to increase the efficiency of security group rules:
```
[securitygroup]
# ...
enable_ipset = true
```
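vni_ranges uses a MIN:MAX syntax, and the ML2 plug-in also accepts a comma-separated list of such ranges. A small sketch of how such a value decomposes into numeric ranges (an illustration of the syntax, not the plug-in's actual parser):

```python
def parse_vni_ranges(value: str):
    # "1:1000" -> [(1, 1000)]; comma-separated ranges are also allowed.
    ranges = []
    for part in value.split(","):
        lo, hi = part.split(":")
        ranges.append((int(lo), int(hi)))
    return ranges

print(parse_vni_ranges("1:1000"))  # [(1, 1000)]
```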
### Configure the Linux bridge agent
The Linux bridge agent builds layer-2 (bridging and switching) virtual networking infrastructure for instances and handles security groups.
Edit the /etc/neutron/plugins/ml2/linuxbridge_agent.ini file and complete the following actions:
* In the [linux_bridge] section, map the provider virtual network to the provider physical network interface:
```
[linux_bridge]
physical_interface_mappings = provider:PROVIDER_INTERFACE_NAME
```
> Replace PROVIDER_INTERFACE_NAME with the name of the second network interface. For more information, see [Host networking](https://docs.openstack.org/neutron/queens/install/environment-networking-ubuntu.html)
* In the [vxlan] section, enable VXLAN overlay networks, configure the IP address of the physical network interface that handles overlay networks, and enable layer-2 population:
```
[vxlan]
enable_vxlan = true
local_ip = OVERLAY_INTERFACE_IP_ADDRESS
l2_population = true
```
Replace OVERLAY_INTERFACE_IP_ADDRESS with the IP address of the underlying physical network interface that handles overlay networks. The example architecture uses the management interface to tunnel traffic to the other nodes. Therefore, replace OVERLAY_INTERFACE_IP_ADDRESS with the management IP address of the controller node. For more information, see [Host networking](https://docs.openstack.org/neutron/queens/install/environment-networking-ubuntu.html)
* In the [securitygroup] section, enable security groups and configure the Linux bridge iptables firewall driver:
```
[securitygroup]
# ...
enable_security_group = true
firewall_driver = neutron.agent.linux.iptables_firewall.IptablesFirewallDriver
```
* Ensure your Linux operating system kernel supports network bridge filters by verifying that all of the following sysctl values are set to 1:
```
$ sysctl net.bridge.bridge-nf-call-iptables
net.bridge.bridge-nf-call-iptables = 1
$ sysctl net.bridge.bridge-nf-call-ip6tables
net.bridge.bridge-nf-call-ip6tables = 1
```
To enable networking bridge support, the br_netfilter kernel module typically needs to be loaded. Check your operating system's documentation for additional details on enabling this module.
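A quick way to script the sysctl check above is to parse the command's output lines; a sketch fed sample output rather than calling sysctl itself:

```python
def bridge_filters_enabled(sysctl_output: str) -> bool:
    # Every "key = value" line in the sysctl output must have the value 1.
    return all(
        line.split("=", 1)[1].strip() == "1"
        for line in sysctl_output.strip().splitlines()
    )

sample = """net.bridge.bridge-nf-call-iptables = 1
net.bridge.bridge-nf-call-ip6tables = 1"""
print(bridge_filters_enabled(sample))  # True
```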
### Configure the layer-3 agent
The Layer-3 (L3) agent provides routing and NAT services for self-service virtual networks
1. Edit the /etc/neutron/l3_agent.ini file and complete the following actions:
* In the [DEFAULT] section, configure the Linux bridge interface driver and external network bridge:
```
[DEFAULT]
# ...
interface_driver = linuxbridge
```
### Configure the DHCP agent
The DHCP agent provides DHCP services for virtual networks
1. Edit the /etc/neutron/dhcp_agent.ini file and complete the following actions:
* In the [DEFAULT] section, configure the Linux bridge interface driver, Dnsmasq DHCP driver, and enable isolated metadata so instances on provider networks can access metadata over the network:
```
[DEFAULT]
# ...
interface_driver = linuxbridge
dhcp_driver = neutron.agent.linux.dhcp.Dnsmasq
enable_isolated_metadata = true
```
Return to Networking controller node configuration
## Networking Option 2: Self-service networks(Compute node)
Configure the Networking components on a compute node.
### Configure the Linux bridge agent
The Linux bridge agent builds layer-2 (bridging and switching) virtual networking infrastructure for instances and handles security groups.
1. Edit the /etc/neutron/plugins/ml2/linuxbridge_agent.ini file and complete the following actions:
* In the [linux_bridge] section, map the provider virtual network to the provider physical network interface:
```
[linux_bridge]
physical_interface_mappings = provider:PROVIDER_INTERFACE_NAME
```
> Replace PROVIDER_INTERFACE_NAME with the name of the second network interface. For more information, see [Host networking](https://docs.openstack.org/neutron/queens/install/environment-networking-ubuntu.html)
* In the [vxlan] section, enable VXLAN overlay networks, configure the IP address of the physical network interface that handles overlay networks, and enable layer-2 population:
```
[vxlan]
enable_vxlan = true
local_ip = OVERLAY_INTERFACE_IP_ADDRESS
l2_population = true
```
Replace OVERLAY_INTERFACE_IP_ADDRESS with the IP address of the underlying physical network interface that handles overlay networks. The example architecture uses the management interface to tunnel traffic to the other nodes. Therefore, replace OVERLAY_INTERFACE_IP_ADDRESS with the management IP address of the compute node. For more information, see [Host networking](https://docs.openstack.org/neutron/queens/install/environment-networking-ubuntu.html)
* In the [securitygroup] section, enable security groups and configure the Linux bridge iptables firewall driver:
```
[securitygroup]
# ...
enable_security_group = true
firewall_driver = neutron.agent.linux.iptables_firewall.IptablesFirewallDriver
```
* Ensure your Linux operating system kernel supports network bridge filters by verifying that all of the following sysctl values are set to 1
```
$ sysctl net.bridge.bridge-nf-call-iptables
net.bridge.bridge-nf-call-iptables = 1
$ sysctl net.bridge.bridge-nf-call-ip6tables
net.bridge.bridge-nf-call-ip6tables = 1
```
To enable networking bridge support, typically the br_netfilter kernel module needs to be loaded. Check your operating system's documentation for additional details on enabling this module
Return to Networking compute node configuration
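When filling in a value such as local_ip above, it can help to read the address straight off the interface rather than retyping it. A minimal sketch, assuming a POSIX shell with the iproute2 `ip` command available; pass in your management interface name (`lo` is used below only as a self-contained demonstration):

```shell
# Print the first IPv4 address configured on the given interface.
iface_ip() {
  ip -4 -o addr show dev "$1" | awk '{print $4}' | cut -d/ -f1 | head -n1
}

# Demonstration against the loopback interface:
iface_ip lo
```

Substitute the real management interface (for example the first NIC on the compute node) to obtain the address to place in `local_ip`.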
## horizon(Dashboard)
This section describes how to install and configure the Dashboard on the controller node
The only core service required by the Dashboard is the Identity service. You can use the Dashboard in combination with other services, such as the Image service, Compute, and Networking. You can also use the Dashboard in environments with stand-alone services such as Object Storage
> This section assumes proper installation, configuration, and operation of the Identity service using the Apache HTTP server and Memcached service
### Install and configure components
> Default configuration files vary by distribution. You might need to add these sections and options rather than modifying existing sections and options. Also, an ellipsis (...) in the configuration snippets indicates potential default configuration options that you should retain
1. Install the packages
```
# apt install openstack-dashboard
```
2. Edit the /etc/openstack-dashboard/local_settings.py file and complete the following actions
* Configure the Dashboard to use OpenStack services on the controller node
```
OPENSTACK_HOST = "controller"
```
* Allow your hosts to access the Dashboard
```
ALLOWED_HOSTS = ['one.example.com', 'two.example.com']
```
> Do not edit the ALLOWED_HOSTS parameter under the Ubuntu configuration section
> ALLOWED_HOSTS can also be ['*'] to accept all hosts. This may be useful for development work, but is potentially insecure and should not be used in production. See the [Django documentation](https://docs.djangoproject.com/en/dev/ref/settings/#allowed-hosts) for further information
* Configure the memcached session storage service
```
SESSION_ENGINE = 'django.contrib.sessions.backends.cache'
CACHES = {
'default': {
'BACKEND': 'django.core.cache.backends.memcached.MemcachedCache',
'LOCATION': 'controller:11211',
}
}
```
> Comment out any other session storage configuration
* Enable the Identity API version 3
```
OPENSTACK_KEYSTONE_URL = "http://%s:5000/v3" % OPENSTACK_HOST
```
* Enable support for domains
```
OPENSTACK_KEYSTONE_MULTIDOMAIN_SUPPORT = True
```
* Configure the API versions
```
OPENSTACK_API_VERSIONS = {
"identity": 3,
"image": 2,
"volume": 2,
}
```
* Configure Default as the default domain for users that you create via the dashboard
```
OPENSTACK_KEYSTONE_DEFAULT_DOMAIN = "Default"
```
* Configure user as the default role for users that you create via the dashboard
```
OPENSTACK_KEYSTONE_DEFAULT_ROLE = "user"
```
* (Optional) Configure the time zone
```
TIME_ZONE = "TIME_ZONE"
```
Replace TIME_ZONE with an appropriate time zone identifier such as Asia/Taipei. For more information, see the [list of time zones](https://en.wikipedia.org/wiki/List_of_tz_database_time_zones)
3. Add the following line to /etc/apache2/conf-available/openstack-dashboard.conf if it is not already present
```
WSGIApplicationGroup %{GLOBAL}
```
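A TIME_ZONE typo only surfaces at runtime, so it is worth validating the name first. A minimal sketch that checks the name against the tz database files (the `/usr/share/zoneinfo` path is the conventional location when the tzdata package is installed):

```shell
# Report whether a TZ database name exists on this host.
check_tz() {
  if [ -e "/usr/share/zoneinfo/$1" ]; then echo valid; else echo unknown; fi
}

check_tz Asia/Taipei
check_tz Not/AZone
```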
### Finalize installation
* Reload the web server configuration
```
# service apache2 reload
```
### Verify operation for Ubuntu
URL:[http://controller/horizon](http://controller/horizon)
Authenticate using admin or demo user and default domain credentials
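Before opening a browser, reachability can be sanity-checked from the command line. A sketch, assuming curl is installed on the node you test from (`controller` resolves via the /etc/hosts entries configured earlier):
```
# curl -sI http://controller/horizon
```
A response with any HTTP status line confirms Apache is serving the Dashboard; no output suggests a name-resolution or web-server problem.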
## Troubleshooting
### Networking
* Problem
```
Temporary failure resolving 'ubuntu-cloud.archive.canonical.com'
```
* Solution (note that on hosts where /etc/resolv.conf is managed by systemd-resolved or NetworkManager, this change may be overwritten and is only temporary)
```
$ sudo su
# echo "nameserver 8.8.8.8" > /etc/resolv.conf
```
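After updating the resolver, the fix can be confirmed without waiting for another apt failure; `getent` exercises the same NSS lookup path that apt uses. A minimal sketch (`localhost` is used below only because it resolves even without network access; substitute the hostname that originally failed):

```shell
# Resolve a name through the system resolver; prints address and name on success.
getent hosts localhost
```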