# Network Simulation and Analysis (4/26): Group Table & NFV
###### tags: `Mininet`、`SDN`、`OpenFlow`、`OpenvSwitch`
## Group Table Introduction
Besides the Flow Table, an SDN switch's rule tables also include the Group Table.
A group can have one of four types:
1. `All`: broadcast or multicast
2. `Select`: load balancing
3. `Indirect`: more efficient convergence
4. `Fast Failover`: fault tolerance
---
In earlier examples we built connectivity with plain OVS flow rules. The topology below, for instance, offers two possible paths, but once a rule pins the path to h1-s1-s2-s4-h2, the other path sits idle, which is poor use of the network.

Group tables solve this: with load balancing, a select group assigns paths per flow (identified by the 5-tuple), so traffic is no longer confined to a single path.
> 1. Bucket selection works like round robin, switching among the paths in turn (OVS may also hash the flow; see the note in the group-chaining section).
> 2. The 5-tuple is src IP, dst IP, src port, dst port, and transport protocol (UDP/TCP); if any one field differs, it is a different flow.
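To make the per-flow idea concrete, here is a minimal sketch of hash-based select behavior (illustrative only, not the OVS implementation; `pick_bucket` and the bucket strings are invented names): the same 5-tuple always maps to the same bucket, while different flows may spread across buckets.

```python
def pick_bucket(src_ip, dst_ip, src_port, dst_port, proto, buckets):
    # Hash the 5-tuple; one flow always lands in the same bucket,
    # but distinct flows may be spread across the buckets.
    flow = (src_ip, dst_ip, src_port, dst_port, proto)
    return buckets[hash(flow) % len(buckets)]

buckets = ["output:2", "output:3"]
a = pick_bucket("10.0.0.1", "10.0.0.2", 5000, 5001, "tcp", buckets)
b = pick_bucket("10.0.0.1", "10.0.0.2", 5000, 5001, "tcp", buckets)
assert a == b  # one flow, one path
```

Changing any field of the 5-tuple (e.g. the source port) makes it a different flow, which may then hash to the other path.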
### Load Balance (Select)
First create a `lab3` directory and, inside it, a file `1.py`:
```
from mininet.net import Mininet
from mininet.node import Controller, RemoteController, OVSKernelSwitch, UserSwitch, OVSSwitch
from mininet.cli import CLI
from mininet.log import setLogLevel
from mininet.link import Link, TCLink

def topology():
    net = Mininet( controller=RemoteController, link=TCLink, switch=OVSKernelSwitch )

    # Add hosts and switches
    h1 = net.addHost( 'h1', mac="00:00:00:00:00:01" )
    h2 = net.addHost( 'h2', mac="00:00:00:00:00:02" )
    # OVS defaults to OpenFlow 1.0 only; group tables need OpenFlow 1.3+
    s1 = net.addSwitch( 's1', protocols="OpenFlow10,OpenFlow13", listenPort=6634 )
    s2 = net.addSwitch( 's2', protocols="OpenFlow10,OpenFlow13", listenPort=6635 )
    s3 = net.addSwitch( 's3', protocols="OpenFlow10,OpenFlow13", listenPort=6636 )
    s4 = net.addSwitch( 's4', protocols="OpenFlow10,OpenFlow13", listenPort=6637 )
    c0 = net.addController( 'c0', controller=RemoteController, ip='127.0.0.1', port=6633 )

    net.addLink( h1, s1 )
    net.addLink( h2, s4 )
    net.addLink( s1, s2 )
    net.addLink( s1, s3 )
    net.addLink( s2, s4 )
    net.addLink( s3, s4 )

    net.build()
    c0.start()
    s1.start( [c0] )
    s2.start( [c0] )
    s3.start( [c0] )
    s4.start( [c0] )

    print( "*** Running CLI" )
    CLI( net )

    print( "*** Stopping network" )
    net.stop()

if __name__ == '__main__':
    setLogLevel( 'info' )
    topology()
```
and `1.sh`:
```
# -O OpenFlow13: use the OpenFlow 1.3 protocol
ovs-ofctl -O OpenFlow13 add-flow s2 in_port=1,actions=output:2
ovs-ofctl -O OpenFlow13 add-flow s2 in_port=2,actions=output:1
ovs-ofctl -O OpenFlow13 add-flow s3 in_port=1,actions=output:2
ovs-ofctl -O OpenFlow13 add-flow s3 in_port=2,actions=output:1
ovs-ofctl -O OpenFlow13 add-flow s4 in_port=2,actions=output:1
ovs-ofctl -O OpenFlow13 add-flow s4 in_port=3,actions=output:1
ovs-ofctl -O OpenFlow13 add-flow s4 in_port=1,actions=output:3
ovs-ofctl -O OpenFlow13 add-flow s1 in_port=2,actions=output:1
ovs-ofctl -O OpenFlow13 add-flow s1 in_port=3,actions=output:1
ovs-ofctl -O OpenFlow13 add-group s1 group_id=5,type=select,bucket=output:2,bucket=output:3
ovs-ofctl -O OpenFlow13 add-flow s1 in_port=1,actions=group:5
```
> 1. `group_id=5,type=select,bucket=output:2,bucket=output:3`
> // Defines the group's action: `type=select` performs load balancing, and the buckets send different flows out different ports (`output:2`, `output:3`).
> 2. `actions=group:5`
> // The `5` in `group:5` is just an identifier; matching packets are handed to group 5, and each flow is served by one bucket of that group.
Now run `python 1.py`, and in a second terminal run `./1.sh`.
> Before running `1.sh`, make it executable: `chmod a+x 1.sh`

Then inspect the flow table:
```
root@vm2:/home/user/mininet/ovs-test/lab3# chmod a+x 1.sh
root@vm2:/home/user/mininet/ovs-test/lab3# ./1.sh
root@vm2:/home/user/mininet/ovs-test/lab3# ovs-ofctl -O OpenFlow13 dump-flows s1
OFPST_FLOW reply (OF1.3) (xid=0x2):
cookie=0x0, duration=18.215s, table=0, n_packets=0, n_bytes=0, in_port=2 actions=output:1
cookie=0x0, duration=18.213s, table=0, n_packets=0, n_bytes=0, in_port=3 actions=output:1
cookie=0x0, duration=18.207s, table=0, n_packets=0, n_bytes=0, in_port=1 actions=group:5
```
In the Mininet CLI, run `xterm h1 h2` and test TCP throughput with iperf:

Use `ovs-ofctl -O OpenFlow13 dump-groups s1` to check the group; you can see it uses the "select" type.
```
root@vm2:/home/user/mininet/ovs-test/lab3# ovs-ofctl -O OpenFlow13 dump-groups s1
OFPST_GROUP_DESC reply (OF1.3) (xid=0x2):
 group_id=5,type=select,bucket=actions=output:2,bucket=actions=output:3
```
Next, use `ovs-ofctl -O OpenFlow13 dump-ports s1` to watch the port counters.
```
root@vm2:/home/user/mininet/ovs-test/lab3# ovs-ofctl -O OpenFlow13 dump-ports s1
OFPST_PORT reply (OF1.3) (xid=0x2): 4 ports
port LOCAL: rx pkts=0, bytes=0, drop=0, errs=0, frame=0, over=0, crc=0
tx pkts=0, bytes=0, drop=0, errs=0, coll=0
duration=227.955s
port 1: rx pkts=1287028, bytes=85702972, drop=0, errs=0, frame=0, over=0, crc=0
tx pkts=2338133, bytes=111806634868, drop=0, errs=0, coll=0
duration=227.963s
port 2: rx pkts=26, bytes=3136, drop=0, errs=0, frame=0, over=0, crc=0
tx pkts=24, bytes=2922, drop=0, errs=0, coll=0
duration=227.963s
port 3: rx pkts=2338128, bytes=111806699362, drop=0, errs=0, frame=0, over=0, crc=0
tx pkts=1287043, bytes=85705156, drop=0, errs=0, coll=0
duration=227.963s
root@vm2:/home/user/mininet/ovs-test/lab3# ovs-ofctl -O OpenFlow13 dump-ports s1
OFPST_PORT reply (OF1.3) (xid=0x2): 4 ports
port LOCAL: rx pkts=0, bytes=0, drop=0, errs=0, frame=0, over=0, crc=0
tx pkts=0, bytes=0, drop=0, errs=0, coll=0
duration=228.829s
port 1: rx pkts=1342274, bytes=89385592, drop=0, errs=0, frame=0, over=0, crc=0
tx pkts=2444410, bytes=117029852126, drop=0, errs=0, coll=0
duration=228.837s
port 2: rx pkts=26, bytes=3136, drop=0, errs=0, frame=0, over=0, crc=0
tx pkts=24, bytes=2922, drop=0, errs=0, coll=0
duration=228.837s
port 3: rx pkts=2444403, bytes=117029851394, drop=0, errs=0, frame=0, over=0, crc=0
tx pkts=1342290, bytes=89387842, drop=0, errs=0, coll=0
duration=228.837s
root@vm2:/home/user/mininet/ovs-test/lab3# ovs-ofctl -O OpenFlow13 dump-ports s1
OFPST_PORT reply (OF1.3) (xid=0x2): 4 ports
port LOCAL: rx pkts=0, bytes=0, drop=0, errs=0, frame=0, over=0, crc=0
tx pkts=0, bytes=0, drop=0, errs=0, coll=0
duration=229.611s
port 1: rx pkts=1378267, bytes=91801938, drop=0, errs=0, frame=0, over=0, crc=0
tx pkts=2527581, bytes=121257469404, drop=0, errs=0, coll=0
duration=229.619s
port 2: rx pkts=26, bytes=3136, drop=0, errs=0, frame=0, over=0, crc=0
tx pkts=24, bytes=2922, drop=0, errs=0, coll=0
duration=229.619s
port 3: rx pkts=2527576, bytes=121257541204, drop=0, errs=0, frame=0, over=0, crc=0
tx pkts=1378282, bytes=91804122, drop=0, errs=0, coll=0
duration=229.619s
root@vm2:/home/user/mininet/ovs-test/lab3# ovs-ofctl -O OpenFlow13 dump-ports s1
OFPST_PORT reply (OF1.3) (xid=0x2): 4 ports
port LOCAL: rx pkts=0, bytes=0, drop=0, errs=0, frame=0, over=0, crc=0
tx pkts=0, bytes=0, drop=0, errs=0, coll=0
duration=230.376s
port 1: rx pkts=1445002, bytes=96227364, drop=0, errs=0, frame=0, over=0, crc=0
tx pkts=2629003, bytes=125858569840, drop=0, errs=0, coll=0
duration=230.384s
port 2: rx pkts=26, bytes=3136, drop=0, errs=0, frame=0, over=0, crc=0
tx pkts=24, bytes=2922, drop=0, errs=0, coll=0
duration=230.384s
port 3: rx pkts=2628996, bytes=125858569108, drop=0, errs=0, frame=0, over=0, crc=0
tx pkts=1445017, bytes=96229548, drop=0, errs=0, coll=0
duration=230.384s
```
As the counters show, the group keeps this iperf transfer on a single path (here port 3, whose counters grow steadily while port 2 stays idle): a select group pins each flow to one bucket.
One thing to note: in this setup, select only load-balances TCP packets.
---
### Fault Tolerance (Fast Failover)
Next we switch the group to fast-failover mode
and test what happens when a link goes down during an iperf transfer.
Create a folder `lab4` and, inside it, `1.py`:
```
from mininet.net import Mininet
from mininet.node import Controller, RemoteController, OVSKernelSwitch, UserSwitch, OVSSwitch
from mininet.cli import CLI
from mininet.log import setLogLevel
from mininet.link import Link, TCLink

def topology():
    net = Mininet( controller=RemoteController, link=TCLink, switch=OVSKernelSwitch )

    # Add hosts and switches
    h1 = net.addHost( 'h1', mac="00:00:00:00:00:01" )
    h2 = net.addHost( 'h2', mac="00:00:00:00:00:02" )
    s1 = net.addSwitch( 's1', protocols="OpenFlow10,OpenFlow13", listenPort=6634 )
    s2 = net.addSwitch( 's2', protocols="OpenFlow10,OpenFlow13", listenPort=6635 )
    s3 = net.addSwitch( 's3', protocols="OpenFlow10,OpenFlow13", listenPort=6636 )
    s4 = net.addSwitch( 's4', protocols="OpenFlow10,OpenFlow13", listenPort=6637 )
    c0 = net.addController( 'c0', controller=RemoteController, ip='127.0.0.1', port=6633 )

    net.addLink( h1, s1 )
    net.addLink( h2, s4 )
    net.addLink( s1, s2 )
    net.addLink( s1, s3 )
    net.addLink( s2, s4 )
    net.addLink( s3, s4 )

    net.build()
    c0.start()
    s1.start( [c0] )
    s2.start( [c0] )
    s3.start( [c0] )
    s4.start( [c0] )

    print( "*** Running CLI" )
    CLI( net )

    print( "*** Stopping network" )
    net.stop()

if __name__ == '__main__':
    setLogLevel( 'info' )
    topology()
```
and `1.sh`:
```
ovs-ofctl -O OpenFlow13 add-flow s2 in_port=1,actions=output:2
ovs-ofctl -O OpenFlow13 add-flow s2 in_port=2,actions=output:1
ovs-ofctl -O OpenFlow13 add-flow s3 in_port=1,actions=output:2
ovs-ofctl -O OpenFlow13 add-flow s3 in_port=2,actions=output:1
ovs-ofctl -O OpenFlow13 add-flow s4 in_port=2,actions=output:1
ovs-ofctl -O OpenFlow13 add-flow s4 in_port=3,actions=output:1
ovs-ofctl -O OpenFlow13 add-flow s4 in_port=1,actions=output:3
ovs-ofctl -O OpenFlow13 add-flow s1 in_port=2,actions=output:1
ovs-ofctl -O OpenFlow13 add-flow s1 in_port=3,actions=output:1
ovs-ofctl -O OpenFlow13 add-group s1 group_id=4,type=ff,bucket=watch_port:2,output:2,bucket=watch_port:3,output:3
ovs-ofctl -O OpenFlow13 add-flow s1 in_port=1,actions=group:4
```
Check that the group's type is `ff` (fast failover):
```
root@vm2:/home/user/mininet/ovs-test/lab4# ovs-ofctl -O OpenFlow13 dump-groups s1
OFPST_GROUP_DESC reply (OF1.3) (xid=0x2):
group_id=4,type=ff,bucket=watch_port:2,actions=output:2,bucket=watch_port:3,actions=output:3
```
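The semantics of `type=ff` can be sketched as follows (a simplified illustration, not OVS code; the function and variable names are invented): each bucket watches a port, and the group always forwards via the first bucket whose watched port is still up.

```python
def first_live_bucket(buckets, port_is_up):
    # buckets: list of (watch_port, action). OpenFlow fast failover
    # uses the first bucket whose watch_port reports as live.
    for watch_port, action in buckets:
        if port_is_up.get(watch_port, False):
            return action
    return None  # no live bucket: the packet is dropped

buckets = [(2, "output:2"), (3, "output:3")]
print(first_live_bucket(buckets, {2: True, 3: True}))   # output:2
print(first_live_bucket(buckets, {2: False, 3: True}))  # output:3
```

This is why the lab below starts on port 2 and transparently moves to port 3 once `link s1 s2 down` takes port 2 out of service.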
Then run `xterm h1 h2` and test TCP throughput with iperf.

Use `ovs-ofctl -O OpenFlow13 dump-ports s1` to watch the port counters; initially the traffic goes out the default first bucket, port 2.
```
root@vm2:/home/user/mininet/ovs-test/lab4# ovs-ofctl -O OpenFlow13 dump-ports s1
OFPST_PORT reply (OF1.3) (xid=0x2): 4 ports
port LOCAL: rx pkts=0, bytes=0, drop=0, errs=0, frame=0, over=0, crc=0
tx pkts=0, bytes=0, drop=0, errs=0, coll=0
duration=103.653s
port 1: rx pkts=775435, bytes=36189545398, drop=0, errs=0, frame=0, over=0, crc=0
tx pkts=451633, bytes=29930247, drop=0, errs=0, coll=0
duration=103.662s
port 2: rx pkts=26, bytes=3136, drop=0, errs=0, frame=0, over=0, crc=0
tx pkts=775451, bytes=36189614215, drop=0, errs=0, coll=0
duration=103.663s
port 3: rx pkts=451623, bytes=29929194, drop=0, errs=0, frame=0, over=0, crc=0
tx pkts=23, bytes=2815, drop=0, errs=0, coll=0
duration=103.661s
root@vm2:/home/user/mininet/ovs-test/lab4# ovs-ofctl -O OpenFlow13 dump-ports s1
OFPST_PORT reply (OF1.3) (xid=0x2): 4 ports
port LOCAL: rx pkts=0, bytes=0, drop=0, errs=0, frame=0, over=0, crc=0
tx pkts=0, bytes=0, drop=0, errs=0, coll=0
duration=106.027s
port 1: rx pkts=1075623, bytes=50495807230, drop=0, errs=0, frame=0, over=0, crc=0
tx pkts=615461, bytes=40793103, drop=0, errs=0, coll=0
duration=106.036s
port 2: rx pkts=26, bytes=3136, drop=0, errs=0, frame=0, over=0, crc=0
tx pkts=1075644, bytes=50496265889, drop=0, errs=0, coll=0
duration=106.037s
port 3: rx pkts=615451, bytes=40792050, drop=0, errs=0, frame=0, over=0, crc=0
tx pkts=23, bytes=2815, drop=0, errs=0, coll=0
duration=106.035s
root@vm2:/home/user/mininet/ovs-test/lab4# ovs-ofctl -O OpenFlow13 dump-ports s1
OFPST_PORT reply (OF1.3) (xid=0x2): 4 ports
port LOCAL: rx pkts=0, bytes=0, drop=0, errs=0, frame=0, over=0, crc=0
tx pkts=0, bytes=0, drop=0, errs=0, coll=0
duration=107.575s
port 1: rx pkts=1268166, bytes=59779151396, drop=0, errs=0, frame=0, over=0, crc=0
tx pkts=716826, bytes=47523429, drop=0, errs=0, coll=0
duration=107.584s
port 2: rx pkts=26, bytes=3136, drop=0, errs=0, frame=0, over=0, crc=0
tx pkts=1268180, bytes=59779153473, drop=0, errs=0, coll=0
duration=107.585s
port 3: rx pkts=716817, bytes=47522442, drop=0, errs=0, frame=0, over=0, crc=0
tx pkts=23, bytes=2815, drop=0, errs=0, coll=0
duration=107.583s
```
After running `link s1 s2 down`, watch the port counters again.
Re-running `ovs-ofctl -O OpenFlow13 dump-ports s1` shows that traffic which used to leave via port 2 automatically switches to port 3 once the link goes down, so the iperf transfer continues without interruption.
```
root@vm2:/home/user/mininet/ovs-test/lab4# ovs-ofctl -O OpenFlow13 dump-ports s1
OFPST_PORT reply (OF1.3) (xid=0x2): 4 ports
port LOCAL: rx pkts=0, bytes=0, drop=0, errs=0, frame=0, over=0, crc=0
tx pkts=0, bytes=0, drop=0, errs=0, coll=0
duration=132.653s
port 1: rx pkts=4156120, bytes=199497946016, drop=0, errs=0, frame=0, over=0, crc=0
tx pkts=2227739, bytes=148054218, drop=0, errs=0, coll=0
duration=132.662s
port 2: rx pkts=26, bytes=3136, drop=0, errs=0, frame=0, over=0, crc=0
tx pkts=3680587, bytes=175451152999, drop=0, errs=0, coll=0
duration=132.663s
port 3: rx pkts=2227728, bytes=148053058, drop=0, errs=0, frame=0, over=0, crc=0
tx pkts=475556, bytes=24045848586, drop=0, errs=0, coll=0
duration=132.661s
root@vm2:/home/user/mininet/ovs-test/lab4# ovs-ofctl -O OpenFlow13 dump-ports s1
OFPST_PORT reply (OF1.3) (xid=0x2): 4 ports
port LOCAL: rx pkts=0, bytes=0, drop=0, errs=0, frame=0, over=0, crc=0
tx pkts=0, bytes=0, drop=0, errs=0, coll=0
duration=133.736s
port 1: rx pkts=4293448, bytes=206069456328, drop=0, errs=0, frame=0, over=0, crc=0
tx pkts=2299775, bytes=152865090, drop=0, errs=0, coll=0
duration=133.745s
port 2: rx pkts=26, bytes=3136, drop=0, errs=0, frame=0, over=0, crc=0
tx pkts=3680587, bytes=175451152999, drop=0, errs=0, coll=0
duration=133.746s
port 3: rx pkts=2299764, bytes=152863930, drop=0, errs=0, frame=0, over=0, crc=0
tx pkts=612884, bytes=30617358898, drop=0, errs=0, coll=0
duration=133.744s
root@vm2:/home/user/mininet/ovs-test/lab4# ovs-ofctl -O OpenFlow13 dump-ports s1
OFPST_PORT reply (OF1.3) (xid=0x2): 4 ports
port LOCAL: rx pkts=0, bytes=0, drop=0, errs=0, frame=0, over=0, crc=0
tx pkts=0, bytes=0, drop=0, errs=0, coll=0
duration=134.768s
port 1: rx pkts=4420817, bytes=212276011890, drop=0, errs=0, frame=0, over=0, crc=0
tx pkts=2364766, bytes=157216128, drop=0, errs=0, coll=0
duration=134.777s
port 2: rx pkts=26, bytes=3136, drop=0, errs=0, frame=0, over=0, crc=0
tx pkts=3680587, bytes=175451152999, drop=0, errs=0, coll=0
duration=134.778s
port 3: rx pkts=2364755, bytes=157214968, drop=0, errs=0, frame=0, over=0, crc=0
tx pkts=740253, bytes=36823914460, drop=0, errs=0, coll=0
duration=134.776s
```
The key advantage here is that fast failover supports both TCP and UDP packets.
---
### Group Chaining: Combining Select + Fast Failover
Group chaining integrates the two features above. In the network below, what difference does it make whether the middle links are all configured for load balance or all for fast failover?

1. Load balance: the advantage is **higher aggregate bandwidth**; the drawback is that **when a link goes down, the flows on it are lost.**
2. Fast failover: the advantage is **fault tolerance**; the drawback is **poor bandwidth utilization.**

Chaining combines the two techniques, so the middle links get both the higher bandwidth of load balancing and the fault-tolerance mechanism of fast failover.
In this experiment we split the four links into two load-balance groups: port 2 and port 4 form one group, port 3 and port 5 the other.
For fast failover we watch port 2 and port 3, so when one of them fails, the ff mechanism switches over to the other group.
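The chained lookup can be sketched like this (illustrative only; the function names and bucket lists are invented): the ff group picks the first select group whose watched port is live, and that select group then hashes the flow across its two ports.

```python
def select(buckets, flow):
    # Select group: hash the flow onto one bucket (per-flow load balance).
    return buckets[hash(flow) % len(buckets)]

def chained_lookup(ff_buckets, port_is_up, flow):
    # Fast failover over select groups: take the first group whose
    # watch_port is up, then load-balance inside that group.
    for watch_port, group in ff_buckets:
        if port_is_up.get(watch_port, False):
            return select(group, flow)
    return None

group2 = ["output:2", "output:4"]  # select over ports 2 and 4
group3 = ["output:3", "output:5"]  # select over ports 3 and 5
ff = [(2, group2), (3, group3)]
flow = ("10.0.0.1", "10.0.0.2", 5000, 5001, "tcp")
assert chained_lookup(ff, {2: True, 3: True}, flow) in group2
assert chained_lookup(ff, {2: False, 3: True}, flow) in group3  # failover
```

This mirrors the `group:1 -> group:2/group:3` structure installed by the `ovs-ofctl` commands below.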
```
from mininet.net import Mininet
from mininet.node import Controller, RemoteController, OVSKernelSwitch, UserSwitch, OVSSwitch
from mininet.cli import CLI
from mininet.log import setLogLevel
from mininet.link import Link, TCLink

def topology():
    net = Mininet( controller=RemoteController, link=TCLink, switch=OVSKernelSwitch )

    # Add hosts and switches
    h1 = net.addHost( 'h1', mac="00:00:00:00:00:01" )
    h2 = net.addHost( 'h2', mac="00:00:00:00:00:02" )
    s1 = net.addSwitch( 's1', protocols="OpenFlow10,OpenFlow13", listenPort=6634 )
    s2 = net.addSwitch( 's2', protocols="OpenFlow10,OpenFlow13", listenPort=6635 )
    c0 = net.addController( 'c0', controller=RemoteController, ip='127.0.0.1', port=6633 )

    linkopt = {'bw': 10}
    linkopt2 = {'bw': 100}
    net.addLink( h1, s1, cls=TCLink, **linkopt2 )
    net.addLink( h2, s2, cls=TCLink, **linkopt2 )
    net.addLink( s1, s2, cls=TCLink, **linkopt )
    net.addLink( s1, s2, cls=TCLink, **linkopt )
    net.addLink( s1, s2, cls=TCLink, **linkopt )
    net.addLink( s1, s2, cls=TCLink, **linkopt )

    net.build()
    c0.start()
    s1.start( [c0] )
    s2.start( [c0] )

    print( "*** Running CLI" )
    h1.cmd( "arp -s 10.0.0.2 00:00:00:00:00:02" )
    h2.cmd( "arp -s 10.0.0.1 00:00:00:00:00:01" )
    CLI( net )

    print( "*** Stopping network" )
    net.stop()

if __name__ == '__main__':
    setLogLevel( 'info' )
    topology()
```
```
ovs-ofctl -O OpenFlow13 add-flow s2 in_port=2,actions=output:1
ovs-ofctl -O OpenFlow13 add-flow s2 in_port=3,actions=output:1
ovs-ofctl -O OpenFlow13 add-flow s2 in_port=4,actions=output:1
ovs-ofctl -O OpenFlow13 add-flow s2 in_port=5,actions=output:1
ovs-ofctl -O OpenFlow13 add-flow s2 in_port=1,actions=output:5
ovs-ofctl -O OpenFlow13 add-flow s1 in_port=5,actions=output:1
ovs-ofctl -O OpenFlow13 add-group s1 group_id=2,type=select,bucket=output:2,bucket=output:4
ovs-ofctl -O OpenFlow13 add-group s1 group_id=3,type=select,bucket=output:3,bucket=output:5
ovs-ofctl -O OpenFlow13 add-group s1 group_id=1,type=ff,bucket=watch_port:2,group:2,bucket=watch_port:3,group:3
ovs-ofctl -O OpenFlow13 add-flow s1 in_port=1,actions=group:1
```
One thing worth noting: a select group may choose buckets with a hash function or with round robin. In the Mininet environment it may be using a hash, so during an iperf test several flows can end up sharing the same port, as in the result below.

In that case, retry a few times until the flows take different links, then run `link s1 s2 down` and check whether the ports in the rule and group statistics change accordingly.
---
## NFV: Network Function Virtualization
Networks offer services such as firewalls, IDS, and load balancers, each traditionally deployed on its own dedicated hardware, so buying many of these services is expensive.
NFV instead implements such network services in software, for example deploying firewall functionality on an ordinary PC.
> Analogy: if your environment needs iOS but you own no Apple product, you can install an iOS image in a virtual machine; instead of paying for the expensive dedicated product, you get the same service on ordinary equipment.
---
### SDN + NFV

```
#!/usr/bin/env python
from mininet.cli import CLI
from mininet.net import Mininet
from mininet.link import Link, TCLink, Intf
from mininet.node import Controller, RemoteController

if '__main__' == __name__:
    net = Mininet( link=TCLink )
    h1 = net.addHost( 'h1', ip="10.0.0.1/24", mac="00:00:00:00:00:01" )
    h2 = net.addHost( 'h2', ip="10.0.0.2/24", mac="00:00:00:00:00:02" )
    h3 = net.addHost( 'h3', ip="10.0.0.3/24", mac="00:00:00:00:00:03" )
    s1 = net.addSwitch( 's1' )
    c0 = net.addController( 'c0', controller=RemoteController )

    net.addLink( h1, s1 )
    net.addLink( h2, s1 )
    net.addLink( h3, s1 )

    net.build()
    c0.start()
    s1.start( [c0] )

    # Static ARP entries, plus IP forwarding on h3 (the "router")
    h1.cmd( "arp -s 10.0.0.2 00:00:00:00:00:02" )
    h1.cmd( "arp -s 10.0.0.3 00:00:00:00:00:03" )
    h2.cmd( "arp -s 10.0.0.1 00:00:00:00:00:01" )
    h2.cmd( "arp -s 10.0.0.3 00:00:00:00:00:03" )
    h3.cmd( "arp -s 10.0.0.1 00:00:00:00:00:01" )
    h3.cmd( "arp -s 10.0.0.2 00:00:00:00:00:02" )
    h3.cmd( "echo 1 > /proc/sys/net/ipv4/ip_forward" )

    # Firewall rules on h3
    h3.cmd( "iptables -A FORWARD -p tcp --destination-port 8080 -j ACCEPT" )
    h3.cmd( "iptables -A FORWARD -p tcp --destination-port 80 -j DROP" )

    # Rules for s1
    s1.cmd( "ovs-ofctl add-flow s1 priority=1,in_port=1,actions=output:2" )
    s1.cmd( "ovs-ofctl add-flow s1 priority=1,in_port=2,actions=output:1" )
    # Rewrite the destination MAC so IP traffic detours through h3
    s1.cmd( "ovs-ofctl add-flow s1 priority=10,ip,in_port=1,actions=mod_dl_dst=00:00:00:00:00:03,output:3" )
    s1.cmd( "ovs-ofctl add-flow s1 priority=10,ip,in_port=2,actions=mod_dl_dst=00:00:00:00:00:03,output:3" )
    s1.cmd( "ovs-ofctl add-flow s1 priority=10,ip,in_port=3,nw_dst=10.0.0.2,actions=mod_dl_dst=00:00:00:00:00:02,output:2" )
    s1.cmd( "ovs-ofctl add-flow s1 priority=10,ip,in_port=3,nw_dst=10.0.0.1,actions=mod_dl_dst=00:00:00:00:00:01,output:1" )

    CLI( net )
    net.stop()
```
Run `xterm h1 h2 h3`; on h2, start services on ports `80` and `8080`,
then from h1 use `curl` to test both ports.
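One possible way to run this test (a sketch: it assumes Python 2's built-in `SimpleHTTPServer` module is available on the hosts; any web server listening on those ports would do):

```shell
# in h2's xterms (one terminal per server):
python -m SimpleHTTPServer 80
python -m SimpleHTTPServer 8080

# in h1's xterm:
curl 10.0.0.2:8080   # accepted by the firewall on h3
curl 10.0.0.2:80     # dropped by h3, so the request hangs/times out
```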

When a packet enters s1, the flow rules redirect it to h3 for firewall processing, and the firewall is configured to block port `80` and allow `8080`.
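The FORWARD-chain decision above can be sketched as follows (illustrative; `firewall_verdict` is an invented name, and like iptables the first matching rule wins, with the chain's default policy, ACCEPT here, applying otherwise):

```python
# Mirror of the two iptables rules installed on h3
RULES = [("tcp", 8080, "ACCEPT"), ("tcp", 80, "DROP")]

def firewall_verdict(proto, dst_port):
    # First matching rule wins, like iptables' FORWARD chain.
    for r_proto, r_port, verdict in RULES:
        if (proto, dst_port) == (r_proto, r_port):
            return verdict
    return "ACCEPT"  # default FORWARD policy (left unchanged in this lab)

print(firewall_verdict("tcp", 8080))  # ACCEPT
print(firewall_verdict("tcp", 80))    # DROP
```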

By making h3 a router and deploying a firewall service on it via NFV, we can manually decide which ports are considered malicious and block them.
---
## Reference
1. YouTube video (mininet-ovs 4): https://www.youtube.com/watch?v=dNovnDE68Wc
2. YouTube video (mininet-ovs 5): https://www.youtube.com/watch?v=4HlIRAwumlw
3. https://www.netadmin.com.tw/netadmin/zh-tw/technology/9FF6A417220F400884C788AB00FA3750
4. http://csie.nqu.edu.tw/smallko/sdn/group-fastfailover.htm
5. http://csie.nqu.edu.tw/smallko/sdn/group-select.htm
6. http://csie.nqu.edu.tw/smallko/sdn/group_chaining.htm
7. https://github.com/vicky-sunshine/SDN-note/blob/master/OpenFlow_Protocol.md