# Share an OpenShift network with VMs
I recently spoke with a friend who wanted their OpenShift Virtualization hypervisors / OpenShift nodes to access their storage network using a dedicated NIC. No problem, that can be accomplished with some host-level configuration -- typically a `NodeNetworkConfigurationPolicy` (NNCP). My friend created a bond and a VLAN interface, and the host/node was able to access their NFS storage.
However, they also wanted some VMs to be able to mount NFS storage directly (instead of via a PersistentVolume). This is tricky because VMs need a bridge interface, and they hadn't created one. 😥
The solution was to remove the previous bond + VLAN configuration (set `state: absent` in the `NNCP`) and create a bond + bridge + VLAN configuration.
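Removing the old configuration can be done declaratively with nmstate's `state: absent`. A minimal sketch, assuming the original policy used interface names like `bond0` and `bond0.222` (the names here are illustrative, not from the original setup):

```yaml=
---
apiVersion: nmstate.io/v1
kind: NodeNetworkConfigurationPolicy
metadata:
  name: server1-storage-network   # name of the policy being revised
spec:
  nodeSelector:
    kubernetes.io/hostname: 'server01.example.com'
  desiredState:
    interfaces:
      ### Tear down the old bond + VLAN before building the bridge topology
      - name: bond0               # hypothetical name of the old bond
        type: bond
        state: absent
      - name: bond0.222           # hypothetical name of the old VLAN interface
        type: vlan
        state: absent
```

Applying this before (or as part of) the new policy frees the physical NICs so they can be re-enslaved to the OVS bridge below.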
Because nmstate raised a validation error when I used the `linux-bridge` type, and because Kubernetes `MultiNetworkPolicies` are awesome (think AWS EC2 Security Groups), I created the network configuration using an Open vSwitch (OVS) bridge.
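As an aside, a `MultiNetworkPolicy` applies NetworkPolicy-style rules to secondary networks via the `k8s.v1.cni.cncf.io/policy-for` annotation. A hedged sketch of what that could look like for the storage network defined later in this post (the policy name and CIDR are illustrative assumptions):

```yaml=
---
apiVersion: k8s.cni.cncf.io/v1beta1
kind: MultiNetworkPolicy
metadata:
  name: storage-ingress-only        # hypothetical policy name
  namespace: default
  annotations:
    # Apply this policy to workloads attached to the vlan222 net-attach-def
    k8s.v1.cni.cncf.io/policy-for: default/vlan222
spec:
  podSelector: {}                   # all pods/VMs on this secondary network
  policyTypes:
    - Ingress
  ingress:
    - from:
        - ipBlock:
            cidr: 172.31.254.0/24   # only allow traffic from the storage subnet
```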
The YAML below configures two NICs (enp7s0 + enp8s0) into a bond. That bond plugs into a port on the OVS bridge (ovs-storage). Finally, an OVS interface that adds VLAN 222 tags is attached to the bridge.
:::info
I find it helpful to replace the term `bridge` with `virtual switch` when I think about this. In technical terms, a `switch` is also known as a `multiport bridge`:
- *"A network switch is also called a MAC bridge by the [IEEE](https://en.wikipedia.org/wiki/IEEE_Standards_Association)"* - [Wikipedia: Network switch](https://en.wikipedia.org/wiki/Network_switch)
- *"The multiport bridge function serves as the basis for network switches"* - [Wikipedia: Network bridge](https://en.wikipedia.org/wiki/Network_bridge)
:::
```yaml=
---
apiVersion: nmstate.io/v1
kind: NodeNetworkConfigurationPolicy
metadata:
  name: server1-ovs-storage-network
spec:
  ### Choose one of the filters below... (hint: static IPs = hostname)
  ### either filter based on hostname (apply to a single server)
  ### or filter based on node type (apply to all workers)
  nodeSelector:
    kubernetes.io/hostname: 'server01.example.com'
    # node-role.kubernetes.io/worker: ''
  desiredState:
    interfaces:
      ### Basic config for physical NIC(s)
      ### enable LLDP RX (receive-only) and make a pretty name for "nmcli con show" output
      - name: enp7s0
        profile-name: ovs-storage-enp7s0
        type: ethernet
        state: up
        controller: ovs-storage
        lldp:
          enabled: true
      - name: enp8s0
        profile-name: ovs-storage-enp8s0
        type: ethernet
        state: up
        controller: ovs-storage
        lldp:
          enabled: true
      ### Create an Open vSwitch (OVS) bridge/switch with uplink via bond
      ### and a disconnected switchport configured for VLAN 222
      - name: ovs-storage
        profile-name: ovs-storage
        type: ovs-bridge
        state: up
        bridge:
          options:
            stp: false
            rstp: false
            mcast-snooping-enable: false
          port:
            #- name: enp7s0 # commented out because of bonding
            #- name: enp8s0 # commented out because of bonding
            - name: ovs-storage-bond
              link-aggregation:
                mode: active-backup
                port:
                  - name: enp7s0
                  - name: enp8s0
            - name: ovs-storage.222
              vlan:
                mode: access
                tag: 222
      ### Create a kernel interface and connect it to the switchport
      ### The static IP address here allows the nodes to present iSCSI/NFS
      ### to Pods/VMs via PersistentVolumeClaims (CSI)
      - name: ovs-storage.222 # cannot exceed 15 characters!
        type: ovs-interface
        state: up
        ipv4:
          enabled: true
          dhcp: false
          address:
            - ip: 172.31.254.12
              prefix-length: 24
        ipv6:
          enabled: false
    ### Create localnet bridge-mappings to be used by VMs (net-attach-def)
    ### TODO: Reduce the 1:1 pairing of localnet:net-attach-def with a 1:many UserDefinedNetwork
    ovn:
      bridge-mappings:
        - localnet: vlan222
          bridge: ovs-storage
          state: present
        - localnet: vlan333 # TODO: remove extra mappings via UserDefinedNetworks
          bridge: ovs-storage
          state: present
```
The `NetworkAttachmentDefinition` below allows Virtual Machines to use iSCSI/NFS directly,
e.g. `mount -t nfs 172.31.254.100:/data /mnt`
```yaml=
# Present localnet mapping to all VMs (with VLAN tag 222)
---
apiVersion: k8s.cni.cncf.io/v1
kind: NetworkAttachmentDefinition
metadata:
  namespace: default
  name: vlan222
spec:
  config: '{
    "netAttachDefName": "default/vlan222",
    "name": "vlan222",
    "vlanID": 222,
    "topology": "localnet",
    "type": "ovn-k8s-cni-overlay",
    "cniVersion": "0.4.0"
  }'
```
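To actually put a VM on this network, the `NetworkAttachmentDefinition` is referenced from the VM spec. A minimal sketch, assuming a KubeVirt/OpenShift Virtualization `VirtualMachine` in the `default` namespace (the VM name and interface names are illustrative):

```yaml=
---
apiVersion: kubevirt.io/v1
kind: VirtualMachine
metadata:
  name: nfs-client-vm            # hypothetical VM name
  namespace: default
spec:
  template:
    spec:
      domain:
        devices:
          interfaces:
            - name: default
              masquerade: {}     # primary pod network
            - name: storage
              bridge: {}         # bridge binding for the localnet network
      networks:
        - name: default
          pod: {}
        - name: storage
          multus:
            networkName: default/vlan222   # the net-attach-def above
```

Inside the guest, the second NIC lands on VLAN 222 (the OVS switchport does the tagging), so the VM just needs an IP on 172.31.254.0/24 to reach the NFS server.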
## Kudos / Acknowledgments
I found the [nmstate examples](https://nmstate.io/devel/yaml_api.html#openvswitch-bridge-interface) to be very helpful.
A [complete example that includes the `NodeNetworkConfiguration` pieces](https://nmstate.io/kubernetes-nmstate/examples.html#open-vswitch-bridge-interface) was also very helpful!
## Appendix
For those who may be curious, the configuration above looks like this when you run `nmcli connection show`:
```bash=
[root@rhel9 ~]# nmcli con show
NAME                   UUID                                  TYPE           DEVICE
ovs-storage            2060af5b-cfb5-418a-9b45-9dd1c0e0a6af  ovs-bridge     ovs-storage
ovs-storage-bond-port  0d4dc79d-396f-46cb-a77b-f688d2a364fa  ovs-port       ovs-storage-bond
ovs-storage-enp7s0     f061755a-d554-406b-9736-1efd28f4ffc7  ethernet       enp7s0
ovs-storage-enp8s0     6f966cd3-b7c7-44ad-bcbb-10e9c9a282fe  ethernet       enp8s0
ovs-storage.222-if     0acd2334-31e4-42d5-95a4-a3c6924d1369  ovs-interface  ovs-storage.222
ovs-storage.222-port   d7d0d21c-539c-4490-b1bf-d64ed809cad8  ovs-port       ovs-storage.222
```