# OpenStack on FreeBSD Project Proposal
## Introduction
CHERI-enabled Morello evaluation boards have been available for academic and industrial research since late 2021, so there is a growing need to support the software development and validation work around CHERI on Morello. We believe a hardware resource orchestration platform that helps provision, manage, and recycle those boards would significantly accelerate the CHERI project and reduce its costs.
OpenStack is an open-source cloud operating system that manages almost every kind of resource, from virtual machines and containers to bare-metal servers, and we can leverage it to manage our Arm-based boards. However, OpenStack's control plane mainly runs on Linux. FreeBSD is currently available only as a guest OS on OpenStack (though [unofficially supported](https://docs.openstack.org/image-guide/obtain-images.html#freebsd-openbsd-and-netbsd)); that is, users can spawn FreeBSD instances (mostly virtual machines) on this open cloud platform, but administrators/operators cannot set up OpenStack deployments whose components run on FreeBSD hosts.
We see this as a great chance to promote FreeBSD as another supported host OS for OpenStack by porting crucial components and letting people try running OpenStack on FreeBSD hosts. At the same time, we are going to construct three FreeBSD-based OpenStack clusters. One will manage CHERI-enabled Morello boards, supporting the development of the CHERI project. The other two will serve the FreeBSD.org netperf cluster and a mini cloud, improving the efficiency of resource utilization of the reference machines.
## Objective
The goals of this project are to:
- Build a hardware resource orchestration platform (especially for CHERI-enabled Arm boards) on top of OpenStack, a well-known open-source cloud operating system, to help developers validate and experiment on CHERI-enabled Morello prototypes
- Port key, originally Linux-based OpenStack components to amd64 FreeBSD machines, including but not limited to:
- Keystone
- Ironic
- Nova
- Neutron
- Package those OpenStack components as FreeBSD ports
- Contribute installation and configuration steps back to the FreeBSD Handbook and wiki
### Short-term Goal
**Set up a proof of concept (PoC).** Fulfill the needs of the University of Cambridge, i.e., provide an OpenStack platform that runs on FreeBSD hosts and manages the lifecycle of the Morello boards.
### Mid-term Goal
**Formalize setup steps and package essential components.** Set up a mini cloud in the [FreeBSD.org cluster](https://www.freebsd.org/internal/machines/) to make lifecycle management of the reference machines more efficient, so developers can test, debug, and port software.
- lwhsu will help with the clusteradm part
- Make essential OpenStack components available as ports
### Long-term Goal
**Integrate the two projects.** Make FreeBSD a first-class citizen of the OpenStack project. As stated in [the official documentation](https://wiki.openstack.org/wiki/TechnologyIntegrationPrinciples):
> OpenStack is known to run well on a variety of Host operating systems, including (without limitation):
>
> - Windows
> - Solaris
> - ESXi
> - Linux (including CentOS, Debian, Fedora, HP's Helion OS, Iocane, openSUSE, RHEL, Scientific Linux, SLES, and Ubuntu)
>
> However, as far as the OpenStack Foundation is concerned, the host operating system is not part of the OpenStack project, and OpenStack should not be considered to be "tightly coupled" to any particular OS.
Having FreeBSD supported as both a host and a guest OS of the OpenStack project can greatly increase the exposure and sustainability of the FreeBSD project in the long run. It is also more likely to attract and connect potential users of OpenStack or FreeBSD, e.g., OVHcloud, to help develop this project.
### Stretch Goal
- Submit to BSD or other related conferences in 2023
## Background
### OpenStack
> The [OpenStack](https://www.openstack.org) project is a global collaboration of developers and cloud computing technologists producing the open standard cloud computing platform for both public and private clouds.
As an open standard cloud operating system, OpenStack manages compute, storage, and networking resources at various scales of infrastructure, from corporate server farms to cloud data centers.
OpenStack has been widely adopted by corporations around the world, including AT&T, eBay, PayPal, SAP, Visa, Walmart, Wells Fargo, Yahoo, and more.
### CHERI
[CHERI (Capability Hardware Enhanced RISC Instructions)](https://www.cl.cam.ac.uk/research/security/ctsrd/cheri/) is a DARPA-funded project, joint between SRI International and the University of Cambridge. The CHERI architecture extends conventional hardware Instruction-Set Architectures (ISAs) with new architectural features to enhance secure computing. In late 2021, Arm announced that Morello — a multi-core, superscalar, CHERI-extended processor, SoC, and development board — was ready to ship.
## Previous Work
A couple of years ago, a project called "[FreeBSD Host Support for OpenStack and OpenContrail](https://www.freebsd.org/status/report-2014-01-2014-03.html#FreeBSD-Host-Support-for-OpenStack-and-OpenContrail)", sponsored by Juniper Networks, aimed to solve similar problems. It ported OpenStack Nova to support bhyve via a libvirt compute driver. However, the project is now inactive.
Unlike that previous work, the project we propose is not only about virtual machines running on bhyve; it also focuses on the lifecycle management and provisioning of hardware resources such as Morello boards.
## Milestones
The project schedule is divided into three phases.
### Phase 1 (6 Weeks)
Get essential OpenStack components up and running on amd64 FreeBSD machines as the cluster control plane, and make it the lifecycle manager of the CHERI-enabled Morello boards. The essential, originally Linux-based components must be ported to FreeBSD, including:
- Identity and service catalog service - [Keystone](https://opendev.org/openstack/keystone)
- Bare-metal provisioning service - [Ironic](https://ironicbaremetal.org)
Although Ironic can be set up as a [standalone service](https://docs.openstack.org/ironic/latest/install/standalone.html), meaning there is no need for the service catalog and integrated authentication provided by Keystone, it is good to have both Keystone and Ironic ported in this phase: Keystone is essential to any complete OpenStack cluster, and we will build several of them in the upcoming phases.
Like most datacenter tooling, Ironic relies on the BMC (Baseboard Management Controller) on each commodity server for out-of-band control. There are plenty of existing protocols and solutions, e.g., [IPMI](https://www.intel.com/content/www/us/en/products/docs/servers/ipmi/ipmi-second-gen-interface-spec-v2-rev1-1.html), [Redfish](https://www.dmtf.org/standards/redfish), [Dell iDRAC](https://www.dell.com/en-us/dt/solutions/openmanage/idrac.htm), [HPE iLO](https://www.hpe.com/us/en/servers/integrated-lights-out-ilo.html), etc. However, the Morello development boards do not come with BMCs. To make Ironic manage these Arm boards, specific Ironic drivers might have to be developed and tested in order to successfully provision, manage, and clean up the hardware.
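To illustrate the scope of such a driver, the sketch below mirrors the shape of Ironic's power-interface contract without importing Ironic itself: the abstract base class stands in for `ironic.drivers.base.PowerInterface`, and `MorelloPowerInterface` is entirely hypothetical — a real driver for a BMC-less board would toggle a managed PDU outlet or a GPIO-driven relay instead of tracking state in memory.

```python
import enum
from abc import ABC, abstractmethod


class PowerState(enum.Enum):
    ON = "power on"
    OFF = "power off"


class PowerInterface(ABC):
    """Self-contained stand-in for Ironic's power-interface contract."""

    @abstractmethod
    def get_power_state(self, task):
        """Return the current power state of the node."""

    @abstractmethod
    def set_power_state(self, task, power_state):
        """Drive the node to the requested power state."""

    @abstractmethod
    def reboot(self, task):
        """Power-cycle the node."""


class MorelloPowerInterface(PowerInterface):
    """Hypothetical power driver for a BMC-less Morello board."""

    def __init__(self):
        self._state = PowerState.OFF

    def get_power_state(self, task):
        return self._state

    def set_power_state(self, task, power_state):
        # A real driver would switch a PDU outlet or relay here.
        self._state = power_state

    def reboot(self, task):
        self.set_power_state(task, PowerState.OFF)
        self.set_power_state(task, PowerState.ON)
```

Analogous stand-ins would be needed for the boot, deploy, and management interfaces before Ironic could fully provision and clean up a Morello node.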
Once these checkpoints are met, the OpenStack platform should be able to allocate the Arm boards, provide hardware resources to developers, and let them run their workloads.
---
#### FreeBSD Netperf Cluster
The [netperf cluster](https://people.freebsd.org/~rodrigc/doc/data/projects/netperf/cluster.html) provides machines of various specs and lets developers do network functionality and performance testing. Currently, the cluster is available on a [check-out basis](https://wiki.freebsd.org/TestClusterOneReservations) for FreeBSD.org developers, and the machines are managed by the cluster admins. Though it has a [certain level of automation](https://wiki.freebsd.org/TestClusterOnePointers), i.e., provisioning and recycling via PXE, IPMI, and a collection of scripts, it would be better to have a complete system with a dashboard that takes care of the cluster's hardware inventory, resource management, and allocation.
Just before phase 2 begins, we will carry the results of phase 1 over to the netperf cluster. The major difference between the two is that the managed hardware changes from Arm boards to amd64 machines, most of which have BMCs for out-of-band control. Some small tweaks to the Ironic components will be needed.
### Phase 2 (8 Weeks)
Set up a mini cloud, running OpenStack on FreeBSD hosts, that manages several bare-metal servers and virtual machines in the FreeBSD.org cluster. This would help in several ways:
- It is possible to provide self-service to the users of the cluster
- The efficiency of lifecycle management to both physical and virtual resources is enhanced
- Network connectivity of each bare-metal server and virtual machine is assured
- Giving users root privileges on the installed OS becomes much more feasible, since the admin can reclaim the servers at any time and any collateral damage is physically scoped, especially in the bare-metal case
To bring these characteristics to the FreeBSD.org cluster, other crucial components must be ported to FreeBSD, namely:
- Instance lifecycle management service - [Nova](https://opendev.org/openstack/nova)
- Overlay networking service - [Neutron](https://opendev.org/openstack/neutron/)
#### Nova
As a higher level of abstraction, Nova controls the lifecycle of instances whether they are virtual or physical. For virtual machines, Nova communicates with hypervisors through APIs or libraries to operate them; for bare-metal servers, Ironic acts as the hypervisor layer, providing a means to operate bare-metal instances just like virtual ones.
According to the [official documents](https://docs.openstack.org/nova/latest/user/support-matrix.html), Nova relies heavily on [libvirt](https://libvirt.org) to communicate with various hypervisors, which means Nova's libvirt driver is well developed. The other good news is that [libvirt supports bhyve natively](https://libvirt.org/drvbhyve.html). So the Nova → libvirt → bhyve path should be relatively easier than other possible routes.
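To make the target of that path concrete, the fragment below sketches the kind of domain definition libvirt's bhyve driver consumes; the guest name, disk path, bridge name, and sizes are placeholders (see libvirt's bhyve driver documentation for authoritative examples).

```xml
<!-- Minimal bhyve guest definition for libvirt (placeholder values) -->
<domain type='bhyve'>
  <name>freebsd-guest</name>
  <memory unit='MiB'>1024</memory>
  <vcpu>2</vcpu>
  <os>
    <type>hvm</type>
  </os>
  <devices>
    <disk type='file'>
      <driver name='file' type='raw'/>
      <source file='/vms/freebsd-guest.img'/>
      <target dev='hda' bus='sata'/>
    </disk>
    <interface type='bridge'>
      <model type='virtio'/>
      <source bridge='virbr0'/>
    </interface>
  </devices>
</domain>
```

A definition like this can be exercised by hand through libvirt's bhyve connection URI (`virsh -c bhyve:///system`), which is roughly what Nova's libvirt driver would do programmatically on a FreeBSD compute host.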
#### Neutron
In this phase, we aim to provide basic network connectivity, i.e., a flat network, to bare-metal servers and virtual machines through Neutron ([nova-network is not recommended because it has been deprecated since the Newton release](https://docs.openstack.org/nova/pike/admin/networking-nova.html)). Advanced overlay network features will be ported in the next phase.
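For reference, a flat provider network in an ML2-based Neutron deployment needs only a few configuration lines; the excerpt below assumes Open vSwitch as the mechanism driver and uses a placeholder physical network label `physnet1`.

```ini
# /etc/neutron/plugins/ml2/ml2_conf.ini (excerpt, placeholder values)
[ml2]
type_drivers = flat
mechanism_drivers = openvswitch

[ml2_type_flat]
# Physical networks that may back flat provider networks
flat_networks = physnet1
```

The FreeBSD-specific work in this phase would largely be in the agent layer underneath configuration like this, wiring Neutron's expectations to FreeBSD's networking stack.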
#### Ironic
Since the FreeBSD.org cluster contains machines of multiple CPU architectures, it is also necessary to test (or even develop new) Ironic hardware drivers for each type of machine.
---
All OpenStack components ported to FreeBSD in phases 1 and 2 will be made available as FreeBSD ports. The steps for setting up a FreeBSD-based OpenStack cluster will also be collected and contributed to the FreeBSD Handbook and wiki.
### Phase 3 (6 Weeks)
For a more general cluster setup, tenant-aware networking, i.e., network isolation, must be implemented. Therefore, at least one Neutron [ML2 (Modular Layer 2)](https://docs.openstack.org/neutron/xena/admin/config-ml2.html) driver should be ported appropriately in order to provide such functionality.
In parallel, we will work with the related OpenStack SIGs to contribute back to the upstream projects (OpenStack Keystone, Ironic, etc.) and help establish FreeBSD testing pipelines in those projects.
*P.S. The schedule estimate is based on 40 working hours per week.*
## Future Work
To sustain the project, submitting a proposal and presenting at EuroBSDCon 2023 can improve its visibility and attract potential developers to contribute. We will also try to submit articles to the [FreeBSD Journal](https://freebsdfoundation.org/our-work/journal/) summarizing this project, which helps its exposure and continuity as well.
## Contact Information
- Name: Chih-Hsin Chang
- Email address:
- Mailing address:
- Phone number:
I am familiar with the FreeBSD project's ecosystem. I have around 4 years of experience with the FreeBSD OS and Ports, and have managed 20+ servers running applications ranging from web hosting, BBS, and email to monitoring services, used by students and faculty of the Computer Science Department at NCTU, Taiwan. Also, as a software engineer dedicated to the cloud field, I developed a bare-metal cloud platform that leverages OpenStack components to deliver lifecycle management for hardware resources. All of this experience makes me the right candidate for this project.
## Technical Reviewer
- Name: