This lab starts an OpenNMS instance in AWS and two Minions on your machine (via Multipass), using ActiveMQ for communication between them, for learning purposes. This procedure is inspired by its Azure counterpart.
The lab doesn't cover security (in terms of encryption), which is crucial if you ever want to expose AMQ to the Internet.
Make sure to log into AWS using aws configure and have your credentials ready in ~/.aws/credentials, including your default region, prior to creating resources.
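If you want to double-check which account and region the CLI will use, the following optional commands should confirm it before you define anything else:
aws sts get-caller-identity
aws configure get region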
export AWS_PAGER=""
export KEY_NAME="agalue"
export VPC_ID=$(aws ec2 describe-vpcs \
--filters 'Name=isDefault,Values=true' \
--query 'Vpcs[0].VpcId' --output text)
For this exercise and to simplify the deployment, I'm assuming the chosen region has a default VPC, and its ID will be saved in VPC_ID
as shown above.
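If the chosen region no longer has a default VPC, you should be able to recreate one with the following optional command before proceeding (not part of the original procedure):
aws ec2 create-default-vpc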
In case you don't have one already, the following creates the key pair in AWS and saves the private key to a file on your machine. You should create this once and reuse it for all your work with EC2 instances.
aws ec2 create-key-pair --key-name $KEY_NAME \
--query 'KeyMaterial' \
--output text > ~/.ssh/$KEY_NAME.pem
chmod 400 ~/.ssh/$KEY_NAME.pem
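As an optional sanity check, you can confirm the key pair exists in the chosen region:
aws ec2 describe-key-pairs --key-names $KEY_NAME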
Create a security group in the default VPC and allow access via SSH, ActiveMQ, and the OpenNMS WebUI. Save its ID in an environment variable.
export SG_ID=$(aws ec2 create-security-group \
--group-name 'onms_access' \
--description 'OpenNMS Access' \
--vpc-id $VPC_ID \
--query 'GroupId' --output text)
for port in 22 8980 61616; do
aws ec2 authorize-security-group-ingress --group-id $SG_ID --protocol tcp --port $port --cidr 0.0.0.0/0
done
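Optionally, you can list the ingress rules to confirm the three ports are open:
aws ec2 describe-security-groups --group-ids $SG_ID \
--query 'SecurityGroups[0].IpPermissions[].[IpProtocol,FromPort,ToPort]' --output table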
Create a bash script to deploy OpenNMS on an Amazon Linux 2 image and save it at /tmp/opennms.sh:
#!/bin/bash
amazon-linux-extras install postgresql11 java-openjdk11 -y
yum install -y https://yum.opennms.org/repofiles/opennms-repo-stable-rhel7.noarch.rpm
yum install -y opennms-core opennms-webapp-jetty opennms-webapp-hawtio postgresql-server
/usr/bin/postgresql-setup --initdb --unit postgresql
sed -r -i "/^(local|host)/s/(peer|ident)/trust/g" /var/lib/pgsql/data/pg_hba.conf
systemctl --now enable postgresql
sed -r -i '/0.0.0.0:61616/s/([<][!]--|--[>])//g' /opt/opennms/etc/opennms-activemq.xml
sed -r -i '/enabled="false"/{$!{N;s/ enabled="false"[>]\n(.*OpenNMS:Name=Syslogd.*)/>\n\1/}}' /opt/opennms/etc/service-configuration.xml
/opt/opennms/bin/runjava -s
/opt/opennms/bin/install -dis
echo 'JAVA_HEAP_SIZE=4096' > /opt/opennms/etc/opennms.conf
systemctl --now enable opennms
The above installs the latest OpenJDK 11, PostgreSQL 11, and OpenNMS Horizon, and applies the most basic configuration for PostgreSQL. The embedded ActiveMQ broker is enabled, as well as Syslogd.
Create an EC2 instance with at least 2 CPU cores and 8 GB of RAM in the default VPC, and save the Instance ID:
export INSTANCE_ID=$(aws ec2 run-instances \
--image-id resolve:ssm:/aws/service/ami-amazon-linux-latest/amzn2-ami-hvm-x86_64-gp2 \
--instance-type t3.large \
--key-name $KEY_NAME \
--user-data file:///tmp/opennms.sh \
--associate-public-ip-address \
--security-group-ids $SG_ID \
--query 'Instances[0].InstanceId' --output text)
Keep in mind that the cloud-init
process starts once the VM is running, meaning you should wait a few minutes to see OpenNMS up and running.
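Optionally, you can block until the instance itself is running (this covers only the EC2 boot, not the OpenNMS installation performed by cloud-init):
aws ec2 wait instance-running --instance-ids $INSTANCE_ID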
In case there is a problem, SSH into the VM using the public IP and the provided credentials and check /var/log/cloud-init-output.log
to verify the progress and the status of the cloud-init execution.
To access the VM via SSH:
export ONMS_IP=$(aws ec2 describe-instances \
--instance-ids $INSTANCE_ID \
--query 'Reservations[0].Instances[0].NetworkInterfaces[0].Association.PublicIp' \
--output text)
ssh -i ~/.ssh/$KEY_NAME.pem ec2-user@$ONMS_IP
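Once inside, a few optional checks help confirm how far the installation got (the login.jsp path below assumes the default OpenNMS WebUI location):
sudo tail -n 20 /var/log/cloud-init-output.log
sudo systemctl status opennms
curl -s -o /dev/null -w '%{http_code}\n' http://localhost:8980/opennms/login.jsp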
It is always useful to tag your resources, at least to add a Name to them, and perhaps for billing purposes, for instance:
aws ec2 create-tags --resources $INSTANCE_ID $SG_ID --tags \
Key=Name,Value=opennms \
Key=Environment,Value=Test \
Key=Department,Value=Support
After verifying that OpenNMS is up and running, create the cloud-init configuration for the first Minion on your machine (to be launched via multipass):
export MINION_ID1="minion01"
export MINION_ID2="minion02"
export MINION_LOCATION="Durham"
export MINION_HEAP_SIZE="1g"
export ONMS_IP=$(aws ec2 describe-instances \
--instance-ids $INSTANCE_ID \
--query 'Reservations[0].Instances[0].NetworkInterfaces[0].Association.PublicIp' \
--output text)
cat <<EOF > /tmp/$MINION_ID1.yaml
#cloud-config
package_upgrade: true
write_files:
  - owner: root:root
    path: /tmp/org.opennms.minion.controller.cfg
    content: |
      location=$MINION_LOCATION
      id=$MINION_ID1
      http-url=http://$ONMS_IP:8980/opennms
      broker-url=failover:tcp://$ONMS_IP:61616
apt:
  preserve_sources_list: true
  sources:
    opennms:
      source: deb https://debian.opennms.org stable main main
packages:
  - opennms-minion
bootcmd:
  - curl -s https://debian.opennms.org/OPENNMS-GPG-KEY | apt-key add -
runcmd:
  - mv -f /tmp/org.opennms.minion.controller.cfg /etc/minion/
  - sed -i -r 's/# export JAVA_MIN_MEM=.*/export JAVA_MIN_MEM="$MINION_HEAP_SIZE"/' /etc/default/minion
  - sed -i -r 's/# export JAVA_MAX_MEM=.*/export JAVA_MAX_MEM="$MINION_HEAP_SIZE"/' /etc/default/minion
  - /usr/share/minion/bin/scvcli set opennms.http admin admin
  - /usr/share/minion/bin/scvcli set opennms.broker admin admin
  - systemctl --now enable minion
EOF
Then, start the new Minion via multipass
with one core and 2GB of RAM:
multipass launch -c 1 -m 2G -n $MINION_ID1 --cloud-init /tmp/$MINION_ID1.yaml
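Optionally, you can verify that the Minion service came up inside the VM (the journalctl call is just a quick way to spot startup errors):
multipass exec $MINION_ID1 -- systemctl is-active minion
multipass exec $MINION_ID1 -- sudo journalctl -u minion -n 20 --no-pager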
Optionally, create a cloud-init
configuration for a second Minion on your machine based on the work we did for the first one (same location):
sed "s/$MINION_ID1/$MINION_ID2/" /tmp/$MINION_ID1.yaml > /tmp/$MINION_ID2.yaml
Then, start the second Minion via multipass:
multipass launch -c 1 -m 2G -n $MINION_ID2 --cloud-init /tmp/$MINION_ID2.yaml
In case there is a problem, access the VM (e.g., multipass shell $MINION_ID1
) and check /var/log/cloud-init-output.log
to verify the progress and the status of the cloud-init execution.
As you can see, the location name is Durham (a.k.a. $MINION_LOCATION), and you should see the Minions registered at that location in OpenNMS.
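Optionally, you can confirm the registration from your machine through the REST API (assuming the default admin credentials and that your Horizon version exposes the minions endpoint):
curl -u admin:admin http://$ONMS_IP:8980/opennms/rest/minions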
SSH into the OpenNMS server and create a requisition with a node in the same network as the Minion VMs, and make sure to associate it with the appropriate location. For instance,
/opt/opennms/bin/provision.pl requisition add Test
/opt/opennms/bin/provision.pl node add Test srv01 srv01
/opt/opennms/bin/provision.pl node set Test srv01 location Durham
/opt/opennms/bin/provision.pl interface add Test srv01 192.168.0.40
/opt/opennms/bin/provision.pl interface set Test srv01 192.168.0.40 snmp-primary P
/opt/opennms/bin/provision.pl requisition import Test
Make sure to replace 192.168.0.40 with the IP of a working server in your network (reachable from the Minion VMs), and do not forget to use the same location as defined in $MINION_LOCATION.
Please keep in mind that the Minions are VMs on your machine. 192.168.0.40 is the IP of my machine, which is why the Minions can reach it (and vice versa). To monitor an external machine on your network instead, make sure to define static routes on that machine so it can reach the Minions through your machine (assuming you're running Linux or macOS).
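For illustration only, if Multipass placed the Minions on 192.168.75.0/24 and 192.168.0.40 is your machine, the static route on the external Linux machine could look like this (adjust the subnet and gateway to your environment):
sudo ip route add 192.168.75.0/24 via 192.168.0.40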
OpenNMS, which runs in AWS and has no direct access to 192.168.0.40, should be able to collect data and monitor that node through any of the Minions. In fact, you can stop one of them, and OpenNMS will continue monitoring the node through the other.
To test asynchronous messages, you can send SNMP traps or Syslog messages to one of the Minions. Usually, you could put a Load Balancer in front of the Minions and use its IP when sending messages from the monitored devices. Alternatively, you could use udpgen for this purpose.
The machine that will be running udpgen must be part of the OpenNMS inventory. Find the IP of the Minion using multipass list, then execute the following from the machine added as a node above (the examples assume the IP of the Minion is 192.168.75.16):
To send SNMP Traps:
udpgen -h 192.168.75.16 -x snmp -r 1 -p 1162
To send Syslog Messages:
udpgen -h 192.168.75.16 -x syslog -r 1 -p 1514
The C++ version of udpgen only works on Linux. If you're on macOS or Windows, you can use the Go version of it.
The Hawtio UI in OpenNMS can help to visualize the Camel and ActiveMQ internals, to understand what's circulating between OpenNMS and the Minions.
For OpenNMS, Hawtio is available through http://$ONMS_IP:8980/hawtio
(use the ActiveMQ Tab) if the package opennms-webapp-hawtio
was installed (which is the case with the cloud-init
template used).
For Minions, Hawtio is available through http://$MINION_IP1:8181/hawtio
and http://$MINION_IP2:8181/hawtio
respectively (use the Camel Tab).
In production, when having multiple Minions per location, it is a good practice to put a Load Balancer in front of them so that the devices can use a single destination for SNMP Traps, Syslog, and Flows.
The following creates a basic LB using nginx
through multipass
for SNMP Traps (with a listener on port 162) and Syslog Messages (with a listener on port 514):
MINION_IP1=$(multipass info $MINION_ID1 | grep IPv4 | awk '{print $2}')
MINION_IP2=$(multipass info $MINION_ID2 | grep IPv4 | awk '{print $2}')
cat <<EOF > /tmp/nginx.yaml
#cloud-config
package_upgrade: true
packages:
  - nginx
write_files:
  - owner: root:root
    path: /etc/nginx/nginx.conf
    content: |
      user www-data;
      worker_processes auto;
      pid /run/nginx.pid;
      include /etc/nginx/modules-enabled/*.conf;
      events {
        worker_connections 768;
      }
      stream {
        upstream syslog_udp {
          server $MINION_IP1:1514;
          server $MINION_IP2:1514;
        }
        upstream trap_udp {
          server $MINION_IP1:1162;
          server $MINION_IP2:1162;
        }
        server {
          listen 514 udp;
          proxy_pass syslog_udp;
          proxy_responses 0;
        }
        server {
          listen 162 udp;
          proxy_pass trap_udp;
          proxy_responses 0;
        }
      }
runcmd:
  - systemctl restart nginx
EOF
multipass launch -n nginx --cloud-init /tmp/nginx.yaml
echo "Load Balancer $(multipass info nginx | grep IPv4)"
Flows are outside the scope of this lab, as they require additional configuration on both the Minions and OpenNMS, plus an Elasticsearch cluster up and running with the required plugin in place.
When you're done, make sure to delete the cloud resources:
aws ec2 terminate-instances --instance-ids $INSTANCE_ID
aws ec2 wait instance-terminated --instance-ids $INSTANCE_ID
aws ec2 delete-security-group --group-id $SG_ID
Then clean the local resources:
multipass delete $MINION_ID1 $MINION_ID2
multipass purge
Remember to remove the nginx
instance if you decided to use it.
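For instance:
multipass delete --purge nginx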