FENCES Install
===
## DCM High level Steps
* Download the initial binaries from the D2iQ website
* Install `bzip2`
* Extend the home directory prior to download
* Download the Konvoy file to the Windows jump host
* Extend `/var` for Docker
* Move to the S3 bucket and pre-provision with `konvoy init`
//TODO - Either include the Docker files in the download archive, or provide instructions to install and run Docker on the bastion host.
## Set up the Disks on Fences
Explore the disks in an LVM environment:
```shell
df -h
lsblk
```
Extend the home directory:
`sudo lvextend -r -L50G /dev/vg.main/lv.home`
Extend the `/var` partition for large Docker images:
`sudo lvextend -r -L50G /dev/vg.main/lv.var`
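A quick check that both logical volumes and their filesystems picked up the new sizes:
```shell
sudo lvs vg.main
df -h /home /var
```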
Sync the Konvoy archive to the bastion host:
`aws s3 sync s3://customer0XXXX-XXXXXX .`
Expand the Konvoy artifacts on the RHEL bastion:
```shell
sudo yum install -y bzip2 unzip
cd ~
tar -jxvf konvoy_air_gapped_*.tar.bz2
cd konvoy_v1.4.4
```
## Install Docker-ce
Remove Docker if it is already installed:
```
sudo yum remove docker docker-client docker-client-latest docker-common docker-latest docker-latest-logrotate docker-logrotate docker-engine
```
`sudo yum install -y yum-utils`
We need to fetch `docker-ce` > 18.0x.x because the Docker base repo is not available in Fences:
https://download.docker.com/linux/centos/docker-ce.repo
### Fetch and build docker-ce
On an external machine with Docker (not the Windows 2016 Amazon Workspace), or on a separate CentOS machine:
```shell
# Run a CentOS 7 container with a host directory mounted to collect the RPMs
docker run -i -t -v "$(pwd)/download:/download" centos:7
yum install -y yum-utils
yum-config-manager --add-repo https://download.docker.com/linux/centos/docker-ce.repo
yumdownloader --resolve --destdir /download docker-ce
exit  # leave the container; the RPMs are now in ./download on the host
tar -czf docker-ce.tar.gz ./download/*.rpm
```
Using the Windows 2016 jump host (a transfer sketch follows this list):
* Fetch the new tarball
* Upload it to S3 on Fences
* Pull it down on the bastion
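A minimal sketch of that transfer, assuming the AWS CLI is available on both hosts and reusing the (redacted) bucket name from the earlier sync step:
```shell
# On the Windows 2016 jump host
aws s3 cp docker-ce.tar.gz s3://customer0XXXX-XXXXXX/docker-ce.tar.gz

# On the bastion
aws s3 cp s3://customer0XXXX-XXXXXX/docker-ce.tar.gz .
```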
```
mkdir -p ~/docker-ce
mv docker-ce.tar.gz ~/docker-ce
cd ~/docker-ce
tar -xvf docker-ce.tar.gz
sudo yum install deltarpm -y #upstream dependency
sudo yum install --nogpgcheck -y *.rpm
sudo systemctl enable docker
sudo systemctl start docker
```
## Docker Default Network Conflict
The Docker Engine default bridge network conflicts with access to our internal network hosts. How do I configure the default bridge (`docker0`) network for Docker Engine to use a different subnet?
You can configure the default bridge network by providing the `bip` option with the desired subnet in `daemon.json` (default location `/etc/docker/daemon.json` on Linux) as follows:
```json
{
"bip": "192.168.100.1/24"
}
```
Then restart the Docker daemon:
`sudo systemctl restart docker`
Verify the network settings change:
`ifconfig docker0`
## Fix permissions
### Change SELinux enforcement
`sudo setenforce 0`
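Optionally, to keep SELinux permissive across reboots (a common companion step, not called out in the original runbook):
```shell
sudo sed -i 's/^SELINUX=enforcing/SELINUX=permissive/' /etc/selinux/config
```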
### Add `$USER` to the `docker` group
```shell
sudo groupadd docker
sudo usermod -aG docker $USER
sudo systemctl restart docker
sudo su - $USER
# new shell so the docker group membership takes effect
```
Verify permissions
```shell
id
docker ps
```
## Konvoy
Install the Docker CLI layers:
```shell
export KONVOY_VERSION=v1.4.4_patched
konvoy --version
```
The package versions should match.
Initialize the cluster configuration:
```shell
mkdir -p ~/konvoy-cluster-name
cd ~/konvoy-cluster-name
konvoy init --cluster-name p1-test --addons-repositories /opt/konvoy/artifacts/kubernetes-base-addons@stable-1.16-1.2.0
```
### Add the Fences AWS Network Configuration
Delete the existing subnets, except for MGMNT (an AWS CLI sketch follows the list). The VPC network CIDR is `10.0.113.0/24`.

**Destroy**
* `10.0.113./26` SBNT1
* `10.0.113./26` SBNT2
* `10.0.113./26` SBNT3

**Leave**
* `10.0.113.192/26` MGMNT - the bastion hosts live in this subnet
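A hedged sketch of the lookup and delete with the AWS CLI; the subnet ID shown is a placeholder, and the VPC ID is the one that goes into `cluster.yaml` below:
```shell
# List the subnets in the VPC, then delete each non-MGMNT subnet by ID
aws ec2 describe-subnets --filters "Name=vpc-id,Values=vpc-###########" \
  --query 'Subnets[].{ID:SubnetId,CIDR:CidrBlock}' --output table
aws ec2 delete-subnet --subnet-id subnet-0123456789abcdef0   # placeholder ID
```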
#### Add VPC, route table, and ELB config to `cluster.yaml`
Copy/paste:
```yaml
spec:
  aws:
    vpc:
      ID: vpc-###########
      routeTableID: rt-#######
    elb:
      internal: true
```
#### Create `extras/provisioners`
Create a file called `extras/provisioners/control_plane_override.tf`:
* Replace `__ID__` with the ID of the existing security group (a `sed` helper sketch follows the workers override below)
* Replace `########` with the proper KMS key
```hcl
resource "aws_instance" "control_plane" {
  vpc_security_group_ids = ["${split(",", var.bastion_pool_count > 0 ? join(",", local.control_plane_security_group_with_bastion) : join(",", local.control_plane_security_group))}", "__ID__"]

  root_block_device {
    encrypted  = true
    kms_key_id = "arn:aws:kms:us-east-1:##########:key/###########"
  }
}

resource "aws_ebs_volume" "control_plane_imagefs" {
  encrypted  = true
  kms_key_id = "arn:aws:kms:us-east-1:##########:key/###########"
}
```
Create a file called `extras/provisioners/workers_override.tf`
```hcl
resource "aws_instance" "worker_pool0" {
  vpc_security_group_ids = ["${split(",", var.bastion_pool_count > 0 ? join(",", local.worker_security_group_with_bastion) : join(",", local.worker_security_group))}", "__ID__"]

  root_block_device {
    encrypted  = true
    kms_key_id = "arn:aws:kms:us-east-1:##########:key/###########"
  }
}

resource "aws_ebs_volume" "worker_pool0_imagefs" {
  encrypted  = true
  kms_key_id = "arn:aws:kms:us-east-1:##########:key/###########"
}
```
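A hedged helper for filling in the `__ID__` placeholder in both override files; the security group value shown is hypothetical, and the KMS key ARN placeholders can be substituted the same way:
```shell
SG_ID="sg-0123456789abcdef0"   # hypothetical: the existing security group ID
sed -i "s/__ID__/${SG_ID}/g" \
  extras/provisioners/control_plane_override.tf \
  extras/provisioners/workers_override.tf
```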
Create a file called `extras/provisioners/subnets.tf`
The recommendation is to use `/18` masks, but due to constraints `/26` will be used:
```hcl
locals {
  public_subnet_range        = "10.0.113.0/26"
  private_subnet_range       = "10.0.113.64/26"
  control_plane_subnet_range = "10.0.113.128/26"
}

// Rest of the file should not be modified.
resource "aws_subnet" "konvoy_public" {
  vpc_id            = "${var.vpc_id == "" ? join(",", aws_vpc.konvoy.*.id) : var.vpc_id}"
  cidr_block        = "${cidrsubnet(local.public_subnet_range, 2, count.index)}"
  count             = "${length(var.public_subnet_ids) == 0 ? length(var.aws_availability_zones) : 0}"
  availability_zone = "${element(coalescelist(var.aws_availability_zones, data.aws_availability_zones.available.names), count.index)}"

  tags = "${merge(
    local.common_tags,
    map(
      "Name", "${local.cluster_name}-subnet-public",
      "konvoy/subnet", "public"
    )
  )}"
}

resource "aws_subnet" "konvoy_private" {
  vpc_id            = "${var.vpc_id == "" ? join(",", aws_vpc.konvoy.*.id) : var.vpc_id}"
  cidr_block        = "${cidrsubnet(local.private_subnet_range, 2, count.index)}"
  count             = "${var.create_private_subnets ? length(var.aws_availability_zones) : 0}"
  availability_zone = "${element(coalescelist(var.aws_availability_zones, data.aws_availability_zones.available.names), count.index)}"

  tags = "${merge(
    local.common_tags_no_cluster,
    map(
      "Name", "${local.cluster_name}-subnet-private",
      "konvoy/subnet", "private"
    )
  )}"
}

resource "aws_subnet" "konvoy_control_plane" {
  vpc_id            = "${var.vpc_id == "" ? join(",", aws_vpc.konvoy.*.id) : var.vpc_id}"
  cidr_block        = "${cidrsubnet(local.control_plane_subnet_range, 2, count.index)}"
  count             = "${length(var.control_plane_subnet_ids) == 0 ? length(var.aws_availability_zones) : 0}"
  availability_zone = "${element(coalescelist(var.aws_availability_zones, data.aws_availability_zones.available.names), count.index)}"

  tags = "${merge(
    local.common_tags,
    map(
      "Name", "${local.cluster_name}-subnet-control-plane",
      "konvoy/subnet", "control_plane",
      "kubernetes.io/role/elb", 1
    )
  )}"
}
```
Create a file called `extras/provisioners/main_override.tf`
This file is used because we are using IAM roles:
```hcl
provider "aws" {
  skip_metadata_api_check = true
}
```
Create a file called `extras/provisioners/elb_override.tf`
This file is needed to be able to reach the ELB on `TCP:6443`.
Add the bastion security group to the load balancer's inbound security groups; replace `__SG__` with that security group's ID.
```hcl
resource "aws_elb" "konvoy_control_plane" {
  security_groups = ["${aws_security_group.konvoy_private.id}", "${aws_security_group.konvoy_lb_control_plane.id}", "__SG__"]
}
```
### Create a manifest with a StorageClass that supports the KMS encryption
Create a file called `extras/kubernetes/awsebscsiprovisioner.yaml`:
* Replace `########` with the proper KMS key
```yaml
allowVolumeExpansion: true
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  annotations:
    kubernetes.io/description: AWS EBS CSI provisioner StorageClass
    storageclass.kubernetes.io/is-default-class: "true"
  name: awsebscsiprovisioner
parameters:
  type: gp2
  encrypted: "true"
  kmsKeyId: "arn:aws:kms:us-east-1:############:key/############"
provisioner: ebs.csi.aws.com
reclaimPolicy: Delete
volumeBindingMode: WaitForFirstConsumer
```
### Set the Correct AMI ID in `cluster.yaml`
For Fences `us-east-1` instances we need to specify the `imageID` in each of the `nodePools`.
Currently `ami-0675324a6ec80845a` is CentOS 7:
```yaml
spec:
  nodePools:
    - machine:
        imageID: ami-0675324a6ec80845a
```
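To sanity-check that this AMI is visible from the account (assuming AWS API access through the proxy described in the dry-run section), something like:
```shell
aws ec2 describe-images --image-ids ami-0675324a6ec80845a --region us-east-1 \
  --query 'Images[].{Name:Name,Owner:OwnerId}' --output table
```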
### Set the User from the AMI in the `cluster.yaml`
This is the user that Konvoy will use to SSH (via Ansible) to the workers and control plane.
```yaml
spec:
  sshCredentials:
    user: fences-user
```
### Set the IAM in `cluster.yaml`
This needs to be put in each of the `nodePools`.
```yaml
machine:
  aws:
    iam:
      instanceProfile:
        arn: "arn:aws:iam::273854932432:instance-profile/some-k8s-instance-profile"
```
### Set the Proxy Requirement in `cluster.yaml` for Fences
Ensure Docker traffic to the bastion is not proxied:
```yaml
spec:
  kubernetes:
    networking:
      httpsProxy: "http://172.17.0.215:80"
      noProxy: ["10.0.113.197"]
```
## Dry-Run Konvoy Provisioning
In your shell on the bastion host:
```
export HTTPS_PROXY=http://172.17.0.215:80
export NO_PROXY=169.254.169.254,*.elb.amazonaws.com
konvoy provision --plan-only
```
Examine the plan that is created:
* `runs/Planning\ Infrastructure/YYYY-MM-DD-....`
## Optional: Check Konvoy Provision
`konvoy provision -y`
The infrastructure will be created, but we don't have enough configured to go past provisioning. Tear it back down:
`konvoy down -y`
## Fences Certificate Chains
Fences provides `konvoy-certs.tar`
```
mkdir ~/certs
cd ~/certs
tar -xvf ../konvoy-certs.tar
```
* `ca-chain.pem` - Root CA & Sub CA
* `cert-chain.p7b`
* `cert-chain.pem` - Root CA & Sub CA & leaf `konvoy.fences.dod.gov`
* `certificate.crt`
* `full-response.rsp`
* `request.csr`
* `privatekey-decrypted.pem`
Validate that the SAN is set correctly:
`openssl x509 -text -in certificate.crt | grep 113`
### Trust the certificate chain in the system
```shell
sudo cp certs/ca-chain.pem /etc/pki/ca-trust/source/anchors/.
sudo update-ca-trust
```
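To confirm the chain is now trusted system-wide, one hedged check is to verify the Fences leaf certificate against the consolidated RHEL CA bundle:
```shell
openssl verify -CAfile /etc/pki/tls/certs/ca-bundle.crt certs/certificate.crt
```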
### Trust the certificate in Docker
```shell
sudo mkdir -p /etc/docker/certs.d/10.0.113.197:443/
sudo cp certs/ca-chain.pem /etc/docker/certs.d/10.0.113.197:443/.
```
### Ensure Docker picks up the CA chain
`sudo systemctl restart docker`
### Docker Registry Notes
Notes on creating and configuring a custom Docker registry to be used during the Konvoy deployment.
`docker load -i images/docker.io_registry\:2.tar`
#### With the Fences-provided keys
Create a Docker image registry on port 443 with `admin/password` auth:
```
cd ~
mkdir -p $(pwd)/auth
docker run --entrypoint htpasswd registry:2 -Bbn admin password > $(pwd)/auth/htpasswd
docker run -d -p 443:5000 --restart=always --name registry \
  -v "$(pwd)"/auth:/auth \
  -e "REGISTRY_AUTH=htpasswd" \
  -e "REGISTRY_AUTH_HTPASSWD_REALM=Registry Realm" \
  -e REGISTRY_AUTH_HTPASSWD_PATH=/auth/htpasswd \
  -v "$(pwd)"/certs:/certs \
  -e REGISTRY_HTTP_TLS_CERTIFICATE=/certs/cert-chain.pem \
  -e REGISTRY_HTTP_TLS_KEY=/certs/privatekey-decrypted.pem \
  registry:2
```
Log into Docker with `admin/password`:
```shell
docker login 10.0.113.197
username:
password:
WARNING! Your pass......
Login Succeeded
```
#### Modify `cluster.yaml`:
Disable the external OS package repos and configure the local image registry:
```yaml
kind: ClusterConfiguration
spec:
  imageRegistries:
    - server: https://10.0.113.197:5000
      username: "admin"
      password: "password"
      default: true
  osPackages:
    enableAdditionalRepositories: false
```
### Seed the Docker Image Repo with Konvoy images
```shell
konvoy config images seed
(example output)
Loaded image: foo/bar:1.2.3
4393194860cb: Pushed
0011f6346dc8: Pushed
340dc52ed535: Pushed
72073bf3dbb2: Loading Layer [===================>] 234kb/234kb
627a5019997b: Pushed
62924cac48de: Pushed
33f1a94ed7fc: Mounted from blah/blah
b27287a6dbce: Pushed
47c2386f248c: Pushed
....
```
(approx 20-40 minutes)
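After seeding, the registry contents can be spot-checked through the registry v2 catalog API, assuming the registry is reachable on 443 with the `admin/password` credentials created earlier:
```shell
curl -u admin:password https://10.0.113.197/v2/_catalog
```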
## Konvoy Configurations
Changes required to run D2iQ Konvoy in the Fences environment.
### Safe D2iQ Konvoy Addons to Disable
```yaml
- name: elasticsearch
  enabled: false
- name: elasticsearch-curator
  enabled: false
- name: elasticsearchexporter
  enabled: false
- name: flagger
  enabled: false
- name: fluentbit
  enabled: false
- name: istio
  enabled: false
- name: kibana
  enabled: false
- name: nvidia
  enabled: false
- name: prometheus
  enabled: false
- name: prometheusadapter
  enabled: false
- name: velero
  enabled: false
```
### Modify the externalDNS name in the `cluster.yaml`
*SKIP* Do not apply the values section, unless the cluster is already running.
[WIP] - The upstream proxy is breaking this.
```yaml
- name: konvoyconfig
  enabled: true
  values: |
    config:
      clusterHostname: konvoy.fences.dod.gov
```
### Modify the ELB for airgap in the `cluster.yaml`
```yaml
---
kind: ClusterConfiguration
apiVersion: konvoy.mesosphere.io/v1beta1
metadata:
  name: konvoy
spec:
  ...
  addons:
    addonsList:
      - name: traefik
        enabled: true
        values: |
          service:
            annotations:
              "service.beta.kubernetes.io/aws-load-balancer-internal": "true"
      - name: velero
        enabled: true
        values: |
          minioBackendConfiguration:
            service:
              annotations:
                "service.beta.kubernetes.io/aws-load-balancer-internal": "true"
```
### Modify the AWS EBS CSI storage driver to use the proxy and encrypt the drives
You can observe the system proxy by looking at a kubelet drop-in:
`/etc/systemd/system/kubelet.service.d/0-http-proxy-drop-in.conf`
```yaml
- name: awsebscsiprovisioner
  enabled: true
  values: |
    storageclass:
      encrypted: true
    env:
      HTTPS_PROXY: http://172.17.0.215:80
      NO_PROXY: '10.0.113.133,10.0.113.136,10.0.113.140,10.0.113.68,10.0.113.73,10.0.113.74,10.0.113.77,192.168.0.0/16,10.0.0.0/18,internal-pone-test-13da-lb-control-1716256604.us-east-1.elb.amazonaws.com,localhost,127.0.0.1,169.254.169.254,kubernetes,kubernetes.default,kubernetes.default.svc,kubernetes.default.svc.cluster,kubernetes.default.svc.cluster.local,.svc,.svc.cluster,.svc.cluster.local,10.0.113.197'
```
### Modify the location of the Helm repo
Add the `helmRepository`:
```yaml
kind: ClusterConfiguration
apiVersion: konvoy.mesosphere.io/v1beta1
spec:
  ...
  addons:
    - configRepository: /opt/konvoy/artifacts/kubernetes-base-addons
      configVersion: stable-1.16-1.2.0
      helmRepository:
        image: mesosphere/konvoy-addons-chart-repo:v1.4.4
```
### Provision
Run the Konvoy provision and deploy steps:
```shell
export HTTPS_PROXY=http://172.17.0.215:80
export NO_PROXY=169.254.169.254
konvoy provision -y
# about 3 minutes
konvoy deploy kubernetes -y
# about 7 minutes
konvoy deploy container-networking -y
```
## Install `kubectl`
```
cd _rpms
tar -xvf konvoy_v1.4.4_x86_64_rpm.tar
sudo yum install kubectl-1.16./kubectl*.rpm -y
```
### Test the KMS CSI Issue
Ensure that `kubectl get sc` shows a default storage class marked `(default)`.
If it does not, set it manually:
`kubectl patch storageclass awsebscsiprovisioner -p '{"metadata": {"annotations":{"storageclass.kubernetes.io/is-default-class":"true"}}}'`
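A hedged end-to-end check of the KMS encryption: create a throwaway PVC against the default storage class, mount it from any small pod (the class uses `WaitForFirstConsumer`, so nothing binds until a consumer is scheduled), then confirm the backing EBS volume reports the expected key. The PVC name and commands below are illustrative, not part of the original runbook.
```shell
# Create a throwaway PVC against the default (awsebscsiprovisioner) storage class
cat <<'EOF' | kubectl apply -f -
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: kms-test-pvc
spec:
  accessModes: ["ReadWriteOnce"]
  resources:
    requests:
      storage: 1Gi
EOF
# Mount kms-test-pvc from any small pod so the PVC binds, then inspect the EBS volume
VOL_ID=$(kubectl get pv "$(kubectl get pvc kms-test-pvc -o jsonpath='{.spec.volumeName}')" \
  -o jsonpath='{.spec.csi.volumeHandle}')
aws ec2 describe-volumes --volume-ids "$VOL_ID" \
  --query 'Volumes[].{Encrypted:Encrypted,KmsKeyId:KmsKeyId}' --output table
# Clean up when finished
kubectl delete pvc kms-test-pvc --ignore-not-found
```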
### Deploy Konvoy Addons
The addons include the additional charts that productionize Kubernetes; this step also installs the `admin.conf`:
```shell
konvoy deploy addons -y
konvoy apply kubeconfig
```
### Validate Kubernetes is Operational
```shell
kubectl get pods -A
kubectl get svc -A
kubectl get pv -A
kubectl get sc -o yaml
kubectl get addons -A
kubectl get clusteraddons -A
```
## Fix the Hidden Load Balancer
Fences needs to map the *hidden-lb* to the Konvoy ingress `internal-aslkdjalskdj.us-east-1.elb.amazonaws.com/ops/landing`.
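The internal ELB hostname for the Konvoy ingress can be read from the Traefik service, assuming the default Konvoy addon namespace and service name (`kubeaddons` / `traefik-kubeaddons`):
```shell
kubectl get svc traefik-kubeaddons -n kubeaddons \
  -o jsonpath='{.status.loadBalancer.ingress[0].hostname}'
```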
## Reference URLs:
[1] docs.d2iq.com
## Reference Material:
### Custom CA
```shell
# Generate a root CA key and self-signed root certificate
openssl genrsa -des3 -out rootCA.key 4096
openssl req -x509 -new -nodes -key rootCA.key -sha256 -days 1024 -out rootCA.crt
# Generate the domain key and a basic CSR
openssl genrsa -out mydomain.com.key 2048
openssl req -new -key mydomain.com.key -out mydomain.com.csr -subj "/C=US/ST=CA/O=D2iQ/CN=mydomain.com"
# Regenerate the CSR with subjectAltName entries
openssl req -new -sha256 -key mydomain.com.key -subj "/C=US/ST=CA/O=D2iQ/CN=mydomain.com" -reqexts SAN -config <(cat /etc/ssl/openssl.cnf <(printf "\n[SAN]\nsubjectAltName=DNS:mydomain.com,DNS:www.mydomain.com")) -out mydomain.com.csr
# Sign the CSR with the root CA and bundle the chain
openssl x509 -req -in mydomain.com.csr -CA rootCA.crt -CAkey rootCA.key -CAcreateserial -out mydomain.com.crt -days 500 -sha256
cat mydomain.com.crt rootCA.crt >> mydomain.com.ca-bundle
```
### Cheap Helm2 CLI
```
docker run -e KUBECONFIG=/opt/konvoy/admin.conf -v $(pwd):/opt/konvoy -w /opt/konvoy --entrypoint helm mesosphere/konvoy:v1.4.4 list
```
### Patched konvoy with GPG key
https://dkoshkin-temp.s3-us-west-2.amazonaws.com/konvoy%3Av1.4.4_gpg.tar
## RPM GPG Keys PATCH Creation
These keys need to be pushed to all machines in the inventory with an Ansible playbook:
```
ANSIBLE_HOST_KEY_CHECKING=False ansible-playbook -i inventory.yaml extras/playbooks/push-rpm-gpg-keys.yaml --extra-vars="ansible_ssh_private_key_file=<path-to-key>.pem"
```
Ansible playbook
`extras/playbooks/push-rpm-gpg-keys.yaml`
https://packages.d2iq.com/konvoy/rpm-gpg-pub-key
```
-----BEGIN PGP PUBLIC KEY BLOCK-----
mQENBF1KsfYBCAC4dLXeuiM3iHSxgTlBBAhFbl/hpHOfQeAZA8aCscciv7bdwZ0R
VthGPKsMS5g95wng9V+jJwfFEDcsIgM89a9+TOhapdX2LreQFFvIakDuNzGVdkcA
oSmlHuPpttlolChQPgTJWmHtmM+qwipmdyOddSCzBA0K8bQXkBKMZoNPx+jIB6WM
I67WqV4SVjvkji0NaGRjkitr1U3WtkNyWazotcBru0s6wUcNtkAu1JcNDIsahR8R
Sj1Tv5stErRzeXpgoxPPvZCE2gQIWQ4+wRrefwTMcLi1bV1e0BlcYhPvE7PClu6y
P1ZMzVPXvq6qxogywSVZVh/2Ay0Q+fwgHLJlABEBAAG0Lkt1YmVybmV0ZXMgS29u
dm95IG5va21lbSA8aGZlcm5hbmRlekBkMmlxLmNvbT6JAT4EEwECACgFAl1KsfYC
GwMFCQHhM4AGCwkIBwMCBhUIAgkKCwQWAgMBAh4BAheAAAoJEJYQ5Fr95B1lfmEI
ALMVBjLRC6tdbe+uCtnbfBtptASu9QAYkjNb/YoZRDhPMNxowvWzJvFs0pULgSW7
LAXB+ORXm0JHlCVVqIDS9pQLsSib2MhPOMvjEw0yi89cuhsoV6SsX1ksLfe/hbFL
r4QBpCYMLSl8Byc5At+m/603XbsLqVOx5wH/TrwyaPx3iBJbrCQIExq6bzACzEGW
0oeBMO6wL/vmkWA7o3XHghiRDF2NiPaTZgY3UCk6WB4S7pcVAuBPvQRaozGLbe9+
pdmqLMc0OTGIwfO5BbK/1ilMmJQVKceJc2m2l9EN8q8MmtrPws4LwWSik51Z2NYH
2ggyHRcbqtmqkneAeYn3Du+5AQ0EXUqx9gEIALuBALiG7jHPO/4ZZDSrxUwmHUWY
UeO0btmfg0lLvy3gBuOEBxZnGgdU0R+IgufRPeY6yFyd0v7fJVsrENROCCMPYs3G
zHoUUdtjs9+49HIaecTt1j1mUGwjnZ4CYb23o2Bio8xlVmofGeOzYdSV7ikcS+IY
tt4Oagt1egdxwI66CP5UXnpXR7D49CV6hKaYzwX7UqSFg5njBzmT6nlHuHs9nN0K
OI6NBwYs4fKH5+2TYxlf/2AD4sGy+3E94F0zQ5iVtpgxlMbMf8F0C9drHtZGIJ11
klhFXTArsxqPAdKnSqkLu5v1tHGTKfqW/Fq6NIElHDurtmk92RuR43DAsmsAEQEA
AYkBJQQYAQIADwUCXUqx9gIbDAUJAeEzgAAKCRCWEORa/eQdZcjrB/9rCbPDFdvm
XYvmHSCAVWMqkC9v6UkHwGW3BCFXIlyVuPC8eq+AwI0cSh7FwezTUHAQ0RPQFhkJ
w7Kq/yVE3LkQSyb4iE2ny3g3h642DRggmpfl0V1XncxeJzEz1lVZV5nbeHNshAqG
KHs0Hud9Az8BItQMLctdlN1tX77dmMxhnuAYoY0YzbVCRcv6fP0t2gy0pCsIXfRc
zE/+gnHDtqcYQzy4ArIKutqhhPWWAw8xuIdcV9oCW2Bu8dAYqi0KRJRDkAmW41ju
LVUMSDBlveNLgOwucVMQKUcHS1DUrWEw7Zvb0tXKJ5iKoK8STio4MNoCQbc57Ehv
BKE9hNe2iSrv
=+gkA
-----END PGP PUBLIC KEY BLOCK-----
```
https://download.docker.com/linux/centos/gpg
docker-gpg-pub-key
```
-----BEGIN PGP PUBLIC KEY BLOCK-----
mQINBFit5IEBEADDt86QpYKz5flnCsOyZ/fk3WwBKxfDjwHf/GIflo+4GWAXS7wJ
1PSzPsvSDATV10J44i5WQzh99q+lZvFCVRFiNhRmlmcXG+rk1QmDh3fsCCj9Q/yP
w8jn3Hx0zDtz8PIB/18ReftYJzUo34COLiHn8WiY20uGCF2pjdPgfxE+K454c4G7
gKFqVUFYgPug2CS0quaBB5b0rpFUdzTeI5RCStd27nHCpuSDCvRYAfdv+4Y1yiVh
KKdoe3Smj+RnXeVMgDxtH9FJibZ3DK7WnMN2yeob6VqXox+FvKYJCCLkbQgQmE50
uVK0uN71A1mQDcTRKQ2q3fFGlMTqJbbzr3LwnCBE6hV0a36t+DABtZTmz5O69xdJ
WGdBeePCnWVqtDb/BdEYz7hPKskcZBarygCCe2Xi7sZieoFZuq6ltPoCsdfEdfbO
+VBVKJnExqNZCcFUTEnbH4CldWROOzMS8BGUlkGpa59Sl1t0QcmWlw1EbkeMQNrN
spdR8lobcdNS9bpAJQqSHRZh3cAM9mA3Yq/bssUS/P2quRXLjJ9mIv3dky9C3udM
+q2unvnbNpPtIUly76FJ3s8g8sHeOnmYcKqNGqHq2Q3kMdA2eIbI0MqfOIo2+Xk0
rNt3ctq3g+cQiorcN3rdHPsTRSAcp+NCz1QF9TwXYtH1XV24A6QMO0+CZwARAQAB
tCtEb2NrZXIgUmVsZWFzZSAoQ0UgcnBtKSA8ZG9ja2VyQGRvY2tlci5jb20+iQI3
BBMBCgAhBQJYrep4AhsvBQsJCAcDBRUKCQgLBRYCAwEAAh4BAheAAAoJEMUv62ti
Hp816C0P/iP+1uhSa6Qq3TIc5sIFE5JHxOO6y0R97cUdAmCbEqBiJHUPNQDQaaRG
VYBm0K013Q1gcJeUJvS32gthmIvhkstw7KTodwOM8Kl11CCqZ07NPFef1b2SaJ7l
TYpyUsT9+e343ph+O4C1oUQw6flaAJe+8ATCmI/4KxfhIjD2a/Q1voR5tUIxfexC
/LZTx05gyf2mAgEWlRm/cGTStNfqDN1uoKMlV+WFuB1j2oTUuO1/dr8mL+FgZAM3
ntWFo9gQCllNV9ahYOON2gkoZoNuPUnHsf4Bj6BQJnIXbAhMk9H2sZzwUi9bgObZ
XO8+OrP4D4B9kCAKqqaQqA+O46LzO2vhN74lm/Fy6PumHuviqDBdN+HgtRPMUuao
xnuVJSvBu9sPdgT/pR1N9u/KnfAnnLtR6g+fx4mWz+ts/riB/KRHzXd+44jGKZra
IhTMfniguMJNsyEOO0AN8Tqcl0eRBxcOArcri7xu8HFvvl+e+ILymu4buusbYEVL
GBkYP5YMmScfKn+jnDVN4mWoN1Bq2yMhMGx6PA3hOvzPNsUoYy2BwDxNZyflzuAi
g59mgJm2NXtzNbSRJbMamKpQ69mzLWGdFNsRd4aH7PT7uPAURaf7B5BVp3UyjERW
5alSGnBqsZmvlRnVH5BDUhYsWZMPRQS9rRr4iGW0l+TH+O2VJ8aQ
=0Zqq
-----END PGP PUBLIC KEY BLOCK-----
```
nvme-gpg-key
```
-----BEGIN PGP PUBLIC KEY BLOCK-----
Version: GnuPG v1.4.5 (GNU/Linux)
mQINBFOn/0sBEADLDyZ+DQHkcTHDQSE0a0B2iYAEXwpPvs67cJ4tmhe/iMOyVMh9
Yw/vBIF8scm6T/vPN5fopsKiW9UsAhGKg0epC6y5ed+NAUHTEa6pSOdo7CyFDwtn
4HF61Esyb4gzPT6QiSr0zvdTtgYBRZjAEPFVu3Dio0oZ5UQZ7fzdZfeixMQ8VMTQ
4y4x5vik9B+cqmGiq9AW71ixlDYVWasgR093fXiD9NLT4DTtK+KLGYNjJ8eMRqfZ
Ws7g7C+9aEGHfsGZ/SxLOumx/GfiTloal0dnq8TC7XQ/JuNdB9qjoXzRF+faDUsj
WuvNSQEqUXW1dzJjBvroEvgTdfCJfRpIgOrc256qvDMp1SxchMFltPlo5mbSMKu1
x1p4UkAzx543meMlRXOgx2/hnBm6H6L0FsSyDS6P224yF+30eeODD4Ju4BCyQ0jO
IpUxmUnApo/m0eRelI6TRl7jK6aGqSYUNhFBuFxSPKgKYBpFhVzRM63Jsvib82rY
438q3sIOUdxZY6pvMOWRkdUVoz7WBExTdx5NtGX4kdW5QtcQHM+2kht6sBnJsvcB
JYcYIwAUeA5vdRfwLKuZn6SgAUKdgeOtuf+cPR3/E68LZr784SlokiHLtQkfk98j
NXm6fJjXwJvwiM2IiFyg8aUwEEDX5U+QOCA0wYrgUQ/h8iathvBJKSc9jQARAQAB
tEJDZW50T1MtNyBLZXkgKENlbnRPUyA3IE9mZmljaWFsIFNpZ25pbmcgS2V5KSA8
c2VjdXJpdHlAY2VudG9zLm9yZz6JAjUEEwECAB8FAlOn/0sCGwMGCwkIBwMCBBUC
CAMDFgIBAh4BAheAAAoJECTGqKf0qA61TN0P/2730Th8cM+d1pEON7n0F1YiyxqG
QzwpC2Fhr2UIsXpi/lWTXIG6AlRvrajjFhw9HktYjlF4oMG032SnI0XPdmrN29lL
F+ee1ANdyvtkw4mMu2yQweVxU7Ku4oATPBvWRv+6pCQPTOMe5xPG0ZPjPGNiJ0xw
4Ns+f5Q6Gqm927oHXpylUQEmuHKsCp3dK/kZaxJOXsmq6syY1gbrLj2Anq0iWWP4
Tq8WMktUrTcc+zQ2pFR7ovEihK0Rvhmk6/N4+4JwAGijfhejxwNX8T6PCuYs5Jiv
hQvsI9FdIIlTP4XhFZ4N9ndnEwA4AH7tNBsmB3HEbLqUSmu2Rr8hGiT2Plc4Y9AO
aliW1kOMsZFYrX39krfRk2n2NXvieQJ/lw318gSGR67uckkz2ZekbCEpj/0mnHWD
3R6V7m95R6UYqjcw++Q5CtZ2tzmxomZTf42IGIKBbSVmIS75WY+cBULUx3PcZYHD
ZqAbB0Dl4MbdEH61kOI8EbN/TLl1i077r+9LXR1mOnlC3GLD03+XfY8eEBQf7137
YSMiW5r/5xwQk7xEcKlbZdmUJp3ZDTQBXT06vavvp3jlkqqH9QOE8ViZZ6aKQLqv
pL+4bs52jzuGwTMT7gOR5MzD+vT0fVS7Xm8MjOxvZgbHsAgzyFGlI1ggUQmU7lu3
uPNL0eRx4S1G4Jn5
=OGYX
-----END PGP PUBLIC KEY BLOCK-----
```
### Patch v1.4.4_gpg with fixed ansible_playbook
Using konvoy v1.4.4_gpg.
In shell one:
```
./konvoy up
# don't say yes; let it hang at the confirmation prompt
```
In shell two:
```shell
# aaaaaaa is the container ID of the konvoy instance left running by shell one
docker cp main.yaml aaaaaaa:/opt/konvoy/ansible/playbooks/roles/packages-copy-local/tasks/main.yaml
docker cp nvme aaaaaaa:/opt/konvoy/ansible/playbooks/roles/packages-copy-local/templates/nvme
docker cp docker aaaaaaa:/opt/konvoy/ansible/playbooks/roles/packages-copy-local/templates/docker
docker cp d2iq aaaaaaa:/opt/konvoy/ansible/playbooks/roles/packages-copy-local/templates/kubernetes
docker commit aaaaaaa mesosphere/konvoy:v1.4.4_patched
```
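A hedged way to find the container ID that stands in for `aaaaaaa` above, assuming the hung `./konvoy up` from shell one is running the `mesosphere/konvoy:v1.4.4_gpg` image:
```shell
docker ps --filter ancestor=mesosphere/konvoy:v1.4.4_gpg --format '{{.ID}}  {{.Image}}'
```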
(Make sure the RPM tarball name matches the Konvoy release name.)
```shell
./konvoy --version
export KONVOY_VERSION=v1.4.4_patched
cd konvoy_v1.4.4/
cp konvoy_v1.4.4_x86_64_rpms.tar.gz konvoy_v1.4.4_patched_x86_64_rpms.tar.gz
```
The patched `main.yaml` that is copied into the image above:
```yaml
---
- name: create packages directories
  file:
    path: /opt/konvoy/rpms/
    state: directory
  when:
    - ansible_os_family == 'RedHat'
    - rpms_tar_file != ''

- block:
    - name: copy nvme GPG key
      template:
        src: "nvme"
        dest: "/opt/konvoy/nvme"
    - name: import nvme GPG key
      command: rpm --import /opt/konvoy/nvme
    - name: copy Docker GPG key
      template:
        src: "docker"
        dest: "/opt/konvoy/docker"
    - name: import docker GPG key
      command: rpm --import /opt/konvoy/docker
    - name: copy Kubernetes GPG key
      template:
        src: "kubernetes"
        dest: "/opt/konvoy/kubernetes"
    - name: import kubernetes GPG key
      command: rpm --import /opt/konvoy/kubernetes
  when:
    - ansible_os_family == 'RedHat'
    - rpms_tar_file != ''

- name: copy rpm packages tar file to remote
  copy:
    src: "{{ rpms_tar_file }}"
    dest: "/opt/konvoy/rpms/{{ spec.version }}_x86_64.tar.gz"
    owner: root
  register: copied
  when:
    - ansible_os_family == 'RedHat'
    - rpms_tar_file != ''

- name: unarchive rpm packages tar file
  unarchive:
    src: "/opt/konvoy/rpms/{{ spec.version }}_x86_64.tar.gz"
    dest: /opt/konvoy/rpms/
    owner: root
    remote_src: yes
  when:
    - ansible_os_family == 'RedHat'
    - rpms_tar_file != ''
    - copied.changed

- name: create packages directories
  file:
    path: /opt/konvoy/debs/
    state: directory
  when:
    - ansible_os_family == 'Debian'
    - debs_tar_file != ''

- name: copy deb packages tar file to remote
  copy:
    src: "{{ debs_tar_file }}"
    dest: "/opt/konvoy/debs/{{ spec.version }}_amd64.tar.gz"
    owner: root
  register: copied
  when:
    - ansible_os_family == 'Debian'
    - debs_tar_file != ''

- name: unarchive deb packages tar file
  unarchive:
    src: "/opt/konvoy/debs/{{ spec.version }}_amd64.tar.gz"
    dest: /opt/konvoy/debs/
    owner: root
    remote_src: yes
  when:
    - ansible_os_family == 'Debian'
    - debs_tar_file != ''
    - copied.changed
```