In this section, we outline the steps to deploy a full DAC infrastructure on Kubernetes. This tutorial is cloud-provider-agnostic.
Prerequisites: a good understanding of Kubernetes.
You can access the YAML files referenced in this section from this public repository; feel free to reuse and adapt them to your needs.
WIPE_COORDINATOR_DIR: "NO" # put 'YES' if you want to remove that storage before starting up
WIPE_COORDINATOR_REVEAL_DATA_DIR: "NO"
WIPE_COORDINATOR_OCTEZ_CLIENT_DIR: "YES"
COORDINATOR_DIR: "/mounted_volume/dac/coordinator"
COORDINATOR_REVEAL_DATA_DIR: "/mounted_volume_reveals/dac/coordinator/reveals"
TEZOS_RPC: "https://ghostnet.tezos.marigold.dev"
WIPE_COORDINATOR_DIR: "YES"
and restart the pod so that it gets cleaned up at startup.
Bash scripts are stored in a ConfigMap and mounted into the pods.
Each script essentially does the following:
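As an illustration, the wipe-and-start logic driven by the `WIPE_*` variables above might look like the sketch below. This is hypothetical: the function name and the demo directory are placeholders, and the actual scripts in the repository may differ.

```shell
#!/bin/sh
# Hypothetical sketch: prepare a DAC data directory, wiping it first when requested.
prepare_dir() {
  dir="$1"
  wipe="$2"
  if [ "$wipe" = "YES" ]; then
    echo "Wiping $dir"
    rm -rf "$dir"
  fi
  # Recreate the directory so the node always finds it at startup.
  mkdir -p "$dir"
}

# Demo with a temporary directory standing in for COORDINATOR_DIR.
demo_dir="$(mktemp -d)/dac/coordinator"
mkdir -p "$demo_dir"
touch "$demo_dir/stale-file"

# Passing "YES" wipes the stale data before the directory is recreated.
prepare_dir "$demo_dir" "YES"
ls "$demo_dir" | wc -l   # 0: the stale file is gone, the directory is empty
```

In the real pods, the script would then `exec` the `octez-dac-node` command so that the node becomes PID 1 of the container.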
If you're operating the DAC on a Kubernetes instance or a private server managed by a cloud provider, setting up Authorization and Authentication is essential. Most often, this is done using the cloud provider's IAM (Identity and Access Management). Such implementations ensure that only authorized users can access sensitive cloud resources and that they can execute only permitted actions.
For maintaining security specifically within Kubernetes, it's imperative to follow its security best practices, which include:
For those who use GitOps workflows, tools like SOPS and SealedSecrets come in handy. They offer effective ways to encrypt secrets (in this context, the private keys of wallets).
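As an illustration, a SealedSecret lets you commit an encrypted wallet key to Git, where only the controller running in the cluster can decrypt it. The manifest below is a hypothetical sketch: the resource names, the key name, and the encrypted payload (normally produced by `kubeseal` from a regular Secret) are placeholders.

```yaml
apiVersion: bitnami.com/v1alpha1
kind: SealedSecret
metadata:
  name: dac-coordinator-wallet   # placeholder name
  namespace: dac-ghostnet
spec:
  encryptedData:
    # Encrypted value produced by `kubeseal`; the payload below is a placeholder.
    wallet_secret_key: AgB4x...
```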
The octez-dac-node binary does not support CORS headers. Therefore, we use a reverse proxy positioned between the DAC nodes and the client to append CORS headers to HTTP responses.
As illustrated in the provided ingress manifests, we use the Nginx ingress controller as a reverse proxy to supplement the CORS headers:
nginx.ingress.kubernetes.io/enable-cors: "true"
(...)
Additionally, some ingress configuration options need tweaking:
# increase max body size to be able to transfer large files to the DAC:
nginx.ingress.kubernetes.io/proxy-body-size: 2g
# suppress proxy timeouts:
nginx.ingress.kubernetes.io/proxy-read-timeout: "2147483647" # We want the timeout to never trigger. There's no way to turn it off per se, so we achieve this by setting an unrealistically large number.
nginx.ingress.kubernetes.io/proxy-send-timeout: "2147483647"
nginx.ingress.kubernetes.io/proxy-connect-timeout: "2147483647"
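Putting these annotations together, a minimal Ingress for the coordinator might look like the following sketch. The host name is a placeholder; the namespace and service name are assumptions based on the in-cluster address used later in this section.

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: dac-coordinator-ingress
  namespace: dac-ghostnet
  annotations:
    nginx.ingress.kubernetes.io/enable-cors: "true"
    nginx.ingress.kubernetes.io/proxy-body-size: 2g
    nginx.ingress.kubernetes.io/proxy-read-timeout: "2147483647"
    nginx.ingress.kubernetes.io/proxy-send-timeout: "2147483647"
    nginx.ingress.kubernetes.io/proxy-connect-timeout: "2147483647"
spec:
  ingressClassName: nginx
  rules:
    - host: dac-coordinator.example.com   # placeholder host
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: dac-coordinator-service-ghostnet
                port:
                  number: 80
```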
In our case, we also use the Nginx ingress controller to expose the DAC endpoints publicly over the internet.
You can also reach your DAC from within Kubernetes on its corresponding service: <name-of-service>.<k8s-namespace>.svc:80
Like so:
dac-coordinator-service-ghostnet.dac-ghostnet.svc:80
Please note that we had to specify --rpc-addr 0.0.0.0 --rpc-port <port> in the octez-dac-node command for each node to ensure it accepts connections on the desired interface and port (0.0.0.0 makes the node listen on all interfaces).
As of now, the octez-dac-node does not produce metrics that we could use to monitor our nodes.
Nonetheless, we can configure a livenessProbe in our deployments, allowing Kubernetes to periodically verify if the container is alive:
livenessProbe:
  httpGet:
    path: /health/live
    port: 11100
  initialDelaySeconds: 40
  periodSeconds: 20
  failureThreshold: 3
DAC data not only resides on Persistent Volume Claims (PVCs), a dependable storage solution within Kubernetes, but is also replicated across all DAC Committee Members and Observers. Despite this inherent redundancy, it remains prudent to have an external backup and recovery strategy for your PVCs.
We opted for Velero, an open-source tool that can safely back up, recover, and migrate Kubernetes cluster resources and persistent volumes, and that comes equipped with a disaster recovery feature.
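As an illustration, a Velero Schedule resource can take periodic backups of the DAC namespace and its volumes. The manifest below is a hypothetical sketch: the schedule, retention period, and namespaces are placeholders to adapt to your own setup.

```yaml
apiVersion: velero.io/v1
kind: Schedule
metadata:
  name: dac-ghostnet-nightly   # placeholder name
  namespace: velero
spec:
  schedule: "0 2 * * *"        # every night at 02:00 (placeholder cron)
  template:
    includedNamespaces:
      - dac-ghostnet
    ttl: 168h                  # keep each backup for 7 days (placeholder)
```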