ctf.tjcsec.club is an rCTF instance. rCTF hosts information about teams, the scoreboard, and challenge metadata (challenge names, descriptions, files, etc.). With rCTF, challenge metadata is rendered as Markdown, meaning that cool features such as links, bolded text, and italics are easy to add to a challenge description.
You may have noticed that some challenge descriptions provide links to different sites, including a challenge server link, admin bot link, or instancer link. rCTF does nothing to manage these other servers. Instead, we use different technologies to make sure that these servers work.
Challenge servers are deployments that are individually associated with a challenge (though they are optional). They are hosted at challenge.tjcsec.club (for raw TCP challenges) or *.challenge.tjcsec.club (for web challenges).
The admin bot simulates another user's interactions with a website. It is hosted at admin-bot.tjcsec.club/challenge-name and is configured with JavaScript.
The instancer makes an instance of some server(s) publicly available on demand, allowing each team their own "instance" of a challenge to work on. The instancer main page, instancer.challenge.tjcsec.club/challenge/challenge-name, sends a request to create the instance. After a configured period of time, the instancer kills the instance.
It costs a lot of money to continuously rent servers from a cloud computing platform like Google Cloud Platform (GCP). For an external CTF such as TJCTF, performance is important, so it is optimal to run everything in the cloud; this is usually fine because sponsorships cover the cost of that infrastructure. However, we don't have sponsors for our internal CTF, so we need to pay as little money as possible while still providing performant servers for students. After some scheming, we devised a system where the club pays exactly $0 to host infrastructure.
We host rCTF and the challenge servers on a single computer that we bought for less than $100, termed "Otto." This computer should live in an officer's house, ideally connected to the internet over Ethernet and accessed for configuration through SSH. Port forwarding from the officer's router is not recommended because the IP address of a home network is usually volatile and exposing ports from a home server is generally unsafe.
To obtain a static IP, we can use a free instance on Google Compute Engine (GCE). GCE instances with the following specifications are provided with the Free Tier of GCP:
e2-micro VM instance per month in one of the following US regions: us-west1, us-central1, us-east1
A GCE instance can be assigned a static external IP, so you can set your DNS for challenge.tjcsec.club, *.challenge.tjcsec.club, and ctf.tjcsec.club to permanently point to that address with simple DNS A records. At the time of writing, we use Cloudflare for DNS.
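As a quick sanity check (assuming the records are DNS-only rather than proxied through Cloudflare, and using 203.0.113.10 as a placeholder for the GCE instance's external address), you can confirm that the records resolve correctly:
dig +short ctf.tjcsec.club
dig +short anything.challenge.tjcsec.club
# both should print the GCE instance's external IP, e.g. 203.0.113.10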
To forward internet traffic from the GCE instance to Otto, we can use a combination of a virtual private network (VPN) and a proxy. A VPN allows multiple devices to act as if they are on the same LAN, enabling computers to connect to each other without exposing a port. The TJCSec GitHub team has a Tailscale network (tailnet) set up that you can use to connect the two devices, the GCE instance and Otto. You can also add your own personal computer to the tailnet to be able to SSH to Otto from any location. Then, on the GCE instance, configure a proxy such as HAProxy to forward all TCP connections on port 443 (for rCTF and web challenge servers) and ports 31000–31999 (for TCP challenge servers) to Otto. In the following HAProxy configuration (which is currently used), all connections to ctf.tjcsec.club are TLS-terminated in HAProxy and then forwarded to Otto, while all other connections are forwarded directly; this is done because rCTF does not have TLS encryption built in, whereas the web challenge servers sit behind a proxy that has TLS enabled. To generate the TLS certificate for ctf.tjcsec.club (located at /opt/rctf/certs/cert.pem on the GCE instance), see the Certificate Generation section.
global
    log /dev/log local0
    log /dev/log local1 notice
    chroot /var/lib/haproxy
    stats socket /run/haproxy/admin.sock mode 660 level admin expose-fd listeners
    stats timeout 30s
    user haproxy
    group haproxy
    daemon

defaults
    log global
    mode tcp
    option tcplog
    option dontlognull
    timeout connect 5000
    timeout client 50000
    timeout server 50000
    errorfile 400 /etc/haproxy/errors/400.http
    errorfile 403 /etc/haproxy/errors/403.http
    errorfile 408 /etc/haproxy/errors/408.http
    errorfile 500 /etc/haproxy/errors/500.http
    errorfile 502 /etc/haproxy/errors/502.http
    errorfile 503 /etc/haproxy/errors/503.http
    errorfile 504 /etc/haproxy/errors/504.http

listen web
    bind *:31000-31999
    mode tcp
    server worker1 100.120.42.45

frontend http_in
    mode http
    bind *:80
    http-request redirect scheme https

frontend https_in
    mode tcp
    option tcplog
    bind *:443
    tcp-request inspect-delay 5s
    tcp-request content accept if { req_ssl_hello_type 1 }
    use_backend recir_ctf.tjcsec.club if { req.ssl_sni -i ctf.tjcsec.club }
    use_backend recir_challenge.tjcsec.club if { req.ssl_sni -m reg -i ^[^\.]+\.challenge\.tjcsec\.club$ }

backend recir_ctf.tjcsec.club
    server loopback-for-tls abns@haproxy-ctf-tjcsec-club send-proxy-v2

backend recir_challenge.tjcsec.club
    server loopback-for-tls abns@haproxy-challenge-tjcsec-club send-proxy-v2

frontend fe_ctf.tjcsec.club
    mode http
    bind abns@haproxy-ctf-tjcsec-club accept-proxy ssl crt /opt/rctf/certs/cert.pem
    use_backend ctf.tjcsec.club

frontend fe_challenge.tjcsec.club
    mode tcp
    bind abns@haproxy-challenge-tjcsec-club accept-proxy
    use_backend challenge.tjcsec.club

backend ctf.tjcsec.club
    mode http
    server rctf1 100.120.42.45:8080

backend challenge.tjcsec.club
    mode tcp
    server challs1 100.120.42.45:443
Note that the IP address used in the listen and backend sections of the configuration is the address of Otto in Tailscale, not its public IP address.
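For reference, joining a device to the tailnet and finding its Tailscale address generally looks like the following (a sketch; the exact login flow depends on how the tailnet is administered):
curl -fsSL https://tailscale.com/install.sh | sh   # install Tailscale (Linux)
sudo tailscale up                                  # prints a URL to authorize this device on the tailnet
tailscale ip -4                                    # prints this device's Tailscale IPv4 address, e.g. 100.120.42.45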
After this, you may set up rCTF and the challenge servers on Otto normally.
We can use the certbot CLI tool to easily generate TLS certificates signed by Let's Encrypt. After installing certbot, you can use the following command to generate a TLS certificate for a standard subdomain (e.g. ctf.tjcsec.club) or a wildcard subdomain (e.g. *.challenge.tjcsec.club), substituting the appropriate domain for my-subdomain.tjcsec.club:
certbot certonly --manual --preferred-challenges dns -d my-subdomain.tjcsec.club
The above command will prompt you to set a DNS TXT record with some random value. You should do this in Cloudflare. Changes will propagate relatively quickly. After passing the ACME challenge, your public full chain certificate will be saved to /etc/letsencrypt/live/my-subdomain.tjcsec.club/fullchain.pem, and your private key will be saved to /etc/letsencrypt/live/my-subdomain.tjcsec.club/privkey.pem. You should copy the certificate and key to the correct location for use in the respective server.
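For the HAProxy setup described earlier, note that the crt option expects the certificate chain and private key concatenated into a single PEM file. Assuming you generated a certificate for ctf.tjcsec.club on the GCE instance, something like the following produces the file referenced in the configuration:
sudo cat /etc/letsencrypt/live/ctf.tjcsec.club/fullchain.pem \
  /etc/letsencrypt/live/ctf.tjcsec.club/privkey.pem \
  | sudo tee /opt/rctf/certs/cert.pem > /dev/null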
Docker is a technology that lets us easily replicate the environment that we want an application to run in. A Docker image is a template for what environment your application should run in. A container is an actual running instance of that image. Think of an image as a blueprint and a container as a building.
Many Docker containers can be run together easily using Docker Compose, which is a bit like a layout for a neighborhood. Neighborhood layouts do not specify exactly what each house should look like; instead, they have directions to use a specific blueprint. Likewise, docker-compose.yaml files have directions to use a specific container image for each service (i.e. application) that is deployed, along with extra directives such as which ports should be exposed to the public, what values specific environment variables should be set to, and so on. rCTF is run using Docker and Docker Compose. This allows many services, such as the rCTF website and the database, to be run easily on the same system.
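As a tiny illustration of the format (an illustrative example, not our actual configuration), a docker-compose.yaml with a single service might look like this:
services:
  web:                      # name of the service (i.e. application)
    image: nginx:1.25       # which image (blueprint) to run
    ports:
      - "8080:80"           # expose container port 80 on host port 8080
    environment:
      - EXAMPLE_VAR=value   # set an environment variable inside the container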
While redpwn provides a simple rCTF installer script on the rCTF installation guide, we do not use it for club CTF (but we do use it for TJCTF). Instead, we have our own forked rCTF repository that adds Ion integration. To run the install script for that version of rCTF, run:
curl https://raw.githubusercontent.com/TJCSec/rctf/master/install/install.sh | sh
rCTF is now installed to /opt/rctf. You must now configure rCTF using YAML files located in /opt/rctf/conf.d/. These files can be named anything as long as they are in the folder, but I personally split them into three different files called 01-ui.yaml, 02-ctf.yaml, and 03-db.yaml.
01-ui.yaml:
ctfName: TJCSC CTF
meta:
  description: TJCSC CTF is a year-round in-house competition designed to let you practice what you've learned at the club. We give quarterly prizes to the top scorers!
  imageUrl: https://ctf.tjcsec.club/uploads/images/logo.png # see below
  faviconUrl: https://ctf.tjcsec.club/uploads/images/favicon.ico # see below
homeContent: |
  <image src="/uploads/images/logo.png" style="width: 30rem; margin: auto; display: block;" />
  # TJCSC CTF
  <hr />
  TJCSC CTF is a year-round in-house competition designed to let you practice what you've learned at the club. We give quarterly prizes to the top scorers!
  <timer></timer>
The above homeContent configuration in 01-ui.yaml is the Markdown content for the home page. Various special tags, such as <timer></timer>, can also be used in this section; they are more thoroughly documented in the rCTF installation guide. Additionally, you can provide files for rCTF to serve as the meta image and favicon image by mounting these files into the rCTF container. To do this in Docker Compose, you can mount a volume onto the rctf service by editing docker-compose.yaml:
rctf:
  ...
  volumes:
    - ./conf.d:/app/conf.d
    - ./data/uploads:/app/uploads
  ...
Thus, you can create the files /opt/rctf/data/uploads/images/logo.png and /opt/rctf/data/uploads/images/favicon.ico to have the images available for rCTF to serve.
02-ctf.yaml:
origin: https://ctf.tjcsec.club
divisions:
  tj: TJ
  officers: Officers
  open: Open
divisionACLs:
  - match: regex
    value: ^(2024dlin|2024kdonnell|2024ishanmug|2025vvemuri|2026dbalaji|2025sbhargav|2025bho|2024dqiu|2024storo)@tjhsst.edu$ # regex for Ion usernames of officers
    divisions:
      - officers
  - match: domain
    value: tjhsst.edu
    divisions:
      - tj
  - match: any
    value: ''
    divisions:
      - open
tokenKey: 'automatically generated by install script'
ion:
  clientId: 'see below'
  clientSecret: 'see below'
startTime: 1699579341087 # start time in Unix epoch timestamp in milliseconds
endTime: 1718208000000 # end time in Unix epoch timestamp in milliseconds
To generate the client ID and client secret for Ion integration used in 02-ctf.yaml, go to https://ion.tjhsst.edu/oauth/applications/register/ and add a new application with a confidential client type and authorization code authorization grant type. Add a redirect URL to https://ctf.tjcsec.club/integrations/ion/callback.
03-db.yaml:
database:
  sql:
    host: postgres
    user: rctf
    database: rctf
  redis:
    host: redis
  migrate: before
Before we are done, we must also make a small change to docker-compose.yaml to properly receive traffic forwarded from our GCE instance. Instead of making rCTF available on 127.0.0.1:8080, we must make it available on [Tailscale IP Address]:8080 by editing the rctf service to look like the following:
rctf:
  image: ghcr.io/tjcsec/rctf:${RCTF_GIT_REF}
  restart: always
  ports:
    - "100.120.42.45:8080:80"
  networks:
    - rctf
  env_file:
    - .env
  environment:
    - PORT=80
  volumes:
    - ./conf.d:/app/conf.d
    - ./data/uploads:/app/uploads
  depends_on:
    - redis
    - postgres
After running docker compose up -d while in the /opt/rctf directory, rCTF should be available at ctf.tjcsec.club. Feel free to log in with your Ion account.
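If the site does not come up, a couple of standard Docker Compose commands (run from /opt/rctf) help with debugging:
docker compose ps            # list the rctf, redis, and postgres containers and their status
docker compose logs -f rctf  # follow the logs of the rctf service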
To create an admin account on rCTF and manage challenges using a web UI at ctf.tjcsec.club/admin/challs, you must update your user account on rCTF to have admin permissions. Connect to the PostgreSQL database using the following command:
docker exec -it rctf-postgres-1 psql -U rctf
You can now run any SQL query on the database. Set admin permissions using the following command (substituting your own email):
UPDATE users SET perms=3 WHERE email='2024dlin@tjhsst.edu';
You should now be able to manage challenges at ctf.tjcsec.club/admin/challs. Send your login URL to the other officers for them to be able to manage challenges using that UI as well. Automatic challenge configuration through commits to a repository will be documented later in this guide.
Kubernetes is an API, implemented by various distributions, for automatically deploying and scaling containerized workloads; we leverage it to deploy challenge servers. Install some Kubernetes distribution on Otto. At the time of writing, club challenge servers are deployed on a k3s cluster, and we have run TJCTF challenge servers on Google Kubernetes Engine (GKE) for the past few years.
On your personal computer, install kubectl, a tool to interact with Kubernetes resources. Point kubectl at your cluster by modifying your kubeconfig file. If you are using a k3s cluster, this should be as simple as copying /etc/rancher/k3s/k3s.yaml from Otto to your kubeconfig file on your personal computer (located at ~/.kube/config on Linux) and changing clusters[0].cluster.server to https://[Otto Tailscale IP Address]:6443. Note that your personal computer must be connected to the tailnet for this to work.
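To confirm that kubectl can reach the cluster, a couple of standard commands suffice:
kubectl config get-contexts   # the context copied from k3s should be listed and selected
kubectl get nodes             # Otto should appear as a Ready node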
We use Traefik to manage the ingress for the web challenge servers. Traefik is a very configurable proxy that we use to properly route any access to *.challenge.tjcsec.club to the correct web service based on the wildcard subdomain host (i.e. a.challenge.tjcsec.club vs b.challenge.tjcsec.club). We do not need to configure each route manually because we will later set up automatic challenge configuration via commits to a repository; however, we do need to actually set up Kubernetes to use Traefik for routing.
The easiest way to install Traefik as an ingress controller is by using Helm, which is a bit like a package manager for Kubernetes.
Create a file called traefik-values.yaml anywhere on the system to customize the Traefik installation. This file should have the following values:
# use json logs
logs:
  access:
    enabled: true
    format: json
    fields:
      headers:
        names:
          X-Forwarded-For: keep
service:
  spec:
    externalTrafficPolicy: Local
# allow connections on ports 80 (HTTP) and 443 (HTTPS). redirect HTTP connections to HTTPS.
ports:
  web:
    port: 8000
    expose: true
    exposedPort: 80
    protocol: TCP
    redirectTo: websecure
  websecure:
    port: 8443
    expose: true
    exposedPort: 443
    protocol: TCP
    tls:
      enabled: true
# disable the default dashboard route. we will manually create another route for the dashboard.
ingressRoute:
  dashboard:
    enabled: false
While in the same directory as the traefik-values.yaml file, install the Traefik Helm Chart (after installing Helm):
helm repo add traefik https://traefik.github.io/charts
helm repo update
helm install --namespace=ingress --create-namespace --values=./traefik-values.yaml traefik traefik/traefik
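Once the chart is installed, you can check that the Traefik pod and its service came up in the ingress namespace:
kubectl get pods -n ingress
kubectl get svc -n ingress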
After Traefik is installed, you must configure it using kubectl. To make configuration changes using kubectl, you must make YAML files, called manifests, with your changes and then apply them with kubectl apply -f filename.yaml. I recommend keeping all Traefik-related configuration files (as well as the traefik-values.yaml file) in a similar location, though they can be stored anywhere on the system with any filename.
First, add TLS termination to Traefik. To generate the TLS certificate for *.challenge.tjcsec.club, see the Certificate Generation section. Then, create a Kubernetes secret that contains the TLS certificate for *.challenge.tjcsec.club:
apiVersion: v1
kind: Secret
metadata:
  name: csc-ctf-cert
type: kubernetes.io/tls
stringData:
  tls.crt: |
    -----BEGIN CERTIFICATE-----
    GENERATED CERTIFICATE CHAIN
    -----END CERTIFICATE-----
  tls.key: |
    -----BEGIN PRIVATE KEY-----
    GENERATED PRIVATE KEY
    -----END PRIVATE KEY-----
After, tell Traefik to use the *.challenge.tjcsec.club certificate for all routes by default:
apiVersion: traefik.containo.us/v1alpha1
kind: TLSStore
metadata:
  name: default
spec:
  defaultCertificate:
    secretName: csc-ctf-cert
Next, create a route for the Traefik dashboard, authenticated through a username and password:
apiVersion: v1
kind: Secret
metadata:
  name: dashboard-passwd
  namespace: ingress
data:
  users: |1
   BASE 64-ENCODED VERSION OF "username:bcrypt-password-hash" WITHOUT THE QUOTES
---
apiVersion: traefik.containo.us/v1alpha1
kind: Middleware
metadata:
  name: dashboard-auth
  namespace: ingress
spec:
  basicAuth:
    secret: dashboard-passwd
---
apiVersion: traefik.containo.us/v1alpha1
kind: IngressRoute
metadata:
  name: dashboard-route
  namespace: ingress
spec:
  entryPoints:
    - websecure
  routes:
    - kind: Rule
      match: Host(`traefik.challenge.tjcsec.club`) && (PathPrefix(`/dashboard`) || PathPrefix(`/api`))
      services:
        - kind: TraefikService
          name: api@internal
      middlewares:
        - name: dashboard-auth
          namespace: ingress
You should now see a Traefik dashboard at traefik.challenge.tjcsec.club/dashboard/. Log in with the username and password that you specified in the manifest.
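For reference, one way to produce the users value for the dashboard-passwd secret (assuming htpasswd from the apache2-utils package is installed) is:
htpasswd -nbB username 'your-password' | base64 -w0
# htpasswd -nbB prints "username:bcrypt-hash"; base64 -w0 encodes that line for the secret's data field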
Finally, you can add some middlewares that challenge servers may apply to their configurations:
apiVersion: traefik.containo.us/v1alpha1
kind: Middleware
metadata:
  name: nocontenttype
  namespace: ingress
spec:
  contentType:
    autoDetect: false
---
apiVersion: traefik.containo.us/v1alpha1
kind: Middleware
metadata:
  name: ratelimit
  namespace: ingress
spec:
  rateLimit:
    average: 50
    burst: 100
    sourceCriterion:
      ipStrategy:
        depth: 1
The nocontenttype middleware ensures that the Content-Type header is not automatically set by Traefik. The ratelimit middleware rate-limits clients to help mitigate denial-of-service attacks.
The Traefik ingress controller is now properly set up.
rCDS lets us automate the challenge deployment process. Unlike rCTF and the challenge servers, it is not installed on Otto. Instead, it is run on another remote computer every time the GitHub Actions workflow is triggered. In our club CTF repository, this should be triggered on every commit to the main branch, which includes merges from other branches. You can also manually trigger it from the "Actions" tab on GitHub.
rCDS checks every challenge to ensure that it is synced with the various "backends." That is, if any change has been made to a challenge or its associated files, it is updated appropriately. rCDS syncs challenges in stages; in the final stage, challenge descriptions are templated (e.g. {{ tags }} placeholders are replaced with actual text), and relevant metadata (i.e. author, flag, description) is provided to rCTF to make the challenge available on the main CTF site.
Note that you did not previously deploy a remote container registry for club CTF. For TJCTF, we use GCP's Artifact Registry, but storage on Artifact Registry costs a bit of money if you store more than 0.5 GB of images, so we can make do for club CTF by deploying a private registry on Otto with Docker:
docker run -d -p 0.0.0.0:5001:5000 --name registry --restart always registry:2
Ensure that port 5001 is not publicly accessible (it should only be reachable via loopback and Tailscale). Otherwise, attackers can access any image stored in the registry.
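One option, assuming Otto's Tailscale address is 100.120.42.45 as elsewhere in this guide, is to bind the registry to that address instead of 0.0.0.0 so that it is only reachable over the tailnet and from Otto itself; alternatively, keep the bind above and block the port with a firewall:
docker run -d -p 100.120.42.45:5001:5000 --name registry --restart always registry:2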
To get started with rCDS, create a file called rcds.yaml in the root of the challenges repository. This contains all the configuration information for when rCDS is run:
docker:
  image:
    prefix: 100.120.42.45:5001
flagFormat: flag\{[ -z]+\}
defaults:
  containers:
    resources:
      limits:
        cpu: 500m
        memory: 300Mi
      requests:
        cpu: 10m
        memory: 30Mi
  k8s:
    container:
      imagePullPolicy: Always
backends:
  - resolve: rctf
    options:
      url: https://ctf.tjcsec.club
      token: TOKEN FROM ADMIN LOGIN URL
      scoring:
        minPoints: 100
        maxPoints: 500
  - resolve: k8s
    options:
      kubeContext: default
      domain: challenge.tjcsec.club
      annotations:
        ingress:
          traefik.ingress.kubernetes.io/router.entrypoints: websecure
          traefik.ingress.kubernetes.io/router.middlewares: "ingress-nocontenttype@kubernetescrd,ingress-ratelimit@kubernetescrd"
Note that the rCTF token should be the URL-decoded version of the token in the login URL for your admin account.
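If you are unsure whether your token is URL-encoded (it will typically contain sequences like %3D if it is), a quick way to decode it is:
python3 -c 'import sys, urllib.parse; print(urllib.parse.unquote(sys.argv[1]))' 'PASTE-TOKEN-HERE'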
After, create the GitHub Actions workflow to run rCDS every time a change is committed to the main branch. In the repository, .github/workflows/deploy_rcds.yaml should contain the following:
name: Deploy with rCDS
on:
  workflow_dispatch: {}
  push:
    branches: [ "main" ]
jobs:
  deploy:
    runs-on: ubuntu-latest
    steps:
      - name: Checkout
        uses: actions/checkout@v3
      - name: Install rCDS
        run: |
          pip3 install git+https://github.com/tjcsec/rcds.git@ec65a6d617d23b3792930554b161603e1b99ccb9 && \
          pip3 install markupsafe==2.0.1 chardet==5.2.0 urllib3==1.26.15 requests==2.28.2 google-auth==2.23.0
      - name: Tailscale
        uses: tailscale/github-action@v2
        with:
          oauth-client-id: ${{ secrets.TS_OAUTH_CLIENT_ID }}
          oauth-secret: ${{ secrets.TS_OAUTH_SECRET }}
          tags: tag:ci
      - name: Authorize with kubernetes
        shell: bash
        run: |
          mkdir -p ~/.kube && \
          echo ${{ secrets.KUBE_CONFIG }} | base64 -d > ~/.kube/config
      - name: Enable insecure registries
        shell: bash
        run: |
          sudo mkdir -p /etc/docker && \
          sudo tee /etc/docker/daemon.json <<EOF
          {
            "insecure-registries": [
              "100.120.42.45:5001"
            ]
          }
          EOF
          sudo systemctl restart docker
      - name: Deploy
        shell: bash
        run: |
          rcds deploy
rCDS is a somewhat dated tool, so you will need to downgrade some packages to get it to work; in the future, you may need to downgrade more. Additionally, using our own private registry for Docker images increases the complexity of the workflow, since we have to connect to Tailscale and allow insecure, unencrypted registries in the workflow.
Note that the workflow uses various GitHub secrets to function. You can set these secrets in the Settings tab of the repository. Encode your kubeconfig file (on your personal computer) in base64 and use it as the KUBE_CONFIG secret. To generate the OAuth client ID and OAuth secret for Tailscale, which the workflow needs to reach the tailnet addresses, log in to Tailscale in your web browser and go to the OAuth Clients tab in Settings. Generate a client with the "all" scope[1]. After, set the secrets accordingly.
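For example, on Linux the value for the KUBE_CONFIG secret can be produced with:
base64 -w0 ~/.kube/config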
Once rcds.yaml and deploy_rcds.yaml are pushed, the workflow will run for the first time. Now, challenge authors may write challenges in the [category]/[challenge name] directory with the specification detailed in https://hackmd.io/@tjcsc/challenges. Feel free to test this by copying and pasting a pre-existing challenge from a previous year.
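For orientation only, a challenge.yaml in the rCDS format looks roughly like the sketch below; treat the linked specification as authoritative for field names and structure:
name: example-pwn
author: your-name
description: |-
  An example challenge description shown on the scoreboard.
flag: 'flag{example flag}'
provide:
  - dist/example-pwn.tar.gz
containers:
  main:
    build: .         # build the Dockerfile in this challenge's directory
    ports:
      - 1337
expose:
  main:
    - target: 1337   # container port
      tcp: 31100     # external port in the 31000-31999 range forwarded by HAProxy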
If you want to add a workflow to check challenges on branches other than main to ensure only working code gets deployed, add another workflow that is identical to the deploy workflow, with the following changes.
Change the name of the workflow and job, and change it to run only on pushes to branches other than main:
name: Test with rCDS
on:
  workflow_dispatch: {}
  push:
    branches-ignore: [ "main" ]
jobs:
  test:
    runs-on: ubuntu-latest
    steps:
      - name: Checkout
        uses: actions/checkout@v3
Also, change the final command to the test functionality instead of the full deploy:
- name: Test challenges
  shell: bash
  run: |
    rcds test
Unlike rCTF, the admin bot does not run on GCE. Instead, it runs on Google Cloud Run (GCR), which is free because we do not max out the Free Tier. GCR allows users to invoke a script when a request is made, which lets us easily spin up a Puppeteer instance (basically a code-controlled Chrome instance) whenever a user makes a request.
There are two pieces to deploying the admin bot: the Terraform configuration that describes the Cloud Run deployment, and a GitHub Actions job that builds the admin bot image and pushes it to Artifact Registry.
Terraform is a technology made to simplify cloud deployments immensely. It allows you to write HCL code to exactly specify what cloud components you want to deploy, which makes it simple to reuse configurations and automate deploys. In fact, the TJCSec/club-ctf-infra repository contains every commit for our club infrastructure on GCP since 2021. Artifacts from when we hosted everything on GCP are still in the repository, commented out, in case we have reason to move back to GCP. That being said, the admin bot configuration is not commented out because we still use Terraform to automate the admin bot deploy.
Before, I said that storage on the Artifact Registry costs money; however, the admin bot image is small enough that its storage on Artifact Registry is free. Additionally, hosting images on Artifact Registry simplifies the Terraform configuration since GCP project access is already configured to access GCR.
Make a new repository in Artifact Registry called "challenges." While it is currently empty, we will use this to store the admin bot image later.
To automate admin bot deploys, we use a GitHub Action job that automatically builds the admin bot image and pushes it to Artifact Registry. After, the job will run a script to automatically update a Terraform variable indicating the latest admin bot version and then update the GCR deploys.
The full job to add to your deploy workflow is shown below:
adminbot:
  runs-on: ubuntu-latest
  steps:
    - name: Checkout
      uses: actions/checkout@v3
    - id: 'auth'
      uses: 'google-github-actions/auth@v2'
      with:
        credentials_json: ${{ secrets.GCLOUD_GHA_TOKEN }}
    - id: 'configure-docker'
      shell: bash
      run: gcloud auth configure-docker us-east4-docker.pkg.dev --quiet
    - name: Build Admin Bot
      run: |
        docker build -t us-east4-docker.pkg.dev/tjctf-2024/challenges/admin-bot:latest admin-bot
        docker push us-east4-docker.pkg.dev/tjctf-2024/challenges/admin-bot:latest
    - name: Deploy Admin Bot
      shell: bash
      env:
        TFC_TOKEN: ${{ secrets.TFC_TOKEN }}
        TFC_WORKSPACE: ${{ vars.TFC_WORKSPACE }}
      run: |
        pip3 install docker && \
        python tfc.py us-east4-docker.pkg.dev/tjctf-2024/challenges/admin-bot:latest var-myvar 'Update admin-bot image'
Ensure that the Artifact Registry address is correct when you add the job to your workflow. You will also need to create a service account in your GCP project and give it permissions to push images to Artifact Registry. Export a JSON credentials file and use it for the GCLOUD_GHA_TOKEN secret. The Python file tfc.py can be copied from a previous year's challenge repository.
Create a Team API Token in Terraform and use it as the TFC_TOKEN secret. In the Default Project, create a new workspace using a version control workflow connected to your infrastructure repository (which should be a completely different repository than your challenges repository). Use the workspace ID (i.e. ws-xxxxxxxxxxxxx) for the TFC_WORKSPACE repository variable. Then, use the Terraform API to find the variable ID for the admin bot image by going to app.terraform.io/api/v2/workspaces/ws-xxxxxxxxxxxxx/vars and looking for the admin_bot_image variable. Change var-myvar to that variable ID in the GitHub Actions workflow.
After you push the new GitHub Actions workflow to your challenges repository, the admin bot should automatically deploy. It will take a couple hours to deploy it for the first time, but it should deploy much faster after the first deploy. If you need to manually start a new Terraform run (to redeploy the admin bot or for other reasons), go to the infrastructure workspace in app.terraform.io and hit "New run." Note that you will also need to confirm the run after planning.
I've actually never set up the instancer locally since we've never needed an instancer for club CTF challenges; however, the Klodd documentation neatly outlines how to deploy Klodd. Like the prerequisites section says, you must add OAuth to rCTF, and the specified Cloudflare worker suffices. The ingress requirement is already met. You can continue with installation like normal, using instancer.challenge.tjcsec.club as your domain for the home page.
The infrastructure used for TJCTF is virtually the same as that used for club CTF, but it is all hosted on GCP and configured with Terraform. This means that we can reuse the Terraform infrastructure configurations from previous years, located on repositories named TJCSec/tjctf-20XX-infra, accordingly. As of writing, the latest repository is TJCSec/tjctf-2024-infra.
You should be able to completely reuse the infrastructure from previous years. Before starting a new Terraform run to deploy everything, ensure all correct Terraform variables are set and create a new GKE cluster called challenges. If you don't create the cluster beforehand, you will need to apply the run twice since Terraform tries to apply Kubernetes manifests at the same time it creates the cluster.
You will still need to install rCTF on the "rctf" compute box manually, but the steps will be the same as those specified in Setting up rCTF. Remember to install a proxy on the rctf box to enable TLS termination.
To ensure that TCP ports for challenge servers are accessible when deploying on GCP, you will also need to run the following command:
gcloud compute instance-groups managed set-target-pools NAME --target-pools=challenges
The target pool and GCE instance group should already exist, so you should be able to autocomplete the instance group name (NAME in the command). This will allow the challenge servers to accept external TCP traffic.
This can be scoped down more for security.