
Thanks for opening this document! Understanding how challenges are written and deployed is the first step to writing one for CSC (or for TJCTF), so it's great that you're getting a start on that now!

How Challenges Get Deployed

rCDS is a challenge deployment tool created by redpwn. We use it to make challenge deployment as simple as a GitHub Actions job. It is integrated with rCTF, also created by redpwn, which is the actual website that players access for CTF and challenge details; in the case of our club CTF, that is ctf.tjcsec.club.

For challenge authors, it is mostly unnecessary to know how rCDS works internally; however, I describe it here so that it doesn't seem like a "black box."

First, the GitHub Actions workflow is triggered. In our club CTF repository, this happens on every commit to the main branch, including merges from other branches. You can also trigger it manually from the "Actions" tab on GitHub. This workflow runs rCDS.
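For reference, the workflow configuration might look roughly like the sketch below. The job layout, action versions, and the exact way rCDS is invoked are assumptions for illustration, not a copy of our actual workflow; check the repository's .github/workflows/ directory for the real one.

# Hypothetical .github/workflows/deploy.yaml (names and versions are illustrative)
name: deploy

on:
  push:
    branches: [main]    # every commit to main, including merges
  workflow_dispatch:    # allows manual triggering from the Actions tab

jobs:
  deploy:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v3
      - uses: actions/setup-python@v4
        with:
          python-version: "3.x"
      # Install rCDS and deploy every challenge in the repository
      # (the real workflow also needs credentials for the registry, cluster, and rCTF)
      - run: pip install rcds
      - run: rcds deploy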

rCDS checks every challenge to ensure that it is synced with the various "backends." That is, if any change has been made to a challenge or its associated files, it is updated appropriately. rCDS syncs all challenges in four stages:

  1. All challenge containers are built into Docker images and pushed to a remote container registry. These images specify the environment in which all remote servers run. While Docker containers work very differently from a virtual machine under the hood, you can think of a Docker image as a saved virtual machine state that is specified through code instead of by directly interacting with it. Later on, you (or anyone else) can fairly easily run that image starting from the specified state. After each image is built, it is pushed to a remote registry (e.g. Artifact Registry), which stores the built Docker images so that they can be used in later rCDS stages. Note that this step does not actually run the server(s) associated with a challenge; it only specifies how each server works.
  2. Provided files are uploaded and made available to players. This means that these files are uploaded to rCTF, which then makes them publicly accessible on the web.
  3. Docker images are deployed to a container runtime. This sounds complicated, but it just means that the Docker images are actually run. We deploy challenge servers in a Kubernetes cluster, which lets us easily manage and edit the configuration of the servers.
  4. Challenge details are pushed to rCTF. Challenge descriptions are rendered (i.e. {{ tags }} are replaced with actual text), and relevant metadata (e.g. author, flag, description) is provided to rCTF to make the challenge available on the main CTF site.

Docker

Docker is a technology that lets us easily replicate the environment that we want a piece of code to run in.

Note how the description of the Docker stage of rCDS is, by far, the longest. This is not a coincidence! Docker is pretty tricky to get a full grasp of, but, luckily, you don't really need a full grasp of it to write a challenge.

To get some lingo out of the way, a Docker image is a template for what environment your app should run in. A container is an actual running instance of that image. Think of an image as a blueprint and a container as a building.

The most important file that you need to work with Docker is the Dockerfile. It should sit in the outermost directory containing any files that you may want to include in the Docker image.

A minimal Dockerfile that runs a Flask server is specified below with comments to explain what each line does. The necessary files to run that Flask server (e.g. app.py, templates/, static/) should be in the same directory as the Dockerfile. The comments do not go into too much detail, so if you have questions about what a specific line or command does, check the Dockerfile reference or ask me!

# Use a pre-existing `python` image as a "base" for your template
FROM python:3.8.5-slim-buster

# Run the shell command `pip install ...` and save the results 
RUN pip install flask gunicorn

# Copy the files from the current directory (`.`) to
# the container's `/app` directory
COPY . /app

# All later commands will be run from the `/app` directory
WORKDIR /app

# Run the Flask app using gunicorn on all addresses, port 5000
# (`-t 4` sets gunicorn's worker timeout to 4 seconds)
# If you have questions about this line in particular, let me know!
CMD ["gunicorn", "-b", "0.0.0.0:5000", "app:app", "-t", "4"]

To manually build the Docker image, run the following in your terminal:

docker build -t <tag> <directory>

<tag> can be replaced with any "friendly" name that you want to call the image. <directory> is the directory of the Dockerfile (relative to your current working directory).

For example, to build the Flask server above, located in the current directory, and tag it with my-app, run:

docker build -t my-app .

To run that image, use:

docker run -p 5000:5000 my-app

The -p 5000:5000 flag publishes the container's port 5000 on port 5000 of your host (i.e. your computer, outside of the Docker container). This means that you can access the server at http://localhost:5000; without the -p, the server would not be accessible to you at all.
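If port 5000 is already in use on your machine, you can map the container's port to a different host port. For example, this (hypothetical) command serves the same app at http://localhost:8080 instead:

docker run -p 8080:5000 my-app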

Using pwn.red/jail

If you have seen nc challenge.tjcsec.club XXXXX before, the container behind it has probably been using pwn.red/jail under the hood. This is a Docker image that redpwn (yet again) has written for running sandboxed nc servers. Because of this, they can explain how to use it much better than I can. Their Competitor FAQ is extremely useful for understanding what exactly pwn.red/jail does and how to run it. Additionally, their Challenge Author Guide has some very helpful tips for using pwn.red/jail. They also provide some minimal examples of its use here.

A minimal Dockerfile using pwn.red/jail is shown below:

FROM pwn.red/jail:0.3.0
COPY --from=python:3.8.5-slim-buster / /srv
COPY hello.py /srv/app/run

hello.py is shown below. For it to run properly, it should be marked executable (with chmod +x hello.py) and have a shebang, the funny comment on the first line that specifies the interpreter.

#!/usr/local/bin/python
print('hello')

You can build the image like normal and run it with the command below:

docker run -p 5000:5000 --privileged <tag>
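Since the command above publishes the jail's port as 5000 on your machine, you can then test it locally with nc; it should print hello and close the connection:

nc localhost 5000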

Challenge Specification

Now, how does rCDS know exactly what it needs to do for each challenge? Well, despite all of its strengths, it is not all-knowing, so when you write a challenge, you need to include a challenge.yaml in the root (or outermost) directory of that challenge.

This file specifies the details of your challenge in YAML format. YAML is fairly intuitive, so you probably don't need to read that article in its entirety. Instead, read the minimal challenge.yaml file below:

name: my-challenge
author: diana
description: |
    quack quack, goose!
    
    haha your mother; thinking!
value: 10
flag:
    file: server/flag.txt
provide:
    - file1.txt
    - file2.png
    - kind: zip
      spec:
        as: server.zip
        exclude:
            - server/flag.txt
        files:
            - server
            - instructions.txt
        additional:
            - path: server/flag.txt
              str: flag{fake_flag}

Most of these fields should seem pretty intuitive!

value specifies the point value. For TJCTF, do not specify a point value (i.e. omit value). Without an explicitly stated point value, a challenge defaults to being dynamically scored, wherein the number of points it is worth is scaled between 100 and 500 based on the number of solves it has.

Additionally, in this specification, file is specified under flag. You can also specify the flag as a plain string by writing flag: flag{my_flag}. However, you often use the flag somewhere in your code. Instead of hardcoding it there, you can store the flag in a separate file and read from that file when needed. flag.file lets you point rCDS at that same file, so the flag is specified in one place instead of several.
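For example, if your flag lives in server/flag.txt (as in the specification above), the server code might read it at runtime instead of hardcoding it; a minimal sketch, assuming the code runs from the server/ directory:

# Read the flag from the same file that challenge.yaml points at,
# so the real flag only ever exists in one place in the repository
with open('flag.txt') as f:
    flag = f.read().strip()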

provide is a list of files. Files can be specified either by their file path alone (relative to the challenge's root directory) or by a specification like the one above. That specification zips the files (or directories) listed under files into a zip file and provides it to players. Additionally, it has an exclude key which, predictably, excludes files from the zip. The additional key specifies extra "fake" files that do not actually exist in the repository but that you want to include inside the zip. You may want a specification like this when you provide entire servers to players.

Let's try a less minimal specification. For example, the challenge.yaml of TJCTF 2022's Analects is specified below.

name: analects
author: kfb
description: |-
    confucius was a cool guy I think he said some things

    {{ link }}
    
flag:
    file: mysql/init/flag.txt

provide:
    - kind: zip
      spec:
        as: server.zip
        exclude:
            - mysql/init/flag.txt
        files:
            - docker-compose.yaml
            - app
            - mysql
        additional:
            - path: mysql/init/flag.txt
              str: this is not the real flag

containers:
    app:
        build: app
        replicas: 1
        ports:
          - 80
    mysql:
        build: mysql
        replicas: 1
        ports:
          - 3306
        resources:
            limits:
                memory: 500Mi
            requests:
                memory: 100Mi
expose:
  app:
    - target: 80
      http: analects

This challenge, unlike the previous one, has a remote server. The containers section is where you specify any challenge servers that you want to be deployed. Each container is its own key under containers, and all of that container's properties are specified under that key.

  • build specifies the path of the directory that is built and deployed; that directory must contain a Dockerfile.
  • replicas specifies how many copies of the image are run and deployed; for club CTF, one replica is almost always sufficient, but for TJCTF you may want to increase the number.
  • ports specifies the port(s) that you want to be accessible to the other containers in this deployment. In this case, mysql:3306 is reachable from the app container (and app:80 is reachable from the mysql container, though it is not directly used).
  • resources limits the resources that a container can use. Use your best judgment when setting these limits (or ask me)!

expose makes a port of a specific container publicly accessible (i.e. accessible to the player). These port(s) must also be listed in that container's ports specification. The target is the port that you want to expose. Additionally, you must specify either http (and provide an arbitrary subdomain for the deployment) or tcp (and provide an arbitrary port number for the deployment). Websites should generally use http, whereas nc servers should use tcp. Note that all http and tcp values must be unique across all challenges.
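For comparison, an nc-style challenge might expose a TCP port instead; the container name and port number below are made up for illustration:

expose:
  main:
    - target: 5000
      tcp: 31625

Players would then connect to that port on the challenge host (e.g. nc challenge.tjcsec.club 31625).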

rCDS also provides shortcuts for referencing exposed ports in the description. If you expose only one HTTP port, {{ link }} will render a nicely formatted link. Likewise, if only one TCP port is exposed, {{ nc }} renders a friendly nc site.com XXXXX.
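For instance, the description of that hypothetical nc challenge could use the shortcut like so (the wording is just an example):

description: |-
    connect to the server with:

    {{ nc }}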

I did not exhaustively go over all the available keys. For example, you can deploy a pre-existing image using image or mark a challenge invisible with visible: false. You can find the full schema of challenge.yaml in the rCDS documentation.
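As a rough illustration only (double-check the placement of these keys against the rCDS documentation before relying on it), those two keys might be used like this, where the image name is made up:

visible: false

containers:
  app:
    image: registry.example.com/prebuilt-challenge:latest
    replicas: 1
    ports:
      - 80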

A Side Note About pwn.red/jail

pwn.red/jail also requires certain security options to be specified in challenge.yaml. These can mostly be copied and pasted for all servers that use pwn.red/jail.

containers:
  main:
    build: bin
    replicas: 2
    ports:
      - 5000
    k8s:
      container:
        securityContext:
          readOnlyRootFilesystem: true
          capabilities:
            drop:
              - all
            add:
              - chown
              - setuid
              - setgid
              - sys_admin
              - mknod
      metadata:
        annotations:
          container.apparmor.security.beta.kubernetes.io/main: unconfined

Writing Good Challenges

"Good" challenges are difficult to write! Here, I provide some general guidelines as to how to write challenges.

Club CTF

Writing interesting challenges is not our main priority for the club CTF. Club CTF is not the place to show off how good you are. Instead, we want challenges that inform our audience. Club challenges can be designed to challenge others, but by writing challenges that no one can solve, we drive away the people we are trying to teach. Try not to make people think outside the box; what we are trying to do is help others understand what is inside the box, not imagine what lies outside of it.

TJCTF

For TJCTF, however, try new topics! While one of our goals is to inform, we also want to push people to want to further pursue computer security by providing interesting problems. There is not one set way to write challenges, but I will share some tips that have helped me.

I personally prefer thinking of a vulnerability and basing a challenge off of said vulnerability. However, others prefer writing something that works and altering it a little to create a vulnerability. Each method has its own pros and cons, so I would suggest trying to figure out what works best for you.

If you are stuck trying to think of challenge ideas, I have found that playing in CTFs helps me come up with interesting ones. It is often in the ideas that don't quite work for someone else's challenge that you find a stroke of genius for your own.

Enjoyable challenges tend to require more thought than implementation. The general rule is that if you would not enjoy the challenge, other people likely would not enjoy it either.

For more verbose guidelines, feel free to look at https://bit.ly/ctf-design. Good luck!