# 22s GoN Open Qual CTF, in Retrospect

Written by [Xion] @ KAIST GoN

-----

This is a retrospective article reviewing the process of brainstorming, planning, organizing and holding the first public CTF under the name of KAIST GoN, **2022 Spring GoN Open Qual CTF**. As a member and former leader of KAIST GoN, I took a major role in the overall process of organizing the CTF. This article aims to answer questions about the CTF itself, such as the reasoning behind some challenge designs, difficulty balancing, the duration of the CTF, the number of challenges, and more. Some of the criticism and suggestions from the survey will also be addressed.

Note that this is my own opinion based on my viewpoint, and does not represent the opinion of my affiliations or of other individuals, including other members of KAIST GoN. Note also that there are some spoilers for challenge solutions. If you are looking for the authors' writeups, they are [here](https://hackmd.io/@Xion/goq_22s_authors_writeup). If you would rather solve the challenges yourself, solve them on [Dreamhack](https://dreamhack.io/wargame/challenges/?type=ctf) and come back later!

## How GoN Qual Became "Open"

As previously explained in the CTF description, GoN Qual was originally a private CTF competition. The word "Qual" is used because the competition serves as a qualification test for club members with 0.5 to 2.5 years of club experience. Challenge authors are usually selected from relatively experienced members with spare time. For the past 4 years, GoN Qual has been held every spring and fall semester, with the number of challenges ranging from about 10 to a maximum of 29.

Now, as all chal authors would know, developing challenges is not a simple and quick process. Although all the challenges from GoN Qual get ported to the GoN internal wargame server, it's a sad truth that the challenges will rarely ever be used again.
To summarize the situation: 10~30 chals are developed by busy univ/grad students during school semesters, somewhere around 25 people (if many) take a look at them for about a week, some won't even be solved, and that's effectively the end of the chals' lifetime. This is honestly an exhausting situation for the authors, and gradually the authors became less and less motivated and incentivized to create novel challenges, let alone create any at all. Sometimes authors would submit plans to make chals and then ghost away, but this isn't a funded CTF nor a public CTF, and with all of this being voluntary work while everyone is busy with their lives, I couldn't pressure (or, better phrased, "motivate") others to make chals.

So at the start of 2021, when I was (still) the leader of KAIST GoN, I discussed with other staff members (mainly [c0m0r1]) the possibility of holding GoN Qual as a public CTF. The two of us reached a consensus that this would likely have more pros than cons, and considered platforms to run the CTF on. For background, these are some of the major pros and cons we considered:

Pros:
- Authors are more motivated to write novel, well-designed chals
- Opening the chals to the public will benefit the CTF community

Cons:
- CTF management becomes more complex and demanding in terms of computing and human resources
- The target audience is extremely diverse; we have both experienced CTF players and newbies who just learned how to code

These are the platforms we considered:
- GoN custom CTFd + chal deployment framework on the GoN server
- GoN custom CTFd + plain old Dockers on a cloud hosting service
- A brand new platform on a cloud hosting service
- dreamhack.io

Originally I was planning to build a CTF challenge hosting platform where each user gets a completely isolated (VM) chal environment, and where authors can simply submit their Dockerized chal environment for the server to automatically build and deploy.
This is because isolation between players enables many challenges that are otherwise complex to implement properly and scalably, mostly challenges where exploits affect the server[^1]. Since we were already running low on human resources, the above plan was put into a Work Indefinitely Postponed state. We naturally chose the least demanding route, but one that met almost all the requirements we desired: dreamhack.io. We contacted [Theori] about 2 weeks before the CTF to check whether hosting our CTF on the Dreamhack platform was possible, and to our delight it could be done for free! Probably because it's a win-win for both Dreamhack and the CTF host, but this still is a very good option for CTF hosts to consider :+1:

So this is the background of how GoN Qual became "Open", as well as why we chose Dreamhack.

## Writing the Challenges

To write challenges, we must recruit authors. "Recruit" is a fancy word; in reality it's just the leader[^2] blasting `@channel` and DM-ing experienced members, begging for challenges. We love to procrastinate until the very last week, so I started recruiting chal authors early, from November 15th, 2021 (according to Slack), to encourage early chal development. But of course we procrastinated until the start of February 2022 :smirk:

The general criterion of challenge design is that **there must be something to learn by solving the challenge**. This might sound straightforward, but depending on **who the target audience is**, the criterion can be satisfied in completely different ways. Take, for example, the `NullNull` challenge. This is a classic Frame Pointer Overwrite challenge where `scanf("%Ns", buf)` leads to `buf[N] = '\0'`. By overwriting the LSByte of the saved `rbp` with 0x00 and returning, we shift the stack frame. All later stack accesses based on `rbp` now access memory at shifted positions, which can be abused to leak info and gain AAR/W to pop a shell.
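As a minimal sketch of the underlying off-by-one (this is an illustration, not the actual challenge binary): the scanf family's `"%Ns"` reads at most N characters but still appends a NUL terminator, so an N-character input writes N+1 bytes. Here a 16-byte region stands in for a stack frame, with the saved `rbp` sitting right after an 8-byte local buffer:

```python
import ctypes

# Bytes 0-7 stand in for the local buffer, bytes 8-15 for the saved rbp
# next to it (the layout here is illustrative).
frame = ctypes.create_string_buffer(b"\xaa" * 16, 16)

# "%8s" reads at most 8 characters but still appends a NUL terminator,
# so this 8-character input writes 9 bytes into the 8-byte buffer.
libc = ctypes.CDLL(None)  # the process's C library (Linux/macOS)
libc.sscanf(b"AAAAAAAA", b"%8s", frame)

print(frame.raw[:10].hex())  # 414141414141414100aa -> byte 8 is now 0x00
```

On the real target, that stray NUL clears the low byte of the saved frame pointer, so after the caller's `leave; ret`, all `rbp`-relative accesses operate on a frame shifted down by up to 0xff bytes.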
What is the target audience for this challenge, assuming that I, the author, have considered the above criterion? I considered the target audience of this challenge to be a _newbie CTF player with very little or no experience in pwnables_. For such an audience, solving this challenge helps them understand how the stack is shaped, how stack pointers work, how to read assembly and see how parameters are passed and local variables are used, how one can brute-force several times to get the desired address layout, and the overall process of analyzing, debugging and writing exploits.

For experienced CTF players, however, this challenge is a piece of cake that has appeared on CTFs and wargames dozens of times, and I get it. I understand the nuisance; I've also experienced the nuisance of solving the same old FSBs on CTFs for the fourth time in a year, with the same old "gimmicks". To be honest, this can't be avoided, at least while the name of this CTF is "GoN Open Qual CTF" and not simply "GoN CTF". As previously mentioned, GoN Qual is meant to be solvable by club members with 0.5 ~ 2.5 years of experience. These are our main target audience, so we try hard to design challenges that are novel, educational and solvable, which are often conflicting goals.

Thus, many of the challenges are designed to be solvable even by less experienced players. But we don't just make them easy; we try to make them educational. To analyze this objectively, we make authors write writeups that must include the _prerequisites_ for solving the challenge (what the players should already know) and the _objective_ of the chal (what the author is trying to make players learn). In my opinion, [c0m0r1] did the best job writing such challenges. Originally a pwn/rev player, he made challenges for all of pwn/rev/web/crypto to level out the number of challenges in each category. Plus, his challenges were quite well-designed and educational, so big kudos to him!
One might now ask, _What about the `Trino` series and `Showdown`? Are these "solvable" for your target audience?_ Historically, I have always put some spice into Qual challenges. This time, the `Trino` series and `Showdown` were designed to give even the most experienced CTF players some spice, as this is also a public CTF. In particular, the three unsolved pwnable challenges, based on cutting-edge CVEs and a 0-day, targeted players who want to skip the easy chals and try the most difficult ones. Also, the 3-stage design of putting two pwnable chals behind a "firewall" of web challenges for `Trino` was intentional, as there was an abundance of pwnable chals (plus, who exposes a Redis instance to the outside world?). I made sure that anyone craving to solve the `Trino` pwnables would be able to solve the first two stages, since writeups for similar challenges exist. However, I'm still uncertain whether such multi-stage challenges are an acceptable design, as they may give an unfair (dis)advantage depending on whether one can solve the earlier stages[^3].

So with these in mind, we wrote the challenges. Originally the deadline for chal development was the end of February, but as we all procrastinated, development went on until the day before the CTF started. While developing chals, we cross-checked each other's chals starting about a week before the CTF. In retrospect, this should have been done earlier and more thoroughly, as some chals were deployed with non-critical flaws (`CS448`, `Oxidized`, `NSS`, and maybe more?) and one with the wrong public files (`Legendary`), which we quickly fixed. Luckily, a critical flaw was caught before the CTF (the `baby-turbofan` `d8` binary allowing native OS commands), so the cross-check paid off :slightly_smiling_face:

Deployment on the Dreamhack CTF platform generally had no issues, since deployments are done with either Docker or docker-compose.
However, the `Trino` chal choked on the system, unlike on my environment where it ran with no problems, and it took two days to debug (what turned out to be two separate issues, not one). Thankfully, Theori was quick and responsive in fixing the issue, and everything was finally up and running :relieved: Now the CTF is ready to start!

## Running the CTF

One of the reasons we cross-check solutions is to keep admin support running 24/7 for the whole week. For the first several hours after the CTF started, I constantly monitored VM usage and flag submissions. There were several issues at the start:

1. Chal VMs would fail to spawn, caused by excessive memory usage from the huge number of VMs. Theori admins were also monitoring the usage and relieved the issue by spawning more challenge hosts.
2. Flag submissions would fail. This was not a flag misconfiguration but an API failure on the flag submission endpoint, probably due to an update introduced on Dreamhack several hours before the CTF started. As this was a known issue, GoN admins published a notification in both Korean and English based on Theori's previous notification. Theori admins temporarily patched the API endpoint to relax some constraints that might have been causing the error.
3. We noticed several failed flag submissions for the `Legendary` challenge and checked its public files and flag settings. The public files had been mis-uploaded as an older version, so the challenge was taken down and revived after the public files were updated.
4. An issue with the `Oxidized` server dying on failed exploitation attempts was reported. We identified the cause of the server death to be `socat` exiting after the forked child process died under some unknown conditions, which was unexpected. Since players can request VMs again, we left the deployment as is. The failed exploitation attempts, on the other hand, were traced back to a libc version disparity caused by the player installing gdb, which updated libc[^4].
These are the problems that occurred at the start of the CTF, AFAIK. We made sure to be responsive, and I think we did a fine job despite the CTF being a week-long one.

During GoN Quals we usually have a solve notification channel in the GoN Slack. For previous GoN Quals we used a modified CTFd plugin to post data to a Slack webhook, but as we used Dreamhack this time, we needed a new bot. So I quickly wrote one up using the Dreamhack APIs a day after the CTF started, which looks like this:

![DreamWatch Bot](https://i.imgur.com/m0mzAXz.jpg)

Excluding the first day, admin support requests on Discord were mostly quiet, so we could relax a bit and watch the solve notification channel :face_with_monocle:

During the CTF I noticed that the decay speed of the dynamic scoring was too slow compared to previous GoN Quals. Previously we used a half-life of 5 solves (i.e. 5 solves halve the current score) to quickly adjust scores to the actual difficulty perceived by the players. The equation used by Dreamhack requires about 35 solves to halve the score from 1000 to 500, which I thought was definitely too much. Such a slow decay fails to motivate players to solve difficult challenges, which players have also pointed out in the survey - I definitely agree.

Nearing the end of the CTF, I prepared some survey questions for feedback on the CTF and the Dreamhack platform. We also prepared the authors' writeups to release immediately after the CTF ended, since that's what I'd like to see at every CTF :pray: And now... the CTF has ended!

## After the CTF

As planned, we released the authors' writeups immediately after the CTF ended. It was great to see players exchanging their writeups! As our goal in holding an Open Qual was to share the challenges, most challenges were ported to the Dreamhack wargame by Theori admins. Theori admins were very responsive and helpful throughout the whole process, so give them a round of applause :clap: We have reviewed the survey submissions, and thank you very much for the feedback!
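To give a concrete sense of the scoring gap mentioned earlier, here is a tiny sketch under an assumed exponential half-life model (the model itself is my assumption for illustration; Dreamhack's actual equation may differ in shape):

```python
def dynamic_score(solves, max_score=1000, half_life=5):
    """Exponential-decay scoring: the score halves every `half_life` solves.

    This is an assumed model for illustration, not Dreamhack's real formula.
    """
    return max_score * 0.5 ** (solves / half_life)

# A 5-solve half-life (previous GoN Quals) reacts quickly to solves...
print(round(dynamic_score(5, half_life=5)))    # 500
# ...while a ~35-solve half-life barely moves after the same 5 solves.
print(round(dynamic_score(5, half_life=35)))   # 906
```

Under the slower curve, a challenge with a handful of solves still sits near 1000 points, so players see little score-based signal about which challenges are actually hard.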
Your feedback will help us a lot in making a better GoN Open Qual. The prizes are being sorted out, so please bear with us for a few more days.

## The Future of GoN Quals

2022 Spring GoN Open Qual CTF was an experimental public CTF, so we aren't yet sure whether to continue running it as a public one. Personally I would like to, possibly in a bigger format, but we already overworked compared to previous Quals. I'm also no longer the leader of KAIST GoN, just a random club member getting older in the club, so I'll leave the decision up to the current core members.

-----

P.S.: I've recently joined [zer0pts] after receiving an offer. I hadn't applied to other teams since I like GoN so much, but zer0pts felt like an ideal GoN to me, with a sufficiently small team size and focused members. This doesn't mean that I've left GoN; I'll continue making challenges for future Quals, sometimes participate in CTFs as GoN, and much more.

P.P.S.: Shoutouts to all the challenge authors of KAIST GoN for writing great challenges!

[Xion]: https://twitter.com/0x10n
[c0m0r1]: https://twitter.com/c0m0r1
[Theori]: https://theori.io
[zer0pts]: https://twitter.com/zer0pts

[^1]: Server-side prototype pollution, RCEs on non-forking servers, etc.
[^2]: Usually the club leader, but me this time since I suggested the Open Qual and organized everything
[^3]: I believe in the invisible hands of dynamic scoring. For example, if one chal is rated higher than it should be, players will be incentivized to solve the higher-rated chal instead of other chals. This leads chal scores to automatically balance out based on difficulty and other incentives to solve (multi-staged chals, duplicated chals, unintended solutions, etc.). However, the decay speed was too slow, which might have deterred this effect.
[^4]: Notes to chal authors: Please don't use tags in your Dockerfile; instead, pin a specific digest (unless you're sure it won't cause any problems).
Tags can be updated at any time, causing unexpected disparity between the deployment and the players' environments. Also, be careful with package managers, as they can change package versions at any time, so pin versions and be explicit about dependencies whenever possible.
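In Dockerfile form, the pinning advice above looks like the following sketch (the digest is a placeholder and the package version is only illustrative, not values from the actual challenges):

```dockerfile
# Risky: a tag is a moving target -- the image behind "ubuntu:22.04" can
# change between the build you tested and the one players rebuild later.
# FROM ubuntu:22.04

# Safer: pin the exact image by digest (placeholder digest shown here;
# `docker images --digests` prints the real one for your tested image).
FROM ubuntu@sha256:0000000000000000000000000000000000000000000000000000000000000000

# The same goes for packages: pin explicit versions where the distro
# allows it (version string below is illustrative).
RUN apt-get update && \
    apt-get install -y --no-install-recommends socat=1.7.4.1-3ubuntu4 && \
    rm -rf /var/lib/apt/lists/*
```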