# AI Open Source Capstone Class

## Unit 7: Debugging

This unit covers AI-specific techniques to support debugging. At the moment, by default, AI is terrible at debugging: it doesn't follow any of the major principles or techniques, it often confidently jumps to the wrong conclusion without properly gathering clues, and it generally makes things much worse. In this unit, we'll highlight some of the key debugging tactics and strategies, and demonstrate how to guide the AI to use them.

## 1. Triaging GitHub issues

First, let's find some bugs! One great way to get involved with an open source repository is to start chiming in on issues. (You can also help out on Discord.) Repositories are flooded with more issues than maintainers can handle. The first thing a maintainer needs to do when fielding an issue is to understand it and attempt to reproduce it. They might spend over an hour doing that, only to realize that the problem was user error. To help the maintainer, reproduce the issue yourself and add what you found. A minimal reproducer, videos, or screenshots are all helpful. If there isn't enough information to reproduce the issue, ask clarifying questions.

**Try it**

1. Look through recent issues for the Superset and/or Scalar repos.
   - Note: there are far more issues than the maintainers can handle.
   - Look at the project's contributors to learn who the people are: https://github.com/apache/superset/graphs/contributors
   - Note how few people are helping out with issues. It should be easy to get to know the maintainers over a couple of months of pitching in.
2. Dive into a few issues.
   - Open the repo in Cursor.
   - Try a prompt like:

     ```markdown
     I want to explore this issue: https://github.com/scalar/scalar/issues/7363

     Can you investigate the issue and the repo deeply, and be prepared to answer questions?
     ```

3. Ask many, many questions to understand the issue. Try to reproduce it, and explore potential fixes.
4. Report your findings back to the repository. This will save the maintainers a lot of time, and it will also help you build a relationship.

## 2. Using audit logs

Left to its own devices, AI often jumps to conclusions: it claims to understand both the issue and the solution, and its "fix" frequently makes things worse. To counter that:

- Get it to create extensive timestamped logs along the calling sequence, tagged so they are easy to filter (a sketch of what this can look like appears after section 3 below).
- Maybe even have it add some code to fetch the relevant data from the database first, too.
- Have it inspect and analyze the audit log before drawing any conclusions.

**Try it out**

- Describe a particular issue, then add the prompt below.

  ```
  Diagnose this issue, but do not fix it or jump to conclusions. Identify all relevant calling sequences and data flow, and add copious logging, tagged for easy filtering. If there are multiple relevant calling sequences (e.g., for saving data or rendering data), tag them separately.
  ```

## 3. Exploring database and network calls

One of the reasons AI has trouble debugging is that it doesn't have access to the actual database data or network responses. While you generally don't want to give AI direct access to a database (a development database is okay), you can have it query data through your ORM.

**Try it out**

- For database debugging, you can try this prompt (a sketch of the kind of test script you might get back appears below):

  ```
  Create a test script that will query the relevant model(s), and allow us to inspect the results.
  ```

- For network requests, you can try this prompt:

  ```
  For each of the relevant network requests, add debugging logic to create a separate file for the responses, along with a summary file that has the call sequence.
  ```
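To make the section 2 prompt concrete, here is a minimal sketch of the kind of tagged, timestamped logging you're asking the AI to add along a calling sequence. The function names and the `AUDIT:SAVE` tag are hypothetical placeholders, not taken from any particular repo.

```python
# Minimal sketch of tagged, timestamped audit logging (section 2).
# The save_dashboard/validate_layout/write_to_db names and the AUDIT:SAVE tag
# are hypothetical placeholders for whatever calling sequence you're tracing.
import logging

logging.basicConfig(
    level=logging.DEBUG,
    format="%(asctime)s %(levelname)s %(message)s",  # timestamp on every line
)
log = logging.getLogger("audit")


def save_dashboard(dashboard_id: int, layout: dict) -> None:
    # Tag every line so the sequence can be filtered, e.g. `grep AUDIT:SAVE`.
    log.debug("AUDIT:SAVE enter save_dashboard id=%s keys=%s", dashboard_id, sorted(layout))
    validate_layout(layout)
    write_to_db(dashboard_id, layout)
    log.debug("AUDIT:SAVE exit save_dashboard id=%s", dashboard_id)


def validate_layout(layout: dict) -> None:
    log.debug("AUDIT:SAVE validate_layout rows=%s", len(layout.get("rows", [])))


def write_to_db(dashboard_id: int, layout: dict) -> None:
    log.debug("AUDIT:SAVE write_to_db id=%s payload_bytes=%s", dashboard_id, len(str(layout)))


if __name__ == "__main__":
    save_dashboard(42, {"rows": [1, 2, 3]})
```

A second calling sequence, such as rendering, would get its own tag (e.g. `AUDIT:RENDER`) so the two flows can be inspected independently before anyone proposes a fix.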
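For the database prompt in section 3, the test script the AI writes might look roughly like the sketch below. It assumes a SQLAlchemy-style ORM, a hypothetical `Dashboard` model, and a `DATABASE_URL` environment variable pointing at a development database; swap in whatever models and ORM the project actually uses.

```python
# Sketch of a throwaway inspection script (section 3): query the relevant
# model(s) through the ORM and dump the rows for inspection.
# `myapp.models.Dashboard` and DATABASE_URL are hypothetical placeholders.
import json
import os

from sqlalchemy import create_engine, select
from sqlalchemy.orm import Session

from myapp.models import Dashboard  # hypothetical model import

engine = create_engine(os.environ["DATABASE_URL"])  # dev database only

with Session(engine) as session:
    rows = (
        session.execute(
            select(Dashboard).where(Dashboard.slug == "broken-dashboard").limit(10)
        )
        .scalars()
        .all()
    )
    for row in rows:
        # Print only the fields relevant to the bug, in a form that is easy
        # to paste back into the chat with the AI.
        print(json.dumps(
            {"id": row.id, "slug": row.slug, "json_metadata": row.json_metadata},
            default=str,
        ))
```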
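Similarly, for the network-request prompt, the "debugging logic" might be a small wrapper like the sketch below: each response body is written to its own file, and a summary file records the call sequence. The wrapper name, the directory, and the use of `requests` are illustrative; in a TypeScript frontend the same idea applies to a wrapped `fetch`.

```python
# Sketch of the network-request instrumentation described in section 3:
# one file per response body, plus a running summary of the call sequence.
# The traced_get name and the debug_responses/ directory are illustrative.
import time
from pathlib import Path

import requests

DEBUG_DIR = Path("debug_responses")
DEBUG_DIR.mkdir(exist_ok=True)


def traced_get(url: str, **kwargs) -> requests.Response:
    response = requests.get(url, **kwargs)
    stamp = f"{time.time():.3f}"
    # Write the raw body to its own file so it can be inspected later.
    (DEBUG_DIR / f"response_{stamp}.txt").write_text(response.text)
    # Append one line per call to a summary of the overall sequence.
    with open(DEBUG_DIR / "summary.log", "a") as summary:
        summary.write(f"{stamp} GET {url} -> {response.status_code}\n")
    return response


if __name__ == "__main__":
    traced_get("https://httpbin.org/json")
```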
## 4. Visual bugs

At the moment, AI is pretty bad at debugging HTML/CSS issues that make your page render unexpectedly. One tool that can improve its ability to debug these issues is Chrome DevTools. With Chrome DevTools (or Playwright), you can give the AI direct access to the DOM, which helps it "see" which component(s) are causing the layout issue. A Playwright-based sketch of the same idea appears below.

https://developer.chrome.com/docs/devtools/ai-assistance
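As a rough illustration of giving the AI access to the DOM from a script, here is a minimal Playwright sketch that dumps the markup and computed layout of a suspect region so the output can be fed back to the AI. The URL and the `.sidebar` selector are placeholders.

```python
# Sketch: capture the DOM and computed layout of a suspect region with
# Playwright so the AI can "see" it (section 4). The URL and the .sidebar
# selector are placeholders for your app and the region that looks wrong.
import json

from playwright.sync_api import sync_playwright

with sync_playwright() as p:
    browser = p.chromium.launch()
    page = browser.new_page()
    page.goto("http://localhost:3000/broken-page")  # placeholder URL

    # Bounding box and overflow for every element inside the suspect container.
    layout = page.locator(".sidebar *").evaluate_all(
        """els => els.map(el => {
            const r = el.getBoundingClientRect();
            return {
                tag: el.tagName,
                cls: el.className,
                x: r.x, y: r.y, width: r.width, height: r.height,
                overflow: getComputedStyle(el).overflow,
            };
        })"""
    )
    print(json.dumps(layout, indent=2))

    # The container's markup is often enough for the AI to spot the culprit.
    print(page.inner_html(".sidebar"))
    browser.close()
```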