# AI Hackathon Wrap-Up Notes

## Introduction & Context
Over the past several months, the pace of **agent-framework releases** and **enterprise tooling for autonomous AI workflows** has accelerated dramatically. What began as a speculative frontier — "agents" in the sense of systems that plan, use tools, coordinate sub-agents, and execute multi-step tasks — is quickly becoming core infrastructure.
When we launched these hackathon teams, our goals were pretty simple:
**learn what's out there and build concrete things that make our work smarter and more creative.**
We also decided to target some new releases that came out (or were dramatically improved) between the hackathon's announcement and now.
* **[OpenAI Agents SDK](https://openai.com/index/new-tools-for-building-agents/?utm_source=chatgpt.com)**
* **[OpenAI AgentKit](https://openai.com/index/introducing-agentkit/?utm_source=chatgpt.com)** — unveiled at DevDay 2025, a bridge from prototype to production with the new Agent Builder interface.
* **[Anthropic Claude Agent SDK](https://www.anthropic.com/engineering/building-agents-with-the-claude-agent-sdk?utm_source=chatgpt.com)** — the open-source framework that powers Claude Code, enabling contextual reasoning and tool invocation.
* **[Claude Sonnet 4.5](https://www.anthropic.com/news/claude-sonnet-4-5?utm_source=chatgpt.com)** — upgraded model and code-execution environment emphasizing tool use, memory, and project-scale reasoning.
* **[Claude Agent Skills](https://www.anthropic.com/news/skills)**
---
## Our Goals
1. **Learn the Tools**
Systematically explore every major agent framework — the OpenAI Agents SDK, the Claude Agent SDK, Gemini Canvas — to understand their design philosophies, capabilities, and limitations.
2. **Understand the Mindset**
Study how "agentic" thinking reframes creation and collaboration: delegation, orchestration, and tool use as cognitive partners rather than replacements.
3. **Build Concrete Systems**
Translate insight into action — creating tools we can use daily in the Lab, and prototypes we can adapt for faculty and student projects.
---
## The Learning Lab's Agentic Experiments
Since the hackathon's kickoff, we've treated each Lab as a testbed for understanding what agentic design looks like in practice — grounded in the real work of teaching, research, and media production.
---
### The Display Lab
**Focus:** Multi-agent dialogue visualization and performance in physical space.
The Display Lab brings agentic data back into the physical world — onto walls, screens, and stages — creating a visible loop between human and machine thought. Built in **[Next.js 15](https://nextjs.org/)** with **Three.js**, it creates a three-projector installation system that makes AI agent interactions visible and performable. Unlike typical chatbot demos, this system treats AI dialogue as a *performance medium*—agents cycle through a pool, maintain literary constraints, and run indefinitely until manually interrupted. The installation makes agentic activity spatially legible for group viewing and discussion.
This week **Madeleine** focused on full-stack Next.js development (she typically works in Python/APIs), building toward an actual faculty installation at the Barker Center. The project showcases our intergenerational team (faculty, grads, undergrads, postdocs) working on interconnected systems.
**The System:**
* **Monitor** — 3D force-directed graph of agents and dialogues with routing controls
* **Screen A/B** — Synchronized projection displays for live dialogue playback
* **Two Modes:**
* **Conversing** — Open-ended multi-agent conversations (no timeout)
* **Editing** — Collaborative line-by-line editing of sonnets and song lyrics
**Agent Pool:** Each dialogue randomly selects 5 agents from Airtable. New agents automatically cycle into running dialogues when added. Agents maintain medium constraints (iambic pentameter for sonnets, rhyme schemes for lyrics, debate topics for conversations) while editing/contributing in their "unique voices."
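As a rough sketch of that selection step, the pool can be re-drawn from Airtable each time a dialogue starts (the table and field names below are placeholders, not our actual schema):
```typescript
// Sketch: draw a fresh pool of agents from Airtable for each new dialogue.
// "Agents", "Name", and "Voice" are placeholder table/field names.
import Airtable from "airtable";

const base = new Airtable({ apiKey: process.env.AIRTABLE_API_KEY }).base(
  process.env.AIRTABLE_BASE_ID!
);

interface AgentConfig {
  id: string;
  name: string;
  voice: string;
}

export async function pickAgentPool(size = 5): Promise<AgentConfig[]> {
  // Fetch every agent record; newly added agents are included automatically.
  const records = await base("Agents").select().all();
  const agents: AgentConfig[] = records.map((r) => ({
    id: r.id,
    name: (r.get("Name") as string) ?? "Unnamed agent",
    voice: (r.get("Voice") as string) ?? "",
  }));

  // Fisher-Yates shuffle, then keep the first `size` agents.
  for (let i = agents.length - 1; i > 0; i--) {
    const j = Math.floor(Math.random() * (i + 1));
    [agents[i], agents[j]] = [agents[j], agents[i]];
  }
  return agents.slice(0, size);
}
```
Because the pool is re-fetched per dialogue, adding a record in Airtable is enough for a new agent to show up in the next cycle.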
**Key tools**
* **Next.js 15** — App Router with streaming API routes
* **Three.js + OrbitControls** — 3D graph visualization
* **Polling-based routing** — Simple, reliable multi-screen synchronization (1s intervals; see the sketch after this list)
* **OpenAI GPT-4o-mini** — Agent personality generation and dialogue streaming
* **Airtable** — Agent configuration and dialogue management
* **Prisma + PostgreSQL** — Real-time state and agent assignment
* **Blackmagic hardware + [MIDI controllers](https://en.wikipedia.org/wiki/MIDI_controller)** — physical control interfaces for projection, color, and camera events
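To make the polling-based routing above concrete, here is a minimal sketch of the per-screen loop; the `/api/route` endpoint and payload shape are illustrative, not the installation's actual API:
```typescript
// Sketch: a projection screen polls the routing endpoint once a second and
// renders whatever dialogue it has been assigned. Endpoint and payload shape
// are assumptions, not the installation's actual API.
type ScreenState = {
  dialogueId: string | null;      // dialogue currently assigned to this screen
  mode: "conversing" | "editing"; // the two modes described above
  lines: string[];                // latest dialogue lines to render
};

export function startPolling(
  screen: "A" | "B",
  render: (state: ScreenState) => void
): () => void {
  const tick = async () => {
    try {
      const res = await fetch(`/api/route?screen=${screen}`, { cache: "no-store" });
      if (res.ok) render((await res.json()) as ScreenState);
    } catch {
      // Ignore transient errors; the next poll retries in a second.
    }
  };

  void tick();                        // initial fetch so the screen isn't blank
  const id = setInterval(tick, 1000); // 1s interval, matching the routing cadence
  return () => clearInterval(id);     // call the returned function to stop polling
}
```
Polling trades a second of latency for robustness: there is no reconnect logic to maintain, and a projector page that reloads simply picks up the current state on its next tick.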
**Outputs**
* A three-projector "chalkboard wall" installation showing live translations and agent conversations.
* "Infinite Conversation"–style experiments visualizing dialogue between AI models, where agents debate topics and collaboratively edit texts in real-time.
* Semantic maps and real-time updates drawn from Chronicle and Capture Lab data streams.
The Display Lab acts as the **public face of the AI Lab ecosystem** — where data becomes performance and agentic computation becomes visible collaboration.
---
### The Chronicle Lab
**Focus:** turning everyday communication into structured memory.
The Chronicle Lab turns everyday communication — Slack threads, Airtable logs, meeting notes — into structured memory. It's where **ephemeral collaboration becomes durable context**.
We examined the **OpenAI Agents SDK** and **AgentKit's Agent Builder** and pushed on our Slack automations that digest research and (eventually) produce a newsletter. It's easy to make a single emoji-triggered summary; the hard part is **modeling complex logic** across threads, channels, sources, and time.
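For reference, the "easy" version looks roughly like this: a sketch using Slack's Bolt framework and the OpenAI SDK, where the `:newspaper:` trigger and the framing prompt are illustrative choices rather than our production bot.
```typescript
// Sketch: an emoji-triggered thread summary using Slack Bolt + the OpenAI SDK.
// The :newspaper: trigger and the framing prompt are illustrative choices.
import { App } from "@slack/bolt";
import OpenAI from "openai";

const app = new App({
  token: process.env.SLACK_BOT_TOKEN,
  signingSecret: process.env.SLACK_SIGNING_SECRET,
});
const openai = new OpenAI();

app.event("reaction_added", async ({ event, client }) => {
  if (event.reaction !== "newspaper") return; // only fire on the chosen emoji

  // The reaction points at a message; pull the whole thread it belongs to.
  const item = event.item as { channel: string; ts: string };
  const thread = await client.conversations.replies({ channel: item.channel, ts: item.ts });
  const text = (thread.messages ?? []).map((m) => m.text ?? "").join("\n");

  // One framing prompt, one summary posted back into the thread.
  const completion = await openai.chat.completions.create({
    model: "gpt-4o-mini",
    messages: [
      { role: "system", content: "Summarize this Slack thread for the AI newsletter." },
      { role: "user", content: text },
    ],
  });

  await client.chat.postMessage({
    channel: item.channel,
    thread_ts: item.ts,
    text: completion.choices[0].message.content ?? "(no summary)",
  });
});

(async () => {
  await app.start(Number(process.env.PORT) || 3000);
})();
```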
The team did a close reading of the Agent Builder to ask: what would an Airtable base look like if it could actually capture that orchestration logic? At the same time, we needed to prototype quickly in Slack. That produced the core **tension** of this lab:
* Designing a **complex, "right" relational database** that can express agent workflows;
* While needing to **test behaviors now** with lightweight bots.
The result is a **complex database and simpler agents** for the moment: the database expresses the shape of where we're going; the agents stay lean so we can iterate.
**Key tools**
* **[OpenAI Agents SDK](https://openai.com/index/new-tools-for-building-agents/?utm_source=chatgpt.com)** — multi-agent orchestration
* **[Airtable](https://airtable.com/)** — relational database for logs, triggers, and long-term memory
* **[Slack API](https://api.slack.com/)** — emoji triggers and contextual automation
**Outputs**
* A structured Airtable schema that encodes agentic logic and future workflows.
* A simple "Colbert bot" prototype that fires on emoji and responds (joke/summary), illustrating fast iteration while the schema matures.
* Emoji-based triggers that fire custom framing prompts ("summarize this thread for the AI newsletter").
* A growing table of reusable **context functions**, e.g., "get all Slack messages from channel X in the last week."
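Here is one such context function, sketched with the Slack Web API; the function name is ours, while the `conversations.history` call and its cursor paging are standard Slack API usage:
```typescript
// Sketch: a reusable context function, "get all Slack messages from channel X
// in the last week." The function name is ours; the Web API calls are Slack's.
import { WebClient } from "@slack/web-api";

const slack = new WebClient(process.env.SLACK_BOT_TOKEN);

export async function getChannelMessagesLastWeek(channelId: string): Promise<string[]> {
  const oneWeekAgo = (Date.now() / 1000 - 7 * 24 * 60 * 60).toString();
  const messages: string[] = [];
  let cursor: string | undefined;

  // Page through conversations.history, starting one week back.
  do {
    const page = await slack.conversations.history({
      channel: channelId,
      oldest: oneWeekAgo,
      limit: 200,
      cursor,
    });
    for (const m of page.messages ?? []) {
      if (m.text) messages.push(m.text);
    }
    cursor = page.response_metadata?.next_cursor || undefined;
  } while (cursor);

  return messages;
}
```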
**What We Learned**
* Plan the **data model** to match agent workflows, but keep initial bots **small and testable**.
* Expect a staged path: **schema first**, then progressively smarter agents plugged into it.
In practice, the Chronicle Lab teaches us that **documentation can be agentic** — authored collaboratively by humans and models as our systems evolve.
---
### The Capture Lab
**Focus:** multimodal input, real-time interaction, and embodied design.
Born from both studio needs and classroom integrity concerns, the Capture Lab experiments with **embodied interfaces** — devices that make AI interaction tactile, creative, and transparent.
We already do a lot of studio capture followed by transcription, translation, or summarization. This lab pushed toward **more precise, transparent tooling** for those streams—and toward **non-textual interfaces**.
We centered on **MIDI controllers** as inputs. They're tactile, precise, and fun by design. Using them forces a pattern: launch an agent with a button; vary properties with dials. That defamiliarizes prompting, makes the process physical, and fits our interest in unconventional UI/UX.
**Payoff:** In a translation workshop right after the hackathon, we sketched a prompt chain with index cards (think node-based visual scripting). The capture system mapped button presses to: (1) capture an image; (2) describe it; (3) generate Python code that matches a template **Madeleine** prepared, so it can drop straight into a runnable chain.
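A minimal sketch of the button/dial pattern using the browser's Web MIDI API; the pad and CC numbers, the temperature mapping, and the `/api/agents/launch` route are assumptions for illustration:
```typescript
// Sketch: "launch an agent with a button, vary properties with dials" via the
// Web MIDI API. Pad/CC numbers and the launch route are assumptions.
let temperature = 0.7; // a property one of the dials will control

async function launchAgent(params: { temperature: number }) {
  // Stand-in for the real launch step: POST to an agent-launch route.
  await fetch("/api/agents/launch", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify(params),
  });
}

export async function bindMidiControls() {
  const access = await navigator.requestMIDIAccess();
  access.inputs.forEach((input) => {
    input.onmidimessage = (msg) => {
      const data = msg.data;
      if (!data) return;
      const [status, data1, data2] = data;
      const kind = status & 0xf0;

      if (kind === 0x90 && data2 > 0 && data1 === 36) {
        // Note-on for pad 36: launch an agent with the current dial settings.
        void launchAgent({ temperature });
      } else if (kind === 0xb0 && data1 === 1) {
        // Control change on CC 1 (a dial): map 0-127 to a 0-1 temperature.
        temperature = data2 / 127;
      }
    };
  });
}
```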
**Key tools**
* **[OpenAI API](https://platform.openai.com/docs/overview)** — transcription, translation, and real-time reasoning (see the sketch after this list)
* **[Slack API](https://api.slack.com/)** — logging captured data and launching follow-up tasks
* **[Next.js](https://nextjs.org/)** — runs live capture dashboards and event routing logic
* **[MIDI controllers](https://en.wikipedia.org/wiki/MIDI_controller)** — tactile agent launch and parameter control
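Here is the transcription-and-translation step referenced above, sketched with the OpenAI Node SDK; the file path, target language, and model choices are placeholders rather than our pipeline's actual configuration:
```typescript
// Sketch: captured studio audio -> transcript -> translation, using the OpenAI
// Node SDK. Models and target language are placeholder choices.
import fs from "node:fs";
import OpenAI from "openai";

const openai = new OpenAI();

export async function transcribeAndTranslate(audioPath: string, targetLanguage = "French") {
  // 1. Transcribe the captured audio file.
  const transcription = await openai.audio.transcriptions.create({
    file: fs.createReadStream(audioPath),
    model: "whisper-1",
  });

  // 2. Translate the transcript with an explicit, inspectable prompt.
  const translation = await openai.chat.completions.create({
    model: "gpt-4o-mini",
    messages: [
      { role: "system", content: `Translate the following transcript into ${targetLanguage}.` },
      { role: "user", content: transcription.text },
    ],
  });

  return {
    transcript: transcription.text,
    translation: translation.choices[0].message.content,
  };
}
```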
**Outputs**
* Index-card → Python prompt-chain demo for translation and reflective writing.
* Vision-assisted presentation capture with agent-launch buttons.
* Early prototypes for classroom transcription and summarization pipelines.
**What We Learned**
* Embodied controls **expose** agent parameters and steps; they don't hide them.
* Non-textual UI helps students and faculty **see** the process, not just the output.
In this lab, **touch and thought converge** — a space where controlling an AI model can feel as intuitive as playing an instrument.
---
### The Composition Lab
**Focus:** how AI reshapes academic writing and communication.
The Composition Lab explores how AI transforms writing from a solitary process into a **collaborative, code-driven practice**. We treat the IDE itself as a new genre of scholarly environment, where text, logic, and structure intermingle.
**Christine** needed to run a large workshop for a video-games course during the hackathon, so we worked the intersection: AI-augmented writing as **vibe-coded**, playable essays. A quick example is an essay on the jump mechanic—generated fast in a Gemini-Canvas-style move—then iterated.
She also investigated the newly released **Claude Agent SDK**, especially **skills**. Skills act like organized folders of context so the system can fetch only what's relevant for a given move rather than relying on one giant prompt. Her prototype skill refactors a folder of messy student projects (mixed formats) into clean React components for a Next.js app—consistent structure, Tailwind/Shadcn conventions, navigable `page.tsx`.
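To give a sense of the target, here is a sketch of the kind of consistent `page.tsx` a refactoring skill might aim for; the component names, Tailwind classes, and placeholder text are illustrative, not Christine's actual template:
```tsx
// app/projects/jump-mechanic/page.tsx
// Illustrative target shape for a refactored student project: one exported page
// component, consistent Tailwind classes, Shadcn-style sections, navigable route.
import { Card, CardContent, CardHeader, CardTitle } from "@/components/ui/card";

export default function Page() {
  return (
    <main className="mx-auto max-w-3xl space-y-6 p-8">
      <h1 className="text-3xl font-semibold">The Jump Mechanic</h1>

      <Card>
        <CardHeader>
          <CardTitle>Thesis</CardTitle>
        </CardHeader>
        <CardContent className="space-y-2 text-sm leading-relaxed">
          <p>Placeholder for the essay's opening argument, migrated from the original format.</p>
        </CardContent>
      </Card>

      {/* Further sections (evidence, playable demo, citations) repeat the same pattern. */}
    </main>
  );
}
```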
This suggests a broader shift: academic writing as a **compositional system**—text, code, and structure living together—rather than a single linear document.
**Key tools**
* **[Claude Code](https://www.anthropic.com/news/claude-code)** — contextual reasoning for long-form writing and work
* **[Claude Agent SDK](https://www.anthropic.com/engineering/building-agents-with-the-claude-agent-sdk?utm_source=chatgpt.com)** — skill-based agent construction and custom skills for structuring arguments, citations, and drafts
* **[Gemini Canvas](https://blog.google/technology/ai/gemini-workspace/)** — multimodal collaborative visual composition environment
* **[Shadcn/UI](https://ui.shadcn.com/)** — structured design framework for essay-like interactive interfaces
* **Claude Code SDK** — agent skills for project refactoring
**Outputs**
* AI-augmented essays and "vibe-coded" assignments.
* Integration of markdown-based thinking (Obsidian-style) with Claude reasoning chains.
* Experiments linking video-game narrative logic to academic essay structures.
**What We Learned**
* Treat "writing" as **interaction**: structure + logic + prose.
* Use skills to **control context** and keep drafts consistent across projects.
The Composition Lab is where **scholarly communication meets code** — reimagining writing as interaction, not output.
---
## Tooling Overview
| Category | Primary Tools | Purpose |
| --------------------- | ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | ------------------------------------------------- |
| Agent Frameworks | [OpenAI Agents SDK](https://openai.com/index/new-tools-for-building-agents/?utm_source=chatgpt.com), [Claude Agent SDK](https://www.anthropic.com/engineering/building-agents-with-the-claude-agent-sdk?utm_source=chatgpt.com) | Build and orchestrate multi-agent workflows |
| Writing & Reasoning | [Claude Code](https://www.anthropic.com/news/claude-code), [Gemini Canvas](https://blog.google/technology/ai/gemini-workspace/) | Agentic composition and structured reasoning |
| Data & Memory | [Airtable](https://airtable.com/), [Slack API](https://api.slack.com/) | Logging, triggers, and contextual recall |
| Interaction & Capture | [Next.js](https://nextjs.org/), [MIDI controllers](https://en.wikipedia.org/wiki/MIDI_controller) | Live capture, visualization, and control |
| Visualization | Next.js 15, Three.js, polling-based display routing | Multi-display orchestration and media performance |
Together, these form a **studio-scale architecture for agentic experimentation**, connecting inputs, memory, composition, and public display.
---
## Reflections & Next Steps
This hackathon showed that the **agentic turn** in AI is as much about *designing relationships* as it is about writing code. By distributing cognition across **Display**, **Chronicle**, **Capture**, and **Composition**, we are building **transparent, contextual, and creative systems** that extend human intention.
The "agentic turn" isn't just code; it's **designing relationships** among people, tools, and contexts. Command-line interfaces and editor plugins (VS Code, Cursor, Windsurf) help **un-black-box** what agents do so we can actually see and shape the steps.
We see two directions in agent design:
* **Rosie (black box):** one apparent "agent" hides many tiny agentic moves behind a polished surface.
* **Roomba (fleet):** many small agents motor around; humans stay in the loop more.
There's confusion in the ecosystem about what counts as an "agent." Even OpenAI's phrasing blurs it ("multiagent workflows"). Moving forward, we'll keep humans in the loop, prefer **visible processes** over hidden ones, and use agents to extend—not replace—academic making and writing.
Next steps include:
* Integrating **"The Context"** — a shared, queryable corpus of all Lab writing, media, and Slack logs.
* Reviving **Hypnomnesis** as a testbed for agentic storytelling and retrieval.
* Mapping these Labs to physical and digital spaces in the new **Learning Lab website**.
* Continuing to **un-black-box** our tools — maintaining only what's necessary, automating what isn't, and always keeping humans in the creative loop.
---
> *Everything within the black box must be maintained. But as we un-black-box our workflows — building visible, collaborative, and agentic systems — maintenance itself becomes a creative act.*
---