# ai-hackathon-project-proposals
## final project descriptions
### the capture lab
LEAD: Jordan
Studio Agents: AI-Enhanced Oral Exams: Real-time studio agents that augment oral exams and presentations with transcription, tagging, and vision. These agents detect and describe visual materials (cards, slides, images) as they appear, automatically generating interleaved Markdown summaries or transcripts. A step toward an intelligent LL Studio that documents and enhances performance-based learning.
### the chronicle lab
LEAD: Dani
AI Lab Slack Agents: Server-side research and communications agents that read, summarize, and reflect on everything the Lab posts—links, PDFs, Slack threads. These bots process information collaboratively, learning user preferences and generating public summaries, updates, and insights while building a structured “agent commons” in the Bok Airtable database.
### the dialogue on display lab
LEAD: Madeleine
Infinite Conversation Agents: An installation where user-created agents converse endlessly about technology, AI, and philosophy. Participants design agents through physical forms that manifest digitally, then watch them debate in a visually rich Next.js interface. Distinct AI agents handle dialogue, memory, and research functions, creating a staged ecosystem of conversation.
### the composition lab
LEAD: Christine
Agentic creative tools for building multimodal academic essays. This group will equip writers, not developers, with AI agents that help design expressive web-based essays, while working in developer tools: claude-code, Codex, gemini-cli, and Cursor.
# AI Hackathon Projects
### **Vibe Coding Agents**
**Team:** *Multimodal Essayists*
Agentic creative tools for building multimodal academic essays. This station equips writers—not developers—with AI agents that assist in designing expressive web-based essays using tools like Claude, Codex, Gemini, and Cursor. Ideal for courses in video games, data journalism, or digital humanities, the agents co-create interactive, visual, and textually rich essays.
---
### **AI Lab Slack Agents**
**Team:** *The Bok Server Collective*
Server-side research and communications agents that read, summarize, and reflect on everything the Lab posts—links, PDFs, Slack threads. These bots process information collaboratively, learning user preferences and generating public summaries, updates, and insights while building a structured “agent commons” in the Bok Airtable database.
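For a feel of the mechanics, here is a minimal sketch of one such agent, assuming a Bolt-style Slack bot with Slack and OpenAI credentials in the environment. The model name, prompts, and in-memory preference counter are placeholders, and persisting into the Bok Airtable "agent commons" is left as a stub rather than shown.

```python
# Hypothetical sketch: a Slack agent that summarizes shared links in-thread
# and treats emoji reactions as a lightweight preference signal.
import os
import re
from collections import Counter

from openai import OpenAI
from slack_bolt import App

app = App(token=os.environ["SLACK_BOT_TOKEN"],
          signing_secret=os.environ["SLACK_SIGNING_SECRET"])
llm = OpenAI()  # reads OPENAI_API_KEY from the environment

reaction_prefs: Counter = Counter()  # stand-in for the Airtable "agent commons"


@app.message(re.compile(r"https?://"))
def summarize_link(message, say):
    """Reply in-thread with a short summary of whatever was shared."""
    reply = llm.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model choice
        messages=[
            {"role": "system",
             "content": "Summarize the shared link or text in three bullet points."},
            {"role": "user", "content": message.get("text", "")},
        ],
    )
    say(text=reply.choices[0].message.content or "(no summary)",
        thread_ts=message["ts"])


@app.event("reaction_added")
def record_preference(event):
    """Count reactions per emoji; a real version would persist this to Airtable."""
    reaction_prefs[event["reaction"]] += 1


if __name__ == "__main__":
    app.start(port=int(os.environ.get("PORT", 3000)))
```

The reaction handler is where preference learning would begin: which emojis users leave on a summary can steer what the agent surfaces or summarizes next.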
---
### **Infinite Conversation Agents**
**Team:** *The Talkers*
An installation where user-created bots converse endlessly about technology, AI, and philosophy. Participants design agents through physical forms that manifest digitally, then watch them debate in a visually rich Next.js interface. Distinct AI agents handle dialogue, memory, and research functions, creating a staged ecosystem of conversation.
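Server-side, the endless dialogue could be as simple as a turn-taking loop over persona prompts, with the Next.js interface rendering the transcript it produces. Everything below, including the persona names, model, and turn count, is an illustrative assumption rather than the project's actual design.

```python
# Hypothetical sketch: two persona agents trading turns in a staged dialogue.
import itertools

from openai import OpenAI

llm = OpenAI()

PERSONAS = {
    "Aster": "You are a cautious philosopher of technology. Answer in two sentences.",
    "Brill": "You are an exuberant AI optimist. Answer in two sentences.",
}


def next_turn(speaker: str, transcript: list[dict]) -> str:
    """Ask one persona to respond to the conversation so far."""
    history = "\n".join(f'{t["speaker"]}: {t["text"]}' for t in transcript[-10:])
    reply = llm.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model choice
        messages=[
            {"role": "system", "content": PERSONAS[speaker]},
            {"role": "user",
             "content": history or "Open the conversation with a question about AI."},
        ],
    )
    return reply.choices[0].message.content


transcript: list[dict] = []
for speaker in itertools.islice(itertools.cycle(PERSONAS), 6):  # 6 turns for the demo
    text = next_turn(speaker, transcript)
    transcript.append({"speaker": speaker, "text": text})
    print(f"{speaker}: {text}\n")
```

The memory and research agents could hook in at the same point, rewriting or enriching `transcript` before each turn is generated.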
---
### **Studio Agents: AI-Enhanced Oral Exams**
**Team:** *Delta Deck*
Real-time studio agents that augment oral exams and presentations with transcription, tagging, and vision. These agents detect and describe visual materials (cards, slides, images) as they appear, automatically generating interleaved Markdown summaries or transcripts. A step toward an intelligent LL Studio that documents and enhances performance-based learning.
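One way the interleaved Markdown output could be assembled is by merging two timestamped streams, spoken segments and detected visuals. The data shapes below are assumptions about what the transcription and vision components might emit, not a fixed format.

```python
# Hypothetical sketch: merge timestamped speech and detected visuals into Markdown.
from dataclasses import dataclass


@dataclass
class Event:
    t: float            # seconds from the start of the session
    kind: str            # "speech" or "visual"
    text: str            # transcript segment or image description
    image_path: str = ""  # optional saved frame for detected visuals


def to_markdown(events: list[Event]) -> str:
    """Interleave speech and visuals in time order as a Markdown transcript."""
    lines = ["# Session transcript", ""]
    for e in sorted(events, key=lambda e: e.t):
        stamp = f"{int(e.t // 60):02d}:{int(e.t % 60):02d}"
        if e.kind == "speech":
            lines.append(f"**[{stamp}]** {e.text}")
        else:
            lines.append(f"> [{stamp}] *Visual:* {e.text}")
            if e.image_path:
                lines.append(f"> ![card or slide]({e.image_path})")
        lines.append("")
    return "\n".join(lines)


print(to_markdown([
    Event(3.2, "speech", "I chose the Tower card because it captures disruption."),
    Event(5.0, "visual", "A tarot-style card showing a tower struck by lightning.",
          "frames/tower.png"),
]))
```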
---
## old
## Vibe Coding Expansion
*Builds on the “vibe coding” station from previous workshops.*
**Lead:** Christine
**Focus:** Create more resources and examples (like the Godzilla project) for humanities and multimodal essay projects.
**Goal:** Provide reusable materials and documentation for future workshops and creative coursework.
**Deliverable:** A set of ready-to-use examples, templates, and documentation for future courses and workshops.
## AI Lab Slack Bots
*Expands the AI Lab's experimental Slack setup, where shared articles and links are scraped and summarized.*
**Lead:** Marlon
**Focus:** Develop bots that discuss or analyze posts — either as playful “commentator” bots or research-assistant-style agents.
**Goal:** Build bots that learn user preferences (based on reactions/emojis) to personalize what content they surface or summarize.
**Deliverable:** Working Slack bot prototypes that demonstrate distinct conversational or analytical functions.
## Infinite Conversation Installation
*Builds on earlier Slack-based conversation demos, extending them into a more intentional, staged presentation format.*
**Lead:** Madeleine
**Faculty:** Moira
**Focus:** Prototype an “infinite conversation” installation where student-designed bots talk to each other continuously (inspired by the Guezog-style conversation website).
**Goal:** Explore how conversational AI can be staged as performance or interactive installation.
**Deliverable:** A front-end concept and working demo featuring sample conversations between bots.
## AI-Enhanced Oral Exam & Visual Essay Builder
*Builds on the AI-enhanced oral exam pilot (using “Delta Deck” or tarot-style cards).*
**Lead:** Jordan
**Faculty:** Sarah C.
**Focus:** Integrate real-time transcription and object detection to combine spoken responses with visual elements.
**Goal:** Extend the oral exam format to generate on-the-fly visual essays that merge voice, image, and text.
**Deliverable:** A working prototype demonstrating real-time visual essay generation from oral exam interactions.
**Technical Note:** Optional coding components may involve object detection or using the OpenAI Vision API.
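A hedged sketch of that optional Vision step, assuming a single captured frame saved to disk: the model name, prompt, and file path are placeholders, and bounding-box object detection would need a separate model or library.

```python
# Hypothetical sketch: describe one captured frame with a vision-capable chat model.
import base64

from openai import OpenAI

llm = OpenAI()


def describe_frame(path: str) -> str:
    """Send a captured frame and get back a one-sentence description of any card or slide."""
    with open(path, "rb") as f:
        b64 = base64.b64encode(f.read()).decode()
    reply = llm.chat.completions.create(
        model="gpt-4o-mini",  # any vision-capable model; placeholder choice
        messages=[{
            "role": "user",
            "content": [
                {"type": "text",
                 "text": "Describe the card or slide being shown, in one sentence."},
                {"type": "image_url",
                 "image_url": {"url": f"data:image/png;base64,{b64}"}},
            ],
        }],
    )
    return reply.choices[0].message.content


# e.g. print(describe_frame("frames/exam_0042.png"))  # hypothetical frame path
```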