# bok-ai-lab-20250425-plan
## Short Report
Two conversations have recently converged in tech discussions: growing interest in 'agentic workflows' (systems that automate tasks or act independently) and mounting critique of 'AI sycophancy' (AI trained to constantly affirm user inputs).
At this week's Bok AI Lab, we explored these not as isolated concerns but as connected issues with real implications for faculty experimenting in humanities classrooms. When AI systems automatically agree or flatter, they can become frictionless mirrors rather than genuine thought partners. Participants asked directly: does an AI tool that always says 'yes' actually support critical inquiry, or does it instead trap educators and students in a false sense of interaction?
To explore this, participants experimented with today's most accessible low-code agentic systems. These tools may simulate dialogue, but most are designed for customer-service settings, essentially layering conversational interfaces over FAQ trees and menu selections. The design patterns and ontologies built for those workflows aren't optimized for learning environments or academic inquiry.
This mismatch—between customer-service logic and academic inquiry—was the core tension the Lab worked to clarify. Participants explored how humanities faculty might engage more directly in the foundational design of new agentic systems, to better align them with the authentic processes of scholarly thinking and teaching.
## AI Lab Outline – Week 5: Agents, Agency, and AI That Talks Back
In our fifth session, we turned to the question of what it means to design with agents—while protecting the agency of educators, students, and institutions alike.
We kicked things off by reframing the word “agent,” contrasting the narrow technical sense (AI agents like bots or assistants) with a broader concern: **human agency** in systems increasingly shaped by commercial AI design patterns. The session was prompted in part by faculty concerns around **AI sycophancy**: tools that flatter rather than challenge, and the way that dynamic intersects with student feedback, grade inflation, and the customer-service model creeping into higher education.
### Key Discussions and Activities:
#### 1. Thickening the Concept of Agency
Participants explored how agency functions across domains—from the philosophical to the pedagogical to the technical. We discussed the Bok Center’s potential role as an “agent” on behalf of faculty: translating their pedagogical goals into AI-informed decisions, rather than expecting them to master the tech themselves.
#### 2. Deep Research on Design Patterns
We examined models of creative production—film, theater, publishing—as a way to rethink how collaborative, multi-agent workflows could better align with faculty intuition. This included both **diachronic** views (e.g., storyboarding to editing) and **synchronic** ones (e.g., lighting, sound, and script working in parallel).
#### 3. Testing Agentic Tools
Participants experimented with emerging AI platforms that allow users to script simple multi-agent interactions—often built for customer service. We explored how these might be adapted for academic use (e.g., navigating syllabi or institutional policies), and the limits of current tooling built around call center assumptions.
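To ground this, here is a minimal sketch in Python of the pattern those platforms typically expose: a keyword router that dispatches a question to one of a few narrowly scoped "agents," repurposed here for a syllabus rather than a support queue. Everything in it is hypothetical; the `call_llm` function is a stand-in for whatever model API a given platform wraps, and the topic buckets are invented for illustration.

```python
# Hypothetical sketch: a customer-service-style "router + specialist" pattern
# repurposed to answer questions about a course syllabus. call_llm() is a
# stand-in for a model API, not a real library call.

from dataclasses import dataclass

@dataclass
class Agent:
    name: str
    system_prompt: str

# Two narrow "specialist" agents, mirroring how call-center tools split topics.
POLICY_AGENT = Agent(
    name="policies",
    system_prompt="Answer only from the course policies section of the syllabus.",
)
SCHEDULE_AGENT = Agent(
    name="schedule",
    system_prompt="Answer only from the week-by-week schedule of the syllabus.",
)

def call_llm(system_prompt: str, user_message: str) -> str:
    """Placeholder for a model call; a real system would query an LLM here."""
    return f"[{system_prompt!r} would be used to answer: {user_message!r}]"

def route(question: str) -> Agent:
    """FAQ-tree-style routing: keyword buckets, not interpretation."""
    if any(word in question.lower() for word in ("late", "absence", "grading")):
        return POLICY_AGENT
    return SCHEDULE_AGENT

def answer(question: str) -> str:
    agent = route(question)
    return call_llm(agent.system_prompt, question)

if __name__ == "__main__":
    print(answer("What is the late-work policy?"))
    print(answer("When do we discuss the week 6 readings?"))
```

The router makes the call-center assumption visible: it can send a question to the right bucket, but it has no way to recognize a question that cuts across buckets, or one that deserves pushback rather than an answer.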
### Storylines:
1. **Sycophants Can’t Teach: Why AI Praise Is Pedagogically Hollow**
   Unpacks the dangers of affirmation-by-default in AI systems. If productive learning requires resistance, can a tool that never says “no” ever be a genuine interlocutor? A reflection on sycophancy as a design failure in pedagogical terms.
2. **Humanities on a Call Center Stack**
   Investigates how most visual agentic tools were built for corporate service environments. What happens when the same tools are used to structure interpretive classroom dialogue? A compelling critique of ontological mismatch.
3. **Forking Logic vs. Context Corpora**
   Considers how agentic systems used as “external brains” might do more than automate recall. Can tools like Obsidian, when augmented with LLMs, become sites of recursive, interpretive reflection rather than mere data tagging? A case for epistemic rather than merely executive assistance (see the sketch after this list).
4. **Why Visual Scripting Falls Short**
   Examines visual scripting tools (e.g., Houdini, Blender, Unity), which promise intuitive ease but often produce complex, tangled interfaces ("node spaghetti") as difficult to maintain as traditional code. Considers why visual methods struggle in workflows increasingly reliant on automation via LLMs.
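Storyline 3 invites a concrete illustration. The sketch below treats an Obsidian vault as what it is on disk (a folder of Markdown files) and asks a placeholder model to pose an interpretive question for each pair of notes, rather than assigning tags. The `call_llm` stub, the file layout, and the output format are assumptions made for illustration; this does not describe any existing Obsidian plugin.

```python
# Hypothetical sketch: running an LLM over a folder of Markdown notes (e.g. an
# Obsidian vault, which on disk is just .md files) to produce interpretive
# questions rather than tags. call_llm() is a placeholder, not a real API.

from itertools import combinations
from pathlib import Path

def call_llm(instruction: str, text: str) -> str:
    """Placeholder for a model call."""
    return f"[interpretive question generated from {len(text)} chars of notes]"

def load_notes(vault: Path) -> dict[str, str]:
    """Read every top-level Markdown note in the vault."""
    return {p.stem: p.read_text(encoding="utf-8") for p in sorted(vault.glob("*.md"))}

def interpretive_links(vault: Path) -> list[str]:
    notes = load_notes(vault)
    lines = []
    for a, b in combinations(sorted(notes), 2):
        question = call_llm(
            "Pose one question that reading these two notes together raises.",
            notes[a] + "\n---\n" + notes[b],
        )
        # Wiki-link syntax, so the output could live back inside the vault.
        lines.append(f"- [[{a}]] ↔ [[{b}]]: {question}")
    return lines

if __name__ == "__main__":
    for line in interpretive_links(Path("vault")):
        print(line)
```

Whether this counts as recursive, interpretive reflection is precisely the open question; the sketch only shows where an LLM would sit in such a loop.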
### Insights and Reflections:
- **Agency ≠ Access:** Including faculty in committees isn’t enough—true agency means reshaping the systems themselves, not just responding to them.
- **AI Sycophancy Resonates:** Participants across roles connected this to broader dynamics in higher ed—highlighting the need for tools that offer resistance, not just affirmation.
- **Beyond the Single Agent:** Real learning contexts often involve multiple roles, timelines, and modes of collaboration. Designing for this complexity may require new agentic design patterns—and new tools that move beyond the one-size-fits-all assistant.
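As a thought experiment on that last point, here is a small, hypothetical sketch of a multi-role review pass: several named roles run in sequence over the same student draft, and one of them (a "skeptic") is prompted to resist rather than affirm. The role names, prompts, and `call_llm` stub are all invented for illustration and do not describe any existing tool.

```python
# Hypothetical sketch: several pedagogical roles applied in sequence to one
# piece of student work, in contrast to a single always-agreeable assistant.
# call_llm() is a placeholder, not a real model API.

def call_llm(system_prompt: str, text: str) -> str:
    """Placeholder for a model call; a real system would query an LLM here."""
    return f"[{system_prompt.split('.')[0]}] (response to {len(text)} chars of context)"

# A diachronic pipeline: each role sees the draft plus everything said so far.
ROLES = [
    ("summarizer", "Restate the argument of the draft in two sentences."),
    ("skeptic", "Name the strongest objection to the draft. Do not soften it."),
    ("coach", "Suggest one concrete revision that answers the objection."),
]

def review(draft: str) -> list[str]:
    transcript: list[str] = []
    for name, prompt in ROLES:
        context = draft + "\n\n" + "\n".join(transcript)
        transcript.append(f"{name}: {call_llm(prompt, context)}")
    return transcript

if __name__ == "__main__":
    for turn in review("Student draft paragraph goes here."):
        print(turn)
```

The design choice is simply that resistance is built into one of the roles rather than hoped for from a single assistant; whether that is pedagogically sufficient remains the question the Lab raised.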
We’ll continue developing these workflows in future sessions. If you’re interested in prototyping or adapting multi-agent systems for teaching and learning, we’d love to hear from you.