# working-group-planning
---
2025-03-03
**Pitch:**
Bok aiLAb meets weekly on Fridays from 8:30 - 10:00 AM in the Learning Lab. This isn't a discussion space reacting to familiar Artificial Intelligence anxieties (like cheating or plagiarism). Instead, we recognize an urgent need to critically and proactively engage with how AI will fundamentally transform higher education-- before its defaults are shaped by tech companies.
Our central questions:
- **How will AI structurally alter teaching and learning in the future? Or, from a different angle: how can teaching and learning alter AI?**
- **What critical and theoretical frameworks can we use to anticipate and influence these changes?**
- **What experiments can we run today that anticipate where we’ll be in six months, a year, or five years?**
If you're intrigued by the form and urgency of this exploration, we invite you to join us.
### **How We Work**
- **News**: Over coffee and pastries, we'll discuss the top AI news from the week. These "news roundups" will then structure the rest of the session.
- **Experiment/Discuss**: We'll engage with the week's major stories and developments by imagining things we could design in response, and, in some cases, actually designing them. This work will primarily happen in Python notebooks, but *no coding experience is required.*
- **Reporting**: The news, discussions, and developments from each meeting will then be shared via multiple channels, such as a shared group Slack.
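To give a sense of scale, a notebook experiment can be just a few lines. Here is a minimal, purely illustrative sketch (the `headline_keywords` helper and the sample headlines are hypothetical, not an existing tool): tally the recurring words in a week's AI headlines to seed the news roundup.

```python
from collections import Counter

def headline_keywords(headlines, top_n=3):
    """Tally the most frequent substantive words across a week's AI headlines."""
    stopwords = {"the", "a", "an", "of", "in", "to", "for", "and", "on", "with", "ai"}
    words = [w.strip(".,:;!?").lower() for h in headlines for w in h.split()]
    counts = Counter(w for w in words if w and w not in stopwords)
    return counts.most_common(top_n)

# Hypothetical sample roundup to seed discussion
week = [
    "OpenAI releases new agent framework",
    "Universities debate agent policies",
    "New framework for classroom agents announced",
]
print(headline_keywords(week))
```

Nothing here requires prior programming experience to read or tweak, which is exactly the point of keeping the notebooks in the room.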
### **How to Join**
- **Attend when you can.** We meet weekly, but attendance at every session isn't expected. Come every week, come once a month; it's fluid.
- **Participate in the online space.** We extend the conversation via **Slack**, where we can post what we don’t have time to discuss, experiment with bots, and archive our discoveries. Post links, experiments, half-thoughts, and provocations.
---
**Name Ideas:**
* Critical Futures
* Pedagogical Cartographies
* Coffee & Code
* The AI Anticipatory
* The AI Speculative
* Future Tense Collective
* AI Horizons
* AI Roundtable
* AI Praxis
* Critical AI
* Cartography Collective
* Frontier Forum
**Weekly Themes:**
meh, going to sort through after lunch:
**Set 1: Critical Pedagogies of AI**
- AI as Power Structure
- Surveillance vs. Sousveillance
- The Algorithmic Unconscious
- Epistemologies of Prompting
- Digital Labor in the Classroom
- AI and Academic Freedom

**Set 2: Experimental AI Pedagogies**
- AI-Generated Assignments
- Autonomous Classroom Agents
- Real-time Discourse Analysis
- Prompting as Creative Inquiry
- Pedagogical Simulations
- AI-Augmented Assessments

**Set 3: Philosophical Explorations**
- Derrida and the AI Text
- Foucault's Panopticism and AI Surveillance
- Lacan, AI, and the Pedagogical Other
- Barthes' Authorial Death in the Age of AI
- Baudrillard's Simulacra in AI-driven Pedagogy
- Benjamin's Arcades: AI as Associative Archive

**Set 4: Humanities Toolkit: AI Edition**
- Curating Intellectual Canons
- AI-assisted Close Reading
- Algorithmic Literary Analysis
- AI-driven Narrative Construction
- Multimodal Scholarly Communication
- Digital Humanities and AI Ethics

**Set 5: Radical Educational Futures**
- The Post-Instructor University
- Distributed AI Agency in the Classroom
- AI-driven Collaborative Learning
- Speculative Pedagogical Architectures
- Automated Intellectual Production
- Decentralized Knowledge Networks

**Set 6: Interactive and Media-Rich Pedagogies**
- AI and Interactive Storytelling
- Real-time Multimedia Integration
- Augmented Reality Classrooms
- Dynamic, AI-generated Visualization
- AI-driven Game-Based Learning
- Responsive Environments and Pedagogical Flow

**Set 7: AI, Emotion, and Affect**
- AI-driven Emotional Analytics
- Affect Theory Meets AI Pedagogy
- The Sentimental Algorithm
- Empathy Machines and Pedagogical Ethics
- AI and the Emotional Labor of Teaching
- Emotion Recognition in Learning Spaces

**Set 8: Agency, Authorship, and Automation**
- AI as Co-author
- Automated Critical Writing
- Intellectual Autonomy vs. AI Dependency
- Authorship, Intellectual Property, and AI
- Autonomous Pedagogical AI Systems
- Authorship and Generative AI Ethics

**Set 9: Social and Political Implications**
- AI and Educational Inequality
- Algorithmic Bias and Pedagogy
- AI in Civic Education
- Digital Citizenship in an AI World
- Pedagogies of AI Activism
- Critical Digital Literacy and AI

**Set 10: Pedagogical Praxis and Methodology**
- Prototyping AI-driven Courses
- Iterative Design with AI Tools
- Pedagogical Prototyping Workshops
- Data-driven Instructional Design
- AI-driven Reflective Teaching
- AI-enhanced Feedback and Revision
# Older brainstorming
---
## An Experimental Pedagogy Working Group (Ultra-Concise Version)
### **What This Is**
This is **not** a space for rehashing AI concerns (cheating, plagiarism, etc.), nor is it about maintaining Harvard’s AI infrastructure or tracking past projects. Instead, we ask:
- **How will AI transform the structure of teaching and learning?**
- **How do we think critically and theoretically about AI’s pedagogical future before its defaults are set by tech companies?**
- **What experiments can we run today that anticipate where we’ll be in six months, a year, or five years?**
### **How We Work**
- **Weekly Sessions** → Framed by **AI news** from the week and a **key theme or frontier** (e.g., AI-enhanced interactivity, agent swarms, real-time media automation, pedagogical data collection).
- **Show & Test (Work in Progress, Not Just Results)** → If you stayed up late last night testing OpenAI’s latest API or broke Google Gemini in some unexpected way, **this is where you share that.**
- **Always-On Community** → Accompanying **Slack/Discord** for ongoing discussions, archiving, and AI bot experimentation.
### **Meeting Flow**
1. **Start with a key AI news item or release** (sets the intellectual tone).
2. **Introduce a theme or prompt** (e.g., “What would a classroom look like with 100 AI agents working in the background?”).
3. **Encourage discussion and live mini-demos**—"What did you test or build since last time?"
4. **Conclude with provocations for next week**—where are we headed?
### **Rotating Meeting Formats**
- **Show & Test** – Present an AI experiment in progress; the group pushes it further in real-time.
- **Future Scenarios** – Speculative discussions (e.g., "What does a university look like when AI tutors outnumber human instructors?").
- **Tech Dive** – Explore a **new API, tool, or model** (e.g., "Can we make GPT-4 Turbo act like an entire classroom of students?").
- **Humanities AI Clinic** – Take a **humanistic concept** (Foucault, Derrida, Barthes) and apply it to **how AI structures knowledge and power.**
### **Bridging Theory + Application**
- Since the **Learning Lab is humanist-heavy**, we ensure that experiments **structure knowledge in meaningful ways.**
- **Examples:**
- *“What would Lacan say about prompting AI as an Other?”*
- *“Is AI a sousveillance tool in the classroom? A deconstructed instructor?”*
- We experiment **not just with tools but with the structural implications of AI in education.**
---
# Full brainstorming text:
## AI Futures: An Experimental Pedagogy Working Group
### *Exploring the Next Frontiers of AI in the Classroom*
### **What This Is**
AI Futures is a **forward-looking, experimental, and highly theoretical working group** focused on **where education is going**, not where it has been. This is **not** a space for rehashing AI “concerns” (cheating, plagiarism, etc.), nor is it about maintaining Harvard’s AI infrastructure or tracking past projects. Instead, we ask:
- **How will AI transform the structure of teaching and learning?**
- **What new possibilities emerge when AI systems are not just tools but active agents in the classroom?**
- **How do we think critically and theoretically about AI’s pedagogical future before its defaults are set by tech companies?**
- **What experiments can we run today that anticipate where we’ll be in six months, a year, or five years?**
### **How We Work**
- **Weekly Sessions** → **Framed by AI news** from the week and structured around **a key theme or frontier** (e.g., AI-enhanced interactivity, agent swarms, real-time media automation, pedagogical data collection).
- **Show & Test (Work in Progress, Not Just Results)** → If you stayed up late last night testing OpenAI’s latest API or broke Google Gemini in some unexpected way, **this is where you share that.**
- **Theory Meets Code** → This group is filled with **humanists turned experimental pedagogues**, meaning we **apply our theoretical rigor to real-world AI applications**—and structure them.
- **Open-Attendance Model** → Come every week, come once a month—it’s fluid.
- **Always-On Community** → We extend the conversation via **Slack or Discord**, where we can post what we don’t have time to discuss, experiment with AI-powered bots, and archive our discoveries.
### **Why This? Why Now?**
Harvard has no shortage of AI research or implementation teams. **But who is shaping the future of AI in the classroom?** If we don’t experiment before the defaults are set, we’ll have no agency in defining how AI integrates into education.
This group is not just for Learning Lab staff—we expect participation across **faculty, grads, undergrads, and staff**, each bringing a different perspective to the conversation. Different iterations of these meetings may emerge over time.
### **How to Join**
- **Attend when you can.** Weekly meetings, structured but flexible.
- **Bring experiments, half-baked ideas, and questions.** No project is too weird.
- **Participate in the online space.** Post links, experiments, half-thoughts, and provocations.
---
## **Meeting Structure: Striking the Right Balance**
### **1. Thematic Framing + Free Exploration**
- Start with **a key AI news item or release** from the week (sets the intellectual tone).
- Introduce **a theme or prompt** (e.g., “What would a classroom look like with 100 AI agents working in the background?”).
- Encourage discussion and live mini-demos—"What did you test or build since last time?"
- Conclude with **provocations for next week**—where are we headed?
### **2. Rotation of Formats to Keep It Dynamic**
Keeping the meetings fresh is key. You might rotate formats, such as:
- **Show & Test** – Someone presents an AI experiment in progress, and we collectively **push it further in real-time.**
- **Future Scenarios** – Speculative, theoretical discussions (e.g., "What does a university look like when AI tutors outnumber human instructors?").
- **Tech Dive** – Explore a **new API, tool, or model** (e.g., “Can we make GPT-4 Turbo act like an entire classroom of students?”).
- **Humanities AI Clinic** – Instead of just testing AI tools, take **a humanistic concept or theoretical lens** (Foucault, Derrida, Barthes) and apply it **to how AI is structuring knowledge and power.**
- **Live Coding or Prompting Jam** – Try **wild** prompting techniques, chain-of-thought hacking, or multimodal experiments.
This allows **everyone to participate** in different ways, whether they’re hands-on with code or deep in theoretical critique.
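As a starting point for the "classroom of students" question above, a Tech Dive might begin from a sketch like this. Everything here is hypothetical and illustrative (the `build_classroom` helper and the persona list are ours, not any library's): it just constructs one system prompt per simulated student, which a session could then wire into whatever chat API the group is testing that week.

```python
import random

# Illustrative personas; a real session would refine or generate these
PERSONAS = [
    "a skeptic who questions every claim",
    "an enthusiast who extrapolates wildly",
    "a close reader who asks for definitions",
    "a pragmatist focused on classroom logistics",
]

def build_classroom(n, seed=0):
    """Return n system prompts, one per simulated student, cycling through personas."""
    rng = random.Random(seed)  # fixed seed keeps the roster reproducible
    roster = []
    for i in range(n):
        persona = PERSONAS[i % len(PERSONAS)]
        roster.append({
            "role": "system",
            "content": f"You are student {i + 1}, {persona}. "
                       "Respond to the instructor's prompt in two sentences.",
        })
    rng.shuffle(roster)  # randomize speaking order
    return roster

classroom = build_classroom(6)
```

Each dict mirrors the message shape most chat APIs expect, so scaling the thought experiment from 6 agents to 100 is a matter of looping these prompts through model calls.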
### **3. Work-in-Progress & Rapid Iteration**
- **Build in an expectation** that each week, **at least one person brings something experimental**—it doesn’t have to be polished.
- If someone is **stuck** on an experiment, they **share their failure**, and the group brainstorms solutions.
This fosters a **culture of rapid iteration**—fast prototyping, fast reflection, fast improvement.
---
## **What Might Be Missing?**
### **1. Bridging Theory + Application (Beyond Just the AI)**
- Since the **Learning Lab is humanist-heavy**, should there be a standing segment that takes **theory seriously**?
- Example: “What would Lacan say about prompting AI as an Other?”
- “Is AI a **sousveillance tool** in the classroom? A deconstructed instructor?”
- How can we make sure our experiments aren’t just **technical** but are **structuring knowledge** in a way that aligns with our pedagogical commitments?
### **2. Output: What Happens to These Discussions?**
- Are we just discussing and testing things for ourselves, or do we want to **produce something ongoing**?
- Some ideas:
- A **running experiment log** (not a formal “report” but a shared, evolving record of what we’re testing).
- **Toolkits or small proof-of-concept demos** that faculty can eventually play with.
- A **yearly manifesto**—“The Future of AI in the Classroom: 2025 Edition” based on our experiments.
### **3. Who Are the Right People for This?**
- How do we **invite participation** from different Harvard groups while keeping it from turning into a generic AI discussion?
- Do we need **some meetings just for LL folks and some open to others**?
- If we do multiple iterations (for faculty, staff, undergrads), what are the key differences?
### **4. Tech Stack: Where Does This Live?**
- Do we stick with **LL Slack, or create a separate AI workspace**?
- Would an **ongoing wiki-style documentation hub** help capture our insights?
- Do we want a **bot presence** in our chat (e.g., an AI assistant that helps track meeting takeaways, curates experiments, or posts new AI releases)?