# 2026-04-08 — Planning conversation with boss

**Speakers:** Madeleine and her boss (planning preparation for a meeting Karen will have with an external party).

**Audio reference:** `Harvard University 16.m4a` (in this folder).

**Note on the transcript:** The earlier dictation-app capture (first pass) contained characteristic transcription artifacts ("Xbox" for "Expos," "Box" for "Bok," "Acacian" for "academic communication," "Chachi PT" for "ChatGPT," etc.). The timestamped capture below is the higher-fidelity second pass the app produced after a re-run, kept verbatim for working purposes; a cleaned third pass appears after the connection notes and is the version to prefer for future reference.

---

## Transcript (second pass — timestamped, higher fidelity)

> 0:01: OK, so you're gonna do, so these are things that the person Karen's meeting with might be interested in, and I think we want to say that these are in a couple of categories.
> 0:14: And so one is things related to the changes that AI is introducing in the teaching and learning space.
> 0:20: And then secondly and relatedly are, are, ways in which we've been supporting.
> 0:26: Academic communication and sort of long form student projects with this, you know, like, like essays, although alternatives to essays as well, across the curriculum and how that intersects with the, the problems and opportunities that AI is producing.
> 0:40: So, on the first side, we've been working with a lot of faculty to develop, what they sometimes term AI resilient, assignments.
> 0:48: And so this means oral assignments and class writing assignments, assignments where maybe if students are doing a take home paper, it's based on fresh or local data or experiences they could only have had in class or has multiple in-person touch points like oral pitches and presentations, moments where they have to resent their work in progress and get feedback on it.
> 1:09: things that disincentivize cheating, to be totally frank about it, but then also are just, you know, rich and robust opportunities for students to deepen their ideas about course material in, in rigorous ways that are essential to, re-entering academics.
> 1:24: we partnered really frequently with, XO courses for, projects they have towards the end of the term where students are remediating, scare quotes, their, their papers.
> 1:38: This means taking an academic paper and then turning it into something like a conference presentation or a podcast or a social media campaign or an explainer video.
> 1:47: and while it's true that these are fun assignments for students and so they boost student engagement, I think, one of the things we do is we partner with these courses is help them think about how this new medium can actually be a better way than conventional writing for wrestling with the subject matter, of, of the course, in question, not in exactly the same way as the paper does.
> 2:11: The paper is still great at doing what it does, but once you have.
> 2:14: To, for instance, turn something into a short form podcast or YouTube video that could be seen by any audience member, not just someone who's already an academic researcher in that area, you actually need a different and and times deeper understanding of the material, so that you can explain that to, you know, a new audience and help them understand all the things that you can sometimes take for granted when you're talking to people that are already experts in your domain.
> 2:44: so that's in that sort of zone of, of leveling up academic communication and partnering with expos on, on multimodal student projects.
> 2:52: And then in the world of AI, we are working to, offer students and faculty, workshops, that are for developing their AI literacy, the understanding of how to use different tools, and then with some courses that are really leaning in, we've begun developing.
> 3:09: Things that go far beyond one or two introductory workshops to the longer form assignment sequences that teach students bit by bit, really cutting edge ways of using AI that are, you know, the limits of what is happening in AI research industry right now, but in complete alignment.
> 3:28: With the learning objectives of a given course.
> 3:31: For instance, in comparative literature, students engage in assignment sequences that bit by bit build up to complex web applications that analyze or produce or translate literature.
> 3:48: And, this, I guess we're gonna have to go in more detail on this, but this forces students or compels students to think deeply about more texts that are assigned across the course of the term than they might have if they had just written an essay on its own, and I think that we actually don't have to unpack basically.
> 4:05: So let's sort of bracket that and then we'll come, come back to that.
> 4:09: many, if not, not necessarily all of these assignments terminate in a vibe coding in scare quotes, assignment where students, even who have never necessarily coded before, are using AI to write code that generates complex full stack web web applications that, perform the work that would typically be performed by a by a final paper.
> 4:31: And this is great because it gives them skills that are valuable to them even beyond their lives at Harvard, but then really crucially, it's forcing them to wrestle with the course material, in a way that is, is certainly more rigorous than them just getting Chachi PT to write a paper for them, but, arguably, especially if the course.
> 4:48: Material is quantitative or visual or auditory, it's actually a more appropriate way for them to be analyzing and presenting the results of that data, the results of their research on that data, than an old school essay would have, would have been.
> 5:07: As for faculty, they also are using AI to develop assignments, activities, course materials, interactive simulations in the sciences or data visualizations for students to gain deeper intuitions about the equations or principles or laws they're learning about.
> 5:25: and this summer we're going to be offering a set of workshops for faculty that teach them, the very same leveled up, AI skills that we're teaching the undergrads, but for faculty, they'll be moving not towards a final paper assignment, but towards a brand new course that they design from materials they have on their hard drive.
> 5:44: Their course readings, old lectures that can be transcribed, all their random notes across all of their Google folders, all the emails they've written to TFs about how to grade and respond to the projects of the course, all of that becomes valuable context that they can deploy as they try to generate courses with AI.
> 6:03: So, I think that's good, but like we have to do a deep dive into the Moira thing.
> 6:09: OK.
> 6:11: yeah.
> 6:12: How about you do a deep dive into the Moira thing right now.
> 6:18: So I think there are a couple of ways into this.
> 6:20: I don't know that this guy cares about it, but I think for us we care about something that would say this, so.
> 6:29: like, you know, the, I'll do a version of this that like is for talking to faculty personally, I would say, and not necessarily a cool thing we would say in public.
> 6:38: So the reason that the, the term paper was kind of like the apex predator of the teaching and learning ecosystem for so many decades or centuries even is that it's, it's just an ideal piece of engineering.
> 6:54: It's the perfect way for students to develop their understanding of course material.
> 6:59: forcing them to come up with analyses of the data that the course is built around to consider counterarguments presented by other sources, whether there was other writers that are writing about the same data set or texts or it's non-theoretical models that claim to explain the entire domain that those texts come from.
> 7:17: all of that, you know, wrestling with ideas was essential for the student learning process, and then the paper was also simultaneously the best way for the faculty member to judge whether the, the student actually had learned everything the course was designed to, to teach them, and, and moreover, it also was the, tool for, academic communication that the professor was using in their own life as a researcher.
> 7:40: and that alignment, meant that everyone was spending just a ton of their time on this like one tool, for building ideas and disseminating ideas.
> 7:51: And it was the perfect, perfect way to organize a class.
> 7:55: Now chat GPT, this is why the the threat of chat GBT is such a problem, because if it, it takes away a few of those elements of the paper, potentially it's no longer the best way of judging whether students have understood the course material if those students are misusing.
> 8:10: chat GPT by writing the paper, it means we have to kind of like decompose all the different things that academic writing was doing and kind of re-engineer new solutions for us in, in the age of AI, and this is what's very challenging.
> 8:23: It's, it's exciting in a way because we have to kind of think deeply about, about things in the same way that COVID tried to forced you to kind of think about.
> 8:31: all the various things the classroom was doing that you didn't quite understand, forcing people to, you know, dress in daytime clothes, or not to walk away for coffee in the middle of it.
> 8:41: there were a ton of things we didn't really need to, actually, explicitly articulate for ourselves until we all found ourselves in a Zoom classroom.
> 8:49: And likewise, there's a lot of things that academic writing is accomplishing that we haven't yet necessarily called to explicit consciousness that we're going to have to start, you know, explicitly defining and enumerating so that we can again re-engineer these things in in an AI native era.
> 9:08: So that's a setup that's going to be valuable for us in in some contexts.
> 9:11: But so when you're teaching the paper, you often think about all the steps that are involved in constructing a paper, and you break them down into a set of assignments that, scaffold and scare quotes that paper for students.
> 9:24: So perhaps they perform like a lit review or an annotated bibliography where they go and find some text and they kind of have to.
> 9:32: Have a sense of how they're going to use those texts.
> 9:34: they might collect data in a quantitative social science course, and then they might work to analyze that.
> 9:40: They might work to come up with some kind of outline of their thinking at some point.
> 9:44: They might run ideas in development by some test audience, whether that's of their peers or the TFs.
> 9:52: there's this sort of cycle to creating a paper and well designed courses.
> 9:55: Give students feedback or even grades on all the steps of that process leading up to the paper, so that students were, you know, not just producing the best paper that they possibly could, but that so they were like learning how all of those intermediate steps are also ways of learning about the course material.
> 10:12: Your lit review is not just a means to the end that is the paper, it's also a way of learning about the field that you're studying.
> 10:19: So, what we've been trying to do in these AI enhanced assignments is take the various steps that are involved in cutting edge work with AI and mapping those on to some of those steps that people have to perform in different disciplines.
> 10:34: So what does that look like?
> 10:35: Well, in the world of AI right now, everyone is very excited about context engineering, for instance, and multi-agent systems.
> 10:43: These are a lot of, you know, jargonny things, but when you unpack them, you can start.
> 10:48: To understand ways in which they map onto certain sorts of intellectual moves that matter in the disciplines.
> 10:53: So, context is important because the AI has no memory, it's like the character memento that forgets who he is every single day and has to kind of rebuild it all from scratch, and what this means is that you need to control as much as possible the text that goes into the AI before it starts predicting the next word, and that means you need to assemble and organize all of that text, every single thing that could possibly.
> 11:16: be valuable for the project, and then you need to understand the map of that text so that you can make sure that the AI is seeing the right thing at the right moment.
> 11:24: Well, it doesn't take a genius to realize that coming up with that systematic and well-indexed array of texts for any discipline is like just a marvelous learning opportunity for students, and in fact probably involves more breadth, if not necessarily depth, than any of the steps you would have typically put at the front end of a, you know, an essay writing assignment.
> 11:46: and then as another example, the multi-agent processes in the world of AI and industry, people are very excited about this for creating a software engineering systems or call centers.
> 11:57: These are not necessarily romantic to academics, but they try to think about all the different types of, Intelligence that are necessary to respond to a particular consumer problem or to a particular software engineering problem, and if students in the context of a humanities course, let's say, need to similarly think about all the intellectual moves that need to happen and operate on all that context we just mentioned in order to construct a meaningful.
> 12:25: Or in order to translate a foreign text or in order perhaps even to, to write a text like an author that they're studying, they need to decompose all the various maneuvers that are part of academic writing or literary writing or translation and begin to clearly articulate what those involve in the prompts that they're going to structure their agents with.
> 12:48: and so this too takes what for a lot of students, even some of the greatest students, even some of the greatest professors in a given field.
> 12:54: what is often just a matter of intuitions built up by mirroring.
> 12:57: You often learn to be a great writer by reading other great writers or imitating your professors, but no one can quite explicitly articulate what they're doing, as they're doing it, and maybe that is some of the magic of it, but for learning, it's often very valuable to be able to articulate those different steps and certainly for evaluating whether someone has learned, if you want it to not be vibes based, but to be.
> 13:16: Just a little bit more rigorous than that.
> 13:18: It's lovely to be able to see their externalizations, explicit and well articulated externalizations of what they think the thought processes in literary criticism are, or music interpretation or historical analysis, and that's what, you know, developing a multi-agent system, allows you to help students learn and then also helps you evaluate whether they've been successful.
> 13:42: OK, that's, I realize it's too long, but it's, so that'll be useful for something.
> 13:49: No, it's dope.

---

## Connection points / influence

*Below: places this conversation touches existing Theatrum material, places it might seed new Theatrum work, and a few structural/methodological observations worth holding for future reference. The institutional context here is the Bok Center (the dictation app heard "Box" / "Bok" inconsistently) and the Expos program (heard "Xbox" / "X pause" / "expos") at Harvard, plus Moira as Madeleine's name for a faculty workflow she and her boss have been developing.*

### What this conversation is doing, structurally

This is a planning conversation in which Madeleine and her boss are rehearsing how to describe their pedagogical work for an external party Karen will meet with. Two thematic categories surface:

1. **AI-resilient and AI-native assignment design** — the work the Bok Center is doing with faculty to examine how academic writing-assignment scaffolding holds up (or fails to hold up) under the conditions of LLM-assisted student work, and to redesign it accordingly.
2. **Multimodal academic communication / "remediation"** — the Bok Center's longer-running partnership with Expos (and other writing-intensive courses) on student projects that translate academic research into podcasts, conference presentations, social media campaigns, explainer videos, and (newly) full-stack web applications produced by AI-assisted "vibe coding."

The conversation's most substantive move is **the term-paper-as-apex-predator argument** (around 6:38–7:55 in the second-pass timestamps): the term paper persisted for decades or centuries because it was an ideal piece of *combined* engineering — simultaneously the best vehicle for student learning, the best instrument for faculty assessment of that learning, and the dominant format of academic communication that the professor was already using in their own research life.
**The threat of LLM-assisted writing is not that any one of those three functions is destroyed; it is that the alignment between the three is destroyed.** Once the student's paper is no longer a reliable instrument of the faculty member's assessment, the whole stack has to be re-engineered, because the three functions that were all riding on the same artifact must now be carried by separate artifacts.

The COVID-era classroom analogy (8:23–8:49) is the right reference: a lot of what the term-paper stack was doing was tacit and only became visible when the stack stopped working. Madeleine's framing — *"a lot of things that academic writing is accomplishing that we haven't yet necessarily called to explicit consciousness that we're going to have to start explicitly defining and enumerating so that we can again re-engineer these things in an AI-native era"* — is the load-bearing methodological claim of the whole talk.

### Connection points to existing Theatrum material

This conversation lands on **multiple Theatrum threads at once**, which is unusual for a single notes document and worth flagging.

#### To [[stances/instruments-are-not-creatures]]

The whole talk is, at one level, an applied case study of the position the Theatrum holds in this stance. The reason the term paper was a good "apex predator" is that **it was an instrument that did work above its station** — it was just paper-and-prose, but it carried the weight of three completely separate professional functions (learning, assessment, scholarly communication) at once. The reason ChatGPT is a threat is not that it's a creature competing with students or with the professor; it's that it's *another instrument* that disrupts the alignment the term-paper instrument had locked in.
The talk is essentially working out the practical pedagogical consequences of recognizing that *both* the term paper *and* ChatGPT *and* the multi-agent systems Madeleine is teaching students to build are *instruments*, none of them creatures, and the question is which alignment of instruments produces the best learning.

The 12:48–13:16 passage is doing something even closer to the stance: *"what is often just a matter of intuitions built up by mirroring. You often learn to be a great writer by reading other great writers or imitating your professors, but no one can quite explicitly articulate what they're doing, as they're doing it, and maybe that is some of the magic of it, but for learning, it's often very valuable to be able to articulate those different steps."*

This is the same instinct as the [[encounters/2026-04-06-llull-the-ninth-category]] — that the *explicit articulation* of what was previously tacit is a clarifying move that loses some magic and gains a lot of pedagogical and practical leverage. The Theatrum's whole bonsai-pass / HITL / explicit-gates discipline is the same move, applied to personal knowledge work, that Madeleine is describing applied to undergraduate humanities pedagogy.

#### To [[concepts/compounding-knowledge]] and [[encounters/2026-04-06-summoning-and-context-bundles]]

The summer faculty workshop Madeleine describes (5:25–6:03) — *"for faculty, they'll be moving not towards a final paper assignment, but towards a brand new course that they design from materials they have on their hard drive.
Their course readings, old lectures that can be transcribed, all their random notes across all of their Google folders, all the emails they've written to TFs about how to grade and respond to the projects of the course, all of that becomes valuable context that they can deploy as they try to generate courses with AI"* — **is exactly the summoning-into-a-new-project workflow the [[encounters/2026-04-06-summoning-and-context-bundles]] design encounter sketches**, but applied to a specific high-value use case (course design from a faculty member's accumulated materials) rather than to a personal-wiki-into-a-new-project case.

This is the most direct alignment between Madeleine's day-job pedagogical work and the Theatrum's own substrate-for-summoning orientation. The Theatrum is in some sense a personal-scale prototype of what Madeleine wants to teach Harvard faculty to do at course-design scale. The "Moira thing" Madeleine and her boss agree to deep-dive into (6:03–6:18) is the proper noun for this faculty-summoning workflow.

Worth noting that the Bok Center's Moira workflow and the Theatrum's bundling-design encounter are doing structurally cognate work, and **the Moira workflow is potentially the highest-value real-world testbed for what the Theatrum has been designing**. Test B from [[encounters/2026-04-06-summoning-and-context-bundles]] could plausibly be run on the Moira-faculty-workflow case.
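At its most mechanical layer, the faculty-summoning pattern reduces to: gather everything text-bearing, index it, and hand it over. As a throwaway sketch of that layer only (not an actual Moira implementation; every path, name, and file-type choice here is hypothetical), a context bundle could be assembled like this:

```python
from pathlib import Path

# Hypothetical choice: which file types count as text-bearing course
# material (assumes lectures and emails were already exported to text).
TEXT_SUFFIXES = {".md", ".txt", ".tex"}

def bundle_context(root: Path) -> str:
    """Concatenate every text-bearing file under `root` into a single
    context bundle: each file is preceded by a path header, and a table
    of contents up front serves as the bundle's index."""
    parts = []
    for path in sorted(root.rglob("*")):
        if path.is_file() and path.suffix.lower() in TEXT_SUFFIXES:
            rel = path.relative_to(root)
            parts.append(f"=== {rel} ===\n{path.read_text(encoding='utf-8')}")
    # The table of contents is just the path headers, collected up front.
    toc = "\n".join(p.splitlines()[0] for p in parts)
    return f"CONTENTS:\n{toc}\n\n" + "\n\n".join(parts)
```

The design point the sketch makes concrete is the one from the 10:53–11:16 passage: the table of contents up front is the "map of that text" that lets the model (and the faculty member) see what is available before anything is read.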
#### To the [[motifs/spatial-mnemonics]] classificatory-cataloging branch (newly added 2026-04-07)

The "context engineering" framing Madeleine uses in the 10:35–11:46 passage — *"context is important because the AI has no memory… you need to assemble and organize all of that text, every single thing that could possibly be valuable for the project, and then you need to understand the map of that text so that you can make sure that the AI is seeing the right thing at the right moment… coming up with that systematic and well-indexed array of texts for any discipline is like just a marvelous learning opportunity for students, and in fact probably involves more breadth, if not necessarily depth, than any of the steps you would have typically put at the front end of an essay writing assignment"* — **is the same kind of work that the classificatory-cataloging branch of the spatial-mnemonics motif is about**.

Locke's commonplace book, Thompson's motif index, and Ranganathan's faceted classification are all *systematic well-indexed arrays of texts*, and the discipline of building and maintaining one is exactly what Madeleine is now framing as a pedagogical goal for undergraduates. **The Bok Center's AI-assignment-sequence work is unintentionally teaching undergraduates the classificatory-cataloging discipline that the Theatrum has been theorizing.**

This is genuinely interesting and worth holding: a faculty-development organization at Harvard is, by following the practical demands of pedagogy in the LLM era, arriving at the same instinct the Theatrum reached by tracing Locke through Thompson to Ranganathan. The convergence is the kind of evidence that the third-branch refactor of the motif page was honest.
#### To [[ramon-llull]] and the *ars combinatoria* tradition (lightly)

The "multi-agent systems" framing in the 11:46–12:48 passage — *"the multi-agent processes in the world of AI and industry… they try to think about all the different types of intelligence that are necessary to respond to a particular consumer problem or to a particular software engineering problem, and if students in the context of a humanities course, let's say, need to similarly think about all the intellectual moves that need to happen and operate on all that context we just mentioned in order to construct a meaningful argument, or in order to translate a foreign text"* — is structurally cognate with what Llull was doing when he decomposed the act of theological reasoning into the rotating combinatorial wheels of the *Ars demonstrativa*.

**Llull was teaching his students to decompose intellectual moves and externalize them as combinatorial procedure;** Madeleine is teaching her students to do the same thing, in different vocabulary, with different tools, on different course material. Worth noting because it suggests the Llull thread the Theatrum has been building through medieval logic and Pasquinelli's *ars combinatoria* lineage actually has a contemporary applied-pedagogy descendant — and that descendant is sitting in Madeleine's day-job workflow.

#### To [[matteo-pasquinelli]]

Pasquinelli's "automation of automation" framing and his claim that AI is the *automation of the historical psychometrics of labor* (in [[pasquinelli-automation-of-general-intelligence-2023]]) are in a real but uncomfortable relationship with the AI-pedagogy work Madeleine is describing. Pasquinelli would say, roughly, that the term-paper-as-apex-predator framing is itself a labor-metric framing — the term paper was apex *because* it was the most efficient single artifact for measuring student labor, and the threat of ChatGPT is a threat to the metric, not to the learning.
The pedagogical move Madeleine is describing — replacing the term paper with multi-agent assignment sequences and context-engineering work — is not avoiding the labor-metric question; it's *changing what is being measured* so that the new metric captures something the old one couldn't. This is not a critique of the work Madeleine is describing (which is, on its own terms, exactly the right move for the constraints of contemporary pedagogy); it's a flag that the Pasquinelli reading is one of the available critical lenses, and that the term-paper-as-apex-predator framing has political-economy stakes that the talk does not name explicitly. Worth holding for any future Theatrum thread on the political economy of pedagogy.

### Things that might seed new Theatrum work

A few things in the conversation are not yet in the Theatrum but feel like they want to be:

- **The "term paper as apex predator" framing itself.** This is a good enough metaphor for what the term paper was doing that it could honestly become a small concept page or stance, especially if Madeleine wants to develop the institutional-pedagogy thread. It's a single-sentence diagnosis of why the LLM era is hard, and the Theatrum doesn't currently have anything like it.
- **"Re-engineering academic writing in an AI-native era" as a broader project.** The 8:23–9:08 passage is gesturing at something the Theatrum could plausibly host as its own thread: the work of explicitly enumerating what tacit functions the term paper was carrying, and what new alignments of instruments could carry those functions in the LLM era. This is potentially a substantial concept page or even a stance, and it's the kind of thing the Theatrum is built for.
- **Moira (the faculty-summoning workflow).** Currently a proper noun in this transcript and nothing else.
  If the Moira workflow develops into a more substantial named project, it deserves a Theatrum page — possibly under a new genre (working notes on a contemporary applied project Madeleine is leading), possibly as an encounter, possibly as a concept page on the underlying pattern. Worth flagging now so future Claude sessions know the name.
- **"Vibe coding" as an assignment genre.** The term is in scare quotes in the transcript and is a real contemporary phenomenon worth pinning. If the Theatrum's AI-pedagogy thread develops, this is one of the terms that would want a small concept page.
- **The COVID/Zoom classroom analogy as a methodological move.** The argument that the LLM era is doing to academic writing what Zoom did to classroom embodiment is a *clarifying analogy* in the strict sense — it's a tool for making the tacit explicit. If Madeleine ever develops a stance page on "what kind of analytic moves are valuable for re-engineering professional-knowledge-work in eras of disruption," the COVID-Zoom analogy is one of its anchors.
- **The Bok Center as an institutional context worth naming.** The Theatrum doesn't have a Bok Center page or any institutional reference for the kind of faculty-development organization Madeleine works at. If a future thread on contemporary faculty-development infrastructure develops, this is the natural anchor. (Possibly a new `institution` vocabulary value: `teaching-and-learning-center` or similar — but that's a single-case observation and doesn't earn coining yet.)

### Methodological / structural notes for future reference

- **The first-pass dictation file is genuinely much worse than the second pass**, and the differences are instructive.
  The dictation app's transcription quality on this content was poor in ways that appear systematic: technical-pedagogical vocabulary ("Expos," "Bok Center") gets mangled into consumer-product names ("Xbox," "Box"); the brand name "ChatGPT" becomes "Chachi PT"; multi-clause sentences with embedded scare quotes ("scare quotes" itself becomes "scary quotes") get badly broken. **For future transcript captures, the second-pass timestamped version is the one to trust**, and the first-pass version is best treated as a working artifact only.
- **The audio file `Harvard University 16.m4a` is the canonical source.** If the transcript needs verification or correction at any point, the audio is the ground truth, not either dictation pass.
- **This is the first transcript in the new `transcripts/` folder.** Convention notes for future transcripts:
  1. Filename: `YYYY-MM-DD-<slug>.md`. Today's slug was `planning`; future slugs should be similarly compact and descriptive of what the meeting was about, not who attended.
  2. No YAML frontmatter (per Madeleine's call). Transcripts are a working genre, not a Theatrum source.
  3. Two sections: `## Transcript` (verbatim) and `## Connection points / influence` (Claude's synthesis after the transcript). Multiple transcript passes should each be preserved if they exist, with a clear note about which one is higher fidelity.
  4. Audio reference (m4a or other) noted at the top if present in the same folder.
  5. Speakers identified at the top.
  6. *On the record* unless explicitly marked otherwise.
- **Transcripts may eventually deserve their own discoverability infrastructure.** Right now `transcripts/` is a top-level folder parallel to `sources/`, `figures/`, etc., but it's not yet referenced in [[index]] or [[THEATRUM]]. If the genre proves useful and accumulates more than a handful of files, a small note in [[index]] is the right next move.
  Held off this round because one transcript isn't enough to justify the infrastructure addition. Worth revisiting after the second or third transcript.
- **The Theatrum and Madeleine's day job are now structurally entangled in a useful way.** Until this transcript, the Theatrum's contemporary applied content has been about AI history (Pasquinelli, MemPalace) and personal knowledge work (the bundling encounter, the why-not-rag stance). This transcript is the first piece of *Madeleine's actual professional pedagogical work* in the Theatrum, and it lights up cross-references to most of the existing Theatrum threads at once. Worth watching as more transcripts arrive: the Theatrum may turn out to be the right substrate for thinking about the Bok Center's AI-pedagogy work too, not just for thinking about Madeleine's intellectual interests in mnemonics and combinatorial logic. The two have always been closer than they look from outside.

---

## Transcript (third pass — cleaned and paragraph-segmented)

*Cleaned by Claude from the second-pass timestamped capture, with the obvious dictation errors corrected ("Expos" for "Xbox/X pause," "Bok" for "Box," "ChatGPT" for "Chachi PT/chat GBT," "scare quotes" for "scary quotes," "Moira" for "Moyer," "Memento" for "Momento," "software engineering" for "soccer engineering," "TFs" for "TS"), filler words and dictation restarts lightly trimmed, and the substance preserved verbatim. Use this version for working purposes; the second-pass version above remains the more honest record of what the dictation app captured. The audio file `Harvard University 16.m4a` is the canonical source if any passage needs verification.*

OK, so these are things that the person Karen's meeting with might be interested in, and I think we want to say that these are in a couple of categories. One is things related to the changes that AI is introducing in the teaching and learning space.
And then secondly and relatedly are ways in which we've been supporting academic communication and sort of long-form student projects — like essays, although alternatives to essays as well — across the curriculum, and how that intersects with the problems and opportunities that AI is producing.

So, on the first side, we've been working with a lot of faculty to develop what they sometimes term *AI-resilient* assignments. This means oral assignments and in-class writing assignments; assignments where, if students are doing a take-home paper, it's based on fresh or local data or experiences they could only have had in class, or it has multiple in-person touch points like oral pitches and presentations — moments where they have to present their work in progress and get feedback on it. Things that disincentivize cheating, to be totally frank about it, but then also are just rich and robust opportunities for students to deepen their ideas about course material in rigorous ways that are essential to academic learning.

We've partnered really frequently with Expos courses on projects they have towards the end of the term, where students are "remediating" — scare quotes — their papers. This means taking an academic paper and turning it into something like a conference presentation or a podcast or a social media campaign or an explainer video. While it's true that these are fun assignments for students and they boost student engagement, one of the things we do as we partner with these courses is help them think about how this new medium can actually be a *better* way than conventional writing for wrestling with the subject matter of the course in question — not in exactly the same way as the paper does. The paper is still great at doing what it does.
But once you have to, for instance, turn something into a short-form podcast or YouTube video that could be seen by any audience member, not just someone who's already an academic researcher in that area, you actually need a different and at times deeper understanding of the material, so that you can explain it to a new audience and help them understand all the things you can sometimes take for granted when you're talking to people who are already experts in your domain. So that's in that zone of leveling up academic communication and partnering with Expos on multimodal student projects.

And then in the world of AI, we are working to offer students and faculty workshops that are for developing their AI literacy — the understanding of how to use different tools — and then with some courses that are really leaning in, we've begun developing things that go far beyond one or two introductory workshops, into longer-form assignment sequences that teach students bit by bit really cutting-edge ways of using AI that are at the limits of what is happening in AI research and industry right now, but in complete alignment with the learning objectives of a given course.

For instance, in comparative literature, students engage in assignment sequences that bit by bit build up to complex web applications that analyze or produce or translate literature. This forces students — or compels students — to think deeply about more texts that are assigned across the course of the term than they might have if they had just written an essay on its own.

Many, if not all, of these assignments terminate in a "vibe coding" — scare quotes — assignment, where students, even those who have never necessarily coded before, are using AI to write code that generates complex full-stack web applications that perform the work that would typically be performed by a final paper. This is great because it gives them skills that are valuable to them even beyond their lives at Harvard.
But more crucially, it forces them to wrestle with the course material in a way that is certainly more rigorous than just getting ChatGPT to write a paper for them, and arguably — especially if the course material is quantitative or visual or auditory — it's actually a more appropriate way for them to be analyzing and presenting the results of their research on that data than an old-school essay would have been.

As for faculty: they are also using AI to develop assignments, activities, course materials, interactive simulations in the sciences, data visualizations for students to gain deeper intuitions about the equations or principles or laws they're learning about.

This summer we're going to be offering a set of workshops for faculty that teach them the very same leveled-up AI skills we're teaching the undergrads — but for faculty, they'll be moving not towards a final paper assignment, but towards a brand new course that they design from materials they have on their hard drives. Their course readings, old lectures that can be transcribed, all their random notes across all of their Google folders, all the emails they've written to TFs about how to grade and respond to the projects of the course — all of that becomes valuable context that they can deploy as they try to generate courses with AI.

---

So I think that's good, but we have to do a deep dive into the Moira thing.

— OK.

— How about you do a deep dive into the Moira thing right now.

— So I think there are a couple of ways into this. I don't know that this guy cares about it, but I think for *us* we care about something that would say this. I'll do a version of this that's for talking to faculty personally, and not necessarily a thing we would say in public. The reason that the term paper was kind of like the apex predator of the teaching and learning ecosystem for so many decades, or centuries even, is that it's just an ideal piece of engineering.
It's the perfect way for students to develop their understanding of course material — forcing them to come up with analyses of the data that the course is built around, to consider counterarguments presented by other sources (whether other writers writing about the same data set or texts, or theoretical models that claim to explain the entire domain those texts come from). All of that wrestling with ideas was essential for the student learning process.

And then the paper was also simultaneously the best way for the faculty member to judge whether the student actually had learned everything the course was designed to teach them. And moreover, it was also the tool for academic communication that the professor was using in their own life as a researcher. That alignment meant that everyone was spending just a ton of their time on this one tool, for building ideas and disseminating ideas. And it was the perfect, perfect way to organize a class.

Now, ChatGPT — this is why the threat of ChatGPT is such a problem. Because if it takes away a few of those elements of the paper, potentially it's no longer the best way of judging whether students have understood the course material, if those students are misusing ChatGPT by having it write the paper.

It means we have to kind of decompose all the different things that academic writing was doing and re-engineer new solutions for ourselves in the age of AI. And this is what's very challenging. It's exciting in a way, because we have to think deeply about things in the same way that COVID forced us to think about all the various things the classroom was doing that you didn't quite understand — forcing people to dress in daytime clothes, or not to walk away for coffee in the middle of it. There were a ton of things we didn't really need to explicitly articulate for ourselves until we all found ourselves in a Zoom classroom.
And likewise, there's a lot of things that academic writing is accomplishing that we haven't yet necessarily called to explicit consciousness — that we're going to have to start explicitly defining and enumerating so that we can re-engineer these things in an AI-native era. So that's a setup that's going to be valuable for us in some contexts.

So when you're teaching the paper, you often think about all the steps that are involved in constructing a paper, and you break them down into a set of assignments that "scaffold" — scare quotes — that paper for students. So perhaps they perform a lit review or an annotated bibliography, where they go and find some texts and have a sense of how they're going to use those texts. They might collect data in a quantitative social science course, and then work to analyze it. They might work to come up with some kind of outline of their thinking at some point. They might run ideas in development by some test audience, whether that's their peers or the TFs.

There's this sort of cycle to creating a paper, and well-designed courses give students feedback or even grades on all the steps of that process leading up to the paper, so that students are not just producing the best paper they possibly could, but also learning how all of those intermediate steps are *themselves* ways of learning about the course material. Your lit review is not just a means to the end that is the paper; it's also a way of learning about the field that you're studying.

So, what we've been trying to do in these AI-enhanced assignments is take the various steps that are involved in cutting-edge work with AI and map those onto some of the steps people have to perform in different disciplines. What does that look like? Well, in the world of AI right now, everyone is very excited about *context engineering*, for instance, and *multi-agent systems*.
These are jargon-y things, but when you unpack them you can start to understand ways in which they map onto certain sorts of intellectual moves that matter in the disciplines.

Context is important because the AI has no memory — it's like the character in *Memento*, who forgets who he is every single day and has to rebuild it all from scratch. What this means is that you need to control as much as possible the text that goes into the AI before it starts predicting the next word. And that means you need to assemble and organize all of that text — every single thing that could possibly be valuable for the project — and then you need to understand the *map* of that text, so that you can make sure the AI is seeing the right thing at the right moment.

Well, it doesn't take a genius to realize that coming up with that systematic and well-indexed array of texts for any discipline is just a marvelous learning opportunity for students. In fact it probably involves more breadth, if not necessarily depth, than any of the steps you would have typically put at the front end of an essay-writing assignment.

And then, as another example: multi-agent processes, in the world of AI and industry. People are very excited about this for creating software engineering systems or call centers. These are not necessarily romantic to academics, but such systems try to think about all the different types of intelligence that are necessary to respond to a particular consumer problem or to a particular software engineering problem.
And if students in the context of a humanities course, let's say, need to similarly think about all the intellectual moves that need to happen and operate on all the context we just mentioned — in order to construct a meaningful argument, or in order to translate a foreign text, or in order even to write a text like an author they're studying — they need to decompose all the various maneuvers that are part of academic writing or literary writing or translation, and begin to clearly articulate what those involve in the prompts they're going to structure their agents with.

And so this, too, takes what is often, even for some of the greatest students and even some of the greatest professors in a given field, just a matter of intuitions built up by mirroring, and makes it explicit. You often learn to be a great writer by reading other great writers, or imitating your professors. But no one can quite explicitly articulate what they're doing as they're doing it — and maybe that is some of the magic of it.

But for *learning*, it's often very valuable to be able to articulate those different steps. And certainly for *evaluating* whether someone has learned, if you want it to not be vibes-based but to be just a little bit more rigorous than that, it's lovely to be able to see their externalizations — explicit, well-articulated externalizations — of what they think the thought processes in literary criticism are, or music interpretation, or historical analysis. And that's what developing a multi-agent system allows you to help students learn, and then also helps you evaluate whether they've been successful.

OK, I realize it's too long, but it'll be useful for something.

— No, it's dope.
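---

The "context engineering" move the transcript describes (assemble every text that could matter for a project, index it, and route the right subset to the model at the right moment) can be sketched in miniature. This is an illustrative sketch added after the transcript, not part of it; the function names, the tag-based index, and the crude word-count budget are all invented for the example, a stand-in for whatever real indexing and token accounting a course assignment would use.

```python
# Minimal sketch of "context engineering": build a tagged index of course
# texts, then select the subset that fits a size budget for a given query.
# In practice the budget would be a token count, not a word count.

def build_index(texts):
    """Map each tag to the titles of the texts that carry it."""
    index = {}
    for title, (tags, _body) in texts.items():
        for tag in tags:
            index.setdefault(tag, []).append(title)
    return index

def select_context(texts, index, query_tags, budget_words):
    """Pick tagged texts, in tag order, until the word budget runs out."""
    chosen, used = [], 0
    for tag in query_tags:
        for title in index.get(tag, []):
            words = len(texts[title][1].split())
            if title not in chosen and used + words <= budget_words:
                chosen.append(title)
                used += words
    return chosen

# Toy corpus: each entry is (tags, body). Titles and tags are invented.
texts = {
    "Lecture 3 notes": (["meter", "translation"], "Notes on meter " * 50),
    "Seminar email":   (["grading"], "How to grade the final project " * 30),
    "Primary source":  (["translation"], "The text itself " * 200),
}
index = build_index(texts)
print(select_context(texts, index, ["translation", "meter"], budget_words=400))
# → ['Lecture 3 notes']  (the 600-word primary source overflows the budget)
```

The pedagogical point the transcript makes is visible even at this scale: the student who builds `texts` and its index has to decide what each source is *about* and how large it is relative to the whole, which is exactly the "map of the text" the speaker describes.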