# bok-ai-lab-20250509-glossary

Full glossary [here](https://docs.google.com/document/d/13EQKWEHyeJNlQxlRf8lSzw1TKhqySilOiVhbbHUIB08/edit?tab=t.0#heading=h.lc7194olimyo)

# AI Lab Glossary

---

## AI Double

A synthetic reproduction of a specific individual’s voice, appearance, style, and/or decision patterns, generated via machine-learning pipelines such as voice-cloning neural vocoders or fine-tuned language models. In practice, an AI double might serve as a virtual guest lecturer, a deceased author answering Q-and-A, or even a professor’s “office-hours clone” that fields FAQs. Because the clone draws on personal biometric or textual data, questions of consent, licensing, and reputational risk loom large: Who controls the model weights? Who updates or deletes them if the human changes their mind—or their scholarly opinions? Within education, AI doubles can widen access to expertise yet also blur boundaries of authorship, potentially diluting the pedagogical value of direct human presence.

---

## Doppelgänger

German folklore names the *Doppelgänger* as a ghostly counterpart that appears as an omen, but the idea has traveled into literature (e.g., Dostoevsky’s **The Double**) and film (Jordan Peele’s **Us**) to symbolize identity fracture. In digital culture, deepfakes and look-alike filters create networked doppelgängers that circulate outside an original’s control. Students encountering their own algorithmically generated “twins” may experience both fascination and anxiety, prompting reflection on self-representation, authenticity, and digital permanence. The trope thus serves as a narrative lens for discussing AI cloning ethics and the psychosocial effects of ubiquitous replication technologies.

---

## Uncanny (*Das Unheimliche*)

Sigmund Freud’s 1919 essay defines the uncanny as the disturbing return of something once familiar yet now estranged—a childhood doll that suddenly “comes to life,” or today’s hyper-realistic humanoid robot. Human–robot-interaction research leverages this insight through the “uncanny valley” curve, showing that near-perfect likeness can evoke revulsion rather than empathy. In the classroom, a photoreal AI avatar that gestures a bit *too* mechanically risks triggering discomfort that undermines learning. Designers counteract this with stylization (e.g., Pixar-style tutors) or by purposefully signaling artificiality so that expectations align.

---

## Mirror Stage (Lacan)

Jacques Lacan argued that infants form an “I” by identifying with their mirror reflection, an alienated image that precedes linguistic self-concept. When users engage an AI double of themselves—hearing their cloned voice dispense advice, or watching a stylized avatar mimic facial expressions—they echo this formative encounter but with far more complex socio-technical feedback loops. The “specular” relationship can illuminate agency: Is the learner authoring their knowledge, or merely consuming a flattering reflection? Lacanian critics use this frame to interrogate consumer platforms that monetize self-mirrors (beauty filters, voice skins) and to caution educators about the pedagogical cost of over-personalized echoes.

---

## Aura (Walter Benjamin)

Benjamin’s *aura* denotes the here-and-now uniqueness of an artwork—its spatiotemporal “presence in time and space.” Mechanical (now digital) reproduction erodes this aura, allowing endless copies that detach art from ritual.
AI doubles extend this logic: a professor’s lecture can be re-synthesized, remixed, or delivered concurrently in dozens of classrooms, trading aura for scalability. Critics fear an impoverished intellectual intimacy; proponents note that democratized access often outweighs lost mystique. Pedagogically, instructors might juxtapose live sessions (high aura) with AI-mediated tutorials to highlight different modes of engagement.

---

## Simulacrum (Baudrillard)

Jean Baudrillard distinguishes four successive phases of the image—faithful reflection, perversion of reality, masking the absence of reality, and the pure simulacrum: a copy with no original referent, only circulating signs. GPT-generated “Shakespearean” sonnets or AI “Einsteins” answering physics questions exemplify pure simulacra: fluent and persuasive, yet severed from genuine authorial intention. In educational design, confronting students with simulacra (e.g., a Socrates chatbot) can spark meta-discussions about authority and epistemic trust. Still, without transparent framing, learners may uncritically absorb fabricated content, so scaffolding is vital.

---

## Language-Game (Wittgenstein)

For the later Wittgenstein, meaning resides not in words alone but in their public uses—each *language-game* governed by implicit rules. Prompt engineering thus parallels crafting a rule-set in which an LLM’s replies “mean” appropriately: a therapeutic language-game differs from an academic critique. Misalignment (e.g., using casual chit-chat prompts for rigorous debate) leads to pragmatic failure, not merely syntax errors. Teaching students to articulate game-specific norms—citation style, tone, permissible speculation—strengthens the fidelity of AI-assisted writing.

---

## Synthetic Authority

Authority traditionally flows from credentials; *synthetic authority* accrues when audiences ascribe expertise to AI personas regardless of accreditation. Students might defer to a polished chatbot over a hesitant TA, even if the latter’s content is more accurate. Designers can modulate synthetic authority by revealing provenance (“Powered by model X, accuracy Y %”) or by embedding disclaimers that invite critical interrogation. Research indicates that transparency cues, first-person uncertainty markers, and opportunities for verification reduce over-reliance on synthetic voices.

---

## Replication Anxiety

From ancient myths about echoing spirits to modern fears of biomedical cloning, humanity has long harbored unease toward copies that might usurp the original. In contemporary classrooms, replication anxiety surfaces when AI recreates a student’s accent or when deepfake videos circulate misinformation. Such anxiety often intersects with cultural notions of soul, authorship, or labor displacement. Structured dialogue—e.g., comparing Walter Benjamin’s concerns to current AI-art debates—can help learners historicize and critically analyze their discomfort.

---

## Conversational AI

An umbrella term for chatbots, voice assistants, and multimodal dialogue systems that parse user input (ASR/NLU) and generate context-appropriate outputs (NLG/TTS). Architectures range from retrieval-based FAQ bots to generative LLMs fine-tuned with reinforcement learning from human feedback (RLHF). Educational applications include Socratic tutors, peer-explanation bots, and language-practice partners. Implementation challenges include maintaining domain accuracy, preventing bias, logging data ethically, and blending seamlessly into face-to-face or hybrid classrooms.
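To make the retrieval-based end of this spectrum concrete, here is a minimal sketch in Python: a toy FAQ bot that matches a student’s question against a hand-written question bank by token overlap. The entries, threshold, and wording are illustrative assumptions; a generative system would replace the lookup with an LLM call.

```python
import re

# Minimal retrieval-based FAQ bot: scores each stored question by token
# overlap with the user's input and returns the best-matching answer.
FAQS = {
    "when are office hours": "Office hours run Tuesdays 2-4pm.",
    "what is the late policy": "Late work loses 10% per day, capped at 50%.",
    "how long should the essay be": "Aim for 1,500-2,000 words.",
}

def tokenize(text: str) -> set[str]:
    """Lowercase and keep word characters only."""
    return set(re.findall(r"[a-z']+", text.lower()))

def answer(user_input: str, threshold: float = 0.3) -> str:
    """Return the best answer, or a fallback when no match is confident."""
    query = tokenize(user_input)
    best_score, best_reply = 0.0, ""
    for question, reply in FAQS.items():
        q_tokens = tokenize(question)
        # Jaccard similarity: shared tokens relative to all tokens used.
        score = len(query & q_tokens) / len(query | q_tokens)
        if score > best_score:
            best_score, best_reply = score, reply
    if best_score < threshold:
        return "I'm not sure; let me flag this for the instructor."
    return best_reply

print(answer("When are your office hours?"))  # -> office-hours reply
```

Even at this toy scale, the entry’s design questions surface: what counts as a confident match, and what the bot should do when it is unsure.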
---

## Large Language Model (LLM)

A transformer network (e.g., GPT-4o) trained on trillions of tokens and capable of few-shot in-context learning. Pedagogically, LLMs can summarize lectures, draft feedback, or role-play debate opponents. Risks include hallucination, over-generalization, and data leakage if prompts contain sensitive material. Fine-tuning or retrieval-augmented generation pipelines can align an LLM with curriculum-specific content, while guardrails (moderation, system prompts) reduce policy violations.

---

## Prompt Engineering

An emergent literacy akin to rhetoric for machines: crafting inputs—system, user, and example prompts—that steer model outputs toward desired style, substance, and length. Techniques include role assignment (“You are a skeptical historian …”), chain-of-thought exemplars, and output schemas (JSON rubrics). Pedagogical benefit: students learn metacognition by iteratively refining prompts, diagnosing model failure modes, and articulating clearer problem statements—a modern extension of writing-across-the-curriculum.

---

## Retrieval-Augmented Generation (RAG)

Combines a vector-database search layer with generative models to ground answers in source material. Instructors can feed course readings into RAG pipelines so AI tutors cite canonical texts rather than hallucinating. Students querying “Explain Lacan’s mirror stage” receive excerpts from assigned readings alongside synthesized commentary, fostering evidence-based inquiry. The technique also enables real-time personalized study guides that adapt to each learner’s question history.

---

## Vector Embedding

A high-dimensional numeric array encoding semantic proximity among words, sentences, or documents. Embeddings power similarity search (finding conceptually related passages), clustering (topic modeling), and personalization (matching a student’s query style to curated resources). In multimodal deployments, audio, image, and text embeddings converge, letting a system map a chalkboard diagram to relevant paragraphs in the syllabus.
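The last three entries interlock in practice: embeddings index the readings, retrieval grounds the model, and prompt engineering frames the grounded answer. The sketch below is a self-contained illustration under stated assumptions: `embed` is a toy hashing stand-in for a real sentence-embedding model, the three “readings” are placeholders, and the assembled prompt would be sent to an LLM rather than printed.

```python
import numpy as np

# Toy stand-in for a real embedding model: hashes words into a fixed-size
# vector. Real pipelines call a learned sentence-embedding model instead.
def embed(text: str, dim: int = 64) -> np.ndarray:
    vec = np.zeros(dim)
    for word in text.lower().split():
        vec[hash(word) % dim] += 1.0
    norm = np.linalg.norm(vec)
    return vec / norm if norm else vec

# "Vector database": course readings stored alongside their embeddings.
READINGS = [
    "Lacan: the infant forms an 'I' by identifying with its mirror image.",
    "Benjamin: mechanical reproduction strips the artwork of its aura.",
    "Freud: the uncanny is the familiar returning in estranged form.",
]
index = [(passage, embed(passage)) for passage in READINGS]

def retrieve(query: str, k: int = 2) -> list[str]:
    """Rank passages by cosine similarity to the query embedding."""
    q = embed(query)
    scored = sorted(index, key=lambda pe: float(pe[1] @ q), reverse=True)
    return [passage for passage, _ in scored[:k]]

# Retrieval-augmented, persona-framed prompt assembly.
query = "Explain Lacan's mirror stage."
context = "\n".join(f"- {p}" for p in retrieve(query))
prompt = (
    "You are a patient philosophy tutor. Cite only the excerpts below;\n"
    "say 'not in the readings' rather than inventing sources.\n\n"
    f"Excerpts:\n{context}\n\nStudent question: {query}"
)
print(prompt)  # this string is what would be sent to the LLM
```

Because the embeddings are unit-normalized, the dot product in `retrieve` is exactly cosine similarity; that single design choice is what lets “semantic proximity” drive which excerpts ground the answer.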
---

## Persona Prompt

A deliberate, often multi-paragraph instruction set that specifies knowledge scope, tone, boundaries, and behavioral quirks—e.g., “Respond as Simone de Beauvoir: cite *The Second Sex*, favor existentialist reasoning, avoid anachronisms.” Effective persona prompts are iterative artifacts, refined via user feedback and conversation transcripts to keep the AI “in character.” Over-policing personas, however, can produce stiff or tokenistic dialogue; striking a balance between authenticity and flexibility is key.

---

## Turn-Taking Algorithm

Rule-based or ML-driven logic that decides when an AI speaks, pauses, or yields—crucial in multimodal classrooms with live mics, chat feeds, and projected visuals. Too-frequent interjections create cognitive overload; too sparse, and the AI feels inert. Developers tune thresholds (silence duration, clause completion) and incorporate prosody cues to mimic conversational etiquette. Integrating classroom sensors (hand-raise detection) can further synchronize AI participation with human rhythms.

---

## Pedagogical Dramaturgy

Approaches lesson planning like stagecraft: objectives become dramatic stakes, activities form scenes, and participants adopt roles that generate narrative tension. With AI onstage, dramaturgy extends to scripting contingency paths—what if the chatbot misinterprets a question? Live “improvisation curves” anticipate branching storylines so learning goals survive unexpected AI outputs. This theatrical lens helps instructors choreograph engaging, resilient sessions.

---

## Fourth Wall

Traditionally the invisible barrier separating performers from audience; breaking it acknowledges spectators. AI clones often inhabit a liminal space: they are simultaneously artifact and “live” interlocutor. When an AI-Einstein addresses a student by name, the classroom oscillates between lecture theatre and interactive fiction. Discussing this boundary helps students parse levels of reality and maintain critical distance.

---

## Dialogic Teaching

An inquiry-oriented pedagogy valuing exploratory talk where ideas are collectively built, challenged, and refined. Conversational AI can scaffold dialogic moves—asking clarifying questions, requesting evidence, synthesizing viewpoints—modeling discourse norms for quieter students. Research (e.g., Neil Mercer, Rupert Wegerif) shows such structured dialogue heightens reasoning skills; AI offers on-demand rehearsal partners.

---

## Cognitive Load

John Sweller’s framework differentiates intrinsic (task complexity), extraneous (presentation), and germane (schema building) load. AI dashboards risk extraneous overload via flashing alerts or overly verbose explanations. Yet well-timed micro-summaries reduce intrinsic burden by chunking information. Designers iterate UI, pacing, and modality (audio vs. text) to keep total load within working-memory limits.

---

## Theory of Mind (ToM)

The capacity to infer others’ beliefs and intentions. Cutting-edge models (e.g., DeepMind’s ToMnet) approximate ToM by predicting user goals. In tutoring, an AI anticipating misconceptions can pre-emptively scaffold. Critics argue true ToM requires embodiment; still, simulated ToM cues (e.g., “I sense you might be stuck”) measurably boost learner engagement.

---

## Social Presence

Defined as the sense of “being with another” via mediated channels. High social presence correlates with motivation and satisfaction in online courses. Voice clones with natural prosody, on-video facial animation, and adaptive humor raise presence, but mis-calibrated familiarity may veer into *creepiness* (cf. uncanny valley). Balancing warmth and formality therefore becomes an instructional design choice.

---

## Transference / Counter-transference

Psychoanalytic constructs in which feelings are displaced onto surrogate figures; in digital pedagogy, students may project admiration or frustration onto bots, while staff project labor expectations onto automated graders. Recognizing these dynamics clarifies when emotional responses derive from personal history rather than current content, guiding more ethical, bounded AI relationships.

---

## Cognitive Apprenticeship

Collins, Brown & Newman’s model emphasizes making expert reasoning visible (modeling), giving learners guided practice (coaching), and gradually fading support. AI tutors excel at relentless availability, step-by-step hinting, and capturing process logs for reflection. However, they lack human tacit knowledge in domains requiring affective nuance, so blended human–AI mentorship remains optimal.

---

## Scaffolding

Temporary supports—sentence starters, graphic organizers, adaptive hints—removed once competence stabilizes. LLMs can generate bespoke scaffolds in real time: if a student misuses a concept, the bot offers a targeted mini-tutorial; mastery triggers scaffold withdrawal. Effective systems track learner state to avoid “over-scaffolding,” which hampers autonomy.
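A minimal sketch of that fading logic, assuming a per-concept mastery estimate kept as an exponential moving average of recent correctness; the hint ladder and thresholds below are illustrative, not a standard.

```python
from collections import defaultdict

# Hint ladder from heaviest support to lightest; content is illustrative.
HINTS = {
    "chain_rule": [
        "Full worked example: d/dx f(g(x)) = f'(g(x)) * g'(x), step by step.",
        "Hint: identify the outer and inner functions first.",
        "Reminder: this is a chain-rule problem.",
    ],
}

class Scaffolder:
    """Tracks a running mastery estimate per concept and fades hints.

    Mastery is an exponential moving average of recent correctness, so
    support withdraws automatically as competence stabilizes ('fading').
    """

    def __init__(self, rate: float = 0.3):
        self.rate = rate
        self.mastery = defaultdict(float)  # concept -> 0.0..1.0

    def record(self, concept: str, correct: bool) -> None:
        target = 1.0 if correct else 0.0
        self.mastery[concept] += self.rate * (target - self.mastery[concept])

    def hint(self, concept: str) -> str | None:
        m = self.mastery[concept]
        levels = HINTS[concept]
        if m < 0.4:
            return levels[0]   # full modeling
        if m < 0.75:
            return levels[1]   # partial support
        if m < 0.9:
            return levels[2]   # light nudge
        return None            # scaffold withdrawn

s = Scaffolder()
for outcome in [False, False, True, True, True, True, True]:
    s.record("chain_rule", outcome)
print(s.hint("chain_rule"))  # after a run of successes: light nudge only
```

Estimating learner state *before* choosing a hint is the guard against the “over-scaffolding” the entry warns about: the bot only offers as much support as the mastery estimate justifies.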
---

## Zone of Proximal Development (ZPD)

Vygotsky’s sweet spot between solo ability and assisted capability. Adaptive AI estimates ZPD via performance analytics, then tailors task difficulty (e.g., dynamic question banks). Ethical concerns: a mis-estimated ZPD can pigeonhole learners; transparent dashboards and human oversight mitigate misclassification.

---

## Generative Learning

Fiorella & Mayer list eight activities (summarizing, mapping, teaching others, etc.) that drive active construction of meaning. Asking students to *teach* an AI agent triggers the “protégé effect,” boosting their effort and retention. Researchers now quantify generative load via eye-tracking and AI-scored explanation quality.

---

## Synthetic Voice Cloning

Uses speaker embeddings plus neural vocoders (e.g., HiFi-GAN) to replicate timbre and prosody from < 60 s of audio. Legitimate uses: accessibility (custom TTS for laryngectomy patients), heritage preservation (endangered languages). Malicious uses: scam calls, political disinformation. Watermarking and voice “passphrases” are proposed safeguards.

---

## Digital Watermarking

Invisible patterns—spectral noise in audio, pixel color shifts in images, token markers in text—that signify AI origin or encode ownership. NIST and industry proposals aim to standardize watermarking to flag synthetic media, helping instructors verify the authenticity of student work or public-facing content.
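To show the “token markers in text” idea at its simplest, here is a toy zero-width-character watermark that hides a bit string in invisible Unicode characters between words. Real text-watermarking schemes are statistical and designed to survive editing; this one is trivially stripped and is for illustration only.

```python
ZW0, ZW1 = "\u200b", "\u200c"  # zero-width space / non-joiner encode 0 and 1

def embed_watermark(text: str, payload: str) -> str:
    """Append one payload bit (as a zero-width char) after each word."""
    bits = "".join(f"{ord(c):08b}" for c in payload)
    words = text.split(" ")
    marked = [w + (ZW1 if bit == "1" else ZW0) for w, bit in zip(words, bits)]
    return " ".join(marked + words[len(bits):])

def extract_watermark(text: str) -> str:
    """Read the hidden bits back and decode them into characters."""
    bits = "".join("1" if c == ZW1 else "0" for c in text if c in (ZW0, ZW1))
    chars = [chr(int(bits[i:i + 8], 2)) for i in range(0, len(bits) - 7, 8)]
    return "".join(chars)

# Needs at least 8 words per payload character.
essay = ("The mirror stage describes how infants form a unified self "
         "image via reflection and identification with it")
marked = embed_watermark(essay, "AI")
print(extract_watermark(marked))  # -> "AI"
```

The marked text looks identical on screen, which is exactly the point: the provenance signal rides along invisibly until a detector looks for it.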
---

## Ethical AI Framework

Guides responsible design through principles: fairness (mitigate bias), accountability (audit logs), transparency (explainability), privacy (data minimization), and non-maleficence. Universities increasingly require AI impact assessments before deploying classroom bots, aligning with emerging regulations such as the EU AI Act and IEEE ECPAIS standards.

---

## Explainable AI (XAI)

Techniques—SHAP values, attention visualizations, counterfactuals—that render opaque model decisions intelligible to humans. In conversational settings, an AI tutor might highlight the textbook paragraph influencing its answer, fostering trust and traceability. Regulatory bodies increasingly mandate XAI for high-stakes domains such as admissions or hiring.

---

## Metaverse Classroom

An immersive 3-D environment where avatars, spatial audio, and interactive objects support embodied learning. AI doubles can inhabit the scene as non-player mentors; analytics track gesture-based participation. Pedagogical research explores whether spatial presence enhances memory retention versus traditional 2-D screens.

---

## Digital Humanness

A term from HCI describing qualities—agency, empathy, vulnerability—that make virtual entities feel “human.” Designers manipulate micro-latency, hedging language, and facial micro-expressions to cultivate humanness, yet must avoid the uncanny valley. Balancing authenticity with clarity of artificiality remains an art.

---

## Embodied Cognition

Theory positing that cognitive processes are grounded in bodily action and sensorimotor experience. AI tutors projected on motion-tracking whiteboards leverage embodied gestures—dragging vectors, sketching graphs—to anchor abstract concepts in kinesthetic memory, aligning with STEM research on embodied learning.

---

## Interpassivity (Žižek & Pfaller)

Describes delegating one’s enjoyment or labor to an external agent (e.g., canned laughter “laughs for us”). Students might offload intellectual effort to ChatGPT, experiencing *interpassive learning*. Addressing this requires assessment designs that elicit personal reflection or unique performances that AI cannot easily substitute.

---

## Context Window

The maximum token length an LLM can attend to (e.g., 128k tokens in GPT-4o long-context). Long windows enable entire textbooks in-context but incur higher compute costs and risk *context dilution*, where crucial instructions fade. Teachers must balance breadth of context with prompt clarity.
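A minimal sketch of window management, assuming a rough four-characters-per-token estimate (real systems count with the model’s own tokenizer) and a policy of pinning the system instructions while evicting the oldest turns: one simple defense against context dilution.

```python
def estimate_tokens(text: str) -> int:
    """Rough heuristic: about four characters per token in English prose."""
    return max(1, len(text) // 4)

def fit_to_window(system: str, turns: list[str], window: int) -> list[str]:
    """Pin the system prompt; evict the oldest turns until the rest fits.

    Keeping the instructions pinned means they are never the content
    that silently falls out of the window as the dialogue grows.
    """
    budget = window - estimate_tokens(system)
    kept: list[str] = []
    used = 0
    for turn in reversed(turns):  # walk newest-to-oldest
        cost = estimate_tokens(turn)
        if used + cost > budget:
            break
        kept.append(turn)
        used += cost
    return [system] + list(reversed(kept))

history = [f"Turn {i}: " + "x" * 400 for i in range(50)]
context = fit_to_window("You are a course tutor.", history, window=2000)
print(len(context) - 1, "of 50 turns retained")
```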
---

## Data-Privacy Impact Assessment (DPIA)

A formal process (GDPR Art. 35) for evaluating risks to personal data in new technologies. Deploying a voice-clone tutor requires a DPIA: assessing biometric data storage, student consent, retention periods, and breach protocols. Institutions that neglect DPIAs face legal penalties and reputational damage.

---

## Academically Productive Talk (APT)

Coined by researchers such as Catherine O’Connor and Sarah Michaels, APT designates a repertoire of “talk moves” that press learners to clarify, elaborate, and critically examine ideas in real time. Phrases like *“Say more,”* *“Can you rephrase that?”* or *“Do you agree and why?”* cultivate accountable reasoning without silencing uncertainty. When these moves are codified into an AI tutor’s dialogue policy, the bot functions less as an answer dispenser and more as a facilitator that surfaces alternative viewpoints, requests evidence, and highlights conceptual links across speakers. Pedagogically, APT-aligned agents address the chronic imbalance between vocal and reticent students: the bot can prompt quieter participants, model respectful challenge, and keep discussion threads coherent. Designers must still manage pacing—too many AI interjections risk cognitive overload—but early classroom studies show that embedding four to six well-timed talk moves per session significantly raises the proportion of student utterances that contain justification and counter-argument.

---

## Active Learning

Active learning reframes students from passive recipients of information to primary agents who discuss, build, debate, simulate, or create. Core mechanisms—peer instruction, problem-based tasks, think-pair-share—shift cognitive effort from the lecturer’s explanation toward the learner’s manipulation of concepts. AI partners amplify this shift by absorbing routine explanation (e.g., definition retrieval) so that classroom time centers on higher-order application. Crucially, active learning with AI is not “set and forget.” Systems must be tuned to pose appropriately challenging questions, flag misconceptions without short-circuiting struggle, and deliver micro-summaries that keep intrinsic load manageable. When implemented well—say, an LLM that issues Socratic nudges during lab work—student performance and long-term retention routinely outpace lecture-only sections.

---

## Community of Inquiry (CoI)

Randy Garrison’s CoI framework asserts that meaningful online learning arises at the intersection of social presence (the sense of real peers), cognitive presence (sustained inquiry), and teaching presence (purposeful orchestration). AI doubles can fortify social presence via immediacy cues—personal greetings, expressive voice—but may inadvertently eclipse teaching presence if they usurp the instructor’s authority without transparent coordination. Designers therefore choreograph role clarity: the human instructor frames objectives and assessment, the AI facilitates micro-dialogue and formative feedback, and the cohort negotiates meaning. Dashboards that visualize which presence is lagging (e.g., dwindling cognitive presence in week 5) enable timely pedagogical pivots, ensuring that the triangle remains balanced rather than AI-lopsided.

---

## Formative Feedback Loop

Formative feedback loops deliver rapid, low-stakes information that students use to adjust strategies before summative evaluation. With AI, loops can be near-instantaneous: a coding assistant highlights inefficient logic as a student types; an essay bot suggests citation fixes minutes after submission. Such velocity transforms the temporal ecology of a course—moving from weekly checkpoints to continuous micro-iterations. Yet speed alone is insufficient; feedback must be *actionable* and *specific*. Research shows that comments phrased as concrete revision steps (“Swap this example for empirical evidence from X”) double uptake compared with generic praise or criticism. Intelligent tutoring systems now integrate rubrics that translate model predictions into scaffolded action plans, preserving human bandwidth for nuanced motivational support.

---

## ICAP Framework

The ICAP hypothesis (Interactive > Constructive > Active > Passive) predicts depth of learning based on observable engagement modes. Well-designed AI pushes students upward: an LLM that merely repeats information elicits active note-taking at best, but one that asks learners to generate analogies or debate counter-claims triggers constructive or interactive cognition. Implementation hinges on prompt architecture. For instance, including *“Propose an alternative solution and critique your own idea”* forces a constructive stance, while *“Read your partner’s answer and build on it”* demands interactivity. Logging engagement-mode frequency lets instructors audit whether AI usage is genuinely elevating cognition or slipping back into passive content delivery.

---

## Impression Management

Erving Goffman’s dramaturgical metaphor casts everyday interaction as a performance in which individuals curate cues to influence audience perception. Digital classrooms extend the stage: learners know that chat transcripts, voice recordings, and code commits may be stored indefinitely and—even more unpredictably—parsed by analytics algorithms. Consequently, they may sanitize language, feign certainty, or defer to AI-endorsed viewpoints to maintain a desired academic persona. Recognizing these pressures, ethical course design foregrounds transparency: what data are logged, who can view it, and how it affects grading. Allowing students to toggle visibility of exploratory drafts or to annotate AI-generated suggestions with reflective commentary reclaims agency over the performance.

---

## Improv “Yes, And” Principle

Improvisational theater thrives on the rule that actors must *accept* a partner’s offer (“Yes …”) and *extend* it (“… and …”). This principle cultivates psychological safety, encourages risk-taking, and propels scenes forward. Mapping “Yes, and” to LLM response style yields agents that validate student contributions before nudging them deeper—e.g., *“Yes, your analogy captures X; and we might add Y to address the counter-example.”* However, uncritical affirmation can slide into vapid positivity. Effective implementations pair “Yes, and” with epistemic rigor: the AI affirms the effort, not necessarily the accuracy, then scaffolds refinement. Classroom trials reveal increased idea density and reduced fear of error when such calibrated acceptance governs early brainstorming phases.
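One way to operationalize calibrated acceptance is to encode it directly in the dialogue policy’s system prompt. The sketch below assumes the common system/user chat-message format; the policy wording is an illustrative draft, not a tested rubric.

```python
# Dialogue policy encoding calibrated "Yes, and": the agent must accept
# the student's offer before extending it, and may affirm effort but
# never unverified accuracy. All wording here is illustrative.
YES_AND_POLICY = """\
You are a brainstorming partner in a seminar.
For every student contribution:
1. YES: name one specific strength of the contribution (its framing,
   effort, or evidence) in a single sentence.
2. AND: extend the idea with one new angle, counter-example, or
   question that deepens it.
Never declare a claim correct unless it matches the provided readings;
instead, say what would be needed to verify it.
"""

def build_messages(student_turn: str) -> list[dict]:
    """Assemble a chat payload in the common system/user message format."""
    return [
        {"role": "system", "content": YES_AND_POLICY},
        {"role": "user", "content": student_turn},
    ]

print(build_messages("Maybe the mirror stage is just about vanity?"))
```

Separating the always-affirmed element (effort) from the conditionally affirmed one (accuracy) is what keeps “Yes, and” from collapsing into the vapid positivity the entry cautions against.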
---

## Mimesis (Plato)

In *The Republic* and *Ion*, Plato warns that poetic imitation (mimesis) drags audiences two removes from truth: artisans craft objects, poets imitate those objects, and spectators absorb the imitation. Modern AI cloning—whether a Shakespeare-flavored LLM or a voice-cloned professor—resurrects this ontological worry. If students consult an AI “Aristotle,” are they pursuing wisdom or engaging a third-order copy detached from the philosopher’s intentions? Educators leverage the tension productively by foregrounding provenance: labeling the chatbot “Speculative Aristotle” invites meta-discussion about authority and authenticity. Assignments can ask students to triangulate the bot’s claims against primary texts, thus converting the potential epistemic deficit into a critical-reading exercise.

---

## Participatory Art

From Allan Kaprow’s 1960s “Happenings” to contemporary immersive installations, participatory art positions the audience as co-creator. Analogously, AI-infused seminars become living installations: knowledge emerges not from prewritten slides but from the evolving triad of students, instructor, and algorithmic co-performer. The aesthetic lens cautions against over-scripting. If the AI’s responses are railroaded, students become spectators again; if parameters are too open, coherence dissolves. Successful participatory-art pedagogy embraces *structured indeterminacy*: clear thematic scaffolds combined with real-time prompts that route emerging ideas back into the collective artwork of understanding.

---

## Performative Utterance (Austin)

J. L. Austin’s speech-act theory distinguishes utterances that describe reality from those that *enact* it—“I pronounce you married” changes legal status upon declaration. In digital classrooms, an AI saying *“Well done, you have mastered quadratic factoring”* can shift a learner’s self-efficacy even though the speaker lacks consciousness. This performative power mandates careful calibration. Premature declarations of mastery risk complacency; overly cautious hedging may erode motivation. Embedding confidence thresholds and alignment with assessment data ensures that AI speech acts responsibly, reinforcing learning without misrepresenting achievement.

---

## Relational Aesthetics

Nicolas Bourriaud argues that late-20th-century art should be judged by the social relations it produces rather than its material form. Applied to AI pedagogy, the “artwork” is the emergent network of inquiry, peer support, and critique that crystallizes around the agent. A chatbot’s value therefore lies less in eloquent answers and more in catalyzing meaningful exchanges among humans. Course analytics can operationalize this lens: measuring conversation branching, reciprocity, and cross-group referencing reveals whether the AI expands the relational fabric. When metrics flatten—e.g., dialogue collapses into parallel monologues directed at the bot—designers adjust prompts or roles to re-animate interhuman connection.

---

## Retrieval Practice

Cognitive-psychology studies show that actively recalling information strengthens memory more than re-studying. AI makes retrieval practice frictionless: adaptive flashcard bots detect which concepts teeter on the edge of forgetting and resurface them just in time; chat interfaces randomly request definitional or application prompts during discussion. The challenge is balancing desirable difficulty: if retrieval cues are too spaced or too obscure, frustration spikes; if too easy, the testing effect wanes. Fine-grained data allow AI systems to tune intervals per learner, edging each question into the optimal struggle zone that maximizes consolidation.
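A minimal sketch of per-learner interval tuning in the spirit of SM-2-family schedulers: effortful success expands the gap, easy success spaces the card further, and failure resets it. All constants are illustrative assumptions.

```python
from dataclasses import dataclass

@dataclass
class Card:
    concept: str
    interval_days: float = 1.0  # current gap before the next review
    ease: float = 2.5           # per-card multiplier tuned over time

def review(card: Card, recalled: bool, effortful: bool) -> Card:
    """Update the schedule after one retrieval attempt.

    Effortful success ('desirable difficulty') expands the interval;
    easy success also widens future gaps; failure resets the interval
    so the concept resurfaces soon.
    """
    if not recalled:
        card.interval_days = 1.0
        card.ease = max(1.3, card.ease - 0.2)  # show this card more often
    else:
        card.interval_days *= card.ease
        if not effortful:
            card.ease += 0.1  # recall was too easy: space further apart
    return card

card = Card("mirror stage")
for recalled, effortful in [(True, True), (True, True),
                            (False, False), (True, True)]:
    card = review(card, recalled, effortful)
    print(f"next review in {card.interval_days:.1f} days "
          f"(ease {card.ease:.2f})")
```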
---

## Self-Regulated Learning (SRL)

SRL synthesizes metacognitive planning (*What is my goal?*), monitoring (*How am I doing?*), and evaluation (*What will I change?*). Chatbots scaffold SRL by prompting goal statements, delivering progress analytics, and nudging reflective journaling. For example, a weekly AI checkpoint might say, *“You attempted three integrals and made substitution errors twice. Would you like to review that strategy?”* Yet externalizing regulation can induce over-dependence. Effective designs gradually fade AI nudges or require students to predict feedback before revealing system analytics, thus internalizing regulatory habits rather than outsourcing them.

---

## Spect-Actor

Augusto Boal’s *spect-actor* rejects passive spectatorship, insisting that audience members intervene to reshape the performance. In AI-mediated role-plays—say, debating a bureaucratic chatbot that enforces biased policies—students who pause the scene, rewrite the agent’s prompt, or assume a counter-role enact spect-actor agency. This dramaturgical framing reframes “prompt hacking” from a security risk to a democratic exercise: learners rehearse challenging authority, identifying hidden assumptions, and iterating policy. Assessment shifts from correctness to the quality of critical interventions—mirroring Boal’s goal of praxis over consumption.

---

## Theater of the Oppressed (TO)

Boal’s broader methodology invites communities to replay oppressive scenarios until alternative, emancipatory outcomes surface. Integrating AI personas—e.g., a recalcitrant “algorithmic loan officer”—extends the approach into the digital sphere. Students prototype arguments, witness algorithmic resistance, then tweak strategy or policy until equity emerges. The iterative loop demystifies algorithmic authority: rather than accepting the model as neutral, participants expose training-data biases and power differentials. Post-performance debriefs link theatrical insight to real-world advocacy, preparing students to contest inequity in both code and culture.

---