# md-version-of-web-material
## Website Map
* Teaching Resources
  * Design
    * Teaching and AI Landing Page
      * [Designing Courses and Assignments in the Age of AI](#bookmark=id.bnkypg83cdaa)
      * [Communicating with Students about Generative AI](#bookmark=id.ut3cmqarrvmd)
      * [Getting Started with Harvard AI Tools](#bookmark=id.ve8gbyskgt1w)
      * Next Steps/Going Further
        * Links to “[Examples and Ideas](#bookmark=id.3dtdondyip2c)”
* Initiatives
  * Teaching and AI
* Events
## Teaching and AI Landing Page
### WHAT IS GENERATIVE AI?
Generative AI refers to a class of artificial intelligence systems designed to create new content—such as text, images, code, or audio—by recognizing and mimicking patterns in large datasets. One prominent type of generative AI is the large language model (LLM), like OpenAI’s ChatGPT. LLMs are trained on vast collections of text and can produce human-like responses to prompts, but they do not possess understanding or awareness; rather, they generate content by predicting likely word sequences based on their training data.
Recognizing the capabilities and limitations of generative AI is essential as we consider how these tools can be thoughtfully integrated into university teaching and learning.
### HOW DOES GENERATIVE AI IMPACT TEACHING AND LEARNING?
Generative AI is increasingly shaping higher education, offering new opportunities while introducing important challenges. Recent studies highlight benefits such as enhanced student learning outcomes and support for skill development ([Kestin et al., 2025](https://www.nature.com/articles/s41598-025-97652-6); [Wang & Fan, 2025](https://www.nature.com/articles/s41599-025-04787-y)). However, research also points to concerns, including potential reductions in students’ cognitive engagement—especially if learners rely too heavily on AI-generated content in writing tasks ([Kosmyna et al., 2025](https://www.media.mit.edu/projects/your-brain-on-chatgpt/overview/)). These findings underscore the importance of carefully aligning the use of AI tools with specific learning goals, ensuring that their integration genuinely supports the intended outcomes of each course.
### GETTING STARTED
The first steps in working with generative AI in your teaching are deciding how it fits into your course design and how you will communicate those decisions to students. [Designing Courses and Assignments in the Age of AI](#bookmark=id.bnkypg83cdaa) helps you evaluate assignment formats, adapt them for AI resilience, and incorporate AI into the learning process where it adds value. [Communicating with Students about Generative AI](#bookmark=id.ut3cmqarrvmd) supports you in developing clear policies, encouraging responsible use, and addressing key ethical questions. Starting here will help you establish a consistent, transparent approach from the outset.
### NEXT STEPS
With your course approach and communication plan in place, you can begin exploring how to apply AI in practice. [Getting Started with Harvard AI Tools](#bookmark=id.ve8gbyskgt1w) outlines the supported platforms where you can safely test and refine AI activities. The [Examples and Ideas](#bookmark=id.3dtdondyip2c) page showcases specific uses of AI to explore as you brainstorm for your own course. These resources offer both the technical starting points and practical models to guide planning and implementation.
## Designing Courses and Assignments in the Age of AI
### OVERVIEW
The vast majority of undergraduate students at Harvard College are using generative AI. Many are using it for help on their academic work, often regardless of stated course policies. A [2024 survey of Harvard undergraduates](https://arxiv.org/pdf/2406.00833) found that 85% use AI in some way at least biweekly, and over 50% rely on it specifically for writing assignments. National data from 2025 shows even higher rates. These tools are not just pervasive; their capabilities make it increasingly difficult for instructors to separate the signal of genuine student learning from the noise of AI-generated work.
Assignments give students a framework to learn new skills and to get practice applying them. Student work becomes the evidence that instructors use to evaluate how much their students are learning and to provide feedback on where they are succeeding and where they need more practice or support. The quality of this evaluation and feedback depends on the degree to which submitted work is, in fact, good data.
For many kinds of familiar assignments, such as response papers and p-sets, the challenge of getting good evidence is two-fold:
1. Today's large language models can already produce fluent academic writing, generate runnable code, and solve textbook-style problems with surprising accuracy.
2. Attempts to detect unpermitted AI use are largely unreliable, producing both false positives and false negatives.
Rather than trying to identify or police AI use, instructors can ensure that student work remains reliable evidence of learning by rethinking assignment design and assessment. Specifically, we must identify which kinds of assignments are most at risk of producing unreliable evidence of student learning and which ones more effectively produce—or can be adapted to produce—reliable evidence. Doing so allows instructors to minimize the advantage of unpermitted AI use or to thoughtfully incorporate AI into the learning process.
### HIGH-RISK ASSIGNMENTS
Assignments that present a high risk of being completed fluently by AI without detection include:
* take-home short response papers and essays
* take-home p-sets
* take-home exams
### MORE EFFECTIVE ASSIGNMENTS
Other methods and modes of assessment are likely to be more effective, with or without the incorporation of AI:
* **In-person Blue Book and oral exams** can measure recall and applied understanding independent of outside technology.
* **P-sets that draw on fresh or local data** are less likely to reflect the training data of an AI.
* **Short essays and reflections that incorporate course-specific materials** (like a guest lecture or museum visit) rely on primary sources and experiences that are hard for AI tools to access.
* **Alternative assignment modalities** such as oral exams, in-person presentations, video essays, posters and infographics, “visual abstracts” of scientific papers, on-paper annotation of p-sets and printed code in class on the day of submission, and many more are likely to be more AI-resilient than traditional text-only assignments.
When retaining a large take-home project as the capstone (such as a final essay), it is always a good plan to “scaffold” it by breaking it into steps. Making at least some of these steps happen in person, without devices (oral topic proposals, in-class outlines, reflective written explanations on paper after submission), offers more touchpoints and a better chance of reliable assessment. These touchpoints can even come AFTER submission, as in a live interview about the project or an oral defense.
If you allow generative AI use, it’s ideal to ensure that at least some assignments are done without it. The mental model for students should be that they are using AI to learn—to deeply internalize the concepts and skills of the course—so that they can perform them on their own, without AI. For this to happen, instructors need graded assessments that determine whether students are adopting this mental model (and these assessments can help incentivize them to do so).
### IMPLEMENTING AI IN ASSIGNMENT DESIGN AND ASSESSMENT
Some concrete examples of how AI can be incorporated into the “process” and “product” stages of design and assessment include:
* **Brainstorming partner.** Have students use generative AI as a *sparring partner* to brainstorm ideas, then require them to critique the output for bias and accuracy.
* **Transparent AI workflow logs.** Require students to include prompts, key model responses, and a short rationale describing what they kept, modified, or discarded—and why.
* **Fact checking a model.** Have students fact-check an AI-generated essay, or ask them to improve upon an AI-generated piece of code, documenting their changes and the reasoning behind them.
* **Model comparison memos.** Have students query two models (or settings) and write a brief memo on differences in accuracy, bias, or style, citing course concepts.
* **AI-to-human handoff.** Let AI produce a first pass (outline, test suite, or code comments); students complete the “last mile,” justifying design choices and revisions.
* **Source-anchored critique.** Provide course readings/lectures; students must use them to *audit* AI claims, with citations and corrections.
* **Timed “no-AI” checkpoints.** Pair AI-assisted preparation with short, in-class demonstrations (whiteboard proofs, oral mini-vivas, live coding) to confirm mastery.
### KEY TAKEAWAYS
* Have an AI policy
* Swap high-risk assignments for other options
* Connect any outside-of-class assignments with in-person, AI-proof check-ins, steps, and evaluations
* Start small: pilot one AI-integrated activity, then build out from there based on what you learn
* Where you permit AI, require transparency (logs/screenshots or prompt sheets) and assess individual learning separately
* Contact the Bok Center
## Communicating with Students about Generative AI
Recent studies of AI use in higher education (e.g. [Lund et al., 2025](https://link.springer.com/article/10.1007/s10805-025-09613-3); [Yusuf et al., 2024](https://link.springer.com/article/10.1186/s41239-024-00453-6)) report that students are increasingly interested in learning how to use generative AI tools responsibly and ethically, but often feel uncertain about appropriate practices. A 2024 Harvard undergraduate survey similarly highlights that students want explicit, consistent rules about AI use in their courses. Educators can help students navigate generative AI with confidence and integrity by providing clear guidelines and open communication.
### AI LITERACY FOR STUDENTS
A crucial aspect of communicating with students about AI is supporting their AI literacy. This means helping students understand what generative AI tools are, how they work, and where their strengths and limitations lie. Discussing how AI generates content, why it can sometimes produce errors or “hallucinations,” and how to responsibly use and cite these tools equips students to make informed decisions in their academic work. Some resources for this include:
* [The AI Pedagogy Project](https://aipedagogy.org/guide/tutorial/) by metaLab at Harvard
* The [Harvard Libraries Artificial Intelligence for Research and Scholarship Guide](https://guides.library.harvard.edu/c.php?g=1330621&p=10046069), which includes information about citing AI.
Building students’ AI literacy can start with a few key steps you take in your own course policies and classroom conversations. Here are some recommendations for how to communicate about AI use with your students:
* Include an AI policy on your syllabus. Be transparent with students about why you are asking them to complete a particular assignment and explain how using or not using AI tools will affect those goals. Syllabus statement advice is available through:
  * [The Office of Undergraduate Education website](https://oue.fas.harvard.edu/faculty-resources/generative-ai-guidance/)
  * [The Bok Center’s Illustrated Rubric for Syllabus Statements on Generative AI](https://docs.google.com/document/d/1-9CqpH4Hs-EIDJzo85tVtzmHivM2qO74w0-KuIJVb6E/edit#heading=h.t9oddx850roy)
* Check in with your students about your AI policy regularly by posting it on your Canvas site, reiterating it in assignment prompts, and discussing it during class meetings and office hours, to ensure students understand and remember your expectations.
* Require students to disclose their use of AI, whether for [brainstorming, drafting, or other purposes](https://www.newyorker.com/culture/annals-of-inquiry/what-kind-of-writer-is-chatgpt?/).
* If students are not permitted to use AI, design assignments that minimize AI’s utility, such as personalized reflections, oral presentations, or in-class tasks, ensuring that students engage deeply with the material and demonstrate their own understanding.
* Be transparent about your own use of AI, whether for preparing materials, generating content, providing feedback, or any other purposes you’ve found beneficial. Modeling this openness frames AI as a tool to support learning.
### KEY DEBATES AND ETHICAL QUESTIONS
As AI becomes more integrated into academic life, it raises important questions around academic honesty, fairness, and responsible technology use. Addressing these topics with students is essential to help them understand the potential impacts of AI on their learning and future careers.
#### Bias and Fairness in AI Systems
AI systems are trained on [vast datasets](https://ig.ft.com/generative-ai/), predominantly collected from the internet, and therefore [incorporate the biases and stereotypes](https://www.insidehighered.com/blogs/beyond-transfer/toward-ethical-and-equitable-ai-higher-education) embedded in those datasets. AI-generated content reflects dominant cultural norms and risks marginalizing or misrepresenting the people and perspectives underrepresented in its training data, reinforcing harmful stereotypes and perpetuating inequality.
For the same reason, the use of AI might also undercut the learning goals of intellectual vitality. [Intellectual vitality](https://intellectualvitality.college.harvard.edu/our-commitment/) encourages students to question assumptions and resist arriving at premature conclusions—two areas where AI-generated output often falls short.
Another consideration is how AI tools can disadvantage particular student groups. For instance, AI systems that process language may struggle with non-standard dialects or multilingual speakers, leading to inaccuracies or misunderstandings. [Vigilance in recognizing and addressing AI’s limitations](https://www.sciencedirect.com/science/article/pii/S2667096823000125) is essential in diverse classroom settings.
#### Privacy and Data Security
Using AI in educational settings [raises concerns about privacy and data security](https://edtechmagazine.com/higher/article/2024/06/data-security-best-practices-ai-tools-higher-education). Many AI tools require users to input personal information or academic work into platforms that collect and store data. In some cases, this data may be used for purposes beyond the immediate educational context, such as marketing or [further training of AI models](https://www.wired.com/story/how-to-stop-your-data-from-being-used-to-train-ai/), often without the user's explicit consent. This raises ethical questions regarding the ownership and control over one's intellectual property and private information.
To mitigate these risks, FAS members are encouraged to use Harvard-approved tools (Harvard’s [AI Sandbox](https://huit.harvard.edu/ai-sandbox) and the [ChatGPT Edu](https://it.fas.harvard.edu/openai-chatgpt-edu/) Workspace). These options allow for the upload of confidential materials (specifically, materials classified at [Level Three](https://policy.security.harvard.edu/level-3#:~:text=Level%203%20On%20Systems,applicable%20Harvard%20data%20protection%20requirements.) and below).
#### Environmental Impact
Training and running large AI models require [substantial computational power,](https://www.newyorker.com/news/daily-comment/the-obscene-energy-demands-of-ai) which in turn consumes significant energy. This energy consumption is not unique to AI; many digital processes, such as video streaming and cloud storage, also demand significant resources. [But as AI use expands](https://www.technologyreview.com/2023/12/01/1084189/making-an-image-with-generative-ai-uses-as-much-energy-as-charging-your-phone/) in education, [its contribution to carbon emissions and other environmental harms likewise grows](https://time.com/6987773/ai-data-centers-energy-usage-climate-change/).
#### Copyright and Intellectual Property
Generative AI tools rely on massive datasets that include [copyrighted material](https://issues.org/generative-ai-copyright-law-crawford-schultz/#:~:text=Generative%20AI%20systems%20have%20used,without%20permission%20in%20some%20cases.). This can raise ethical and legal concerns around [ownership, use, and attribution of content produced with AI](https://www.theatlantic.com/technology/archive/2024/07/perplexity-ai-search-media-partners/679294/).
## Getting Started with Harvard AI Tools
At Harvard, two primary “AI playgrounds” are available for experimentation—**Google Gemini** and the **Harvard AI Sandbox**—both of which integrate LLMs with auxiliary tools that improve their output:
* [Google Gemini](LINK)
* [Harvard AI Sandbox](LINK)
These are the main ways Harvard affiliates can interact with LLMs safely and for free. These models are trained on vast text datasets and use probabilistic prediction to generate human-like responses, summaries, translations, or structured content. They are not “thinking” in a human sense—they assemble text by predicting likely sequences of words—yet they can simulate reasoning in their outputs.
In practice, however, LLMs rarely work alone. Enterprise AI services (ChatGPT, Claude, Gemini, etc.) augment LLM outputs with tools that enable capabilities such as the following (a minimal sketch of this pattern appears after the list):
* **Code execution** (e.g., Python or JavaScript interpreters for data analysis, visualization, or simulations)
* **Mathematical and statistical computation** (calculator backends, symbolic math engines)
* **Web search and retrieval-augmented generation (RAG)** for up-to-date, source-grounded answers
* **Optical Character Recognition (OCR)** to digitize and analyze text from images or scans
* **Speech recognition and synthesis** (e.g., Whisper for transcription, voice cloning for audio feedback)
* **Image generation and editing** (diffusion models like DALL·E or Stable Diffusion)
* **Computer vision** for motion tracking, object recognition, or visual annotation
* **Thinking models and multi-step reasoning orchestration** — systems that automatically run multiple LLM calls or chain together different AI models in response to a single query, enabling planning, reflection, and improved output quality
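To make the pattern concrete, here is a minimal sketch of tool augmentation, written against the OpenAI Python SDK. It is an illustration, not a description of any specific Harvard service: the model name, the calculator tool, and the assumption that the model elects to call it are ours.

```python
# Minimal tool-augmentation sketch (OpenAI Python SDK).
# Assumes OPENAI_API_KEY is set; model and tool are illustrative.
import json
from openai import OpenAI

client = OpenAI()

# Advertise a calculator tool the model may call for arithmetic.
tools = [{
    "type": "function",
    "function": {
        "name": "calculate",
        "description": "Evaluate a basic arithmetic expression.",
        "parameters": {
            "type": "object",
            "properties": {"expression": {"type": "string"}},
            "required": ["expression"],
        },
    },
}]

messages = [{"role": "user", "content": "What is 17.5% of 2480?"}]
first = client.chat.completions.create(
    model="gpt-4o", messages=messages, tools=tools)

# The model responds with a tool call rather than a final answer
# (assuming it elects to use the tool).
call = first.choices[0].message.tool_calls[0]
expression = json.loads(call.function.arguments)["expression"]
result = eval(expression)  # demo only; use a safe math parser in practice

# Feed the tool result back so the model can compose its final answer.
messages.append(first.choices[0].message)
messages.append(
    {"role": "tool", "tool_call_id": call.id, "content": str(result)})
final = client.chat.completions.create(model="gpt-4o", messages=messages)
print(final.choices[0].message.content)
```

The same loop generalizes to search, OCR, or code execution: the model decides when a tool is needed, the application runs it, and the result is returned to the model for its final response.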
The Bok Center’s Learning Lab consults with faculty on how to apply LLMs and these tools to their courses. Together, we can discuss more specific products within the larger Harvard-supported services (NotebookLM, Google Colab, Gemini Gems), other AI offerings (ElevenLabs, Midjourney, etc.), and ways of harnessing enterprise APIs to integrate AI into custom course tools, projects, and research environments.
For inspiration, see our “[Examples and Ideas](#bookmark=id.3dtdondyip2c)” page.
## Examples and Ideas
Below are example assignments, activities, and use cases for different course contexts. This list is far from exhaustive and will continually be updated throughout the term:
### Lecture-Intensive Courses
- Real-Time Response Analysis
- Feedback for Slide Presentations
- Lesson Plan Recaps for Absent Students
- Lecture and Discussion Preparation
- Key Passage Extraction
- Case Study Expansion with AI Support
### Seminar and Discussion-Based Courses
- Discussion Facilitators
- Persona-Based Counterargument Simulation
- Policy Dilemma Practice
- AI-Generated Content Analysis
- Case Study Expansion with AI Support
- Brainstorming for Thesis and Topic Development
### Writing and Essay-Based Courses
- AI-Assisted Peer Review Simulation
- Persona-Based Counterargument Simulation
- AI-Generated Content Analysis
- Term and Concept Definition
- Brainstorming for Thesis and Topic Development
- Key Passage Extraction
### Quantitative and PSET-Based Courses
- Equation-to-LaTeX Conversion
- Student Data Pattern Detection
- Analysis of Public Datasets
- AI-Generated Content Analysis (Math Proofs Variant)
### Performance and Studio Courses
- Spatial Ranking Activities
- Interactive AI-Augmented Maps
### Language Learning and Translation Courses
- Multimodal Translation Feedback
- AI-Assisted Language Practice
- Term and Concept Definition
- Discussion Facilitators
### Fieldwork Courses
- Policy Dilemma Practice
- Student Data Pattern Detection
- Analysis of Public Datasets
- Interactive AI-Augmented Maps
### Project-Based / Maker Courses
- Visual Annotation Analysis
- Group Work Analysis
- Spatial Ranking Activities
- AI-Enhanced Syllabus Reflection
- Rubric Creation
- Grade Norming Across Sections
**Multimodal Translation Feedback** Language instructors can use AI to support translation activities by having students annotate physical copies of AI-generated translations. The instructor can photograph these annotated documents and use AI vision tools to synthesize student notes and highlights quickly, identifying common corrections or questions. This approach preserves the benefits of handwritten work while using AI to process an entire group's feedback efficiently, allowing instructors to address areas of focus and patterns of misunderstanding in real time.
**Visual Annotation Analysis** Instructors can transform traditional text and image commentary into rich analytical data using AI vision tools. Students physically annotate printed materials (articles, maps, photographs, charts) with highlighters, sticky notes, or handwritten comments, which instructors then photograph. These photos are processed through vision APIs (or through no-code options, like uploading images to a conversation thread) that identify, categorize, and synthesize annotation patterns across the entire class. This approach preserves the value of real-time brainstorming while providing instructors and students with immediate, comprehensive insights into student interests, understanding, misconceptions, and questions.
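As a rough sketch of how that processing might look in code, assuming access to a vision-capable model through the OpenAI Python SDK (file names, model, and prompt are illustrative):

```python
# Synthesize photographed student annotations with a vision model.
import base64
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set

def encode_image(path: str) -> str:
    with open(path, "rb") as f:
        return base64.b64encode(f.read()).decode()

photos = ["group1.jpg", "group2.jpg", "group3.jpg"]  # annotated printouts

content = [{"type": "text", "text": (
    "These photos show student-annotated printouts of the same article. "
    "Identify recurring highlights, categorize marginal comments, and "
    "summarize common questions or misconceptions.")}]
content += [
    {"type": "image_url",
     "image_url": {"url": f"data:image/jpeg;base64,{encode_image(p)}"}}
    for p in photos
]

response = client.chat.completions.create(
    model="gpt-4o",
    messages=[{"role": "user", "content": content}],
)
print(response.choices[0].message.content)
```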
**Discussion Facilitators** Instructors can integrate AI chats into class discussions to serve as guides, challengers, or perspective-holders. These AI facilitators can be prompted to adopt particular roles—such as an advocate for a specific theory, a critic of an assigned text, or a representative of a stakeholder viewpoint—helping students engage with diverse positions and test their ideas in real time. Students can pose questions, defend arguments, or explore unfamiliar perspectives in a low-stakes, iterative format that complements human-to-human dialogue rather than replacing it. While most implementations rely on text-based interactions or projected responses, some instructors are experimenting with more immersive approaches—such as creating AI voice clones that simulate historical figures, theorists, or even the instructor—to push the horizon of what these facilitators can be and how dynamically they can participate in collaborative inquiry.
**AI-Assisted Language Practice** Language instructors can use AI voice tools and custom chatbots that allow students to develop their speaking and listening skills through natural conversations. Students speak with these AI systems through microphones, and the AI interlocutor can be designed simply to converse dialogically or to provide immediate feedback (on pronunciation, grammar, vocabulary usage) or both. This provides additional practice opportunities outside of class time and helps students build confidence before speaking with peers or the instructor.
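One possible shape for such a practice loop, sketched with OpenAI's transcription and text-to-speech endpoints (the models, voice, file names, and tutor prompt are all assumptions):

```python
# One turn of a speak-listen-respond language practice loop.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set

# 1. Transcribe the student's recorded turn.
with open("student_turn.mp3", "rb") as audio:
    heard = client.audio.transcriptions.create(
        model="whisper-1", file=audio)

# 2. Generate a conversational reply with one gentle correction.
reply = client.chat.completions.create(
    model="gpt-4o",
    messages=[
        {"role": "system", "content": (
            "You are a patient Spanish conversation partner. Reply in "
            "simple Spanish and note one grammar or vocabulary fix.")},
        {"role": "user", "content": heard.text},
    ],
)

# 3. Speak the reply back to the student.
speech = client.audio.speech.create(
    model="tts-1", voice="alloy",
    input=reply.choices[0].message.content)
speech.write_to_file("tutor_turn.mp3")
```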
**AI-Assisted Peer Review Simulation** Instructors can create tools (Gems, NotebookLMs, Custom GPTs, etc.) trained on discipline-specific writing standards to simulate the peer review process for student writing. Students submit drafts to these AI reviewers, which provide structured feedback in two, three, or more voices, similar to the revise-and-resubmit instructions they might encounter in professional scientific publishing. This helps students internalize field-specific writing expectations while providing more revision opportunities and perspectives than traditional peer review alone might support.
**Interactive AI-Augmented Maps** History instructors can create dynamic learning experiences using AI-generated maps and graphics projected onto large surfaces with which students can physically interact. For geographic content, AI can generate contextual information about regions as students annotate and discuss them. This approach combines the richness of digital information with physical, collaborative engagement, allowing students to build knowledge of historical geographies—together as a group—through direct interaction with deeper data that is organized visually and spatially.
**Real-Time Response Analysis** For large lecture courses, instructors can implement AI tools that analyze weekly student posts, responses to discussion prompts, or quick writes in real time. Rather than sampling just a few student contributions, the AI can identify patterns, misconceptions, and insightful perspectives across every submission, allowing the instructor to address the full spectrum of student thinking during the same class session and to focus on the most relevant or common issues. This approach can also make large classes feel more interactive and participatory while further ensuring that diverse student perspectives are captured and addressed.
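A minimal sketch of the underlying step, assuming responses have been exported to a CSV with a `response` column (model and prompt are illustrative):

```python
# Cluster and summarize a full set of student quick-writes.
import csv
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set

with open("quick_writes.csv", newline="") as f:
    responses = [row["response"] for row in csv.DictReader(f)]

numbered = "\n".join(f"{i + 1}. {r}" for i, r in enumerate(responses))
summary = client.chat.completions.create(
    model="gpt-4o",
    messages=[{"role": "user", "content": (
        "Here are all student quick-writes from today's lecture:\n"
        f"{numbered}\n\n"
        "Cluster them by theme, flag common misconceptions, and quote "
        "two or three especially insightful responses.")}],
)
print(summary.choices[0].message.content)
```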
**Policy Dilemma Practice** Political science or ethics instructors can organize policy simulation exercises where student teams analyze social dilemmas in different national or cultural contexts. Students create AI instances representing the perspectives of the separate, relevant groups and then prompt those discussants into a debate. In addition to revealing the salient issues and beliefs at stake, this can help students generate potential policy frameworks and then draft persuasive presentations advocating for their proposed solutions. This approach teaches students to evaluate the societal impacts of conflicting views or ideologies while also enhancing their skills in policy analysis, comparative research, and effective communication.
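A hypothetical sketch of two such AI discussants alternating turns (the personas, model, motion, and turn count are illustrative):

```python
# Alternate two persona "discussants" over a shared debate transcript.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set

personas = {
    "Urban planner": "Argue from housing and infrastructure data.",
    "Rural advocate": "Argue from agricultural and land-use interests.",
}
transcript = "Motion: the region should adopt a congestion tax."

for turn in range(4):  # two statements per persona
    name = list(personas)[turn % 2]
    reply = client.chat.completions.create(
        model="gpt-4o",
        messages=[
            {"role": "system", "content": (
                f"You are a {name.lower()}. {personas[name]} "
                "Keep replies under 120 words.")},
            {"role": "user", "content": (
                f"Debate so far:\n{transcript}\n\n"
                "Give your next statement.")},
        ],
    )
    transcript += f"\n\n{name}: {reply.choices[0].message.content}"

print(transcript)  # students mine this exchange for policy frameworks
```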
**Rubric Creation** An instructor can upload an assignment prompt into an AI chat to generate a first draft of an assessment rubric. The AI can then be asked to identify both explicit and implicit criteria embedded in the assignment prompt, ensuring the rubric is comprehensive. Depending on the result, the AI can be used to refine the draft to align more completely with the overall course objectives and desired grading standards.
**Grade Norming Across Sections** Ensuring consistent grading across large courses with multiple teaching fellows or instructors can be challenging. Harvard-supported AI tools can assist in analyzing uploaded course rubrics and answer keys alongside examples of graded student work. The AI can be prompted to compare submissions against the rubric and to identify discrepancies or inconsistencies in grading across sections. It can then flag student work that deviates significantly from the norm for further (human) review.
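One way this comparison could be scripted, assuming a plain-text rubric and sample files (the model, the 100-point scale, and the flagging threshold are illustrative assumptions):

```python
# Flag section grades that diverge from a rubric-based model estimate.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set

with open("rubric.txt") as f:
    rubric = f.read()

graded = [  # (section, TF-assigned score, path to student work)
    ("Section A", 92, "a_sample.txt"),
    ("Section B", 78, "b_sample.txt"),
]

for section, tf_score, path in graded:
    with open(path) as f:
        work = f.read()
    check = client.chat.completions.create(
        model="gpt-4o",
        messages=[{"role": "user", "content": (
            f"Rubric:\n{rubric}\n\nStudent work:\n{work}\n\n"
            "Score this work out of 100 against the rubric. Put only "
            "the number on the first line, then a brief rationale.")}],
    )
    text = check.choices[0].message.content
    ai_score = int(text.splitlines()[0].strip())  # add guards in real use
    if abs(ai_score - tf_score) > 10:  # large gap: route to human review
        print(f"{section}: TF gave {tf_score}, model estimated {ai_score}")
```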
**Feedback for Slide Presentations** Faculty who teach with slides can upload their decks to an AI tool that evaluates clarity, pacing, and alignment with learning objectives. The AI can suggest where to add interactivity, simplify visuals, or trim redundant content—helping instructors design more effective presentations and avoid cognitive overload.
**Group Work Analysis** After in-class group work or collaborative activities, instructors can ask each group to submit a photo of their whiteboard, worksheet, or notes. AI vision tools can analyze and synthesize common ideas or divergent responses across groups. Instructors can project these AI summaries to guide follow-up discussion, without needing to manually scan every submission.
**AI-Enhanced Syllabus Reflection** At the end of a course, instructors can have students revisit their syllabi to explore what they have done and the connections between the weeks and array of subtopics. The students annotate physical printouts of the syllabus with their reflections, memories, and questions. These annotations, different for each student, are then photographed and analyzed by AI vision tools to identify patterns, themes, and insights across the class. The LLM can order and rank the connections it identifies, reasoning inductively to reveal the topics students found most salient, most difficult, or most interconnected. This approach combines tangible engagement with the material syllabus with rapid AI-powered synthesis, producing an overall picture that allows for enhanced meta-reflection on the course's content and structure as a whole. It essentially gives students a bird's-eye view of their own notes, letting them see the most common and complex threads visually mapped out and neatly categorized. Here AI provides immediate, empirically grounded feedback that would otherwise exceed the temporal (and methodological) limitations of the course, and that feedback can guide final discussions about the course's learning outcomes, its deepest components, and the students' experiences.
**AI-Generated Content Analysis** Philosophy instructors can design assignments where students analyze and improve upon deliberately subpar AI-generated philosophical arguments. This approach teaches students to identify the differences between superficially correct writing and substantive philosophical reasoning. Students practice critical thinking by pinpointing logical fallacies, insufficient evidence, or oversimplified treatments of complex ideas in the AI-generated content, strengthening their ability to construct rigorous arguments. If desired, a second step may be taken: students write new and improved paragraphs, then feed those, along with the original, intentionally weak texts, into a custom GPT that they and their instructor have built, one designed to write in a sophisticated register and to reference specific sources. The GPT can be asked to analyze and compare both texts, to indicate the areas of greatest improvement, and to provide detailed feedback, all of which can enable deeper reflection on the material at hand. The second step effectively moves in the opposite direction from the first: in the first, AI content is generated and then commented on by humans; in the second, human content is commented on by AI; and, if the process is repeated, a productive feedback loop is established. This exercise can help students iteratively refine their explanations, adjust their style and conceptual framing, and build out their argumentation. The same mechanic could apply to proof-based mathematics, in which an advanced LLM is tasked with generating flawed proofs, with the subtlety of the errors tuned to the desired difficulty of the students' task.
**Spatial Ranking Activities** Instructors from many disciplines can design interactive learning activities in which students physically position concept cards, objects, art supplies, or images printed out in real time along continua (such as a large graph or a meter stick marked from 0-100) to indicate their evaluation of different ideas and their relationships. Using the unique capacities of the Learning Lab’s layout and materials, students place objects and annotations representing theories, historical events, or scientific concepts along scales and planes based on criteria like importance, chronology, causality, or ethical impact. The resulting arrangements are then photographed, and AI vision processing is used to quantify and visualize the collective results instantly. This approach combines spatial manipulation, student collaboration, and tactile and visual thinking with the analytical power of AI, allowing for macroscopic assessment of the created forms in order to augment discussion of patterns, outliers, and unexpected groupings, while also creating a permanent digital record of student thinking and experience that can be referenced throughout the course.
**Persona-Based Counterargument Simulation** To help students anticipate and respond to opposing viewpoints, instructors can have them feed their argument into an AI tool configured to generate responses from specific personas—historical figures, rival theorists, policy stakeholders, or hypothetical peer reviewers. Students then write rebuttals to these AI-generated counterarguments, strengthening their rhetorical adaptability and honing the skill of engaging with diverse perspectives in a structured, evidence-based manner. Students compare these AI-generated perspectives, identify where they converge or diverge, and reflect on how these differences affect the framing and reception of arguments.
**Term and Concept Definition** Students compile a list of key terms or foundational concepts from the course and use AI to generate plain-language definitions, illustrative examples, and cross-disciplinary analogies. Students then critique the AI’s language, ensuring accuracy and appropriateness for the field, and contribute refinements that reflect course-specific nuance. For more advanced courses, the end product becomes a collaboratively authored glossary, which can serve as a study resource for the current cohort and future classes.
**Brainstorming for Thesis and Topic Development** When students are tasked with identifying a research topic or developing a thesis statement, they can use AI to generate a wide range of possible angles, questions, or framing approaches. Students review the AI’s suggestions to identify promising ideas, combine compatible strands, and reject unhelpful or off-topic outputs. This process accelerates idea generation while reinforcing the importance of human curation and scholarly relevance in topic selection.
**Key Passage Extraction** Students upload lengthy readings, interview transcripts, or archival documents into an AI tool, which highlights candidate “key passages” and thematic clusters. Students verify, revise, and annotate these selections, explaining why each passage is significant and how it connects to broader course themes. The combined class annotations can be compiled into a shared study document.
**Equation-to-LaTeX Conversion** Students photograph handwritten equations or derivations, which AI then converts to LaTeX for inclusion in papers or presentations. Students review the AI’s transcription for accuracy, learning LaTeX syntax in the process and developing an eye for common conversion errors.
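A compact sketch of the conversion step with a vision-capable model (the file name and model are assumptions):

```python
# Convert a photographed handwritten derivation to LaTeX source.
import base64
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set

with open("derivation.jpg", "rb") as f:
    image = base64.b64encode(f.read()).decode()

response = client.chat.completions.create(
    model="gpt-4o",
    messages=[{"role": "user", "content": [
        {"type": "text", "text": (
            "Transcribe the handwritten math in this photo into LaTeX. "
            "Return only the LaTeX source.")},
        {"type": "image_url",
         "image_url": {"url": f"data:image/jpeg;base64,{image}"}},
    ]}],
)
print(response.choices[0].message.content)  # students verify this output
```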
**Lecture and Discussion Preparation** Students supply AI with course materials—slides, notes, readings—which it condenses into a pre-class briefing containing summaries, anticipated discussion questions, and thematic connections. Students adapt these briefings for their own use, coming to class better prepared to participate actively.
**Student Data Pattern Detection** In field-based or lab-based courses, students submit their raw data (e.g., measurements, observations, survey responses) to AI tools that identify patterns, trends, or anomalies. The AI produces visual summaries, which students then interpret, validate, and discuss, connecting the analysis back to theoretical frameworks.
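A minimal sketch of that workflow: summarize the raw data locally, then ask a model which patterns deserve interpretation (file and column names, model, and prompt are assumptions):

```python
# Profile field data with pandas, then ask an LLM to flag patterns.
import pandas as pd
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set

df = pd.read_csv("field_measurements.csv")

# Send compact summaries rather than raw rows to stay within limits.
profile = df.describe().to_string()
sample = df.head(10).to_string()

response = client.chat.completions.create(
    model="gpt-4o",
    messages=[{"role": "user", "content": (
        f"Summary statistics:\n{profile}\n\nSample rows:\n{sample}\n\n"
        "Describe notable trends, outliers, or anomalies a student "
        "should investigate, and suggest one plot for each.")}],
)
print(response.choices[0].message.content)
```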
**Analysis of Public Datasets** Students download publicly available datasets relevant to the course and use AI to generate preliminary analyses—identifying correlations, trends, or anomalies. They then critique the AI’s methodological assumptions and validate its results against standard analytical techniques.
**Lesson Plan Recaps for Absent Students** Instructors can input lecture notes, transcripts, or recordings into AI to produce a concise recap with key points, discussion highlights, and suggested follow-up activities for students who missed class.
**Case Study Expansion with AI Support** Students take a brief case study and prompt AI to add relevant context, alternative perspectives, or parallel examples. They then vet these additions for accuracy, relevance, and originality before integrating them into their analysis.