# existing-copy-on-ai-projects-23-24

## text for tb from late fall semester

This year at the Learning Lab, we've focused intensely on AI, recognizing it as a crucial emergent phenomenon in teaching, learning, and knowledge production. Our approach centers on understanding how students might use AI, specifically ChatGPT, in the classroom for a positive and proactive learning experience. Despite concerns about potential misuse for cheating, our efforts are geared toward harnessing its educational benefits.

We've launched a series of workshops where Christine, our assistant director, collaborates with Adam from P&P to introduce faculty to the HUIT sandbox. This secure environment ensures data privacy and prevents AI models from training on student interactions. Here, students and faculty can engage with ChatGPT to foster classroom innovation, testing ideas and critically analyzing responses. For instance, in gender studies, students can examine ChatGPT's language patterns to critique underlying biases in its training data. Similarly, in courses like EXPOS, students can refine their argumentative skills by evaluating and challenging ChatGPT's interpretive claims.

We're also exploring how AI can augment faculty capabilities, particularly in managing the vast amount of student learning data. ChatGPT can act as a tool to help faculty swiftly identify and respond to patterns in student feedback and questions, enhancing their instructional effectiveness.

Beyond educational applications, the Learning Lab is using AI to improve our operational efficiency. From managing databases of educational resources to enhancing media production, AI's capabilities are broadening our capacity. We are experimenting with AI tools for transcription, image recognition, and production, which will be integral to our upcoming courses on diverse media, including comic books, filmmaking, and video games.
This integration aims to facilitate a seamless transition for students between traditional and digital media, enabling them to achieve greater creativity and technical sophistication. In summary, our engagement with AI at the Learning Lab spans from enhancing classroom learning to optimizing our business processes. We are excited about the potential of these initiatives and look forward to their development in our courses.

## SLAVIC 121/TDM 121K: Ballet, Past and Present

Students in SLAVIC 121 investigate ballet from an array of perspectives: they reflect on the many ways knowledge about ballet is passed from generation to generation and on the ways ballet is represented, whether in texts, drawings, images, or videos; and they grapple with the difficulty of making arguments about a complex and multimodal art form in academic writing. In their time at the Learning Lab, students engaged in a series of activities that made use of the Learning Lab's media studio and new AI tools to tackle the course material in new ways (but in ways completely aligned with the course's objectives).

- At one station, students worked with printed frame-by-frame stills from the ballets they were analyzing, annotating them as they would in a visual essay
- At another, they learned the basic video editing skills required to isolate specific ballet movements, juxtaposing them in a single frame, assembling a series as a montage, or exporting a series of looping GIFs
- Finally, students posed on our green screen stage, then used computer vision tools to analyze and isolate the positions of their bodies in order to feed these into Stable Diffusion, a leading open-source AI image-generation tool. They learned to use the images of their own poses as controllers that could strictly determine key features of the exported image. In a sense, their bodies became the "input devices" that controlled the AI generation in thought-provoking ways.
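The pose-as-controller step can be sketched in outline. The conditioning image that guides a ControlNet-style generation is essentially a rendered stick figure: joints connected by lines on a black canvas. The Python below is a minimal, illustrative sketch of that rendering step only; the joint names and coordinates are invented, and in the actual workflow the keypoints came from OpenPose and the rendered map was fed into Stable Diffusion.

```python
import numpy as np

# Rasterize hypothetical pose keypoints into a ControlNet-style
# conditioning map: a white stick figure on a black canvas. In the real
# pipeline, OpenPose extracts the keypoints and Stable Diffusion consumes
# the rendered map; here only the rendering step is shown.

def render_pose_map(keypoints, connections, size=(256, 256)):
    """Draw a stick figure onto a black canvas.

    keypoints   -- dict mapping joint name to (x, y) pixel coordinates
    connections -- list of (joint_a, joint_b) pairs to connect with lines
    """
    canvas = np.zeros(size, dtype=np.uint8)
    for a, b in connections:
        (x0, y0), (x1, y1) = keypoints[a], keypoints[b]
        n = max(abs(x1 - x0), abs(y1 - y0)) + 1       # samples along the bone
        xs = np.linspace(x0, x1, n).round().astype(int)
        ys = np.linspace(y0, y1, n).round().astype(int)
        canvas[ys, xs] = 255                           # white skeleton pixels
    return canvas

# Invented pose roughly matching the student's hands-up green-screen pose.
joints = {"head": (128, 40), "neck": (128, 70), "l_hand": (60, 30),
          "r_hand": (196, 30), "hips": (128, 150)}
bones = [("head", "neck"), ("neck", "l_hand"),
         ("neck", "r_hand"), ("neck", "hips")]
pose_map = render_pose_map(joints, bones)
```

Because the conditioning image strictly constrains the figure's silhouette, the same pose can be re-rendered in endlessly different styles and settings while the body's position stays fixed.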
### moment text from showcase

Daria Khitrova brought her SLAVIC 121/TDM 121K: Ballet, Past and Present students in for a workshop where they engaged in a number of different multimodal activities to reflect on ballet, the way knowledge about ballet is transmitted over time, and media reproductions of ballet. In this particular moment, you can see a student posing on the green screen stage. We then had the OpenCV Python library analyze that student's pose using the OpenPose plugin, and that OpenPose data was used as the ControlNet input for the Stable Diffusion output you can see on the left-hand side. In this case, it's a fountain, but the student created many, many versions of herself in this pose in different sorts of environments. This is especially interesting in this context because the students were studying exactly this kind of translation, where a movement is captured, represented, and turned into some new art form. Their studies spanned hundreds of years to understand the limits of human memory, the limits of dance documentation, and the limits of passing on knowledge about dance through the body from generation to generation. And here they could reflect on what goes on in those acts of transmission and translation, but through multiple iterations of those translations in the span of a 20-minute activity at the workshop.

## ENGLISH 189VG: Video Game Storytelling

The Learning Lab hosted a five-hour workshop for 167 students in ENGLISH 189VG: Video Game Storytelling, structuring activities around Jesse Schell's Tetrad: aesthetics, mechanics, story, and technology. For example, at the Aesthetics station, Media & Design Fellow Chris Benham (PhD candidate in Music) led students through an activity where they built basic environments out of 3D shapes in Blender and then used Stable Diffusion to add detail, texture, and color, allowing students to create images of their own video game worlds.
AI was also leveraged for character design and narrative construction at the Story station.

### moment text from showcase

One of the more complex workshops we do each year is the workshop for ENGLISH 189VG: Video Game Storytelling. It's a large class, usually over 100 students. We obviously can't fit 100 students in our studio, so we have the students sign up in 15-minute slots; they show up 10 at a time and rotate through a series of 4 or 5 activities over the course of an hour to learn all the different elements of video games and how they're constructed. They begin in a room where they get oriented and come to understand a basic map of the video game production pipeline: how different sorts of assets are put into game engines, and how those game engines then push out content to different consoles with different input devices. Then they go into the other room to actually create some of this content. At a 3D modeling station they learned how to create polygonal models, then learned how to send those models through Stable Diffusion to create marvelous video game worlds, some of which you can see in these images. They also learned how to construct dialogue trees, where you can take what we know about a character or the lore of a culture in a game and reveal it to players through a series of interactive dialogue steps. They learned how to think about constructing such steps, and then, more concretely and technically, how to get that done in Unity, one of the major game engines.
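The dialogue trees described above reduce to a very small data structure: each node holds a line of dialogue (which can reveal character knowledge or lore), plus the player choices that branch to further nodes. A schematic Python sketch follows; the node names and dialogue lines are invented, and in the workshop this logic was built in Unity rather than Python.

```python
# A dialogue tree: each node carries one line of character speech and a
# mapping from player replies to the next node. Traversing the tree
# gradually reveals lore through interactive dialogue steps.

class DialogueNode:
    def __init__(self, line, choices=None):
        self.line = line              # what the character says at this node
        self.choices = choices or {}  # player reply -> next DialogueNode

    def respond(self, choice):
        """Advance the conversation along the chosen branch."""
        return self.choices[choice]

# A tiny invented tree that reveals a piece of world lore on one branch.
lore = DialogueNode("The old city fell long before my time, stranger.")
deflect = DialogueNode("Some questions are better left unasked.")
root = DialogueNode(
    "You're not from around here, are you?",
    {"Ask about the ruins": lore, "Stay quiet": deflect},
)

node = root.respond("Ask about the ruins")
```

In a game engine, each node would additionally trigger animations, voice lines, or quest-state changes, but the branching structure is the same.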
And then ultimately they went to a final station where they thought about various sorts of input and output devices: input devices such as keyboards, mice, and controllers, and output devices like screens and, these days, VR headsets. They were able to experiment with these and think a little about what those did for the gameplay, what they did for the user, and how the game changes depending on the input or output device, and they were able to see some of the assets they had generated at previous stations revealed to them in VR or on the giant screens of the Learning Lab.

## EMR 162: Interdisciplinary Perspectives on Race and Artificial Intelligence

In EMR 162, students analyze the relationship between contemporary discourses about, and applications of, AI, with an emphasis on the entanglement between artificial intelligence and issues related to race and ethnicity. At the EMR 162 workshop, students explored a range of image- and text-generation tools across different platforms as a way to get first-hand experience working with these tools and to see how they represent race, gender, ethnicity, and class (and the intersections between these identity categories). Students brought prompts with them to the workshop that they used to critically interrogate the output of these different tools, reflecting on the gap between their "ideal" output (i.e., what they were hoping to get an image or a description of) and what the generative AI tool actually produced. They also experimented with a recursive loop in a low-code environment: students would input an initial text prompt and receive back the revised prompt that OpenAI generates to tell the model exactly what to make. The students would then also get a description of the resulting image from the Vision API, enabling them to reflect on what the AI "sees" as the salient features of an image that it produced itself.
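The shape of that recursive loop can be sketched without any network calls. In the sketch below, `generate` and `describe` are injected as plain callables standing in for the two API steps (image generation, which also returns the service's revised prompt, and a vision-model description of the result); the stand-in functions and their outputs are invented for illustration.

```python
# One round of the reflective loop: student prompt -> image model
# (which returns the revised prompt it actually used) -> vision model
# description of the generated image. The API calls are injected as
# callables so only the loop's structure is shown.

def reflect_on_prompt(student_prompt, generate, describe):
    """Run one round of the prompt -> revision -> description loop.

    generate(prompt) -> (image, revised_prompt)
    describe(image)  -> text description of what the model "sees"
    """
    image, revised = generate(student_prompt)
    seen = describe(image)
    return {"original": student_prompt, "revised": revised, "seen": seen}

# Toy stand-ins that make the data flow visible without network access.
fake_generate = lambda p: ("<image-bytes>", f"detailed rendering of: {p}")
fake_describe = lambda img: "a figure in an unspecified setting"

result = reflect_on_prompt("a scientist at work", fake_generate, fake_describe)
```

Comparing `original`, `revised`, and `seen` side by side is what lets students pinpoint where the system's assumptions about identity and salience enter the pipeline.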
Finally, students could then use the image created in the first part of this exercise to get DALL-E to produce the opposite image, and then the opposite of that image, and so on--this practice allowed the students to grapple with the AI's choices about thematic significance, composition, and identity. At the end of the workshop, students shared some of the images and text they generated and reflected in a seminar-style discussion on the implications of these different AI-generated outputs and what these modes of representation might mean at both social and political levels.

## COMPLIT 200: Computing Fantasy: Imagination, Invention, Radical Pedagogy (Munari / Rodari / Calvino)

Students in COMPLIT 200 will be coming in to the Learning Lab for a series of workshops where they'll make use of our resources and an array of AI tools for their collaborative final project (a book of illustrated AI folktales). The workshops will cover image generation, coding Slack bots that function as an editorial staff, and various recursive moves that allow the AI to respond to its own output as input.

### moment text from showcase

One of the courses we worked with most this term was COMPLIT 200: Computing Fantasy, where the students had the ability to use AI to generate their own folktales based on what they had learned in the course. They had been studying many theories of folktales and many modernist authors who used various sorts of algorithmic ways of generating literary content. Students came in for a series of workshops, and one of them happened to land on the day of the eclipse, so what you see here is a series of Stable Diffusion-generated images that map onto the various stages of the eclipse that day. They're all Red Riding Hood-themed, so you'll see lots of wolves and lots of Riding Hoods.
The students in the course had been reading a series of texts by Munari in which he writes about differently colored Riding Hoods--a Red Riding Hood, a Green Riding Hood, a Yellow Riding Hood, a Blue Riding Hood, a White Riding Hood--and one of our staff and fellow projects in preparation for the course was to create Riding Hood generators that could generate Riding Hoods of any color. The eclipse was also an event, and it landed exactly during one of our COMPLIT 200 workshops, so the eventLab team, which is capable of processing whatever is taking place in the studio in real time, was able to leap into action and do its best to document it. The AI tools we've worked with over the course of the year have helped us develop many ways of generating media in real time. So we have the Stable Diffusion generations of the different stages of the eclipse, but then, once we generated those--or once we got the images of the eclipse from the cameras--we were able to feed them back out into the real world in the form of everything from printed images to, as you'll see at the station, block prints that our students created from Midjourney- or Stable Diffusion-generated images, or simply from imitating the photos we had of the eclipse that day. All year long, this move back and forth between the digital world and the physical world was one of the major themes and one of the major techniques we tried to get better at, and this day was a singular instance of that.

## SLAVIC 191: Silent Film

In SLAVIC 191: Silent Film, students used OpenAI's Whisper API for live transcriptions during their live video essays. This integration offered a novel approach to analyzing silent films like "The Cabinet of Dr. Caligari."
The AI-generated silent film-style title cards, created in response to students' interpretations, enriched their learning experience, providing deeper insight into the nuances of silent film production and analysis.

### moment text from showcase

This moment happened in a course called SLAVIC 191: Silent Film, where students were studying early 20th-century silent film. They came in for a workshop where they learned how silent films were constructed, and they actually ended up making a silent film about silent film. At one station, they learned about lighting and all posed for a close-up that matched one of the films they were watching that week, Joan of Arc. At another station, they designed abstract backgrounds akin to the ones in The Cabinet of Dr. Caligari. And at a third station, they discussed what they had learned at the previous two stations to reflect on how silent film is constructed--ideas they had developed over the course of the term, and how those ideas were impacted or changed by what they learned about film production during the workshop. We then had the AI construct title cards based on what they said and output those in order to create a silent film about silent films.

## TDM 98 Junior Tutorial

In TDM 98 Junior Tutorial, students were introduced to AI and generative AI. They had the opportunity to code their own bots and create an improvised drama within a dedicated experimental Slack channel. This activity not only taught them about AI design and engineering but also provided a unique perspective on how skills from non-coding disciplines can be applied in cutting-edge technological fields. The hands-on experience with AI in this tutorial led to unexpected learning outcomes and creative discoveries.
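A bot-driven improvised drama of the kind the tutorial built reduces to a turn-taking loop: each bot persona sees the transcript so far and contributes the next line. The Python sketch below shows only that scaffolding; the personas here are plain functions with invented lines, standing in for the students' LLM-backed Slack bots, and the real version would post each turn to a Slack channel.

```python
# Round-robin improvisation: each persona is a callable that reads the
# transcript so far and returns its next line. In the workshop, each
# persona would be a student-coded bot calling an LLM and posting to an
# experimental Slack channel; here the personas are simple stand-ins.

def improvise(personas, opening_line, turns):
    """Run a turn-taking scene; returns the transcript as (speaker, line) pairs."""
    transcript = [("Narrator", opening_line)]
    for i in range(turns):
        name, speak = personas[i % len(personas)]
        transcript.append((name, speak(transcript)))
    return transcript

# Invented personas with fixed habits (a real bot would call an LLM here).
personas = [
    ("Hero", lambda t: "I will face whatever comes next."),
    ("Trickster", lambda t: f"Bold words, after {len(t)} lines of talk!"),
]
scene = improvise(personas, "A storm gathers over the stage.", 4)
```

Because every persona receives the full transcript, each bot's "character" can react to everything said before it, which is where the drama's improvised quality comes from.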
## SOC-STD 68RA: Radical Actors: The Role of Public Education in American Social Change

In SOC-STD 68RA: Radical Actors: The Role of Public Education in American Social Change, the Learning Lab focused on enhancing student presentations through AI technology. Live recordings and transcripts of these sessions provided a substantial starting point for student projects, offering a pre-structured outline that included written content and visual prototypes. This approach helped streamline the project development process, fostering deeper engagement with the course content.

## CE 10: StudioLab on Creativity and Entrepreneurship

In CE 10: StudioLab on Creativity and Entrepreneurship, students developed pitch videos for their entrepreneurial projects. They began with a business model canvas and received feedback from Hunt Lambert, which was then used to train an OpenAI bot. This bot provided additional, nuanced feedback on their pitches, offering a unique interactive learning experience. Despite some challenges in fine-tuning the AI, this approach added significant value to the students' entrepreneurial understanding and project development.

## images

![3D modeled shapes, including cylinders, columns, and cubes in grayscale](https://files.slack.com/files-pri/T0HTW3H0V-F06PSDFGFC2/render_green.local_20240221_112926.png?pub_secret=320389bf2f)

![Fantastical and mysterious video game scene with buildings in the shapes of cylinders, columns, and cubes](https://files.slack.com/files-pri/T0HTW3H0V-F06PUT1UW20/00032-3986639472.png?pub_secret=974871e6c2)

![A student poses on the green screen with her hands up and legs crossed](https://files.slack.com/files-pri/T0HTW3H0V-F06NYNTQZTJ/screenshot_2024-03-06_at_4.41.19___pm.png?pub_secret=c551410bbe)

![A statue of a woman with hands up and legs crossed](https://files.slack.com/files-pri/T0HTW3H0V-F06QAJW063S/00025-3564191072_slavic_tdm_121.png?pub_secret=753e5c5ec1)