# ai-hackathon-oral-exam-workflow

## problem statement

This term and next, a number of faculty have expressed interest in using the Bok Learning Lab's video studio for what we might term an "AI-Augmented Oral Exam." We frequently support courses that require end-of-term reflections, presentations, or "artist's statements" that we shoot in our studio. And this year, as we've worked to design such activities with faculty, the possibility of having an AI assistant ask follow-up questions has emerged again and again as a promising option to explore.

Many elements are required to make this work, which is why it feels like a good fit for the "agentic AI" theme of the hackathon:

- live transcription of the student presentation
- capture of video content, if relevant
- connecting the live transcription data to timestamp data on the video we are recording (see the alignment sketch below)
- continuous processing and reprocessing of the transcription by an LLM call, aware of course learning objectives, in search of candidate questions (see the question-generation sketch below)
- ongoing evaluation of candidate questions as the speaker presents
- determining question order and number of questions given the time remaining after the presentation
- triage of follow-up questions given time remaining (see the triage sketch below)
- wrap-up of the interview/exam
- packaging of results for faculty graders
- etc.

We obviously won't tackle all of these during the hackathon (handling even one or two would get us closer to our goal).
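
To make a few of the steps concrete, here's a minimal sketch of the transcription-to-timestamp alignment, assuming a streaming speech-to-text service that emits segments with offsets measured from the start of its own stream. `TranscriptSegment` and `TimestampAligner` are hypothetical names for illustration, not any particular vendor's API.

```python
# A minimal sketch of tying live transcript segments to the video timeline.
# Assumes the STT stream and the video recorder start at slightly different
# moments, and that we measure that offset once at the top of the session.
from dataclasses import dataclass


@dataclass
class TranscriptSegment:
    text: str
    start_s: float  # seconds from when the STT stream began
    end_s: float


class TimestampAligner:
    """Maps STT stream offsets onto the video recording's timeline."""

    def __init__(self, stt_start_offset_s: float = 0.0):
        # How long after the recorder started the STT stream began.
        self.offset = stt_start_offset_s
        self.segments: list[TranscriptSegment] = []

    def add(self, seg: TranscriptSegment) -> TranscriptSegment:
        # Shift the segment so its times refer to the video, not the stream.
        aligned = TranscriptSegment(
            text=seg.text,
            start_s=seg.start_s + self.offset,
            end_s=seg.end_s + self.offset,
        )
        self.segments.append(aligned)
        return aligned

    def transcript_so_far(self) -> str:
        return " ".join(s.text for s in self.segments)
```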
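
For the continuous question-generation step, here's a sketch of the core call that would run every time the transcript grows. `generate` is a placeholder for whichever LLM client we end up using (OpenAI, Anthropic, a local model); the prompt wording and the one-question-per-line output format are assumptions, not a fixed design.

```python
# A sketch of re-asking an LLM for candidate follow-up questions as the
# transcript grows. `generate` stands in for whatever LLM client we adopt.
from typing import Callable


def candidate_questions(
    transcript_so_far: str,
    learning_objectives: list[str],
    generate: Callable[[str], str],
    max_questions: int = 5,
) -> list[str]:
    prompt = (
        "You are helping run an oral exam. Course learning objectives:\n"
        + "\n".join(f"- {obj}" for obj in learning_objectives)
        + "\n\nPartial transcript of the student's presentation so far:\n"
        + transcript_so_far
        + f"\n\nPropose up to {max_questions} follow-up questions that probe "
        "these objectives. One question per line, no numbering."
    )
    raw = generate(prompt)
    # Keep non-empty lines, capped at max_questions.
    return [line.strip() for line in raw.splitlines() if line.strip()][:max_questions]
```

In a live session this would be called on a timer or whenever a new transcript segment arrives, with earlier candidates re-scored against the fuller transcript.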
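
And for triage, one simple way to fit questions into the time remaining after the presentation: greedily take the highest-scoring candidates whose estimated answer times fit the budget. The scores and per-question time estimates here are assumed inputs from the evaluation step, not something we've settled on.

```python
# A sketch of triaging candidate questions against the remaining exam time.
from dataclasses import dataclass


@dataclass
class Candidate:
    question: str
    score: float         # relevance to learning objectives, higher is better
    est_answer_s: float  # rough time we expect the student to take answering


def triage(candidates: list[Candidate], seconds_left: float) -> list[Candidate]:
    """Greedily pick the highest-scoring questions that fit the time budget."""
    chosen: list[Candidate] = []
    for c in sorted(candidates, key=lambda c: c.score, reverse=True):
        if c.est_answer_s <= seconds_left:
            chosen.append(c)
            seconds_left -= c.est_answer_s
    return chosen
```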