# AIM Labs Fall 2022 Project Proposal
## Lecture Shortener
**Contributors:** Harsh, Kyle, Ahmed, Rithvik, Ella
**Date:** October 15, 2022
### Description
Our project will dynamically change the playback speed of a recorded lecture based on its inferred information content. Our final deliverable will be an extension that takes a given YouTube video and selects a suitable viewing speed for every segment of the video.
As MIT students, we've noticed that although the college lecture experience is instrumental to grasping important topics, gaps in lectures or unnecessary information can be distracting and frustrating. By using the capabilities of AI video and audio processing, we can create an extension that identifies the important/critical parts of a prerecorded lecture and dynamically makes decisions to address this key issue. By changing the viewing speed, our product will help students focus on the most important concepts as efficiently as possible.
There are multiple ways such an extension can benefit users. The first is saving time: when a professor goes on a tangent or there is a gap in the lecture without important content, the player can dynamically speed up during that portion of the video. Conversely, when a user reaches a portion with high retention rates/seemingly important content, the player will automatically replay those portions of the lecture to make sure the user is grasping the key concepts.
A key consideration for this player is that it combines audio and video content as inputs when deciding its next step. The two signals will be processed relatively independently, but their outputs will be combined so that the player modifies the lecture speed/format efficiently without distracting the user.
### Methods

#### Preprocessing
We will use youtube-dl to download YouTube videos as MP4 files. Then, we will use FFmpeg to split the videos into individual frames and to extract the audio. Next, we will use OpenCV to preprocess the images for use by the model.
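A rough sketch of this pipeline is below; youtube-dl and ffmpeg are assumed to be on the PATH, and the one-frame-per-second sampling rate, 224x224 resize, and grayscale conversion are placeholder choices rather than final decisions:

```python
import glob
import os
import subprocess

import cv2


def download_and_split(url, out_dir="data"):
    """Download a lecture, extract its audio track, and dump one frame per second."""
    os.makedirs(out_dir, exist_ok=True)
    # Download the video as MP4 (youtube-dl must be installed).
    subprocess.run(["youtube-dl", "-f", "mp4", "-o", f"{out_dir}/lecture.mp4", url], check=True)
    # Extract the audio as WAV for transcription.
    subprocess.run(["ffmpeg", "-i", f"{out_dir}/lecture.mp4", f"{out_dir}/audio.wav"], check=True)
    # Sample one frame per second for the vision model.
    subprocess.run(["ffmpeg", "-i", f"{out_dir}/lecture.mp4", "-vf", "fps=1",
                    f"{out_dir}/frame_%05d.png"], check=True)


def preprocess_frames(out_dir="data", size=(224, 224)):
    """Resize and grayscale each frame so the models see a uniform input."""
    frames = []
    for path in sorted(glob.glob(f"{out_dir}/frame_*.png")):
        img = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
        frames.append(cv2.resize(img, size))
    return frames
```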
The target data for training the model will be YouTube retention graphs. These graphs show how often each part of a YouTube video is replayed. They can be collected using an open-source [API](https://stackoverflow.com/questions/72610552/most-replayed-data-of-youtube-video-via-api). As seen below, the retention graphs give a good baseline indicator of which parts of a video are the most important/dense. Only popular videos have these retention graphs, which is why they are used only to train the model.

*Example retention graph (the gray waveform above the timeline)*
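To turn those graphs into training targets, one rough sketch follows; it assumes the retention data has already been fetched as a list of (start_ms, duration_ms, score) heat markers, and the exact shape of the unofficial API's response may differ:

```python
import numpy as np


def markers_to_targets(markers, video_seconds):
    """Convert heat markers into a per-second retention target in [0, 1].

    markers: list of (start_ms, duration_ms, score) tuples from the retention data.
    """
    targets = np.zeros(video_seconds)
    for start_ms, duration_ms, score in markers:
        lo = int(start_ms // 1000)
        hi = min(video_seconds, int((start_ms + duration_ms) // 1000) + 1)
        targets[lo:hi] = score
    # Normalize so every lecture uses the same 0-1 scale regardless of popularity.
    if targets.max() > 0:
        targets = targets / targets.max()
    return targets
```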
#### Training
We will convert the audio file to a transcript and use NLP on the transcript to predict the YouTube retention metric. Additionally, we will use a vision model to predict the importance of a time block based on an image of the lecture. The output of both models will be the retention metric.
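As a stand-in for whichever NLP model we settle on, here is a minimal sketch of the audio branch using scikit-learn, regressing the retention score from transcript segments via TF-IDF and ridge regression:

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import Ridge
from sklearn.pipeline import make_pipeline


def train_text_model(segments, retention):
    """Fit a simple transcript -> retention regressor.

    segments: transcript text for each time block; retention: matching target scores.
    """
    model = make_pipeline(TfidfVectorizer(max_features=5000), Ridge(alpha=1.0))
    model.fit(segments, retention)
    return model

# Usage: predicted = train_text_model(train_segments, train_retention).predict(test_segments)
```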
In addition, we will conduct hyperparameter tuning to further improve the performance of the two models. Using these two models, we will train an ensemble model that combines the predictions from the audio and visual components into one importance score.
We will validate performance on videos that were held out from the training process.
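One simple realization of that ensemble, sketched below: stack the two branches' predictions, fit a small linear blender, and score it on the held-out videos. The final ensembling method is still an open choice.

```python
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.metrics import mean_absolute_error


def fit_ensemble(audio_preds, vision_preds, retention):
    """Learn weights that combine the audio and vision scores into one importance score."""
    X = np.column_stack([audio_preds, vision_preds])
    return LinearRegression().fit(X, retention)


def validate(blender, audio_preds, vision_preds, retention):
    """Report error on videos held out from training."""
    X = np.column_stack([audio_preds, vision_preds])
    return mean_absolute_error(retention, blender.predict(X))
```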
#### Youtube Extension
We will build a YouTube extension or app that queries a backend AWS server with a YouTube link to an OCW lecture. The server will preprocess the video and run inference on it with the model. The resulting importance scores for each timestamp will then be sent back to the extension, which will use them to choose how fast to play each part of the lecture.
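A minimal sketch of the server side of that hand-off, assuming a Flask app running on AWS and hypothetical `preprocess` and `predict_importance` helpers wrapping the pipeline and ensemble described above:

```python
from flask import Flask, jsonify, request

app = Flask(__name__)


@app.route("/importance", methods=["POST"])
def importance():
    url = request.json["url"]
    # `preprocess` and `predict_importance` are placeholder names for the
    # download/frame/transcript pipeline and the ensemble model above.
    frames, transcript = preprocess(url)
    scores = predict_importance(frames, transcript)
    # One score per second; the extension maps these to playback speeds.
    return jsonify({"scores": [float(s) for s in scores]})


if __name__ == "__main__":
    app.run(host="0.0.0.0", port=8000)
```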
### Concerns
Although we feel fairly confident that we can execute this product, we do have some concerns in mind. The first key concern is making the player itself dynamic. As a team, we don't have much experience creating a dynamic video modifier, so this will pose some development challenges. Additionally, we are concerned about the efficiency of such a player: it may not be efficient enough to integrate seamlessly into an actual viewing experience (causing buffering, crashing, distracting inaccuracies, etc.). We plan to address these concerns through iterative testing and development.
A second key concern is how our image processing tool will work. Lectures come in varied formats, so making a computer discern whether text is important or not is a substantial task. For example, chalkboard text full of equations may be analyzed quite differently from typed text on a PowerPoint slide, yet both may be of equal importance to an average user. To address this, we need to expose the computer vision model to a variety of training data (many lectures from different classes) and adjust the decision framework accordingly.
Another concern is actually integrating our player into an existing tool like YouTube. Although we are relatively confident in our ability to modify an MP4, for example, integrating such a tool as a YouTube player extension is an area we are less familiar with. The process of integrating the tool may pose new technical challenges, especially in changing the playback location of the YouTube player dynamically through code. Also, YouTube has a number of relevant features (like the recently introduced retention graphs) that may be quite useful to our product but that our model may not be able to utilize. This is something we plan to keep in mind while training our ML algorithms over the course of this project.
### Minimal Working Product
The Minimal Working Product that we envision is a program that takes an input .mp4 file, uses ML/AI processing to identify the important video and audio content, and then exports a modified .mp4 containing the "condensed" information.
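One possible shape for that export step, sketched with moviepy and assuming the model output has already been converted into (start, end, speed) segments:

```python
from moviepy.editor import VideoFileClip, concatenate_videoclips
import moviepy.video.fx.all as vfx


def condense(input_path, output_path, segments):
    """Re-encode the lecture with each segment sped up by its assigned factor.

    segments: list of (start_sec, end_sec, speed) tuples derived from the importance scores.
    """
    clip = VideoFileClip(input_path)
    parts = [clip.subclip(start, end).fx(vfx.speedx, speed)
             for start, end, speed in segments]
    concatenate_videoclips(parts).write_videofile(output_path)

# Usage: condense("lecture.mp4", "condensed.mp4", [(0, 300, 1.0), (300, 600, 1.75)])
```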
During Week 7, we will evaluate where we are to see whether we have reached this goal (and if we were too ambitious).
Minimum Working Product Workflow Diagram:

## Calendar
### Week 1 (Due October 15)
For Week 1, we wrote up this project overview.
| Person| Progress and Personal Deliverable | Est. Time
|------|------|------|
| Harsh | Will create the diagrams and write training section | 30 minutes
| Kyle | Will write the methods| 20 minutes
| Ahmed | Will write the training and calendar| 15 minutes
| Rithvik | Will write up the description, concerns, and MWP sections | 45 minutes
| Ella | Contributed to the calendar | 10 minutes
### Week 2 (Due October 22)
By the end of week 2, we will have built preprocessors for each of the components of the model. Additionally, we will begin development on the model.
| Person| Progress and Personal Deliverable | Est. Time
|------|------|------|
| Harsh | Build preprocessing pipeline for taking YT videos to audio file and frames. | 120 minutes
| Kyle | Use YouTube's API to extract retention graphs| 60 minutes
| Ahmed |Build/find tool to do transcription from audio files| 120 minutes
| Rithvik | Will begin writing the OpenCV Training Model | 45 minutes
| Ella | Write project update/slides | 45 minutes
### Week 3 (Due October 29)
By the end of Week 3 we will have begun training our models on the data that was gathered the week prior.
| Person| Progress and Personal Deliverable | Est. Time
|------|------|------|
| Harsh |Work on vision model for predicting retention based on frames | 120 minutes
| Kyle | Work on vision model for predicting retention based on frames| 120 minutes
| Ahmed | Use Tensorflow-nlp to predict retention based on transcript| 120 minutes
| Rithvik | Will work on the image analytics in more detail, aiming to get metrics for the player | 45 minutes
| Ella | Work on alternative importance measures other than retention graph| 45 minutes
### Week 4 (Due November 5)
We will begin hyperparam tuning with hopes of improved performance. Additionally, we will begin using an ensemble model to combine the outputs of the visual and auditory models.
| Person| Progress and Personal Deliverable | Est. Time
|------|------|------|
| Harsh | Build ensembler that uses the visual and auditory models | 60 minutes
| Kyle | Hyperparam tuning on visual model| 120 minutes
| Ahmed |Hyperparam tuning on audio model| 120 minutes
| Rithvik | Will work on integrating the image analytics portion of the project into the decision framework for the player | 45 minutes
| Ella | Write script to construct shortened video from model output| 120 minutes
### Week 5 (Due November 19) [Larger Deliverable Due to Long Weekend]
| Person| Progress and Personal Deliverable | Est. Time
|------|------|------|
| Harsh | Work on backend for real time video processing on AWS to be used by the extension | 300 minutes
| Kyle | Finalize model| 60 minutes
| Ahmed | Begin working on YT extension| 120 minutes
| Rithvik | Will work on refining OpenCV aspect of project for larger deliverable | 45 minutes
| Ella | Integrate preprocessing pipeline and model inference to complete MVP | 60 minutes
### Week 6 (Due December 3rd) [Final Project Due, Internal Demo Day]
| Person| Progress and Personal Deliverable | Est. Time
|------|------|------|
| Harsh | Work on slides and demo prep| 75 minutes
| Kyle | Work on slides and demo prep| 75 minutes
| Ahmed |Work on slides and demo prep| 75 minutes
| Rithvik | Will work on integrating the project into the YouTube player extension | 75 minutes
| Ella | Work on slides and demo prep | 75 minutes
### Week 7 (Due December 10th) [Final Demo Day]
Will practice final presentation and finish slides.
| Person| Progress and Personal Deliverable | Est. Time
|------|------|------|
| Harsh | Refine slides and Presentation Prep | 75 minutes
| Kyle | Presentation Prep| 45 minutes
| Ahmed | Presentation Prep| 45 minutes
| Rithvik | Presentation Prep | 45 minutes
| Ella | Presentation Prep | 45 minutes