## Sprint #_ Retrospective:
* members in attendance:
  - Egemen
  - Ethan
  - Alex
  - C.J.
* what worked well
* Implemented varying end times for clips: generated clips now run 15-60 seconds, ending where the interesting part cuts off. This is a huge upgrade; previously clips found interesting parts of videos, but the clip as a whole often was not interesting.
* Tested ML deployment platforms (beam.cloud) for serving our Whisper and Hugging Face models as an API, and they worked. Beam is serverless, so instead of provisioning a high-memory EC2 instance we could run the project on Beam's virtual machines, which offer high-performance GPUs, CPUs, and plenty of RAM. The main downside of beam.cloud and similar platforms is the limited free tier, but we doubt we will exceed it for class. It would also make it easy to add a GPT step for summarizing transcriptions and generating tags.
* Set up a database for metadata; working backend endpoints for it are in progress. We opted for DynamoDB because our metadata is easier to model in JSON format and faster to retrieve.
* Implemented a video deletion endpoint that we will soon integrate with the frontend so that users can delete their videos.
* Enhanced the clip editing page so that a user can click buttons to edit the clip's start and stop times in the video player component. The next step is to save the user-adjusted start/stop times in our database so that we can store and fetch them.
* Added a progress bar for video uploads, giving the user an indication of upload/processing/clip-generation progress.
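The metadata table above could be accessed roughly like this with boto3. The table name, key schema, and attribute names are illustrative assumptions, not our actual schema; note DynamoDB requires `Decimal` rather than float for numbers, and expression attribute names sidestep reserved-word issues:

```python
from decimal import Decimal

TABLE_NAME = "clip-metadata"  # hypothetical table name


def to_dynamo_item(video_id, clip_id, start, end):
    """Build a clip-metadata item; DynamoDB rejects floats, so use Decimal."""
    return {
        "video_id": video_id,  # assumed partition key
        "clip_id": clip_id,    # assumed sort key
        "start": Decimal(str(start)),
        "end": Decimal(str(end)),
    }


def get_table():
    import boto3  # assumes AWS credentials are configured in the environment
    return boto3.resource("dynamodb").Table(TABLE_NAME)


def save_clip(table, video_id, clip_id, start, end):
    table.put_item(Item=to_dynamo_item(video_id, clip_id, start, end))


def update_clip_times(table, video_id, clip_id, start, end):
    """Persist user-edited start/stop times from the clip editor."""
    table.update_item(
        Key={"video_id": video_id, "clip_id": clip_id},
        UpdateExpression="SET #s = :s, #e = :e",
        # placeholder names avoid clashing with DynamoDB reserved words
        ExpressionAttributeNames={"#s": "start", "#e": "end"},
        ExpressionAttributeValues={
            ":s": Decimal(str(start)),
            ":e": Decimal(str(end)),
        },
    )


def delete_video_clip(table, video_id, clip_id):
    """Would back the video-deletion endpoint: drop the metadata row."""
    table.delete_item(Key={"video_id": video_id, "clip_id": clip_id})
```

JSON-shaped items map directly onto `put_item`/`update_item`, which is the modeling convenience that motivated DynamoDB over a relational schema here.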
* what didn't
* Deploying the models to EC2 is still not implemented. There are working alternatives (see above), but we need to discuss where we are headed with deployment, and we may need to delay it given how much we anticipate it will cost. We may also consider AWS Lightsail instead of EC2 because it may be cheaper. For now, deployment is still in the research phase.
* Still need to show the transcript on the clip editing page
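Wherever the models end up (beam.cloud, EC2, or Lightsail), the backend's side of the call stays a plain HTTP request. A minimal sketch, where the URL, auth scheme, and request-body shape are all assumptions about a hypothetical deployment rather than a real API:

```python
import json
import urllib.request

# Hypothetical endpoint URL; a serverless deployment would expose
# something like this once the Whisper model is deployed.
TRANSCRIBE_URL = "https://example.beam.cloud/transcribe"


def build_payload(s3_key, language="en"):
    """Request body our backend would send to the deployed transcription API."""
    return {"s3_key": s3_key, "language": language}


def transcribe(s3_key, api_token):
    """POST a transcription request and return the parsed JSON response."""
    req = urllib.request.Request(
        TRANSCRIBE_URL,
        data=json.dumps(build_payload(s3_key)).encode(),
        headers={
            "Content-Type": "application/json",
            "Authorization": f"Bearer {api_token}",  # assumed auth scheme
        },
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)
```

Because only the URL and token would change between platforms, this keeps the EC2-vs-serverless decision out of the rest of the backend code.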
* self assessment on progress
* where are you in relation to progress towards product and milestones?
* give an estimate of how far towards your goals you are, do you think you're on track?
* lay out *each* of the following weeks till end of term with brief goals for each
* current week 6:
* Configure the editing page to retrieve a clip's start/end timestamps from the DB, and to write user-edited timestamps back
* Use videos from S3 for playback instead of a random placeholder video
* Improve the upload progress bar by moving it into a fixed, floating component in the bottom corner of the screen
* Implement transcript editing
* Design a way to use transcript to edit start/end timestamps
* Determine how best to lead users through a trial
* Implement correct clip end timestamps depending on interest level
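One way the interest-based end time could work, sketched with assumed inputs: a list of `(timestamp, interest_score)` pairs and an arbitrary threshold. This is an illustration of the idea, not our actual implementation:

```python
def pick_clip_end(start, scores, threshold=0.5, min_len=15.0, max_len=60.0):
    """Return an end timestamp for a clip starting at `start`.

    `scores` is a list of (timestamp, interest) pairs for the source
    video. The clip ends where interest first drops below `threshold`,
    clamped so clips stay between 15 and 60 seconds long.
    """
    end = start + max_len  # default: interest never drops, use max length
    for t, score in scores:
        if t <= start + min_len:
            continue  # never cut before the minimum clip length
        if score < threshold:
            end = t   # interest cut off here
            break
    return min(end, start + max_len)
```

The two clamps encode the 15-60 second rule directly, so a momentary dip right after the clip starts cannot produce an unusably short clip.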
* week 7: goals
* Feature freeze end of week 7
* Flesh out landing page
* Implement the experience for a new/unauthenticated user
* Develop notification system for updating user about progress
* When a video is being clipped / has been clipped
* Refactor the generation code and API to upload clips as they are generated, improving the user experience
* Implement a way to use transcript to edit start/end timestamps
* Implement settings page
* Password reset
* Billing
* Register domain and settle on a name (end of week) so that we can begin early-stage advertising
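Using the transcript to edit start/end timestamps could reuse the segment timestamps Whisper already produces. A rough sketch, assuming segments shaped like Whisper's `result["segments"]` output (dicts with `start`/`end`/`text` keys); the snapping rule itself is an assumption:

```python
def snap_to_segment_boundary(t, segments):
    """Snap a user-chosen timestamp to the nearest transcript boundary.

    `segments` follows Whisper's output shape: a list of dicts with
    "start", "end", and "text" keys. Snapping edits to these boundaries
    keeps clips from starting or ending mid-word.
    """
    # Collect every segment edge once, then pick the closest one.
    boundaries = sorted({s["start"] for s in segments} | {s["end"] for s in segments})
    return min(boundaries, key=lambda b: abs(b - t))
```

In the editor, clicking a transcript line could then set the clip's start/end to that segment's `start`/`end` directly, which also gives us transcript editing and timestamp editing in one UI.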
* week 8: goals
* Rigorous user-testing and trials - try to accumulate user feedback and iterate based on feedback
* Develop a logo
* Bug hunt and bug fixes
* Begin to advertise more seriously
* week 9: goals
* Continue bug hunting
* Continue getting user feedback and iterating
* Heavy focus on UI, making sure our frontend is as clean and professional as possible
* briefly summarize any other topics/discussions
* Looping a video in our video player component: ideally this would be possible. Right now we have start and end timestamps within the long-form video, and in the clip editor the video stops playing when it reaches the end timestamp. However, hitting the play button continues playback beyond the end timestamp; ideally it would restart at the start timestamp instead.
* Pre-generating clips: if we don't pre-generate clips, they have to be generated after the clipEditor page, which takes time because clip generation is slow. So a user's clip basically won't be ready immediately after they finish trimming.
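The pre-generation option discussed above amounts to kicking off generation for every detected span right after upload, so most clips are ready by the time the editor opens. A minimal sketch with a thread pool; the `generate_clip` stub stands in for our real, much slower pipeline:

```python
from concurrent.futures import ThreadPoolExecutor


def generate_clip(video_id, start, end):
    """Hypothetical stand-in for the real (slow) clip-generation step."""
    return {"video_id": video_id, "start": start, "end": end, "status": "ready"}


def pregenerate_clips(video_id, spans, workers=4):
    """Generate all clips for a video concurrently, right after upload.

    `spans` is a list of (start, end) pairs from interest detection.
    Results come back in the same order the spans were submitted.
    """
    with ThreadPoolExecutor(max_workers=workers) as pool:
        futures = [pool.submit(generate_clip, video_id, s, e) for s, e in spans]
        return [f.result() for f in futures]
```

The trade-off is compute spent on clips the user may discard after trimming, versus the wait the user otherwise hits right after the clipEditor page.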