# Add captions demo
Author: Jonathan Hsu.
## Intro
First, I'll use FFmpeg to transform the video file (.mp4) into an audio file (.wav). FFmpeg is free and open source.
Then I'll use Whisper, a module developed by OpenAI, to generate the subtitles. Just feed it the audio file, and the AI will output captions automatically.
The next job is to check the subtitles, because the AI is not always correct. So I'll use Jubler to edit the captions.
Finally, I'll use DaVinci Resolve 18 to put the video and the captions together.
## Install Software
### python3
It's already on my computer.
You can download it at
https://www.python.org/
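Once installed, you can confirm everything is on your PATH from a terminal (either command should print a version number):

```shell
# Confirm Python and pip are installed and reachable
python --version
pip --version
```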
### ffmpeg
Follow one of the guides below.
For English edition:
https://phoenixnap.com/kb/ffmpeg-windows
In Chinese:
https://forum.gamer.com.tw/C.php?bsn=60030&snA=627494
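After following one of the guides, confirm FFmpeg is reachable from a new terminal window:

```shell
# Prints the version banner if FFmpeg was installed and added to PATH
ffmpeg -version
```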
### PyTorch
Because Whisper uses PyTorch under the hood, we need to install PyTorch first.
Choose the version you need here.
https://pytorch.org/get-started/locally/
For me: I have an NVIDIA discrete graphics card and my OS is Windows 11, so I use this command to install it.
```
pip3 install torch torchvision torchaudio --index-url https://download.pytorch.org/whl/cu118
```

### Whisper
Follow the guide at https://github.com/openai/whisper
Pick the command you need and paste it into the terminal: the first installs the latest release, while the other two install or force-update Whisper straight from the repository.
```
pip install -U openai-whisper
pip install git+https://github.com/openai/whisper.git
pip install --upgrade --no-deps --force-reinstall git+https://github.com/openai/whisper.git
```
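A quick way to confirm the install worked, without loading any model (this uses only the Python standard library, so it's safe to run anywhere):

```python
import importlib.util

# Check whether the whisper package is importable after installation
spec = importlib.util.find_spec("whisper")
print("whisper installed:", spec is not None)
```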
### Jubler and DaVinci Resolve
Visit their websites and download the .exe files. Click "Next" until the installation finishes.
https://www.jubler.org/
https://www.blackmagicdesign.com/products/davinciresolve
~~Be careful, it is Jubler not Jabler, if you spell it wrong, you'll see something special.~~
## Generate captions
### Check gpu
First, we need to check that our GPU (graphics card) is working.
Open Python in the terminal.
```
python
```
Import torch and check the GPU; if a GPU is available, the result will be True.
```
import torch
print(torch.cuda.is_available())
```
Check how many GPUs I have. The return value will be 1 for me because I have just one graphics card.
```
print(torch.cuda.device_count())
```
Check which GPU I am using now. In computer science, 0 means the first one, 1 means the second one, and so on.
```
torch.cuda.current_device()
```
Check the device name. My graphics card is an NVIDIA 1660s. If everything is fine, I'll see my card's name on the screen.
```
torch.cuda.get_device_name(0)
```
Looking at the results, my GPU is working.
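The four checks above can also be combined into one small script; this sketch degrades gracefully if PyTorch is missing or no CUDA device is found:

```python
# Run all of the GPU checks from above in one go
try:
    import torch
    if torch.cuda.is_available():
        print("GPUs found:", torch.cuda.device_count())
        print("Current device index:", torch.cuda.current_device())
        print("Device name:", torch.cuda.get_device_name(0))
    else:
        print("CUDA not available - Whisper will run on the CPU (much slower)")
except ImportError:
    print("PyTorch is not installed - see the PyTorch section above")
```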

### Generate captions
Transform the video file (.mp4) into an audio file (.wav) using FFmpeg.
Use this command: the first path is the input video, and the second is the output audio file.
```
ffmpeg -i myfile.mp4 myfile.wav
```
I successfully created a .wav file named "ReleaseM3.wav", which contains only the audio of the Apple release video.
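Whisper resamples audio to 16 kHz mono internally, so you can also produce that format up front; `-ar` (sample rate) and `-ac` (channel count) are standard FFmpeg flags, and the file names here are just examples:

```shell
# Extract 16 kHz mono audio, the format Whisper works with internally
ffmpeg -i AppleRelease.mp4 -ar 16000 -ac 1 AppleRelease.wav
```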

Use this command to generate the captions.
The quoted path is the audio file, and `--model medium` selects the medium model. A bigger model produces more accurate results but takes longer; choose a model size based on your hardware.
```
whisper "D:\大一上英文報告\AppleRelease.wav" --model medium
```
As shown in the screenshot, it is generating captions now, and the GPU is hard at work.
Excellent!
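If you only want a caption file, recent versions of the Whisper CLI can also write one directly; `--output_format` and `--language` are standard options (pinning the language skips auto-detection). The next section shows a Python route that gives more control:

```shell
# Write an .srt file directly and skip language auto-detection
whisper "AppleRelease.wav" --model medium --language en --output_format srt
```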


## Make an SRT file
But the captions printed in the terminal have one disadvantage: they are hard to edit. So we need a caption file (.srt).
I wrote a small program to do the job.
Replace the path ("D:\大一上英文報告\AppleRelease.wav") with the file you want to generate captions for.
```python
import whisper
from whisper.utils import get_writer

# Load the model (use "medium" or larger for better accuracy)
model = whisper.load_model("base")

# Use a raw string so the backslashes in the Windows path are kept as-is
audio = r"D:\大一上英文報告\AppleRelease.wav"
result = model.transcribe(audio)

# Save the transcription as an SRT file next to the script
output_directory = "./"
srt_writer = get_writer("srt", output_directory)
srt_writer(result, audio)
```
Run the code; now we have the SRT file, and we can edit it in Jubler.
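For reference, the SRT format the writer produces is very simple: numbered blocks with `HH:MM:SS,mmm` timestamps. Here is a minimal sketch that formats Whisper-style segments by hand (the segment data below is made up for illustration, so no Whisper install is needed to run it):

```python
# Minimal SRT writer for Whisper-style segments (start/end in seconds)
def to_timestamp(seconds: float) -> str:
    # Convert seconds to the SRT "HH:MM:SS,mmm" timestamp format
    ms = int(round(seconds * 1000))
    h, ms = divmod(ms, 3_600_000)
    m, ms = divmod(ms, 60_000)
    s, ms = divmod(ms, 1000)
    return f"{h:02d}:{m:02d}:{s:02d},{ms:03d}"

def segments_to_srt(segments) -> str:
    # Each segment becomes a numbered block: index, time range, text
    blocks = []
    for i, seg in enumerate(segments, start=1):
        start, end = to_timestamp(seg["start"]), to_timestamp(seg["end"])
        blocks.append(f"{i}\n{start} --> {end}\n{seg['text'].strip()}\n")
    return "\n".join(blocks)

# Made-up example segments, shaped like Whisper's result["segments"]
demo = [
    {"start": 0.0, "end": 2.5, "text": "Good morning everyone."},
    {"start": 2.5, "end": 5.0, "text": "Welcome to the keynote."},
]
print(segments_to_srt(demo))
```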

## Put them together
Create a new project.

Put the captions file and video file in the media pool.

Move the captions and video in the timeline.

Look, they sync perfectly.

Finally, go to the render page to export the video.

Now, the captions are in the videos!

## My part
SB: (examining charts) March should be when the oceans globally are warmest, not August. The oceans are breaking records. It makes me nervous how much warmer the ocean may get between now and next March.
KL: (concerned) The water feels like a bath when you jump in. Coral reefs are dying in the Gulf of Mexico.
SB: (worried) Why are the oceans so hot now? Climate change is making them warmer, absorbing most of the heat from greenhouse gas emissions.
KL: The more fossil fuels we burn, the longer it'll take to stabilize the oceans.
SB: Marine heatwaves are occurring unexpectedly. They're doubling in frequency and becoming more intense.
KL: (recalling events) In June, UK waters were 5C higher than average. Florida's sea surface hit 38.44C – like a hot tub!
SB: We knew the sea surface would warm, but we didn't expect this. Something's happening beneath the waves.
SB: (pointing at the map) Look at the severity in the Gulf of Mexico, North Atlantic, and the Mediterranean. El Niño is visible in the Pacific.
KL: We're witnessing a crisis unfolding in our oceans. We need urgent action.