# Add captions demo

Author: Jonathan Hsu.

## Intro

First, I'll use "FFmpeg" to transform the video file (.mp4) into an audio file (.wav). It is free and open source. Then I'll use "Whisper" to generate the subtitles. It is a module developed by OpenAI: just give it the audio file, and the AI will output the captions automatically. The next job is to check the subtitles, because the AI is not always correct, so I'll use "Jubler" to edit the captions. Finally, I'll use "DaVinci Resolve 18" to put the video and the captions together.

## Install Software

### python3

It's already on my computer. You can download it at https://www.python.org/

### ffmpeg

Follow one of the guides below.

English edition: https://phoenixnap.com/kb/ffmpeg-windows

Chinese edition: https://forum.gamer.com.tw/C.php?bsn=60030&snA=627494

### PyTorch

Because Whisper uses PyTorch to run its AI model, we need to install PyTorch first. Choose the version you need here: https://pytorch.org/get-started/locally/

For me, I have an NVIDIA discrete graphics card and my OS is Windows 11, so I use this command to install it.

```
pip3 install torch torchvision torchaudio --index-url https://download.pytorch.org/whl/cu118
```

![Installation succeeded](https://hackmd.io/_uploads/Sk8CP_QST.png)

### Whisper

Follow the guide at https://github.com/openai/whisper

Copy and paste these commands one by one into the terminal.

```
pip install -U openai-whisper
pip install git+https://github.com/openai/whisper.git
pip install --upgrade --no-deps --force-reinstall git+https://github.com/openai/whisper.git
```

### Jubler and DaVinci Resolve

Visit their websites and download the .exe files. Click "Next" until the installation finishes.

https://www.jubler.org/

https://www.blackmagicdesign.com/products/davinciresolve

~~Be careful, it is Jubler, not Jabler. If you spell it wrong, you'll see something special.~~

## Generate captions

### Check gpu

First, we need to check that our GPU (graphics card) is working. Open Python in the terminal.
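The interactive checks that follow can also be bundled into one small script. This is just a sketch, assuming PyTorch was installed as described above; the `gpu_report` helper is my own name, and the script prints a short report instead of crashing when CUDA (or PyTorch itself) is missing.

```python
# GPU sanity check -- a sketch assuming PyTorch is installed as above.
# It reports CUDA availability instead of raising when CUDA is missing.
def gpu_report():
    """Return a dict describing the CUDA devices PyTorch can see."""
    try:
        import torch
    except ImportError:
        return {"available": False, "reason": "PyTorch is not installed"}

    info = {"available": torch.cuda.is_available()}
    if info["available"]:
        info["device_count"] = torch.cuda.device_count()      # how many GPUs
        info["current_device"] = torch.cuda.current_device()  # 0 = the first GPU
        info["name"] = torch.cuda.get_device_name(0)
    return info

if __name__ == "__main__":
    print(gpu_report())
```

If it prints `{'available': True, ...}`, Whisper will be able to run on the GPU; otherwise it falls back to the CPU.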
```
python
```

Import torch and check our GPU. If the GPU is available, the result will be True.

```
import torch
print(torch.cuda.is_available())
```

Check how many GPUs I have. The return value will be one for me because I have just one graphics card.

```
print(torch.cuda.device_count())
```

Check which GPU I am using now. In computer science, '0' means the first one, '1' means the second one, and so on.

```
torch.cuda.current_device()
```

Check the device name. My graphics card is an NVIDIA GTX 1660 SUPER. If everything's fine, I can see my card on the screen.

```
torch.cuda.get_device_name(0)
```

Take a look at the result: my GPU is working.

![GPU check succeeded](https://hackmd.io/_uploads/ryz41K7ST.png)

### Generate captions

Transform the video file (.mp4) into an audio file (.wav) using FFmpeg. In this command, the first path is the input video and the second is the output audio.

```
ffmpeg -i myfile.mp4 myfile.wav
```

I created a .wav file named "ReleaseM3.wav" successfully, which contains only the audio of the Apple release video.

![image](https://hackmd.io/_uploads/Hk8pgt7B6.png)

Use this command to generate captions. The quoted path is the audio file, and "medium" means I use the medium model. If you use a bigger model, the result will be more accurate, but it will take longer. You can choose your model size based on your hardware.

```
whisper "D:\大一上英文報告\AppleRelease.wav" --model medium
```

As shown in the screenshots, it is generating captions now, and the GPU is hard at work. Excellent!

![GPU test succeeded](https://hackmd.io/_uploads/H1s0MtXH6.png)

![Captions generated successfully](https://hackmd.io/_uploads/HJLxXtQrp.png)

## Make an srt file

But there's a disadvantage to having the captions in the terminal: they're hard to edit. So we need to make a caption file (.srt). I wrote a small program to do the job. Replace the path ("D:\大一上英文報告\AppleRelease.wav") with the file you want to generate an srt file for.

```python
import whisper
from whisper.utils import get_writer

# Load the base model (use "medium" or larger for better accuracy).
model = whisper.load_model("base")

# A raw string keeps the backslashes in the Windows path from being
# treated as escape sequences.
audio = r"D:\大一上英文報告\AppleRelease.wav"
result = model.transcribe(audio)

output_directory = "./"

# Save the transcription as an SRT file next to the script.
srt_writer = get_writer("srt", output_directory)
srt_writer(result, audio)
```

Run the code. Now we've got the srt file, and we can edit it in Jubler.

![image](https://hackmd.io/_uploads/r16nEYXr6.png)

## Put them together

Create a new project.

![image](https://hackmd.io/_uploads/H1kMrtmBT.png)

Put the caption file and the video file in the media pool.

![image](https://hackmd.io/_uploads/Hk4wrY7B6.png)

Move the captions and the video into the timeline.

![image](https://hackmd.io/_uploads/S1_cSYXrT.png)

Look, they sync perfectly.

![image](https://hackmd.io/_uploads/Bk8xIFmHa.png)

Finally, go to the render page to render the video out.

![image](https://hackmd.io/_uploads/ryso8F7BT.png)

Now the captions are in the video!

![image](https://hackmd.io/_uploads/BJenDYXSp.png)

## Part of me

**SB:** (examining charts) March should be when the oceans globally are warmest, not August. The oceans are breaking records. It makes me nervous how much warmer the ocean may get between now and next March.

**KL:** (concerned) The water feels like a bath when you jump in. Coral reefs are dying in the Gulf of Mexico.

**SB:** (worried) Why are the oceans so hot now? Climate change is making them warmer, absorbing most of the heat from greenhouse gas emissions.

**KL:** The more fossil fuels we burn, the longer it'll take to stabilize the oceans.

**SB:** Marine heatwaves are occurring unexpectedly. They're doubling in frequency and becoming more intense.

**KL:** (recalling events) In June, UK waters were 5C higher than average. Florida's sea surface hit 38.44C – like a hot tub!

**SB:** We knew the sea surface would warm, but we didn't expect this. Something's happening beneath the waves.

**SB:** (pointing at the map) Look at the severity in the Gulf of Mexico, North Atlantic, and the Mediterranean.
El Niño is visible in the Pacific.

**KL:** We're witnessing a crisis unfolding in our oceans. We need urgent action.