# Jitsi Meet Front Changes
In this document I'll be going through the changes I made to Jitsi Meet Front to implement the transcription language change feature while also explaining the reasons and justifications for my choices.
Before that, I would like to first talk about how to set up the dev environment to test your changes, as I struggled with it when I started.
## Dev Environment Setup
* First, you have to clone the [Jitsi Meet Repo](https://github.com/jitsi/jitsi-meet/).
* Next, you install the node dependencies through
```
npm install
```
* Now we arrive at the important part: because this is only the frontend, you need a backend to connect to. It can either be a Jitsi instance that you run on a cluster online or a Docker Compose instance run locally (click [here](https://jitsi.github.io/handbook/docs/devops-guide/) to see how you can set it up). Either way, you should have a link to the Jitsi instance; in the case of a local one it would be https://localhost:8443/. You should then pass it as an environment variable like this:
```
export WEBPACK_DEV_SERVER_PROXY_TARGET=https://localhost:8443/  # replace with your Jitsi instance link
```
* Finally, you run the dev server through
```
make dev
```
Now you have a React development server running, and you can see your changes live.
## Current status
Before I go deeper into what changes I made, it's better to start with how Jitsi manages transcription and subtitles.
The way Jitsi communicates with Jigasi is through local participant variables. This means that Jigasi communicates with each participant independently and retrieves the transcription language and the translation language in which to display the subtitles for the participant in question.
Moreover, the way Jitsi determines the transcription language is static: it reads the interface language when the meeting is launched and stores it as a local variable. This means that the transcription language is read only once by Jigasi.
This setup works well with the Google API because it can detect the language of the audio and thus transcribe and translate it regardless. But for VOSK, which needs to know the language of the audio before doing the transcription, it doesn't.
## Implementation
Now that we understand how transcription works in Jitsi, we can start discussing our approach to implement our feature.
First, we added a parameter to the transcription field in the config file: `autoRecognition`. By default it is `true`, in which case the transcription service behaves as usual. But if we want to use VOSK, or any transcription service without language auto-recognition, we have to set it to `false`.
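As a rough illustration, the relevant fragment of the config could look like this (a sketch only: `autoRecognition` is the flag described above, while the surrounding shape is an assumption, not the exact Jitsi Meet config schema):

```typescript
// Sketch of the transcription section of the config. Only `autoRecognition`
// is the flag discussed here; the surrounding structure is illustrative.
const config = {
    transcription: {
        // Set to false for engines like VOSK that cannot detect
        // the audio language on their own.
        autoRecognition: false
    }
};

export { config };
```

With `autoRecognition: false`, the moderator is expected to pick the transcription language explicitly, as described below.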

As you can see in the diagram above, what we want to achieve is for the moderator to choose the transcription language when activating the transcription service. Then, the other participants can choose the translation language for their own subtitles.
But we are still left with the two problems above: Jigasi communicates with each participant independently, and the transcription language is read only once.
Thankfully, we managed to solve both:
1. We propagate the moderator's choice: when the moderator chooses a language, we save it in their local parameters and then launch the transcription service. Once the transcription starts, the other participants retrieve the moderator's local parameters and update their own. The participants are then free to choose their own translation language.
2. We reload the transcription service: once the moderator chooses a new transcription language, we update the transcription language and then restart the transcription service; that way, we can be sure that Jigasi rereads the new transcription language.
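The propagation in point 1 can be sketched roughly as follows. This is a simplified model, not the actual Jitsi Meet code: the type, function names, and the `transcription_language` key are all illustrative.

```typescript
// Illustrative model of the propagation logic; names are hypothetical.
type Participant = { properties: Record<string, string> };

// Moderator side: save the chosen language in local properties
// before starting the transcriber, so others can read it.
function startTranscription(moderator: Participant, language: string): void {
    moderator.properties['transcription_language'] = language;
    // ...then request that Jigasi start transcribing...
}

// Participant side: when transcription starts, copy the moderator's
// choice; the translation language stays a per-participant setting.
function onTranscriberOn(moderator: Participant, me: Participant): void {
    me.properties['transcription_language'] =
        moderator.properties['transcription_language'];
}
```

The key point is the ordering: the moderator's property is written before the transcriber starts, so by the time the other participants react to the transcription-started event, the value is already there to copy.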
## Imperfections
While the feature allows changing the transcription language, because of the two problems and how we solved them, we are stuck with some imperfections that need to be resolved:
1. Because we propagate the moderator's choice only when the moderator chooses a new transcription language, new participants who entered the meeting after that event won't have the correct transcription language. This could be solved by changing the participants' behavior after they enter the conference, but this would require changing some files in the `react/base` folder.
2. Because we have to relaunch the transcription service each time the moderator chooses a new transcription language, we have a 3 s delay before the transcription service restarts, because that's the wait time for the transcription service to shut down. This could be solved by changing how the transcription language is read and sent to Jigasi, but I haven't found a way to do so.
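The restart sequence in point 2 amounts to something like the following. Again a hedged sketch: the function names and signatures are made up to show the ordering, not taken from the actual code.

```typescript
// Hypothetical sketch of the restart sequence. The await on stop
// corresponds to the ~3 s shutdown wait described above.
async function changeTranscriptionLanguage(
    setLanguage: (lang: string) => void,
    stopTranscriber: () => Promise<void>, // resolves once Jigasi has shut down
    startTranscriber: () => void,
    language: string
): Promise<void> {
    setLanguage(language);   // update the local parameter first
    await stopTranscriber(); // ~3 s: wait for the transcriber to shut down
    startTranscriber();      // Jigasi rereads the language on restart
}
```

The delay is inherent to this approach: since Jigasi only reads the language on startup, a full stop-and-start cycle is the only way to make it pick up the new value.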
## Pull Requests
| Pull Request | Status |
|-------|---|
| [feat(subtitles): allow changing the transcription language for VOSK](https://github.com/jitsi/jitsi-meet/pull/12696) | open |