# Discarding silent audio packets
While looking through Jitsi's repository we found out that they drop silent audio packets to
save the processing power and bandwidth that would otherwise be wasted on decrypting those packets and forwarding them to all other participants.
Our solution was to introduce a similar mechanism that inspects the RTP packet header (which is always left unencrypted), looks up the audio level and drops all packets that contain nothing but silence.
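The audio level itself is presumably carried in the client-to-mixer audio level header extension (RFC 6464): a single byte with a voice-activity flag and a 7-bit level in dBov, where 127 means silence. Below is a minimal sketch of reading that byte with Elixir binary pattern matching; the module, function names and the silence threshold are ours for illustration, not the actual membrane_rtp_plugin API.

```elixir
defmodule Example.AudioLevel do
  @moduledoc false

  # The extension payload is a single byte: a voice-activity flag (1 bit)
  # and the audio level (7 bits) expressed in -dBov, where 0 is the loudest
  # possible signal and 127 means silence.
  def parse(<<vad::1, level::7>>), do: {:ok, %{voice_activity?: vad == 1, level: level}}
  def parse(_other), do: :error

  # Treat a packet as silent when its level sits at the 127 dBov floor
  # (or above a configurable threshold) - the threshold value is our choice.
  def silent?(%{level: level}, threshold \\ 127), do: level >= threshold
end
```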
## Solution
By reordering elements inside the `SessionBin` element from [membrane_rtp_plugin](https://github.com/membraneframework/membrane_rtp_plugin) we managed to introduce packet filters that can be attached to any incoming SRTP stream.
Thanks to packet filters we were able to create a `SilenceDiscarder` element that can simply be attached to a presumed audio stream and filter out silent packets by inspecting the RTP header.
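As a rough illustration of what such an element might look like (not the actual `SilenceDiscarder` from membrane_rtp_plugin, and assuming the pre-1.0 `Membrane.Filter` callback conventions with manual demand handling), a filter that drops buffers reported as silent could be sketched as:

```elixir
defmodule Example.SilenceDiscarder do
  use Membrane.Filter

  def_input_pad :input, caps: :any, demand_unit: :buffers
  def_output_pad :output, caps: :any

  def_options silence_threshold: [
                type: :integer,
                default: 127,
                description: "Audio level (in dBov) at or above which a packet is dropped"
              ]

  @impl true
  def handle_init(opts), do: {:ok, %{threshold: opts.silence_threshold}}

  @impl true
  def handle_demand(:output, size, :buffers, _ctx, state),
    do: {{:ok, demand: {:input, size}}, state}

  @impl true
  def handle_process(:input, buffer, _ctx, state) do
    # `audio_level/1` stands for reading the level from the unencrypted
    # RTP header extension, as in the parsing sketch above.
    if audio_level(buffer) >= state.threshold do
      # Silent packet - drop it and ask upstream for more data instead.
      {{:ok, redemand: :output}, state}
    else
      {{:ok, buffer: {:output, buffer}}, state}
    end
  end

  # Placeholder - a real element would extract the level from the buffer's
  # RTP metadata.
  defp audio_level(_buffer), do: 127
end
```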
Because packets get dropped by the SFU, we had to provide an update mechanism letting the jitter buffer know that some packets were discarded on purpose, so that it does not report packet loss. The same goes for the SRTP decryptor, which internally keeps a **ROC** (rollover counter) that we have no access to.
For the jitter buffer we simply send an event every time we stop receiving silent packets; the event contains the number of packets that have been discarded.
To fix the decryptor problem we forward every **nth** packet (in our case every 1000th) even if it is silent, so the decryptor can update its **ROC**. The discarded-packets event reaches the jitter buffer before such a packet does.
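Putting both requirements together, the per-packet decision reduces to a little bookkeeping: count consecutive drops, let every nth silent packet through so the decryptor can refresh its **ROC**, and report the discarded count (via the event mentioned above) right before the next forwarded packet. A hypothetical sketch of that logic, with names and structure of our own choosing:

```elixir
defmodule Example.DropDecision do
  # Forward every nth silent packet anyway, so the SRTP decryptor can keep
  # its rollover counter (ROC) up to date; 1000 matches the value above.
  @max_consecutive_drops 1000

  @doc """
  Decides what to do with an incoming audio packet:

    * `{:drop, state}` - the packet is silent, discard it and bump the counter
    * `{:forward, discarded, state}` - let the packet through; `discarded` is
      the number of packets dropped since the last forwarded one and should be
      delivered to the jitter buffer as an event before this packet
  """
  def handle_packet(silent?, %{dropped: dropped} = state) do
    if silent? and dropped < @max_consecutive_drops do
      {:drop, %{state | dropped: dropped + 1}}
    else
      {:forward, dropped, %{state | dropped: 0}}
    end
  end
end
```

Starting from `%{dropped: 0}`, a long run of silence produces mostly `:drop` results, with a `:forward` (carrying a non-zero discarded count) roughly once per thousand packets.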
## Results
We performed two tests in the following manner:
### Participants joining and then muting themselves
- a new participant joins every 60 seconds (up to 4)
- every 60 seconds a single participant mutes themselves, until the end of the test
#### Before change
#### Performance chart after change 
#### Interpretation
The chart shows quite a lot of CPU spikes whose source is unknown for now.
Besides that, every participant getting muted seems to lower CPU usage by around 15-20%.
The CPU reduction is probably due to skipping packet decryption and, more importantly,
skipping encryption for every other participant. There is also less message passing and less Membrane core processing for each dropped packet.
### All participants present in a room and then muting themselves
- all 6 participants are present in the room for 60 seconds
- every 10 seconds another participant mutes themselves, until the end of the test
#### Before change
The test was performed for only a minute, as muting a microphone does not change anything before this optimization.

#### Performance chart after change
#### Interpretation
This time the chart also has some spikes, though not as big as in the previous test; this might be caused by the shorter period over which the test was performed. Again, every participant muting themselves is clearly visible on the chart. The CPU drop per participant is around 10-20%, averaging 15%.
## Conclusion
Discarding silence turned out to be a decent optimization. A huge advantage is that the more participants there are in a room, the more CPU we save (given that in bigger rooms most people are muted and not actively speaking, which is a real-life scenario).
The current solution does not require any notification when a participant mutes their microphone; it all happens automatically. Once their audio level goes up, all packets are again forwarded to all the other participants.