**Describe the bug**
Hello, we recently started getting core dumps with the SRT library on a CentOS 7 system. In `/var/log/messages`, the logs look like this:

```
Oct 5 19:43:53 BRIDGE-LIVE transcoder-base: *** Error in `./transcoder': munmap_chunk(): invalid pointer: 0x00007f16a5157010 ***
Oct 5 19:43:53 BRIDGE-LIVE transcoder-base: ======= Backtrace: =========
Oct 5 19:43:53 BRIDGE-LIVE transcoder-base: /lib64/libc.so.6(+0x7f3e4)[0x7f169ce653e4]
Oct 5 19:43:53 BRIDGE-LIVE transcoder-base: /opt/transcoder/srt/9e52606/lib64/libsrt.so.1(_ZN12CSndLossListD1Ev+0x11)[0x7f169d93a8e1]
Oct 5 19:43:53 BRIDGE-LIVE transcoder-base: /opt/transcoder/srt/9e52606/lib64/libsrt.so.1(_ZN4CUDTD1Ev+0xb9)[0x7f169d91c1c9]
Oct 5 19:43:53 BRIDGE-LIVE transcoder-base: /opt/transcoder/srt/9e52606/lib64/libsrt.so.1(_ZN10CUDTSocketD1Ev+0x2c)[0x7f169d90517c]
Oct 5 19:43:53 BRIDGE-LIVE transcoder-base: /opt/transcoder/srt/9e52606/lib64/libsrt.so.1(_ZN10CUDTUnited12removeSocketEi+0x60e)[0x7f169d909d0e]
Oct 5 19:43:54 BRIDGE-LIVE transcoder-base: /opt/transcoder/srt/9e52606/lib64/libsrt.so.1(_ZN10CUDTUnited18checkBrokenSocketsEv+0x3f2)[0x7f169d90a3c2]
Oct 5 19:43:54 BRIDGE-LIVE transcoder-base: /opt/transcoder/srt/9e52606/lib64/libsrt.so.1(_ZN10CUDTUnited14garbageCollectEPv+0x58)[0x7f169d90a4d8]
Oct 5 19:43:54 BRIDGE-LIVE transcoder-base: /lib64/libpthread.so.0(+0x7ea5)[0x7f169d1bbea5]
Oct 5 19:43:54 BRIDGE-LIVE transcoder-base: /lib64/libc.so.6(clone+0x6d)[0x7f169cee48dd]
```

From the generated core dump, we got this stack trace:

```
#0  0x00007fcbdead3387 in raise () at /lib64/libc.so.6
#1  0x00007fcbdead4a78 in abort () at /lib64/libc.so.6
#2  0x00007fcbdeb15ed7 in __libc_message () at /lib64/libc.so.6
#3  0x00007fcbdeb1c3e4 in malloc_printerr () at /lib64/libc.so.6
#4  0x00007fcbdf5fc331 in CSndLossList::~CSndLossList() (this=0x7fcb18022090, __in_chrg=<optimized out>) at /tmp/transcoder_installation_stuff/srt-50b7af06f3a0a456c172b4cb3aceafa8a5cc0036/srtcore/list.cpp:94
#5  0x00007fcbdf5db00c in CUDT::~CUDT() (this=0x7fcb180009d0, __in_chrg=<optimized out>) at /tmp/transcoder_installation_stuff/srt-50b7af06f3a0a456c172b4cb3aceafa8a5cc0036/srtcore/core.cpp:324
#6  0x00007fcbdf5c0e1c in CUDTSocket::~CUDTSocket() (this=0x7fcb180008c0, __in_chrg=<optimized out>) at /tmp/transcoder_installation_stuff/srt-50b7af06f3a0a456c172b4cb3aceafa8a5cc0036/srtcore/api.cpp:100
#7  0x00007fcbdf5c4d85 in CUDTUnited::removeSocket(int) (this=this@entry=0x7fcbdf81fa40 <CUDT::s_UDTUnited>, u=66506507) at /tmp/transcoder_installation_stuff/srt-50b7af06f3a0a456c172b4cb3aceafa8a5cc0036/srtcore/api.cpp:2565
#8  0x00007fcbdf5c581a in CUDTUnited::checkBrokenSockets() (this=this@entry=0x7fcbdf81fa40 <CUDT::s_UDTUnited>) at /tmp/transcoder_installation_stuff/srt-50b7af06f3a0a456c172b4cb3aceafa8a5cc0036/srtcore/api.cpp:2498
#9  0x00007fcbdf5c5938 in CUDTUnited::garbageCollect(void*) (p=0x7fcbdf81fa40 <CUDT::s_UDTUnited>) at /tmp/transcoder_installation_stuff/srt-50b7af06f3a0a456c172b4cb3aceafa8a5cc0036/srtcore/api.cpp:2830
#10 0x00007fcbdee72ea5 in start_thread () at /lib64/libpthread.so.0
#11 0x00007fcbdeb9b8dd in clone () at /lib64/libc.so.6
```

We also ran ThreadSanitizer ([tsan](https://github.com/google/sanitizers/wiki/ThreadSanitizerCppManual)) on our build, and it reported this data race:

```
WARNING: ThreadSanitizer: data race (pid=177337)
  Write of size 1 at 0x7b40000080d8 by thread T3 (mutexes: write M23, write M20):
    #0 pthread_mutex_destroy <null> (libtsan.so.0+0x2e66a)
    #1 srt::sync::Mutex::~Mutex() /root/srt/srtcore/sync_posix.cpp:196 (libsrt.so.1+0x1769af)
    #2 CUDTSocket::~CUDTSocket() /root/srt/srtcore/api.cpp:97 (libsrt.so.1+0xa4f44)
    #3 CUDTUnited::removeSocket(int) /root/srt/srtcore/api.cpp:2565 (libsrt.so.1+0xad2aa)
    #4 CUDTUnited::checkBrokenSockets() /root/srt/srtcore/api.cpp:2498 (libsrt.so.1+0xaccac)
    #5 CUDTUnited::garbageCollect(void*) /root/srt/srtcore/api.cpp:2830 (libsrt.so.1+0xae8b1)
    #6 <null> <null> (libtsan.so.0+0x2b426)

  Previous atomic read of size 1 at 0x7b40000080d8 by thread T1 (mutexes: write M483991051313348864, write M484835424703906008):
    #0 pthread_mutex_unlock <null> (libtsan.so.0+0x4275a)
    #1 srt::sync::Mutex::unlock() /root/srt/srtcore/sync_posix.cpp:206 (libsrt.so.1+0x176a12)
    #2 srt::sync::ScopedLock::~ScopedLock() /root/srt/srtcore/sync_posix.cpp:222 (libsrt.so.1+0x176afe)
    #3 CUDTUnited::close(CUDTSocket*) /root/srt/srtcore/api.cpp:1821 (libsrt.so.1+0xaa45e)
    #4 CUDTUnited::close(int) /root/srt/srtcore/api.cpp:1814 (libsrt.so.1+0xaa013)
    #5 CUDT::close(int) /root/srt/srtcore/api.cpp:3318 (libsrt.so.1+0xb08d6)
    #6 srt_close /root/srt/srtcore/srt_c_api.cpp:151 (libsrt.so.1+0x171dca)
    #7 cmpto::network::SrtConnection::closeSocket(int&) /root/transcoder/cmpto_network/Server/SrtConnection.cpp:93 (transcoder+0x90153c)
    #8 cmpto::network::SrtConnection::release() /root/transcoder/cmpto_network/Server/SrtConnection.cpp:101 (transcoder+0x90153c)
```

We looked into the code a bit and we think there really might be a possibility of a race when a socket is being removed for protocol reasons (e.g. a broken connection from listener to caller) while, at the same time, it is being removed by an API `srt_close` call. Is it possible that while this lock is held: https://github.com/Haivision/srt/blob/724b841082beb28c6e39d05a579fd2470de338bb/srtcore/api.cpp#L1821, the socket `s` gets destroyed, so that we e.g. destroy the mutex before unlocking it, which is undefined behavior (or something similar)? I know there is a check in [locateSocket](https://github.com/Haivision/srt/blob/724b841082beb28c6e39d05a579fd2470de338bb/srtcore/api.cpp#L2337) for `m_Status == SRTS_CLOSED`, and there is a global lock there too. But I think a socket that is still active may be selected for closing in `CUDTUnited::close(const SRTSOCKET u)` and then destroyed immediately afterwards by the garbage-collector thread in `checkBrokenSockets`. Isn't such a scenario possible? We haven't verified every detail, but a quick look suggests this might be the issue.
Could you please say whether this makes sense to you? Or, if there is an obvious omission here, could you point it out? We have been debugging this problem for a while and any information helps. Thank you very much!

**To Reproduce**
Steps to reproduce the behavior: hard to reproduce in practice. So far it happens only on our customer's machines. It always occurred after some hours (sometimes as many as 23) on a local network with callers attached and detached at certain points (which seems to be the trigger for the crash). In theory, the necessary steps to reproduce it should be:
- set up a listener socket
- let it be destroyed naturally (e.g. via a broken connection)
- at the same time, call `srt_close` -> race

**Expected behavior**
No crash: either some locking mechanism prevents this, or there is information somewhere explaining why what we are doing as API clients is wrong.

**Desktop (please provide the following information):**
- OS: Linux CentOS 7
- SRT Version / commit ID: 1.4.2