# Locks and contexts in Linux
## Locks
* arch_spin_lock(): Just a spinlock
* raw_spin_lock(): Spinlock with lockdep annotations (stays a spinning lock even on RT)
* spin_lock(): Spinlock on non-RT, and rt-mutex on RT
* rwlock: Reader-writer spinlock
* rt-mutex: Sleepable locks with Priority-Inheritance
* mutex_lock(): Sleepable locks
* rw_semaphore: Reader-writer sleepable lock
## Contexts
* Hardirq context
    * A (threaded) hardirq handler
    * non-preemptible on non-RT, preemptible on RT
* Non-threaded irq context
    * A non-threaded hardirq handler
    * non-preemptible
* Softirq context
    * A softirq handler
    * non-preemptible on non-RT, preemptible on RT
* `rcu_read_lock()`
    * preemptible, but cannot voluntarily sleep
* `rcu_read_lock_bh()`/`local_bh_disable()`
    * preemptible, but cannot voluntarily sleep
* `rcu_read_lock_sched()`/`preempt_disable()`
    * non-preemptible
* `raw_spin_lock()`
    * same as `preempt_disable()`
* `spin_lock()`
    * non-preemptible on non-RT, preemptible on RT
* `mutex_lock()`
    * preemptible, and can call functions that sleep
(in lib/locking-selftest.c)
```C
/*
* wait contexts (considering PREEMPT_RT)
*
* o: inner is allowed in outer
* x: inner is disallowed in outer
*
* \ inner | RCU | RAW_SPIN | SPIN | MUTEX
* outer \ | | | |
* ---------------+-------+----------+------+-------
* HARDIRQ | o | o | o | x
* ---------------+-------+----------+------+-------
* NOTTHREADED_IRQ| o | o | x | x
* ---------------+-------+----------+------+-------
* SOFTIRQ | o | o | o | x
* ---------------+-------+----------+------+-------
* RCU | o | o | o | x
* ---------------+-------+----------+------+-------
* RCU_BH | o | o | o | x
* ---------------+-------+----------+------+-------
* RCU_SCHED | o | o | x | x
* ---------------+-------+----------+------+-------
* RAW_SPIN | o | o | x | x
* ---------------+-------+----------+------+-------
* SPIN | o | o | o | x
* ---------------+-------+----------+------+-------
* MUTEX | o | o | o | o
* ---------------+-------+----------+------+-------
*/
```
## Wait context checking
(in include/linux/lockdep_types.h)
```C
enum lockdep_wait_type {
LD_WAIT_INV = 0, /* not checked, catch all */
LD_WAIT_FREE, /* wait free, rcu etc.. */
LD_WAIT_SPIN, /* spin loops, raw_spinlock_t etc.. */
#ifdef CONFIG_PROVE_RAW_LOCK_NESTING
LD_WAIT_CONFIG, /* preemptible in PREEMPT_RT, spinlock_t etc.. */
#else
LD_WAIT_CONFIG = LD_WAIT_SPIN,
#endif
LD_WAIT_SLEEP, /* sleeping locks, mutex_t etc.. */
LD_WAIT_MAX, /* must be last */
};
```
The checking algorithm and the rules live in `check_wait_context()`, which is called whenever a new lock is acquired.
* Every lock has an `inner_wait_type` and an `outer_wait_type`
    * e.g. `spinlock` is `outer=LD_WAIT_INV` and `inner=LD_WAIT_CONFIG`
    * e.g. `rcu_read_lock()` is `outer=LD_WAIT_FREE` and `inner=LD_WAIT_CONFIG`
    * `outer == LD_WAIT_INV` means the check falls back to using `inner` as the outer type
* Detect the current context's wait type
    * `curr_inner` = `LD_WAIT_SPIN` if in a non-threaded irq
    * `curr_inner` = `LD_WAIT_CONFIG` if in a threaded irq or a softirq
    * `curr_inner` = `LD_WAIT_MAX` otherwise
* Find the most restrictive wait type
    * `curr_inner` = min(`curr_inner`, <all `inner` types of held locks>)
    * i.e. every acquired lock may decrease `curr_inner`, moving us into a more restrictive context
* Warn if
    * `next_outer` > `curr_inner` (`next_outer` is the `outer` of the lock being acquired)
Examples:
```clike=
// curr_inner = MAX;
rcu_read_lock(); // outer=FREE, inner=CONFIG
// curr_inner = min(rcu_read_lock.inner, curr_inner) = CONFIG;
// next_outer(SLEEP) > curr_inner(CONFIG), BUG!!!
mutex_lock(); // inner==outer==SLEEP
```
```clike=
// curr_inner = MAX;
rcu_read_lock(); // outer=FREE, inner=CONFIG
// curr_inner = min(rcu_read_lock.inner, curr_inner) = CONFIG;
// next_outer(CONFIG) == curr_inner(CONFIG), OK
spin_lock(); // inner==outer==CONFIG
// curr_inner = min(spin_lock.inner, curr_inner) = CONFIG;
// next_outer(SLEEP) > curr_inner(CONFIG), BUG!!!
mutex_lock(); // inner==outer==SLEEP
```