---
title: PhySec
---
[toc]
# I Wireless signals
## Learning Objectives
1. Students can explain how electromagnetic waves are transmitted and received by an antenna. [here](#Antenna-elementaries)
3. Students can mathematically describe and explain how bits are mapped to symbols that modulate the amplitude, phase and frequency of a Cosine wave. [here](#Wireless-signals-as-combination-of-sin-and-cos)
4. Students can mathematically describe and explain how OFDM-based Wi-Fi frames are modulated. [here](#Wifi-OFDM)
5. Students can identify the following modulations in a signal and explain their differences and means to modulate and demodulate them: AM, FM, PSK, ASK, BPSK, OFDM, DSSS, Frequency Hopping, Pulse Width Modulation, SSB, DSB **Spread across the whole chapter**
6. Students can read and interpret waterfall diagrams including presented signals. **Not sure how that is meant**... [Closest match](#Frequency-hopping)
7. Students can analyze signals in Audacity, Baudline, and MATLAB resp. Octave. **MATLAB exercise**
8. Students can name pass filters that fulfill filtering goals. [here](#Filters)
9. Students can apply the appropriate filters required for separating signals for analysis and demodulation. [here](#Filters) and [here](#RTL-SDR)
10. Students can name and explain components of software-defined radios and explain the signal flow through the components for transmission and reception. [here](#RTL-SDR)
11. Students can explain the difference between baseband signals, complex-valued baseband signals, carrier signals, and radio frequency signals. [here](#Up--and-Downconversion) and following
12. Students can mathematically describe and explain how signals at specific frequencies can be identified in a test signal by correlating with a reference signal. [here](#Generating-signal-with-multiple-frequencies)
13. Students can explain the effects of signals that are orthogonal to each other. [here](#Multiple-frequencies-Conclusion)
14. Students can mathematically describe and explain how a discrete resp. fast Fourier transform works and give examples where and how such a transform can be used in signal analysis. [here](#Fourier-Transform)
15. Students can explain why complex signals are required to represent both phase and amplitude of signal components. [here](#From-Real-to-Complex-signals-using-the-Hilbert-Transform)
16. Students can explain the effect on a signal's spectrum when applying upconversion and downconversion. [here](#Up--and-Downconversion) and following
17. Students can mathematically describe and explain signal mixers. [I think this refers to the quadrature modulator](#Quadrature-Modulator)
18. Students can explain the effect of the Hilbert transform and how it works. [here](#From-Real-to-Complex-signals-using-the-Hilbert-Transform)
19. Students can explain the components of FM radio receivers and transmitters with RDS support. **Lab**
20. Students can explain and identify the baseband signal of a stereo audio signal with RDS channel before FM modulation. **Lab**
21. Students can explain how to use a software-defined radio to send FM signals to an off-the-shelf radio receiver. **Lab**
22. Students can operate USRP devices to transmit wireless signals using MATLAB. **MATLAB exercise**
23. Students can modulate complex baseband signals in MATLAB. **MATLAB exercise**
24. Students can find information in technical documents. **That is not taught in the lecture**, **Lab**
## Antenna elementaries

* Non-alternating (DC) signals cannot be transmitted by antennas
* Feeding alternating signals to a dipole antenna creates an alternating electromagnetic (EM) field around the antenna
* The field can be observed by a second antenna that outputs an alternating signal that can be processed
## Wireless signals as combination of sin() and cos()
$$ \underset{Amplitude}{A} \cdot \sin(2\pi \underset{Frequency}{f} t + \underset{Phase}{\phi})
$$
* Amplitude Shift Keying (ASK) modifies the signal's amplitude to represent symbols

* Phase Shift Keying (PSK) modifies the signal's phase to represent symbols

* Frequency Shift Keying (FSK) modifies the signal's frequency to represent symbols

### Increasing the bitrate
Each symbol represents more than just one bit.
For example, with ASK:
Instead of:
$A(0) = 0.5$
$A(1) = 1$
Use:
$A(00) = 0.25$
$A(01) = 0.5$
$A(10) = 0.75$
$A(11) = 1$
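A minimal sketch of this 4-ASK mapping (the lecture exercises use MATLAB/Octave; NumPy is used here for illustration, and carrier frequency, sample rate, and samples per symbol are arbitrary example values):
```python
import numpy as np

# Hypothetical 4-ASK mapping: two bits per symbol, amplitudes as in the table above
amplitudes = {'00': 0.25, '01': 0.5, '10': 0.75, '11': 1.0}

def ask_modulate(bits, f=1000.0, fs=48000.0, samples_per_symbol=480):
    """Map bit pairs to amplitudes and modulate them onto a cosine carrier."""
    symbols = [bits[i:i + 2] for i in range(0, len(bits), 2)]
    t = np.arange(samples_per_symbol) / fs
    return np.concatenate([amplitudes[s] * np.cos(2 * np.pi * f * t) for s in symbols])

signal = ask_modulate('00011011')   # four symbols, eight bits
```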
### Representing symbols as complex numbers
$$A\ cos(2\pi ft + \phi) \overset{Addition\ theorem}{=} A\ cos(\phi) \cdot cos(2\pi ft) - A\ sin(\phi) \cdot sin(2\pi ft)
$$
Symbols defined by amplitude and phase can be represented as complex numbers, independent of the frequency:
$$Ae^{j\phi}= A(cos(\phi)+ j\ sin(\phi))
$$

#### Quadrature Amplitude Modulation

#### Wifi (OFDM)
11 channels in the 2.4 GHz band, center frequencies 2.412–2.462 GHz.
20 MHz wide channels with 48 subchannels usable for data transmission.
$$ A \cdot cos(2\pi\cdot \underset{Carrier\ frequency}{f_{c}} \cdot t + 2\pi\cdot \underset{subchannel\ frequency}{f_{sc}} \cdot t +\phi) = \\
\underbrace{A \cdot cos(\phi + 2\pi f_{sc}t)}_{\text{real part}} \cdot cos(2\pi f_c t) - \underbrace{A \cdot sin(\phi + 2\pi f_{sc}t)}_{\text{imaginary part}} \cdot sin(2\pi f_c t)
$$
Combining both parts into a complex number:
$$ A \cdot cos(\phi + 2\pi f_{sc}t) + j\cdot A \cdot sin(\phi + 2\pi f_{sc}t) = \\
Ae^{j(\phi+2\pi f_{sc}t)} = Ae^{j\phi} \cdot e^{j2\pi f_{sc}t}
$$
We can transmit individual data symbols on each of the 48 WiFi subchannels.
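As a sketch of this (NumPy, simplified 802.11 numerology: 64-point IFFT, 48 data subcarriers, pilots and guard intervals ignored), the per-subcarrier symbols $Ae^{j\phi}$ are combined into one complex baseband OFDM symbol with an inverse FFT:
```python
import numpy as np

n_subcarriers = 48            # data subcarriers (pilots/guards ignored here)
fft_size = 64                 # simplified: 64-point IFFT as in 20 MHz Wi-Fi

# One QPSK symbol A*exp(j*phi) per subcarrier
rng = np.random.default_rng(0)
bits = rng.integers(0, 2, size=(n_subcarriers, 2))
qpsk = ((2 * bits[:, 0] - 1) + 1j * (2 * bits[:, 1] - 1)) / np.sqrt(2)

# Place the data symbols on subcarriers, leave the rest (DC, guards) empty
spectrum = np.zeros(fft_size, dtype=complex)
spectrum[1:1 + n_subcarriers] = qpsk

# The IFFT sums A*exp(j*phi)*exp(j*2*pi*f_sc*t) over all subcarriers
# -> one complex baseband OFDM symbol in the time domain
ofdm_symbol = np.fft.ifft(spectrum)

# A receiver recovers the data symbols with the forward FFT
recovered = np.fft.fft(ofdm_symbol)[1:1 + n_subcarriers]
assert np.allclose(recovered, qpsk)
```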
### Signal transmission

---

### Pulse Modulation

Combinations of several pulse widths and several pause widths are also possible.
## Signal Spreading
### DSSS

### Frequency hopping

## Building blocks of wireless signals

## Filters

## Software Defined Radios
And how to acquire a signal using them.
1. [RTL-SDR](#RTL-SDR)
2. USRP
3. WARP (->Lab)
* FPGA Board
### RTL-SDR
RTL: the chip designation; these USB dongles are built around the Realtek RTL2832U chip.
SDR: Software-Defined Radio

#### Filtering by antenna:
* Use antenna for specific wavelength/frequency
#### Amplifying
* Increase signal strength
* Gain and clipping
* too high gain leads to clipping

#### Bandpass filtering
* Use a bandpass filter to select the frequency band of interest more specifically
#### Downmixing

* Shift the signal of interest down to zero frequency (baseband) for further analysis
#### Quadrature Modulator

##### Exercise:
Show that
$I'(t) = 0.5I(t) + 0.5 [I(t)cos(4\pi ft) - Q(t)sin(4\pi ft)]$
$Q'(t) = 0.5Q(t) - 0.5 [I(t)sin(4\pi ft) + Q(t)cos(4\pi ft)]$
**I'(t)**
$I'(t) = s(t) \cdot cos(2\pi ft)$
$= I(t)cos(2\pi ft)cos(2\pi ft) - Q(t)sin(2\pi ft)cos(2\pi ft)$
$= 0.5I(t)(cos(4\pi ft)+1) - 0.5Q(t) sin(4\pi ft)$
$= 0.5I(t) + 0.5 [I(t)cos(4\pi ft) - Q(t)sin(4\pi ft)]$
**Q'(t)**
$Q'(t) = s(t)(-sin(2\pi ft))$
$= -I(t)cos(2\pi ft)sin(2\pi ft) + Q(t)sin(2\pi ft)sin(2\pi ft)$
$= -0.5 I(t)sin(4\pi ft)+0.5Q(t)(1-cos(4\pi ft))$
$= 0.5Q(t) - 0.5 [I(t)sin(4\pi ft) + Q(t)cos(4\pi ft)]$
**Important identities**
$cos(x)cos(x) = 0.5 (cos(2x)+1)$
$cos(x)sin(x) = 0.5 sin(2x)$
$sin(x)sin(x) = 0.5 (1-cos(2x))$
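The result can also be checked numerically: mixing $s(t) = I(t)cos(2\pi ft) - Q(t)sin(2\pi ft)$ with $cos(2\pi ft)$ and $-sin(2\pi ft)$ and averaging over full carrier periods (a crude low-pass, assuming constant $I$ and $Q$ during the window) removes the $4\pi ft$ terms and leaves $0.5\,I$ and $0.5\,Q$. A NumPy sketch with arbitrary example values:
```python
import numpy as np

fs, f = 1_000_000.0, 10_000.0          # sample rate and carrier frequency
t = np.arange(0, 0.001, 1 / fs)        # 1 ms = 10 full carrier periods
I, Q = 0.7, -0.3                       # one constant I/Q symbol for the check

s = I * np.cos(2 * np.pi * f * t) - Q * np.sin(2 * np.pi * f * t)

# Mix down with cos / -sin; averaging over full periods acts as the low-pass
I_prime = np.mean(s * np.cos(2 * np.pi * f * t))     # -> 0.5 * I
Q_prime = np.mean(s * -np.sin(2 * np.pi * f * t))    # -> 0.5 * Q

print(I_prime, Q_prime)   # approximately 0.35 and -0.15
```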
#### DC Offset Correction
Apparently there is an artifact (a DC offset) in the middle of the signal's spectrum. It is filtered out in this step, setting the center of the signal to zero.
> In Sophie's recap it was said that at this point the carrier frequency is in the middle. That would make sense because it was only added to be able to transport the signal [color=red][name=BM]
#### Before analog-to-digital conversion
Since the signal of interest was shifted down to baseband in the downmixing step, we can now use a low-pass filter to isolate it before sampling.
#### Sampling
The sampling frequency $f_s$ has to be at least twice the maximum frequency in the signal (Nyquist–Shannon sampling theorem)
$$sin(2\pi ft) \rightarrow sin(2\pi f\frac{n}{f_s})
$$
$$t= \frac{n}{f_s}= n\cdot T
$$

#### Correctly handling radio captures

## Generating signal with multiple frequencies

### Detecting frequencies by correlation

$Energy = \sum_{n=0}^{N-1}s^2_n$
$Power = \frac{Energy}{Time} = \frac{1}{N}\sum_{n=0}^{N-1}s^2_n$
By multiplying the test signal with different reference signals and taking the absolute value of the mean, we can see whether a reference frequency was used in the generation of the test signal (it was, if the result is clearly above 0).

In this plot, 440 Hz, 880 Hz and 1760 Hz were used to generate the test signal, and 440 Hz, 600 Hz, 880 Hz, 1000 Hz and 1760 Hz were used as reference signals.
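A NumPy sketch of this correlation test with the same frequencies (sample rate and detection threshold are illustration choices; all signals are in phase here, the phase problem is addressed below):
```python
import numpy as np

fs = 8000.0
t = np.arange(0, 1.0, 1 / fs)

# Test signal built from 440 Hz, 880 Hz and 1760 Hz
test = sum(np.cos(2 * np.pi * f * t) for f in (440, 880, 1760))

# Correlate with each reference frequency: mean of the product, then absolute value.
# Clearly non-zero (~0.5) only for frequencies contained in the test signal.
for f_ref in (440, 600, 880, 1000, 1760):
    corr = abs(np.mean(test * np.cos(2 * np.pi * f_ref * t)))
    print(f_ref, corr)   # ~0.5 for 440/880/1760, ~0 for 600/1000
```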
### Multiple frequencies: Conclusion
Cosine test signals correlate with cosine reference signals if the frequencies match.
Sine and cosine functions do not correlate.
* Mathematical proof:
  * $\langle sin,cos \rangle = \int^{2\pi}_0 sin(\phi)cos(\phi)\, d\phi = 0$
  * The scalar product of sine and cosine is zero. Hence, sine and cosine functions are orthogonal.
## Phase shift
* Considering only sine or only cosine functions does not work if test and reference signal are not in phase
* Remember: sine and cosine functions are orthogonal to each other (= 90° phase shift)
* By correlating with sine and cosine functions, we can represent the **phase** and the **signal power** of reference signals in a test signal.
## Fourier Transform

$FFT_{mean}(s_n)=\frac{FFT(s_n)}{N}$
$abs(FFT_{mean}(s_n))$ yields the spectrum of the signal.
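The same frequencies can be found in a single step with the FFT instead of one correlation per reference frequency (NumPy sketch, normalization by $N$ as in the formula above):
```python
import numpy as np

fs, N = 8000, 8000
t = np.arange(N) / fs
s = sum(np.cos(2 * np.pi * f * t) for f in (440, 880, 1760))

spectrum = np.abs(np.fft.fft(s) / N)       # abs(FFT_mean(s_n))
freqs = np.fft.fftfreq(N, d=1 / fs)

# Each contained cosine shows up with magnitude ~0.5 at +f (and at -f)
peaks = freqs[(spectrum > 0.25) & (freqs >= 0)]
print(peaks)   # [ 440.  880. 1760.]
```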

## Up- and Downconversion
Transmitting lower frequency signals at higher frequencies.
1. Transmitter upconverts signal to higher frequency.
2. Signal is transmitted at higher frequency.
3. Receiver downconverts received signal to lower frequency and handles signal.


The two sidebands are produced by the upconversion.
Filtering out one of the sidebands would be more efficient.
**Spectrum plot of AM signal**


### From Real to Complex signals using the Hilbert Transform

The Hilbert transform leads us to the following signals:

### Up and Downconversion: Conclusion
* Real signals are mirrored in the frequency domain (positive and negative frequencies exist)
* When upconverting a real signal (amplitude modulation, AM), we get a double-sideband (DSB) signal whose envelope is the baseband signal
* To remove negative frequency components, we have to use complex baseband signals
* To convert from real to complex baseband signals, we use the Hilbert transform, which generates a 90° phase-shifted signal
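A sketch of the Hilbert-transform step, implemented here directly via the FFT by removing the negative frequencies (one common way to obtain the analytic, i.e. complex, signal; frequencies and lengths are arbitrary illustration values):
```python
import numpy as np

def analytic_signal(x):
    """Real signal -> complex analytic signal (negative frequencies removed)."""
    N = len(x)
    X = np.fft.fft(x)
    H = np.zeros(N)
    H[0] = 1                    # DC stays
    H[1:N // 2] = 2             # positive frequencies doubled
    H[N // 2] = 1               # Nyquist bin (N even)
    return np.fft.ifft(X * H)   # imaginary part = Hilbert transform of x

fs = 8000
t = np.arange(fs) / fs
x = np.cos(2 * np.pi * 440 * t)
z = analytic_signal(x)
# The imaginary part is the 90-degree shifted copy: sin(2*pi*440*t)
print(np.allclose(z.imag, np.sin(2 * np.pi * 440 * t), atol=1e-9))
```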
# II Confidentiality
## Goals
1. Students can name effects that lead to channel distortions.
1. Students can mathematically describe frequency selective channel distortions with respect to amplitude and phase changes.
1. Students can explain how frequency selective distortions affect OFDM-based Wi-Fi signals.
1. Students can describe how signals transmitted by multiple antennas are received on one antenna (MISO - Multiple Input, Single Output).
1. Students can mathematically describe MISO systems.
1. Students can describe systems with multiple receive and transmit antennas (MIMO - Multiple Input, Multiple Output).
1. Students can explain and mathematically describe how the signal quality can be increased by increasing the number of receive antennas.
1. Students can explain Wyner's Wiretap Channel.
1. Students can explain the different experiments of the STROBE paper (Omni, SUBF, Cooperative Eavesdropper, STROBE).
1. Students can mathematically describe how STROBE works.
1. Students can name short comings of the STROBE approach.
1. Students can explain attacks against the STROBE approach.
1. Students can mathematically describe a known-plaintext attack against STROBE.
## Repetition
- See Ch. 2
## Channel Representations
$$Y = H \times X + N
$$
### WiFi
$$A \cdot \cos(2\pi f_c t + 2\pi f_{sc} t + \phi)
$$
We transmit individual data symbols on each of the 48 subcarriers. Each subcarrier has its own channel coefficient.
## Introduction of Multi-Antenna (MIMO) Systems
To reconstruct 2 symbols that were transmitted via 2 antennas, the receiver needs at least the same number of antennas so that the **matrix of channel coefficients** is invertible:
$$
Y_{RX} = \underset{\text{not invertible}}{\underbrace{\begin{bmatrix} H_{RX,TX1} & H_{RX,TX2}\end{bmatrix}}} \cdot \begin{bmatrix} X_{TX1} \\ X_{TX2} \end{bmatrix}
$$
$$
\begin{bmatrix} Y_{RX1} \\ Y_{RX2} \end{bmatrix} = \underset{\text{invertible}}{\underbrace{\begin{bmatrix} H_{RX1,TX1} & H_{RX1,TX2} \\ H_{RX2,TX1} & H_{RX2,TX2} \end{bmatrix}}} \cdot \begin{bmatrix} X_{TX1} \\ X_{TX2} \end{bmatrix} \Rightarrow X = H^{-1} \cdot Y
$$
The inverse of H is used as the receive filter.
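A minimal 2×2 sketch (random complex channel coefficients and the noise level are assumptions for illustration) of using the inverted channel matrix as receive filter:
```python
import numpy as np

rng = np.random.default_rng(1)

# 2x2 channel matrix H[rx, tx] with complex coefficients (amplitude + phase)
H = (rng.standard_normal((2, 2)) + 1j * rng.standard_normal((2, 2))) / np.sqrt(2)

X = np.array([1 + 1j, -1 + 1j]) / np.sqrt(2)     # one QPSK symbol per TX antenna
N = 0.01 * (rng.standard_normal(2) + 1j * rng.standard_normal(2))

Y = H @ X + N                   # what the two RX antennas observe
X_hat = np.linalg.inv(H) @ Y    # zero-forcing: inverse of H as receive filter

print(np.round(X_hat, 2))       # close to the transmitted symbols
```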
## Discussion of the Strobe Paper
The secrecy capacity of Wyner's Wiretap Channel is the difference in Eve's and Bob's channel capacities.
The STROBE paper compares 4 approaches:
1. _Omnidirectional antenna:_ TX Alice, RX Bob, RX Eve 1 antenna each.
The communication is eavesdroppable.
2. _Single-User Beamforming:_ TX Alice has 2 antennas, RX Bob, RX Eve 1 antenna each.
The data sent out by Alice is prefiltered ($\left[H_{\text{Alice}\rightarrow\text{Bob}}\right]^{-1} \times \text{Data}$).
3. _Cooperative Eavesdropper:_ Same setup as 2. If the eavesdropper cooperates, noise can be sent over her channel using the second antenna, prefiltered using her channel coefficients.
4. _STROBE:_ Same setup as 2. Since Eve does not usually cooperate, Alice can send noise orthogonal to Bob's channel.
$$
\begin{bmatrix} Bob \\ Eve \end{bmatrix} =
\begin{bmatrix} H_{\text{Alice}\rightarrow\text{Bob}} \\ H_{\text{Alice}\rightarrow\text{Eve}} \end{bmatrix}
\underset{\text{Filter}}{\underbrace{\begin{bmatrix} H_{\text{Alice}\rightarrow\text{Bob}} \\ \bot H_{\text{Alice}\rightarrow\text{Bob}} \end{bmatrix}}}^{-1}
\begin{bmatrix} \text{Data} \\ \text{Noise} \end{bmatrix}
$$
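A sketch of this filter for a two-antenna Alice (channel vectors are randomly generated assumptions; a real system applies this per OFDM subcarrier): data is steered onto Bob's channel, noise onto the direction orthogonal to it, so Bob receives clean data while single-antenna Eve only sees a data/noise mixture.
```python
import numpy as np

rng = np.random.default_rng(2)
def chan(): return (rng.standard_normal(2) + 1j * rng.standard_normal(2)) / np.sqrt(2)

h_bob = chan()                              # Alice (2 antennas) -> Bob (1 antenna)
h_eve = chan()                              # Alice (2 antennas) -> Eve (1 antenna)
h_perp = np.array([h_bob[1], -h_bob[0]])    # direction orthogonal to Bob's channel

# Alice's transmit filter: inverse of the stacked matrix [h_bob; perp(h_bob)]
F = np.linalg.inv(np.vstack([h_bob, h_perp]))

data = (1 + 1j) / np.sqrt(2)
noise = rng.standard_normal() + 1j * rng.standard_normal()
x = F @ np.array([data, noise])             # signal on Alice's two antennas

print(h_bob @ x)    # = data           (the noise lies in Bob's null direction)
print(h_eve @ x)    # data/noise mix   (Eve cannot separate it with one antenna)
```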
## Discussion of the Known-Plaintext Attacks Paper
To separate Data and Noise, Eve would need to know Alice's transmit filter, which is based on Alice's channel to Bob. Additionally, to be able to discern two signal dimensions (Data + Noise), the attacker needs at least 2 antennas (for an inverse matrix to exist).
$$
\underset{\text{Can be estimated by }\textbf{training an adaptive filter}}{\underbrace{\begin{bmatrix} H_{\text{Alice}\rightarrow\text{Bob}} \\ \bot H_{\text{Alice}\rightarrow\text{Bob}} \end{bmatrix}
\begin{bmatrix} H_{\text{Alice}\rightarrow\text{Eve}} \\ \end{bmatrix}^{-1}}} \cdot Eve =
\begin{bmatrix} \text{Data} \\ \text{Noise} \end{bmatrix}
$$
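A sketch of the attack idea (assumptions: a two-antenna Eve, noiseless reception, known plaintext/pilot symbols, and a least-squares fit standing in for the paper's adaptive filter): Eve estimates the linear filter that maps her two received streams back onto the data stream.
```python
import numpy as np

rng = np.random.default_rng(3)
def cgauss(*shape):
    return (rng.standard_normal(shape) + 1j * rng.standard_normal(shape)) / np.sqrt(2)

h_bob = cgauss(2)
F = np.linalg.inv(np.vstack([h_bob, np.array([h_bob[1], -h_bob[0]])]))  # Alice's filter
H_eve = cgauss(2, 2)                                  # Alice (2 TX) -> Eve (2 RX antennas)

# Alice sends known plaintext (training) plus noise on the orthogonal direction
K = 100
data = np.sign(rng.standard_normal(K)) + 0j           # known BPSK plaintext
noise = cgauss(K)
Y_eve = H_eve @ (F @ np.vstack([data, noise]))        # 2 x K: Eve's two antenna streams

# Least-squares estimate of the filter row that recovers the data stream
g, *_ = np.linalg.lstsq(Y_eve.T, data, rcond=None)

# Apply the trained filter to new, unknown symbols
data_new, noise_new = np.sign(rng.standard_normal(20)) + 0j, cgauss(20)
Y_new = H_eve @ (F @ np.vstack([data_new, noise_new]))
print(np.allclose(g @ Y_new, data_new, atol=1e-8))    # True: noise removed
```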
# III Information theory
## Goals
1. Students can explain and differentiate the following terms: Uncertainty, Entropy, Probability.
1. Students can explain and give examples for Random Experiments and Outcomes of Random Experiments.
1. Students can explain and mathematically describe the following terms: Conditional Entropy, Mutual Information, and Channel Capacity.
1. Students can explain what is understood by the term Channel in Information Theory.
1. Students can describe the Shannon's Cipher System.
1. Students can explain the term Perfect Secrecy.
1. Students can give examples on how to reach Perfect Secrecy in practice.
1. Students can explain Wyner's Degraded Wiretap Channel.
1. Students can explain the terms Equivocation of a Code, and Information Leakage.
1. Students can explain by example how information can be compressed without losing information.
## Definitions
> Students can explain and differentiate the following terms: Uncertainty, Entropy, Probability.
> Students can explain and give examples for Random Experiments and Outcomes of Random Experiments.
> Students can explain and mathematically describe the following terms: Conditional Entropy, Mutual Information, and Channel Capacity.
To describe the **Uncertainty** of an event, we use **(Shannon) Entropy**; entropy is thus a measure of uncertainty. To compute the entropy, which is measured in bits, we use the **Probability** of events. Entropy can be interpreted not only as a measure of uncertainty but also as the average number of bits needed to represent all outcomes of an experiment. Let $X$ be a random variable denoting an experiment with $N$ outcomes; the entropy is then calculated as:
$$ \langle \#Bits \rangle = - \sum_{x \in X} Pr[\,x\,] \cdot \log_2(Pr[\,x\,])
$$
**An example:**
English has 26 letters. If we assume that each one is as likely as the other to occur in a text, we get an entropy of
$$ - \sum_{i = 1}^{26} \frac{1}{26} \cdot \log_2 \left(\frac{1}{26}\right) =-26 \cdot \frac{1}{26} \cdot \log_2 \left(\frac{1}{26}\right) = \\ -\log_2\left(\frac{1}{26}\right) \approx 4.7 bits
$$
Therefore we would need 5 bits/character for a representation.
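The formula as a small sketch (example distributions chosen for illustration):
```python
import numpy as np

def entropy(probs):
    """Shannon entropy in bits: -sum p * log2(p)."""
    probs = np.asarray(probs, dtype=float)
    probs = probs[probs > 0]
    return float(-np.sum(probs * np.log2(probs)))

print(entropy([1 / 26] * 26))    # ~4.70 bits per letter (uniform English alphabet)
print(entropy([0.5, 0.5]))       # 1 bit for a fair coin flip
print(entropy([1.0]))            # 0 bits: no uncertainty at all
```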
**Conditional entropy:**
The entropy of a random variable $Y$ if another variable $X$ is known is called the conditional entropy $\mathbb{H}(Y|X)$. It is calculated as:
$$ \mathbb{H}(Y|X) = - \sum_{x \in X,\ y \in Y} Pr[\,x \land y\,] \cdot \log_2\left(\frac{Pr[\,x \land y\,]}{Pr[\,x\,]}\right)
$$
$\mathbb{H}(Y|X) = \mathbb{H}(Y)$, if $Y$ and $X$ are independent and
$\mathbb{H}(Y|X) = 0$, if $Y$ is completely determined by $X$
**Mutual information:**
By how much does the entropy of one random variable reduce if we know another? This quantity is the mutual information, labelled $\mathbb{I}$ (think of it as the intersection of the two information contents):
$$ \mathbb{I}(X;Y) = \mathbb{H}(X) - \mathbb{H}(X|Y) \\
\quad\quad\ \quad\ = \mathbb{H}(Y) - \mathbb{H}(Y|X) \\
= \sum_{y \in Y}\sum_{x \in X} Pr[\,x,y\,] \cdot \log_2\left( \frac{Pr[\,x,y\,]}{Pr[\,x\,] \cdot Pr[\,y\,]} \right)
$$
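A sketch computing $\mathbb{H}(Y|X)$ and $\mathbb{I}(X;Y)$ from a joint distribution table; the binary symmetric channel with crossover probability 0.1 and uniform input is a made-up example:
```python
import numpy as np

def H(p):                      # entropy of a distribution given as an array
    p = p[p > 0]
    return float(-np.sum(p * np.log2(p)))

# Joint distribution Pr[x, y] for a binary symmetric channel, p_flip = 0.1, uniform X
p_xy = np.array([[0.45, 0.05],
                 [0.05, 0.45]])

p_x = p_xy.sum(axis=1)
p_y = p_xy.sum(axis=0)

H_Y_given_X = H(p_xy.flatten()) - H(p_x)     # H(X,Y) - H(X)
I_XY = H(p_y) - H_Y_given_X                  # H(Y) - H(Y|X)

print(H_Y_given_X)   # ~0.469 bits: the misinformation added by the channel
print(I_XY)          # ~0.531 bits: mutual information (capacity for uniform input)
```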
**Channel Capacity**
The Channel Capacity $C$ is the tightest upper bound on the rate of information that can be reliably transmitted over a communications channel. It is defined as:
$$ C = \underset{p_X(x)}{\sup}\mathbb{I}(X;Y)
$$
with
$X$: Random variable for the transmitted data
$Y$: Random variable for the received data
where the supremum is taken over all possible choices of $p_X(x)$
## Channels
> Students can explain what is understood by the term Channel in Information Theory.
> Students can describe the Shannon's Cipher System.
> Students can explain the term Perfect Secrecy.
> Students can give examples on how to reach Perfect Secrecy in practice.
A **Channel** links a transmitter and a receiver of information. A message $M$ is first encoded by the Encoder $e$ into a symbol $X$. This symbol is transmitted over a wireless channel $H$. The channel distorts $X$ and adds additive noise, so that a symbol $Y$ is received at the other end. $Y$ then has to be decoded by the Decoder $d$ to obtain a message $M^*$. Only with low channel distortions does $M=M^*$ hold.
## Shannon's Cipher System
Alice wants to transmit information to Bob. She uses an Encoder $e$ while Bob uses a Decoder $d$. Note that in Shannon's cipher system, _channel distortions are not considered!_ If a wiretapper exists and listens, he obtains the symbol $X$, equal to the symbol sent out by Alice. Let us assume a message $M \in \mathcal{M}$, a key $K \in \mathcal{K}$, and the transmitted data $X \in \mathcal{X}$, then:
| Encoder | Decoder |
| ------------------------------------------------------- | ------------------------------------------------------- |
| $e:\mathcal{M}\times\mathcal{K}\rightarrow \mathcal{X}$ | $d:\mathcal{X}\times\mathcal{K}\rightarrow \mathcal{M}$ |
| $X=e(M,K)$ | $M=d(X,K)$ |
**Perfect Secrecy** now means that the knowledge of the encoded symbol does not give the adversary any new information about the message that was encoded. Mathematically formulated: $$ \mathbb{H}(M|X)=\mathbb{H}(M) \Leftrightarrow \mathbb{I}(M;X)=0
$$
This is achieved if the entropy of the key is at least as big as the entropy of the message, $\mathbb{H}(K) \ge \mathbb{H}(M)$, which means that $K$ has at least as many (random) bits as $M$, as in a **One-Time Pad** encryption.
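A small sketch illustrating why the One-Time Pad achieves perfect secrecy (hypothetical 2-bit messages): for a uniformly random key, the ciphertext distribution is the same for every message, so observing $X$ tells the adversary nothing about $M$.
```python
import numpy as np
from collections import Counter

rng = np.random.default_rng(4)

def otp(m, k):                  # one-time pad on 2-bit values: XOR with the key
    return m ^ k

for m in range(4):              # every possible 2-bit message
    keys = rng.integers(0, 4, size=100_000)            # uniformly random 2-bit keys
    ciphertexts = Counter(otp(m, k) for k in keys)
    print(m, {int(c): round(n / 100_000, 3) for c, n in sorted(ciphertexts.items())})
# Every row is ~uniform over {0,1,2,3}: Pr[X | M=m] does not depend on m,
# hence H(M|X) = H(M) and I(M;X) = 0.
```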
## Wyner's Degraded Wiretap Channel
> Students can explain Wyner's Degraded Wiretap Channel.
> Students can explain the terms Equivocation of a Code, and Information Leakage.
> Students can explain by example how information can be compressed without loosing information.
**Wyner's degraded wiretap channel** models a message encoded by Alice as a symbol $X$. As $X$ is transmitted through the channel $p_{Y|X}$, Bob receives a symbol $Y$ with the corresponding probability. Eve, being a wiretapper, listens to the communication as well. Because her channel is a degraded version of Bob's, the symbol she receives is $Z$.
$$
\boxed{Alice} \xrightarrow{X} \boxed{Channel\ p_{Y|X}} \xrightarrow{Y} \boxed{Bob}, \qquad Y \rightarrow \boxed{Channel\ p_{Z|Y}} \xrightarrow{Z} \boxed{Eve}
$$
The **Equivocation** (i.e. the conditional entropy) of the code $\mathcal{C}_n$ denotes the conditional entropy of the plaintext given the cryptogram. This conditional entropy can be at most as large as the rate of the purely random key stream (i.e. the rate achieved by a One-Time Pad).
$$ \textbf{E}(\mathcal{C}_n) \widehat{=} \mathbb{H}(M|Z^n\mathcal{C}_n)
$$
When thinking of entropy as information content, it helps to visualize the following:
| Formula | Definition |
| ------------------------------------------------------- | ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------ |
| $\mathbb{H}(X)$ | Alices's information content |
| $\mathbb{H}(Y)$ | Bob's information content |
| $\mathbb{H}(X\|Y)$ | The equivocation <br>$\Rightarrow$ the remaining uncertainty about $X$ given the received symbol $Y$; this part of Alice's information gets lost when she communicates with Bob. |
| $\mathbb{H}(Y\|X)$ | The misinformation. This is what is added by the channel. |
| $\mathbb{H}(X,Y) = \\ \mathbb{H}(X) + \mathbb{H}(Y\|X)$ | The total amount of information (joint entropy). Equal to what Alice sent $\mathbb{H}(X)$ plus the misinformation added by the channel $\mathbb{H}(Y\|X)$. Can alternatively be viewed as what Bob received $\mathbb{H}(Y)$ plus the equivocation $\mathbb{H}(X\|Y)$. |
| $\mathbb{I}(X;Y) = \\ \mathbb{H}(Y) - \mathbb{H}(Y\|X)$ | The mutual information. In the wiretap setting this is the information leaked to the eavesdropper: the received information $\mathbb{H}(Y)$ minus the misinformation $\mathbb{H}(Y\|X)$. |
The **Information Leakage** in the slides is also written as:
$$ \textbf{I}(\mathcal{C}_n)\ \widehat{=}\ \mathbb{I}(M;Z^n\mathcal{C}_n)
$$
With $Z^n$ denoting the sequence of symbols received by Eve (of length $n$).

# IV Jamming
## Learning Objectives
1. An introduction to wireless jamming
2. selected aspects of adversarial jamming
3. selected aspects of friendly jamming (exercise reading)
4. Introduction on how to combat jamming
---
1. Students can list real-world examples of jamming. **Not included here, since these are just examples.**
1. Students can explain which parameters influence the jamming to signal ratio. [here](#Jamming-Parameters)
1. Students can identify basic countermeasures against adversarial jamming.
1. Students can list different types of jammers and compare them in terms of efficiency – and then apply these parameters to optimize jammers for known protocols such as WiFi. [here](#Different-kinds-of-jammers)
1. Students know how jamming can be used for downgrade attacks. [here](#Jamming-example---MITM)
1. Students can list different anti-jamming methods. [here](#Anti-Jamming-Communication)
1. Students can explain how to localize a FHSS station. [here](#Finding-FHSS-such-as-bluetooth)
1. Students understand the mathematics behind DSSS to choose a signal bandwidth that sufficiently increases the processing gain. [here](#Motivation-Shannon-channel-capacity-C-) and [here](#Processing-gain-PG)
1. Students can explain the concept of friendly jamming for confidentiality and know its limitations. **(-> Paper)**
1. Students can explain the components of a WARP Software-defined Radio **Lab**
1. Students can describe how the 802.11 Reference Implementation processes frames. **Lab**
1. Students can modify the 802.11 Reference Implementation and explain how to reactively Jam frames using the WARP. **Lab**
## Definition of jamming
* Jammer **M**allory transmits signals on the same frequency/band on which the honest parties communicate
* Blocks the reception of the message from sender **A**lice at the receiver **B**ob

### Objectives of jamming (artificial interference)
* **Modification** (e.g. bit flipping)
* Can cause the message to change or become undecodable
* Can be (partially) addressed by Error Correction Codes
* **Overshadowing**
* The attacker's signal is dominant, the original seems like noise
* The attacker's signal makes it impossible for the radio to decode (demodulate) the message, i.e. $m_{Source}+m_{Attacker}=random/cannot\ be\ decoded$ (low SINR, implies high BER)
* Jamming and overshadowing can be (partially) addressed by **spread spectrum** and similar communication techniques
### Jamming Parameters
* **Jamming-to-signal (J/S) ratio**
The ratio of the power of the two received signals within the frequency passband of the receiver.
* Importance of Jammer's Location
* Antenna gain: The ratio of the radiation intensity, in a given direction, to the radiation intensity that would be obtained if the power accepted by the antenna were radiated isotropically
* If the receiving antenna is not omnidirectional, its gain to the jamming signal will be different (usually less) than its gain to the desired signal
#### Parameters influencing J/S
:::warning
This table is hard to understand even in the slides. We should talk it through at some point and then correct it here so it is easier to understand.
:::
The effect of each parameter in the jamming situation on J/S:
| Parameter (increasing) | Effect on J/S |
| -------- | -------- |
| Jammer transmit power | Directly increases J/S dB for dB |
| Jammer antenna gain | Directly increases J/S dB for dB |
| Jammer-to-receiver distance | Decreases J/S with the propagation loss over that distance |
| Signal transmit power | Directly decreases J/S dB for dB |
| Transmitter-to-receiver distance | Increases J/S with the propagation loss over that distance |
| Transmit antenna gain | Directly decreases J/S dB for dB |
| (Directional) receiver antenna gain | Directly decreases J/S dB for dB |
## Jamming 802.11 (WiFi)
### How does a WiFi-Frame look like?

### Different kinds of jammers
#### Channel-oblivious & memoryless
Jammer makes decisions without sensing the channel and independently from past actions.
There are two types of these jammers:
1. in continuous time, jamming pulses arrive according to a Poisson distribution
2. in discrete time, the jammer has a fixed probability of transmitting a pulse every timeslot
#### Channel-oblivious & stateful
Jammer has no access to the channel state; however their actions may be dependent on their past behaviour.
The simplest example is a **periodic jammer.** Another approach is to send bursts of pulses and then stop for a long period of time before repeating. Such a jammer could attempt to drive the nodes into a long backoff period where they don't attempt to send packets even though no jamming is occurring.
#### Channel-Aware & memoryless
Jammers have basically one jamming rate for each possible state of the channel (e.g. busy, idle).
#### Channel-Aware & stateful
* Are the most sophisticated jammers
* One in this category is the **reactive jammer**
* the strongest jammer in this category is the **omniscient jammer**
  * senses the medium and can identify the number of retransmissions that a packet went through
  * whenever a non-colliding transmission is detected, it transmits a jamming pulse with a probability that may depend on the backoff stage of the transmitter
### Jamming example - MITM
1. Vulnerability: _Lack of mutual authentication in GPRS/EDGE_
2. Vulnerability: Support for no encryption (GEA0)
3. Vulnerability: _Fallback_ from UMTS/HSPA (3G, 3.5G) to GPRS/EDGE when UMTS/HSPA is not available
This lets an attacker force devices to behave like GSM/GPRS/EDGE devices by simply using a jammer against the UMTS/HSPA frequencies, thus extending the previous two vulnerabilities to them.
## Anti-Jamming Communication
Basic principle: "If you cannot beat them: run and hide"

Communication partners need advantage over the attacker
* Secret key shared between the sender and receiver provides this advantage
* Can be used for various defenses
### Frequency Hopping Spread Spectrum (FHSS)
**Synchronized** sender and receiver share a key: from the key a sequence of frequencies is derived.
Sender and receiver then hop between frequencies according to this key sequence, making it impractical to jam, since all frequencies would have to be jammed.
#### Partial Band Jammer
A partial band jammer distributes its available power to achieve 0 dB J/S in each jammed channel at the jammed receiver
* J/S = 0dB is sufficient to achieve high bit error rate (BER)
* **optimizes available jamming power** to successfully jam as many channels as possible
#### Follower Jammer
1. Detect the frequency **quickly**
2. Jam it
3. Goto 1 again

#### Finding FHSS (such as bluetooth)

Detection of signal direction: When collected data shows **multiple frequencies** at **one angle of arrival**, a frequency hopper can be assumed.
### Direct Sequence Spread Spectrum (DSSS)

_"Bandwidth instead of TX power"_
With DSSS, the message signal is used to modulate a bit sequence known as a Pseudo Noise (PN) code; this PN code consists of a radio pulse that is much shorter in duration (larger bandwidth) than the original message signal.
* Secret spreading code - DSSS hides the signal
* Signal detection is now more difficult
* **signal "hidden" in the noise**
* Signal interception/modification difficult
* Jamming
* Narrowband jamming now requires much higher power
* **Broadband jamming** still effective
**Motivation:** Shannon channel capacity ($C$)
$C = B \cdot log_2(1+\frac{S}{N})$ or $\frac{C}{B} \approx 1.443 \cdot \frac{S}{N}$ for small $\frac{S}{N}<<1$
* $B$ is available channel bandwidth
* For $\frac{S}{N}<<1$ it is still possible to communicate in an error-free manner, given sufficiently large $B$!
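A quick numerical check of this (bandwidths and S/N chosen arbitrarily for illustration):
```python
import numpy as np

def capacity(B, snr):
    """Shannon channel capacity C = B * log2(1 + S/N) in bit/s."""
    return B * np.log2(1 + snr)

snr = 0.01                         # S/N << 1: the signal is buried in the noise
print(capacity(1e6, snr))          # ~14_355 bit/s with 1 MHz of bandwidth
print(capacity(100e6, snr))        # ~1.44 Mbit/s with 100 MHz: spreading pays off
print(np.log2(1 + snr) / snr)      # ~1.44, approaches 1/ln(2) ~ 1.443 as S/N -> 0
```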

#### Example: DSSS with BPSK modulation
Original BPSK modulated signal:
$s(t)=b(t)\cdot cos(\omega_0 t)$ with $b(t)\in\{-1,+1\}$ being the input data
DS spread spectrum signal
$ss(t) = a(t)\cdot s(t) = a(t)\cdot b(t)\cdot cos(\omega_0 t)$ with $a(t)\in\{-1,+1\}$ being the spreading code.
The bit rate of $b(t)$ is denoted $R_b$ and the (chip) rate of $a(t)$ is denoted $R_a$
* $R_b << R_a$ (the spreading effect)
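A baseband NumPy sketch of the spreading and despreading step (the carrier $cos(\omega_0 t)$ is omitted; spreading factor, data length, and PN code are arbitrary assumptions):
```python
import numpy as np

rng = np.random.default_rng(5)

chips_per_bit = 16                               # spreading factor R_a / R_b
bits = rng.choice([-1, 1], size=8)               # b(t): data at rate R_b
code = rng.choice([-1, 1], size=chips_per_bit)   # a(t): PN spreading code

# Spread: every data bit is multiplied chip-wise with the PN code
spread = np.repeat(bits, chips_per_bit) * np.tile(code, len(bits))

# Despread at the receiver: multiply with the same code and integrate per bit
despread = (spread * np.tile(code, len(bits))).reshape(len(bits), chips_per_bit)
decisions = np.sign(despread.sum(axis=1))

print(np.array_equal(decisions, bits))           # True: data recovered
```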
##### Matlab exercise: DSSS (here for the graphic)

##### Changing $R_b$

#### Example: Spreading effect
The resulting signal is similar to $g(t)$
* Bandwidth of $s(t)$ is $2R_b$ and of $ss(t)$ is $2R_a$
* the spectrum is spread by the ratio $\frac{R_a}{R_b}$
* the power of $s(t)$ and $ss(t)$ is the same, so the power spectral density is reduced by the factor $\frac{R_a}{R_b}$

#### Why spreading?




### Processing gain (PG)
The ratio between the spread bandwidth and the original (unspread) bandwidth
* e.g. if a 1 kHz signal is spread to 100 kHz the processing gain is
* $\frac{100 000Hz}{1000Hz} = 100$
* $= 10 \log_{10}(100) = 20 dB$
* the **PG is a signal to jammer (interference) ratio** at the receiver after the despreading operation (removal of pseudo noise)
PG increases the jamming margin:
$M_j = PG - (SNR_{required} + Loss_{system})$
The jamming margin is the level of interference that a system is able to accept and still maintain a specified level of performance (e.g. a given bit error rate, BER)
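The example numbers from above as a sketch (the required SNR and system loss are assumed values):
```python
import numpy as np

def db(x):
    return 10 * np.log10(x)

PG = db(100_000 / 1_000)     # spread / unspread bandwidth -> 20 dB processing gain

SNR_required = 10            # dB needed by the demodulator (assumed value)
system_loss = 2              # dB implementation loss (assumed value)
M_j = PG - (SNR_required + system_loss)

print(PG, M_j)               # 20 dB processing gain, 8 dB jamming margin
```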
## Conclusion on Jamming on Physical Layer
* Existing wireless data networks are easy targets of physical layer jamming
* typically a DoS attack (which might be launched as a stepping stone to prepare another attack)
* **Countermeasures** are not trivial
* High transmission power and spread spectrum are not enough
* E.g. in 802.11, jammer effort in the order of $10^{-4}$ for an IP Packet
* **Solutions** (most we did not discuss)
* directional antennas (directly derived from the J/S ratio)
* traditional anti-jamming focuses on bit protection
* cryptographic interleaving and error control codes provide much better resiliency to jamming
* There is also **"Jamming for good"**
* blinding eavesdroppers
* "shoot" malicious traffic out of the air (as a kind of wireless firewall/intrusion prevention system)
* needs to be very carefully designed/implemented:
* just shouting out loud might not hinder someone else from having a meaningful conversation
# V Integrity and authentication
## Learning objectives
1. Students can match classical security goals with appropriate physical layer mechanisms.
2. Students can explain location fingerprinting and which properties a location fingerprint should have.
3. Students can apply the concept of location fingerprinting for authentication.
4. Students can explain why classical integrity mechanisms need a trusted third party.
5. Students can apply integrity coding to a given set of binary data.
6. Students can give an application example for integrity regions.
7. Students know the goal of distance bounding.
8. Students can list the steps in Hancke & Kuhn and Brands & Chaum distance bounding.
9. Students can explain the distance hijacking attack and why the two protocols are vulnerable / secure against this.
10. Students can calculate how much security multiple rounds in distance bounding provide against random guesses.
11. Students can explain how an attacker can shorten a measured distance with early guessing bits.
## Matching of security goals with PHY layer mechanisms
> Students can match classical security goals with appropriate physical layer mechanisms.
| Security goal | Mechanism |
| -------- | -------- |
| Confidentiality | Orthogonal blinding |
| Confidentiality & Availability | Frequency Hopping |
| | DSSS |
| Authentication | Fingerprinting (classical counterpart: passwords) |
| Integrity | Integrity Regions |
| | On-Off Keying (& Integrity codes) |
| Integrity & Authentication | Distance Bounding |
## Location fingerprinting
> Students can explain location fingerprinting and which properties a location fingerprint should have.
> Students can apply the concept of location fingerprinting for authentication.
Aim: authentication. A fingerprint distinguishes channel responses of different paths. It can be composed of:
* Multipath channel information (e.g. CSI)
* **Channel frequency response** (frequency domain)
$\Rightarrow$ hard to predict and spoof indoors ($\approx$ CSI)
* **Channel impulse response** (time domain)
* Signal strength (has downsides)
#### Properties
* *Not predictable* (by the adversary)
* *Not reproducible* (by the adversary)
* *Accessible* (to both Alice and Bob)
#### The authentication
Channel response $H_0$ is estimated at time $t_0$, then re-estimated every now and then and compared with $H_0$. Higher *TX power*, more *measurements* $t_{0..n}$ for $H_0$, or *higher bandwidth* lead to fewer false alarms during authentication.
## Classical integrity mechanisms
> Students can explain why classical integrity mechanisms need a trusted third party
#### Diffie-Hellman (DH) Key Exchange
1. $g, p$ are public
2. $A \rightarrow B:g^a \mod p$
3. $B \rightarrow A:g^b \mod p$
4. $K_A = (g^b)^a \mod p$
$K_B = (g^a)^b \mod p$
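A toy sketch of the exchange with small numbers (illustration only; real deployments use large primes or elliptic curves):
```python
# Toy Diffie-Hellman with small public parameters g, p (illustration only)
g, p = 5, 23

a, b = 6, 15                 # Alice's and Bob's private exponents
A = pow(g, a, p)             # Alice -> Bob: g^a mod p
B = pow(g, b, p)             # Bob -> Alice: g^b mod p

K_A = pow(B, a, p)           # Alice: (g^b)^a mod p
K_B = pow(A, b, p)           # Bob:   (g^a)^b mod p
assert K_A == K_B            # shared key; never transmitted itself
print(K_A)
```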
**Warning!**
Traditional DH is susceptible to MITM attack (Eve jams $A$ and $B$'s communication, does DH individually with $A$ and $B$, has two keys afterwards).
$\Rightarrow$ Authenticated DH therefore signs the exchanged messages. For signatures to be possible a *Trusted Third Party (TTP)* needs to distribute the public keys. To avoid reliance on a *TTP*: e.g. integrity codes.
## Integrity codes
> Students can apply integrity coding to a given set of binary data.
Assumptions:
* Sender and receiver are in sync with respect to the beginning and end of $\mathcal{C}$
* Adversary cannot block signal $1$ (except with probability $\varepsilon$)
#### I-Coding
**Manchester encoding rule $\epsilon$ :**

Now modifications can easily be spotted:

#### Application

\* _On-Off Keying: Simplest form of ASK, where amplitude is either 1 or 0._
## Integrity regions
> Students can give an application example for integrity regions.
#### Why?
MITM attacks can be very severe. An attacker can be outside, manipulating communication somewhere inside. Ranges of radios are often unpredictable, often TX power cannot be controlled, and the attacker may use high-gain (directional) antennas.
#### Assumptions
* User can assume or visually verify that there are no malicious devices within integrity region
* No certificates or pre-shared keys exchanged prior to the protocol execution
#### Main idea
Achieve message authentication through distance verification. Devices have an awareness of presence. *The method of distance verification is left unspecified here (e.g. distance bounding).*
#### Application scenarios
- *Home network:* Attacker would have to physically enter home to act as MITM
- *Setup of WSNs:* For wireless sensor networks in the phase of first key-exchange
## Distance bounding
> Students know the goal of distance bounding
> Students can list the steps in Hancke & Kuhn and Brands & Chaum distance bounding.
The foundation for the distance measurement is the elapsed round-trip time between Verifier $V$ and Prover $P$. The measured distance $\hat{d}_{VP}$ has to be greater than or equal to the actual distance $d_{VP}$:
$$ \hat{d}_{VP} \ge d_{VP}
$$
#### Example Applications
- Measure distance to ensure users can only log in when they sit next to their computer
- Store monitoring system against theft
- Integrity regions
- RFID access control / micropayments (credit cards)
### Brands & Chaum implementation
Prover and Verifier share a secret key $S$. The bit exchange is done multiple times because the adversary could guess a bit $\beta_i$; the probability of guessing right $n$ times in a row is $\left(\frac{1}{2}\right)^n$. A commit hides the information such that it can only be opened and verified later via the key delivered in the open-commit message.
| Prover | | Verifier |
| ---------------------------------------------------- |:--------------------------------------------:| ---------------------------------------------- |
| Generate $m \in \{0,1\}^k$, <br> compute $commit(m)$ | $\xrightarrow{commit}$ | |
| | Start rapid bit exchange $k$-times | |
| | $\xleftarrow{\alpha_i}$ | Generate random $\alpha$ |
| $\beta_i = \alpha_i \oplus m_i$ | $\xrightarrow{\beta_i}$ | Stop clock and check if $\Delta t \le t_{max}$ |
| | End of rapid bit exchange | |
| Compute signature $sig = sign(\beta\|\alpha)$ | $\xrightarrow{sig, \textit{ (open commit)}}$ | Verify commit and verify $sign(\beta\|\alpha)$ |
### Hancke & Kuhn implementation
Verifier $V$ and Prover $P$ share a secret key $K$ and a pseudorandom function $h$.
| Prover | | Verifier |
| ---------------------------------------------------- |:----------------------------------:| ------------------------------------------------------------ |
| | $\xleftarrow{N_V}$ | Generate Nonce $N_V$ |
| Calculate $h(K,N_V)=R$ <br> Split $R=R^0\|R^1$ | | Generate random $C \in \{0,1\}^k$ |
| | Start rapid bit exchange $k$-times | |
| Choose $R^0_i$ if $C_i = 0$, else $R^1_i$ | $\xleftarrow{C_i}$ | |
| | $\xrightarrow{R^{C_i}_i}$ | Check if received $R^{C_i}_i$ equals to expected $R^{C_i}_i$ |
| | End rapid bit exchange | |
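For the learning objective on guessing security, a small sketch: the probability that an attacker answers all $k$ rapid-bit rounds correctly by pure guessing is $(1/2)^k$ (for Hancke & Kuhn, pre-asking the prover raises the per-round probability to $3/4$ per the usual analysis):
```python
def guess_success(k, p_round=0.5):
    """Probability of answering all k rapid-bit rounds correctly by guessing."""
    return p_round ** k

for k in (8, 16, 32, 64):
    print(k, guess_success(k))
# e.g. k = 32 rounds already push a pure guesser below 2.4e-10;
# with p_round = 0.75 (Hancke & Kuhn pre-ask) more rounds are needed for the same level
```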
## Attacks on distance bounding
> Students can explain the distance hijacking attack and why the two protocols are vulnerable / secure against this.
> Students can calculate how much security multiple rounds in distance bounding provide against random guesses.
> Students can explain how an attacker can shorten a measured distance with early guessing bits.
### Distance hijacking attack
In a distance hijacking attack, an adversary $A$ takes advantage of the distance measurement of an honest prover and claims the prover's identity.
#### Brands & Chaum
Vulnerable to distance hijacking. In the completion part of the protocol, the adversary $A$ jams the last message of the honest prover and replaces it with the same message, but signed with his own key. Therefore the Verifier $V$ believes $A$ to be the honest Prover.
**Problem:** The initial information is not bound to an identity. Solution: create such a binding (as in Hancke & Kuhn's implementation).
#### Hancke & Kuhn
Is secure against distance hijacking.
### Time travelling attack
**Early detection:**
By analyzing the transmitted waveform for a shorter amount of time $T_{ed}$ than the actually defined length of the chirp $T_{chirp}$, we can save $T_{chirp} - T_{ed}$ of time. This early detection can be combined with a late commit.
**Late commit:**
Some bit $b=b_X$ is transmitted that does not depend on the actual transmitted information $b_{true}$. Only after the early detection has finished do we change the transmitted bit to $b=b_{true}$. The receiver can usually still decode the information (at least in chirp-based modulation).
**Combined:**
The adversary positions himself between $V$ and $P$. He lets the challenge bit reach $P$, lets $P$ compute his response, and intercepts the communication on the return path. He uses early detection to learn the transmitted information $b_{true}$. Additionally, he starts his late-commit transmission towards $V$ before even knowing $b_{true}$.
$V$, however, will only analyze the returned signal for $T_{chirp}$. That means: even if the first part of the received signal is garbage, as long as $V$ can decode the right bit, the response appears to have arrived earlier than a light-speed round trip would allow (the bound distance bounding relies on), so the measured distance is shortened.
# VI Key Agreement
## Goals
1. Students can name and explain the wireless channel characteristics that enable key agreement on the physical layer.
2. Students can identify the differences between key extraction and key exchange.
3. Students can explain how these approaches agree on a shared key.
4. Students can explain the threat of active and passive attackers on practical physical layer key agreement protocols
5. Students can name and explain each step of key agreement/exchange schemes.
> Since there are only 5 goals and roughly as many headings, every link would simply point to one heading each, so I am not adding links here. [name=BM][time=Sun, Feb 10, 2019][color=red]
## Short Recap
_For a full recap, visit the information theory lecture._
Different types of channel distortions exist: Shadowing, reflection, refraction, diffraction, etc. as well as random noise, the distance attenuation, interference with other transmissions as well as inter symbol interference.
We recall the channel model, where Alice wants to transmit a message $M$, uses an Encoder to encode it to a symbol $X$, transmitting it over the channel $H$, where some noise $N$ gets added, resulting in the symbol that Bob receives $Y$. Bob uses a decoder to decode it to a message $M^*$ that ideally should be equal to $M$.

In the frequency domain the multipath effects are just phase shifts and attenuations and can therefore be expressed as a complex value.
$$ Y = \underset{\text{Channel Matrix}}{H} \cdot X + \underset{\text{Noise}}{N}
$$
We can measure the channel effects by transmitting **pilot symbols** first. Those symbols are known in advance and encoded in BPSK. After receiving them it is possible to estimate the channel characteristics.
**Channel characteristics** are different between each pair of communication parties. In PhySec we assume them to be unpredictable and take them as a source of (pseudo) randomness. Channels are reciprocal, i.e.
$$H_{AB} = H_{BA}
$$
## Physical Layer Key Extraction
Generate keys from correlated random channel observations. Input and output of a wireless channel are correlated. Extract the channel randomness to establish shared keys between two parties.
:::info
M. Schulz explicitly said we should know these 7 phases for the exam.
:::
#### Phases of key extraction

During the **Reconciliation** phase, e.g. partial checksums and possible bit retransmissions are used to correct the minor differences between Alice's and Bob's observations, without giving Eve enough information to correct her much larger differences. In the **Privacy Amplification** phase a key derivation function might be used to arrive at a secure shared secret.
## Physical Layer Key Exchange
Two parties agree on a shared secret by exchanging public information. Unlike key extraction, key exchange does not depend on the randomness of the wireless channel.
For an explanation of the Diffie-Hellman Key Exchange see [DH](#Diffie-Hellman-DH-Key-Exchange) in the Integrity and Authentication lecture.
#### iJam: Key Exchange with Artificial interference
**Artificial interference** for secrecy is also called _jamming for good_. The transmission of random signals causes interference at the eavesdropper, thus degrading the quality of her wiretap channel.
1. Alice generates a random number and encodes it for wireless transmission. She transmits the signal twice (duplicated symbols).
2. Bob jams each sample in exactly one of the two transmissions. The jamming power should be neither too high (makes the jamming obvious) nor too low (symbol possibly reconstructable)
3. Since Bob knows the jamming signal, he can stitch together the transmitted secret. Meanwhile Eve only perceives random-seeming noise.
4. The key exchange is performed mutually. In Step 4 Bob and Alice switch roles.
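A sketch of steps 2 and 3 (plain random samples, the jamming mask, and the jamming power are simplifying assumptions; real iJam jams OFDM time-domain samples):
```python
import numpy as np

rng = np.random.default_rng(6)
n = 64

secret = rng.standard_normal(n) + 1j * rng.standard_normal(n)   # Alice's samples
copy1, copy2 = secret.copy(), secret.copy()                      # transmitted twice

# Bob jams each sample index in exactly one of the two copies
mask = rng.integers(0, 2, size=n).astype(bool)
jam = 3 * (rng.standard_normal(n) + 1j * rng.standard_normal(n))
copy1[mask] += jam[mask]
copy2[~mask] += jam[~mask]

# Bob knows which copy is clean at every index and stitches the secret together
stitched = np.where(mask, copy2, copy1)
print(np.allclose(stitched, secret))   # True; Eve only sees jammed mixtures
```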

**Eve's detection possibility** is quite low, since iJam uses OFDM symbols that feature a zero-mean normal distribution in the time domain. Jamming symbols are also chosen from zero-mean normal distribution. Therefore her best chance is a hypothesis test for two independent random variables from normal distribution.
As **privacy amplification** the key exchange is performed multiple times, however with a higher-order modulation. The higher the order of the modulation, the higher the bit error rate (BER) at Eve. Best results are obtained with 64-QAM (256-QAM was not tested). The optimal BER at Eve is 0.5 (equivalent to guessing).
## Attacks on physical layer key agreement
While in theory we can prove secure key extraction, the theoretical models do not hold in practice. Also, what about active attackers?
Multi-antenna eavesdropping on iJam is possible because jamming causes different effects on the receiving antennas:
1. Isolate a single sample
2. Compute the spectrum in the frequency domain
3. Apply the channel correction
4. Repeat for all antennas and compute the differences
5. Select the sample with less differences
###### tags: `2018` `TU Darmstadt` `recap` `physical layer` `PhySec` `PHY` `security` `wireless`
_Proud authors of this recap: B.M. and N.S._