# 50.012 Lecture 8
###### tags: congestion-control
# Congestion Control
## Principles of Congestion Control
The problem of congestion is informally defined as too many sources sending too much data too fast for the network to handle.
It is different from flow control: flow control prevents one sender from overwhelming one receiver, while congestion control prevents too many senders from overwhelming the network.
The consequences of congestion are:
* Lost packets (buffer overflow at routers will lead to packets being dropped)
* Long delays (queueing in router buffers)
Congestion can be controlled either at the source or destination.
## Causes of Congestion
### Scenario 1

In this scenario, there is a bottleneck at the output link.
The first graph shows that although λ~in~ can keep increasing, λ~out~ stops increasing once it reaches R/2 (the maximum throughput each flow can get through the shared output link).
Beyond that point, the number of packets "stuck" in the router's output-link buffer keeps growing.
Due to this buildup, the queueing delay grows without bound as λ~in~ approaches R/2 (each new packet must wait for every packet already queued ahead of it to be transmitted).
The limit is R/2 because two hosts share the same output link of capacity R (with n hosts sharing the link equally, each gets R/n).
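As a rough numeric illustration of this scenario, the sketch below assumes a single shared output link of capacity R and a simple 1/(capacity - load) delay approximation; the capacity value and function names are illustrative, not from the lecture:

```python
# Scenario 1 sketch: two hosts share one output link of capacity R, so each
# host's throughput saturates at R/2 and queueing delay blows up near R/2.
import math

R = 1.0e6  # link capacity in bits/s (assumed value for illustration)

def throughput_out(lambda_in: float, n_hosts: int = 2) -> float:
    """Per-host throughput grows with offered load until the fair share R/n."""
    return min(lambda_in, R / n_hosts)

def queueing_delay(lambda_in: float, n_hosts: int = 2) -> float:
    """Rough queueing-style delay: explodes as offered load nears the fair share."""
    share = R / n_hosts
    if lambda_in >= share:
        return math.inf              # queue (and delay) grow without bound
    return 1.0 / (share - lambda_in)

for load in (0.10e6, 0.30e6, 0.45e6, 0.49e6, 0.50e6):
    print(f"lambda_in={load:.2e}  lambda_out={throughput_out(load):.2e}  "
          f"delay~{queueing_delay(load):.2e}")
```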
### Scenario 2

Idealization assumption #1 (perfect knowledge): the sender sends a packet only when a router buffer is available. The graph is as follows:

In reality, however, packets can be lost (dropped at the router when its buffers are full), and the sender can only resend a packet once it knows that packet has been lost.
Idealization assumption #2 (known loss): the sender knows exactly which packets are lost and resends only those packets.
The graph will be as follows:

λ~out~ is now lower than λ~in~ because some of the bandwidth is used to retransmit the lost packets.
Realistic case #1: duplicates. The sender may time out prematurely and retransmit a packet that was merely delayed, so two copies of the same packet end up being delivered.


The bottlenecked output link now performs even worse, because some of its capacity is wasted carrying duplicate copies of packets that have already been delivered.
"Costs" of congestion:
* More work (retransmission) for given "goodput"
* Unneeded retransmissions: the link carries multiple copies of a packet, decreasing goodput.
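A small, assumed numeric sketch of these costs: the bottleneck link still forwards packets at its full rate, but goodput only counts the capacity not spent on extra copies (numbers are illustrative, not from the lecture):

```python
# Goodput vs. throughput sketch: the wasted fraction models capacity spent on
# retransmissions and duplicate copies crossing the bottleneck link.
link_rate = 0.5e6          # bits/s leaving the bottleneck (R/2), assumed value

def goodput(link_rate: float, wasted_fraction: float) -> float:
    """Goodput = link throughput minus the share carrying unneeded copies."""
    return link_rate * (1.0 - wasted_fraction)

for wasted in (0.0, 0.2, 0.5):
    print(f"{wasted:.0%} of link wasted -> goodput = {goodput(link_rate, wasted):.2e} bits/s")
```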
### Scenario 3

As the red λ~in~ increases, all arriving blue packets at the upper queue are dropped, causing the blue throughput to drop to zero.

When packets are dropped, any "upstream" transmission capacity used for that packet was wasted.
## TCP Congestion Control
### Additive Increase Multiplicative Decrease
Sender increases transmission rate, probing for usable bandwidth, until loss occurs.
Additive Increase: increase the congestion window (cwnd) by 1 MSS (maximum segment size) every RTT until loss is detected.
Multiplicative Decrease: cut cwnd in half after loss.
The reasoning behind this approach is to back off sharply when packet loss signals congestion, then resume probing for bandwidth additively; repeated over time, this produces the characteristic AIMD "sawtooth" pattern in the sending rate.
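A minimal simulation sketch of this sawtooth, assuming cwnd is counted in units of MSS and that loss is detected whenever cwnd exceeds an arbitrary capacity threshold (both values are illustrative assumptions, not part of TCP):

```python
# Minimal AIMD sketch (illustrative; the loss threshold is an assumption).
MSS = 1            # count cwnd in units of MSS for simplicity
LOSS_AT = 20       # assumed network capacity in MSS; loss occurs above this

def aimd(rounds: int = 60, cwnd: float = 1.0) -> list[float]:
    """Return cwnd (in MSS) after each RTT, producing the AIMD sawtooth."""
    trace = []
    for _ in range(rounds):
        if cwnd > LOSS_AT:       # loss detected: multiplicative decrease (halve)
            cwnd = cwnd / 2
        else:                    # no loss: additive increase of 1 MSS per RTT
            cwnd += MSS
        trace.append(cwnd)
    return trace

print(aimd())   # ramps up linearly, halves on loss, then ramps up again
```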

### Details

### TCP Slow Start

When a connection begins, the rate can be increased exponentially until the first loss event:
* Initialize cwnd = 1 MSS
* Double cwnd every RTT
This is done by incrementing cwnd for every ACK received (cwnd += MSS upon receiving new ACK).
The initial sending rate is slow, but it ramps up exponentially fast.
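As a minimal sketch of this doubling (assuming one ACK comes back for every segment sent in the previous RTT; names are illustrative):

```python
# Slow-start sketch: cwnd starts at 1 MSS and grows by 1 MSS per ACK,
# which doubles it every RTT (illustrative model, not a TCP implementation).
MSS = 1

def slow_start_rtt(cwnd: int) -> int:
    """One RTT of slow start: each in-flight segment is ACKed,
    and each ACK adds 1 MSS, so cwnd doubles."""
    acks_received = cwnd // MSS          # one ACK per segment sent this RTT
    return cwnd + acks_received * MSS    # cwnd += MSS per ACK

cwnd = 1 * MSS
for rtt in range(5):
    print(f"RTT {rtt}: cwnd = {cwnd} MSS")   # 1, 2, 4, 8, 16
    cwnd = slow_start_rtt(cwnd)
```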
### Loss Detection and Reaction
TCP is able to detect different "types" of losses and react differently according to the types of losses.
Loss indicated by timeout:
* ssthresh (the slow start threshold) is set to half of the cwnd at the time of loss, and cwnd is reset to 1 MSS.
* The window then grows exponentially (as in slow start) until it reaches ssthresh, after which it grows linearly (congestion avoidance).
Congestion avoidance state: for each ACK received,
$cwnd = cwnd + MSS \cdot \frac{MSS}{cwnd}$
so cwnd grows by roughly 1 MSS per RTT (for example, with cwnd = 10 MSS, each of the 10 ACKs in one round adds MSS/10, totalling 1 MSS).
Loss indicated by 3 duplicate ACKs (TCP Reno):
* Duplicate ACKs indicate that the network is still capable of delivering some segments.
* cwnd is cut in half, and the window then grows linearly.
TCP Tahoe always sets cwnd to 1 MSS regardless of the loss type (timeout or 3 duplicate ACKs).
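The sketch below summarises these reactions as a simplified state machine (it omits fast recovery details and real TCP bookkeeping; the class name and initial ssthresh value are illustrative assumptions):

```python
# Simplified sketch of the loss reactions above for TCP Reno vs TCP Tahoe.
MSS = 1

class Sender:
    def __init__(self, reno: bool = True):
        self.reno = reno            # True = TCP Reno, False = TCP Tahoe
        self.cwnd = 1 * MSS
        self.ssthresh = 64 * MSS    # assumed initial slow start threshold

    def on_new_ack(self):
        if self.cwnd < self.ssthresh:        # slow start: exponential growth
            self.cwnd += MSS
        else:                                # congestion avoidance: ~1 MSS per RTT
            self.cwnd += MSS * MSS / self.cwnd

    def on_timeout(self):
        self.ssthresh = max(self.cwnd / 2, 2 * MSS)
        self.cwnd = 1 * MSS                  # both Reno and Tahoe restart slow start

    def on_triple_dup_ack(self):
        self.ssthresh = max(self.cwnd / 2, 2 * MSS)
        # Reno halves cwnd and grows linearly; Tahoe reacts as if a timeout occurred.
        self.cwnd = self.ssthresh if self.reno else 1 * MSS

s = Sender(reno=True)
for _ in range(80):
    s.on_new_ack()                           # grow past ssthresh, then linearly
s.on_triple_dup_ack()
print(round(s.cwnd, 1), round(s.ssthresh, 1))   # cwnd roughly halved
```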

