# Assignment 5 - Jessica Werner (2563327), Varada Sudharshana Kumar (7029511), Jan-Robin Aumann (2576766)
## Exercise 1
### (a)
TR 1-6
TR 11-15
TR 24-27
### (b)
TR 20-21
### (c)
The segment loss is detected by an RTO (the retransmission timer has expired), since cwnd drops back to the initcwnd. If the loss had been detected with 3 duplicate ACKs instead, we would be in the Fast Retransmit phase and only halve cwnd to allow the network to get out of its congested state.
### (d)
The segment loss is detected by 3 duplicate ACKs before the RTO timer expires, meaning the network is congested but still delivering packets, so Fast Retransmit is used instead: cwnd is halved and sending continues.
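As a hedged numeric sketch (the segment counts are for illustration only, not taken from the trace): assume cwnd = 32 segments and initcwnd = 2 at the moment of loss. On an RTO, ssthresh becomes 32/2 = 16 while cwnd falls all the way back to 2 and slow start restarts; on 3 duplicate ACKs, ssthresh and cwnd both become 32/2 = 16 and transmission continues in congestion avoidance.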
### (e)
32, 16, 16
### (f)
After TR 4, so during TR 5
### (g)
8 and 8
## Exercise 2
### (a)
In contrast to NewReno, TCP BBR does not detect a congested node based on packet loss, but rather detects it by estimating the bottleneck bandwidth and the RTT, comparing the packet arrival rate with the total data throughput (and the RTT) at a node.
When congestion is only detected via packet loss, the RTT is already elevated, because more data has been pushed through the congested node before the congestion becomes visible. This results in a lower average RTT for TCP BBR, since it detects the congestion earlier (by estimation).
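As a rough worked example (the numbers are illustrative, not from the lecture or trace): BBR paces its sending rate at the estimated bottleneck bandwidth and keeps roughly one bandwidth-delay product in flight, e.g. 100 Mbit/s × 20 ms ≈ 250 kB, so the bottleneck queue stays short and the measured RTT stays close to the propagation delay, whereas a loss-based sender keeps growing the queue until a drop occurs.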
### (b)
A higher initcwnd has mostly benefits; according to the experiments run for this paper, it improves latency across almost all common web use cases today and can mitigate the long-term need to open multiple concurrent TCP connections to achieve faster download speeds.
While it is mostly beneficial, a higher initcwnd can have degrading effects on both the retransmission rate and the latency in the higher percentiles (above the 96th).
Overall, however, the increase in retransmission rate is at most 1% and much lower in most cases, and while the latency in the higher percentiles increases in some cases, the median latency is still lower than the baseline (initcwnd of 1).
### (c)
A different function for increasing the multiplier during slow start can yield better utilization of the available network bandwidth by reacting faster to changes in available bandwidth and thus keeping the connection saturated more of the time.
It can, however, also lead to sluggish adaptation after a bandwidth increase, or to overshooting after a decrease; which of the two to risk is a tradeoff that has to be considered at design time.
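As a rough illustration (assuming slow start begins at 1 segment): with the standard doubling per RTT a sender reaches a window of 1024 segments after 10 RTTs (2^10 = 1024), while a gentler multiplier of 1.5 needs about 17 RTTs to get there (1.5^17 ≈ 985), leaving the link underutilized for longer; a steeper multiplier gets there sooner but overshoots the bottleneck by a larger margin once it does.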
## Exercise 3
### (a)
`tshark -r u05-trace.pcap.pcap -Y "tcp" -qz conv,tcp | grep "<->" | wc -l` finds 9 different TCP conversations.
The `-q` option suppresses the per-packet output, while the `-z` option with `conv,tcp` aggregates the TCP conversations (flows). Since it produces a header and footer along with the data, we grep for the arrow printed between the two IPs of each conversation with `grep "<->"`, giving us only the lines that actually contain conversations, which we then count with `wc -l`.
You can also "brute force" it by using the display filter `tcp.stream eq x` and increasing x until Wireshark displays nothing; x is then the number of TCP streams, since the stream indices are zero-based.
### (b)
Using the display filter `tcp`, clicking the first line (packet), and using the "Follow TCP Stream" functionality, we can identify the first stream.
| Stream Index | Source IP | Destination IP | Connection start (s) | Connection end (s) | Display filter |
| -------- | -------- | -------- | -------- | -------- | -------- |
| 0 | 192.168.100.200 | 192.168.100.100 | 0.000194000 | 295.840417 | tcp stream eq 0 |
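The start and end times can also be read directly from the capture on the command line (a sketch; `frame.time_relative` is the timestamp relative to the first packet of the file):
```shell
# first and last relative timestamps of TCP stream 0
tshark -r u05-trace.pcap.pcap -Y "tcp.stream eq 0" -T fields -e frame.time_relative | sed -n '1p;$p'
```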
### (c)
| Stream Index | Source IP | Destination IP | Connection start (s) | Connection end (s) |
| -------- | -------- | -------- | -------- | -------- |
| 0 | 192.168.100.200 | 192.168.100.100 | 0.000194 | 295.840417 |
| 1 | 130.149.220.164 | 130.149.220.251 | 9.616225 | 9.792222 |
| 2 | 130.149.220.42 | 130.149.220.164 | 19.131856 | 299.744244 |
| 3 | 130.149.220.164 | 130.149.220.252 | 49.66605 | 313.551888 |
| 4 | 130.149.220.164 | 130.149.220.251 | 70.611999 | 70.720083 |
| 5 | 130.149.220.42 | 130.149.220.164 | 112.522806 | 288.048851 |
| 6 | 130.149.220.164 | 130.149.220.42 | 122.225324 | 281.350445 |
| 7 | 192.168.100.200 | 192.168.100.100 | 133.169129 | 213.716505 |
| 8 | 130.149.220.164 | 130.149.220.251 | 278.62401 | 281.256498 |
This can be obtained under Statistics -> Conversations -> TCP and sorting by "Rel Start".
Note: we manually added the "Rel Start" and "Duration" columns together to get "connection end".
Almost the same result can be obtained with the command `tshark -r u05-trace.pcap.pcap -qz conv,tcp`, but unfortunately its output is sorted, according to the tshark man page, "[...] according to the total number of frames".
This can almost be fixed with the command
```shell
for tcp_stream in \
  $(tshark -r u05-trace.pcap.pcap -T fields -e "tcp.stream" | sort -nu); do
  # write only the packets of this stream into a temporary capture
  tshark \
    -r u05-trace.pcap.pcap \
    -Y "tcp.stream eq ${tcp_stream}" \
    -w "/tmp/tcp_stream_${tcp_stream}.pcap"
  # print the conversation line for just that stream
  tshark \
    -r "/tmp/tcp_stream_${tcp_stream}.pcap" \
    -qz conv,tcp | grep "<->"
  rm "/tmp/tcp_stream_${tcp_stream}.pcap"
done
```
except that now the start times are off, because we split the capture into individual files containing just the specified stream, so the relative timestamps are measured from each file's first packet instead of the original capture's.
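A sketch that avoids the re-writing step altogether, and therefore keeps the original relative timestamps, reads the first and last packet time of each stream straight from the original capture:
```shell
for s in $(tshark -r u05-trace.pcap.pcap -Y "tcp" -T fields -e tcp.stream | sort -nu); do
  # first and last relative timestamp of this stream
  tshark -r u05-trace.pcap.pcap -Y "tcp.stream eq ${s}" -T fields -e frame.time_relative \
    | awk -v s="${s}" 'NR == 1 { first = $1 } { last = $1 } END { printf "stream %s: %s - %s\n", s, first, last }'
done
```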
### (d)
We can use `tshark -r u05-trace.pcap.pcap -Y "udp" -qz conv,udp | grep "<->" | wc -l` to get the number of UDP flows: 68.
(For explanation see (a))
Similarly to (a) you can also "brute force" this.
### (e)
The TCP connection with stream index 5 experiences packet loss multiple times, which you can spot easily by using the display filter `tcp.stream eq 5` and scrolling through it.
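A less manual check (a sketch relying on Wireshark's TCP analysis flags) is to count the retransmissions in that stream:
```shell
# number of segments flagged as retransmissions in stream 5
tshark -r u05-trace.pcap.pcap -Y "tcp.stream eq 5 && tcp.analysis.retransmission" | wc -l
```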
## Exercise 4
### (a)
130.149.220.251 www.net.t-labs.tu-berlin.de
Obtained with `tshark -r u05-trace.pcap.pcap -Y "dns.count.answers > 0" -T fields -e "dns.a" -e "dns.qry.name"` and manually removing the additional records, such as `130.149.220.253` for the authoritative DNS server.
### (b)
`tshark -r u05-trace.pcap.pcap -Y "dns.qry.type == 0x1" -T fields -e "dns.qry.name" -e "dns.a" | sort | uniq | sed 's/\,.*$//'` gets us the list of all unique queries and their responses (or lack thereof):
```
boa.local
kerberos-1.net.t-labs.tu-berlin.de
kerberos-1.net.t-labs.tu-berlin.de 130.149.220.9
kerberos.net.t-labs.tu-berlin.de
kerberos.net.t-labs.tu-berlin.de 130.149.220.3
mail.net.t-labs.tu-berlin.de
mail.net.t-labs.tu-berlin.de 130.149.220.252
penguin.net.t-labs.tu-berlin.de
penguin.net.t-labs.tu-berlin.de 130.149.220.42
time.net.t-labs.tu-berlin.de
time.net.t-labs.tu-berlin.de 130.149.220.2
www.net.t-labs.tu-berlin.de
www.net.t-labs.tu-berlin.de 130.149.220.251
```
Unfortunately, even with `sort | uniq` and `sed` we were not able to get rid of the duplicates or the glue record for the authoritative name server, so those have to be removed manually.
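A sketch that might reduce the manual cleanup, assuming the glue record always appears after the real answer inside a response (so that `-E occurrence=f` keeps only the first A record) and that restricting the filter to responses drops the answer-less duplicate lines:
```shell
tshark -r u05-trace.pcap.pcap \
  -Y "dns.flags.response == 1 && dns.qry.type == 0x1" \
  -T fields -e dns.qry.name -e dns.a -E occurrence=f \
  | sort -u
```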
From here on we have to manually make the table (and add back the dns server record):
| host IP | DNS name |
| -------- | -------- |
| - | boa.local |
| 130.149.220.2 | time.net.t-labs.tu-berlin.de |
| 130.149.220.3 | kerberos.net.t-labs.tu-berlin.de |
| 130.149.220.9 | kerberos-1.net.t-labs.tu-berlin.de |
| 130.149.220.42 | penguin.net.t-labs.tu-berlin.de |
| 130.149.220.251 | www.net.t-labs.tu-berlin.de |
| 130.149.220.252 | mail.net.t-labs.tu-berlin.de |
| 130.149.220.253 | dns.net.t-labs.tu-berlin.de |
These are just the A record DNS requests though (type = 0x1).
Looking at the PTR reverse lookups (type = 0xc), there are a few requests for `42.220.149.130.in-addr.arpa`, which as per the table above (and, obviously, the responses) resolves to `penguin.net.t-labs.tu-berlin.de`.
But there are also two requests for `200.100.168.192.in-addr.arpa`, which can't be resolved because the address lies in the private `192.168.0.0/16` range.
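The PTR traffic can be listed the same way (a sketch):
```shell
# all reverse (PTR) queries and how often they occur
tshark -r u05-trace.pcap.pcap -Y "dns.qry.type == 0xc" -T fields -e dns.qry.name | sort | uniq -c
```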
Additionally, looking at the entire pcap file and finding the set of active (sending) hosts with `tshark -r u05-trace.pcap.pcap -T fields -e "ip.src" | sort | uniq` yields this list:
- 130.149.220.42
- **130.149.220.164**
- 130.149.220.251
- 130.149.220.252
- 130.149.220.253
- **192.168.100.100**
- **192.168.100.200**
Here we find three hosts (in bold) that were not previously "discovered", bringing our final table to this:
| host IP | DNS name |
| -------- | -------- |
| - | boa.local |
| 130.149.220.2 | time.net.t-labs.tu-berlin.de |
| 130.149.220.3 | kerberos.net.t-labs.tu-berlin.de |
| 130.149.220.9 | kerberos-1.net.t-labs.tu-berlin.de |
| 130.149.220.42 | penguin.net.t-labs.tu-berlin.de |
| 130.149.220.164 | - |
| 130.149.220.251 | www.net.t-labs.tu-berlin.de |
| 130.149.220.252 | mail.net.t-labs.tu-berlin.de |
| 130.149.220.253 | dns.net.t-labs.tu-berlin.de |
| 192.168.100.100 | - |
| 192.168.100.200 | - |
## Exercise 5
### (a)
In the first connection, a user at 192.168.100.200 connects via Telnet to the host at 192.168.100.100.
They successfully log into the server, which runs a dusty old Ubuntu 9.10 with an ancient Linux kernel 2.6.31-15, using the username badguy and the password breakin.
They first list all the files, then try (unsuccessfully) to steal the passwords from /etc/shadow. When that doesn't work out, they try to gain root access but can't, since the user badguy is not a privileged user.
They could have used SSH instead of Telnet to at least hide their seemingly malicious actions from anyone monitoring the network, but since their attempt to gain root access was reported anyway, the sysadmin of puffin will surely act on this.
In the second connection, a user at 130.149.220.164 requests /index.shtml from the web server running on 130.149.220.251. They receive a 200 OK from the web server and are served the document, which has a MIME type of `text/html`.
Since the website doesn't seem to contain any sensitive information, this is probably fine, but since it is now so easy to get a certificate, one should really upgrade the web server to HTTPS so that the information sent and received is encrypted.
In the third connection, a user at 130.149.220.42 had already established an SSH session with the server at 130.149.220.164 before the capture started. Due to the encryption, it is impossible to know anything about this connection.
In the fourth connection, a user at 130.149.220.164 attempts to send an email via mail.net.t-labs.tu-berlin.de (130.149.220.252). The email is from chewbacca@net.t-labs.tu-berlin.de and is addressed to jan@net.t-labs.tu-berlin.de. The entire email is out in the open, since SMTP is not encrypted by default (PGP would be one way to do that).
It never gets sent anyway, though, since the DATA segment is improperly terminated and is therefore rejected by the mail server.
The fifth connection is a repetition of the second, probably a page refresh.
In the sixth connection, a user at 130.149.220.42 establishes a(nother) SSH session with the server at 130.149.220.164. Due to the encryption, it is impossible to know anything about this connection besides the fact that both systems use SSH 2.0 via OpenSSH 4.7p1 on Debian 8 based Ubuntu systems and the set of key exchange algorithms they can use.
The seventh connection is like the sixth one.
The eighth connection is another SSH connection, this time from 192.168.100.200 to 192.168.100.100. Besides the fact that the former is a Debian 6 based Ubuntu and the latter a Debian 8 based Ubuntu system, as well as their key exchange algorithms, we can learn nothing due to the encryption.
In the ninth connection, a user at 130.149.220.164 requests /~jan/random.bulk from the web server running on 130.149.220.251, with the Host header set to www. They first receive a 302 Found from the web server and are served a document with a MIME type of `text/html` that contains the location of the requested document:
http://www.net.t-labs.tu-berlin.de/~jan/random.bulk
So they send another HTTP request, this time to http://www.net.t-labs.tu-berlin.de/~jan/random.bulk, and shortly thereafter they receive the `text/plain` document.
Again, the documents don't seem to contain any sensitive information, but they should really upgrade the web server to HTTPS so the information sent stays private.
There are also a lot of DNS requests for the various records mentioned in exercise 4.
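For reference, the payload of any of these conversations can also be dumped on the command line with tshark's follow statistic (a sketch; replace the trailing 0 with the stream index of interest):
```shell
# ASCII dump of both directions of TCP stream 0
tshark -r u05-trace.pcap.pcap -qz follow,tcp,ascii,0
```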
### (b)
The three packets in question are all Telnet packets: the first is the client offering to enable the echo option, the second is the server rejecting that request, and the third is the client requesting that echo be enabled.
This is part of Telnet's client-server option negotiation as per [RFC 854](https://datatracker.ietf.org/doc/html/rfc854), found via basic searching on datatracker.ietf.org.
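The negotiation itself can be inspected on the command line by letting tshark dissect only the Telnet layer in detail (a sketch, limited here to the first stream, which carries the Telnet session):
```shell
# show the Telnet option negotiation (WILL/WONT/DO/DONT) of stream 0
tshark -r u05-trace.pcap.pcap -Y "tcp.stream eq 0 && telnet" -O telnet
```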