1. flow / congestion control
2. 5 layers of network
3. SMTP
4. GBN
# Computer Network Before Mid-Term
:::info
:bulb:
:::
:::danger
not finished yet 😕
:::
:::spoiler
trash talk
:::
:::success
abstract
:::
:::warning
confusion
:::
## 2023108 W6
### Chapter 3
#### Demultiplexing
* when the receiving host gets a UDP segment:
1. it checks the dest. port num. in the segment
2. it directs the UDP segment to the socket bound to that port
3. the source port num. is carried along as the "return address" for replies
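A minimal sketch of connectionless demux (port 9999 and the echo behavior are just examples): every datagram addressed to this port lands in the same socket, and the source address/port is only used as the return address.

```python
import socket

# One UDP socket, identified only by (dest IP, dest port) = (*, 9999).
sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.bind(("0.0.0.0", 9999))                     # example port

while True:
    data, (src_ip, src_port) = sock.recvfrom(2048)
    # datagrams from *any* sender with dest port 9999 arrive here;
    # the source port is just the return address for the reply
    sock.sendto(data.upper(), (src_ip, src_port))
```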
#### connection-oriented demultiplexing
* how does TCP recognize which socket a segment belongs to? it checks all four values:
1. source IP address
2. dest. IP address
3. source port num.
4. dest. port num.
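For contrast, a sketch of connection-oriented demux (port 8888 is an example): each accepted connection gets its own socket, distinguished by the full 4-tuple, so two clients hitting the same server port end up in different sockets.

```python
import socket

listener = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
listener.bind(("0.0.0.0", 8888))    # example port
listener.listen()

# each accept() returns a *new* socket, identified by
# (source IP, source port, dest IP, dest port)
conn_a, addr_a = listener.accept()
conn_b, addr_b = listener.accept()  # a different 4-tuple -> a different socket
```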
#### UDP
* cons
    * segments may be lost
    * segments may be delivered out of order
    * "best effort" service
* pros
    * no connection establishment (saves 1 RTT of delay)
    * no connection state; small header (only 8 bytes)
    * no congestion control, so the sender can push data as fast as it wants
* applications
    * streaming multimedia
    * DNS
    * HTTP/3 (QUIC over UDP)
* checksum
:::success
* goal: detect errors (flipped bits) in the transmitted segment; see the sketch below
:::
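A rough sketch of the 16-bit one's-complement (Internet) checksum carried in the UDP header; the receiver recomputes it over the received segment to detect flipped bits. The payload here is made up.

```python
def internet_checksum(data: bytes) -> int:
    """Sum the data as 16-bit words with end-around carry, then complement."""
    if len(data) % 2:
        data += b"\x00"                               # pad odd-length input
    total = 0
    for i in range(0, len(data), 2):
        total += (data[i] << 8) | data[i + 1]         # next 16-bit word
        total = (total & 0xFFFF) + (total >> 16)      # wrap the carry around
    return ~total & 0xFFFF

payload = b"example UDP payload"                      # made-up payload
print(hex(internet_checksum(payload)))                # changes if any bit flips
```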
:::danger
* second class failed
:::
## 20231004 W4
### Chapter 2
#### Conditional GET
* the cache adds an `If-Modified-Since:` header to its GET; the server answers `304 Not Modified` (with no object body) if the object hasn't changed, otherwise it sends the fresh object
#### SMTP
* simple mail transfer protocol
* messages must be in 7-bit ASCII
#### DNS, Domain Name System
* translates hostnames into IP addresses
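A one-line sketch of what DNS gives an application, using the standard-library resolver (the hostname is just an example):

```python
import socket

# ask the local DNS resolver to translate a hostname into an IPv4 address
print(socket.gethostbyname("www.example.com"))
```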
:::danger
* sleep 💤😪😴😴
:::
## 20231002 W4
### Chapter 2
:::info
* RTT: round-trip time, the time for a small packet to travel from client to server and back
:::
#### *HTTP1.0*: Non-persistent HTTP
* 2 RTT + file transmission time to load one object (1 RTT for TCP setup, 1 RTT for the request/response)
* total ≈ (2 RTT + file transmission time) × number of objects
* the TCP connection is torn down once the server sends the data
* browsers open multiple parallel TCP connections to load a webpage faster

#### *HTTP1.1*: Persistent HTTP
* 1 RTT to set up the connection + (1 RTT + file transmission time) × number of objects
* implemented with the concept of pipelining: send multiple requests and receive multiple responses over one connection
* faster than HTTP/1.0
* the server responds in order, FCFS
* small objects have to wait for earlier transmissions (FCFS): head-of-line (HOL) blocking
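A back-of-the-envelope comparison of the two formulas above; the RTT, per-object transmission time, and object count are all assumed numbers.

```python
RTT = 0.100   # round-trip time in seconds (assumption)
TX = 0.010    # transmission time per object in seconds (assumption)
N = 10        # number of objects on the page (assumption)

# HTTP/1.0 non-persistent: every object pays TCP setup + request/response
http10 = N * (2 * RTT + TX)

# HTTP/1.1 persistent: one TCP setup, then 1 RTT + transmission per object
http11 = RTT + N * (RTT + TX)

print(f"HTTP/1.0 ~ {http10:.2f} s, HTTP/1.1 ~ {http11:.2f} s")   # 2.10 s vs 1.20 s
```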
#### *HTTP2*: increase the server's flexibility in sending data to the client
* objects are sliced into frames, and the frames are transmitted interleaved, so one large object no longer blocks the small ones behind it

#### *HTTP3*
* adds security plus per-object error and congestion control on top of UDP (QUIC)
:::spoiler
* though the professor confesses he is not that familiar with it
:::
#### HTTP request methods (cont'd)
| Method | Description |
| ------ | -------- |
| GET | retrieve a resource from the server |
| POST | send data to the server in the entity body (e.g., form input) |
| HEAD | like GET, but the server returns headers only |
| PUT | upload a resource to the server |
#### HTTP response codes
* in the first line of the response message, e.g., 200 OK, 301 Moved Permanently, 304 Not Modified, 404 Not Found, 505 HTTP Version Not Supported
#### cookies
* since HTTP is **stateless**
* all requests are independent of each other, so cookies let the server keep per-user state across requests
#### proxy
* cache server
* acts as a server and a client at the same time
* if some of the requests are satisfied by the proxy, the end-to-end delay is reduced, since less data has to cross the **access network**; the access link is no longer the bottleneck
## 20230925 W3
### Chapter 1
#### Network Module
#### Network Protocol
#### Encapsulation/ Decapsulation
| layer | unit |
| -------- | -------- |
| application layer | message (data) |
| transport layer | segment |
| network layer | datagram (packet) |
| link layer | frame |
| physical layer | bit |
:::warning
* the headers change while the packet is transferred along source - switch - router - destination (e.g., the link-layer header is rewritten at every hop)
:::
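A toy sketch of encapsulation (the header fields are made up): each layer prepends its own header to the unit handed down from the layer above, and decapsulation strips them off in reverse order.

```python
message  = b"GET /index.html"                     # application layer: message
segment  = b"[sport|dport]"   + message           # transport layer: segment
datagram = b"[srcIP|dstIP]"   + segment           # network layer: datagram
frame    = b"[srcMAC|dstMAC]" + datagram          # link layer: frame

print(frame)   # what goes onto the wire, bit by bit, at the physical layer
```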
## 20230920 W2
### Chapter 1
:::spoiler
* reviewing last time
* feel free to sleep
:::
#### Circuit Switching: FDM & TDM
* FDM: Frequency Division Multiplexing
    * the bandwidth is divided up by frequency; each call gets its own frequency band
    * e.g., broadcasting
* TDM: Time Division Multiplexing
    * the bandwidth is divided up by time
    * within each very short periodic slot, one user owns the full bandwidth, but cannot transmit during the other slots
    * e.g., networking
:::success
* it's hard to say whether **Circuit Switching** or **Packet Switching** is the final winner, since
    * Circuit Switching may require users to **wait** for a circuit
    * Packet Switching is much more **economical**
:::
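A classic back-of-the-envelope argument for why packet switching is more economical, with assumed numbers (1 Mb/s link, each user needs 100 kb/s when active, active 10% of the time): circuit switching can admit only 10 users, while packet switching can admit 35 and still almost never has more than 10 active at once.

```python
from math import comb

link_bps, per_user_bps, p_active = 1_000_000, 100_000, 0.1   # assumed numbers

circuit_users = link_bps // per_user_bps   # circuit switching: 10 reserved shares
packet_users = 35                          # packet switching: admit more (assumption)

# probability that more than 10 of the 35 users are active at the same time
p_overload = sum(
    comb(packet_users, k) * p_active**k * (1 - p_active) ** (packet_users - k)
    for k in range(circuit_users + 1, packet_users + 1)
)
print(circuit_users, f"{p_overload:.4f}")   # 10, ~0.0004
```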
#### Performance: Delay & Loss

* processing: check for bit errors and determine the output link; typically less than a few microseconds
* queueing: time waiting in the output buffer, **unpredictable** (depends on congestion)
* transmission: *L/R*
* propagation: *d/s*, where *d* is the length of the physical link and *s* ~ 2×10^8 m/sec is the propagation speed
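A quick numeric sketch of the four components added together for one hop (all numbers are assumptions):

```python
L = 12_000      # packet length in bits (1500 bytes, assumption)
R = 10e6        # link transmission rate in bps (assumption)
dist_m = 1e6    # link length in metres (assumption)
s = 2e8         # propagation speed in m/s (from the notes)

d_proc = 2e-6           # processing: a couple of microseconds (assumption)
d_queue = 0.0           # queueing: unpredictable, taken as 0 in this sketch
d_trans = L / R         # transmission delay = L/R
d_prop = dist_m / s     # propagation delay = d/s

print(f"{(d_proc + d_queue + d_trans + d_prop) * 1e3:.2f} ms")   # ~6.20 ms
```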
:::warning
* a **high-speed network**, namely a **wideband network**, implies that *R*, the transmission rate, is larger, so more data can be transferred within the same time compared to an ordinary network
* it is NOT about the physical (propagation) speed of the signal
:::
#### Queuing Theory (cont'd)
:::info
* *a* = average packet arrival rate (packets/sec)
* traffic intensity: *La/R* (*L* = packet length in bits, *R* = link bandwidth in bps)
:::
* traffic intensity ~= 0
    * average queueing delay is small
* traffic intensity < 1
    * delay stays small
* traffic intensity -> 1
    * delay becomes large
* traffic intensity > 1
    * more work arrives than the link can service: delay grows toward infinity
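For example (packet length, arrival rate, and link rate are all assumed):

```python
L = 12_000   # bits per packet (assumption)
a = 50       # packets arriving per second on average (assumption)
R = 1e6      # link bandwidth in bps (assumption)

print(L * a / R)   # 0.6 -> still fine; pushing this toward 1 makes the delay blow up
```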
#### Performance: Throughput
:::success
* throughput: the rate (bits/sec) at which bits are delivered from sender to receiver
* end to end, it is basically the smallest transmission rate along the path (the bottleneck link)
:::
* instantaneous: the rate at a specific point in time
* average: the rate over a longer period of time
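A tiny sketch of the bottleneck idea with assumed link rates: the end-to-end throughput is the minimum rate along the path, and it dictates the file transfer time.

```python
link_rates_bps = [10e6, 1.5e6, 100e6]   # server -> ... -> client links (assumption)
file_bits = 32e6                        # file size in bits (assumption)

throughput = min(link_rates_bps)        # the slowest (bottleneck) link wins
print(file_bits / throughput)           # transfer time ~ 21.3 seconds
```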
#### Network Security
## 20230918 W2
### Chapter 1
#### the network core
* the packet is the minimum unit in which data is transferred
* packet switching
* application-layer messages are broken into packets
#### host: sends packets of data
* takes the application message
* breaks it into smaller packets of length *L* bits
* transmits each packet into the access network at transmission rate *R* (bps)
    * aka link capacity, link bandwidth
:::info
* L: packet length, bits
* R: transmission rate, bps
:::
#### packet switching: store and forward ⏩

* transmission delay
    * store and forward: the whole packet must arrive at a router before it can be sent on the next link
    * end-to-end delay = *2L/R* with one router between source and destination (plus propagation delay); see the sketch below
* queuing delay
    * packet loss: if too much data arrives at the switch while the queue is full, packets are dropped
* routing
    * global action: determine the source-destination paths taken by packets
    * routing algorithms
* forwarding
    * local action: move an arriving packet from the router's input link to the appropriate output link
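A small numeric sketch of store-and-forward delay (packet size and link rate are assumed): with one router in the middle, i.e. two links, the packet is transmitted twice, which is where the *2L/R* above comes from.

```python
L = 7_500_000   # packet size in bits (assumption)
R = 1_500_000   # transmission rate of each link in bps (assumption)
N_links = 2     # source -> router -> destination

print(N_links * L / R)   # 10.0 seconds, ignoring propagation and queueing delay
```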
:::danger
:::
#### alternative to packet switching: circuit switching
end-to-end resources are allocated to, and reserved for, a *call* between source and destination
:::success
pros:
* the resources are reserved exclusively for this *call*
* no queuing delay
cons:
* resources are wasted: others can't use them even when this *call* isn't sending any data
:::
## 20230913 W1
### General
* this course focuses more on the **abstract** ideas and reasoning behind computer network theories than on the actual numbers
### Chapter 1
#### Internet
#### Protocol
#### Network Edge: host, access network, physical media
* host:
    * clients and **servers**
* access network
    * wired or wireless **communication** links connecting the network edge to the network core
    * aka the **first mile network**, from the point of view of *delivering data*, OR
    * the **last mile network**, from the point of view of *receiving data*
#### Physical Media