# 50.012 Lecture 12: Network Layer Overview

# Network Layer

![](https://i.imgur.com/IfupsrZ.png)

At the core of the Internet there are many routers that do not need the layers above the network layer. They are responsible for finding the paths that packets take through the network, and for forwarding packets from source to destination.

On the sending side, the network layer encapsulates segments into datagrams. On the receiving side, the network layer delivers segments to the transport layer; the header is analyzed to determine how the packets are transported. A router examines the header fields of all IP datagrams passing through it.

:::info
Note: In the context of the network layer, the terms *switching* and *forwarding* are interchangeable. *Switching* is used to cover both the network layer and the link layer; *forwarding* is specific to the network layer.
:::

## Forwarding vs Routing

### Forwarding

Moving packets from a router's input to the appropriate router output. This happens in the **data plane**.

Trip analogy: the process of getting through a single interchange.

### Routing

Determining the route taken by packets from source to destination. This happens in the **control plane**.

Trip analogy: the process of planning a trip from source to destination.

### Data Plane

![](https://i.imgur.com/RMAGYPL.png)

The data plane is a local, per-router function. It determines how a datagram arriving on a router's input port is forwarded to an output port.

### Control Plane

The control plane is network-wide logic. It determines how a datagram is routed among routers along the end-to-end path from source host to destination host.

There are two control-plane approaches:
* Traditional routing algorithms (per-router control plane): implemented in the routers (still network-wide logic, but the communication happens among the routers)
* Software-defined networking (SDN): implemented in (remote) servers.
It is a more centralized approach: the controller collects information from the whole network, computes paths centrally, then distributes the resulting forwarding state to all routers.

#### Per-Router Control Plane

![](https://i.imgur.com/TK1Bq2B.png)

The control plane in each router builds a local forwarding table (a lookup table) that the data plane consults when forwarding data.

#### Logically Centralized Control Plane

![](https://i.imgur.com/NmiMWIN.png)

A distinct (typically remote) controller interacts with local control agents (CAs). Route calculations are aggregated into the SDN controller (the remote controller in the diagram above). Each router now contains a CA that gathers information about the router's local situation (the conditions of neighbouring routers) and passes this information to the SDN controller through some interface. Once the SDN controller has computed the forwarding table, it installs the table into each router.

With this design, the data plane can be more generalized: besides forwarding, it may also implement firewalling, etc.

## Network Service Model

There are 2 main service models for the "channel" transporting datagrams from sender to receiver, grouped into services for individual datagrams and services for a flow of datagrams. These are the services that the upper layers may expect.

Example services for individual datagrams:
* guaranteed delivery
* guaranteed delivery with less than 40 msec delay

Example services for a flow of datagrams:
* in-order datagram delivery
* guaranteed minimum bandwidth to a flow
* restrictions on changes in inter-packet spacing

# Routers

High-level view of a generic router architecture:

![](https://i.imgur.com/G0T07wd.png)

The forwarding table is stored within each port.

## Input Port Functions

![](https://i.imgur.com/jyNEhzN.png)

![](https://i.imgur.com/Fi3Yg3E.png)

When a packet arrives from the line, the line is first terminated (by the physical layer).
The link layer then presents it as a frame. After some processing, it is passed to the next block (which sits in the network layer). This block looks up the forwarding table to decide which output port to forward the packet to.

The switch fabric, however, may not be able to handle packets as fast as the incoming line. Hence, it is the input port's responsibility to hold packets back from being processed. The reason each input port keeps its own copy of the forwarding table is so that the lookup process can be parallelized, removing one potential bottleneck.

### Destination-based Forwarding

![](https://i.imgur.com/v4EUkB8.png)

For each address range, the router forwards packets to the same interface. This allows forwarding to scale, since neighbouring IP addresses usually belong to the same ISP or organization. The logic is: if the first x bits of the incoming packet's destination address match a prefix in the forwarding table, the packet is sent through the corresponding link interface.

#### Longest Prefix Matching

Motivating problem: in the previous example, record 3's prefix is a prefix of record 2's. How does the router know where to forward the packet?

When looking for the forwarding table entry for a given destination address, use the longest address prefix that matches the destination address.

![](https://i.imgur.com/m5vgAlM.png)

example 1: 0
example 2: 1

##### Implementation of Longest Prefix Matching (TCAM)

![](https://i.imgur.com/Gq4RxXX.png)

Longest prefix matching is often performed using ternary content addressable memories (TCAMs). A TCAM, given content, retrieves the address at which it is stored. "Ternary" means there are 3 possible cell values: '0', '1', and 'X' (don't care).

However, TCAM is more costly than SRAM:
* 6x more power
* 7x more area
* 4x higher latency

**Examples**

![](https://i.imgur.com/icfMgOm.png)

![](https://i.imgur.com/IsXhVOd.png)

### Typical Forwarding Diagram

![](https://i.imgur.com/iNp13m4.png)

The IP address is used to find the index of the output port.
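The longest-prefix-matching rule can be sketched as a small software model (a simplification for illustration: the prefixes and interface numbers below are made-up examples, and real routers perform this lookup in TCAM hardware rather than by scanning a list):

```python
import ipaddress

# Hypothetical forwarding table: (destination prefix, output link interface).
# The prefixes deliberately overlap, so longest-prefix matching matters.
FORWARDING_TABLE = [
    (ipaddress.ip_network("200.23.16.0/21"), 0),
    (ipaddress.ip_network("200.23.24.0/24"), 1),
    (ipaddress.ip_network("200.23.24.0/21"), 2),
    (ipaddress.ip_network("0.0.0.0/0"), 3),  # default route
]

def lookup(dst: str) -> int:
    """Return the output interface for a destination address,
    choosing the matching entry with the longest prefix."""
    addr = ipaddress.ip_address(dst)
    matches = [(net.prefixlen, port)
               for net, port in FORWARDING_TABLE if addr in net]
    return max(matches)[1]  # the longest matching prefix wins

print(lookup("200.23.21.5"))   # only the /21 matches -> interface 0
print(lookup("200.23.24.17"))  # /24 beats /21 -> interface 1
print(lookup("9.9.9.9"))       # falls through to the default route -> interface 3
```

The default `0.0.0.0/0` entry matches every address with prefix length 0, so it is chosen only when nothing more specific matches.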
The forwarding table is split between the TCAM and the SRAM.

## Switching Fabrics

The main responsibility of the switching fabric is to transfer packets from an input buffer to the appropriate output buffer. There are 3 types of switching fabrics:

![](https://i.imgur.com/oZJMwD8.png)

### First Generation Routers (Switching via Memory)

![](https://i.imgur.com/jqWk2AT.png)

When a packet arrives at the ethernet port, it is copied into memory via the system bus, and then forwarded from memory to the output port.

* Traditional computers with switching under direct control of the CPU.
* The packet is copied to the system's memory.
* Speed is limited by memory bandwidth, since there are 2 bus crossings per datagram.

### Switching via Bus

![](https://i.imgur.com/2Jksf0p.png)

A datagram in input port memory is forwarded to output port memory via a **shared bus**.

Bus contention: switching speed is now limited by the bus bandwidth.

### Switching via Interconnection Network

![](https://i.imgur.com/GVcr8ZC.png)

* Overcomes the bus bandwidth limitation, since switching can be done in parallel.
* The crossbar (and other interconnection networks) was initially developed to connect processors in multiprocessors.
* Crossbars can have various topologies.

## Input Port Queuing

![](https://i.imgur.com/SHNBFig.png)

The fabric can be slower than the input ports combined, so queuing mechanisms are needed at the input ports. Queuing delay and loss are caused by input buffer overflow.

Head-of-line (HOL) blocking: a queued datagram at the front of the queue prevents others in the queue from moving forward (e.g. the front-of-queue packet is blocked waiting for its output port even though the output port needed by the second packet is free).

## Output Port

![](https://i.imgur.com/pvVL6eh.png)

Buffering is required when datagrams arrive from the fabric faster than the **transmission rate**. A scheduling discipline (which decides who gets the best performance) chooses among the queued datagrams for transmission.
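HOL blocking can be illustrated with a toy model of one input-port FIFO feeding two output ports (the packet labels and port names below are invented for illustration):

```python
from collections import deque

# One input-port FIFO; each entry is (packet, desired output port).
input_queue = deque([("pkt1", "out_A"), ("pkt2", "out_B")])
busy = {"out_A": True, "out_B": False}  # out_A is currently occupied

def try_forward(queue):
    """FIFO discipline: only the packet at the head may be forwarded."""
    head_pkt, head_out = queue[0]
    if busy[head_out]:
        # The head is blocked, so everything behind it waits too,
        # even though pkt2's destination (out_B) is idle -> HOL blocking.
        return None
    return queue.popleft()

print(try_forward(input_queue))  # None: pkt1 is blocked and pkt2 is stuck behind it
```

Once `out_A` frees up, the same call would forward `pkt1` and then `pkt2` in order; the inefficiency is that `pkt2` waited despite its own output being free the whole time.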
### Discard Policies

If a packet arrives to a full queue, there are a few policies for choosing which packet to discard:

* Tail drop: drop the arriving packet
* Priority: drop/remove on a priority basis
* Random: drop/remove randomly

Random dropping can be used to discourage people from hogging the bandwidth: those who use a lot of bandwidth experience a higher packet loss rate. It also interacts well with TCP: when congestion happens (e.g. upon a packet loss event), clients are notified to reduce their sending rate so that they experience less packet loss. By dropping randomly, there is a higher chance of notifying many senders to reduce the congestion.

### Scheduling Mechanisms

Scheduling refers to choosing the next packet to send on the link.

#### FIFO Scheduling

![](https://i.imgur.com/eXaO9Eq.png)

Send packets in order of arrival to the queue.

#### Priority Scheduling

![](https://i.imgur.com/0gC9nk7.png)

Send the highest-priority queued packet. For the implementation, we can have one class (queue) per priority level. A packet's class may depend on marking or other header info, e.g. IP source/destination, port numbers, etc.

#### Round Robin (RR) Scheduling

![](https://i.imgur.com/nhZ1QdK.png)

This implementation has multiple classes (not necessarily priority-based). Cyclically scan the class queues, sending one complete packet from each class (if available). This ensures fairness: all classes are visited.

#### Weighted Fair Queuing (WFQ)

![](https://i.imgur.com/LjKgQnO.png)

WFQ is a generalized Round Robin: each class gets a weighted amount of service in each cycle.
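The weighted round-robin idea behind WFQ can be sketched as follows (the class names, weights, and packet labels are made up for illustration; true WFQ approximates a fluid fair-sharing model rather than counting whole packets per cycle):

```python
from collections import deque

# One queue per traffic class, plus a per-class weight.
queues = {
    "gold":   deque(["g1", "g2", "g3", "g4"]),
    "silver": deque(["s1", "s2"]),
    "bronze": deque(["b1", "b2"]),
}
weights = {"gold": 2, "silver": 1, "bronze": 1}

def schedule(queues, weights):
    """Cyclically scan the class queues, sending up to weights[cls]
    packets from each class per cycle, until all queues are empty."""
    sent = []
    while any(queues.values()):
        for cls, q in queues.items():
            for _ in range(weights[cls]):
                if q:
                    sent.append(q.popleft())
    return sent

order = schedule(queues, weights)
print(order)  # ['g1', 'g2', 's1', 'b1', 'g3', 'g4', 's2', 'b2']
```

With weight 2, the gold class gets twice the service of the others per cycle, yet silver and bronze are still visited every cycle, so no class is starved.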