:::warning
# <center><i class="fa fa-edit"></i> Introduction to 5G </center>
:::
[TOC]
## Module 1: Introduction to 5G
Mobile networks, which have a 40-year history that parallels the Internet’s, have undergone significant change. The first two generations supported voice and then text, with 3G defining the transition to broadband access, supporting data rates measured in hundreds of kilobits-per-second. Today, the industry is at 4G (supporting data rates typically measured in a few megabits-per-second) and transitioning to 5G, with the promise of a tenfold increase in data rates.
But 5G is about much more than increased bandwidth. 5G represents a fundamental rearchitecting of the access network in a way that leverages several key technology trends and sets it on a path to enable much greater innovation. In the same way that 3G defined the transition from voice to broadband, 5G’s promise is primarily about the transition from a single access service (broadband connectivity) to a richer collection of edge services and devices. 5G is expected to provide support for immersive user interfaces (e.g., AR/VR), mission-critical applications (e.g., public safety, autonomous vehicles), and the Internet-of-Things (IoT). Because these use cases will include everything from home appliances to industrial robots to self-driving cars, 5G won’t just support humans accessing the Internet from their smartphones, but also swarms of autonomous devices working together on their behalf. There is more to supporting these services than just improving bandwidth or latency to individual users. As we will see, a fundamentally different edge network architecture is required.
The requirements for this architecture are ambitious, and can be illustrated by three classes of capabilities:
- To support Massive Internet-of-Things, potentially including devices with ultra-low energy (10+ years of battery life), ultra-low complexity (10s of bits-per-second), and ultra-high density (1 million nodes per square kilometer).
- To support Mission-Critical Control, potentially including ultra-high availability (greater than 99.999% or “five nines”), ultra-low latency (as low as 1 ms), and extreme mobility (up to 100 km/h).
- To support Enhanced Mobile Broadband, potentially including extreme data rates (multi-Gbps peak, 100+ Mbps sustained) and extreme capacity (10 Tbps of aggregate throughput per square kilometer).
### Standardization Landscape
As of 3G, the generational designation corresponds to a standard defined by the 3rd Generation Partnership Project (3GPP). Even though its name has “3G” in it, the 3GPP continues to define the standards for 4G and 5G, each of which corresponds to a sequence of releases of the standard. Release 15 is considered the demarcation point between 4G and 5G, with Release 17 scheduled for 2021. Complicating the terminology, 4G was on a multi-release evolutionary path referred to as Long Term Evolution (LTE). 5G is on a similar evolutionary path, with several expected releases over its lifetime.
While 5G is an ambitious advance beyond 4G, it is also the case that understanding 4G is the first step to understanding 5G, as several aspects of the latter can be explained as bringing a new degree-of-freedom to the former. In the chapters that follow, we often introduce some architectural feature of 4G as a way of laying the foundation for the corresponding 5G component.
Like Wi-Fi, cellular networks transmit data at certain bandwidths in the radio spectrum. Unlike Wi-Fi, which permits anyone to use a channel at either 2.4 or 5 GHz (these are unlicensed bands), governments have auctioned off and licensed exclusive use of various frequency bands to service providers, who in turn sell mobile access service to their subscribers.
There is also a shared-license band at 3.5 GHz, called Citizens Broadband Radio Service (CBRS), set aside in North America for cellular use. Similar spectrum is being set aside in other countries. The CBRS band allows three tiers of users to share the spectrum: first right of use goes to the original owners of this spectrum (naval radars and satellite ground stations); next come priority users, who acquire the right to use a 10-MHz band for three years via regional auctions; and finally the rest of the population, who can access and utilize a portion of this band as long as they first check with a central database of registered users. CBRS, along with standardization efforts to extend cellular networks to operate in the unlicensed bands, opens the door to private cellular networks similar to Wi-Fi.
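The three-tier sharing rule can be sketched as a simple priority check. This is an illustrative model only: the tier names follow the text, but the function and data structures are made up for this sketch and do not reflect the actual Spectrum Access System (SAS) protocol that coordinates CBRS access.

```python
# Illustrative model of CBRS's three-tier spectrum sharing.
# Lower rank = higher priority, matching the text: incumbents
# (naval radars, satellite ground stations), then priority
# (auctioned) users, then general users.
TIERS = {"incumbent": 0, "priority": 1, "general": 2}

def may_transmit(tier, active_users):
    """Grant access only if no strictly higher-priority tier is
    currently active on the band."""
    return all(TIERS[u] >= TIERS[tier] for u in active_users)

# An incumbent naval radar is active: a general-tier user must stand down.
assert may_transmit("general", ["incumbent"]) is False
# Only other general-tier users present: access is granted.
assert may_transmit("general", ["general", "general"]) is True
```

In the real system this check is performed by querying the central database of registered users mentioned above, rather than by a local lookup.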
The specific frequency bands that are licensed for cellular networks vary around the world, and are complicated by the fact that network operators often simultaneously support both old/legacy technologies and new/next-generation technologies, each of which occupies a different frequency band. The high-level summary is that traditional cellular technologies range from 700-2400 MHz, with new mid-spectrum allocations now happening at 6 GHz, and millimeter-wave (mmWave) allocations opening above 24 GHz.
### Access Networks
The cellular network is part of the access network that implements the Internet’s so-called last mile. Other access technologies include Passive Optical Networks (PON), colloquially known as Fiber-to-the-Home. These access networks are provided by both big and small network operators. Global network operators like AT&T run access networks at thousands of aggregation points-of-presence across a country like the US, along with a national backbone that interconnects those sites. Small regional and municipal network operators might run an access network with one or two points-of-presence, and then connect to the rest of the Internet through some large operator’s backbone.
In either case, access networks are physically anchored at thousands of aggregation points-of-presence within close proximity to end users, each of which serves anywhere from 1,000-100,000 subscribers, depending on population density. In practice, the physical deployment of these “edge” locations varies from operator to operator, but one possible scenario is to anchor both the cellular and wireline access networks in Telco Central Offices.
Historically, the Central Office—officially known as the PSTN (Public Switched Telephone Network) Central Office—anchored wired access (both telephony and broadband), while the cellular network evolved independently by deploying a parallel set of Mobile Telephone Switching Offices (MTSO). Each MTSO serves as a mobile aggregation point for the set of cell towers in a given geographic area. For our purposes, the important idea is that such aggregation points exist, and it is reasonable to think of them as defining the edge of the operator-managed access network. For simplicity, we sometimes use the term “Central Office” as a synonym for both types of edge sites.
### Coding and Modulation
The mobile channel over which digital data needs to be reliably transmitted brings a number of impairments, including noise, attenuation, distortion, fading, and interference. This challenge is addressed by a combination of coding and modulation, as depicted in Figure 1.

Figure 1. The role of coding and modulation in mobile communication.
At its core, coding inserts extra bits into the data to help recover from all the environmental factors that interfere with signal propagation. This typically implies some form of Forward Error Correction (e.g., turbo codes, polar codes). Modulation then generates signals that represent the encoded data stream, and it does so in a way that matches the channel characteristics: It first uses a digital modulation signal format that maximizes the number of reliably transmitted bits every second based on the specifics of the observed channel impairments; it next matches the transmission bandwidth to channel bandwidth using pulse shaping; and finally, it uses RF modulation to transmit the signal as an electromagnetic wave over an assigned carrier frequency.
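The coding-then-modulation pipeline can be illustrated with a toy sketch. For simplicity, FEC here is a 3x repetition code and the constellation is Gray-coded QPSK; real systems use turbo or polar codes (as noted above) and adapt the constellation to the observed channel.

```python
# Toy sketch of the coding-then-modulation pipeline.
# Repetition coding and QPSK are illustrative stand-ins for the
# FEC and digital modulation steps described in the text.

def encode(bits, r=3):
    """Repetition code: send each bit r times so the receiver can
    majority-vote away isolated channel errors."""
    return [b for bit in bits for b in [bit] * r]

def decode(coded, r=3):
    """Majority vote over each group of r received bits."""
    return [1 if sum(coded[i:i + r]) * 2 > r else 0
            for i in range(0, len(coded), r)]

def qpsk_modulate(bits):
    """Map each pair of bits to a unit-energy QPSK constellation point."""
    s = 2 ** -0.5
    table = {(0, 0): complex(s, s), (0, 1): complex(-s, s),
             (1, 1): complex(-s, -s), (1, 0): complex(s, -s)}
    return [table[pair] for pair in zip(bits[0::2], bits[1::2])]

coded = encode([1, 0, 1])               # 9 coded bits
corrupted = coded[:]
corrupted[1] = 0                        # a single channel error...
assert decode(corrupted) == [1, 0, 1]   # ...is recovered by the FEC
symbols = qpsk_modulate(coded + [0])    # pad to an even bit count
```

The remaining steps in the text, pulse shaping and RF modulation onto a carrier frequency, operate on these complex symbols and are omitted here.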
For a deeper appreciation of the challenges of reliably transmitting data by propagating radio signals through the air, consider the scenario depicted in Figure 2, where the signal bounces off various stationary and moving objects, following multiple paths from the transmitter to the receiver, who may also be moving.

Figure 2. Signals propagate along multiple paths from transmitter to receiver.
As a consequence of these multiple paths, the original signal arrives at the receiver spread over time, as illustrated in Figure 3. Empirical evidence shows that the Multipath Spread—the time between the first and last signals of one transmission arriving at the receiver—is 1-10μs in urban environments and 10-30μs in suburban environments. These multipath signals can interfere with each other constructively or destructively, and this will vary over time. The theoretical bound on the time duration for which the channel may be assumed to be invariant, known as the Coherence Time and denoted Tc, is given by Tc = c / (v × f), where *c* is the velocity of the signal, *v* is the velocity of the receiver (e.g., a moving car or train), and *f* is the frequency of the carrier signal being modulated.
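As a worked example of the coherence-time bound, consider a receiver in a car on a 2.4 GHz carrier; the specific velocity and frequency values are chosen for illustration and are not taken from the text.

```python
# Worked example of the coherence-time bound Tc = c / (v * f).
c = 3.0e8   # propagation speed of the signal (m/s)
v = 30.0    # receiver velocity, roughly highway speed (m/s)
f = 2.4e9   # carrier frequency (Hz)

Tc = c / (v * f)                  # coherence time in seconds
print(f"Tc = {Tc * 1e3:.2f} ms")  # → Tc = 4.17 ms
```

Note that the bound tightens as either the receiver's speed or the carrier frequency grows, which is one reason high-mobility users and mmWave carriers are challenging: the channel must be re-estimated more often.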

Figure 3. Received data spread over time due to multipath variation.
To complicate matters further, Figures 2 and 3 imply the transmission originates from a single antenna, but cell towers are equipped with an array of antennas, each transmitting in a different (but overlapping) direction. This technology, called Multiple-Input-Multiple-Output (MIMO), opens the door to purposely transmitting data from multiple antennas in an effort to reach the receiver, adding even more paths to the environment-imposed multipath propagation.
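A common way to reason about MIMO is as a linear channel: with two transmit and two receive antennas, each receive antenna observes a weighted sum of both transmitted symbols, and the receiver can invert that mixing to separate the two streams. The sketch below uses made-up channel coefficients and a zero-forcing receiver purely for illustration; it is not how any particular 3GPP receiver is specified.

```python
# Toy 2x2 MIMO channel: y = H x, with H the matrix of per-path
# complex gains (made-up values for illustration).
H = [[complex(0.9, 0.1), complex(0.3, -0.2)],
     [complex(0.2, 0.4), complex(0.8, -0.1)]]
x = [complex(1, 0), complex(0, 1)]   # symbols sent from the two antennas

# Each receive antenna sees a weighted sum of both transmitted symbols.
y = [sum(H[i][j] * x[j] for j in range(2)) for i in range(2)]

# Zero-forcing receiver: invert the 2x2 channel matrix to recover x.
det = H[0][0] * H[1][1] - H[0][1] * H[1][0]
x_hat = [(H[1][1] * y[0] - H[0][1] * y[1]) / det,
         (H[0][0] * y[1] - H[1][0] * y[0]) / det]
```

In this noiseless toy model the recovery is exact; in practice the receiver must estimate H from pilot signals and contend with noise, which is part of what the feedback mechanism described next addresses.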
One of the most important consequences of these factors is that the transmitter must receive feedback from every receiver to judge how to best utilize the wireless medium on their behalf. 3GPP specifies a Channel Quality Indicator (CQI) for this purpose, where in practice the receiver sends a CQI status report to the base station periodically (e.g., every millisecond in LTE). These CQI messages report the observed signal-to-noise ratio, which impacts the receiver’s ability to recover the data bits. The base station then uses this information to adapt how it allocates the available radio spectrum to the subscribers it is serving, as well as which coding and modulation scheme to employ. All of these decisions are made by the scheduler.
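The scheduler's use of CQI feedback can be sketched as a table lookup: the better the reported channel, the more aggressive the coding and modulation scheme it selects. The thresholds and table entries below are illustrative only and are not the 3GPP-defined CQI-to-MCS mapping.

```python
# Hypothetical CQI-to-MCS mapping: (minimum CQI, modulation, code rate).
# Higher CQI means a cleaner channel, so the scheduler can afford
# denser constellations and less redundancy (a higher code rate).
MCS_TABLE = [
    (13, "64-QAM", 0.85),
    (10, "64-QAM", 0.60),
    (7,  "16-QAM", 0.50),
    (4,  "QPSK",   0.40),
    (1,  "QPSK",   0.12),
]

def select_mcs(cqi):
    """Pick the most aggressive scheme the reported CQI allows."""
    for min_cqi, modulation, rate in MCS_TABLE:
        if cqi >= min_cqi:
            return modulation, rate
    return None  # CQI 0: channel unusable, skip this subscriber for now

assert select_mcs(15) == ("64-QAM", 0.85)   # excellent channel
assert select_mcs(5) == ("QPSK", 0.40)      # marginal channel
```

Because CQI reports arrive roughly every millisecond, the scheduler can re-run a decision like this per subscriber on the same timescale, which is what makes the radio's use of spectrum adaptive.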