# Geopolitics in the infrastructural ideologies of 5G

Maxigas & Niels ten Oever

Version 2022-12-10

Control over global media infrastructure is a key instrument for asserting geopolitical hegemony in the world system [@Zajacz2019]. It is possible to tell the story of the changing global division of labour in terms of structural changes in global media infrastructures. Studies in information policy have shown how the telegraph enabled British hegemony even in the final days of the Empire [@Carey1983]; how radio telegraphy became an issue for American ascendancy [again, in @Zajacz2019]; how the Internet facilitated the establishment of the United States as a global superpower [@Carr2012]; and how mobile telephony advanced European interests across the world in the past decades [@Kammerer2010]. Other significant work at the intersection of Science and Technology Studies and Media and Communication Studies provides complementary perspectives on the same historical developments, but this time "from below" [@Xavier2016]. @Medina2014a on Allende's Chile and @Peters2016a on Soviet attempts at networking are but two prominent examples. It is also notable that no comparable critical work examines the historical evolution of mobile telecommunication networks outside of Europe and the United States, despite many studies of national or regional domestication [@Winseck1999; @Winseck2017; @Schiller2011].

Contemporary scholarship on media and power is occupied with establishing the connection between the introduction of 5G and the increasing role of China in the global division of labour [@RaduAmon2021; @Tekir2020]. Ironically, the English-language literature on the topic represents a peculiar cross-over between the two strands of historically oriented studies highlighted above: the studies of hegemonic actors and the studies of underdogs. This is because such literature can be read as a testimony of Western subjectivities reflecting on their own decline — a point that we develop further below. From our situated perspective in the Netherlands, we contribute to this debate on the geopolitics of 5G based on original research on contemporary developments and secondary analysis of the literature on earlier media infrastructures.

A red thread running through this body of scholarship is a preoccupation with the relationship between material things — such as media technologies — and the discursive formations that symbolically constitute those things — such as ideas about them. Authors have asked how the metaphor of the information superhighway paved the way for the socialisation of the Internet [@Bory2020], how the theory of cybernetics justified infrastructure developments for establishing the planned economy in Chile [@Medina2014a] *and* for advancing the free market in Silicon Valley [@Turner2006], or how the ideal of "openness" shaped protocol design and information policy in the era of the early Internet [@Russell2013]. We position our theoretical contribution in this area by putting forward the notion of infrastructural ideologies. The notion is designed to illuminate the things-ideas nexus, or what we later call the mediation between digital materialities and sociotechnical imaginaries.

## Infrastructural ideologies and digital materialities

The concept of ideology has received comparatively little attention in the media and communications scholarship of the last decades.
This remains true even though authors have often drawn theoretical inspiration from Cultural Studies, where ideology has been a key concept since the establishment of the discipline. However, our own attempt to rehabilitate the concept is rooted in cultural political economy [@Sau2021; @Jessop2010] rather than in Cultural Studies *per se*.

Classic theories of ideology frame the concept in relation to hegemony and coercion, institutions and practice. The purpose of hegemony is to reproduce a social order that serves the interests of the hegemon. Hegemony is developed and maintained in two ways: coercion and ideology [@Gramsci1971]. A successful ruling class imposes its ruling ideology through institutions, minimising the need to apply coercion. Therefore, a good measure of the performance of a hegemonic project is whether social order is enforced through ideology or whether the rulers have to rely on coercion. The product of ideology is cultural hegemony, in which ideas that legitimise the current mode of production become common sense. The common sense held by subjects who are infused with ideology produces a spontaneity in which subjects are convinced that the existing material conditions are both scientifically rational and serve their particular interests. As the work of @Caudwell2017 in the sociology of scientific knowledge and the work of @Mirowski2011 in the sociology of science have shown, science and technology are not out of reach of ideology.

@Althusser1970 developed a systematic theory of ideology. Ideology produces subject positions that people in society occupy spontaneously according to their role in the division of labour, where the division of labour is defined by the capitalist mode of production. The production of subject positions is instrumentalised through institutions such as the school and the church, and manifests itself in practices such as turning around at the call of the policeman on the street, the moment in which a subject occupies the subjugated position of citizen in relation to the state. Foucault's early work on knowledge and power in the context of the prison and the hospital started from such institutionally defined configurations of power relationships that are reproduced in the context of everyday life. These notions have been developed further by @Zizek1989, who emphasises that ideology exerts its power through bodily ingrained social practices rather than through a purely symbolic order.

However, none of these seminal authors made the connection between the materialisation of ideologies and the materialisation of technological objects. It was in Science and Technology Studies that this connection became a central topic of interest. Research such as that of @Shapin+Schaffer1985 repeatedly demonstrated that the spirit of the age, including the ruling ideology, is concretised in scientific research and materialised through engineering practice. Thus it serves to cement the common sense of the age by creatively translating the symbolic order into material conditions. In the words of @Latour1990, "technology is society made durable". From then on, ideology is not something to be convinced of; it is simply how things work. @Dourish2017 writes eloquently about digital materials that shape action possibilities, correcting earlier theoretical misunderstandings about the transcendental nature of the digital.
There is a gap in theoretical elaboration at this point, between infrastructure studies and the study of ideology: while it is broadly established that ideologies work through social practices and are materialised in technological objects, it is not clear how ideology, materiality, and practices come together in infrastructures that become the environment for everyday life. We propose to fill this gap by elaborating on how ideologies materialise in infrastructures. The concept of infrastructural ideologies is intended to capture both the ideological force that infrastructures exert on social relations by shaping practices, and the role of ideology in building infrastructures.

<!-- Geopolitical hegemony in the world system is held through control over territory, capital and technology: three variables that intersect in global telecommunications networks [@Zajacz2019]. Studies examined the role of telecommunications in geopolitical power struggles, how power shapes technology and how technology is used to exert power. Here, we look at how culture shapes technology shapes power. -->

For this research, we engaged in code ethnography [@Rosa2022] of GSM, Internet and 5G technologies, as well as participant observation in the main standards development organisations of the Internet and 5G, and semi-structured interviews with equipment vendors and network operators. Our methodological assumption — taken from world-systems theory [@Wallerstein2004] — is that the character and content of imaginaries and their underpinning ideologies creatively reflect the position of actors in the global division of labour. This paper contributes to the understanding of the role of infrastructures in geopolitical power tussles and straddles the fields of Science and Technology Studies and International Relations.

## Historical reconstruction

<!-- Limit 3000 words -->

We briefly reconstruct the conditions that allowed the Internet to be promoted as the technology that brings freedom, while also being acknowledged as a United States (US) enterprise. A legitimacy exchange took place between the values associated with the technology itself and the hegemonic position of the US in the global division of labour. The Internet as a media infrastructure, along with the values associated with it, as well as the content it carried, legitimised the US policy of opening markets to neoliberal globalisation and integrating them into its sphere of influence. In this context, the Internet has been a fundamental material infrastructure for funnelling profits to the core economy from the semi-peripheries, and eventually — through the inflection of GSM — from the periphery of the world system.

5G reverses this narrative, tracking the changing global division of labour in the world system. The US now plays the role of the hegemon in decline, while China asserts its research prowess in addition to its already recognised industrial capacity. The network ideologies behind the development of the Internet, mobile telephony, and 5G reflect the particular configurations (diagrams) of state, capital and civil society in the US, Europe and China respectively. The associated infrastructural imaginaries show how these ongoing social conflicts can be made sense of culturally.
### Internet: open

<!-- Limit 1000 words -->

It is important to remember that despite the legendary proportions to which historical studies elevate the Internet, militaries, airlines and banks all had their specialised global communications infrastructures up and running by the time the Internet grew to global proportions — not to speak of the landline telephone network, which fulfilled similar needs. The early Internet that emerged from the interdisciplinary research cultures incubated in places like the Radiation Laboratory at MIT reflected and contributed to the newly established hegemonic position of the United States after the Second World War. The US policy of demilitarisation brought military technology and research organisation into the civilian market (cf. Eisenhower's military-industrial-academic complex). The Internet standards that emerged in the decades that followed are fashioned after the diagram of the market. They assume a dumb network with intelligent edges, capitalising on the assumption that a simple system can produce complex results in an emergent way. In cyberculture, openness appears as the ideology of the Internet itself, rather than the ideology of the hegemonic US or the neoliberal market. Thus the association between freedom and the Internet legitimised US cultural imperialism and neoliberal economic expansion in the second part of the 20^th^ century.

The ideology of openness was also reflected in the US-led international standardisation process that produced the protocol stack through a completely new set of standards bodies, such as the IETF [@Russell2014]. The new standards bodies were instruments for articulating rising hegemony by bypassing the old system of standardisation (the ISO) — and the global division of labour of the time. The IETF's definition of openness in standardisation came to be seen as both more efficient and more just, thus becoming common sense.

<!-- packet switching -->

The technical identity of the Internet has been defined by the TCP/IP protocol pair, which stands for Transmission Control Protocol and Internet Protocol, respectively. The former is responsible for what computers at the endpoints of the line do to transfer data, while the latter defines how routing happens within the network. They were developed together in the US under the auspices of ARPA (the Advanced Research Projects Agency), a military research and funding agency, on the Arpanet, as well as in a separate network operated by the NSF (National Science Foundation). TCP/IP represented the state of the art in a then unpopular, radical approach to networking: packet switching. This approach was championed by French engineers working on the CYCLADES project, which soon imploded. Packet switching departed from the circuit switching employed in communication networks since the telegraph, and in the popular competitors to TCP/IP at the time — the ITU's (International Telecommunication Union) X.25 and IBM's SNA (Systems Network Architecture). Instead of establishing a full circuit for each data stream and holding down those resources for the duration of the transmission, as in the case of an analogue telephone call, the data is divided into small parts that are sent individually over the network and assembled at the other end. TCP is responsible for dissecting the data into packets and reassembling them in the right order, while IP gets them from their source to their destination, as the sketch below illustrates.
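To make the division of labour between the two protocols concrete, the following toy simulation sketches it in Python. It is a didactic illustration of the packet-switching idea only; the function names and the eight-character packet size are our inventions, not the behaviour of any real TCP/IP implementation.

```python
import random

MTU = 8  # toy maximum packet size, in characters

def tcp_send(message: str) -> list:
    """TCP-like sender: dissect the message into numbered packets."""
    return [(seq, message[i:i + MTU])
            for seq, i in enumerate(range(0, len(message), MTU))]

def ip_network(packets: list) -> list:
    """IP-like network: each packet travels independently, so packets
    may arrive out of order depending on the route each one took."""
    return random.sample(packets, k=len(packets))

def tcp_receive(packets: list) -> str:
    """TCP-like receiver: reassemble the packets in sequence order."""
    return "".join(chunk for _, chunk in sorted(packets))

message = "packets are routed independently and reassembled at the edge"
assert tcp_receive(ip_network(tcp_send(message))) == message
```

No circuit is reserved: the network only ever sees individual packets, and the original order is restored at the receiving endpoint.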
The resilience of the network is increased by TCP asking for lost packets to be resent, and by the possibility for IP to use alternative routes. TCP/IP was adopted as a standard by the US military in 1980 and on the Arpanet in 1981. An alternative proposal for the convergence of communications and computing, the OSI (Open Systems Interconnection) model, was approved by the International Organization for Standardization (ISO) in 1983. By the time the international standardisation community coordinated the release of their own standard, TCP/IP networks enjoyed wide adoption. By 1983 it was possible to transfer files through the FTP protocol, send emails with the SMTP protocol, and log in to remote computers with the Telnet protocol. By the time the emblematic standards body of the Internet, the Internet Engineering Task Force (IETF), was formally established in January 1986, the actually existing Internet linked thousands of hosts in multiple countries. The IETF issued standards in a special track of its journal, the successively numbered Requests for Comments. Thus TCP was first defined in RFC 675 in 1974, the standard version of IP in RFC 791 in 1981, and so on. The status of these documents was perceived as equivalent to ISO "international" standards.

In the 1980s, the US-backed TCP/IP developed by the IETF was considered the underdog to the international standards being defined by the ISO. The winds changed rapidly. Russell notes that by 1990 the superiority of IETF standards over their ISO counterparts was evident. The network engineer Lyman Chapin, who participated in both the ISO and the IETF, recalls that

> It didn't take long to recognize the basic irony of OSI standards development: there we were, solemnly anointing international standards for networking, and every time we needed to send electronic mail or exchange files, we were using the TCP/IP based Internet! [quoted in @Russell2014 246]

By 1995 the IETF had defined — and the Internet community implemented — a complete suite of workable standards for internetworking. These included the Dynamic Host Configuration Protocol (DHCP) for assigning IP addresses automatically, Network Address Translation (NAT) for managing local networks, and IPv6 (still not widely enough deployed), which would solve the problem of running out of IPv4 addresses. The often overlooked Border Gateway Protocol (BGP), standardised in 1989 (RFC 1105) and in active use since 1994, keeps track of the available routes between the networks that make up the Internet.

<!-- the End-to-End principle -->

The design and goals of the network were structured by the End-to-End principle analysed by @Gillespie2006e2e. The End-to-End principle construes the Internet as a *dumb network*, in which intelligence resides in the endpoints, while the network itself is only responsible for moving packets to their declared destinations. This should be familiar to anyone from the way postal services work: postal workers are not expected to inspect or alter packages, only to deliver them. It is in this sense that the End-to-End principle stands for empowering the endpoints, carrying associations of freedom and democracy. In practice, this means that services can be developed by third parties at the edges of the network, working independently of the network operators: what is called *permissionless innovation*. Ultimately, the End-to-End principle provides the technical substance for the infrastructural imaginary of openness. Gillespie's point is that in this capacity, it has been expressed in network engineering, media regulation, and civic activism.
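To make the abstraction concrete, the following toy sketch shows the principle in miniature. The one-word application protocol and the function names are hypothetical, invented for this example: the point is that the "network" function forwards opaque packets without inspecting them, while two endpoints can agree on a new protocol without asking anyone's permission.

```python
def dumb_network(packet: bytes, deliver) -> None:
    """The 'network': forwards packets to their destination without
    parsing, filtering, or keeping per-application state."""
    deliver(packet)

def alice(message: str, send) -> None:
    """One endpoint defines a new application protocol on the spot..."""
    send(("SHOUT " + message).encode())

def bob(packet: bytes) -> None:
    """...and the other endpoint only needs to implement the same
    convention. No network operator was consulted."""
    verb, _, payload = packet.decode().partition(" ")
    if verb == "SHOUT":
        print(payload.upper())

alice("permissionless innovation", lambda p: dumb_network(p, bob))
# prints: PERMISSIONLESS INNOVATION
```

Everything distinctive about the application happens at the endpoints; the network's only job is delivery. This is the technical kernel of the association between End-to-End design, openness, and permissionless innovation.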
The killer applications of the Internet — demonstrating how End-to-End design enabled permissionless innovation — were email and the web. These were powered by the Simple Mail Transfer Protocol (SMTP, RFC 821, 1982) and the Hypertext Transfer Protocol (HTTP, RFC 1945, 1996). The latter was famously invented by the English computer scientist Tim Berners-Lee at CERN, along with the HyperText Markup Language (HTML), showcasing European contributions. Core standards for the World Wide Web were developed in the World Wide Web Consortium (W3C), set up by Berners-Lee in 1994. The motivation for creating yet another international standards organisation was similar to the tensions that led to the ISO/IETF rivalry: web developers felt that IETF engineers were too sluggish and academic in turning out standards that could answer the changing requirements of the day.

<!-- current state-of-the-art -->

Despite its pragmatic approach and open standardisation process, the Internet has changed relatively little in the last two decades. Today, the World Wide Web and smartphone apps are often powered by the QUIC protocol superseding TCP, incorporated into the HTTP/3 specification released as RFC 9114 in 2022. The updated Internet Protocol, IPv6, still awaits wide deployment. Updates to the Internet protocol suite added security and improved performance, but had little impact on basic design tenets such as packet switching or the End-to-End principle.

<!-- analysis -->

The open market, as it has been conceived as an ideal for the best possible allocation of resources, is democratic exactly in the sense that networks based on the End-to-End principle are democratic. The open market is an infrastructure that allows goods and services to be exchanged without asking who can participate, without monitoring transactions, and without interfering with them. Network operators are only responsible for maintaining the same rules over time and for delivering all packets equally. While this design is supposed to unleash the creativity of participants, it also overcodes (national) borders and (network) boundaries to the movement of goods and data. IP addresses are distributed by Regional Internet Registries (RIRs) across continents rather than sovereign countries. What the democratic conception of the network/market hides is that some actors are better positioned and possess more resources for harnessing the advantages of openness, to the point where their participation turns into the exploitation and regulation of the smaller actors that make up the vast majority of the network. This is the mechanism of the economic imperialism that the US pursued in the 20^th^ century, most famously under the Clinton administration. Economic imperialism served as a morally superior alternative to the traditional colonialism pursued by European nation states, with roots going back to the tellingly named Open Door Policy towards China enunciated in 1899 [@Russell2014 8].

The pioneer who colonised the American (Wild) West is the ultimate subject position from which cyberculture has been articulated. This leaves us with enterprising, entrepreneurial, free-floating subjects, backed up materially by disposable IP addresses dynamically assigned at connection time. Their freedom to roam, and an identity decoupled from bodies, is a powerful myth that drives adoption well into the 21^st^ century.
What non-US subjects spontaneously find online today is overwhelmingly a milieu ruled by US cultural imperialism and economically driven by domestic Silicon Valley companies. Thus the Internet easily becomes a suction pipe for funnelling profits from around the world to the US — but there is no built-in safeguard against competition from emerging economies in Asia either. It is possible to conceptualise openness as a sociotechnical imaginary in the grand tradition of Science and Technology Studies [@Sismondo2020; @Jasanoff+Kim2015]. Such a conceptualisation accounts for the success of the imaginary in coordinating development and mobilising adoption. What the alternative framing of infrastructural ideology adds is an explanation of the power relations produced and reproduced in the process. An infrastructural ideology furnishes rational common sense and affective spontaneity in a strategic way, defining dominated subject positions and serving hegemonic interests. The Internet does this by tracing the design of the market in its material construction within TCP/IP as much as in its symbolic presentation in cyberculture. In standardising and building the Internet, US state and capital performed and cemented the country's hegemonic position in the world system, creating a new set of standards bodies, communication protocols and standardisation procedures in the process.

### GSM: mobile

<!-- Limit 1000 words -->

Meanwhile, the old hegemons — Western European nation states — restored and exploited their entrenched geopolitical position in the world system, and traditional standards bodies such as the ITU, through the development of GSM and the advancement of mobile telephony. The GSM standards reflect the logic of telecommunications companies as national monopolies. In contrast to the dumb networks defined by the End-to-End principle, these are intelligent systems with dumb edges, managed by the operator. As with the Internet, GSM is not without precedents: mobile telecommunications had been around for several decades before GSM emerged in the 1990s and early 2000s as a new network paradigm, largely fuelled by a European effort to consolidate the bloc's political and economic role in the global division of labour. As with the Internet, the established international standardisation process was bypassed by creating new standards bodies.

The first generation of mobile technologies was widely developed and implemented in the 1980s, after early experimentation in the decade before. These early-generation mobile devices used analogue radio signals combined with a digital control signal. This innovation had considerable success, which led to the establishment of the Groupe Spécial Mobile (shortened to GSM) within the European Conference of Postal and Telecommunications Administrations (CEPT) in December 1982 [@Hillebrand2001]. The second generation (2G) of mobile phones used digital signal processing paired with a digital control signal. The second generation lasted from the end of the 1980s up to the end of the 1990s. This coincided with the liberalisation of the telecommunications market in Europe, where many state-owned telecom providers were privatised. This led to the establishment of the European Telecommunications Standards Institute (ETSI). At the same time, the European Union provided legal frameworks for competition in telecommunication terminal equipment (1988) and telecommunication services (1990), and finally established an internal market for telecommunications services (1990) [@Lemstra2018].
This led to a successful launch of GSM, which in 1992 went live in 7 countries through 13 telecommunication providers [@Manninen2002]. At the same time, another standard was being developed and deployed in the United States, namely Code Division Multiple Access (CDMA), which worked in a similar manner to GSM but was not compatible with it and operated on different frequencies.

GSM was a circuit-switched network, in line with earlier telecommunication networks, but this made it harder to transport data reliably. Since GSM was also taking off beyond Europe, it was crucial for the next iteration to establish a body with a wider reach than ETSI, which only served Europe. In part, this happened in the ITU through the International Mobile Telecommunications for the year 2000 standards (in short: IMT-2000), which led to the coordination of compatible frequencies, managed through the ITU-R. The preparation of the IMT-2000 standard, to be standardised in the ITU-T, happened largely through the Third Generation Partnership Project (3GPP). The 3GPP is not a formal organisation but an umbrella of seven telecommunications standards development organisations: from the United States (ATIS), China (CCSA), Europe (ETSI), India (TSDSI), Korea (TTA), and two from Japan (one governmental (TTC) and one from the private sector (ARIB)).

The 3GPP organises its work in generations and releases: there are 2G, 3G, 4G, and 5G, and each generation consists of specifications that are grouped into releases. An early example is Release 97, which standardised the General Packet Radio Service (GPRS), making packet-switched data possible next to voice over circuit-switched networks in GSM. This is why GPRS was also dubbed 2.5G. In Release 98, Enhanced Data Rates for GSM Evolution (EDGE) was standardised, which increased data transfer speeds. A significant breakthrough came with 3G, which introduced the Universal Mobile Telecommunications System (UMTS). At the same time, the US still functioned under other standards, in part because of the dominant role of Qualcomm in the US market; the alternative to UMTS in the US was called CDMA2000. In other parts of the world, UMTS needed to function on frequencies that were not yet in use by telecommunication providers. When these frequencies were allocated by the ITU-R, this set in motion a wave of spectrum auctions by national governments in Asia and Europe. Both CDMA2000 and UMTS were standardised within IMT-2000, which roughly coincided with the first full standard releases of the 3GPP, namely High Speed Packet Access (HSPA, 3GPP Releases 5 and 6), HSPA+ (3GPP Releases 7 and 8), and the Long Term Evolution (LTE) standard (3GPP Release 8). The LTE standard did not yet fully meet the ITU's 4G requirements, known as IMT-Advanced; it was the subsequent LTE-Advanced (Release 10) that did, and both came to be marketed as 4G. As one can see, there have been quite a few overlapping, and often confusing, naming conventions around the different generations of telecommunication networks. This is because the technologies that come with the releases do not always coincide neatly with the generation eras. Moreover, for a network to be called "4G", not all the technologies standardised in the 4G period of the 3GPP need to be implemented. A rough orientation is given in the table below.
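The following table gives an approximate mapping between marketing labels, 3GPP releases, and representative technologies. It is indicative only: generation labels are marketing terms, and sources differ on the exact release boundaries.

| Marketing label | 3GPP release(s)  | Representative technologies        |
|-----------------|------------------|------------------------------------|
| 2G              | pre-3GPP (ETSI)  | GSM (circuit-switched voice)       |
| 2.5G / 2.75G    | Releases 97–98   | GPRS (packet-switched data), EDGE  |
| 3G              | Release 99       | UMTS                               |
| 3.5G            | Releases 5–8     | HSPA, HSPA+                        |
| 4G              | Releases 8–10    | LTE, LTE-Advanced                  |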
While the 3GPP, which was organisationally facilitated by ETSI, was working on UMTS, a competing organisation was established, called 3GPP2. 3GPP2 was mostly supported by Qualcomm, was based in the United States, and continued to further develop the CDMA standard. However, the standards developed by the 3GPP have obtained a much larger market share [@Baron+Gupta2018].

UMTS, and with it 3G, was a big incremental step. It allowed for significant data transfer speeds, which enabled the uptake and growth of the smartphone. However, telco networks were structured in a far more top-down manner than Internet networks. This is in part due to the expectations customers had of mobile telecommunications: for instance, that it should be possible to use voice and data connections while travelling at high speed, such as in trains and cars. Identification is therefore an inherent part of telco networks, often enabled by a SIM card, something that is not used in Internet networks, where one gets assigned an IP address upon connection.

The fourth generation of telco networks presented the first unified standard between the US and the rest of the world. The bandwidth it can provide is roughly a hundredfold that of 3G, over a fully digital, IP-based network. One of the big changes in 4G was that video and audio streaming on mobile phones could now work seamlessly, provided the backhaul network supplied sufficient bandwidth. In 4G, the network also fully adopted IP for telephony as well as data. In that sense, 4G recognised the hegemony of the Internet and integrated its core protocol into the telecommunications stack.

Methodological nationalism is baked into the GSM protocol suite. Rather than the democratic impetus of the Internet, GSM emphasises citizenship. Mobile phone networks are operated by one of a few companies which preside over their own network covering a certain country — heirs to the national champions that European countries developed as state-sanctioned monopolies. Telecommunications companies provide services strongly coupled with providing access to the network. This is in stark contrast to the Internet, where under the End-to-End principle, network operators and service providers are separated technically into different layers of the protocol suite, and socially into different cultures and market segments, with mergers watched over by anti-trust regulators.

The infrastructural ideology of mobility TODO: analysis

- main metaphor: the state
- subjectivity (citizen)
- territory (sovereign borders)
- reflection of the position in the global division of labour

### 5G: smart

<!-- Limit 1000 words -->

The possibility of geopolitical hegemony for China once again established the historical conditions for the production of a new network paradigm: 5G. The diagrammatic design of the system brings together an intelligent network with intelligent edges. While in the US demilitarisation through a transition of technologies to the market sector was a central concern, in Chinese bureaucratic capitalism the logics of the military, capital and civil society are more consistent with one another. The technical design of the 5G protocol stack reflects these material conditions. The articulation of power on a global scale, and within the scope of the intelligent network and the intelligent edges, is only possible through algorithmic power and optimisation. The intelligence of the network automates the translation of policies into specifications and configurations (see the sketch below). The 5G standards, mainly developed within the 3GPP, subsume both GSM and Internet standards, just as China attempts to subsume American and European positions in global markets.
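As a schematic illustration of what "translating policies into configurations" means in software-defined networking, consider the following toy controller. All names, policy labels and rule formats here are hypothetical; real 5G cores and SDN controllers use standardised interfaces rather than this simplified model.

```python
from dataclasses import dataclass

@dataclass
class FlowRule:
    match: str      # which traffic the rule applies to
    action: str     # what the network element should do with it
    priority: int   # higher numbers win in case of conflict

def compile_policy(intent: dict) -> list:
    """A toy controller: translate high-level policy into flow rules."""
    treatments = {
        "low-latency": ("queue=priority", 100),
        "best-effort": ("queue=default", 10),
        "block":       ("drop", 200),
    }
    return [FlowRule(f"app={app}", *treatments[label])
            for app, label in intent.items()]

# An operator states an intent once; the controller derives the
# configuration that every network element should enforce.
intent = {"industrial-automation": "low-latency",
          "video-streaming": "best-effort",
          "unlicensed-drones": "block"}
for rule in compile_policy(intent):
    print(rule)
```

The point is architectural: intelligence sits in the network itself, which continuously rewrites its own configuration in response to declared policies, rather than merely forwarding packets.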
5G presents an excellent opportunity for infrastructure studies to follow and study a major infrastructure development in real time, through its standardisation, implementation and deployment. We see the novelty of 5G in the convergence of computing and networking, a phenomenon captured in the marketing buzzword *smart networks* and the technical term *software-defined networking*. The convergence allows for a range of new applications that go beyond the conventional and popular use cases of mobile phones and even smartphones: from drone detection to self-driving cars, from augmented reality to industrial automation, and from replacing consumer-grade broadband to surveillance cameras for crowd control. The range and variety of use cases, as well as the plethora of protocols that support them, warrant the designation of a new network paradigm.

While we emphasise the novelty of 5G, we also note the continuity with both modern Internet protocols and previous generations of GSM technologies. The confusion between generations and features noted in the previous section on GSM also applies to 5G technologies. In particular, when today's smartphone users see the 5G sign in the status bar of their phones, this most likely stands for an enhanced version of 4G protocols, often backed by 4G hardware. However, massive investment from telecommunications companies and subsidies from state sources ensure that the gradual development and deployment of future 5G standards and implementations will be put into production. Showing the effectiveness of the infrastructural ideologies associated with the technology, it has become common sense in information policy that smart networks are a proxy for economic competitiveness in the 21^st^ century. It is already clear from this description that 5G networks are built and configured with specific applications in mind. FIXME: add

- main metaphor: (civil) society
- standards/protocols (releases)
- standards bodies (IETF)
- subjectivity (cybernetic)
- territory (functional: cities, factories, etc.)
- reflection of the position in the global division of labour → attempt at hegemony (contestation)

## 5G sociotechnical imaginaries

<!-- Limit 1000 words -->

FIXME: infrastructural strategies, intellectual property and patent wars, industrial capacity, copying vs. invention, AI and state control in China.

## Conclusion

We show the historical conditions necessary for the production of a new network paradigm: chiefly, a central position in the global division of labour. Within that structural constraint, we found variety in how geopolitical hegemony and media infrastructures are related. In the case of the Internet and the US, the new network paradigm marked the peak of the country's ascendancy to a hegemonic position in the world system. In the case of GSM and Europe, the new network paradigm corresponded with the consolidation of Europe's position in the world system as an ally of the US. In the case of 5G and China, the new network paradigm constitutes a challenge to the geopolitical order and a bid for the hegemonic position.

TODO:

- ideologies
- current state of affairs
- future research

<!-- Limit 500 words -->

## References