# What Is The Web?
![A painting of the Argo, which is a ship involving Theseus but not the right one](https://i.imgur.com/r3bcJri.jpg)
Theseus is pretty much the standard hero we've come to expect from the Greek Cinematic
Universe. He is conceived after a cryptically on-brand pronouncement from the Oracle of
Delphi leads Princess Æthra to become simultaneously impregnated by both King Ægeus of
Athens and Poseidon, but he then grows up with neither of his dads in the picture much.
As a young man, upon discovering his ancestry, he decides to travel to Athens to
claim his kingly birthright. But instead of going the safe way — by sea — he instead
chooses the land path that takes him past no fewer than six openings of the Underworld,
because #YOLO. Long story short: he makes it, and goes on to enjoy the delights of
Athenian family life as his stepmom tries to kill him twice, then his cousins try to
kill him again, and later his (human) dad jumps off a cliff because he forgot to use
white sails instead of black on his ship.
Sail color aside, that ship is what I want to talk about here. Theseus sailed it back from
Crete where he had just slain the Minotaur, escaped the Labyrinth, saved the children of
Athens, and abandoned Ariadne like a threadbare sock. The Athenians were so thankful to
get their children back that every year afterwards they commemorated the event by taking
that ship on a pilgrimage. Whenever one of the ship's planks would (inevitably) rot, they
would replace it with new timber; and they kept doing so for long enough that eventually every
one of the ship's planks and all of its parts had been replaced by newer ones.
This is the *Ship of Theseus* thought experiment which
[has been keeping](https://plato.stanford.edu/entries/material-constitution/)
[philosophers](https://plato.stanford.edu/entries/identity-time/#4)
[employed](https://plato.stanford.edu/entries/identity-relative/#ShipThesPara)
[for centuries](https://plato.stanford.edu/entries/sortals/): if every component of the
ship was replaced one by one over time, is the resulting ship the same as the initial one?
The Web is the same. [HTTP/3](https://datatracker.ietf.org/doc/rfc9114/) is a
[binary multiplexed UDP protocol](https://kinsta.com/blog/http3/),
[HTTP/0.9](https://www.w3.org/Protocols/HTTP/AsImplemented.html) was just text
over TCP (and it didn't even have headers or response codes, it could *only*
transmit HTML). We used to show off just how good we were at making intricate
800x600 layouts with nested tables and transparent pixels, in ways that would
get you laughed out of the room by today's responsive subgrid crowd. People
hated XHTML so much that they all flocked to JSX. Flash used to be universal and
has vanished; SVG used to be niche and is now banal. You would get in trouble
for touching a color outside of the
[216 websafe palette](https://websafecolors.info/color-chart) but `eval` and
`document.write` were rad. We wrote Perl. We wrote PHP. Some wrote CFML and Java
applets. We wrote VBScript *in the browser*. Firefox extensions ran on RDF, as
did some feed syntaxes, at least on one of those happy weeks when you could find
two people who agreed about a feed syntax. We had addresses, then URLs, then
URIs, then IRIs, and then URLs again. A few things have stuck around throughout,
but you'll have a hard time making the case that they define the Web.
But if we can replace all the parts and still call it the Web, how can we explain what the
Web is? I don't really care that much about some mythological dude's boat, but if we want
to build the Web *together* then we need to agree about what the Web is, what it is not,
and where it is that we should be taking it next. Proceeding via small, incremental
changes to what we currently have is a healthy and laudable approach, but it helps to have a sense
for what it is that those small steps are supposed to be *incrementing towards*. After all, a
headless chicken takes it one step at a time as well.
## User Agency
Rather than try to define the Web as a bundle of technologies without a clear criterion
to establish their coherence and evolution, I believe that the Web is a principled project
that is defined in terms of its values and ends rather than its methods and means. Stated
more explicitly:
> The Web is the set of digital networked technologies that work to increase user agency.
Defining the Web as an ethical project might not prove universally popular.
As technologists we are often reluctant to engage with philosophy, a reluctance often
expressed by running in the opposite direction all limbs akimbo with an ululating shriek
reminiscent of some of the less harmonious works of exorcism. Even those of us who are
curious about philosophy rarely seem to let it influence what and how we build. But there are very
good reasons to take the time to align on what it is that we're trying to build when
we build the Web, and good reasons why this has to import at least a little bit of
conceptual engineering from the vast and alien lands of philosophy.
First, people who claim not to practice any philosophical inspection of their actions
are just sleepwalking someone else's philosophy. Keynes said it best:
> “The ideas of economists and political philosophers, both when they are right and when
> they are wrong, are more powerful than is commonly understood. Indeed, the world is
> ruled by little else. Practical men, who believe themselves to be quite exempt from any
> intellectual influences, are usually slaves of some defunct economist.”\
― [John Maynard Keynes](https://en.wikipedia.org/wiki/John_Maynard_Keynes),
*The General Theory of Employment, Interest, and Money*
Second, in the same way that the more abstract forms of computer science *can* indeed
help us produce better architectures, philosophy *can* (and should) be applied to the
creation of better technology. A quick tour of the biggest problems that we face in tech
— governance, sovereignty, speech, epistemic individualism, gatekeeping, user agency,
privacy, trust, community — reads like a syllabus for the toughest course in ethics
and political philosophy. There is no useful future for the Web that doesn't wrestle
with these hard problems and their social externalities. "*Ethics and technology are connected
because technologies invite or afford specific patterns of thought, behaviour, and
valuing: they open up new possibilities for human action and foreclose or obscure
others.*" (Shannon Vallor,
*[Technology and the Virtues](https://bookshop.org/books/technology-and-the-virtues-a-philosophical-guide-to-a-future-worth-wanting/9780190905286)*) Technology choices, from low-level infrastructure
all the way to the UI, decide what is made salient or silent, hard or easy. They shape what is
possible and therefore what people can so much as even *think* about acting on or
about holding one another accountable for.
Third, we have a loose, vernacular notion that we are doing this "*for the user*" and this is
captured in key architectural documents as the
[Priority of Constituencies](https://www.w3.org/TR/design-principles/#priority-of-constituencies),
but we could benefit from being a bit more precise about what we mean by "putting users
first". As I will argue below, the idea of **user agency** ties well the
[capabilities approach](https://en.wikipedia.org/wiki/Capability_approach), an approach to
ethics and human welfare that is concrete, focused on real, pragmatic improvements, and that
has been designed to operate at scale.
Overall, if we're going to work out how best to develop user agency it seems useful to agree on
some basic notions of what we as technologists can do for our users, what capabilities most
empower our users, and how to ensure that a focus on user agency is a central, sustainable, and
capture-resistant component of our technological practice. Three things that are important to
share some minimal foundations about are:
1. working towards ethical outcomes doesn't mean relying on vapid grand principles; rather, it
requires focusing on concrete recommendations;
2. when considering agency we need to be thinking about *real* agency rather than theoretical
freedoms; and
3. counterintuitively, giving people greater agency sometimes means making decisions for them,
and that's okay if it's done properly.
Let's look at these three points in turn.
## Capabilities
We're all familiar with "*vaporware freedoms*", when you are given a nominal right to do
something but the world is architected in such a way as to effectively prevent the exercise
of that right. Everyone can start a business or own a house! — except no bank exists that
will lend to people like you. Users can change the default as much as they want to! —
except you know that they won't because the UI makes it prohibitively cumbersome and
laborious. Everyone can speak! — except only certain voices get amplified, a vaporware
freedom which Rogers Brubaker has captured as:
"*[Gatekeepers may no longer control what gets published, but algorithms control what gets circulated.](https://www.noemamag.com/hyperconnected-culture-and-its-discontents/)*"
Boosted by pervasive surveillance that enables nudging at scale by
[an arsenal of deceptive patterns](https://thomasmildner.me/darkpatterns.html),
vaporware freedoms are clearly thriving in our digital environment, but they
aren't new. Global economic development struggled for decades with comparable issues, in
response to which Martha Nussbaum and Amartya Sen developed a pragmatic understanding of
quality-of-life and social justice known as the *capabilities approach*.
The capabilities approach asks
"*What each person is able to do and to be?*"
([Martha Nussbaum](https://en.wikipedia.org/wiki/Martha_Nussbaum),
[*Creating Capabilities*](https://bookshop.org/p/books/creating-capabilities-the-human-development-approach-martha-c-nussbaum/6690885?ean=9780674072350)). It replaces know-it-all,
command-and-control, top-down approaches that create dependence on external aid, ignore
local knowledge, and destroy local initiatives. Instead, it supports an unfailing commitment
to people's ability to solve their own problems if not prevented from doing so:
> With adequate social opportunities, individuals can effectively shape their own destiny
> and help each other. They need not be seen primarily as passive recipients of the
> benefits of cunning development programs. There is indeed a strong rationale for
> recognizing the positive role of free and sustainable agency — and even of
> constructive impatience.\
> — [Amartya Sen](https://en.wikipedia.org/wiki/Amartya_Sen),
[*Development as Freedom*](https://bookshop.org/p/books/development-as-freedom-amartya-sen/8682685?ean=9780385720274)
What capabilities translate to in technical terms for those of us who want user agency
is a complex, multipronged project, many parts of which we still need to design and
assemble. One important architectural aspect is the need to remove external
authority from the system so as to prevent chokepoints of capture from emerging, and
to replace it with what Jay Graber aptly described as
"*[user-generated authority, enabled by self-certifying web protocols.](https://jaygraber.medium.com/web3-is-self-certifying-9dad77fd8d81)*"
User authority doesn't mean that people *have* to build their own thing. Not everyone can
run their own server for the same reason that not everyone can eat exclusively from their
own pet organic garden. Not all control is agency; control has to be proportionate to
bounded rationality and reasonable cost (including in time), and people have to be supported
by powerful infrastructure that does not lock them into any system. More than anything,
their interactions with a digital space have to be self-directed — no one needs to hand
their life over to [Clippy](https://en.wikipedia.org/wiki/Office_Assistant).
Capabilities were designed with development in mind: they are meant to change people's
actual lives, not to check theoretical items off a list. They are by nature concrete and
implementable. The capabilities framework is a great building block for a Web understood
as furthering user agency because, in many ways, capabilities *are* user agency.
## Ethics in the Trenches
If you've read Web (and other) standards, you're at least superficially familiar with
[RFC2119](https://datatracker.ietf.org/doc/html/rfc2119) ("*Key words for use in RFCs to
Indicate Requirement Levels*") which is famous for defining such terms as `MUST` or
`SHOULD NOT`. But one of the documents that I rank highest among those published
by the IETF in its nearly forty-year history is its companion
[RFC6919](https://datatracker.ietf.org/doc/html/rfc6919) ("*Further Key Words for Use in
RFCs to Indicate Requirement Levels*"), published April 1st 2013, which standardizes far
more versatile terms like `MIGHT`, `COULD`, `SHOULD CONSIDER`, or the barn-burning
`MUST (BUT WE KNOW YOU WON'T)`.
Altogether too often, ethical guidance can feel like it was written against RFC6919 instead
of RFC2119. This can be particularly true of ethical tech documents. These seem to mostly be
lists of lofty principles that exude a musty scent of meeting room detergent and commit to
all manner of good behavior requirements that the legal department is confident you can
safely ignore. Picking an ethical principle as the foundation and defining objective of the
Web is unlikely to help anyone if it's nothing more than a `MAY WISH TO`.
Nothing says that it has to be this way. Focusing on user agency and on user-centric ethics
generally doesn't mean that we should get lost in reams of endless abstraction. On the contrary:
it means that we *must* focus on principles that can be implemented. When working on Web
standards, we only consider requirements that can be
[verified with a test](https://www.w3.org/TR/test-methodology/) to be meaningful
— everything else is filler. Being as strict with our ethical standards is challenging, but
we can strive for it. We can do that both at a high level, when looking at the broad
outline and impact of a piece of technology, and in precise detail — both matter.
At a high level, the question to always ask is "*in what ways does this technology increase
people's agency?*" This can take place in different ways, for instance by increasing people's
health, supporting their ability for critical reflection, developing meaningful work and
play, or improving their participation in choices that govern their environment. The goal is
to help each person be the author of their life, which is to say to have *author*ity over their
choices.
Technologists trying to maximize user agency often fall into the trap of
measuring agency by looking only at time saved. On the surface, the idea seems
straightforward: spend less time on one thing, have more time for other things!
That would seem to fit our mandate of improving "*What each person is able to do
and to be*". And <u>all other things being equal</u> that can be true, but the
devil is in the details: the enjoyment of doing the thing, the value in knowing
how to do it, or the authority over outcomes. Even things that many would consider
chores aren't always best automated or delegated away: you may not wish to clean
your house but you might want a say in the chemicals introduced into your home,
in how your things are organized, or in whether your
[house can be mapped](https://www.wired.com/story/amazon-irobot-roomba-acquisition-data-privacy/)
by a robot and data derived from that map sold to the highest bidder. Not all
leisure is liberation.
The more detail we have on a piece of technology that may be part of the Web, the more
readily we can assess it in very specific ways that capture aspects of improved user
agency. In fact, that's something that the Web community has been doing for a long time.
Consider:
- The great level of detail that has gone (and continues to go) into specifying
[how to make the Web and Web content accessible](https://www.w3.org/WAI/standards-guidelines/).
These guidelines and techniques can, in exceedingly concrete ways, push for a
world in which disability does not limit agency.
- An equally impressive trove of actionable principles can be found in the
[Internationalization work](https://www.w3.org/blog/international/). This empowers
people to use the Web in the languages of their choice. We will never celebrate the
work of the [Unicode Consortium](https://home.unicode.org/) enough. Bringing all of the
world's languages into a unified system of character encoding is a historic achievement
that "*[respects and empowers users](https://home.unicode.org/about-unicode/)*".
- It's hard to act freely if you can't act safely, which makes
[work on security](https://www.w3.org/Security/) core to the agency project.
[RFC8890](https://www.rfc-editor.org/rfc/rfc8890) ("*The Internet is for End Users*")
captures this well when it states that "*User agents act as intermediaries between a
service and the end user; rather than downloading an executable program from a service
that has arbitrary access into the users' system, the user agent only allows limited
access to display content and run code in a sandboxed environment. End users are
diverse and the ability of a few user agents to represent individual interests
properly is imperfect, but this arrangement is an improvement over the alternative
— the need to trust a website completely with all information on your system to
browse it.*" This trust is empowering.
- And the same can be said about [privacy](https://www.w3.org/Privacy/), which is key to
trust as well. Privacy further matters (as discussed in the
[Privacy Principles](https://w3ctag.github.io/privacy-principles/)) in that it includes
the right to decide what identity you present to others in which contexts. Additionally,
widespread data collection creates information asymmetries and information asymmetries
create power asymmetries. The issue here isn't so much that data might be used to
support mind-controlling AI snake oil but rather that it powers more mundane (and far
more effective) manipulation techniques such as
[hypernudging](https://www.tandfonline.com/doi/abs/10.1080/1369118X.2016.1186713).
These practices (which the W3C refers to as "horizontal review" but that have broader
applicability in the Web community) are all specific, concrete implementations of the
Web's goal of developing user agency. We don't habitually think of them as ethical goals,
but they are: they aren't random things that someone did for fun — they serve a purpose.
And they work *because* they implement ethics that get dirty with the tangible details.
User agency isn't limited to these four things, important as they may be. A great way to
build the future of the Web is to work through a gap analysis of the ways in which we
could be developing user agency. This could include anything from building better ways
to find things *your* way in search and recommendations, to organizing with others and
governing our online environment without having to beg a few large companies to listen,
to [breaking out of the straitjacket of apps and tabs](https://en.wikipedia.org/wiki/The_Humane_Interface).
Additionally, developing ways to ensure that agency is not only possible but, to the
extent possible, mandated by the system also furthers the Web. For that, we need to
think beyond the individual.
## Agency is Collective
Perhaps counterintuitively, focusing on user agency is not an individualistic position.
As Amartya Sen put it, "*Individual freedom is quintessentially a social product, and there
is a two-way relation between 1) social arrangements to expand individual freedoms and 2)
the use of individual freedoms not only to improve the respective lives but also to make
the social arrangements more appropriate and effective.*" The motivation here is clear:
increasing agency has to include the ability to influence collective systems as well as
effective avenues for collective action. Under this view, the Web is (as per Aurora Apolito)
“[*a form of ‘collectivity’ that everywhere locally maximizes individual agency, while making collective emergent structures possible and interesting*](https://c4ss.org/wp-content/uploads/2020/06/Aurora-ScaleAnarchy_ful-version.pdf).”
In practical terms, this has consequences for the evolution of Web architecture. Much of the
Web that exists today rests on the assumption that users exist on someone's server,
essentially as guests on someone else's property. This has bred a default interaction model
in which people have no rights other than those generously granted them by the local
authority. Ultimately, the best governance model that is available in a client/server
architecture is benevolent dictatorship. No matter how you set things up, the server
can ultimately change the rules on a whim. In order to protect user agency and to imagine
the Web as “[*a global community thoroughly structured by non-domination*](https://bookshop.org/books/reconsidering-reparations/9780197508893)” we need to shape Web technology so that it
shifts power to people rather than away from them. Doing so requires the
[return of protocols to the fore](https://knightcolumbia.org/content/protocols-not-platforms-a-technological-approach-to-free-speech) so as to push power to the edges, along with changes
focused on individuals such as a drive to replace external authority with self-certifying
systems; but it also requires the deployment at scale of cooperative computing, of communities under their
own rule and their federations, and of [subsidiarity](https://en.wikipedia.org/wiki/Subsidiarity) built
into the Web so that more central authorities perform only those tasks which cannot be
performed at a more local level.
I will return to these points in greater detail in future posts, but for the time being
suffice it to say: the next evolution of the Web has to further user agency by replacing
today's system in which might makes right. This means that the next Web is about governance
and institutions.
## Onwards
No group has ever managed to align on a definition of the Web, and the odds are that that
debate will continue. But envisioning the Web as the set of digital networked technologies
that work to increase user agency gives us both cohesion and a sense of mission that many of
us have felt but that hadn't been put into words. This framing has further benefits:
- It is anchored in the capabilities approach, a powerful foundation of social ethics designed
to impact the real world and to be measurable.
- It connects clearly with well-established, pragmatic, detailed, and concrete practices that
make this ethical approach not just lofty but in many cases real. The Web community has
been developing and documenting those extensively through its work on accessibility,
internationalization, security, privacy, and more to come.
- It directly supports the development of collective cooperative solutions through which
people can act to improve their world and their communities without having to beg
support from a handful of corporations.
Because of what I set out to do with it, this post was necessarily high level and somewhat
abstract. My goal is to use this as a foundation atop which many of us can build better
systems together.
Philosophers across the millennia may have focused on figuring out whether the ship of
Theseus would remain the same ship if you changed every single one of its planks over
time, but they may have missed the reverse problem: if you *don't* replace each and every
plank as it decays, then eventually the whole thing will rot and you *definitely* won't
have the same ship any more, or any ship at all for that matter.
The Web, too, needs its parts replaced now and then. And frankly, some of our load-bearing
planks are quite distinctly stinking of rot and our ship is taking on water. Knowing what
the Web is and where it's headed helps us identify which parts need servicing and what to
replace them with. It's time to scrape off some barnacles and get to work.
#### Acknowledgements
Many thanks to the following excellent people (in alphabetical order) for their
invaluable feedback:
Amy Guy,
Benjamin Goering,
Boris Mann,
Brian Kardell,
Dietrich Ayala,
Dominique Hazaël-Massieux,
Juan Caballero,
Kjetil Kjernsmo, and
Marcin Rataj.
Needless to say, anything dumb and stupid in this article is entirely mine.