mark nottingham

How the Next Layer of the Internet is Going to be Standardised

Monday, 21 June 2021

Standards Tech Regulation

A big change in how the Internet is defined - and who defines it - is underway.

For a while now, it’s been apparent that Internet and Web standards have stagnated at the ‘top’ of the stack. While the IETF has been busy revising HTTP and replacing TCP down below, a tremendous amount of innovation is going on up top, and it’s all in private hands. This is where most of the apparent value in the Internet now resides: when you ask people ‘what is the Internet?’ they don’t say anything about end-to-end, reliable delivery, stateful resources, or the browser platform; they say ‘social networking, search and shopping’, or more likely, ‘Facebook, Google and Amazon.’

Services like social networking, instant messaging and online marketplaces don’t have to be entrusted into the hands of these companies – they could be distributed and/or federated systems like the lower layers of the Internet, where there isn’t a ‘choke point’ that allows a single entity to control them. There are even existing standards for many of these functions, but they haven’t succeeded. Sometimes, that’s because they weren’t capable enough, or able to evolve quickly enough (see Moxie’s great talk about this), or because someone used market power to shift power away from an open solution that didn’t fit their business model (Google Reader, anyone?).

With this in mind, I’ve been watching the investigations of the competition issues surrounding big tech platforms around the world with great interest.1 It seems to me that an obvious remedy to a concentration of power is to distribute that power by requiring market participants to use open interoperability standards, and competition regulators have the power to force the issue. This won’t solve all problems with the big platforms, of course, but it’s a great start.

Lots of people seem to agree; if you have some reading time, browse through papers from folks like Chris Riley, Peter Swire, the Internet Society, Kerber & Schweitzer, Ian Brown, Kades & Scott Morton, and Chinmayi Sharma - to name just a few.

However, establishing good standards isn’t easy. Balancing different stakeholders’ views, broadness of applicability, implementation concerns, security and privacy, interoperability, and the need to actually ship something is difficult even when everyone gets along. Additionally, the outcome needs to be legitimate on the same scale that the Internet is – globally. As we’ll see, that’s an aspect that requires very careful management.

Over the years, the Internet community has evolved a set of institutions and associated architectural principles to embody these tradeoffs. It’s not that every Internet or Web specification has to go through the W3C or the IETF, or that every standard that these bodies produce is successful (far from it). What these bodies provide is an ever-growing knowledge and experience of what not to do, both technically and organisationally, when you’re creating technical standards for the Internet.

In this context, I found two recent announcements about competition efforts to rein in big tech both interesting and concerning.

The CMA and the Privacy Sandbox

The saga of Google’s Privacy Sandbox and the UK Competition and Markets Authority’s investigation into its competition impact deserves a paper of its own,2 but I want to focus on the CMA’s announcement last week.

In a nutshell, Google’s efforts to improve privacy by removing support for third-party cookies in Chrome have attracted the CMA’s attention, because doing so can be seen to give Google an upper hand in the advertising and publishing markets, since Google will retain access to first-party tracking information. Last week’s announcement stated that Google had made some commitments to the CMA regarding how it would develop the Privacy Sandbox.
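
To make the mechanism in question concrete, here’s a minimal sketch of how an embedded third-party tracker relies on those cookies. The domain and endpoint are invented for illustration; nothing here is taken from Chrome or the Privacy Sandbox proposals.

    // Hypothetical tracking snippet embedded on many unrelated sites.
    // If tracker.example previously set a cookie with "SameSite=None; Secure",
    // the browser attaches it to this cross-site request, letting the tracker
    // correlate one user's visits across every site that embeds the script.
    async function reportPageView(): Promise<void> {
      await fetch("https://tracker.example/pixel?url=" + encodeURIComponent(location.href), {
        credentials: "include", // send the third-party cookie with the request
        mode: "no-cors",        // a tracking pixel doesn't need to read the response
      });
    }

    void reportPageView();

Removing third-party cookie support breaks exactly this kind of cross-site correlation – good for privacy, but it also explains the competition angle: Google keeps its first-party data, while rivals lose the signal.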

Update: The EU has opened a similar investigation.

The CMA’s concerns about the impact of these changes on those markets are entirely reasonable, and some parts of the Privacy Sandbox are dubious at best (e.g., see Mozilla’s assessment of FLoC). However, the Privacy Sandbox also includes functions like:

These functions – which many consider to be normal or even critical browser functions – are now subject to approval by the UK’s competition regulator, at least in the most popular Web browser. By extracting commitments from Google regarding how they build Chrome, the CMA has quietly inserted itself as a ‘silent partner’ to Google. Most significantly, Google is now required to give the CMA 60 days’ notice before removing third-party cookies from the browser, even though other browsers did so long ago.

What’s concerning is that the CMA is creating what’s effectively a parallel process to the one that standards go through, but substituting its own judgement and perspective. The outcomes will be solely dictated by the CMA and the ICO, based upon consultations with those parties that they choose, and supported by British values and a British sense of what ‘privacy’ means and its priority relative to the advertising ecosystem. Whether they get it right for the rest of the world is anybody’s guess; they have a very specific set of goals and incentives which are radically different from those that brought us the Internet and the Web.

Of course, just like other software vendors, browsers have to conform to the legal requirements in the various jurisdictions that they have a market in. This intervention is also fairly ‘light’; they’re not yet telling Google what it must do. So when I saw the news of these commitments, I was concerned for the reasons above, but not yet alarmed. What happened next took care of that.

The ACCESS Act 2021

One of the five US bills designed to rein in big tech that have been getting a lot of press recently is the ACCESS Act 2021.

This bill requires designated platforms (likely candidates would be Google, Facebook and Amazon) to conform to interoperability standards set by a new committee run by the FTC. If it passes, the APIs that define the next layer of the Internet will not be based upon broad community input, review, or participation.

Instead, the FTC will hand-pick the participants at its pleasure, and will retain change control over the APIs.3 Even if the right mix of people gets onto the committee, its role is explicitly ‘advisory’.4 Documentation for those APIs is only supplied to ‘competing businesses or potential competing businesses.’5 There’s no guarantee they’ll be public.

To be clear, I’m super-happy that Lina Khan is now the FTC Chair. Some big platform heads need kicking, and her boots are laced. That said, technical standards defined to force interoperability on big tech will inevitably become the de facto standards for the industries that they’re established within, potentially lasting far longer than the companies they’re designed to hobble (if they do their jobs well). That effectively makes the FTC a new Internet governance institution.

Where is this taking us?

While these developments might be appropriate in the context of domestic competition law, they could have significant negative effects on the Internet overall. I’m concerned about two possible effects: fragmentation and ossification.

Fragmentation

Creating government-set standards for Internet functions will inevitably lead other governments to decide that their priorities are different, or that they just don’t find someone else’s standards palatable. For example, the EU could decide that the UK got the wrong end of the stick on the Privacy Sandbox, or Brazil could decide that the US standards for social networking aren’t for it.

That, in turn, would increase the pressure towards fragmentation, and the corresponding diminishment of the enormous benefits that the Internet has for global society and trade.

Google will have to figure out whether to have a UK version of Chrome, or try to force the UK’s approach to privacy and advertising on everyone else. Facebook will be faced with the possibility of supporting different APIs in different countries – which will probably suit them fine, because it will help prevent a large rival from being established.

The risks of fragmentation have been widely recognised for some time, and as a result, governments all over the world have agreed that the best way to govern the Internet is with a ‘multistakeholder’ model that places responsibility for technical standardisation firmly in the hands of community-led, industry-backed organisations.

For example, in 2017 the G20 (including the UK and US) said that they were

Reaffirming the principle in the G20 Digital Economy Development and Cooperation Initiative commitment to a multistakeholder approach to Internet governance, which includes full and active participation by governments, private sector, civil society, the technical community, and international organisations, in their respective roles and responsibilities. We support multistakeholder processes and initiatives which are inclusive, transparent and accountable to all stakeholders in achieving the digitally connected world.

And, they can be viewed as bound by treaty to do so; as per the WTO Agreement on Technical Barriers to Trade s 2.6:

With a view to harmonizing technical regulations on as wide a basis as possible, Members shall play a full part, within the limits of their resources, in the preparation by appropriate international standardizing bodies of international standards for products for which they either have adopted, or expect to adopt, technical regulations.

I’m not sure if the friction between these developments and their commitments hasn’t been recognised, or if it’s just being ignored. However, if the amount of pain and mistrust involved in the IANA transition is any indication, much of the rest of the world is not content to allow the US (or any other rich Western country) to solely determine the future of the Internet.

Ossification

The other risk is creating friction against future, beneficial changes to the Internet. For example, creating an API for Facebook might only serve to cement them (or their business model) into place as the ‘glue’ of social networking, preventing other, more decentralised models from emerging.
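
As a purely hypothetical illustration – none of these names or shapes comes from the ACCESS Act or any real platform API – a mandated social-graph interface might look something like the sketch below. Once a shape like this is written into regulation, its assumptions about user accounts and a central platform tend to outlive the company it was aimed at.

    // Purely hypothetical: a mandated interoperability interface, sketched only to
    // show how a regulated API freezes in a particular data model (user ids, friend
    // edges, one central platform). None of these names come from the ACCESS Act.
    type ConnectionRecord = { friendId: string; since: string };

    interface MandatedGraphExport {
      // Export a user's connections in a platform-defined shape.
      exportConnections(userId: string): Promise<ConnectionRecord[]>;
    }

    // A stand-in implementation, only to make the sketch concrete and runnable.
    const incumbentPlatform: MandatedGraphExport = {
      async exportConnections(userId: string): Promise<ConnectionRecord[]> {
        return [{ friendId: `${userId}-friend-1`, since: "2021-06-01" }];
      },
    };

    incumbentPlatform.exportConnections("alice").then((connections) => {
      console.log(connections);
    });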

Likewise, forcing Chrome to accommodate third-party cookies and other forms of tracking to prop up the third-party display advertising industry might prevent a natural and healthy evolution towards other forms of advertising and other means of supporting Web content.

In protocol design, we call this ossification – when the ability to evolve is prevented by external factors. Mandated support for specific technical specifications or features, combined with the market power that is being targeted, is a dangerous brew.

To put it another way: if we leave the definition of the Internet to the US Federal Trade Commission and the UK Competition and Markets Authority, we shouldn’t be surprised if the result is focused on companies and competition, rather than people and cooperation. Companies have a huge part to play in the Internet, but it doesn’t mean we should delegate all functions to them, especially when significant advances are being made in decentralisation.

What’s the alternative?

All of this raises the question: can these standards be defined within the multistakeholder system? To put it another way, are the IETF and W3C fit for this purpose?

It’s a good question. These bodies are set up for voluntary, not mandatory standards, and the spectre of mandatory implementation for some parties has a huge distorting effect. As someone who’s been involved in Internet and Web standardisation for more than twenty years, I’m not sure how it would go, and there are a lot of hard questions that would need to be answered.

Still, in some cases there are existing solutions that could be adapted as the basis for interoperability standards (e.g., ActivityStreams for social networking).
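
For instance – and this is only a rough sketch with an invented actor and note, not a recommendation of any particular profile – the core ActivityStreams 2.0 shape for ‘Alice posts a note’ is small enough to show in full:

    // A minimal ActivityStreams 2.0 "Create" activity, loosely typed in TypeScript.
    // The actor URL and content are invented; real deployments (e.g. ActivityPub
    // servers) add ids, addressing and more, but this is the core vocabulary.
    interface ASObject {
      "@context"?: string;
      type: string;
      [key: string]: unknown;
    }

    const post: ASObject = {
      "@context": "https://www.w3.org/ns/activitystreams",
      type: "Create",
      actor: "https://social.example/users/alice",
      object: {
        type: "Note",
        content: "Interoperable social networking doesn't need a single gatekeeper.",
      },
    };

    console.log(JSON.stringify(post, null, 2));

The point isn’t that this particular vocabulary is the answer; it’s that open, already-specified building blocks exist to start from.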

In other cases, the threat of regulation might change power dynamics enough to produce meaningful change. For example, browsers haven’t had a strong incentive to align how they handle third-party cookies to date, but that might change if the CMA, ICO and other interested regulators made their preferences known.

Beyond their standardisation processes, these organisations have pools of deeply experienced engineers who have thought about the issues involved in many relevant areas for a long time. By looking only through the lens of ‘what big companies do I need to talk to?’, the regulators miss out on valuable alternative perspectives.

However, we may not find out how much these bodies could bring to the table: apparently, many of the relevant decision-makers consider these institutions to be compromised by big tech companies. I think that’s a simplistic view of two complex organisations who have many participants with different incentives and goals. That said, this view can’t be dismissed out of hand; big tech companies clearly have significant power inside standards organisations as well as in the market, and multistakeholderism may be unstable ground for legitimacy on its own.

So, I think that these governance institutions need to do some introspection – to find how they can maintain (or build) legitimacy and trust, so that bodies like the CMA and FTC can plausibly defer to their open, community-based global processes for technical standardisation. If that isn’t possible, buckle up: the way the Internet is standardised is going to change a lot, and it’s not at all clear where we’ll end up.

  1. Enough interest to start a Law degree; but that’s another story. 

  2. I wrote one for Competition in Digital Markets; see the preprint.

  3. s 4(e)(1). 

  4. s 7(d). 

  5. s 4(e)(3)(a).