Feed fetched in 221 ms.
Warning Feed URL redirected to https://mnot.net/blog/index.atom.
Warning Content type is application/atom+xml, not text/xml or application/xml.
Feed is 67,605 characters long.
Feed has an ETag of "10b13-6508872803118".
Feed has a last modified date of Tue, 28 Apr 2026 17:20:27 GMT.
Feed is well-formed XML.
Warning Feed has no styling.
This is an Atom feed.
Feed title: Mark Nottingham
Error Feed self link: https://mnot.net/blog/index.atom does not match feed URL: https://www.mnot.net/blog/index.atom.
Warning Feed is missing an image.
Feed has 5 items.
First item published on 2026-04-24T00:00:00.000Z
Last item published on 2026-01-20T00:00:00.000Z
All items have published dates.
Newest item was published on 2026-04-24T00:00:00.000Z.
Info Feed's Last-Modified date is newer than the newest item's published date (2026-04-28T17:20:27.000Z > 2026-04-24T00:00:00.000Z).
Home page URL: https://mnot.net/blog/
Error Home page does not have a matching feed discovery link in the <head>.
Error Home page does not have a link to the feed in the <body>.
<?xml version="1.0" encoding="utf-8"?>
<feed xmlns="http://www.w3.org/2005/Atom">
<title>Mark Nottingham</title>
<link rel="alternate" type="text/html" href="https://mnot.net/blog/"/>
<link rel="self" type="application/atom+xml" href="https://mnot.net/blog/index.atom"/>
<id>tag:mnot.net,2010-11-11:/blog//1</id>
<updated>2026-04-28T17:20:19Z</updated>
<subtitle></subtitle>
<entry>
<title>What's Missing in the ‘Agentic’ Story</title>
<link rel="alternate" type="text/html" href="https://mnot.net/blog/2026/agents_as_collective_bargains"/>
<id>https://mnot.net/blog/2026/agents_as_collective_bargains</id>
<updated>2026-04-24T00:00:00Z</updated>
<author>
<name>Mark Nottingham</name>
<uri>https://mnot.net/personal/</uri>
</author>
<summary>Every online interaction is a lopsided negotiation. For AI to truly work for us, we need more than just safety -- we need to start building true agency as a form of collective bargaining.</summary>
<category term="Internet and Web"/>
<content type="html" xml:lang="en" xml:base="https://mnot.net/blog/2026/agents_as_collective_bargains"><![CDATA[<p class="intro">For much of the history of computing, it was reasonably safe to assume that a machine was doing what you told it to do (and what its creators promised it would do), because its operations were local.</p>
<p>You bought a laptop or desktop with an operating system, and it did what it said on the tin: it ran programs and stored files. You bought a spreadsheet and a word processor, and those programs performed those tasks and didn’t do anything else. Software that didn’t do this was in a separate bucket called ‘malware’ and we had ways of dealing with it.</p>
<p>That assumption has a more general precedent in tools – whether they be staplers, screwdrivers, or telescopes. When you buy a screwdriver, it turns screws; it has no agency of its own. It might do other things, but that’s because you’re misusing the tool, not because it decided to do something else. Most things that people use unambiguously follow this pattern: for example, my mechanical wristwatch can’t do anything but tell me the time.<sup id="fnref:1"><a href="#fn:1" class="footnote" rel="footnote" role="doc-noteref">1</a></sup></p>
<p>That pattern is perpetuated in most<sup id="fnref:2"><a href="#fn:2" class="footnote" rel="footnote" role="doc-noteref">2</a></sup> depictions of computers in fiction (especially sci-fi), which work for people diligently and always on their behalf, usually with minimal intrusion. They unambiguously act in the interest of their users, reflecting the technological optimism that informs much of science fiction and influenced a generation of nerds who tried to build what it promised.</p>
<p>All of these experiences combine to lead people to trust computers fairly unquestioningly; they don’t give much thought to the other purposes that might be served. When I use my phone, it’s <strong>my</strong> phone, and so it’s working for me, right? This is perpetuated in the press: recently, I saw an article in a major newspaper about how to talk to “your” AI agent.</p>
<p>If you scratch the surface just a bit, however, none of this is <em>true</em> when applied to modern technologies, and these assumptions are not safe.</p>
<h3 id="the-state-of-trust-on-the-internet">The State of Trust on the Internet</h3>
<p>Every time you use an Internet-connected computer, you’re trusting someone (and most likely, a multitude) to act on your behalf. From an application’s code all the way down to the silicon, software and hardware and the network services they use reliably embed the interests of those that create them – and they may or may not be aligned with yours.</p>
<p>Critically, those layers are usually – but not always – arranged in such a way that the interests of their producers and users are aligned. People creating computer chips are competing with other people creating chips, and so they focus on that; if they try to abuse their position by (say) exfiltrating your passwords in a side channel, the market (and possibly a legal regulator) will punish them.</p>
<p>However, modern businesses have become adept at exploiting the gaps in this arrangement. Now, if you use a ‘smart’ watch or your phone to check the time, it’s likely more accurate but you have to contend with the possibility that it’s reporting your location, activities, and who knows what else back to its creator – and that they might be sharing that information with others. And that’s also the case for every other application running.</p>
<p>Those abuses aren’t obvious, and it’s very easy for people to look at an Internet-connected device and fail to recognise that even though it’s “theirs” and that the data it processes is also “theirs”, they’re placing an inordinate amount of trust into a galaxy of faceless parties – trust that may not be deserved or protected. For example:</p>
<ul>
<li>TVs are widely known to <a href="https://arstechnica.com/tech-policy/2025/12/texas-sues-biggest-tv-makers-alleging-smart-tvs-spy-on-users-without-consent/">spy on their users’ activities without consent</a>.</li>
<li>Meta <a href="https://arstechnica.com/tech-policy/2024/03/facebook-secretly-spied-on-snapchat-usage-to-confuse-advertisers-court-docs-say/">decided to decrypt private traffic from ‘research’ users’ phones</a> to competing services and store it on their own servers. Predictably, once the users found out, it <a href="https://storage.courtlistener.com/recap/gov.uscourts.cand.369872/gov.uscourts.cand.369872.736.0.pdf">ended up in court</a>.</li>
<li>At the same time, Facebook also <a href="https://arstechnica.com/gadgets/2024/03/netflix-ad-spend-led-to-facebook-dm-access-end-of-facebook-streaming-biz-lawsuit/2/">let Netflix have access to users’ private Direct Messages</a>, creating yet another lawsuit.</li>
<li>Microsoft quietly changed the model of their ‘new Outlook’ e-mail client to <a href="https://www.ghacks.net/2024/01/12/proton-mail-says-that-the-new-outlook-app-for-windows-is-microsofts-new-data-collection-service/">surreptitiously send passwords for third-party e-mail servers to their cloud</a>, so that they can share it with more than 700 of their closest friends (i.e., data brokers and advertisers).</li>
<li>Various automakers <a href="https://foundation.mozilla.org/en/blog/privacy-nightmare-on-wheels-every-car-brand-reviewed-by-mozilla-including-ford-volkswagen-and-toyota-flunks-privacy-test/">collect detailed information</a> and share it with other parties, including data brokers and <a href="https://www.nytimes.com/2024/03/11/technology/carmakers-driver-tracking-insurance.html">insurance companies</a> – to the point where it’s difficult to find a car that doesn’t violate your trust.</li>
<li>Ring (i.e., Amazon) was so sloppy with their security practices that ‘rogue insiders’ as well as hackers <a href="https://www.theregister.com/2024/04/25/ring_ftc_settlement/">exploited their access to people’s video cameras</a>.</li>
<li>Grindr <a href="https://arstechnica.com/tech-policy/2024/04/grindr-users-seek-payouts-after-dating-app-shared-hiv-status-with-vendors/">shared highly sensitive health information</a> with third parties without permission.</li>
<li>Photobucket <a href="https://blog.ericgoldman.org/archives/2026/03/photobuckets-attempted-tos-amended-mostly-fails-pierce-v-photobucket.htm">aggressively changed terms of service</a> to allow AI use of people’s photos, but failed in court.</li>
</ul>
<p>This is just a small selection; there are many more. All of these are stunning violations of trust. And, it’s becoming <em>normal</em>.</p>
<p class="hero">How did we get here? If I were to speculate on the reasons for that, I’d say it’s a combination of the normalisation of <strong>cloud computing</strong> (because everything is now running on or connected to computers you don’t control), the <strong>expectations of higher and higher growth and returns</strong> by investors, putting pressure on companies for new and recurring revenue, and – more than anything – the <strong>weakness of any regulating forces</strong> on these actors.</p>
<h3 id="user-agents-are-a-form-of-collective-bargaining">User Agents are a Form of Collective Bargaining</h3>
<p>Although it’s difficult to trust anyone on the Internet given the examples above, it could be much, much worse. Imagine if you had to install a program on your computer from every company, government body, and other entity that you interact with, and those programs had full access to do what they like on your system. In other words, every online interaction becomes an opportunity to install malware that can extract your personal information, delete files or hold them ransom, profile and monitor your behaviour, and generally ignore your interests in favour of theirs.</p>
<p>What prevents that on the modern Internet? In many cases, it’s the humble Web browser, which selectively exposes capabilities to Web sites without offering full access to your computer. This is called a <a href="https://www.w3.org/news/2025/group-note-draft-web-user-agents/">User Agent</a> – software that acts on your behalf, representing your interests in your interactions with other parties.</p>
<p>And while the Web browser is representing your interests, it’s <em>also</em> balancing them with the interests of the sites that you visit – it’s an <strong>agent for them too</strong>. They want the page to render in a predictable way, but some users want to use accessibility tools. People don’t want to be tracked, but sites need <em>some</em> indication of how their pages are consumed. For the Web, all of these delicate tradeoffs are made within a framework of shared principles and values and decided in transparent fora using consensus processes – namely, the relevant standards bodies (usually, the W3C or IETF). There’s also more than one Web browser, so you can choose the agent that best represents your interests – thereby creating market pressure to do so.</p>
<p class="hero">Importantly, this is done in a way that results in the <em>same deal for everyone</em>. If you had to negotiate what Web sites are allowed to do on your computer on a case-by-case basis, you’d quickly give up out of exhaustion (and indeed, we see this in cookie banners, a notable failure). In the bargain between big sites and individual users, the sites have more <em>bargaining power</em> and therefore users’ interests need to be considered holistically – not on a case-by-case basis where sites can chip away at them. A browser embeds what is effectively a global treaty between sites and users.</p>
<p>That’s not to say that Web browsers are perfectly aligned with users’ interests; the fights over DRM and advertising/tracking show that there’s disagreement on what the right balance is, or even on what those interests are. User agents can also just get it wrong; for example, Google <a href="https://www.theregister.com/2024/04/01/google_will_delete_data_incognito/">kept users’ data from private browsing mode in Chrome</a>.</p>
<p>As I’ve argued before, <a href="https://www.mnot.net/blog/2026/02/13/no">Web browsers also show a distinct lack of ambition</a>. While they protect the data and capabilities on your computer, and (mostly) isolate Web sites from each other, they don’t work hard enough to protect the data you give to sites by creating higher-level capabilities.</p>
<p>Despite those shortcomings, Web browsers are a good example of how user agency should be done. There are other platforms that aspire to represent users’ interests – for example, iOS and Android. These, however, are single implementations where all of the decisions are made opaquely by a lone corporation. The checks and balances on their power are very limited and very different to those on Web browsers.</p>
<h3 id="why-ai-needs-user-agency">Why AI Needs User Agency</h3>
<p>It’s notoriously difficult to predict how Large Language Models are going to change the world in the long term. That said, everyone is excited about the possibility of ‘agentic’ AI, with many breathlessly predicting that it will transform, well, <a href="https://www.mckinsey.com/capabilities/quantumblack/our-insights/seizing-the-agentic-ai-advantage">everything</a>. Briefly, the idea is that a LLM with tool capabilities can act on your behalf – i.e., be your agent.</p>
<p>Putting aside the question of where we’re at in the hype cycle, the models of agency being discussed here are relatively simplistic, when you compare them to Web browsers. That’s largely because there’s no single definition of what an AI agent or chatbot does and does not do – it’s just a concept at this point. As a result, unless you write your own agent (or have AI do it for you), you’re using a piece of software that embeds others’ interests without much accountability, checks or balances. While it claims to work for you, you have little assurance that it’s actually doing so.<sup id="fnref:3"><a href="#fn:3" class="footnote" rel="footnote" role="doc-noteref">3</a></sup></p>
<p>That lack of trustworthiness cuts both ways. The data and services that the agent consumes have little visibility into how they will be used, because the agent could be doing <em>anything</em> – unlike a Web browser, which puts some rough guide rails around how a Web site’s data is used and creates expectations about capabilities and behaviour.</p>
<p>In other words, the <strong>lack of a well-defined user agent role in AI</strong> that’s backed up by transparent, public standards that embed checks and balances on both parties to an interaction leaves a gap – it <strong>makes it harder for a marketplace to form</strong>.</p>
<p>That’s not to say that there isn’t a place for agentic AI without a well-defined concept of a user agent role. Agents in limited domains that have assumed trust – like inside enterprises and with their third-party vendors – will likely thrive without one, because the contractual relationships between those parties will regulate their behaviour. And of course, we’re already seeing accelerating adoption of AI chatbots for accessing information online, even though they are currently opaque and unconstrained.</p>
<p>However, that will limit the usefulness and application of agentic AI. Using agents written by other people will require a leap of trust similar to that required when using Android or iOS – and it’s not clear whether the companies that write them will be worthy of that trust, especially if they proliferate. Likewise, online data sources will be reluctant to trust random agents because they don’t know what will happen to the data – an agent could use it for its stated purpose and then dispose of it responsibly, or it could store or republish it.</p>
<p>Some proposals for AI agents assume that putting agentic code in a TEE or similar ‘jail’ will solve these problems, but that ignores the need to collectively bargain – if agents can ask for intrusive permissions, we’re pretty much guaranteed a world where they constantly bug us for them, and everyone will lose out in that environment, because trust will be regularly abused and thus eroded.</p>
<p>Another alternative is to have AI experiences locked up in proprietary platforms. Consider, however, what kinds of experiences that will lead to:</p>
<blockquote>
<p>It is no accident that Meta is interested in smart glasses. With built-in cameras, lenses that can display WhatsApp messages and speakers that direct sound straight to the ear, the devices only make it easier for users to share what they are up to on social media and follow what others are doing. For Meta, more time spent on its platforms means more ad revenue. Amazon would likewise be delighted to have its Echo speakers in every home and its glasses on every face to gather more data for its growing ad business and make it even easier to buy from its marketplace. And OpenAI would be well served if people ditched their screens and relied instead on a chatbot to handle their interactions with the digital world.</p>
</blockquote>
<p>– <a href="https://www.economist.com/business/2026/01/25/will-the-smartphone-survive-the-ai-age">The Economist</a></p>
<p>Defining a user agent role for AI agents would also make agents more legible to legal regulation. With such a strong focus on “AI safety” by regulators today, an architecture that assured certain properties could be an important component of a solution in this space, not only creating more competition but also forestalling more onerous legal regulation.</p>
<p>Finally, although allowing AI agents to be <em>anything</em> promises lots of opportunities, placing constraints upon them not only helps users and services build trust in them, it also helps people more easily conceptualise what they do. Simply put, users are confused when technology offers too many choices. It’s understandable that industry doesn’t want to constrain the options for agents at this early point in their development, but at some point that wide open nature is going to hurt more than help. The vast majority of people don’t understand what’s happening when they use computers, nor should they be expected to.</p>
<h3 id="what-an-ai-user-agent-might-look-like">What an AI User Agent Might Look Like</h3>
<p>The problem with developing an AI UA now is that by nature, it has to put constraints on how AI is used, at a time when everyone is still exploring what AI <em>is</em>. Being an agent means carefully considering consequences and balancing the interests, and this is easy to get wrong.</p>
<p>Consider, for example, the Ring camera. Amazon thought it was unambiguously good to allow the police to use a network of cameras to find ‘bad guys’, and that turned out to be not just naive, but disastrously wrong. Allowing people to opt out was not sufficient to balance the interests here – what was lacking was a principled approach to rights in their architecture.</p>
<p>I suspect this is one of the reasons Apple is taking so long to enhance Siri. It’s easy to install OpenClaw and let it wreak havoc on your personal data (promoting what used to be malware into something people install willfully!); it’s a lot harder to build an ecosystem that respects user rights, creates market opportunities, and promotes a healthy ecosystem that doesn’t burden the user with an avalanche of choices. If everyone is operating their own isolated and bespoke environment, we lose the collective power of agency – both for users and the market.</p>
<p>It might be that a whole new platform (whether from Apple, OpenClaw, or elsewhere) gets developed, or it might be that AI capabilities are organically added to the Web. Projects like <a href="https://a2ui.org">A2UI</a> also show some small steps in this direction.</p>
<p>In general, though, creating an agent role for AI – with all of the benefits to the user and market that brings – will require constraining the tools that it can call in a fashion that becomes ‘normal’, so that people can depend on how it behaves. That might involve standard tool APIs with appropriate constraints, permission models, sandboxing (TEE or otherwise), and much more.</p>
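<p>One way to picture such a constraint is a deny-by-default gate between the model and its tools, analogous to a browser permission prompt. Everything in the following sketch is hypothetical: the tool names and the permission map are illustrative, not drawn from any real agent framework.</p>

```python
# Hypothetical sketch of a constrained tool-call gate for an AI agent.
# Tools are only callable if the user has granted the matching
# permission; anything unlisted is denied by default, mirroring how
# a Web browser mediates access to capabilities on your computer.
# All names here are illustrative assumptions, not a real API.

ALLOWED = {
    "fetch_url": True,       # user granted network reads
    "read_calendar": False,  # user declined calendar access
}

def call_tool(name: str, granted: dict[str, bool]) -> str:
    """Invoke a tool only if permission was explicitly granted."""
    if not granted.get(name, False):  # unknown tools are denied too
        return f"denied: {name}"
    return f"ok: {name}"

print(call_tool("fetch_url", ALLOWED))      # permitted
print(call_tool("read_calendar", ALLOWED))  # refused
print(call_tool("delete_files", ALLOWED))   # never granted, refused
```

The point of the deny-by-default stance is that the bargain is struck once, collectively, rather than renegotiated with every site or service the agent touches.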
<p>All of these issues are currently swept under the carpet of ‘security’ in many AI discussions. We need to start talking about them with more nuance. Security is a defensive posture; agency is a functional right.</p>
<p class="hero">But perhaps the most consequential – and hidden – aspect we should be considering is how we get to a common idea of an AI platform – including user agency. Will it be like the major mobile platforms, controlled by private and well-intentioned but self-interested and conflicted actors – with almost inevitable competition and consumer regulation following? Or will it be a publicly accountable (and inevitably messy and laggy) process, like the Web?</p>
<div class="footnotes" role="doc-endnotes">
<ol>
<li id="fn:1">
<p>And date, and perhaps other things, depending on <a href="https://www.hodinkee.com/articles/introducing-vacheron-constantin-les-cabinotiers-solaria">how complicated it is</a>. <a href="#fnref:1" class="reversefootnote" role="doc-backlink">↩</a></p>
</li>
<li id="fn:2">
<p>Notable exceptions include <a href="https://www.youtube.com/watch?v=NqCCubrky00">2001: A Space Odyssey</a>. <a href="#fnref:2" class="reversefootnote" role="doc-backlink">↩</a></p>
</li>
<li id="fn:3">
<p>Beyond that provided by legal protections such as contract and product liability. Comparing that to the regulation provided by architecture is something I’ll address in another post. <a href="#fnref:3" class="reversefootnote" role="doc-backlink">↩</a></p>
</li>
</ol>
</div>]]></content>
</entry>
<entry>
<title>Using AI to Evaluate Internet Standards (Part Two)</title>
<link rel="alternate" type="text/html" href="https://mnot.net/blog/2026/using_ai"/>
<id>https://mnot.net/blog/2026/using_ai</id>
<updated>2026-03-25T00:00:00Z</updated>
<author>
<name>Mark Nottingham</name>
<uri>https://mnot.net/personal/</uri>
</author>
<summary>Standards work is notoriously hard to track. Let’s explore if grounding AI in working group records can make that history more accessible.</summary>
<category term="Standards"/>
<category term="Internet and Web"/>
<content type="html" xml:lang="en" xml:base="https://mnot.net/blog/2026/using_ai"><![CDATA[<p class="intro">I’ve previously looked at <a href="https://www.mnot.net/blog/2025/06/04/using_ai">using AI as a tool to evaluate technical standards efforts</a> – basically, asking commercially available chatbots what they think. However, “AI” is more than off-the-shelf, general-purpose chatbots. Can we do better by grounding the model in a specific context?</p>
<p>I’ve been looking for ways to use <a href="https://notebooklm.google.com">NotebookLM</a> for a while: grounding a chatbot in a specific set of documents allows you to interact with them in a genuinely new way.</p>
<p>The breakthrough question for me was simple: What if those documents were the records of a working group? Thanks to record-keeping requirements, meetings need to keep minutes, document drafts are available, and often groups keep additional information like issue lists and meeting transcripts.</p>
<p>Feed all of that into NotebookLM and you can effectively chat with the history of a standards effort – asking about why a particular choice was made, who participated, what objections came up, and how a specification evolved.</p>
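<p>Gathering those records is mostly a matter of fetching a handful of well-known URLs. As a rough illustration (the URL patterns below are assumptions based on current Datatracker conventions, not a documented API), a small helper might assemble them like this:</p>

```python
# Sketch: build the public record URLs for an IETF working group,
# ready to be fetched and dropped into a NotebookLM notebook.
# The URL patterns are assumptions based on how the Datatracker
# currently lays out its pages; they are not a stable, documented API.

def wg_record_urls(acronym: str, meetings: list[str]) -> dict[str, str]:
    """Map record names to their (assumed) Datatracker URLs."""
    base = "https://datatracker.ietf.org"
    urls = {"charter": f"{base}/wg/{acronym}/about/"}
    for m in meetings:
        # Minutes and slides are linked from each meeting's materials page.
        urls[f"minutes-{m}"] = f"{base}/meeting/{m}/materials"
    return urls

urls = wg_record_urls("geopriv", ["110", "111"])
for name, url in urls.items():
    print(name, url)
```

From there it’s just a download loop; drafts, transcripts, and GitHub issues would need their own (similarly simple) URL patterns.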
<p class="hero">I suspect this capability could be significant, precisely because the barriers to entry for tracking and understanding standards work are so high. There is simply too much going on — too many emails, issues, and drafts — for most people to follow.</p>
<p>If successful, this technique might help make standards efforts more legible to:</p>
<ul>
<li><strong>New or casual participants</strong>, who currently face a “wall of text” when trying to catch up on years of debate.</li>
<li><strong>Product managers and developers</strong>, who need to understand the intent behind a specification, not just the syntax.</li>
<li><strong>Civil society and policymakers</strong>, for whom the technical archives are often effectively opaque.</li>
</ul>
<h3 id="ai-preferences">AI Preferences</h3>
<p>My first go at this technique was in a working group I chair, <a href="https://ietf-wg-aipref.github.io">AI Preferences</a>. We needed a way to get new and casual participants up to speed on discussions, so that we didn’t need to keep repeating the same arguments.</p>
<p>Here’s <a href="https://notebooklm.google.com/notebook/37add563-249f-442e-a604-1f8d8c1bc113">the notebook</a> I created.<sup id="fnref:1"><a href="#fn:1" class="footnote" rel="footnote" role="doc-noteref">1</a></sup> I asked it to summarise the arguments against proposals for a <a href="https://github.com/ietf-wg-aipref/drafts/wiki/Use-Proposals">“use” term</a> and a <a href="https://github.com/ietf-wg-aipref/drafts/wiki/Search-Proposals">“search” term</a> in the vocabulary.</p>
<p>Privately, I got feedback from new participants that these were very useful – and, critically, I was able to create them without injecting my own biases.</p>
<h3 id="geopriv">GEOPRIV</h3>
<p>Another test case is the now-finished IETF work on <a href="https://datatracker.ietf.org/wg/geopriv/about/">Geolocation Privacy</a>. I wasn’t involved in this group, but have long heard my IETF colleagues whisper about it in hushed tones; it didn’t succeed, and caused a lot of pain on the way there.</p>
<p>After gathering the relevant documents and dragging them <a href="https://notebooklm.google.com/notebook/083c8968-7322-495d-aeb1-99bf864a2374">into a notebook</a>,<sup id="fnref:1:1"><a href="#fn:1" class="footnote" rel="footnote" role="doc-noteref">1</a></sup> I asked:</p>
<blockquote>
<p>Why did GEOPRIV fail?</p>
</blockquote>
<p>Here’s the <a href="https://docs.google.com/document/d/1TKBwpgC9RnlX2_0ux4k0Lm47o2BgZ3Dr9rh9UlXqMC8/edit?usp=sharing">full response</a>. <a href="https://au.linkedin.com/in/martinthomson">Martin Thomson</a> (who was intimately involved in that work) reviewed that answer and said:</p>
<blockquote>
<p>The privacy part is broadly correct. The whole on-behalf-of arrangement did lead to some fairly bitter fights. […] Fights were common. The part about wars is entirely accurate. I’m not sure about the over-engineering part, though maybe that relates to the privacy aspect, which is fair. The final thing about lack of commercial success is broadly right, modulo successful deployments for emergency services geolocation.</p>
<p>So I’d say that this is maybe 80%.</p>
</blockquote>
<h3 id="a-new-tool">A New Tool</h3>
<p>The hard part of all of this is getting all of the documents together in one place to feed into NotebookLM. To make that easier, at least for IETF groups, I<sup id="fnref:2"><a href="#fn:2" class="footnote" rel="footnote" role="doc-noteref">2</a></sup> created a new tool, <a href="https://pypi.org/project/ietf-notebook/">ietf-notebook</a>.</p>
<p>You can install it using <a href="https://pipx.pypa.io/latest/">pipx</a>:</p>
<pre><code>pipx install ietf-notebook</code></pre>
<p>Then, use it to gather all of a group’s drafts, RFCs, meeting minutes and transcripts, its charter, and optionally its GitHub issues into a directory, ready for dragging into a new notebook, so you can chat with that group’s history.</p>
<p>It’s still rough, so bug reports, suggestions, and improvements are most welcome. In my experience, it takes less than a minute to gather the documents for most groups, so you can be chatting with a group in almost no time.</p>
<p>If you want to see a demo first, check out the notebooks for <a href="https://notebooklm.google.com/notebook/37add563-249f-442e-a604-1f8d8c1bc113">AIPREF</a>, <a href="https://notebooklm.google.com/notebook/f998edaf-e5c5-4bb6-994e-b439dfa436f5">DIEM</a>, and <a href="https://notebooklm.google.com/notebook/083c8968-7322-495d-aeb1-99bf864a2374">GEOPRIV</a>.<sup id="fnref:1:2"><a href="#fn:1" class="footnote" rel="footnote" role="doc-noteref">1</a></sup></p>
<div class="footnotes" role="doc-endnotes">
<ol>
<li id="fn:1">
<p>You’ll need to be logged into Google to use these notebooks. <a href="#fnref:1" class="reversefootnote" role="doc-backlink">↩</a> <a href="#fnref:1:1" class="reversefootnote" role="doc-backlink">↩<sup>2</sup></a> <a href="#fnref:1:2" class="reversefootnote" role="doc-backlink">↩<sup>3</sup></a></p>
</li>
<li id="fn:2">
<p>OK, Gemini. <a href="#fnref:2" class="reversefootnote" role="doc-backlink">↩</a></p>
</li>
</ol>
</div>]]></content>
</entry>
<entry>
<title>The Internet Isn’t Facebook: How Openness Changes Everything</title>
<link rel="alternate" type="text/html" href="https://mnot.net/blog/2026/open_systems"/>
<id>https://mnot.net/blog/2026/open_systems</id>
<updated>2026-02-20T00:00:00Z</updated>
<author>
<name>Mark Nottingham</name>
<uri>https://mnot.net/personal/</uri>
</author>
<summary>Openness makes the Internet harder to govern — but also makes it resilient, innovative, and difficult to capture. Let's look at how the openness of the Internet both defines it and ensures its success.</summary>
<category term="Tech Regulation"/>
<category term="Internet and Web"/>
<content type="html" xml:lang="en" xml:base="https://mnot.net/blog/2026/open_systems"><![CDATA[<p class="intro">“Open” tends to get thrown around a lot when talking about the Internet: Open Source, <a href="https://www.mnot.net/blog/2024/07/05/open_internet_standards">Open Standards</a>, Open APIs. However, one of the most important senses of the Internet’s openness doesn’t get discussed as much: its openness <em>as a system</em>. It turns out this has profound effects on both the Internet’s design and how it might be regulated.</p>
<p>This critical aspect of the Internet’s architecture needs to be understood more now than ever. For many, digital sovereignty is top-of-mind in the geopolitics of 2026, but some conceptions of it treat openness as a bug, not a feature. The other hot topic – regulation to address legitimately perceived harms on the Internet – can put both policy goals and the value we get from the Internet at risk if it’s undertaken in a way that doesn’t account for the openness of the Internet. Properly utilised, though, the power of openness can actually help democracies contribute to the Internet (and other technologies like AI) in a constructive way that reinforces their shared values.</p>
<h3 id="open-and-shut">Open and Shut</h3>
<p>Most often, people think and work within <em>closed systems</em> – those whose boundaries are fixed, where internal processes can be isolated from external forces, and where power is concentrated hierarchically. That single scope can still embed considerable complexity, but the assumptions that its closed nature allows make certain skills, tools, and mindsets advantageous. This simplification helps compartmentalise effects and reduces interactions; it’s easier when you don’t have to deal with things you don’t (and can’t) know, much less control.</p>
<p>Many things we interact with daily are closed – for example, a single company, a project group, or even a legal jurisdiction. The Apple App Store, air traffic control, bank clearing systems, and cable television networks are closed; so are many of the emerging AI ecosystems.</p>
<p>The Internet is not like that.</p>
<p>That’s because it’s not possible to know or control all of the actors and forces that influence and interact with the Internet. New applications and networks appear daily, without administrative hoops; often, this is referred to as “<a href="https://www.internetsociety.org/blog/2014/04/permissionless-innovation-openness-not-anarchy/">permissionless innovation</a>,” which allowed things like the Web and real-time video to be built on top of the network without asking telecom operators for approval. New protocols and services are constantly proposed, implemented and deployed – sometimes through an <abbr title="Standards Developing Organisation">SDO</abbr> like the <abbr title="Internet Engineering Task Force">IETF</abbr>, but often without any formal coordination.</p>
<p>This is an open system, and it’s important to understand how that openness constrains the nature of what’s possible on the Internet. What works in a closed system falls apart when you try to apply it to the Internet. Openness makes introducing new participants and services very easy – and that’s a huge benefit – but it also makes other aspects of managing the ecosystem very different (and sometimes difficult). Let’s look at a few.</p>
<h3 id="designing-for-openness">Designing for Openness</h3>
<p>Designing an Internet service like an online shop is easy if you assume it’s a closed ecosystem with an authority that ‘runs’ the shop. Yes, you have to deal with accounts, and payments, and abuse, and all of the other aspects, but the issues are known and can be addressed with the right amount of capital and a set of appropriate professionals.</p>
<p>By contrast, designing an open trading ecosystem where there is no single authority lurking in the background and making sure everything runs well is an entirely different proposition. You need to consider how all of the components will interact, while at the same time ensuring that none is inappropriately dominated by a single actor (or even a small set of them) unless there are appropriate constraints on their power. You need to make sure that the amount of effort needed to join the system is low, while at the same time fighting the abusive behaviours that leverage that low barrier, such as spam.</p>
<p class="callout">This is why regulatory efforts that are focused on reforming currently closed systems – “opening them up” by compelling them to expose APIs and allow competitors access to their systems – are unlikely to be successful, because those platforms are designed with assumptions that you can’t take for granted when building an open system. I’ve <a href="https://www.mnot.net/blog/2024/11/29/platforms">written previously</a> about Carliss Baldwin’s excellent work in this area, primarily from an economic standpoint. An open system is not just a closed one with a few APIs grafted onto it.</p>
<p>For example, you’re likely to need a reputation system for vendors and users, but it can’t rely on a single authority making judgment calls about how to assign reputation, handle disputes, and so forth. Instead, you’ll want to make it more modular, where different reputation systems can compete. That’s a very different design task, and it is undoubtedly harder to achieve a good outcome.</p>
<p>At the same time, an open system like the Internet needs to be more pessimistic in its assumptions about who is using it. While closed systems can take drastic steps like excluding bad actors from them, this is much more difficult (and problematic) in an open system. For example, a closed shopping site will have a definitive list of all of its users (both buyers and sellers) and what they have done, so it can ascertain how trustworthy they are based upon that complete view. In an open system, there is no such luxury – each actor only has a partial view of the system.</p>
<h3 id="introducing-change-in-open-systems">Introducing Change in Open Systems</h3>
<p>An operator of a proprietary, closed service like Amazon, Google, or Facebook has a view of its entire state and is able to deploy changes across it, even if they break assumptions its users have previously relied upon. Their privileged position gives them this ability, and even though these services run on top of the Internet, they don’t inherit its openness.</p>
<p>In contrast, an open system like e-mail, federated messaging, or Internet routing is much harder to evolve, because you can’t create a list of who’s implementing or using a protocol with any certainty; you can’t even know all of the <em>ways</em> it’s being used. This makes introducing changes tricky; as is often said in the <abbr title="Internet Engineering Task Force">IETF</abbr>, <strong>you can’t have a protocol ‘flag day’ where everyone changes how they behave at the same time</strong>. Instead, mechanisms for gradual evolution (extensibility and versioning) need to be carefully built into the protocols themselves.</p>
<p>The Web is another example of an open system.<sup id="fnref:1"><a href="#fn:1" class="footnote" rel="footnote" role="doc-noteref">1</a></sup> No one can enumerate all of the Web servers in the world – there are just too many, some hidden behind firewalls and logins. There are whole social networks and commerce sites that you’ve never heard of in other parts of the world. While search engines make us feel like we see the whole Web (and have every incentive to make us believe that), it’s a small fraction of the real thing that misses the so-called ‘deep’ Web. This vastness is why browsers have to be so conservative in introducing changes, and why we have to be so careful when we update the HTTP protocol.</p>
<h3 id="governing-open-systems">Governing Open Systems</h3>
<p>Openness also has significant implications for governance. Command-and-control techniques that work well when governing closed systems are ineffective on an open one, and can often be counterproductive.</p>
<p>At the most basic level, this is because there is no single party to assign responsibility to in an open system – its governance structure is polycentric (i.e., has multiple and often diffuse centres of power). Compounding that effect is the fact that large open systems like the Internet span multiple jurisdictions, so a single jurisdiction is always going to be playing “whack-a-mole” if it tries to enforce compliance on one party. As a result, decisions in open systems tend to take much more time and effort than anticipated if you’re used to dealing with closed, hierarchical systems.</p>
<p>On the Internet, another impact of openness is seen in the tendency to create “building block” technology components that focus on enabling communication, not limiting it. That means that they are designed to support broad requirements from many kinds of users, not constrain them, and that they’re composed into layers which are distinct and separate. So trying to use open protocols to regulate the behaviour of Internet users is often like trying to pin spaghetti to the wall.</p>
<p>Consider, for example, the UK’s attempts to regulate user behaviour by regulating lower-layer general-purpose technologies like <abbr title="Domain Name System">DNS</abbr> resolvers. Yes, they can make it more difficult for those using common technology to do certain things, but actually stopping such behaviour is very hard, due to the flexible, layered nature of the Internet; determined people can do the work and use alternative <abbr title="Domain Name System">DNS</abbr> servers, encrypted <abbr title="Domain Name System">DNS</abbr>, <abbr title="Virtual Private Networks">VPNs</abbr>, and other technologies to work around filters. This is considered a feature of a global communications architecture, not a bug.</p>
<p>That’s not to say that all Internet regulation is a fool’s errand. The EU’s Digital Markets Act is targeting a few well-identified entities who have (very successfully) built closed ecosystems on top of the open Internet. At least from the perspective of Internet openness, that isn’t problematic (and indeed might result in more openness).</p>
<p>On the other hand, the Australian eSafety Regulator’s effort to improve online safety – itself a goal not at odds with Internet openness – falls on its face by <a href="https://www.mnot.net/blog/2022/09/11/esafety-industry-codes">applying its regulatory mechanisms to <em>all</em> actors on the Internet</a>, not just a targeted few. This is an extension of the “Facebook is the Internet” mindset – acting as if the entire Internet is defined by a handful of big tech companies. Not only does that create significant injustice and extensive collateral damage, it also creates the conditions for making that outcome more likely (surely a competition concern). While these closed systems might be the most legible part of the Internet to regulators, they shouldn’t be mistaken for the Internet itself.</p>
<p>Similarly, blanket requirements to expose encrypted messages have the effect of ‘chasing’ criminals to alternative services, making their activity even less legible to authorities and severely impacting the security and rights of law-abiding citizens in the process. That’s because there is no magical list of all of the applications that use encryption on the Internet: instead, regulators end up playing whack-a-mole. Cryptography relies on mathematical concepts realised in open protocols; treating encryption as a switch that companies can simply turn off misses the point.</p>
<p>None of this is new or unique to the Internet; cross-border institutions are by nature open systems, and these issues come up often in discussions of global public goods (whether it is oceans, the climate, or the Internet). They thrive under governance that focuses on collaboration, diversity, and collective decision-making. For those who are used to top-down, hierarchical styles of governance, this can be jarring, but it produces systems that are far more resilient and less vulnerable to capture.</p>
<h3 id="why-the-internet-must-stay-open">Why the Internet Must Stay Open</h3>
<p>If you’ve read this far, you might wonder why we bother: if openness brings so many complications, why not just change the Internet so that it’s a simpler, closed system that is easier to design and manage? Certainly, it’s <em>possible</em> for large, world-spanning systems to be closed. For example, both the international postal and telephony systems are effectively closed (although the latter has opened up a bit). They are reliable and successful (for some definition of success).</p>
<p>I’d argue that those examples are both highly constrained and well-defined; the services they provide don’t change much, and for the most part new participants are introduced only on one ‘side’ – new end users. Keeping these networks going requires considerable overhead and resources from governments around the world, both internally and at the international coordination layer.</p>
<p>The Internet (in a broader definition) is not nearly so constrained, and the bulk of its value is defined by the ability to introduce new participants of all kinds (not just users) <em>without</em> permission or overhead. This isn’t just a philosophical preference; it’s embedded in the architecture itself via the <a href="https://en.wikipedia.org/wiki/End-to-end_principle">end-to-end principle</a>. Governing major aspects of the Internet by international treaty is simply unworkable, and if the outcome of that agreement is to limit the ability of new services or participants to be introduced (e.g., “no new search engines without permission”), it’s going to have a material effect on the benefits that humanity has come to expect from the Internet. In many ways, it’s just another pathway to <a href="https://www.rfc-editor.org/rfc/rfc9518.html">centralization</a>.</p>
<p>Again, all of this is not to say that closed systems on <em>top</em> of the Internet shouldn’t be regulated – just that it needs to be done in a way that’s mindful of the open nature of the Internet itself. The guiding principle is clear: regulate the endpoints (applications, hosts, and specific commercial entities), not the transit mechanisms (the protocols and infrastructure). From what’s happened so far, it looks like many governments understand that, but some are still learning.</p>
<p>Likewise, the many harms associated with the Internet need both technical and regulatory solutions; botnets, <abbr title="Distributed Denial of Service Attack">DDoS</abbr>, online abuse, “cybercrime” and much more can’t be ignored. However, solutions to these issues must respect the open nature of the Internet; even though their impact on society is heavy, the collective benefits of openness – both social and economic – <em>still</em> outweigh them; low barriers to entry ensure global market access, drive innovation, and prevent infrastructure monopolies from stifling competition.</p>
<p>Those points acknowledged, I and many others are concerned that regulating ‘big tech’ companies may have the unintended side effect of ossifying their power – that is, blessing their place in the ecosystem and making it harder for more open systems to displace them. This concentration of power isn’t an accident; commercial entities have a strong economic incentive to build proprietary walled gardens on top of open protocols to extract rent. For example, we’d much rather see global commerce based upon open protocols, well-thought-out legal protections, and cooperation, rather than overseen (and exploited) by the Amazon/eBay/Temu/etc. gang.</p>
<p>Of course, some jurisdictions can and will try to force certain aspects of the Internet to be closed, from their perspective. They may succeed in achieving their local goals, but such systems won’t offer the same properties as the Internet. Closed systems can be bought, coerced, lobbied into compliance, or simply fail: their hierarchical nature makes them vulnerable to failures of leadership. The Internet’s openness makes it harder to maintain and govern, but also makes it far more resilient and resistant to capture.</p>
<p>Openness is what makes the Internet the Internet. It needs to be actively pursued if we want the Internet to continue providing the value that society has come to depend upon from it.</p>
<p><em>Thanks to <a href="https://www.komaitis.org">Konstantinos Komaitis</a> for his suggestions.</em></p>
<div class="footnotes" role="doc-endnotes">
<ol>
<li id="fn:1">
<p>Albeit one that is the foundation for a number of very large closed systems. <a href="#fnref:1" class="reversefootnote" role="doc-backlink">↩</a></p>
</li>
</ol>
</div>]]></content>
</entry>
<entry>
<title>The Power of 'No' in Internet Standards</title>
<link rel="alternate" type="text/html" href="https://mnot.net/blog/2026/no"/>
<id>https://mnot.net/blog/2026/no</id>
<updated>2026-02-13T00:00:00Z</updated>
<author>
<name>Mark Nottingham</name>
<uri>https://mnot.net/personal/</uri>
</author>
<summary>The voluntary nature of Internet standards means that the biggest power move may be to avoid playing the game. Let's take a look.</summary>
<category term="Tech Regulation"/>
<category term="Standards"/>
<category term="Internet and Web"/>
<content type="html" xml:lang="en" xml:base="https://mnot.net/blog/2026/no"><![CDATA[<p class="intro">Fairly regularly, I hear someone ask whether a particular company is expressing undue amounts of power in Internet standards, seemingly with the implication that they’re getting away with murder (or at least the Internet governance equivalent).</p>
<p>While it’s not uncommon for powerful entities to try to steer the direction that the work goes in, they don’t have free rein: the <a href="https://www.mnot.net/blog/2024/07/05/open_internet_standards">open nature of Internet standards processes</a> assures that their proposals are subjected to considerable scrutiny from their competitors, technical experts, civil society representatives, and on occasion, governments. Of course there are counterexamples, but in general that’s not something I worry about <em>too</em> much.</p>
<p>The truth is that there is very little power expressed in standards themselves. Instead, it resides in the implementation, deployment, and use of a particular technology, no matter whether it was standardised in a committee or is a <em>de facto</em> standard. Open standards processes provide some useful properties, but they are <strong>not</strong> a guarantee of quality or suitability and there are many standards that have zero impact.</p>
<p>That implication of <a href="https://www.mnot.net/blog/2024/03/13/voluntary">voluntary adoption</a> is why I believe that <strong>the most undiluted expression of power in Internet standards is saying ‘no’</strong> – in particular, when a company declines to participate in or implement a specification, feature, or function. Especially if that company is central to a ‘choke point’ with already embedded power due to adoption of related technologies like an Operating System or Web browser. In the most egregious cases, this is effectively saying ‘we want that to stay proprietary.’</p>
<p>Sometimes the no is explicit. I’ve heard an engineer from a Very Big Tech Company publicly declare that their product would not implement a specification, with the very clear implication that the working group shouldn’t bother adopting the spec as a result. That’s using their embedded power to steer the outcome, hard.</p>
<p>Usually though, it’s a lot more subtle. Concerns are raised. Review of a specification is de-prioritised. Maybe a standard is published, but it never gets to implementation. Or maybe the scope of the standard or its implementation is watered down so much that it no longer delivers anything actually interoperable or functional.</p>
<p>To be very clear, engineers often have very good reasons for declining to implement something. There are a <em>lot</em> of bad ideas out there, and Internet engineering imposes a lot of constraints on what is possible. Proposals have to run a gauntlet of technical reviews, architectural considerations, and carefully staked-out fiefdoms to see the light of day. Proponents are often convinced of the value of their contributions, only to find that they fail to get traction for reasons that can be hard to understand. The number of people who understand the nuances is small: usually, just a handful in any given field.</p>
<p>But when the ‘no’ comes about because it doesn’t suit the agendas of powerful parties, something is wrong. Even people who want to see a better Internet reduce their expectations, because they lose faith in the possibility of success.</p>
<h3 id="a-failure-of-ambition">A Failure of Ambition</h3>
<p>To me, the evidence of this phenomenon is clearest in how little ambition we’re seeing from the Web. The Web should be a constantly rising sea of commoditised technology, cherry-picking successful proprietary applications – marketplaces like Amazon and eBay, social networks like LinkedIn and Facebook, chat on WhatsApp and iMessage, search on Google, and so on – and reinventing them as public-good-oriented features without a centralised owner. Robin Berjon dives into this view of the Web in <a href="https://berjon.com/bigger-browser/">You’re Going to Need a Bigger Browser</a>.<sup id="fnref:1"><a href="#fn:1" class="footnote" rel="footnote" role="doc-noteref">1</a></sup></p>
<p>Instead, most current Web standards activity focuses on incremental, small features: tweaking around the edges and creating new ‘low level’ APIs that proprietary things can be built upon. This approach was codified a while back in the ‘<a href="https://github.com/extensibleweb/manifesto">Extensible Web Manifesto</a>’, which was intended to let the community focus its resources and let a ‘thousand flowers bloom’, but the effect has been to allow silo after silo to be built upon the Web, solidifying its role as the greatest centralisation technology ever.</p>
<p>There are small signs of life. Recent features like Web Payments, federated identity and the various (somewhat) decentralised social networking protocols show promise for extending the platform in important ways, but they’re exceptional, not the rule.</p>
<h3 id="creating-upward-pressure">Creating Upward Pressure</h3>
<p>How, then, can we create higher-level capabilities that serve society but aren’t proprietary?</p>
<p>Remember that <a href="https://www.mnot.net/blog/2024/03/13/voluntary">the voluntary nature of Internet standards</a> is a feature – it allows us to fail by using the marketplace as a proving function. Forcing tech companies to implement well-intentioned specifications that aren’t informed by experience is a recipe for broken, bad tech. Likewise, ‘standardising harder’ isn’t going to create better outcomes: the real influence of what standards do is in their implementation and adoption.</p>
<p>What matters is not writing specifications, it’s getting to a place where it’s not possible for private concerns to express inappropriate power over the Internet. Or as Robin <a href="https://berjon.com/digital-sovereignty/">articulates</a>: “What matters is who has the structural power to deploy the standards they want to see and avoid those they dislike.” To me, that suggests a few areas where progress can be made:</p>
<p class="hero">First, we should remember that the market is the primary force shaping companies’ behaviour right now. It used to be that paid services like Proton were <a href="https://balkaninsight.com/2025/04/01/taking-aim-at-big-tech-proton-ceo-warns-democracy-depends-on-privacy/">mocked for competing with free Google services</a>. Now they’re viable because people realised the users are the product. If we want privacy-respecting, decentralised solutions and are willing to pay for them, that changes the incentives for companies, big and small. However, the solutions need to be bigger than any one company.</p>
<p class="hero">Second, where the market fails, competition regulators can and should step in. They’ve been increasingly active recently, but I’d like to see them go further: to provide <strong>stronger guidelines for open standards processes</strong>, and to give companies stronger incentives to participate and adopt open standards, such as a <strong>presumption that adopting a specification that goes through a high-quality process is not anticompetitive</strong>. Doing so would create natural pressure for companies to be interoperable (reducing those choke points) while also being more subject to public and expert review.</p>
<p class="hero">Third, private corporations are not the only source of innovation in the world. In fact, there are <a href="https://www.hbs.edu/faculty/Pages/item.aspx?num=36972">great arguments</a> that open collaboration is a much deeper source of innovation in the modern economy. My interest turns towards the possibilities of public sponsorship for development of the next generation of Internet technology: what’s now being called <strong>Digital Public Infrastructure</strong>. There are many challenging issues in this area – especially regarding governance and, frankly, viability – but if the needle can be threaded and the right model found, the benefits to the people who use the Internet could be massive.</p>
<div class="footnotes" role="doc-endnotes">
<ol>
<li id="fn:1">
<p>Yes, as discussed before there are <a href="https://www.mnot.net/blog/2024/11/29/platforms">things that are harder to do without a single-company chokepoint</a>, but that shouldn’t preclude <em>trying</em>. <a href="#fnref:1" class="reversefootnote" role="doc-backlink">↩</a></p>
</li>
</ol>
</div>]]></content>
</entry>
<entry>
<title>Some Thoughts on the Open Web</title>
<link rel="alternate" type="text/html" href="https://mnot.net/blog/2026/open_web"/>
<id>https://mnot.net/blog/2026/open_web</id>
<updated>2026-01-20T00:00:00Z</updated>
<author>
<name>Mark Nottingham</name>
<uri>https://mnot.net/personal/</uri>
</author>
<summary>The Open Web means several things to different people, depending on context, but recently discussions have focused on the Web's Openness in terms of access to information -- how easy it is to publish and obtain information without barriers there.</summary>
<category term="Internet and Web"/>
<content type="html" xml:lang="en" xml:base="https://mnot.net/blog/2026/open_web"><![CDATA[<p class="intro">“The Open Web” means several things to different people, depending on context, but recently discussions have focused on the Web’s Openness in terms of <strong>access to information</strong> -- how easy it is to publish and obtain information without barriers there.</p>
<p>David Schinazi and I hosted a pair of ad hoc sessions on this topic at the last IETF meeting in Montreal and the subsequent W3C Technical Plenary in Kobe; you can see the <a href="https://docs.google.com/document/d/1WaXDfwPP6olY-UVQxDZKNkUyqvmHt-u4kREJW4ys6ms/edit?usp=sharing">notes and summaries from those sessions</a>. This post contains my thoughts on the topic so far, after some simmering.</p>
<h3 id="the-open-web-is-amazing">The Open Web is Amazing</h3>
<p>For most of human history, it’s been difficult to access information. As an average citizen, you had to work pretty hard to access academic texts, historical writings, literature, news, public information, and so on. Libraries were an amazing innovation, but locating and working with the information there was still a formidable challenge.</p>
<p>Likewise, publishing information for broad consumption required resources and relationships that were unavailable to most people. Gutenberg famously broke down some of those barriers, but many still remained: publishing and distributing books (or articles, music, art, films) required navigating extensive industries of gatekeepers, and often insurmountable costs and delays.</p>
<p>Tim Berners-Lee’s invention cut through all of that; it was now possible to communicate with the whole world at very low cost and almost instantaneously. Various media industries were disrupted (but not completely displaced) by this innovation, and reinterpreted roles for intermediaries (e.g., search engines for librarians, online marketplaces for ‘brick and mortar’ shops) were created.</p>
<p>Critically, a norm was also created; an expectation that content was easy to access and didn’t require paying or logging in. This was not enforced, and it was not always honoured: there were still subscription sites, and that’s OK, but they didn’t see the massive network effects that hyperlinks and browsers brought.</p>
<p>It is hard to overstate the benefits of this norm. Farmers in developing countries now have easy access to guidelines and data that help their crops succeed. Students around the world have access to resources that were unimaginable even a few decades ago. They can also contribute to that global commons of content, benefiting others as they build a reputation for themselves.</p>
<p>The Open Web is an amazing public good, both for those who consume information and those who produce it. By reducing costs and friction on both sides, it allows people all over the world to access and create information in a way -- and with an ease -- that would have been unimaginable to our predecessors. It’s worth fighting for.</p>
<h3 id="people-have-different-motivations-for-opening-content">People Have Different Motivations for Opening Content</h3>
<p>We talk about “The Open Web” in the singular, but in fact there are many motivations for making content available freely online.</p>
<p>Some people consciously make their content freely available on the Web because they want to contribute to the global commons, to help realise all of the benefits described above.</p>
<p>Many don’t, however.</p>
<p>Others do it because they want to be discovered and build a reputation. Or because they want to build human connections. Or because they want revenue from putting ads next to the content. Or because they want people to try their content out and then subscribe to it on the less-than-open Web.</p>
<p>Most commonly, it’s a blend of many (or even all) of these motivations.</p>
<p>Discussions of the Open Web need to consider all of them distinctly -- what is changing in their environments, and what might encourage or discourage different kinds of Open Web publishers. Only focusing on some motivations or creating “purity tests” for content isn’t helpful.</p>
<h3 id="there-are-many-degrees-of-open">There are Many Degrees of “Open”</h3>
<p>Likewise, there are many degrees of “open.” While some Open Web content doesn’t come with any strings, much of it does. You might have to allow tracking for ads. While an article might be available to search engines (to drive traffic), you might have to register for an account to view the content as an individual.</p>
<p>There are serious privacy considerations associated with both of these, but those concerns should be considered as distinct from those regarding open access to information. People sometimes need to get a library card to access information at their local library (in person or online), but that doesn’t make the information less open.</p>
<p class="callout">One of the most interesting assertions at the meetings we held was about advertising-supported content: that it was <em>more</em> equitable than “micro-transactions” and similar pay-to-view approaches, because it makes content available to those who would otherwise not be able to afford it.</p>
<p>At the same time, these ‘small’ barriers – for example, requirements to log in after reading three articles – add up, reducing the openness of the content. If the new norm is that everyone has to log in everywhere to get Web content (and we may be well on our way to that), the Open Web suffers.</p>
<p>Similarly, some open content is free to all comers and can be reused at will, where other examples have technical barriers (such as bot blockers or other selective access schemes) and/or legal barriers (namely, copyright restrictions).</p>
<h3 id="it-has-to-be-voluntary">It Has to be Voluntary</h3>
<p>Everyone who publishes on the Open Web does so because they want to – because the benefits they realise (see above) outweigh any downsides.</p>
<p>Conversely, any content that is not on the Open Web is absent because its owner has judged that putting it there is not worthwhile. They cannot be forced to “open up” that content -- they can only be encouraged.</p>
<p>Affordances and changes in infrastructure, platforms, and other aspects of the ecosystem -- sometimes realised in technical standards, sometimes not -- might change that incentive structure and create the conditions for more or less content on the Open Web. They cannot, however, be forced or mandated.</p>
<p>To me, this means that attempts to coerce different parties into desired behaviors are unlikely to succeed – they have to <em>want</em> to provide their content. That includes strategies like withholding capabilities from them; they’ll just go elsewhere to obtain them, or put their content behind a paywall.</p>
<h3 id="its-changing-rapidly">It’s Changing Rapidly</h3>
<p>We’re talking about the Open Web now because of the introduction of AI -- a massive disruption to the incentives of many content creators and publishers, because AI both leverages their content (through scraping for training) and competes with it (because it is generative).</p>
<p>For those who opened up their content because they wanted to establish reputation and build connections, this feels exploitative. They made their content available to benefit people, and it turns out that it’s benefiting large corporations who claim to be helping humanity but have failed to convince many.</p>
<p>For those who want to sell ads next to their content or entice people to subscribe, this feels like betrayal. Search engines built an ecosystem that benefited both publishers and the platforms, but publishers see those same platforms as continually taking more value from the relationship -- as seen in efforts to force intermediation like AMP, and now AI, where sites get drastically reduced traffic in exchange for nothing at all.</p>
<p>And so people are blocking bots, putting up paywalls, changing business models, and yanking their content off the Open Web. The commons is suffering because technology (which always makes <em>something</em> easier) now makes content creation <em>and</em> consumption easier, so long as you trust your local AI vendor.</p>
<p>This change is unevenly distributed. There are still people happily publishing open content in formats like RSS, which doesn’t facilitate tracking or targeting, and is wide open to scraping and reuse. That said, there are large swathes of content that are disappearing from the Open Web because it’s no longer viable for the publisher; the balance of incentives for them has changed.</p>
<h3 id="open-is-not-free-to-provide">Open is Not Free to Provide</h3>
<p>Information may be a non-rivalrous good, but that doesn’t mean it’s free to provide. The people who produce it need to support themselves.</p>
<p>That doesn’t mean that their interests dominate all others, nor that the structures that have evolved are the best (or even a good) way to assure that they can do so; these are topics better suited for copyright discussions (where there is a very long history of such considerations being debated).</p>
<p>Furthermore, on a technical level serving content to anyone who asks for it on a global scale might be a commodity service now -- and so very inexpensive to do, in some cases -- but it’s not free, and the costs add up at scale. These costs -- again, alongside the perceived extractive nature of the relationship -- are causing some to <a href="https://social.kernel.org/notice/B2JlhcxNTfI8oDVoyO">block or otherwise try to frustrate</a> these uses.</p>
<p>Underlying this factor is an argument about whether it’s legitimate to say you’re on ‘the Open Web’ while selectively blocking clients you don’t like – either because they’re abusive technically (over-crawling), or because you don’t like what they do with the data. My observation here is that however you feel about it, that practice is now very, very widespread – evidence of great demand on the publisher side. If that capability were taken away, I strongly suspect the net result would be very negative for the Open Web.</p>
<h3 id="its-about-control">It’s About Control</h3>
<p>Lurking beneath all of these arguments is a tension between the interests of those who produce and use content. Forgive me for resorting to hyperbole: some content people want pixel-perfect control not only over how their information is presented but how it is used and who uses it, and some open access advocates want all information to be usable for any purpose any time and anywhere.</p>
<p>Either of these outcomes (hyperbolic as they are) would be bad for the Open Web.</p>
<p>The challenge, then, is finding the right balance – a Web where content producers have incentives to make their content available in a way that can be reused as much as is reasonable. That balance needs to be stable and sustainable, and take into account shocks like the introduction of AI.</p>
<h3 id="a-way-forward">A Way Forward</h3>
<p>Having an Open Web available for humanity is not a guaranteed outcome; we may end up in a future where easily available information is greatly diminished or even absent.</p>
<p>With that and all of the observations above in mind, what’s most apparent to me is that we should focus on finding ways to create and strengthen incentives to publish content that’s open (for some definition of open) -- understanding that people might have a variety of motivations for doing so. If environmental factors like AI change their incentives, we need to understand why and address the underlying concerns if possible.</p>
<p>In other words, we have to create an Internet where people <em>want</em> to publish content openly – for some definition of “open.” Doing that may challenge the assumptions we’ve made about the Web as well as what we want “open” to be. What’s worked before may no longer create the incentive structure that leads to the greatest amount of content available to the greatest number of people for the greatest number of purposes.</p>]]></content>
</entry>
</feed>
<?xml version="1.0" encoding="utf-8"?>
<feed xmlns="http://www.w3.org/2005/Atom">
<title>Mark Nottingham</title>
<link rel="alternate" type="text/html" href="https://mnot.net/blog/" />
<link rel="self" type="application/atom+xml" href="https://mnot.net/blog/index.atom" />
<id>tag:mnot.net,2010-11-11:/blog//1</id>
<updated>2026-04-28T17:20:19Z</updated>
<subtitle></subtitle>
<entry>
<title>What's Missing in the ‘Agentic’ Story</title>
<link rel="alternate" type="text/html" href="https://mnot.net/blog/2026/agents_as_collective_bargains" />
<id>https://mnot.net/blog/2026/agents_as_collective_bargains</id>
<updated>2026-04-24T00:00:00Z</updated>
<author>
<name>Mark Nottingham</name>
<uri>https://mnot.net/personal/</uri>
</author>
<summary>Every online interaction is a lopsided negotiation. For AI to truly work for us, we need more than just safety -- we need to start building true agency as a form of collective bargaining.</summary>
<category term="Internet and Web" />
<content type="html" xml:lang="en" xml:base="https://mnot.net/blog/2026/agents_as_collective_bargains">
<![CDATA[<p class="intro">For much of the history of computing, it was reasonably safe to assume that a machine was doing what you told it to do (and what its creators promised it would do), because its operations were local.</p>
<p>You bought a laptop or desktop with an operating system, and it did what it said on the tin: it ran programs and stored files. You bought a spreadsheet and a word processor, and those programs performed those tasks and didn’t do anything else. Software that didn’t do this was in a separate bucket called ‘malware’ and we had ways of dealing with it.</p>
<p>That assumption has a more general precedent in tools – whether they be staplers, screwdrivers, or telescopes. When you buy a screwdriver, it turns screws; it has no agency of its own. It might do other things, but that’s because you’re misusing the tool, not because it decided to do something else. Most things that people use unambiguously follow this pattern: for example, my mechanical wristwatch can’t do anything but tell me the time.<sup id="fnref:1"><a href="#fn:1" class="footnote" rel="footnote" role="doc-noteref">1</a></sup></p>
<p>That pattern is perpetuated in most<sup id="fnref:2"><a href="#fn:2" class="footnote" rel="footnote" role="doc-noteref">2</a></sup> depictions of computers in fiction (especially sci-fi), which work for people diligently and always on their behalf, usually with minimal intrusion. They unambiguously act in the interest of their users, following in the footsteps of technological optimism which informs much of fiction and influenced a generation of nerds who tried to build it.</p>
<p>All of these experiences combine to lead people to trust computers fairly unquestioningly; they don’t give much thought to the other purposes that might be served. When I use my phone, it’s <strong>my</strong> phone, and so it’s working for me, right? This is perpetuated in the press: recently, I saw an article in a major newspaper about how to talk to “your” AI agent.</p>
<p>If you scratch the surface just a bit, however, none of this is <em>true</em> when applied to modern technologies, and these assumptions are not safe.</p>
<h3 id="the-state-of-trust-on-the-internet">The State of Trust on the Internet</h3>
<p>Every time you use an Internet-connected computer, you’re trusting someone (and most likely, a multitude) to act on your behalf. From an application’s code all the way down to the silicon, software, hardware, and the network services they use reliably embed the interests of those who create them – and they may or may not be aligned with yours.</p>
<p>Critically, those layers are usually – but not always – arranged in such a way that the interests of their producers and users are aligned. People creating computer chips are competing with other people creating chips, and so they focus on that; if they try to abuse their position by (say) exfiltrating your passwords in a side channel, the market (and possibly a legal regulator) will punish them.</p>
<p>However, modern businesses have become adept at exploiting the gaps in this arrangement. Now, if you use a ‘smart’ watch or your phone to check the time, it’s likely more accurate but you have to contend with the possibility that it’s reporting your location, activities, and who knows what else back to its creator – and that they might be sharing that information with others. And that’s also the case for every other application running.</p>
<p>Those abuses aren’t obvious, and it’s very easy for people to look at an Internet-connected device and fail to recognise that even though it’s “theirs” and that the data it processes is also “theirs”, they’re placing an inordinate amount of trust in a galaxy of faceless parties – trust that may not be deserved or protected. For example:</p>
<ul>
<li>TVs are widely known to <a href="https://arstechnica.com/tech-policy/2025/12/texas-sues-biggest-tv-makers-alleging-smart-tvs-spy-on-users-without-consent/">spy on their users’ activities without consent</a>.</li>
<li>Meta <a href="https://arstechnica.com/tech-policy/2024/03/facebook-secretly-spied-on-snapchat-usage-to-confuse-advertisers-court-docs-say/">decided to decrypt private traffic from ‘research’ users’ phones</a> to competing services and store it on their own servers. Predictably, once the users found out, it <a href="https://storage.courtlistener.com/recap/gov.uscourts.cand.369872/gov.uscourts.cand.369872.736.0.pdf">ended up in court</a>.</li>
<li>At the same time, Facebook also <a href="https://arstechnica.com/gadgets/2024/03/netflix-ad-spend-led-to-facebook-dm-access-end-of-facebook-streaming-biz-lawsuit/2/">let Netflix have access to users’ private Direct Messages</a>, creating yet another lawsuit.</li>
<li>Microsoft quietly changed the model of their ‘new Outlook’ e-mail client to <a href="https://www.ghacks.net/2024/01/12/proton-mail-says-that-the-new-outlook-app-for-windows-is-microsofts-new-data-collection-service/">surreptitiously send passwords for third-party e-mail servers to their cloud</a>, so that they can share it with more than 700 of their closest friends (i.e., data brokers and advertisers).</li>
<li>Various automakers <a href="https://foundation.mozilla.org/en/blog/privacy-nightmare-on-wheels-every-car-brand-reviewed-by-mozilla-including-ford-volkswagen-and-toyota-flunks-privacy-test/">collect detailed information</a> and share it with other parties, including data brokers and <a href="https://www.nytimes.com/2024/03/11/technology/carmakers-driver-tracking-insurance.html">insurance companies</a> – to the point where it’s difficult to find a car that doesn’t violate your trust.</li>
<li>Ring (i.e., Amazon) was so sloppy with their security practices that ‘rogue insiders’ as well as hackers <a href="https://www.theregister.com/2024/04/25/ring_ftc_settlement/">exploited their access to people’s video cameras</a>.</li>
<li>Grindr <a href="https://arstechnica.com/tech-policy/2024/04/grindr-users-seek-payouts-after-dating-app-shared-hiv-status-with-vendors/">shared highly sensitive health information</a> with third parties without permission.</li>
<li>Photobucket <a href="https://blog.ericgoldman.org/archives/2026/03/photobuckets-attempted-tos-amended-mostly-fails-pierce-v-photobucket.htm">aggressively changed terms of service</a> to allow AI use of people’s photos, but failed in court.</li>
</ul>
<p>This is just a small selection; there are many more. All of these are stunning violations of trust. And it’s becoming <em>normal</em>.</p>
<p class="hero">How did we get here? If I were to speculate on the reasons for that, I’d say it’s a combination of the normalisation of <strong>cloud computing</strong> (because everything is now running on or connected to computers you don’t control), the <strong>expectations of higher and higher growth and returns</strong> by investors, putting pressure on companies for new and recurring revenue, and – more than anything – the <strong>weakness of any regulating forces</strong> on these actors.</p>
<h3 id="user-agents-are-a-form-of-collective-bargaining">User Agents are a Form of Collective Bargaining</h3>
<p>Although it’s difficult to trust anyone on the Internet given the examples above, it could be much, much worse. Imagine if you had to install a program on your computer from every company, government body, and other entity that you interact with, and those programs had full access to do what they like on your system. In other words, every online interaction becomes an opportunity to install malware that can extract your personal information, delete files or hold them ransom, profile and monitor your behaviour, and generally ignore your interests in favour of theirs.</p>
<p>What prevents that on the modern Internet? In many cases, it’s the humble Web browser, which selectively exposes capabilities to Web sites without offering full access to your computer. This is called a <a href="https://www.w3.org/news/2025/group-note-draft-web-user-agents/">User Agent</a> – software that acts on your behalf, representing your interests in your interactions with other parties.</p>
<p>And while the Web browser is representing your interests, it’s <em>also</em> balancing them with the interests of the sites that you visit – it’s an <strong>agent for them too</strong>. They want the page to render in a predictable way, but some users want to use accessibility tools. People don’t want to be tracked, but sites need <em>some</em> indication of how their pages are consumed. For the Web, all of these delicate tradeoffs are made within a framework of shared principles and values and decided in transparent fora using consensus processes – namely, the relevant standards bodies (usually, the W3C or IETF). There’s also more than one Web browser, so you can choose the agent that best represents your interests – thereby creating market pressure to do so.</p>
<p class="hero">Importantly, this is done in a way that results in the <em>same deal for everyone</em>. If you had to negotiate what Web sites are allowed to do on your computer on a case-by-case basis, you’d quickly give up out of exhaustion (and indeed, we see this in cookie banners, a notable failure). In the bargain between big sites and individual users, the sites have more <em>bargaining power</em> and therefore users’ interests need to be considered holistically – not on a case-by-case basis where sites can chip away at them. A browser embeds what is effectively a global treaty between sites and users.</p>
<p>That’s not to say that Web browsers are perfectly aligned with users’ interests; the fights over DRM and advertising/tracking show that there’s disagreement on what the right balance is, or even on what those interests are. User agents can also just get it wrong; for example, Google <a href="https://www.theregister.com/2024/04/01/google_will_delete_data_incognito/">kept users’ data from private browsing mode in Chrome</a>.</p>
<p>As I’ve argued before, <a href="https://www.mnot.net/blog/2026/02/13/no">Web browsers also show a distinct lack of ambition</a>. While they protect the data and capabilities on your computer, and (mostly) isolate Web sites from each other, they don’t work hard enough to protect the data you give to sites by creating higher-level capabilities.</p>
<p>Despite those shortcomings, Web browsers are a good example of how user agency should be done. There are other platforms that aspire to represent users’ interests – for example, iOS and Android. These, however, are single implementations where all of the decisions are made opaquely by a lone corporation. The checks and balances on their power are very limited and very different to those on Web browsers.</p>
<h3 id="why-ai-needs-user-agency">Why AI Needs User Agency</h3>
<p>It’s notoriously difficult to predict how Large Language Models are going to change the world in the long term. That said, everyone is excited about the possibility of ‘agentic’ AI, with many breathlessly predicting that it will transform, well, <a href="https://www.mckinsey.com/capabilities/quantumblack/our-insights/seizing-the-agentic-ai-advantage">everything</a>. Briefly, the idea is that a LLM with tool capabilities can act on your behalf – i.e., be your agent.</p>
<p>Putting aside the question of where we’re at in the hype cycle, the models of agency being discussed here are relatively simplistic, when you compare them to Web browsers. That’s largely because there’s no single definition of what an AI agent or chatbot does and does not do – it’s just a concept at this point. As a result, unless you write your own agent (or have AI do it for you), you’re using a piece of software that embeds others’ interests without much accountability, checks or balances. While it claims to work for you, you have little assurance that it’s actually doing so.<sup id="fnref:3"><a href="#fn:3" class="footnote" rel="footnote" role="doc-noteref">3</a></sup></p>
<p>That lack of trustworthiness cuts both ways. The data and services that the agent consumes have little visibility into how they will be used, because the agent could be doing <em>anything</em> – unlike a Web browser, which puts some rough guide rails around how a Web site’s data is used and creates expectations about capabilities and behaviour.</p>
<p>In other words, the <strong>lack of a well-defined user agent role in AI</strong> that’s backed up by transparent, public standards that embed checks and balances on both parties to an interaction leaves a gap – it <strong>makes it harder for a marketplace to form</strong>.</p>
<p>That’s not to say that there isn’t a place for agentic AI without a well-defined concept of a user agent role. Agents in limited domains that have assumed trust – like inside enterprises and with their third-party vendors – will likely thrive without one, because the contractual relationships between those parties will regulate their behaviour. And of course, we’re already seeing accelerating adoption of AI chatbots for accessing information online, even though they are currently opaque and unconstrained.</p>
<p>However, that will limit the usefulness and application of agentic AI. Using agents written by other people will require a leap of trust similar to that required when using Android or iOS – and it’s not clear whether the companies that will write them will be worthy of that trust, especially if they proliferate. Likewise, online data sources will be reluctant to trust random agents because they don’t know what will happen to the data – the agent could use it for its stated purpose and then dispose of it responsibly, or it could store or republish it.</p>
<p>Some proposals for AI agents assume that putting agentic code in a TEE or similar ‘jail’ will solve these problems, but that ignores the need to collectively bargain – if agents can ask for intrusive permissions, we’re pretty much guaranteed a world where they constantly bug us for them, and everyone will lose out in that environment, because trust will be regularly abused and thus eroded.</p>
<p>Another alternative is to have AI experiences locked up in proprietary platforms. Consider, however, what kinds of experiences that will lead to:</p>
<blockquote>
<p>It is no accident that Meta is interested in smart glasses. With built-in cameras, lenses that can display WhatsApp messages and speakers that direct sound straight to the ear, the devices only make it easier for users to share what they are up to on social media and follow what others are doing. For Meta, more time spent on its platforms means more ad revenue. Amazon would likewise be delighted to have its Echo speakers in every home and its glasses on every face to gather more data for its growing ad business and make it even easier to buy from its marketplace. And OpenAI would be well served if people ditched their screens and relied instead on a chatbot to handle their interactions with the digital world.</p>
</blockquote>
<p>– <a href="https://www.economist.com/business/2026/01/25/will-the-smartphone-survive-the-ai-age">The Economist</a></p>
<p>Defining a user agent role for AI agents would also make agents more legible to legal regulation. With such a strong focus on “AI safety” by regulators today, an architecture that assured certain properties could be an important component of a solution in this space, not only creating more competition but also forestalling more onerous legal regulation.</p>
<p>Finally, although allowing AI agents to be <em>anything</em> promises lots of opportunities, placing constraints upon them not only helps users and services build trust in them, it also helps people more easily conceptualise what they do. Simply put, users are confused when technology offers too many choices. It’s understandable that industry doesn’t want to constrain the options for agents at this early point in their development, but at some point that wide open nature is going to hurt more than help. The vast majority of people don’t understand what’s happening when they use computers, nor should they be expected to.</p>
<h3 id="what-an-ai-user-agent-might-look-like">What an AI User Agent Might Look Like</h3>
<p>The problem with developing an AI UA now is that by nature, it has to put constraints on how AI is used, at a time when everyone is still exploring what AI <em>is</em>. Being an agent means carefully considering consequences and balancing interests, and this is easy to get wrong.</p>
<p>Consider, for example, the Ring camera. Amazon thought it was unambiguously good to allow the police to use a network of cameras to find ‘bad guys’, and that turned out to be not just naive, but disastrously wrong. Allowing people to opt out was not sufficient to balance the interests here – what was lacking was a principled approach to rights in their architecture.</p>
<p>I suspect this is one of the reasons Apple is taking so long to enhance Siri. It’s easy to install OpenClaw and let it wreak havoc on your personal data (promoting what used to be malware into something people install willfully!); it’s a lot harder to build an ecosystem that respects user rights, creates market opportunities, and promotes a healthy ecosystem that doesn’t burden the user with an avalanche of choices. If everyone is operating their own isolated and bespoke environment, we lose the collective power of agency – both for users and the market.</p>
<p>It might be that a whole new platform (whether from Apple, OpenClaw, or elsewhere) gets developed, or it might be that AI capabilities are organically added to the Web. Projects like <a href="https://a2ui.org">A2UI</a> also show some small steps in this direction.</p>
<p>In general, though, creating an agent role for AI – with all of the benefits to the user and market that brings – will require constraining the tools that it can call in a fashion that becomes ‘normal’, so that people can depend on how it behaves. That might involve standard tool APIs with appropriate constraints, permission models, sandboxing (TEE or otherwise), and much more.</p>
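<p>As a thought experiment only – none of this is a real standard, and every name in it (<code>ToolRegistry</code>, <code>Permission</code>, the permission strings) is invented for illustration – the “standard tool APIs with appropriate constraints” idea might be sketched as a registry that mediates every tool call and refuses any capability the user hasn’t granted:</p>

```python
# Illustrative sketch only: a hypothetical permission model for agent tool
# calls. All names here are invented for this example, not a real API.

from dataclasses import dataclass, field
from typing import Callable


@dataclass
class Tool:
    name: str
    required_permission: str           # e.g. "contacts.read"
    run: Callable[[str], str]          # the capability itself


@dataclass
class ToolRegistry:
    """Mediates every tool call: the agent never touches a capability
    directly, and the same policy applies to every agent uniformly."""
    tools: dict = field(default_factory=dict)
    granted: set = field(default_factory=set)

    def register(self, tool: Tool) -> None:
        self.tools[tool.name] = tool

    def grant(self, permission: str) -> None:
        self.granted.add(permission)

    def call(self, name: str, arg: str) -> str:
        tool = self.tools[name]
        if tool.required_permission not in self.granted:
            raise PermissionError(f"{name} requires {tool.required_permission!r}")
        return tool.run(arg)


registry = ToolRegistry()
registry.register(Tool("read_clock", "clock.read", lambda _: "17:20"))
registry.register(Tool("read_contacts", "contacts.read", lambda _: "Alice"))
registry.grant("clock.read")           # the user granted only this

print(registry.call("read_clock", ""))
try:
    registry.call("read_contacts", "")  # not granted: blocked by the registry
except PermissionError as e:
    print(f"blocked: {e}")
```

<p>The design point is where the check lives: enforcement sits in the mediating layer rather than being trusted to the agent’s own code – the same way a browser gates what a Web site can do, not the site itself.</p>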
<p>All of these issues are currently swept under the carpet of ‘security’ in many AI discussions. We need to start talking about them with more nuance. Security is a defensive posture; agency is a functional right.</p>
<p class="hero">But perhaps the most consequential – and hidden – aspect we should be considering is how we get to a common idea of an AI platform – including user agency. Will it be like the major mobile platforms, controlled by private and well-intentioned but self-interested and conflicted actors – with almost inevitable competition and consumer regulation following? Or will it be a publicly accountable (and inevitably messy and laggy) process, like the Web?</p>
<div class="footnotes" role="doc-endnotes">
<ol>
<li id="fn:1">
<p>And date, and perhaps other things, depending on <a href="https://www.hodinkee.com/articles/introducing-vacheron-constantin-les-cabinotiers-solaria">how complicated it is</a>. <a href="#fnref:1" class="reversefootnote" role="doc-backlink">↩</a></p>
</li>
<li id="fn:2">
<p>Notable exceptions include <a href="https://www.youtube.com/watch?v=NqCCubrky00">2001: A Space Odyssey</a>. <a href="#fnref:2" class="reversefootnote" role="doc-backlink">↩</a></p>
</li>
<li id="fn:3">
<p>Beyond that provided by legal protections such as contract and product liability. Comparing that to the regulation provided by architecture is something I’ll address in another post. <a href="#fnref:3" class="reversefootnote" role="doc-backlink">↩</a></p>
</li>
</ol>
</div>]]>
</content>
</entry>
<entry>
<title>Using AI to Evaluate Internet Standards (Part Two)</title>
<link rel="alternate" type="text/html" href="https://mnot.net/blog/2026/using_ai" />
<id>https://mnot.net/blog/2026/using_ai</id>
<updated>2026-03-25T00:00:00Z</updated>
<author>
<name>Mark Nottingham</name>
<uri>https://mnot.net/personal/</uri>
</author>
<summary>Standards work is notoriously hard to track. Let’s explore if grounding AI in working group records can make that history more accessible.</summary>
<category term="Standards" />
<category term="Internet and Web" />
<content type="html" xml:lang="en" xml:base="https://mnot.net/blog/2026/using_ai">
<![CDATA[<p class="intro">I’ve previously looked at <a href="https://www.mnot.net/blog/2025/06/04/using_ai">using AI as a tool to evaluate technical standards efforts</a> – basically, asking commercially available chatbots what they think. However, “AI” is more than off-the-shelf, general-purpose chatbots. Can we do better by grounding the model in a specific context?</p>
<p>I’ve been looking for ways to use <a href="https://notebooklm.google.com">NotebookLM</a> for a while: grounding a chatbot in a specific set of documents allows you to interact with them in a genuinely new way.</p>
<p>The breakthrough question for me was simple: What if those documents were the records of a working group? Thanks to record-keeping requirements, meetings need to keep minutes, document drafts are available, and often groups keep additional information like issue lists and meeting transcripts.</p>
<p>Feed all of that into NotebookLM and you can effectively chat with the history of a standards effort – asking about why a particular choice was made, who participated, what objections came up, and how a specification evolved.</p>
<p class="hero">I suspect this capability could be significant, precisely because the barriers to entry for tracking and understanding standards work are so high. There is simply too much going on — too many emails, issues, and drafts — for most people to follow.</p>
<p>If successful, this technique might help make standards efforts more legible to:</p>
<ul>
<li><strong>New or casual participants</strong>, who currently face a “wall of text” when trying to catch up on years of debate.</li>
<li><strong>Product managers and developers</strong>, who need to understand the intent behind a specification, not just the syntax.</li>
<li><strong>Civil society and policymakers</strong>, for whom the technical archives are often effectively opaque.</li>
</ul>
<h3 id="ai-preferences">AI Preferences</h3>
<p>My first go at this technique was in a working group I chair, <a href="https://ietf-wg-aipref.github.io">AI Preferences</a>. We needed a way to get new and casual participants up to speed on discussions, so that we didn’t need to keep repeating the same arguments.</p>
<p>Here’s <a href="https://notebooklm.google.com/notebook/37add563-249f-442e-a604-1f8d8c1bc113">the notebook</a> I created.<sup id="fnref:1"><a href="#fn:1" class="footnote" rel="footnote" role="doc-noteref">1</a></sup> I asked it to summarise the arguments against proposals for a <a href="https://github.com/ietf-wg-aipref/drafts/wiki/Use-Proposals">“use” term</a> and a <a href="https://github.com/ietf-wg-aipref/drafts/wiki/Search-Proposals">“search” term</a> in the vocabulary.</p>
<p>Privately, I got feedback from new participants that these were very useful – and, critically, I was able to create them without injecting my own biases.</p>
<h3 id="geopriv">GEOPRIV</h3>
<p>Another test case is the now-finished IETF work on <a href="https://datatracker.ietf.org/wg/geopriv/about/">Geolocation Privacy</a>. I wasn’t involved in this group, but have long heard my IETF colleagues whisper about it in hushed tones; it didn’t succeed, and caused a lot of pain on the way there.</p>
<p>After gathering the relevant documents and dragging them <a href="https://notebooklm.google.com/notebook/083c8968-7322-495d-aeb1-99bf864a2374">into a notebook</a>,<sup id="fnref:1:1"><a href="#fn:1" class="footnote" rel="footnote" role="doc-noteref">1</a></sup> I asked:</p>
<blockquote>
<p>Why did GEOPRIV fail?</p>
</blockquote>
<p>Here’s the <a href="https://docs.google.com/document/d/1TKBwpgC9RnlX2_0ux4k0Lm47o2BgZ3Dr9rh9UlXqMC8/edit?usp=sharing">full response</a>. <a href="https://au.linkedin.com/in/martinthomson">Martin Thomson</a> (who was intimately involved in that work) reviewed that answer and said:</p>
<blockquote>
<p>The privacy part is broadly correct. The whole on-behalf-of arrangement did lead to some fairly bitter fights. […] Fights were common. The part about wars is entirely accurate. I’m not sure about the over-engineering part, though maybe that relates to the privacy aspect, which is fair. The final thing about lack of commercial success is broadly right, modulo successful deployments for emergency services geolocation.</p>
<p>So I’d say that this is maybe 80%.</p>
</blockquote>
<h3 id="a-new-tool">A New Tool</h3>
<p>The hard part of all of this is getting all of the documents together in one place to feed into NotebookLM. To make that easier, at least for IETF groups, I<sup id="fnref:2"><a href="#fn:2" class="footnote" rel="footnote" role="doc-noteref">2</a></sup> created a new tool, <a href="https://pypi.org/project/ietf-notebook/">ietf-notebook</a>.</p>
<p>You can install it using <a href="https://pipx.pypa.io/latest/">pipx</a>:</p>
<pre><code>pipx install ietf-notebook</code></pre>
<p>Then, use it to gather all of a group’s drafts, RFCs, meeting minutes and transcripts, its charter, and optionally its GitHub issues into a directory, ready for dragging into a new notebook, so you can chat with that group’s history.</p>
<p>It’s still rough, so bug reports, suggestions, and improvements are most welcome. In my experience, it takes less than a minute to gather the documents for most groups, so you can be chatting with a group in almost no time.</p>
<p>If you want to see a demo first, check out the notebooks for <a href="https://notebooklm.google.com/notebook/37add563-249f-442e-a604-1f8d8c1bc113">AIPREF</a>, <a href="https://notebooklm.google.com/notebook/f998edaf-e5c5-4bb6-994e-b439dfa436f5">DIEM</a>, and <a href="https://notebooklm.google.com/notebook/083c8968-7322-495d-aeb1-99bf864a2374">GEOPRIV</a>.<sup id="fnref:1:2"><a href="#fn:1" class="footnote" rel="footnote" role="doc-noteref">1</a></sup></p>
<div class="footnotes" role="doc-endnotes">
<ol>
<li id="fn:1">
<p>You’ll need to be logged into Google to use these notebooks. <a href="#fnref:1" class="reversefootnote" role="doc-backlink">↩</a> <a href="#fnref:1:1" class="reversefootnote" role="doc-backlink">↩<sup>2</sup></a> <a href="#fnref:1:2" class="reversefootnote" role="doc-backlink">↩<sup>3</sup></a></p>
</li>
<li id="fn:2">
<p>OK, Gemini. <a href="#fnref:2" class="reversefootnote" role="doc-backlink">↩</a></p>
</li>
</ol>
</div>]]>
</content>
</entry>
<entry>
<title>The Internet Isn’t Facebook: How Openness Changes Everything</title>
<link rel="alternate" type="text/html" href="https://mnot.net/blog/2026/open_systems" />
<id>https://mnot.net/blog/2026/open_systems</id>
<updated>2026-02-20T00:00:00Z</updated>
<author>
<name>Mark Nottingham</name>
<uri>https://mnot.net/personal/</uri>
</author>
<summary>Openness makes the Internet harder to govern — but also makes it resilient, innovative, and difficult to capture. Let's look at how the openness of the Internet both defines it and ensures its success.</summary>
<category term="Tech Regulation" />
<category term="Internet and Web" />
<content type="html" xml:lang="en" xml:base="https://mnot.net/blog/2026/open_systems">
<![CDATA[<p class="intro">“Open” tends to get thrown around a lot when talking about the Internet: Open Source, <a href="https://www.mnot.net/blog/2024/07/05/open_internet_standards">Open Standards</a>, Open APIs. However, one of the most important senses of the Internet’s openness doesn’t get discussed as much: its openness <em>as a system</em>. It turns out this has profound effects on both the Internet’s design and how it might be regulated.</p>
<p>This critical aspect of the Internet’s architecture needs to be understood more now than ever. For many, digital sovereignty is top-of-mind in the geopolitics of 2026, but some conceptions of it treat openness as a bug, not a feature. The other hot topic – regulation to address legitimately perceived harms on the Internet – can put both policy goals and the value we get from the Internet at risk if it’s undertaken in a way that doesn’t account for the openness of the Internet. Properly utilised, though, the power of openness can actually help democracies contribute to the Internet (and other technologies like AI) in a constructive way that reinforces their shared values.</p>
<h3 id="open-and-shut">Open and Shut</h3>
<p>Most often, people think and work within <em>closed systems</em> – those whose boundaries are fixed, where internal processes can be isolated from external forces, and where power is concentrated hierarchically. That single scope can still embed considerable complexity, but the assumptions that its closed nature allows make certain skills, tools, and mindsets advantageous. This simplification helps compartmentalise effects and reduces interactions; it’s easier when you don’t have to deal with things you don’t (and can’t) know, much less control.</p>
<p>Many things we interact with daily are closed – for example, a single company, a project group, or even a legal jurisdiction. The Apple App Store, air traffic control, bank clearing systems, and cable television networks are closed; so are many of the emerging AI ecosystems.</p>
<p>The Internet is not like that.</p>
<p>That’s because it’s not possible to know or control all of the actors and forces that influence and interact with the Internet. New applications and networks appear daily, without administrative hoops; often, this is referred to as “<a href="https://www.internetsociety.org/blog/2014/04/permissionless-innovation-openness-not-anarchy/">permissionless innovation</a>,” which allowed things like the Web and real-time video to be built on top of the network without asking telecom operators for approval. New protocols and services are constantly proposed, implemented and deployed – sometimes through an <abbr title="Standards Developing Organisation">SDO</abbr> like the <abbr title="Internet Engineering Task Force">IETF</abbr>, but often without any formal coordination.</p>
<p>This is an open system, and it’s important to understand how that openness constrains the nature of what’s possible on the Internet. What works in a closed system falls apart when you try to apply it to the Internet. Openness as a system makes introducing new participants and services very easy – and that’s a huge benefit – but that open nature makes other aspects of managing the ecosystem very different (and sometimes difficult). Let’s look at a few.</p>
<h3 id="designing-for-openness">Designing for Openness</h3>
<p>Designing an Internet service like an online shop is easy if you assume it’s a closed ecosystem with an authority that ‘runs’ the shop. Yes, you have to deal with accounts, and payments, and abuse, and all of the other aspects, but the issues are known and can be addressed with the right amount of capital and a set of appropriate professionals.</p>
<p>In contrast, designing an open trading ecosystem where there is no single authority lurking in the background to make sure everything runs well is an entirely different proposition. You need to consider how all of the components will interact, and at the same time ensure that none is inappropriately dominated by a single actor (or even a small set) unless there are appropriate constraints on their power. You need to make sure that the amount of effort needed to join the system is low, while at the same time fighting the abusive behaviours that leverage that low barrier, such as spam.</p>
<p class="callout">This is why regulatory efforts that are focused on reforming currently closed systems – “opening them up” by compelling them to expose APIs and allow competitors access to their systems – are unlikely to be successful, because those platforms are designed with assumptions that you can’t take for granted when building an open system. I’ve <a href="https://www.mnot.net/blog/2024/11/29/platforms">written previously</a> about Carliss Baldwin’s excellent work in this area, primarily from an economic standpoint. An open system is not just a closed one with a few APIs grafted onto it.</p>
<p>For example, you’re likely to need a reputation system for vendors and users, but it can’t rely on a single authority making judgment calls about how to assign reputation, handle disputes, and so forth. Instead, you’ll want to make it more modular, where different reputation systems can compete. That’s a very different design task, and it is undoubtedly harder to achieve a good outcome.</p>
<p>At the same time, an open system like the Internet needs to be more pessimistic in its assumptions about who is using it. While closed systems can take drastic steps like excluding bad actors from them, this is much more difficult (and problematic) in an open system. For example, a closed shopping site will have a definitive list of all of its users (both buyers and sellers) and what they have done, so it can ascertain how trustworthy they are based upon that complete view. In an open system, there is no such luxury – each actor only has a partial view of the system.</p>
<h3 id="introducing-change-in-open-systems">Introducing Change in Open Systems</h3>
<p>An operator of a proprietary, closed service like Amazon, Google, or Facebook has a view of its entire state and is able to deploy changes across it, even if they break assumptions its users have previously relied upon. Their privileged position gives them this ability, and even though these services run on top of the Internet, they don’t inherit its openness.</p>
<p>In contrast, an open system like e-mail, federated messaging, or Internet routing is much harder to evolve, because you can’t create a list of who’s implementing or using a protocol with any certainty; you can’t even know all of the <em>ways</em> it’s being used. This makes introducing changes tricky; as is often said in the <abbr title="Internet Engineering Task Force">IETF</abbr>, <strong>you can’t have a protocol ‘flag day’ where everyone changes how they behave at the same time</strong>. Instead, mechanisms for gradual evolution (extensibility and versioning) need to be carefully built into the protocols themselves.</p>
<p>The Web is another example of an open system.<sup id="fnref:1"><a href="#fn:1" class="footnote" rel="footnote" role="doc-noteref">1</a></sup> No one can enumerate all of the Web servers in the world – there are just too many, some hidden behind firewalls and logins. There are whole social networks and commerce sites that you’ve never heard of in other parts of the world. While search engines make us feel like we see the whole Web (and have every incentive to make us believe that), what they index is a small fraction of the real thing, missing the so-called ‘deep’ Web. This vastness is why browsers have to be so conservative in introducing changes, and why we have to be so careful when we update the HTTP protocol.</p>
<h3 id="governing-open-systems">Governing Open Systems</h3>
<p>Openness also has significant implications for governance. Command-and-control techniques that work well when governing closed systems are ineffective on an open one, and can often be counterproductive.</p>
<p>At the most basic level, this is because there is no single party to assign responsibility to in an open system – its governance structure is polycentric (i.e., has multiple and often diffuse centres of power). Compounding that effect is the fact that large open systems like the Internet span multiple jurisdictions, so a single jurisdiction is always going to be playing “whack-a-mole” if it tries to enforce compliance on one party. As a result, decisions in open systems tend to take much more time and effort than anticipated if you’re used to dealing with closed, hierarchical systems.</p>
<p>On the Internet, another impact of openness is seen in the tendency to create “building block” technology components that focus on enabling communication, not limiting it. That means that they are designed to support broad requirements from many kinds of users, not constrain them, and that they’re composed into layers which are distinct and separate. So trying to use open protocols to regulate the behaviour of Internet users is often like trying to pin spaghetti to the wall.</p>
<p>Consider, for example, the UK’s attempts to regulate user behaviour by regulating lower-layer general-purpose technologies like <abbr title="Domain Name System">DNS</abbr> resolvers. Yes, they can make it more difficult for those using common technology to do certain things, but actually stopping such behaviour is very hard, due to the flexible, layered nature of the Internet; determined people can do the work and use alternative <abbr title="Domain Name System">DNS</abbr> servers, encrypted <abbr title="Domain Name System">DNS</abbr>, <abbr title="Virtual Private Networks">VPNs</abbr>, and other technologies to work around filters. This is considered a feature of a global communications architecture, not a bug.</p>
<p>That’s not to say that all Internet regulation is a fool’s errand. The EU’s Digital Markets Act is targeting a few well-identified entities who have (very successfully) built closed ecosystems on top of the open Internet. At least from the perspective of Internet openness, that isn’t problematic (and indeed might result in more openness).</p>
<p>On the other hand, the Australian eSafety Regulator’s effort to improve online safety – itself a goal not at odds with Internet openness – falls on its face by <a href="https://www.mnot.net/blog/2022/09/11/esafety-industry-codes">applying its regulatory mechanisms to <em>all</em> actors on the Internet</a>, not just a targeted few. This is an extension of the “Facebook is the Internet” mindset – acting as if the entire Internet is defined by a handful of big tech companies. Not only does that create significant injustice and extensive collateral damage, it also creates the conditions for making that outcome more likely (surely a competition concern). While these closed systems might be the most legible part of the Internet to regulators, they shouldn’t be mistaken for the Internet itself.</p>
<p>Similarly, blanket requirements to expose encrypted messages have the effect of ‘chasing’ criminals to alternative services, making their activity even less legible to authorities and severely impacting the security and rights of law-abiding citizens in the process. That’s because there is no magical list of all of the applications that use encryption on the Internet: instead, regulators end up playing whack-a-mole. Cryptography relies on mathematical concepts realised in open protocols; treating encryption as a switch that companies can simply turn off misses the point.</p>
<p>None of this is new or unique to the Internet; cross-border institutions are by nature open systems, and these issues come up often in discussions of global public goods (whether it is oceans, the climate, or the Internet). They thrive under governance that focuses on collaboration, diversity, and collective decision-making. For those who are used to top-down, hierarchical styles of governance, this can be jarring, but it produces systems that are far more resilient and less vulnerable to capture.</p>
<h3 id="why-the-internet-must-stay-open">Why the Internet Must Stay Open</h3>
<p>If you’ve read this far, you might wonder why we bother: if openness brings so many complications, why not just change the Internet so that it’s a simpler, closed system that is easier to design and manage? Certainly, it’s <em>possible</em> for large, world-spanning systems to be closed. For example, both the international postal and telephony systems are effectively closed (although the latter has opened up a bit). They are reliable and successful (for some definition of success).</p>
<p>I’d argue that those examples are both highly constrained and well-defined; the services they provide don’t change much, and for the most part new participants are introduced only on one ‘side’ – new end users. Keeping these networks going requires considerable overhead and resources from governments around the world, both internally and at the international coordination layer.</p>
<p>The Internet (in a broader definition) is not nearly so constrained, and the bulk of its value is defined by the ability to introduce new participants of all kinds (not just users) <em>without</em> permission or overhead. This isn’t just a philosophical preference; it’s embedded in the architecture itself via the <a href="https://en.wikipedia.org/wiki/End-to-end_principle">end-to-end principle</a>. Governing major aspects of the Internet by international treaty is simply unworkable, and if the outcome of that agreement is to limit the ability of new services or participants to be introduced (e.g., “no new search engines without permission”), it’s going to have a material effect on the benefits that humanity has come to expect from the Internet. In many ways, it’s just another pathway to <a href="https://www.rfc-editor.org/rfc/rfc9518.html">centralization</a>.</p>
<p>Again, all of this is not to say that closed systems on <em>top</em> of the Internet shouldn’t be regulated – just that it needs to be done in a way that’s mindful of the open nature of the Internet itself. The guiding principle is clear: regulate the endpoints (applications, hosts, and specific commercial entities), not the transit mechanisms (the protocols and infrastructure). From what’s happened so far, it looks like many governments understand that, but some are still learning.</p>
<p>Likewise, the many harms associated with the Internet need both technical and regulatory solutions; botnets, <abbr title="Distributed Denial of Service Attack">DDoS</abbr>, online abuse, “cybercrime” and much more can’t be ignored. However, solutions to these issues must respect the open nature of the Internet; even though their impact on society is heavy, the collective benefits of openness – both social and economic – <em>still</em> outweigh them; low barriers to entry ensure global market access, drive innovation, and prevent infrastructure monopolies from stifling competition.</p>
<p>Those points acknowledged, I and many others are concerned that regulating ‘big tech’ companies may have the unintended side effect of ossifying their power – that is, blessing their place in the ecosystem and making it harder for more open systems to displace them. This concentration of power isn’t an accident; commercial entities have a strong economic incentive to build proprietary walled gardens on top of open protocols to extract rent. For example, we’d much rather see global commerce based upon open protocols, well-thought-out legal protections, and cooperation, rather than overseen (and exploited) by the Amazon/eBay/Temu/etc. gang.</p>
<p>Of course, some jurisdictions can and will try to force certain aspects of the Internet to be closed, from their perspective. They may succeed in achieving their local goals, but such systems won’t offer the same properties as the Internet. Closed systems can be bought, coerced, lobbied into compliance, or simply fail: their hierarchical nature makes them vulnerable to failures of leadership. The Internet’s openness makes it harder to maintain and govern, but also makes it far more resilient and resistant to capture.</p>
<p>Openness is what makes the Internet the Internet. It needs to be actively pursued if we want the Internet to continue providing the value that society has come to depend upon from it.</p>
<p><em>Thanks to <a href="https://www.komaitis.org">Konstantinos Komaitis</a> for his suggestions.</em></p>
<div class="footnotes" role="doc-endnotes">
<ol>
<li id="fn:1">
<p>Albeit one that is the foundation for a number of very large closed systems. <a href="#fnref:1" class="reversefootnote" role="doc-backlink">↩</a></p>
</li>
</ol>
</div>]]>
</content>
</entry>
<entry>
<title>The Power of 'No' in Internet Standards</title>
<link rel="alternate" type="text/html" href="https://mnot.net/blog/2026/no" />
<id>https://mnot.net/blog/2026/no</id>
<updated>2026-02-13T00:00:00Z</updated>
<author>
<name>Mark Nottingham</name>
<uri>https://mnot.net/personal/</uri>
</author>
<summary>The voluntary nature of Internet standards means that the biggest power move may be to avoid playing the game. Let's take a look.</summary>
<category term="Tech Regulation" />
<category term="Standards" />
<category term="Internet and Web" />
<content type="html" xml:lang="en" xml:base="https://mnot.net/blog/2026/no">
<![CDATA[<p class="intro">Fairly regularly, I hear someone ask whether a particular company is expressing undue amounts of power in Internet standards, seemingly with the implication that they’re getting away with murder (or at least the Internet governance equivalent).</p>
<p>While it’s not uncommon for powerful entities to try to steer the direction that the work goes in, they don’t have free rein: the <a href="https://www.mnot.net/blog/2024/07/05/open_internet_standards">open nature of Internet standards processes</a> assures that their proposals are subjected to considerable scrutiny from their competitors, technical experts, civil society representatives, and on occasion, governments. Of course there are counterexamples, but in general that’s not something I worry about <em>too</em> much.</p>
<p>The truth is that there is very little power expressed in standards themselves. Instead, it resides in the implementation, deployment, and use of a particular technology, no matter whether it was standardised in a committee or is a <em>de facto</em> standard. Open standards processes provide some useful properties, but they are <strong>not</strong> a guarantee of quality or suitability and there are many standards that have zero impact.</p>
<p>That implication of <a href="https://www.mnot.net/blog/2024/03/13/voluntary">voluntary adoption</a> is why I believe that <strong>the most undiluted expression of power in Internet standards is saying ‘no’</strong> – in particular, when a company declines to participate in or implement a specification, feature, or function. Especially if that company is central to a ‘choke point’ with already embedded power due to adoption of related technologies like an Operating System or Web browser. In the most egregious cases, this is effectively saying ‘we want that to stay proprietary.’</p>
<p>Sometimes the ‘no’ is explicit. I’ve heard an engineer from a Very Big Tech Company publicly declare that their product would not implement a specification, with the very clear implication that the working group shouldn’t bother adopting the spec as a result. That’s using their embedded power to steer the outcome, hard.</p>
<p>Usually though, it’s a lot more subtle. Concerns are raised. Review of a specification is de-prioritised. Maybe a standard is published, but it never gets to implementation. Or maybe the scope of the standard or its implementation is watered down so much that it no longer delivers anything actually interoperable or functional.</p>
<p>To be very clear, engineers often have very good reasons for declining to implement something. There are a <em>lot</em> of bad ideas out there, and Internet engineering imposes a lot of constraints on what is possible. Proposals have to run a gauntlet of technical reviews, architectural considerations, and carefully staked-out fiefdoms to see the light of day. Proponents are often convinced of the value of their contributions, only to find that they fail to get traction for reasons that can be hard to understand. The number of people who understand the nuances is small: usually, just a handful in any given field.</p>
<p>But when the ‘no’ comes about because it doesn’t suit the agendas of powerful parties, something is wrong. Even people who want to see a better Internet reduce their expectations, because they lose faith in the possibility of success.</p>
<h3 id="a-failure-of-ambition">A Failure of Ambition</h3>
<p>To me, the evidence of this phenomenon is clearest in how little ambition we’re seeing from the Web. The Web should be a constantly rising sea of commoditised technology, cherry-picking successful proprietary applications – marketplaces like Amazon and eBay, social networks like LinkedIn and Facebook, chat on WhatsApp and iMessage, search on Google, and so on – and reinventing them as public good oriented features without a centralised owner. Robin Berjon dives into this view of the Web in <a href="https://berjon.com/bigger-browser/">You’re Going to Need a Bigger Browser</a>.<sup id="fnref:1"><a href="#fn:1" class="footnote" rel="footnote" role="doc-noteref">1</a></sup></p>
<p>Instead, most current Web standards activity focuses on incremental, small features: tweaking around the edges and creating new ‘low level’ APIs that proprietary things can be built upon. This approach was codified a while back in the ‘<a href="https://github.com/extensibleweb/manifesto">Extensible Web Manifesto</a>’, which was intended to let the community focus its resources and let a ‘thousand flowers bloom’, but the effect has been to allow silo after silo to be built upon the Web, solidifying its role as the greatest centralisation technology ever.</p>
<p>There are small signs of life. Recent features like Web Payments, federated identity and the various (somewhat) decentralised social networking protocols show promise for extending the platform in important ways, but they’re exceptional, not the rule.</p>
<h3 id="creating-upward-pressure">Creating Upward Pressure</h3>
<p>How then, can we create higher-level capabilities that serve society but aren’t proprietary?</p>
<p>Remember that <a href="https://www.mnot.net/blog/2024/03/13/voluntary">the voluntary nature of Internet standards</a> is a feature – it allows us to fail, using the marketplace as a proving ground. Forcing tech companies to implement well-intentioned specifications that aren’t informed by experience is a recipe for broken, bad tech. Likewise, ‘standardising harder’ isn’t going to create better outcomes: the real influence of what standards do is in their implementation and adoption.</p>
<p>What matters is not writing specifications, it’s getting to a place where it’s not possible for private concerns to express inappropriate power over the Internet. Or as Robin <a href="https://berjon.com/digital-sovereignty/">articulates</a>: “What matters is who has the structural power to deploy the standards they want to see and avoid those they dislike.” To me, that suggests a few areas where progress can be made:</p>
<p class="hero">First, we should remember that the market is the primary force shaping companies’ behaviour right now. It used to be that paid services like Proton were <a href="https://balkaninsight.com/2025/04/01/taking-aim-at-big-tech-proton-ceo-warns-democracy-depends-on-privacy/">mocked for competing with free Google services</a>. Now they’re viable because people realised the users are the product. If we want privacy-respecting, decentralised solutions and are willing to pay for them, that changes the incentives for companies, big and small. However, the solutions need to be bigger than any one company.</p>
<p class="hero">Second, where the market fails, competition regulators can and should step in. They’ve been increasingly active recently, but I’d like to see them go further: to provide <strong>stronger guidelines for open standards processes</strong>, and to give companies stronger incentives to participate and adopt open standards, such as a <strong>presumption that adopting a specification that goes through a high-quality process is not anticompetitive</strong>. Doing so would create natural pressure for companies to be interoperable (reducing those choke points) while also being more subject to public and expert review.</p>
<p class="hero">Third, private corporations are not the only source of innovation in the world. In fact, there are <a href="https://www.hbs.edu/faculty/Pages/item.aspx?num=36972">great arguments</a> that open collaboration is a much deeper source of innovation in the modern economy. My interest turns towards the possibilities of public sponsorship for development of the next generation of Internet technology: what’s now being called <strong>Digital Public Infrastructure</strong>. There are many challenging issues in this area – especially regarding governance and, frankly, viability – but if the needle can be threaded and the right model found, the benefits to the people who use the Internet could be massive.</p>
<div class="footnotes" role="doc-endnotes">
<ol>
<li id="fn:1">
<p>Yes, as discussed before there are <a href="https://www.mnot.net/blog/2024/11/29/platforms">things that are harder to do without a single-company chokepoint</a>, but that shouldn’t preclude <em>trying</em>. <a href="#fnref:1" class="reversefootnote" role="doc-backlink">↩</a></p>
</li>
</ol>
</div>]]>
</content>
</entry>
<entry>
<title>Some Thoughts on the Open Web</title>
<link rel="alternate" type="text/html" href="https://mnot.net/blog/2026/open_web" />
<id>https://mnot.net/blog/2026/open_web</id>
<updated>2026-01-20T00:00:00Z</updated>
<author>
<name>Mark Nottingham</name>
<uri>https://mnot.net/personal/</uri>
</author>
<summary>The Open Web means several things to different people, depending on context, but recently discussions have focused on the Web's Openness in terms of access to information -- how easy it is to publish and obtain information without barriers there.</summary>
<category term="Internet and Web" />
<content type="html" xml:lang="en" xml:base="https://mnot.net/blog/2026/open_web">
<![CDATA[<p class="intro">“The Open Web” means several things to different people, depending on context, but recently discussions have focused on the Web’s Openness in terms of <strong>access to information</strong> -- how easy it is to publish and obtain information without barriers there.</p>
<p>David Schinazi and I hosted a pair of ad hoc sessions on this topic at the last IETF meeting in Montreal and the subsequent W3C Technical Plenary in Kobe; you can see the <a href="https://docs.google.com/document/d/1WaXDfwPP6olY-UVQxDZKNkUyqvmHt-u4kREJW4ys6ms/edit?usp=sharing">notes and summaries from those sessions</a>. This post contains my thoughts on the topic so far, after some simmering.</p>
<h3 id="the-open-web-is-amazing">The Open Web is Amazing</h3>
<p>For most of human history, it’s been difficult to access information. As an average citizen, you had to work pretty hard to access academic texts, historical writings, literature, news, public information, and so on. Libraries were an amazing innovation, but locating and working with the information there was still a formidable challenge.</p>
<p>Likewise, publishing information for broad consumption required resources and relationships that were unavailable to most people. Gutenberg famously broke down some of those barriers, but many still remained: publishing and distributing books (or articles, music, art, films) required navigating extensive industries of gatekeepers, and often insurmountable costs and delays.</p>
<p>Tim Berners-Lee’s invention cut through all of that; it was now possible to communicate with the whole world at very low cost and almost instantaneously. Various media industries were disrupted (but not completely displaced) by this innovation, and reinterpreted roles for intermediaries (e.g., search engines for librarians, online marketplaces for ‘brick and mortar’ shops) were created.</p>
<p>Critically, a norm was also created; an expectation that content would be easy to access, without paying or logging in. This was not enforced, and it was not always honoured: there were still subscription sites, and that’s OK, but they didn’t see the massive network effects that hyperlinks and browsers brought.</p>
<p>It is hard to overstate the benefits of this norm. Farmers in developing countries now have easy access to guidelines and data that help their crops succeed. Students around the world have access to resources that were unimaginable even a few decades ago. They can also contribute to that global commons of content, benefiting others as they build a reputation for themselves.</p>
<p>The Open Web is an amazing public good, both for those who consume information and those who produce it. By reducing costs and friction on both sides, it allows people all over the world to access and create information in a way -- and with an ease -- that would have been unimaginable to our predecessors. It’s worth fighting for.</p>
<h3 id="people-have-different-motivations-for-opening-content">People Have Different Motivations for Opening Content</h3>
<p>We talk about “The Open Web” in the singular, but in fact there are many motivations for making content available freely online.</p>
<p>Some people consciously make their content freely available on the Web because they want to contribute to the global commons, to help realise all of the benefits described above.</p>
<p>Many don’t, however.</p>
<p>Others do it because they want to be discovered and build a reputation. Or because they want to build human connections. Or because they want revenue from putting ads next to the content. Or because they want people to try their content out and then subscribe to it on the less-than-open Web.</p>
<p>Most commonly, it’s a blend of many (or even all) of these motivations.</p>
<p>Discussions of the Open Web need to consider all of them distinctly -- what about their environments is changing, and what might encourage or discourage different kinds of Open Web publishers. Only focusing on some motivations or creating “purity tests” for content isn’t helpful.</p>
<h3 id="there-are-many-degrees-of-open">There are Many Degrees of “Open”</h3>
<p>Likewise, there are many degrees of “open.” While some Open Web content doesn’t come with any strings, much of it does. You might have to allow tracking for ads. While an article might be available to search engines (to drive traffic), you might have to register for an account to view the content as an individual.</p>
<p>There are serious privacy considerations associated with both of these, but those concerns should be considered as distinct from those regarding open access to information. People sometimes need to get a library card to access information at their local library (in person or online), but that doesn’t make the information less open.</p>
<p class="callout">One of the most interesting assertions at the meetings we held was about advertising-supported content: that it was <em>more</em> equitable than “micro-transactions” and similar pay-to-view approaches, because it makes content available to those who would otherwise not be able to afford it.</p>
<p>At the same time, these ‘small’ barriers – for example, requirements to log in after reading three articles – add up, reducing the openness of the content. If the new norm is that everyone has to log in everywhere to get Web content (and we may be well on our way to that), the Open Web suffers.</p>
<p>Similarly, some open content is free to all comers and can be reused at will, while other examples have technical barriers (such as bot blockers or other selective access schemes) and/or legal barriers (namely, copyright restrictions).</p>
<h3 id="it-has-to-be-voluntary">It Has to be Voluntary</h3>
<p>Everyone who publishes on the Open Web does so because they want to – because the benefits they realise (see above) outweigh any downsides.</p>
<p>Conversely, any content that is not on the Open Web is absent because its owner has made the judgement that opening it up is not worthwhile for them. They cannot be forced to “open up” that content -- they can only be encouraged.</p>
<p>Affordances and changes in infrastructure, platforms, and other aspects of the ecosystem -- sometimes realised in technical standards, sometimes not -- might change that incentive structure and create the conditions for more or less content on the Open Web. They cannot, however, be forced or mandated.</p>
<p>To me, this means that attempts to coerce different parties into desired behaviours are unlikely to succeed – they have to <em>want</em> to provide their content. That includes strategies like withholding capabilities from them; they’ll just go elsewhere to obtain them, or put their content behind a paywall.</p>
<h3 id="its-changing-rapidly">It’s Changing Rapidly</h3>
<p>We’re talking about the Open Web now because of the introduction of AI -- a massive disruption to the incentives of many content creators and publishers, because AI both leverages their content (through scraping for training) and competes with it (because it is generative).</p>
<p>For those who opened up their content because they wanted to establish reputation and build connectivity, this feels exploitative. They made their content available to benefit people, and it turns out that it’s benefiting large corporations who claim to be helping humanity but have failed to convince many.</p>
<p>For those who want to sell ads next to their content or entice people to subscribe, this feels like betrayal. Search engines built an ecosystem that benefited publishers and the platforms, but publishers see those same platforms as continually taking more value from the relationship -- as seen in efforts to force intermediation like AMP, and now AI, where sites get drastically reduced traffic in exchange for nothing at all.</p>
<p>And so people are blocking bots, putting up paywalls, changing business models, and yanking their content off the Open Web. The commons is suffering because technology (which always makes <em>something</em> easier) now makes content creation <em>and</em> consumption easier, so long as you trust your local AI vendor.</p>
<p>This change is unevenly distributed. There are still people happily publishing open content in formats like RSS, which doesn’t facilitate tracking or targeting, and is wide open to scraping and reuse. That said, there are large swathes of content that are disappearing from the Open Web because it’s no longer viable for the publisher; the balance of incentives for them has changed.</p>
<h3 id="open-is-not-free-to-provide">Open is Not Free to Provide</h3>
<p>Information may be a non-rivalrous good, but that doesn’t mean it’s free to provide. The people who produce it need to support themselves.</p>
<p>That doesn’t mean that their interests dominate all others, nor that the structures that have evolved are the best (or even a good) way to assure that they can do so; these are topics better suited for copyright discussions (where there is a very long history of such considerations being debated).</p>
<p>Furthermore, on a technical level serving content to anyone who asks for it on a global scale might be a commodity service now -- and so very inexpensive to do, in some cases -- but it’s not free, and the costs add up at scale. These costs -- again, alongside the perceived extractive nature of the relationship -- are causing some to <a href="https://social.kernel.org/notice/B2JlhcxNTfI8oDVoyO">block or otherwise try to frustrate</a> these uses.</p>
<p>Underlying this factor is an argument about whether it’s legitimate to say you’re on ‘the Open Web’ while selectively blocking clients you don’t like – either because they’re abusive technically (over-crawling), or because you don’t like what they do with the data. My observation here is that however you feel about it, that practice is now very, very widespread – evidence of great demand on the publisher side. If that capability were taken away, I strongly suspect the net result would be very negative for the Open Web.</p>
<h3 id="its-about-control">It’s About Control</h3>
<p>Lurking beneath all of these arguments is a tension between the interests of those who produce and use content. Forgive me for resorting to hyperbole: some content producers want pixel-perfect control not only over how their information is presented but also over how it is used and who uses it, and some open access advocates want all information to be usable for any purpose, at any time, and anywhere.</p>
<p>Either of these outcomes (hyperbolic as they are) would be bad for the Open Web.</p>
<p>The challenge, then, is finding the right balance – a Web where content producers have incentives to make their content available in a way that can be reused as much as is reasonable. That balance needs to be stable and sustainable, and take into account shocks like the introduction of AI.</p>
<h3 id="a-way-forward">A Way Forward</h3>
<p>Having an Open Web available for humanity is not a guaranteed outcome; we may end up in a future where easily available information is greatly diminished or even absent.</p>
<p>With that and all of the observations above in mind, what’s most apparent to me is that we should focus on finding ways to create and strengthen incentives to publish content that’s open (for some definition of open) -- understanding that people might have a variety of motivations for doing so. If environmental factors like AI change their incentives, we need to understand why and address the underlying concerns if possible.</p>
<p>In other words, we have to create an Internet where people <em>want</em> to publish content openly – for some definition of “open.” Doing that may challenge the assumptions we’ve made about the Web as well as what we want “open” to be. What’s worked before may no longer create the incentive structure that leads to the greatest amount of content available to the greatest number of people for the greatest number of purposes.</p>]]>
</content>
</entry>
</feed>
{
"cache-control": "max-age=43200",
"cf-cache-status": "DYNAMIC",
"cf-ray": "9f3db9e82e905751-CMH",
"content-language": "en",
"content-length": "68371",
"content-type": "application/atom+xml",
"date": "Wed, 29 Apr 2026 10:47:01 GMT",
"etag": "\"10b13-6508872803118\"",
"last-modified": "Tue, 28 Apr 2026 17:20:27 GMT",
"server": "cloudflare",
"strict-transport-security": "max-age=15552000"
}
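The ETag and Last-Modified values in the response headers above are exactly what a polite feed reader should send back on its next poll: a conditional GET with `If-None-Match` / `If-Modified-Since` lets the server answer 304 Not Modified instead of re-sending the 67 KB feed body. A minimal sketch (the helper names are illustrative, not part of any library; the validator values are taken from the response above):

```python
def conditional_headers(etag=None, last_modified=None):
    """Build request headers that revalidate a cached copy of a feed.

    If-None-Match (the ETag) is the stronger validator; If-Modified-Since
    is a fallback for servers that only emit Last-Modified.
    """
    headers = {}
    if etag:
        headers["If-None-Match"] = etag
    if last_modified:
        headers["If-Modified-Since"] = last_modified
    return headers


def cached_copy_is_fresh(status_code):
    """304 Not Modified means the stored feed can be reused unchanged."""
    return status_code == 304


# Validators taken from the response headers shown above:
headers = conditional_headers(
    etag='"10b13-6508872803118"',
    last_modified="Tue, 28 Apr 2026 17:20:27 GMT",
)
# `headers` now carries both If-None-Match and If-Modified-Since,
# ready to be attached to the next GET of the feed URL.
```

Since this feed also sends `cache-control: max-age=43200`, a reader shouldn't revalidate at all within that 12-hour window; the conditional request is for polls after the cached copy expires.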
{
"meta": {
"type": "atom",
"version": "1.0"
},
"language": null,
"title": "Mark Nottingham",
"description": null,
"copyright": null,
"url": "https://mnot.net/blog/",
"self": "https://mnot.net/blog/index.atom",
"published": null,
"updated": "2026-04-28T17:20:19.000Z",
"generator": null,
"image": null,
"authors": [],
"categories": [],
"items": [
{
"id": "https://mnot.net/blog/2026/agents_as_collective_bargains",
"title": "What's Missing in the ‘Agentic’ Story",
"description": "Every online interaction is a lopsided negotiation. For AI to truly work for us, we need more than just safety -- we need to start building true agency as a form of collective bargaining.",
"url": "https://mnot.net/blog/2026/agents_as_collective_bargains",
"published": null,
"updated": "2026-04-24T00:00:00.000Z",
"content": "<p class=\"intro\">For much of the history of computing, it was reasonably safe to assume that a machine was doing what you told it to do (and what its creators promised it would do), because its operations were local.</p>\n\n<p>You bought a laptop or desktop with an operating system, and it did what it said on the tin: it ran programs and stored files. You bought a spreadsheet and a word processor, and those programs performed those tasks and didn’t do anything else. Software that didn’t do this was in a separate bucket called ‘malware’ and we had ways of dealing with it.</p>\n\n<p>That assumption has a more general precedent in tools – whether they be staplers, screwdrivers, or telescopes. When you buy a screwdriver, it turns screws; it has no agency of its own. It might do other things, but that’s because you’re misusing the tool, not because it decided to do something else. Most things that people use unambiguously follow this pattern: for example, my mechanical wristwatch can’t do anything but tell me the time.<sup id=\"fnref:1\"><a href=\"#fn:1\" class=\"footnote\" rel=\"footnote\" role=\"doc-noteref\">1</a></sup></p>\n\n<p>That pattern is perpetuated in most<sup id=\"fnref:2\"><a href=\"#fn:2\" class=\"footnote\" rel=\"footnote\" role=\"doc-noteref\">2</a></sup> depictions of computers in fiction (especially sci-fi), which work for people diligently and always on their behalf, usually with minimal intrusion. They unambiguously act in the interest of their users, following in the footsteps of technological optimism which informs much of fiction and influenced a generation of nerds who tried to build it.</p>\n\n<p>All of these experiences combine to lead people to trust computers fairly unquestioningly; they don’t give much thought to the other purposes that might be served. When I use my phone, it’s <strong>my</strong> phone, and so it’s working for me, right? 
This is perpetuated in the press: recently, I saw an article in a major newspaper about how to talk to “your” AI agent.</p>\n\n<p>If you scratch the surface just a bit, however, none of this is <em>true</em> when applied to modern technologies, and these assumptions are not safe.</p>\n\n<h3 id=\"the-state-of-trust-on-the-internet\">The State of Trust on the Internet</h3>\n<p>Every time you use an Internet-connected computer, you’re trusting someone (and most likely, a multitude) to act on your behalf. From an application’s code all the way down to the silicon, software and hardware and the network services they use reliably embed the interests of those that create them – and they may or may not be aligned with yours.</p>\n\n<p>Critically, those layers are usually – but not always – arranged in such a way that the interests of their producers and users are aligned. People creating computer chips are competing with other people creating chips, and so they focus on that; if they try to abuse their position by (say) exfiltrating your passwords in a side channel, the market (and possibly a legal regulator) will punish them.</p>\n\n<p>However, modern businesses have become adept at exploiting the gaps in this arrangement. Now, if you use a ‘smart’ watch or your phone to check the time, it’s likely more accurate but you have to contend with the possibility that it’s reporting your location, activities, and who knows what else back to its creator – and that they might be sharing that information with others. And that’s also the case for every other application running.</p>\n\n<p>Those abuses aren’t obvious, and it’s very easy for people to look at an Internet-connected device and fail to recognise that even though it’s “theirs” and that the data it processes is also “theirs”, they’re placing an inordinate amount of trust into a galaxy of faceless parties – trust that may not be deserved or protected. 
For example:</p>\n\n<ul>\n <li>TVs are widely known to <a href=\"https://arstechnica.com/tech-policy/2025/12/texas-sues-biggest-tv-makers-alleging-smart-tvs-spy-on-users-without-consent/\">spy on their users’ activities without consent</a>.</li>\n <li>Meta <a href=\"https://arstechnica.com/tech-policy/2024/03/facebook-secretly-spied-on-snapchat-usage-to-confuse-advertisers-court-docs-say/\">decided to decrypt private traffic from ‘research’ users’ phones</a> to competing services and store it on their own servers. Predictably, once the users found out, it <a href=\"https://storage.courtlistener.com/recap/gov.uscourts.cand.369872/gov.uscourts.cand.369872.736.0.pdf\">ended up in court</a>.</li>\n <li>At the same time, Facebook also <a href=\"https://arstechnica.com/gadgets/2024/03/netflix-ad-spend-led-to-facebook-dm-access-end-of-facebook-streaming-biz-lawsuit/2/\">let Netflix have access to users’ private Direct Messages</a>, creating yet another lawsuit.</li>\n <li>Microsoft quietly changed the model of their ‘new Outlook’ e-mail client to <a href=\"https://www.ghacks.net/2024/01/12/proton-mail-says-that-the-new-outlook-app-for-windows-is-microsofts-new-data-collection-service/\">surreptitiously send passwords for third-party e-mail servers to their cloud</a>, so that they can share it with more than 700 of their closest friends (i.e., data brokers and advertisers).</li>\n <li>Various automakers <a href=\"https://foundation.mozilla.org/en/blog/privacy-nightmare-on-wheels-every-car-brand-reviewed-by-mozilla-including-ford-volkswagen-and-toyota-flunks-privacy-test/\">collect detailed information</a> and share it with other parties, including data brokers and <a href=\"https://www.nytimes.com/2024/03/11/technology/carmakers-driver-tracking-insurance.html\">insurance companies</a> – to the point where it’s difficult to find a car that doesn’t violate your trust.</li>\n <li>Ring (i.e., Amazon) was so sloppy with their security practices that ‘rogue insiders’ as well as 
hackers <a href=\"https://www.theregister.com/2024/04/25/ring_ftc_settlement/\">exploited their access to people’s video cameras</a>.</li>\n <li>Grindr <a href=\"https://arstechnica.com/tech-policy/2024/04/grindr-users-seek-payouts-after-dating-app-shared-hiv-status-with-vendors/\">shared highly sensitive health information</a> with third parties without permission.</li>\n <li>Photobucket <a href=\"https://blog.ericgoldman.org/archives/2026/03/photobuckets-attempted-tos-amended-mostly-fails-pierce-v-photobucket.htm\">aggressively changed terms of service</a> to allow AI use of people’s photos, but failed in court.</li>\n</ul>\n\n<p>This is just a small selection; there are many more. All of these are stunning violations of trust. And, it’s becoming <em>normal</em>.</p>\n\n<p class=\"hero\">How did we get here? If I were to speculate on the reasons for that, I’d say it’s a combination of the normalisation of <strong>cloud computing</strong> (because everything is now running on or connected to computers you don’t control), the <strong>expectations of higher and higher growth and returns</strong> by investors, putting pressure on companies for new and recurring revenue, and – more than anything – the <strong>weakness of any regulating forces</strong> on these actors.</p>\n\n<h3 id=\"user-agents-are-a-form-of-collective-bargaining\">User Agents are a Form of Collective Bargaining</h3>\n<p>Although it’s difficult to trust anyone on the Internet given the examples above, it could be much, much worse. Imagine if you had to install a program on your computer from every company, government body, and other entity that you interact with, and those programs had full access to do what they like on your system. 
In other words, every online interaction becomes an opportunity to install malware that can extract your personal information, delete files or hold them ransom, profile and monitor your behaviour, and generally ignore your interests in favour of theirs.</p>\n\n<p>What prevents that on the modern Internet? In many cases, it’s the humble Web browser, which selectively exposes capabilities to Web sites without offering full access to your computer. This is called a <a href=\"https://www.w3.org/news/2025/group-note-draft-web-user-agents/\">User Agent</a> – software that acts on your behalf, representing your interests in your interactions with other parties.</p>\n\n<p>And while the Web browser is representing your interests, it’s <em>also</em> balancing them with the interests of the sites that you visit – it’s an <strong>agent for them too</strong>. They want the page to render in a predictable way, but some users want to use accessibility tools. People don’t want to be tracked, but sites need <em>some</em> indication of how their pages are consumed. For the Web, all of these delicate tradeoffs are made within a framework of shared principles and values and decided in transparent fora using consensus processes – namely, the relevant standards bodies (usually, the W3C or IETF). There’s also more than one Web browser, so you can choose the agent that best represents your interests – thereby creating market pressure to do so.</p>\n\n<p class=\"hero\">Importantly, this is done in a way that results in the <em>same deal for everyone</em>. If you had to negotiate what Web sites are allowed to do on your computer on a case-by-case basis, you’d quickly give up out of exhaustion (and indeed, we see this in cookie banners, a notable failure). In the bargain between big sites and individual users, the sites have more <em>bargaining power</em> and therefore users’ interests need to be considered holistically – not on a case-by-case basis where sites can chip away at them. 
A browser embeds what is effectively a global treaty between sites and users.</p>\n\n<p>That’s not to say that Web browsers are perfectly aligned with users’ interests; the fights over DRM and advertising/tracking show that there’s disagreement on what the right balance is, or even on what those interests are. User agents can also just get it wrong; for example, Google <a href=\"https://www.theregister.com/2024/04/01/google_will_delete_data_incognito/\">kept users’ data from private browsing mode in Chrome</a>.</p>\n\n<p>As I’ve argued before, <a href=\"https://www.mnot.net/blog/2026/02/13/no\">Web browsers also show a distinct lack of ambition</a>. While they protect the data and capabilities on your computer, and (mostly) isolate Web sites from each other, they don’t work hard enough to protect the data you give to sites by creating higher-level capabilities.</p>\n\n<p>Despite those shortcomings, Web browsers are a good example of how user agency should be done. There are other platforms that aspire to represent users’ interests – for example, iOS and Android. These, however, are single implementations where all of the decisions are made opaquely by a lone corporation. The checks and balances on their power are very limited and very different to those on Web browsers.</p>\n\n<h3 id=\"why-ai-needs-user-agency\">Why AI Needs User Agency</h3>\n<p>It’s notoriously difficult to predict how Large Language Models are going to change the world in the long term. That said, everyone is excited about the possibility of ‘agentic’ AI, with many breathlessly predicting that it will transform, well, <a href=\"https://www.mckinsey.com/capabilities/quantumblack/our-insights/seizing-the-agentic-ai-advantage\">everything</a>. 
Briefly, the idea is that an LLM with tool capabilities can act on your behalf – i.e., be your agent.</p>\n\n<p>Putting aside the question of where we’re at in the hype cycle, the models of agency being discussed here are relatively simplistic, when you compare them to Web browsers. That’s largely because there’s no single definition of what an AI agent or chatbot does and does not do – it’s just a concept at this point. As a result, unless you write your own agent (or have AI do it for you), you’re using a piece of software that embeds others’ interests without much accountability, checks or balances. While it claims to work for you, you have little assurance that it’s actually doing so.<sup id=\"fnref:3\"><a href=\"#fn:3\" class=\"footnote\" rel=\"footnote\" role=\"doc-noteref\">3</a></sup></p>\n\n<p>That lack of trustworthiness cuts both ways. The data and services that the agent consumes have little visibility into how they will be used, because the agent could be doing <em>anything</em> – unlike a Web browser, which puts some rough guard rails around how a Web site’s data is used and creates expectations about capabilities and behaviour.</p>\n\n<p>In other words, the <strong>lack of a well-defined user agent role in AI</strong> that’s backed up by transparent, public standards that embed checks and balances on both parties to an interaction leaves a gap – it <strong>makes it harder for a marketplace to form</strong>.</p>\n\n<p>That’s not to say that there isn’t a place for agentic AI without a well-defined concept of a user agent role. Agents in limited domains that have assumed trust – like inside enterprises and with their third-party vendors – will likely thrive without one, because the contractual relationships between those parties will regulate their behaviour. 
And of course, we’re already seeing accelerating adoption of AI chatbots for accessing information online, even though they are currently opaque and unconstrained.</p>\n\n<p>However, that will limit the usefulness and application of agentic AI. Using agents written by other people will require a leap of trust similar to that required when using Android or iOS – and it’s not clear whether the companies that will write them will be worthy of that trust, especially if they proliferate. Likewise, online data sources will be reluctant to trust random agents because they don’t know what will happen to the data – the agent could use it for its stated purpose and then dispose of it responsibly, or it could store it or republish it.</p>\n\n<p>Some proposals for AI agents assume that putting agentic code in a TEE or similar ‘jail’ will solve these problems, but that ignores the need to collectively bargain – if agents can ask for intrusive permissions, we’re pretty much guaranteed a world where they constantly bug us for them, and everyone will lose out in that environment, because trust will be regularly abused and thus eroded.</p>\n\n<p>Another alternative is to have AI experiences locked up in proprietary platforms. Consider, however, what kinds of experiences that will lead to:</p>\n\n<blockquote>\n <p>It is no accident that Meta is interested in smart glasses. With built-in cameras, lenses that can display WhatsApp messages and speakers that direct sound straight to the ear, the devices only make it easier for users to share what they are up to on social media and follow what others are doing. For Meta, more time spent on its platforms means more ad revenue. Amazon would likewise be delighted to have its Echo speakers in every home and its glasses on every face to gather more data for its growing ad business and make it even easier to buy from its marketplace. 
And OpenAI would be well served if people ditched their screens and relied instead on a chatbot to handle their interactions with the digital world.</p>\n</blockquote>\n\n<p>– <a href=\"https://www.economist.com/business/2026/01/25/will-the-smartphone-survive-the-ai-age\">The Economist</a></p>\n\n<p>Defining a user agent role for AI agents would also make agents more legible to legal regulation. With such a strong focus on “AI safety” by regulators today, an architecture that assured certain properties could be an important component of a solution in this space, not only creating more competition but also forestalling more onerous legal regulation.</p>\n\n<p>Finally, although allowing AI agents to be <em>anything</em> promises lots of opportunities, placing constraints upon them not only helps users and services build trust in them, it also helps people more easily conceptualise what they do. Simply put, users are confused when technology offers too many choices. It’s understandable that industry doesn’t want to constrain the options for agents at this early point in their development, but at some point that wide open nature is going to hurt more than help. The vast majority of people don’t understand what’s happening when they use computers, nor should they be expected to.</p>\n\n<h3 id=\"what-an-ai-user-agent-might-look-like\">What an AI User Agent Might Look Like</h3>\n<p>The problem with developing an AI UA now is that by nature, it has to put constraints on how AI is used, at a time when everyone is still exploring what AI <em>is</em>. Being an agent means carefully considering consequences and balancing the interests, and this is easy to get wrong.</p>\n\n<p>Consider, for example, the Ring camera. Amazon thought it was unambiguously good to allow the police to use a network of cameras to find ‘bad guys’, and that turned out to be not just naive, but disastrously wrong. 
Allowing people to opt out was not sufficient to balance the interests here – what was lacking was a principled approach to rights in their architecture.</p>\n\n<p>I suspect this is one of the reasons Apple is taking so long to enhance Siri. It’s easy to install OpenClaw and let it wreak havoc on your personal data (promoting what used to be malware into something people install willfully!); it’s a lot harder to build a healthy ecosystem that respects user rights, creates market opportunities, and doesn’t burden the user with an avalanche of choices. If everyone is operating their own isolated and bespoke environment, we lose the collective power of agency – both for users and the market.</p>\n\n<p>It might be that a whole new platform (whether from Apple, OpenClaw, or elsewhere) gets developed, or it might be that AI capabilities are organically added to the Web. Projects like <a href=\"https://a2ui.org\">A2UI</a> also show some small steps in this direction.</p>\n\n<p>In general, though, creating an agent role for AI – with all of the benefits to the user and market that brings – will require constraining the tools that it can call in a fashion that becomes ‘normal’, so that people can depend on how it behaves. That might involve standard tool APIs with appropriate constraints, permission models, sandboxing (TEE or otherwise), and much more.</p>\n\n<p>All of these issues are currently swept under the carpet of ‘security’ in many AI discussions. We need to start talking about them with more nuance. Security is a defensive posture; agency is a functional right.</p>\n\n<p class=\"hero\">But perhaps the most consequential – and hidden – aspect we should be considering is how we get to a common idea of an AI platform – including user agency. Will it be like the major mobile platforms, controlled by private and well-intentioned but self-interested and conflicted actors – with almost inevitable competition and consumer regulation following? 
Or will it be a publicly accountable (and inevitably messy and laggy) process, like the Web?</p>\n\n<div class=\"footnotes\" role=\"doc-endnotes\">\n <ol>\n <li id=\"fn:1\">\n <p>And date, and perhaps other things, depending on <a href=\"https://www.hodinkee.com/articles/introducing-vacheron-constantin-les-cabinotiers-solaria\">how complicated it is</a>. <a href=\"#fnref:1\" class=\"reversefootnote\" role=\"doc-backlink\">↩</a></p>\n </li>\n <li id=\"fn:2\">\n <p>Notable exceptions include <a href=\"https://www.youtube.com/watch?v=NqCCubrky00\">2001: A Space Odyssey</a>. <a href=\"#fnref:2\" class=\"reversefootnote\" role=\"doc-backlink\">↩</a></p>\n </li>\n <li id=\"fn:3\">\n <p>Beyond that provided by legal protections such as contract and product liability. Comparing that to the regulation provided by architecture is something I’ll address in another post. <a href=\"#fnref:3\" class=\"reversefootnote\" role=\"doc-backlink\">↩</a></p>\n </li>\n </ol>\n</div>",
"image": null,
"media": [],
"authors": [
{
"name": "Mark Nottingham",
"email": null,
"url": "https://mnot.net/personal/"
}
],
"categories": [
{
"label": "Internet and Web",
"term": "Internet and Web",
"url": null
}
]
},
{
"id": "https://mnot.net/blog/2026/using_ai",
"title": "Using AI to Evaluate Internet Standards (Part Two)",
"description": "Standards work is notoriously hard to track. Let’s explore if grounding AI in working group records can make that history more accessible.",
"url": "https://mnot.net/blog/2026/using_ai",
"published": null,
"updated": "2026-03-25T00:00:00.000Z",
"content": "<p class=\"intro\">I’ve previously looked at <a href=\"https://www.mnot.net/blog/2025/06/04/using_ai\">using AI as a tool to evaluate technical standards efforts</a> – basically, asking commercially available chatbots what they think. However, “AI” is more than off-the-shelf, general-purpose chatbots. Can we do better by grounding the model in a specific context?</p>\n\n<p>I’ve been looking for ways to use <a href=\"https://notebooklm.google.com\">NotebookLM</a> for a while: grounding a chatbot in a specific set of documents allows you to interact with them in a genuinely new way.</p>\n\n<p>The breakthrough question for me was simple: What if those documents were the records of a working group? Thanks to record-keeping requirements, meetings need to keep minutes, document drafts are available, and often groups keep additional information like issue lists and meeting transcripts.</p>\n\n<p>Feed all of that into NotebookLM and you can effectively chat with the history of a standards effort – asking about why a particular choice was made, who participated, what objections came up, and how a specification evolved.</p>\n\n<p class=\"hero\">I suspect this capability could be significant, precisely because the barriers to entry for tracking and understanding standards work are so high. 
There is simply too much going on — too many emails, issues, and drafts — for most people to follow.</p>\n\n<p>If successful, this technique might help make standards efforts more legible to:</p>\n\n<ul>\n <li><strong>New or casual participants</strong>, who currently face a “wall of text” when trying to catch up on years of debate.</li>\n <li><strong>Product managers and developers</strong>, who need to understand the intent behind a specification, not just the syntax.</li>\n <li><strong>Civil society and policymakers</strong>, for whom the technical archives are often effectively opaque.</li>\n</ul>\n\n<h3 id=\"ai-preferences\">AI Preferences</h3>\n\n<p>My first go at this technique was in a working group I chair, <a href=\"https://ietf-wg-aipref.github.io\">AI Preferences</a>. We needed a way to get new and casual participants up to speed on discussions, so that we didn’t need to keep repeating the same arguments.</p>\n\n<p>Here’s <a href=\"https://notebooklm.google.com/notebook/37add563-249f-442e-a604-1f8d8c1bc113\">the notebook</a> I created.<sup id=\"fnref:1\"><a href=\"#fn:1\" class=\"footnote\" rel=\"footnote\" role=\"doc-noteref\">1</a></sup> I asked it to summarise the arguments against proposals for a <a href=\"https://github.com/ietf-wg-aipref/drafts/wiki/Use-Proposals\">“use” term</a> and a <a href=\"https://github.com/ietf-wg-aipref/drafts/wiki/Search-Proposals\">“search” term</a> in the vocabulary.</p>\n\n<p>Privately, I got feedback from new participants that these were very useful – and, critically, I was able to create them without injecting my own biases.</p>\n\n<h3 id=\"geopriv\">GEOPRIV</h3>\n\n<p>Another test case is the now-finished IETF work on <a href=\"https://datatracker.ietf.org/wg/geopriv/about/\">Geolocation Privacy</a>. 
I wasn’t involved in this group, but have long heard my IETF colleagues whisper about it in hushed tones; it didn’t succeed, and caused a lot of pain on the way there.</p>\n\n<p>After gathering the relevant documents and dragging them <a href=\"https://notebooklm.google.com/notebook/083c8968-7322-495d-aeb1-99bf864a2374\">into a notebook</a>,<sup id=\"fnref:1:1\"><a href=\"#fn:1\" class=\"footnote\" rel=\"footnote\" role=\"doc-noteref\">1</a></sup> I asked:</p>\n\n<blockquote>\n <p>Why did GEOPRIV fail?</p>\n</blockquote>\n\n<p>Here’s the <a href=\"https://docs.google.com/document/d/1TKBwpgC9RnlX2_0ux4k0Lm47o2BgZ3Dr9rh9UlXqMC8/edit?usp=sharing\">full response</a>. <a href=\"https://au.linkedin.com/in/martinthomson\">Martin Thomson</a> (who was intimately involved in that work) reviewed that answer and said:</p>\n\n<blockquote>\n <p>The privacy part is broadly correct. The whole on-behalf-of arrangement did lead to some fairly bitter fights. […] Fights were common. The part about wars is entirely accurate. I’m not sure about the over-engineering part, though maybe that relates to the privacy aspect, which is fair. The final thing about lack of commercial success is broadly right, modulo successful deployments for emergency services geolocation.</p>\n\n <p>So I’d say that this is maybe 80%.</p>\n</blockquote>\n\n<h3 id=\"a-new-tool\">A New Tool</h3>\n\n<p>The hard part of all of this is getting all of the documents together in one place to feed into NotebookLM. 
To make that easier, at least for IETF groups, I<sup id=\"fnref:2\"><a href=\"#fn:2\" class=\"footnote\" rel=\"footnote\" role=\"doc-noteref\">2</a></sup> created a new tool, <a href=\"https://pypi.org/project/ietf-notebook/\">ietf-notebook</a>.</p>\n\n<p>You can install it using <a href=\"https://pipx.pypa.io/latest/\">pipx</a>:</p>\n\n<blockquote>\n <p>pipx install ietf-notebook</p>\n</blockquote>\n\n<p>Then, use it to gather all of a group’s drafts, RFCs, meeting minutes and transcripts, its charter, and optionally its GitHub issues into a directory, ready for dragging into a new notebook, so you can chat with that group’s history.</p>\n\n<p>It’s still rough, so bug reports, suggestions, and improvements are most welcome. In my experience, it takes less than a minute to gather the documents for most groups, so you can be chatting with a group in almost no time.</p>\n\n<p>If you want to see a demo first, check out the notebooks for <a href=\"https://notebooklm.google.com/notebook/37add563-249f-442e-a604-1f8d8c1bc113\">AIPREF</a>, <a href=\"https://notebooklm.google.com/notebook/f998edaf-e5c5-4bb6-994e-b439dfa436f5\">DIEM</a>, and <a href=\"https://notebooklm.google.com/notebook/083c8968-7322-495d-aeb1-99bf864a2374\">GEOPRIV</a>.<sup id=\"fnref:1:2\"><a href=\"#fn:1\" class=\"footnote\" rel=\"footnote\" role=\"doc-noteref\">1</a></sup></p>\n\n<div class=\"footnotes\" role=\"doc-endnotes\">\n <ol>\n <li id=\"fn:1\">\n <p>You’ll need to be logged into Google to use these notebooks. <a href=\"#fnref:1\" class=\"reversefootnote\" role=\"doc-backlink\">↩</a> <a href=\"#fnref:1:1\" class=\"reversefootnote\" role=\"doc-backlink\">↩<sup>2</sup></a> <a href=\"#fnref:1:2\" class=\"reversefootnote\" role=\"doc-backlink\">↩<sup>3</sup></a></p>\n </li>\n <li id=\"fn:2\">\n <p>OK, Gemini. <a href=\"#fnref:2\" class=\"reversefootnote\" role=\"doc-backlink\">↩</a></p>\n </li>\n </ol>\n</div>",
"image": null,
"media": [],
"authors": [
{
"name": "Mark Nottingham",
"email": null,
"url": "https://mnot.net/personal/"
}
],
"categories": [
{
"label": "Standards",
"term": "Standards",
"url": null
},
{
"label": "Internet and Web",
"term": "Internet and Web",
"url": null
}
]
},
{
"id": "https://mnot.net/blog/2026/open_systems",
"title": "The Internet Isn’t Facebook: How Openness Changes Everything",
"description": "Openness makes the Internet harder to govern — but also makes it resilient, innovative, and difficult to capture. Let's look at how the openness of the Internet both defines it and ensures its success.",
"url": "https://mnot.net/blog/2026/open_systems",
"published": null,
"updated": "2026-02-20T00:00:00.000Z",
"content": "<p class=\"intro\">“Open” tends to get thrown around a lot when talking about the Internet: Open Source, <a href=\"https://www.mnot.net/blog/2024/07/05/open_internet_standards\">Open Standards</a>, Open APIs. However, one of the most important senses of the Internet’s openness doesn’t get discussed as much: its openness <em>as a system</em>. It turns out this has profound effects on both the Internet’s design and how it might be regulated.</p>\n\n<p>This critical aspect of the Internet’s architecture needs to be understood more now than ever. For many, digital sovereignty is top-of-mind in the geopolitics of 2026, but some conceptions of it treat openness as a bug, not a feature. The other hot topic – regulation to address legitimately-perceived harms on the Internet – can put both policy goals and the value we get from the Internet at risk if it’s undertaken in a way that doesn’t account for the openness of the Internet. Properly utilised, though, the power of openness can actually help democracies contribute to the Internet (and other technologies like AI) in a constructive way that reinforces their shared values.</p>\n\n<h3 id=\"open-and-shut\">Open and Shut</h3>\n\n<p>Most often, people think and work within <em>closed systems</em> – those whose boundaries are fixed, where internal processes can be isolated from external forces, and where power is concentrated hierarchically. That single scope can still embed considerable complexity, but the assumptions that its closed nature allows make certain skills, tools, and mindsets advantageous. This simplification helps compartmentalise effects and reduces interactions; it’s easier when you don’t have to deal with things you don’t (and can’t) know, much less control.</p>\n\n<p>Many things we interact with daily are closed – for example, a single company, a project group, or even a legal jurisdiction. 
The Apple App Store, air traffic control, bank clearing systems, and cable television networks are closed; so are many of the emerging AI ecosystems.</p>\n\n<p>The Internet is not like that.</p>\n\n<p>That’s because it’s not possible to know or control all of the actors and forces that influence and interact with the Internet. New applications and networks appear daily, without administrative hoops; often, this is referred to as “<a href=\"https://www.internetsociety.org/blog/2014/04/permissionless-innovation-openness-not-anarchy/\">permissionless innovation</a>,” which allowed things like the Web and real-time video to be built on top of the network without asking telecom operators for approval. New protocols and services are constantly proposed, implemented and deployed – sometimes through an <abbr title=\"Standards Developing Organisation\">SDO</abbr> like the <abbr title=\"Internet Engineering Task Force\">IETF</abbr>, but often without any formal coordination.</p>\n\n<p>This is an open system, and it’s important to understand how that openness constrains the nature of what’s possible on the Internet. What works in a closed system falls apart when you try to apply it to the Internet. Openness as a system makes introducing new participants and services very easy – and that’s a huge benefit – but that open nature makes other aspects of managing the ecosystem very different (and sometimes difficult). Let’s look at a few.</p>\n\n<h3 id=\"designing-for-openness\">Designing for Openness</h3>\n\n<p>Designing an Internet service like an online shop is easy if you assume it’s a closed ecosystem with an authority that ‘runs’ the shop. 
Yes, you have to deal with accounts, and payments, and abuse, and all of the other aspects, but the issues are known and can be addressed with the right amount of capital and a set of appropriate professionals.</p>\n\n<p>In contrast, designing an open trading ecosystem where there is no single authority lurking in the background and making sure everything runs well is an entirely different proposition. You need to consider how all of the components will interact and at the same time ensure that none is inappropriately dominated by a single actor or even a small set, unless there are appropriate constraints on their power. You need to make sure that the amount of effort needed to join the system is low, while at the same time fighting the abusive behaviours that leverage that low barrier, such as spam.</p>\n\n<p class=\"callout\">This is why regulatory efforts that are focused on reforming currently closed systems – “opening them up” by compelling them to expose APIs and allow competitors access to their systems – are unlikely to be successful, because those platforms are designed with assumptions that you can’t take for granted when building an open system. I’ve <a href=\"https://www.mnot.net/blog/2024/11/29/platforms\">written previously</a> about Carliss Baldwin’s excellent work in this area, primarily from an economic standpoint. An open system is not just a closed one with a few APIs grafted onto it.</p>\n\n<p>For example, you’re likely to need a reputation system for vendors and users, but it can’t rely on a single authority making judgment calls about how to assign reputation, handle disputes, and so forth. Instead, you’ll want to make it more modular, where different reputation systems can compete. That’s a very different design task, and it is undoubtedly harder to achieve a good outcome.</p>\n\n<p>At the same time, an open system like the Internet needs to be more pessimistic in its assumptions about who is using it. 
While closed systems can take drastic steps like excluding bad actors from them, this is much more difficult (and problematic) in an open system. For example, a closed shopping site will have a definitive list of all of its users (both buyer and seller) and what they have done, so it can ascertain how trustworthy they are based upon that complete view. In an open system, there is no such luxury – each actor only has a partial view of the system.</p>\n\n<h3 id=\"introducing-change-in-open-systems\">Introducing Change in Open Systems</h3>\n\n<p>An operator of a proprietary, closed service like Amazon, Google, or Facebook has a view of its entire state and is able to deploy changes across it, even if they break assumptions its users have previously relied upon. Their privileged position gives them this ability, and even though these services run on top of the Internet, they don’t inherit its openness.</p>\n\n<p>In contrast, an open system like e-mail, federated messaging, or Internet routing is much harder to evolve, because you can’t create a list of who’s implementing or using a protocol with any certainty; you can’t even know all of the <em>ways</em> it’s being used. This makes introducing changes tricky; as is often said in the <abbr title=\"Internet Engineering Task Force\">IETF</abbr>, <strong>you can’t have a protocol ‘flag day’ where everyone changes how they behave at the same time</strong>. Instead, mechanisms for gradual evolution (extensibility and versioning) need to be carefully built into the protocols themselves.</p>\n\n<p>The Web is another example of an open system.<sup id=\"fnref:1\"><a href=\"#fn:1\" class=\"footnote\" rel=\"footnote\" role=\"doc-noteref\">1</a></sup> No one can enumerate all of the Web servers in the world – there are just too many, some hidden behind firewalls and logins. There are whole social networks and commerce sites that you’ve never heard of in other parts of the world. 
While search engines make us feel like we see the whole Web (and have every incentive to make us believe that), it’s a small fraction of the real thing that misses the so-called ‘deep’ Web. This vastness is why browsers have to be so conservative in introducing changes, and why we have to be so careful when we update the HTTP protocol.</p>\n\n<h3 id=\"governing-open-systems\">Governing Open Systems</h3>\n\n<p>Openness also has significant implications for governance. Command-and-control techniques that work well when governing closed systems are ineffective on an open one, and can often be counterproductive.</p>\n\n<p>At the most basic level, this is because there is no single party to assign responsibility to in an open system – its governance structure is polycentric (i.e., has multiple and often diffuse centres of power). Compounding that effect is the fact that large open systems like the Internet span multiple jurisdictions, so a single jurisdiction is always going to be playing “whack-a-mole” if it tries to enforce compliance on one party. As a result, decisions in open systems tend to take much more time and effort than anticipated if you’re used to dealing with closed, hierarchical systems.</p>\n\n<p>On the Internet, another impact of openness is seen in the tendency to create “building block” technology components that focus on enabling communication, not limiting it. That means that they are designed to support broad requirements from many kinds of users, not constrain them, and that they’re composed into layers which are distinct and separate. So trying to use open protocols to regulate behaviour of Internet users is often like trying to pin spaghetti to the wall.</p>\n\n<p>Consider, for example, the UK’s attempts to regulate user behaviour by regulating lower-layer general-purpose technologies like <abbr title=\"Domain Name System\">DNS</abbr> resolvers. 
Yes, they can make it more difficult for those using common technology to do certain things, but actually stopping such behaviour is very hard, due to the flexible, layered nature of the Internet; determined people can do the work and use alternative <abbr title=\"Domain Name System\">DNS</abbr> servers, encrypted <abbr title=\"Domain Name System\">DNS</abbr>, <abbr title=\"Virtual Private Networks\">VPNs</abbr>, and other technologies to work around filters. This is considered a feature of a global communications architecture, not a bug.</p>\n\n<p>That’s not to say that all Internet regulation is a fool’s errand. The EU’s Digital Markets Act is targeting a few well-identified entities who have (very successfully) built closed ecosystems on top of the open Internet. At least from the perspective of Internet openness, that isn’t problematic (and indeed might result in more openness).</p>\n\n<p>On the other hand, the Australian eSafety Regulator’s effort to improve online safety – itself a goal not at odds with Internet openness – falls on its face by <a href=\"https://www.mnot.net/blog/2022/09/11/esafety-industry-codes\">applying its regulatory mechanisms to <em>all</em> actors on the Internet</a>, not just a targeted few. This is an extension of the “Facebook is the Internet” mindset – acting as if the entire Internet is defined by a handful of big tech companies. Not only does that create significant injustice and extensive collateral damage, it also creates the conditions for making that outcome more likely (surely a competition concern). While these closed systems might be the most legible part of the Internet to regulators, they shouldn’t be mistaken for the Internet itself.</p>\n\n<p>Similarly, blanket requirements to expose encrypted messages have the effect of ‘chasing’ criminals to alternative services, making their activity even less legible to authorities and severely impacting the security and rights of law-abiding citizens in the process. 
That’s because there is no magical list of all of the applications that use encryption on the Internet: instead, regulators end up playing whack-a-mole. Cryptography relies on mathematical concepts realised in open protocols; treating encryption as a switch that companies can simply turn off misses the point.</p>\n\n<p>None of this is new or unique to the Internet; cross-border institutions are by nature open systems, and these issues come up often in discussions of global public goods (whether it is oceans, the climate, or the Internet). They thrive under governance that focuses on collaboration, diversity, and collective decision-making. For those that are used to top-down, hierarchical styles of governance, this can be jarring, but it produces systems that are far more resilient and less vulnerable to capture.</p>\n\n<h3 id=\"why-the-internet-must-stay-open\">Why the Internet Must Stay Open</h3>\n\n<p>If you’ve read this far, you might wonder why we bother: if openness brings so many complications, why not just change the Internet so that it’s a simpler, closed system that is easier to design and manage? Certainly, it’s <em>possible</em> for large, world-spanning systems to be closed. For example, both the international postal and telephony systems are effectively closed (although the latter has opened up a bit). They are reliable and successful (for some definition of success).</p>\n\n<p>I’d argue that those examples are both highly constrained and well-defined; the services they provide don’t change much, and for the most part new participants are introduced only on one ‘side’ – new end users. 
Keeping these networks going requires considerable overhead and resources from governments around the world, both internally and at the international coordination layer.</p>\n\n<p>The Internet (in a broader definition) is not nearly so constrained, and the bulk of its value is defined by the ability to introduce new participants of all kinds (not just users) <em>without</em> permission or overhead. This isn’t just a philosophical preference; it’s embedded in the architecture itself via the <a href=\"https://en.wikipedia.org/wiki/End-to-end_principle\">end-to-end principle</a>. Governing major aspects of the Internet by international treaty is simply unworkable, and if the outcome of that agreement is to limit the ability of new services or participants to be introduced (e.g., “no new search engines without permission”), it’s going to have a material effect on the benefits that humanity has come to expect from the Internet. In many ways, it’s just another pathway to <a href=\"https://www.rfc-editor.org/rfc/rfc9518.html\">centralization</a>.</p>\n\n<p>Again, all of this is not to say that closed systems on <em>top</em> of the Internet shouldn’t be regulated – just that it needs to be done in a way that’s mindful of the open nature of the Internet itself. The guiding principle is clear: regulate the endpoints (applications, hosts, and specific commercial entities), not the transit mechanisms (the protocols and infrastructure). From what’s happened so far, it looks like many governments understand that, but some are still learning.</p>\n\n<p>Likewise, the many harms associated with the Internet need both technical and regulatory solutions; botnets, <abbr title=\"Distributed Denial of Service Attack\">DDoS</abbr>, online abuse, “cybercrime” and much more can’t be ignored. 
However, solutions to these issues must respect the open nature of the Internet; even though their impact on society is heavy, the collective benefits of openness – both social and economic – <em>still</em> outweigh them; low barriers to entry ensure global market access, drive innovation, and prevent infrastructure monopolies from stifling competition.</p>\n\n<p>Those points acknowledged, I and many others are concerned that regulating ‘big tech’ companies may have the unintended side effect of ossifying their power – that is, blessing their place in the ecosystem and making it harder for more open systems to displace them. This concentration of power isn’t an accident; commercial entities have a strong economic incentive to build proprietary walled gardens on top of open protocols to extract rent. For example, we’d much rather see global commerce based upon open protocols, well-thought-out legal protections, and cooperation, rather than overseen (and exploited) by the Amazon/eBay/Temu/etc. gang.</p>\n\n<p>Of course, some jurisdictions can and will try to force certain aspects of the Internet to be closed, from their perspective. They may succeed in achieving their local goals, but such systems won’t offer the same properties as the Internet. Closed systems can be bought, coerced, lobbied into compliance, or simply fail: their hierarchical nature makes them vulnerable to failures of leadership. The Internet’s openness makes it harder to maintain and govern, but also makes it far more resilient and resistant to capture.</p>\n\n<p>Openness is what makes the Internet the Internet. 
It needs to be actively pursued if we want the Internet to continue providing the value that society has come to depend upon from it.</p>\n\n<p><em>Thanks to <a href=\"https://www.komaitis.org\">Konstantinos Komaitis</a> for his suggestions.</em></p>\n\n<div class=\"footnotes\" role=\"doc-endnotes\">\n <ol>\n <li id=\"fn:1\">\n <p>Albeit one that is the foundation for a number of very large closed systems. <a href=\"#fnref:1\" class=\"reversefootnote\" role=\"doc-backlink\">↩</a></p>\n </li>\n </ol>\n</div>",
"image": null,
"media": [],
"authors": [
{
"name": "Mark Nottingham",
"email": null,
"url": "https://mnot.net/personal/"
}
],
"categories": [
{
"label": "Tech Regulation",
"term": "Tech Regulation",
"url": null
},
{
"label": "Internet and Web",
"term": "Internet and Web",
"url": null
}
]
},
{
"id": "https://mnot.net/blog/2026/no",
"title": "The Power of 'No' in Internet Standards",
"description": "The voluntary nature of Internet standards means that the biggest power move may be to avoid playing the game. Let's take a look.",
"url": "https://mnot.net/blog/2026/no",
"published": null,
"updated": "2026-02-13T00:00:00.000Z",
"content": "<p class=\"intro\">Fairly regularly, I hear someone ask whether a particular company is expressing undue amounts of power in Internet standards, seemingly with the implication that they’re getting away with murder (or at least the Internet governance equivalent).</p>\n\n<p>While it’s not uncommon for powerful entities to try to steer the direction that the work goes in, they don’t have free rein: the <a href=\"https://www.mnot.net/blog/2024/07/05/open_internet_standards\">open nature of Internet standards processes</a> assures that their proposals are subjected to considerable scrutiny from their competitors, technical experts, civil society representatives, and on occasion, governments. Of course there are counterexamples, but in general that’s not something I worry about <em>too</em> much.</p>\n\n<p>The truth is that there is very little power expressed in standards themselves. Instead, it resides in the implementation, deployment, and use of a particular technology, no matter whether it was standardised in a committee or is a <em>de facto</em> standard. Open standards processes provide some useful properties, but they are <strong>not</strong> a guarantee of quality or suitability and there are many standards that have zero impact.</p>\n\n<p>That implication of <a href=\"https://www.mnot.net/blog/2024/03/13/voluntary\">voluntary adoption</a> is why I believe that <strong>the most undiluted expression of power in Internet standards is saying ‘no’</strong> – in particular, when a company declines to participate in or implement a specification, feature, or function. Especially if that company is central to a ‘choke point’ with already embedded power due to adoption of related technologies like an Operating System or Web browser. In the most egregious cases, this is effectively saying ‘we want that to stay proprietary.’</p>\n\n<p>Sometimes the no is explicit. 
I’ve heard an engineer from a Very Big Tech Company publicly declare that their product would not implement a specification, with the very clear implication that the working group shouldn’t bother adopting the spec as a result. That’s using their embedded power to steer the outcome, hard.</p>\n\n<p>Usually though, it’s a lot more subtle. Concerns are raised. Review of a specification is de-prioritised. Maybe a standard is published, but it never gets to implementation. Or maybe the scope of the standard or its implementation is watered down too much to deliver something actually interoperable or functional.</p>\n\n<p>To be very clear, engineers often have very good reasons for declining to implement something. There are a <em>lot</em> of bad ideas out there, and Internet engineering imposes a lot of constraints on what is possible. Proposals have to run a gamut of technical reviews, architectural considerations, and carefully staked-out fiefdoms to see the light of day. Proponents are often convinced of the value of their contributions, only to find that they fail to get traction for reasons that can be hard to understand. The number of people who understand the nuances is small: usually, just a handful in any given field.</p>\n\n<p>But when the ‘no’ comes about because it doesn’t suit the agendas of powerful parties, something is wrong. Even people who want to see a better Internet reduce their expectations, because they lose faith in the possibility of success.</p>\n\n<h3 id=\"a-failure-of-ambition\">A Failure of Ambition</h3>\n<p>To me, the evidence of this phenomenon is clearest in how little ambition we’re seeing from the Web. 
The Web should be a constantly rising sea of commoditised technology, cherry picking successful proprietary applications – marketplaces like Amazon and eBay, social networks like LinkedIn and Facebook, chat on WhatsApp and iMessage, search on Google, and so on – and reinventing them as public good oriented features without a centralised owner. Robin Berjon dives into this view of the Web in <a href=\"https://berjon.com/bigger-browser/\">You’re Going to Need a Bigger Browser</a>.<sup id=\"fnref:1\"><a href=\"#fn:1\" class=\"footnote\" rel=\"footnote\" role=\"doc-noteref\">1</a></sup></p>\n\n<p>Instead, most current Web standards activity focuses on incremental, small features: tweaking around the edges and creating new ‘low level’ APIs that proprietary things can be built upon. This approach was codified a while back in the ‘<a href=\"https://github.com/extensibleweb/manifesto\">Extensible Web Manifesto</a>’, which was intended to let the community focus its resources and let a ‘thousand flowers bloom’, but the effect has been to allow silo after silo to be built upon the Web, solidifying its role as the greatest centralisation technology ever.</p>\n\n<p>There are small signs of life. Recent features like Web Payments, federated identity and the various (somewhat) decentralised social networking protocols show promise for extending the platform in important ways, but they’re exceptional, not the rule.</p>\n\n<h3 id=\"creating-upward-pressure\">Creating Upward Pressure</h3>\n<p>How, then, can we create higher-level capabilities that serve society but aren’t proprietary?</p>\n\n<p>Remember that <a href=\"https://www.mnot.net/blog/2024/03/13/voluntary\">the voluntary nature of Internet standards</a> is a feature – it allows us to fail by using the marketplace as a proving function. Forcing tech companies to implement well-intentioned specifications that aren’t informed by experience is a recipe for broken, bad tech. 
Likewise, ‘standardising harder’ isn’t going to create better outcomes: the real influence of what standards do is in their implementation and adoption.</p>\n\n<p>What matters is not writing specifications; it’s getting to a place where it’s not possible for private concerns to express inappropriate power over the Internet. Or as Robin <a href=\"https://berjon.com/digital-sovereignty/\">articulates</a>: “What matters is who has the structural power to deploy the standards they want to see and avoid those they dislike.” To me, that suggests a few areas where progress can be made:</p>\n\n<p class=\"hero\">First, we should remember that the market is the primary force shaping companies’ behaviour right now. It used to be that paid services like Proton were <a href=\"https://balkaninsight.com/2025/04/01/taking-aim-at-big-tech-proton-ceo-warns-democracy-depends-on-privacy/\">mocked for competing with free Google services</a>. Now they’re viable because people realised the users are the product. If we want privacy-respecting, decentralised solutions and are willing to pay for them, that changes the incentives for companies, big and small. However, the solutions need to be bigger than any one company.</p>\n\n<p class=\"hero\">Second, where the market fails, competition regulators can and should step in. They’ve been increasingly active recently, but I’d like to see them go further: to provide <strong>stronger guidelines for open standards processes</strong>, and to give companies stronger incentives to participate in and adopt open standards, such as a <strong>presumption that adopting a specification that goes through a high-quality process is not anticompetitive</strong>. Doing so would create natural pressure for companies to be interoperable (reducing those choke points) while also being more subject to public and expert review.</p>\n\n<p class=\"hero\">Third, private corporations are not the only source of innovation in the world. 
In fact, there are <a href=\"https://www.hbs.edu/faculty/Pages/item.aspx?num=36972\">great arguments</a> that open collaboration is a much deeper source of innovation in the modern economy. My interest turns towards the possibilities of public sponsorship for development of the next generation of Internet technology: what’s now being called <strong>Digital Public Infrastructure</strong>. There are many challenging issues in this area – especially regarding governance and, frankly, viability – but if the needle can be threaded and the right model found, the benefits to the people who use the Internet could be massive.</p>\n\n<div class=\"footnotes\" role=\"doc-endnotes\">\n <ol>\n <li id=\"fn:1\">\n <p>Yes, as discussed before there are <a href=\"https://www.mnot.net/blog/2024/11/29/platforms\">things that are harder to do without a single-company chokepoint</a>, but that shouldn’t preclude <em>trying</em>. <a href=\"#fnref:1\" class=\"reversefootnote\" role=\"doc-backlink\">↩</a></p>\n </li>\n </ol>\n</div>",
"image": null,
"media": [],
"authors": [
{
"name": "Mark Nottingham",
"email": null,
"url": "https://mnot.net/personal/"
}
],
"categories": [
{
"label": "Tech Regulation",
"term": "Tech Regulation",
"url": null
},
{
"label": "Standards",
"term": "Standards",
"url": null
},
{
"label": "Internet and Web",
"term": "Internet and Web",
"url": null
}
]
},
{
"id": "https://mnot.net/blog/2026/open_web",
"title": "Some Thoughts on the Open Web",
"description": "The Open Web means several things to different people, depending on context, but recently discussions have focused on the Web's Openness in terms of access to information -- how easy it is to publish and obtain information without barriers there.",
"url": "https://mnot.net/blog/2026/open_web",
"published": null,
"updated": "2026-01-20T00:00:00.000Z",
"content": "<p class=\"intro\">“The Open Web” means several things to different people, depending on context, but recently discussions have focused on the Web’s Openness in terms of <strong>access to information</strong> -- how easy it is to publish and obtain information without barriers there.</p>\n\n<p>David Schinazi and I hosted a pair of ad hoc sessions on this topic at the last IETF meeting in Montreal and the subsequent W3C Technical Plenary in Kobe; you can see the <a href=\"https://docs.google.com/document/d/1WaXDfwPP6olY-UVQxDZKNkUyqvmHt-u4kREJW4ys6ms/edit?usp=sharing\">notes and summaries from those sessions</a>. This post contains my thoughts on the topic so far, after some simmering.</p>\n\n<h3 id=\"the-open-web-is-amazing\">The Open Web is Amazing</h3>\n\n<p>For most of human history, it’s been difficult to access information. As an average citizen, you had to work pretty hard to access academic texts, historical writings, literature, news, public information, and so on. Libraries were an amazing innovation, but locating and working with the information there was still a formidable challenge.</p>\n\n<p>Likewise, publishing information for broad consumption required resources and relationships that were unavailable to most people. Gutenberg famously broke down some of those barriers, but many still remained: publishing and distributing books (or articles, music, art, films) required navigating extensive industries of gatekeepers, and often insurmountable costs and delays.</p>\n\n<p>Tim Berners-Lee’s invention cut through all of that; it was now possible to communicate with the whole world at very low cost and almost instantaneously. 
Various media industries were disrupted (but not completely displaced) by this innovation, and reinterpreted roles for intermediaries (e.g., search engines for librarians, online marketplaces for ‘brick and mortar’ shops) were created.</p>\n\n<p>Critically, a norm was also created: an expectation that content was easy to access and didn’t require paying or logging in. This was not enforced, and it was not always honoured: there were still subscription sites, and that’s OK, but they didn’t see the massive network effects that hyperlinks and browsers brought.</p>\n\n<p>It is hard to overstate the benefits of this norm. Farmers in developing countries now have easy access to guidelines and data that help their crops succeed. Students around the world have access to resources that were unimaginable even a few decades ago. They can also contribute to that global commons of content, benefiting others as they build a reputation for themselves.</p>\n\n<p>The Open Web is an amazing public good, both for those who consume information and those who produce it. By reducing costs and friction on both sides, it allows people all over the world to access and create information in a way -- and with an ease -- that would have been unimaginable to our predecessors. It’s worth fighting for.</p>\n\n<h3 id=\"people-have-different-motivations-for-opening-content\">People Have Different Motivations for Opening Content</h3>\n\n<p>We talk about “The Open Web” in the singular, but in fact there are many motivations for making content available freely online.</p>\n\n<p>Some people consciously make their content freely available on the Web because they want to contribute to the global commons, to help realise all of the benefits described above.</p>\n\n<p>Many don’t, however.</p>\n\n<p>Others do it because they want to be discovered and build a reputation. Or because they want to build human connections. Or because they want revenue from putting ads next to the content. 
Or because they want people to try their content out and then subscribe to it on the less-than-open Web.</p>\n\n<p>Most commonly, it’s a blend of many (or even all) of these motivations.</p>\n\n<p>Discussions of the Open Web need to consider all of them distinctly -- what is changing in their environments, and what might encourage or discourage different kinds of Open Web publishers. Only focusing on some motivations or creating “purity tests” for content isn’t helpful.</p>\n\n<h3 id=\"there-are-many-degrees-of-open\">There are Many Degrees of “Open”</h3>\n\n<p>Likewise, there are many degrees of “open.” While some Open Web content doesn’t come with any strings, much of it does. You might have to allow tracking for ads. While an article might be available to search engines (to drive traffic), you might have to register for an account to view the content as an individual.</p>\n\n<p>There are serious privacy considerations associated with both of these, but those concerns should be considered as distinct from those regarding open access to information. People sometimes need to get a library card to access information at their local library (in person or online), but that doesn’t make the information less open.</p>\n\n<p class=\"callout\">One of the most interesting assertions at the meetings we held was about advertising-supported content: that it was <em>more</em> equitable than “micro-transactions” and similar pay-to-view approaches, because it makes content available to those who would otherwise not be able to afford it.</p>\n\n<p>At the same time, these ‘small’ barriers – for example, requirements to log in after reading three articles – add up, reducing the openness of the content. 
If the new norm is that everyone has to log in everywhere to get Web content (and we may be well on our way to that), the Open Web suffers.</p>\n\n<p>Similarly, some open content is free to all comers and can be reused at will, while other examples have technical barriers (such as bot blockers or other selective access schemes) and/or legal barriers (namely, copyright restrictions).</p>\n\n<h3 id=\"it-has-to-be-voluntary\">It Has to be Voluntary</h3>\n\n<p>Everyone who publishes on the Open Web does so because they want to – because the benefits they realise (see above) outweigh any downsides.</p>\n\n<p>Conversely, any content not on the Open Web is absent because its owner has made the judgement that opening it up is not worthwhile for them. They cannot be forced to “open up” that content -- they can only be encouraged.</p>\n\n<p>Affordances and changes in infrastructure, platforms, and other aspects of the ecosystem -- sometimes realised in technical standards, sometimes not -- might change that incentive structure and create the conditions for more or less content on the Open Web. They cannot, however, be forced or mandated.</p>\n\n<p>To me, this means that attempts to coerce different parties into desired behaviors are unlikely to succeed – they have to <em>want</em> to provide their content. That includes strategies like withholding capabilities from them; they’ll just go elsewhere to obtain them, or put their content behind a paywall.</p>\n\n<h3 id=\"its-changing-rapidly\">It’s Changing Rapidly</h3>\n\n<p>We’re talking about the Open Web now because of the introduction of AI -- a massive disruption to the incentives of many content creators and publishers, because AI both leverages their content (through scraping for training) and competes with it (because it is generative).</p>\n\n<p>For those who opened up their content because they wanted to establish reputation and build connectivity, this feels exploitative. 
They made their content available to benefit people, and it turns out that it’s benefiting large corporations who claim to be helping humanity but have failed to convince many.</p>\n\n<p>For those who want to sell ads next to their content or entice people to subscribe, this feels like betrayal. Search engines built an ecosystem that benefited publishers and the platforms, but publishers see those same platforms as continually taking more value from the relationship -- as seen in efforts to force intermediation like AMP, and now AI, where sites get drastically reduced traffic in exchange for nothing at all.</p>\n\n<p>And so people are blocking bots, putting up paywalls, changing business models, and yanking their content off the Open Web. The commons is suffering because technology (which always makes <em>something</em> easier) now makes content creation <em>and</em> consumption easier, so long as you trust your local AI vendor.</p>\n\n<p>This change is unevenly distributed. There are still people happily publishing open content in formats like RSS, which doesn’t facilitate tracking or targeting, and is wide open to scraping and reuse. That said, there are large swathes of content that are disappearing from the Open Web because publishing them is no longer viable; the balance of incentives has changed.</p>\n\n<h3 id=\"open-is-not-free-to-provide\">Open is Not Free to Provide</h3>\n\n<p>Information may be a non-rivalrous good, but that doesn’t mean it’s free to provide. 
The people who produce it need to support themselves.</p>\n\n<p>That doesn’t mean that their interests dominate all others, nor that the structures that have evolved are the best (or even a good) way to assure that they can do so; these are topics better suited for copyright discussions (where there is a very long history of such considerations being debated).</p>\n\n<p>Furthermore, on a technical level serving content to anyone who asks for it on a global scale might be a commodity service now -- and so very inexpensive to do, in some cases -- but it’s not free, and the costs add up at scale. These costs -- again, alongside the perceived extractive nature of the relationship -- are causing some to <a href=\"https://social.kernel.org/notice/B2JlhcxNTfI8oDVoyO\">block or otherwise try to frustrate</a> these uses.</p>\n\n<p>Underlying this factor is an argument about whether it’s legitimate to say you’re on ‘the Open Web’ while selectively blocking clients you don’t like – either because they’re abusive technically (over-crawling), or because you don’t like what they do with the data. My observation here is that however you feel about it, that practice is now very, very widespread – evidence of great demand on the publisher side. If that capability were taken away, I strongly suspect the net result would be very negative for the Open Web.</p>\n\n<h3 id=\"its-about-control\">It’s About Control</h3>\n\n<p>Lurking beneath all of these arguments is a tension between the interests of those who produce and use content. 
Forgive me for resorting to hyperbole: some content producers want pixel-perfect control not only over how their information is presented but also over how it is used and who uses it, while some open access advocates want all information to be usable for any purpose, any time, anywhere.</p>\n\n<p>Either of these outcomes (hyperbolic as they are) would be bad for the Open Web.</p>\n\n<p>The challenge, then, is finding the right balance – a Web where content producers have incentives to make their content available in a way that can be reused as much as is reasonable. That balance needs to be stable and sustainable, and take into account shocks like the introduction of AI.</p>\n\n<h3 id=\"a-way-forward\">A Way Forward</h3>\n\n<p>Having an Open Web available for humanity is not a guaranteed outcome; we may end up in a future where easily available information is greatly diminished or even absent.</p>\n\n<p>With that and all of the observations above in mind, what’s most apparent to me is that we should focus on finding ways to create and strengthen incentives to publish content that’s open (for some definition of open) -- understanding that people might have a variety of motivations for doing so. If environmental factors like AI change their incentives, we need to understand why and address the underlying concerns if possible.</p>\n\n<p>In other words, we have to create an Internet where people <em>want</em> to publish content openly – for some definition of “open.” Doing that may challenge the assumptions we’ve made about the Web as well as what we want “open” to be. What’s worked before may no longer create the incentive structure that leads to the greatest amount of content available to the greatest number of people for the greatest number of purposes.</p>",
"image": null,
"media": [],
"authors": [
{
"name": "Mark Nottingham",
"email": null,
"url": "https://mnot.net/personal/"
}
],
"categories": [
{
"label": "Internet and Web",
"term": "Internet and Web",
"url": null
}
]
}
]
}