Analysis of https://www.mnot.net/blog/index.atom

Feed fetched in 158 ms.
Warning Content type is application/atom+xml, not text/xml or application/xml.
Feed is 59,750 characters long.
Feed has an ETag of "ec1a-64c549f425f20".
Feed has a last modified date of Fri, 06 Mar 2026 05:49:53 GMT.
Feed is well-formed XML.
Warning Feed has no styling.
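The "no styling" warning means a browser that follows the feed link will render raw XML. A common remedy is an xml-stylesheet processing instruction at the top of the feed; a minimal sketch (the stylesheet path here is hypothetical, and some browsers also accept a CSS stylesheet via type="text/css"):

```xml
<?xml version="1.0" encoding="utf-8"?>
<?xml-stylesheet type="text/xsl" href="/feed-style.xsl"?>
<feed xmlns="http://www.w3.org/2005/Atom">
  <!-- feed content as before -->
</feed>
```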
This is an Atom feed.
Feed title: Mark Nottingham
Feed self link matches feed URL.
Warning Feed is missing an image.
Feed has 5 items.
First item published on 2026-02-20T00:00:00.000Z
Last item published on 2025-09-20T00:00:00.000Z
All items have published dates.
Newest item was published on 2026-02-20T00:00:00.000Z.
Info Feed's Last-Modified date is newer than the newest item's published date (2026-03-06T05:49:53.000Z > 2026-02-20T00:00:00.000Z).
Home page URL: https://www.mnot.net/blog/
Home page has feed discovery link in <head>.
Home page has a link to the feed in the <body>.
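Because the feed reports both an ETag and a Last-Modified date, a poller can revalidate it with a conditional GET instead of re-downloading all 59,750 characters each time. A minimal sketch in Python, reusing the values reported above (the helper name is my own; the header semantics are standard HTTP):

```python
# Sketch: conditional revalidation of the feed, reusing the ETag and
# Last-Modified values from the report above.
import urllib.request

FEED_URL = "https://www.mnot.net/blog/index.atom"

def conditional_headers(etag, last_modified):
    """Build request headers for a conditional GET; either value may be None."""
    headers = {}
    if etag:
        headers["If-None-Match"] = etag
    if last_modified:
        headers["If-Modified-Since"] = last_modified
    return headers

req = urllib.request.Request(
    FEED_URL,
    headers=conditional_headers('"ec1a-64c549f425f20"',
                                "Fri, 06 Mar 2026 05:49:53 GMT"),
)
# urllib.request.urlopen(req) raises HTTPError with code 304 when the feed
# is unchanged; a poller treats that as "not modified" and skips re-parsing.
```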

Formatted XML
<?xml version="1.0" encoding="utf-8"?>
<feed xmlns="http://www.w3.org/2005/Atom">
    <title>Mark Nottingham</title>
    <link rel="alternate" type="text/html" href="https://www.mnot.net/blog/"/>
    <link rel="self" type="application/atom+xml" href="https://www.mnot.net/blog/index.atom"/>
    <id>tag:www.mnot.net,2010-11-11:/blog//1</id>
    <updated>2026-03-06T05:49:48Z</updated>
    <subtitle></subtitle>
    <entry>
        <title>The Internet Isn’t Facebook: How Openness Changes Everything</title>
        <link rel="alternate" type="text/html" href="https://www.mnot.net/blog/2026/02/20/open_systems"/>
        <id>https://www.mnot.net/blog/2026/02/20/open_systems</id>
        <updated>2026-02-20T00:00:00Z</updated>
        <author>
            <name>Mark Nottingham</name>
            <uri>https://www.mnot.net/personal/</uri>
        </author>
        <summary>Openness makes the Internet harder to govern — but also makes it resilient, innovative, and difficult to capture. Let&apos;s look at how the openness of the Internet both defines it and ensures its success.</summary>
        <category term="Tech Regulation"/>
        <category term="Web and Internet"/>
        <content type="html" xml:lang="en" xml:base="https://www.mnot.net/blog/2026/02/20/open_systems"><![CDATA[<p class="intro">“Open” tends to get thrown around a lot when talking about the Internet: Open Source, <a href="https://www.mnot.net/blog/2024/07/05/open_internet_standards">Open Standards</a>, Open APIs. However, one of the most important senses of the Internet’s openness doesn’t get discussed as much: its openness <em>as a system</em>. It turns out this has profound effects on both the Internet’s design and how it might be regulated.</p>

<p>This critical aspect of the Internet’s architecture needs to be understood more now than ever. For many, digital sovereignty is top-of-mind in the geopolitics of 2026, but some conceptions of it treat openness as a bug, not a feature. The other hot topic – regulation to address legitimately-perceived harms on the Internet – can put both policy goals and the value we get from the Internet at risk if it’s undertaken in a way that doesn’t account for the openness of the Internet. Properly utilised, though, the power of openness can actually help democracies contribute to the Internet (and other technologies like AI) in a constructive way that reinforces their shared values.</p>

<h3 id="open-and-shut">Open and Shut</h3>

<p>Most often, people think and work within <em>closed systems</em> – those whose boundaries are fixed, where internal processes can be isolated from external forces, and where power is concentrated hierarchically. That single scope can still embed considerable complexity, but the assumptions that its closed nature allows make certain skills, tools, and mindsets advantageous. This simplification helps compartmentalise effects and reduces interactions; it’s easier when you don’t have to deal with things you don’t (and can’t) know, much less control.</p>

<p>Many things we interact with daily are closed – for example, a single company, a project group, or even a legal jurisdiction. The Apple App Store, air traffic control, bank clearing systems, and cable television networks are closed; so are many of the emerging AI ecosystems.</p>

<p>The Internet is not like that.</p>

<p>That’s because it’s not possible to know or control all of the actors and forces that influence and interact with the Internet. New applications and networks appear daily, without administrative hoops; often, this is referred to as “<a href="https://www.internetsociety.org/blog/2014/04/permissionless-innovation-openness-not-anarchy/">permissionless innovation</a>,” which allowed things like the Web and real-time video to be built on top of the network without asking telecom operators for approval. New protocols and services are constantly proposed, implemented and deployed – sometimes through an <abbr title="Standards Developing Organisation">SDO</abbr> like the <abbr title="Internet Engineering Task Force">IETF</abbr>, but often without any formal coordination.</p>

<p>This is an open system, and it’s important to understand how that openness constrains the nature of what’s possible on the Internet. What works in a closed system falls apart when you try to apply it to the Internet. Openness as a system makes introducing new participants and services very easy – and that’s a huge benefit – but that open nature makes other aspects of managing the ecosystem very different (and sometimes difficult). Let’s look at a few.</p>

<h3 id="designing-for-openness">Designing for Openness</h3>

<p>Designing an Internet service like an online shop is easy if you assume it’s a closed ecosystem with an authority that ‘runs’ the shop. Yes, you have to deal with accounts, and payments, and abuse, and all of the other aspects, but the issues are known and can be addressed with the right amount of capital and a set of appropriate professionals.</p>

<p>By contrast, designing an open trading ecosystem where there is no single authority lurking in the background and making sure everything runs well is an entirely different proposition. You need to consider how all of the components will interact, and at the same time ensure that none is inappropriately dominated by a single actor (or even a small set of them) unless there are appropriate constraints on their power. You need to make sure that the amount of effort needed to join the system is low, while at the same time fighting the abusive behaviours that leverage that low barrier, such as spam.</p>

<p class="callout">This is why regulatory efforts that are focused on reforming currently closed systems – “opening them up” by compelling them to expose APIs and allow competitors access to their systems – are unlikely to be successful, because those platforms are designed with assumptions that you can’t take for granted when building an open system. I’ve <a href="https://www.mnot.net/blog/2024/11/29/platforms">written previously</a> about Carliss Baldwin’s excellent work in this area, primarily from an economic standpoint. An open system is not just a closed one with a few APIs grafted onto it.</p>

<p>For example, you’re likely to need a reputation system for vendors and users, but it can’t rely on a single authority making judgment calls about how to assign reputation, handle disputes, and so forth. Instead, you’ll want to make it more modular, where different reputation systems can compete. That’s a very different design task, and it is undoubtedly harder to achieve a good outcome.</p>

<p>At the same time, an open system like the Internet needs to be more pessimistic in its assumptions about who is using it. While closed systems can take drastic steps like excluding bad actors from them, this is much more difficult (and problematic) in an open system. For example, a closed shopping site will have a definitive list of all of its users (both buyer and seller) and what they have done, so it can ascertain how trustworthy they are based upon that complete view. In an open system, there is no such luxury – each actor only has a partial view of the system.</p>

<h3 id="introducing-change-in-open-systems">Introducing Change in Open Systems</h3>

<p>An operator of a proprietary, closed service like Amazon, Google, or Facebook has a view of its entire state and is able to deploy changes across it, even if they break assumptions its users have previously relied upon. Their privileged position gives them this ability, and even though these services run on top of the Internet, they don’t inherit its openness.</p>

<p>In contrast, an open system like e-mail, federated messaging, or Internet routing is much harder to evolve, because you can’t create a list of who’s implementing or using a protocol with any certainty; you can’t even know all of the <em>ways</em> it’s being used. This makes introducing changes tricky; as is often said in the <abbr title="Internet Engineering Task Force">IETF</abbr>, <strong>you can’t have a protocol ‘flag day’ where everyone changes how they behave at the same time</strong>.  Instead, mechanisms for gradual evolution (extensibility and versioning) need to be carefully built into the protocols themselves.</p>

<p>The Web is another example of an open system.<sup id="fnref:1"><a href="#fn:1" class="footnote" rel="footnote" role="doc-noteref">1</a></sup> No one can enumerate all of the Web servers in the world – there are just too many, some hidden behind firewalls and logins. There are whole social networks and commerce sites that you’ve never heard of in other parts of the world. While search engines make us feel like we see the whole Web (and have every incentive to make us believe that), what they index is a small fraction of the real thing, and it misses the so-called ‘deep’ Web. This vastness is why browsers have to be so conservative in introducing changes, and why we have to be so careful when we update the HTTP protocol.</p>

<h3 id="governing-open-systems">Governing Open Systems</h3>

<p>Openness also has significant implications for governance. Command-and-control techniques that work well when governing closed systems are ineffective on an open one, and can often be counterproductive.</p>

<p>At the most basic level, this is because there is no single party to assign responsibility to in an open system – its governance structure is polycentric (i.e., has multiple and often diffuse centres of power). Compounding that effect is the fact that large open systems like the Internet span multiple jurisdictions, so a single jurisdiction is always going to be playing “whack-a-mole” if it tries to enforce compliance on one party. As a result, decisions in open systems tend to take much more time and effort than anticipated if you’re used to dealing with closed, hierarchical systems.</p>

<p>On the Internet, another impact of openness is seen in the tendency to create “building block” technology components that focus on enabling communication, not limiting it. That means that they are designed to support broad requirements from many kinds of users, not constrain them, and that they’re composed into layers which are distinct and separate. So trying to use open protocols to regulate behaviour of Internet users is often like trying to pin spaghetti to the wall.</p>

<p>Consider, for example, the UK’s attempts to regulate user behaviour by regulating lower-layer general-purpose technologies like <abbr title="Domain Name System">DNS</abbr> resolvers. Yes, they can make it more difficult for those using common technology to do certain things, but actually stopping such behaviour is very hard, due to the flexible, layered nature of the Internet; determined people can do the work and use alternative <abbr title="Domain Name System">DNS</abbr> servers, encrypted <abbr title="Domain Name System">DNS</abbr>, <abbr title="Virtual Private Networks">VPNs</abbr>, and other technologies to work around filters. This is considered a feature of a global communications architecture, not a bug.</p>

<p>That’s not to say that all Internet regulation is a fool’s errand. The EU’s Digital Markets Act is targeting a few well-identified entities who have (very successfully) built closed ecosystems on top of the open Internet. At least from the perspective of Internet openness, that isn’t problematic (and indeed might result in more openness).</p>

<p>On the other hand, the Australian eSafety Regulator’s effort to improve online safety – itself a goal not at odds with Internet openness – falls on its face by <a href="https://www.mnot.net/blog/2022/09/11/esafety-industry-codes">applying its regulatory mechanisms to <em>all</em> actors on the Internet</a>, not just a targeted few. This is an extension of the “Facebook is the Internet” mindset – acting as if the entire Internet is defined by a handful of big tech companies. Not only does that create significant injustice and extensive collateral damage, it also creates the conditions for making that outcome more likely (surely a competition concern). While these closed systems might be the most legible part of the Internet to regulators, they shouldn’t be mistaken for the Internet itself.</p>

<p>Similarly, blanket requirements to expose encrypted messages have the effect of ‘chasing’ criminals to alternative services, making their activity even less legible to authorities and severely impacting the security and rights of law-abiding citizens in the process. That’s because there is no magical list of all of the applications that use encryption on the Internet: instead, regulators end up playing whack-a-mole. Cryptography relies on mathematical concepts realised in open protocols; treating encryption as a switch that companies can simply turn off misses the point.</p>

<p>None of this is new or unique to the Internet; cross-border institutions are by nature open systems, and these issues come up often in discussions of global public goods (whether it is oceans, the climate, or the Internet). They thrive under governance that focuses on collaboration, diversity, and collective decision-making. For those that are used to top-down, hierarchical styles of governance, this can be jarring, but it produces systems that are far more resilient and less vulnerable to capture.</p>

<h3 id="why-the-internet-must-stay-open">Why the Internet Must Stay Open</h3>

<p>If you’ve read this far, you might wonder why we bother: if openness brings so many complications, why not just change the Internet so that it’s a simpler, closed system that is easier to design and manage?  Certainly, it’s <em>possible</em> for large, world-spanning systems to be closed. For example, both the international postal and telephony systems are effectively closed (although the latter has opened up a bit). They are reliable and successful (for some definition of success).</p>

<p>I’d argue that those examples are both highly constrained and well-defined; the services they provide don’t change much, and for the most part new participants are introduced only on one ‘side’ – new end users. Keeping these networks going requires considerable overhead and resources from governments around the world, both internally and at the international coordination layer.</p>

<p>The Internet (in a broader definition) is not nearly so constrained, and the bulk of its value is defined by the ability to introduce new participants of all kinds (not just users) <em>without</em> permission or overhead. This isn’t just a philosophical preference; it’s embedded in the architecture itself via the <a href="https://en.wikipedia.org/wiki/End-to-end_principle">end-to-end principle</a>. Governing major aspects of the Internet by international treaty is simply unworkable, and if the outcome of that agreement is to limit the ability of new services or participants to be introduced (e.g., “no new search engines without permission”), it’s going to have a material effect on the benefits that humanity has come to expect from the Internet. In many ways, it’s just another pathway to <a href="https://www.rfc-editor.org/rfc/rfc9518.html">centralization</a>.</p>

<p>Again, all of this is not to say that closed systems on <em>top</em> of the Internet shouldn’t be regulated – just that it needs to be done in a way that’s mindful of the open nature of the Internet itself. The guiding principle is clear: regulate the endpoints (applications, hosts, and specific commercial entities), not the transit mechanisms (the protocols and infrastructure). From what’s happened so far, it looks like many governments understand that, but some are still learning.</p>

<p>Likewise, the many harms associated with the Internet need both technical and regulatory solutions; botnets, <abbr title="Distributed Denial of Service Attack">DDoS</abbr>, online abuse, “cybercrime” and much more can’t be ignored. However, solutions to these issues must respect the open nature of the Internet; even though their impact on society is heavy, the collective benefits of openness – both social and economic – <em>still</em> outweigh them; low barriers to entry ensure global market access, drive innovation, and prevent infrastructure monopolies from stifling competition.</p>

<p>Those points acknowledged, I and many others are concerned that regulating ‘big tech’ companies may have the unintended side effect of ossifying their power – that is, blessing their place in the ecosystem and making it harder for more open systems to displace them. This concentration of power isn’t an accident; commercial entities have a strong economic incentive to build proprietary walled gardens on top of open protocols to extract rent. For example, we’d much rather see global commerce based upon open protocols, well-thought-out legal protections, and cooperation, rather than overseen (and exploited) by the Amazon/eBay/Temu/etc. gang.</p>

<p>Of course, some jurisdictions can and will try to force certain aspects of the Internet to be closed, from their perspective. They may succeed in achieving their local goals, but such systems won’t offer the same properties as the Internet. Closed systems can be bought, coerced, lobbied into compliance, or simply fail: their hierarchical nature makes them vulnerable to failures of leadership. The Internet’s openness makes it harder to maintain and govern, but also makes it far more resilient and resistant to capture.</p>

<p>Openness is what makes the Internet the Internet. It needs to be actively pursued if we want the Internet to continue providing the value that society has come to depend upon from it.</p>

<p><em>Thanks to <a href="https://www.komaitis.org">Konstantinos Komaitis</a> for his suggestions.</em></p>

<div class="footnotes" role="doc-endnotes">
  <ol>
    <li id="fn:1">
      <p>Albeit one that is the foundation for a number of very large closed systems. <a href="#fnref:1" class="reversefootnote" role="doc-backlink">&#8617;</a></p>
    </li>
  </ol>
</div>]]></content>
    </entry>
    <entry>
        <title>The Power of &apos;No&apos; in Internet Standards</title>
        <link rel="alternate" type="text/html" href="https://www.mnot.net/blog/2026/02/13/no"/>
        <id>https://www.mnot.net/blog/2026/02/13/no</id>
        <updated>2026-02-13T00:00:00Z</updated>
        <author>
            <name>Mark Nottingham</name>
            <uri>https://www.mnot.net/personal/</uri>
        </author>
        <summary>The voluntary nature of Internet standards means that the biggest power move may be to avoid playing the game. Let&apos;s take a look.</summary>
        <category term="Tech Regulation"/>
        <category term="Standards"/>
        <category term="Web and Internet"/>
        <content type="html" xml:lang="en" xml:base="https://www.mnot.net/blog/2026/02/13/no"><![CDATA[<p class="intro">Fairly regularly, I hear someone ask whether a particular company is expressing undue amounts of power in Internet standards, seemingly with the implication that they’re getting away with murder (or at least the Internet governance equivalent).</p>

<p>While it’s not uncommon for powerful entities to try to steer the direction that the work goes in, they don’t have free rein: the <a href="https://www.mnot.net/blog/2024/07/05/open_internet_standards">open nature of Internet standards processes</a> assures that their proposals are subjected to considerable scrutiny from their competitors, technical experts, civil society representatives, and on occasion, governments. Of course there are counterexamples, but in general that’s not something I worry about <em>too</em> much.</p>

<p>The truth is that there is very little power expressed in standards themselves. Instead, it resides in the implementation, deployment, and use of a particular technology, no matter whether it was standardised in a committee or is a <em>de facto</em> standard. Open standards processes provide some useful properties, but they are <strong>not</strong> a guarantee of quality or suitability and there are many standards that have zero impact.</p>

<p>That implication of <a href="https://www.mnot.net/blog/2024/03/13/voluntary">voluntary adoption</a> is why I believe that <strong>the most undiluted expression of power in Internet standards is saying ‘no’</strong> – in particular, when a company declines to participate in or implement a specification, feature, or function. Especially if that company is central to a ‘choke point’ with already embedded power due to adoption of related technologies like an Operating System or Web browser. In the most egregious cases, this is effectively saying ‘we want that to stay proprietary.’</p>

<p>Sometimes the no is explicit. I’ve heard an engineer from a Very Big Tech Company publicly declare that their product would not implement a specification, with the very clear implication that the working group shouldn’t bother adopting the spec as a result. That’s using their embedded power to steer the outcome, hard.</p>

<p>Usually, though, it’s a lot more subtle. Concerns are raised. Review of a specification is de-prioritised. Maybe a standard is published, but it never gets to implementation. Or maybe the scope of the standard or its implementation is watered down too much to deliver anything actually interoperable or functional.</p>

<p>To be very clear, engineers often have very good reasons for declining to implement something. There are a <em>lot</em> of bad ideas out there, and Internet engineering imposes a lot of constraints on what is possible. Proposals have to run a gauntlet of technical reviews, architectural considerations, and carefully staked-out fiefdoms to see the light of day. Proponents are often convinced of the value of their contributions, only to find that they fail to get traction for reasons that can be hard to understand. The number of people who understand the nuances is small: usually, just a handful in any given field.</p>

<p>But when the ‘no’ comes about because it doesn’t suit the agendas of powerful parties, something is wrong. Even people who want to see a better Internet reduce their expectations, because they lose faith in the possibility of success.</p>

<h3 id="a-failure-of-ambition">A Failure of Ambition</h3>
<p>To me, the evidence of this phenomenon is clearest in how little ambition we’re seeing from the Web. The Web should be a constantly rising sea of commoditised technology, cherry-picking successful proprietary applications – marketplaces like Amazon and eBay, social networks like LinkedIn and Facebook, chat on WhatsApp and iMessage, search on Google, and so on – and reinventing them as public-good-oriented features without a centralised owner. Robin Berjon dives into this view of the Web in <a href="https://berjon.com/bigger-browser/">You’re Going to Need a Bigger Browser</a>.<sup id="fnref:1"><a href="#fn:1" class="footnote" rel="footnote" role="doc-noteref">1</a></sup></p>

<p>Instead, most current Web standards activity focuses on small, incremental features: tweaking around the edges and creating new ‘low level’ APIs that proprietary things can be built upon. This approach was codified a while back in the ‘<a href="https://github.com/extensibleweb/manifesto">Extensible Web Manifesto</a>’, which was intended to let the community focus its resources and let a ‘thousand flowers bloom’, but the effect has been to allow silo after silo to be built upon the Web, solidifying its role as the greatest centralisation technology ever.</p>

<p>There are small signs of life. Recent features like Web Payments, federated identity and the various (somewhat) decentralised social networking protocols show promise for extending the platform in important ways, but they’re exceptional, not the rule.</p>

<h3 id="creating-upward-pressure">Creating Upward Pressure</h3>
<p>How then, can we create higher-level capabilities that serve society but aren’t proprietary?</p>

<p>Remember that <a href="https://www.mnot.net/blog/2024/03/13/voluntary">the voluntary nature of Internet standards</a> is a feature – it allows us to fail by using the marketplace as a proving function. Forcing tech companies to implement well-intentioned specifications that aren’t informed by experience is a recipe for broken, bad tech. Likewise, ‘standardising harder’ isn’t going to create better outcomes: the real influence of what standards do is in their implementation and adoption.</p>

<p>What matters is not writing specifications, it’s getting to a place where it’s not possible for private concerns to express inappropriate power over the Internet. Or as Robin <a href="https://berjon.com/digital-sovereignty/">articulates</a>: “What matters is who has the structural power to deploy the standards they want to see and avoid those they dislike.” To me, that suggests a few areas where progress can be made:</p>

<p class="hero">First, we should remember that the market is the primary force shaping companies’ behaviour right now. It used to be that paid services like Proton were <a href="https://balkaninsight.com/2025/04/01/taking-aim-at-big-tech-proton-ceo-warns-democracy-depends-on-privacy/">mocked for competing with free Google services</a>. Now they’re viable because people realised the users are the product. If we want privacy-respecting, decentralised solutions and are willing to pay for them, that changes the incentives for companies, big and small. However, the solutions need to be bigger than any one company.</p>

<p class="hero">Second, where the market fails, competition regulators can and should step in. They’ve been increasingly active recently, but I’d like to see them go further: to provide <strong>stronger guidelines for open standards processes</strong>, and to give companies stronger incentives to participate and adopt open standards, such as a <strong>presumption that adopting a specification that goes through a high-quality process is not anticompetitive</strong>. Doing so would create natural pressure for companies to be interoperable (reducing those choke points) while also being more subject to public and expert review.</p>

<p class="hero">Third, private corporations are not the only source of innovation in the world. In fact, there are <a href="https://www.hbs.edu/faculty/Pages/item.aspx?num=36972">great arguments</a> that open collaboration is a much deeper source of innovation in the modern economy. My interest turns towards the possibilities of public sponsorship for development of the next generation of Internet technology: what’s now being called <strong>Digital Public Infrastructure</strong>. There are many challenging issues in this area – especially regarding governance and, frankly, viability – but if the needle can be threaded and the right model found, the benefits to the people who use the Internet could be massive.</p>

<div class="footnotes" role="doc-endnotes">
  <ol>
    <li id="fn:1">
      <p>Yes, as discussed before there are <a href="https://www.mnot.net/blog/2024/11/29/platforms">things that are harder to do without a single-company chokepoint</a>, but that shouldn’t preclude <em>trying</em>. <a href="#fnref:1" class="reversefootnote" role="doc-backlink">&#8617;</a></p>
    </li>
  </ol>
</div>]]></content>
    </entry>
    <entry>
        <title>Some Thoughts on the Open Web</title>
        <link rel="alternate" type="text/html" href="https://www.mnot.net/blog/2026/01/20/open_web"/>
        <id>https://www.mnot.net/blog/2026/01/20/open_web</id>
        <updated>2026-01-20T00:00:00Z</updated>
        <author>
            <name>Mark Nottingham</name>
            <uri>https://www.mnot.net/personal/</uri>
        </author>
        <summary>The Open Web means several things to different people, depending on context, but recent discussions have focused on the Web&apos;s openness in terms of access to information – how easy it is to publish and obtain information there without barriers.</summary>
        <category term="Web and Internet"/>
        <content type="html" xml:lang="en" xml:base="https://www.mnot.net/blog/2026/01/20/open_web"><![CDATA[<p class="intro">“The Open Web” means several things to different people, depending on context, but recently discussions have focused on the Web’s Openness in terms of <strong>access to information</strong> -- how easy it is to publish and obtain information without barriers there.</p>

<p>David Schinazi and I hosted a pair of ad hoc sessions on this topic at the last IETF meeting in Montreal and the subsequent W3C Technical Plenary in Kobe; you can see the <a href="https://docs.google.com/document/d/1WaXDfwPP6olY-UVQxDZKNkUyqvmHt-u4kREJW4ys6ms/edit?usp=sharing">notes and summaries from those sessions</a>.  This post contains my thoughts on the topic so far, after some simmering.</p>

<h3 id="the-open-web-is-amazing">The Open Web is Amazing</h3>

<p>For most of human history, it’s been difficult to access information. As an average citizen, you had to work pretty hard to access academic texts, historical writings, literature, news, public information, and so on. Libraries were an amazing innovation, but locating and working with the information there was still a formidable challenge.</p>

<p>Likewise, publishing information for broad consumption required resources and relationships that were unavailable to most people. Gutenberg famously broke down some of those barriers, but many still remained: publishing and distributing books (or articles, music, art, films) required navigating extensive industries of gatekeepers, and often insurmountable costs and delays.</p>

<p>Tim Berners-Lee’s invention cut through all of that; it was now possible to communicate with the whole world at very low cost and almost instantaneously. Various media industries were disrupted (but not completely displaced) by this innovation, and reinterpreted roles for intermediaries (e.g., search engines for librarians, online marketplaces for ‘brick and mortar’ shops) were created.</p>

<p>Critically, a norm was also created: an expectation that content was easy to access and didn’t require paying or logging in. This norm was not enforced, and it was not always honoured – there were still subscription sites, and that’s OK – but they didn’t see the massive network effects that hyperlinks and browsers brought.</p>

<p>It is hard to overstate the benefits of this norm. Farmers in developing countries now have easy access to guidelines and data that help their crops succeed. Students around the world have access to resources that were unimaginable even a few decades ago. They can also contribute to that global commons of content, benefiting others as they build a reputation for themselves.</p>

<p>The Open Web is an amazing public good, both for those who consume information and those who produce it. By reducing costs and friction on both sides, it allows people all over the world to access and create information in a way – and with an ease – that would have been unimaginable to our predecessors. It’s worth fighting for.</p>

<h3 id="people-have-different-motivations-for-opening-content">People Have Different Motivations for Opening Content</h3>

<p>We talk about “The Open Web” in the singular, but in fact there are many motivations for making content available freely online.</p>

<p>Some people consciously make their content freely available on the Web because they want to contribute to the global commons, to help realise all of the benefits described above.</p>

<p>Many don’t, however.</p>

<p>Others do it because they want to be discovered and build a reputation. Or because they want to build human connections. Or because they want revenue from putting ads next to the content. Or because they want people to try their content out and then subscribe to it on the less-than-open Web.</p>

<p>Most commonly, it’s a blend of many (or even all) of these motivations.</p>

<p>Discussions of the Open Web need to consider all of them distinctly -- what in their environments is changing, and what might encourage or discourage different kinds of Open Web publishers. Focusing on only some motivations, or creating “purity tests” for content, isn’t helpful.</p>

<h3 id="there-are-many-degrees-of-open">There are Many Degrees of “Open”</h3>

<p>Likewise, there are many degrees of “open.” While some Open Web content doesn’t come with any strings, much of it does. You might have to allow tracking for ads. While an article might be available to search engines (to drive traffic), you might have to register for an account to view the content as an individual.</p>

<p>There are serious privacy considerations associated with both of these, but those concerns should be considered as distinct from those regarding open access to information. People sometimes need to get a library card to access information at their local library (in person or online), but that doesn’t make the information less open.</p>

<p class="callout">One of the most interesting assertions at the meetings we held was about advertising-supported content: that it was <em>more</em> equitable than “micro-transactions” and similar pay-to-view approaches, because it makes content available to those who would otherwise not be able to afford it.</p>

<p>At the same time, these ‘small’ barriers – for example, requirements to log in after reading three articles – add up, reducing the openness of the content. If the new norm is that everyone has to log in everywhere to get Web content (and we may be well on our way to that), the Open Web suffers.</p>

<p>Similarly, some open content is free to all comers and can be reused at will, where other examples have technical barriers (such as bot blockers or other selective access schemes) and/or legal barriers (namely, copyright restrictions).</p>

<h3 id="it-has-to-be-voluntary">It Has to be Voluntary</h3>

<p>Everyone who publishes on the Open Web does so because they want to – because the benefits they realise (see above) outweigh any downsides.</p>

<p>Conversely, any content that is not on the Open Web is absent because its owner has judged that opening it up is not worthwhile for them. They cannot be forced to “open up” that content -- they can only be encouraged.</p>

<p>Affordances and changes in infrastructure, platforms, and other aspects of the ecosystem -- sometimes realised in technical standards, sometimes not -- might change that incentive structure and create the conditions for more or less content on the Open Web. They cannot, however, be forced or mandated.</p>

<p>To me, this means that attempts to coerce different parties into desired behaviors are unlikely to succeed – they have to <em>want</em> to provide their content. That includes strategies like withholding capabilities from them; they’ll just go elsewhere to obtain them, or put their content behind a paywall.</p>

<h3 id="its-changing-rapidly">It’s Changing Rapidly</h3>

<p>We’re talking about the Open Web now because of the introduction of AI -- a massive disruption to the incentives of many content creators and publishers, because AI both leverages their content (through scraping for training) and competes with it (because it is generative).</p>

<p>For those who opened up their content because they wanted to establish a reputation and build connections, this feels exploitative. They made their content available to benefit people, and it turns out that it’s benefiting large corporations who claim to be helping humanity but have failed to convince many.</p>

<p>For those who want to sell ads next to their content or entice people to subscribe, this feels like betrayal. Search engines built an ecosystem that benefited publishers and the platforms, but publishers see those same platforms as continually taking more value from the relationship -- as seen in efforts to force intermediation like AMP, and now AI, where sites get drastically reduced traffic in exchange for nothing at all.</p>

<p>And so people are blocking bots, putting up paywalls, changing business models, and yanking their content off the Open Web. The commons is suffering because technology (which always makes <em>something</em> easier) now makes content creation <em>and</em> consumption easier, so long as you trust your local AI vendor.</p>

<p>This change is unevenly distributed. There are still people happily publishing open content in formats like RSS, which doesn’t facilitate tracking or targeting, and is wide open to scraping and reuse. That said, there are large swathes of content that are disappearing from the Open Web because it’s no longer viable for the publisher; the balance of incentives for them has changed.</p>

<h3 id="open-is-not-free-to-provide">Open is Not Free to Provide</h3>

<p>Information may be a non-rivalrous good, but that doesn’t mean it’s free to provide. The people who produce it need to support themselves.</p>

<p>That doesn’t mean that their interests dominate all others, nor that the structures that have evolved are the best (or even a good) way to assure that they can do so; these are topics better suited for copyright discussions (where there is a very long history of such considerations being debated).</p>

<p>Furthermore, on a technical level serving content to anyone who asks for it on a global scale might be a commodity service now -- and so very inexpensive to do, in some cases -- but it’s not free, and the costs add up at scale. These costs -- again, alongside the perceived extractive nature of the relationship -- are causing some to <a href="https://social.kernel.org/notice/B2JlhcxNTfI8oDVoyO">block or otherwise try to frustrate</a> these uses.</p>

<p>Underlying this factor is an argument about whether it’s legitimate to say you’re on ‘the Open Web’ while selectively blocking clients you don’t like – either because they’re abusive technically (over-crawling), or because you don’t like what they do with the data. My observation here is that however you feel about it, that practice is now very, very widespread – evidence of great demand on the publisher side. If that capability were taken away, I strongly suspect the net result would be very negative for the Open Web.</p>

<h3 id="its-about-control">It’s About Control</h3>

<p>Lurking beneath all of these arguments is a tension between the interests of those who produce and use content. Forgive me for resorting to hyperbole: some content producers want pixel-perfect control not only over how their information is presented but how it is used and who uses it, and some open access advocates want all information to be usable for any purpose any time and anywhere.</p>

<p>Either of these outcomes (hyperbole as they are) would be bad for the Open Web.</p>

<p>The challenge, then, is finding the right balance – a Web where content producers have incentives to make their content available in a way that can be reused as much as is reasonable. That balance needs to be stable and sustainable, and take into account shocks like the introduction of AI.</p>

<h3 id="a-way-forward">A Way Forward</h3>

<p>Having an Open Web available for humanity is not a guaranteed outcome; we may end up in a future where easily available information is greatly diminished or even absent.</p>

<p>With that and all of the observations above in mind, what’s most apparent to me is that we should focus on finding ways to create and strengthen incentives to publish content that’s open (for some definition of open) -- understanding that people might have a variety of motivations for doing so. If environmental factors like AI change their incentives, we need to understand why and address the underlying concerns if possible.</p>

<p>In other words, we have to create an Internet where people <em>want</em> to publish content openly – for some definition of “open.” Doing that may challenge the assumptions we’ve made about the Web as well as what we want “open” to be. What’s worked before may no longer create the incentive structure that leads to the greatest amount of content available to the greatest number of people for the greatest number of purposes.</p>]]></content>
    </entry>
    <entry>
        <title>Principles for Global Online Meetings</title>
        <link rel="alternate" type="text/html" href="https://www.mnot.net/blog/2025/10/26/equitable-meetings"/>
        <id>https://www.mnot.net/blog/2025/10/26/equitable-meetings</id>
        <updated>2025-10-26T00:00:00Z</updated>
        <author>
            <name>Mark Nottingham</name>
            <uri>https://www.mnot.net/personal/</uri>
        </author>
        <summary>Some thoughts about how to schedule online meetings for a global organisation in an equitable way.</summary>
<content type="html" xml:lang="en" xml:base="https://www.mnot.net/blog/2025/10/26/equitable-meetings"><![CDATA[<p class="intro">One of the trickier problems for organisations that aspire to be global is scheduling a series of meetings. While the Internet has brought the ability to meet with colleagues and stakeholders all over the world, it hasn’t been able to get everyone on the same daily tempo – the earth is still not flat.</p>

<p>As someone who has participated in such organisations from Australia for nearly two decades, I’ve formed some fairly strong opinions about how their meetings should be arranged. What follows is an attempt to distill those thoughts into a set of principles that’s flexible enough to apply to a variety of situations.</p>

<p>Keep in mind the intended application is to a series of global meetings, not a single one-off event. Also, if the set of people who need to attend a given meeting are in timezones that lead to an agreed-to “good” time, you should use that time – but then I question if your organisation is really global. For the rest, read on.</p>

<h3 id="0-its-about-equity">0. It’s About Equity</h3>
<p>For global organisations, meeting scheduling is an equity issue. Arranging a meeting where some people can attend from the convenience of their office in normal business hours while others have to stay up into the middle of the night is not equitable – the former have very low friction for attending, while the latter have to disrupt their lives, families, relationships, and sleep cycles to attend.</p>

<p>When a person does make the extra effort to attend at a less-than-ideal hour, they will not be at their best. Being awake outside your normal hours means that you aren’t thinking as clearly and might react more emotionally than otherwise. Interrupting an evening after a long day can impact your focus. Effective participation is difficult under these conditions.</p>

<p>I cast this as an equity issue because I’ve observed that many don’t perceive it that way. This is often the case if someone’s experience is that most meetings are scheduled at reasonable hours, they don’t have to think about it, and people in other parts of the world staying up late or getting up early to talk to them is normal. It’s only when people realise this privilege and challenge what’s normal that progress can be made. If you want a truly global organisation, people need to be able to participate on equal footing, and that means that some people will need to make what looks like – to them – sacrifices, because they’re used to things being a certain way.</p>

<h3 id="1-share-pain-with-rotation">1. Share Pain with Rotation</h3>
<p>With that framing as an equity issue in mind, it becomes clear what must be done: the ‘pain’ of participating needs to be shared in a way that’s equitable. The focus then becomes characterising what pain is, and how to dole it out in a fair way while still holding functional meetings.</p>

<p>The most common method for scheduling a meeting that involves people from all over the globe is picking “winners” and “losers”. Mary and Joe in North America get a meeting in their daytime; the Europeans have something in their evening, and Asia/Pacific folks have to get up early. Australians get the hardest service – they’re usually up past midnight, but sometimes get roused at 5am or so, depending on the fluctuations of daylight savings. Often, this will be justified with a poll or survey asking for preferences, but one where all options are reasonable for a privileged set of participants, and most are unreasonable for others.</p>

<p>This is all wrapped up in very logical explanations: it’s the constraints we work within, the locations of the participants narrow down the options, it doesn’t make sense to inconvenience a large number of people for the benefit of a few. Or the kicker: if we scheduled the meeting at that time, the folks who are used to having meetings at good times for them wouldn’t come.</p>

<p>All of those are poor excuses that should be challenged, but often aren’t because this privilege is so deeply embedded.</p>

<p>What can be done? The primary tool for pain-sharing is <strong>rotation</strong>. Schedule meetings in rotating time slots so that everyone has approximately the same number of “good”, “ok”, and “bad” time slots. This is how you put people on even footing.</p>

<p>It may even mean intentionally scheduling in a way that people will miss a slot – e.g., attending two out of three. In this case, you’ll need to build tools to make sure that information is shared between meetings (you should be keeping minutes, tracking action items, and creating summaries anyway!), that decisions don’t happen in any one meeting, and that people have a chance to see a variety of people, not just the same subset every time.</p>

<p>For example, imagine an organisation that needs to meet weekly, and has three members in different parts of Europe, five across North America, two in China, and one each in Australia and India. If they rotate between three time slots for their meetings, they might end up with:</p>

<ul>
  <li>UTC: 02:00 / 11:00 / 17:00</li>
  <li>Australia/Eastern: 12:00 / 21:00 / 03:00 (+1d)</li>
  <li>China/Shanghai: 10:00 / 19:00 / 01:00 (+1d)</li>
  <li>US/Eastern: 22:00 (-1d) / 07:00 / 14:00</li>
  <li>Europe/Central: 04:00 / 13:00 / 19:00</li>
  <li>India/Mumbai: 07:30 / 16:30 / 22:30</li>
</ul>

<p>Notice that everyone has approximately one “good” slot, one “ok” slot, and one “bad” slot – depending on each individual’s preferences, of course.</p>
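<p>Conversions like these are fiddly enough to be worth scripting. Here’s a minimal Python sketch (my own, not from any scheduling tool; the IANA zone names are representative choices for the regions above, and daylight saving moves the local times through the year):</p>

```python
from datetime import datetime, timezone
from zoneinfo import ZoneInfo

# Rotating meeting slots, expressed in UTC.
slots_utc = ["02:00", "11:00", "17:00"]

# Representative IANA zones for the regions in the example.
zones = [
    "Australia/Sydney",
    "Asia/Shanghai",
    "America/New_York",
    "Europe/Berlin",
    "Asia/Kolkata",
]

def local_times(date, slots, zone):
    """Convert each UTC slot on `date` to local wall-clock time in `zone`,
    marking slots that land on the previous or next local day."""
    out = []
    for hhmm in slots:
        h, m = map(int, hhmm.split(":"))
        utc = datetime(date.year, date.month, date.day, h, m, tzinfo=timezone.utc)
        local = utc.astimezone(zone)
        day_offset = (local.date() - utc.date()).days
        suffix = {1: " (+1d)", -1: " (-1d)", 0: ""}[day_offset]
        out.append(local.strftime("%H:%M") + suffix)
    return out

for name in zones:
    print(name, local_times(datetime(2025, 1, 15), slots_utc, ZoneInfo(name)))
```

<p>Running it for dates at different points in the year shows the local slots shifting with daylight saving – one reason a rotation schedule needs periodic review rather than being set once.</p>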

<p>One objection I’ve heard to this approach is that it would lead to a state where most of the people go to just one or two of the meetings, and the others are poorly attended. That kind of fragmentation is certainly possible, but in my opinion it says more about the diversity of your organisation and the commitment of the people attending the meeting – both factors that should be separately addressed, not loaded onto some of the participants as meeting pain. Doing so is saying that some people won’t attend if they’re exposed to the conditions that they ask of others.</p>

<h3 id="2-pain-is-individual">2. Pain is Individual</h3>
<p>A common approach to scheduling weighs decisions by how many people are in each timezone. For example, if you’ve got ten people in North America, three in Europe, and one in Asia, you should obviously arrange things to inconvenience the fewest number of people, right?</p>

<p>The problem is, each of those people experiences the pain individually – it is not a collective phenomenon. The person in Asia doesn’t experience 1/14th of the pain if they need to get up at 4:30am for a call. Making things slightly inconvenient for the North Americans doesn’t magnify the pain they experience times ten.</p>

<p>So, don’t weigh your decisions by how many people are in a particular timezone or region. Of course there are limits to this principle – if it’s 100:1 you need to be able to function as a group (e.g., be quorate). But again, I’m questioning whether you’re really a global organisation here; you’re effectively gaslighting the people who are trying to participate from elsewhere by calling yourself one.</p>

<h3 id="3-pain-is-specific">3. Pain is Specific</h3>
<p>It’s easy to fall into the trap of assuming that everyone’s circumstances are the same – that if a 7am meeting is painful for you, it’s equally painful for someone else.</p>

<p>In reality, some people are morning people, while others don’t mind staying up until 2am. Some people might have a family dinner every Thursday night that would be disrupted by your meeting, while others are happy to use that time because that’s when they have the house to themselves.</p>

<p>This means you need to ask what people’s preferences and conflicts are, rather than (for example) assume that 7am-9am is ok, 9am-5pm is good, 5pm-10pm is ok, and everything else is bad. The mechanics of how that information is gathered depends upon the nature of your group, but it needs to be sensitive to privacy and resistant to gaming.</p>

<h3 id="4-pain-is-relative">4. Pain is Relative</h3>
<p>One of the complications of scheduling meetings across timezones is balancing the various kinds of conflicts and inconveniences that they bring up for a proposed time slot. John has to pick up the kids in that timeslot; Hiro is eating breakfast. Marissa needs to have dinner with her family. And Mark just wants a good night’s sleep for once.</p>

<p>I propose a hierarchy of inconvenience and pain, from most to least impactful:</p>

<ol>
  <li>Rearranging your life – changing your sleep schedule, working on weekends (remember, Friday in North America is Saturday in other parts of the world)</li>
  <li>Rearranging family life – shifting meals, changing child or elderly care arrangements</li>
  <li>Moving other meetings – managing conflicts with other professional commitments</li>
</ol>

<p>When asking for conflicts for a given time slot, the higher items should always override the lower forms of pain. I’m sure this could be elaborated upon and extended, but it’s a good starting point.</p>
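<p>That override rule is effectively a lexicographic comparison, which is easy to make concrete. A sketch (the tier names, data shapes, and function names here are hypothetical, not from any real tool):</p>

```python
# Pain tiers, most impactful first (from the hierarchy above).
TIERS = ("life", "family", "meetings")

def slot_score(conflicts):
    """conflicts: one entry per participant, either a tier name or None.
    Returns a tuple counting conflicts per tier; comparing tuples compares
    tier 1 first, so a single 'life' conflict outweighs any number of
    'meetings' conflicts."""
    return tuple(sum(1 for c in conflicts if c == tier) for tier in TIERS)

def best_slot(candidates):
    """candidates: {slot_name: [per-participant conflicts]}.
    Pick the slot whose score is lexicographically smallest."""
    return min(candidates, key=lambda slot: slot_score(candidates[slot]))

example = {
    "02:00Z": ["life", None, None],            # one person loses a night's sleep
    "11:00Z": [None, "meetings", "meetings"],  # two people have clashing meetings
}
```

<p>Because tuples compare element by element, the slot with two clashing meetings still beats the one that costs someone a night’s sleep – the higher tier always dominates, as the principle requires.</p>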

<p>I sometimes also hear about another kind of pain: that rotating meetings makes it hard for some people to keep their calendars. To me, this isn’t #4; it’s #100.</p>

<h3 id="5-circumstances-change">5. Circumstances Change</h3>
<p>People aren’t static. Their lives change, their families change, their health changes. If your meetings are scheduled over long periods of time, that means you need to be responsive to these changes, periodically checking to see if their preferences need updating.</p>

<p>I used to be a night person. I’d be up until at least midnight, sometimes two or three, and mornings would be a real struggle. However, as I’ve gotten older, I’m finding that many mornings I wake naturally at five or so, and I’m ready to sleep at around 10pm unless I’m out of the house. That change has fundamentally affected how I attend meetings.</p>

<p>And, of course, if you have participants in the Southern hemisphere (and you should!), you have to account for daylight saving differences, because the seasons are reversed. It’s not just a one-hour shift – it’s two, and that can make a big difference to someone’s quality of life.</p>

<h3 id="6-respect-peoples-time">6. Respect People’s Time</h3>
<p>Appreciate that what’s just another meeting in the middle of your workday is a huge effort in the middle of the night for someone else; don’t fritter away a substantial portion of it on chitchat. Have an agenda and be prepared to make the meeting valuable. Use offline, asynchronous tools when they’re more appropriate.</p>

<p>Likewise, don’t cancel or re-schedule a meeting at the last minute (or even last day). Setting an alarm for an early meeting and struggling through getting presentable and caffeinated only to find it’s been axed is distinctly unpleasant.</p>]]></content>
    </entry>
    <entry>
        <title>Bridging the Gap Between Standards and Policy</title>
        <link rel="alternate" type="text/html" href="https://www.mnot.net/blog/2025/09/20/configuration"/>
        <id>https://www.mnot.net/blog/2025/09/20/configuration</id>
        <updated>2025-09-20T00:00:00Z</updated>
        <author>
            <name>Mark Nottingham</name>
            <uri>https://www.mnot.net/personal/</uri>
        </author>
        <summary>Achieving policymakers&apos; goals in coordination with Internet standards activity can be difficult. This post explores some of the options and considerations involved.</summary>
        <category term="Tech Regulation"/>
        <category term="Standards"/>
        <category term="Web and Internet"/>
        <content type="html" xml:lang="en" xml:base="https://www.mnot.net/blog/2025/09/20/configuration"><![CDATA[<p>Internet standards bodies like the IETF and W3C are places where experts can come to agreement about the details of how technology should work. These communities have the deep experience that allows them to guide the evolution of the Internet towards common goals.</p>

<p>Policymakers have none of that technical expertise, but are the legitimate source of policy decisions in any functioning society. They don’t have the means to develop new technical proposals: while most countries have a national standards body, their products are a poor fit for a global Internet, and those bodies generally lack specific expertise.</p>

<p>So, it might seem logical for policymakers to turn to Internet standards bodies to develop the technical solutions for their policy goals, trusting the open process and community involvement to produce a good solution. Unfortunately, doing so can create problems that will cause such efforts to fail.</p>

<h3 id="whats-the-problem">What’s the Problem?</h3>

<p>A few different issues often become apparent when policymakers pre-emptively specify a standard.</p>

<p>First, as discussed previously, the <a href="https://www.mnot.net/blog/2024/03/13/voluntary">voluntary nature of Internet standards</a> acts as a proving function for them: if implementers don’t implement or users don’t use, the standard doesn’t matter. If a legal mandate to use a particular standard precedes that proof of viability, it distorts the incentives for participation in the process, because the power relationships between participants have changed – it’s no longer voluntary for the targets of the regulation, and the tone of the effort shifts from being <a href="https://www.mnot.net/blog/2024/07/16/collaborative_standards">collaborative</a> to competitive.</p>

<p>Second, Internet standards are created by <a href="https://www.mnot.net/blog/2024/05/24/consensus">consensus</a>. That approach to decision making is productive when there is reasonable alignment between participants’ motives, but it’s not well suited to handling fundamental conflicts about societal values. That’s because while technical experts might be good at weighing technical arguments and generally adhering to widely agreed-to principles (whether they be regarding Internet architecture or human rights), it’s much more difficult for them to adjudicate direct conflict between values outside their areas of expertise. In these circumstances, the outcome is often simply a lack of consensus.</p>

<p>Third, jurisdictions often have differences in their policy goals, but the Internet is global, and so are its standards bodies, who want the Internet to be interoperable regardless of borders. If policy goals aren’t widely shared and aligned between countries, it becomes even more difficult to come to consensus.</p>

<p>Fourth, making decisions with societal impact in a technical expert body raises fundamental legitimacy issues. That’s not to say that Internet standards can’t or shouldn’t (or don’t) change society in significant ways, but that’s done from the position of private actors coordinating to achieve a common goal through well-understood processes, within the practical boundaries of the commonalities of the applicable legal frameworks. It’s entirely different for a contentious policy decision to be delegated by policymakers to a non-representative technical body.</p>

<p>So, what’s a policymaker to do?</p>

<h3 id="patience-is-a-virtue">Patience is a Virtue</h3>

<p>One widely repeated recommendation for policymakers is to avoid specifying the work or even a venue for it in regulation or legislation until <em>after</em> it’s been created and its viability is proven by some amount of market adoption. Instead, the policymaker should just hint that an industry standard that serves a particular policy goal would be useful.</p>

<p>However, this approach comes with a few caveats:</p>
<ul>
  <li>A set of proponents that drives the standards work has to emerge, and they need to be at least somewhat aligned with the policy goal</li>
  <li>Consensus-based technical standards are slow, so policymakers have to have realistic expectations about the timeline</li>
  <li>If the targets of the regulation don’t participate in the standards process, they may be able to reasonably claim that what results can’t be implemented by them</li>
</ul>

<p>These issues aren’t impossible to address: they just require good communication, alignment of incentives, management of expectations, and careful diligence.</p>

<h3 id="add-a-configuration-layer">Add a Configuration Layer</h3>

<p>Even if the policymaker waits for the outcome of the standards process, it’s rare for the policy decisions to be cleanly separable from the technology that needed to be created. Choices need to be made about how the technology is used and how it maps to the policy goals of a specific jurisdiction.</p>

<p>One intriguing way to manage that gap is to span it with a new entity – one that creates neither technical specifications nor policy goals, but instead is explicitly constituted to define how to meet the stated policy goals using already available technology. That leaves policy formation in the hands of policymakers and technical design in the hands of technologists.</p>

<p>In technology terms, this is a configuration layer: clearly and cleanly separating the concerns of how the technology is designed from how it is used. It still requires the technology to exist and have the appropriate configuration “interfaces”, but promises to take a large part of the policy pressure off of the standards process.</p>

<p>An example of this approach is just being started by the European Commission now. At IETF 123, they explained a proposal for a <a href="https://www.iepg.org/2025-07-20-ietf123/slides-123-iepg-sessa-multi-stakeholder-forum-on-internet-standards-deployment-00.pdf">Multi-stakeholder Forum on Internet Standards Deployment</a> that fills the gap between the definition of Internet security mechanisms and the policy intent of making European networks more secure. Policymakers have no desire to refer to specific RFCs in legislation, and Internet technologists don’t want to define regulatory requirements for Europe, so the idea is that this third entity will make those decisions without defining new technology <em>or</em> policy intent.</p>

<p>Getting this right requires the new forum to be constituted in a particular way. It has to be constrained by the policymaker’s intent, and can’t define new technology. That means that the technology has to be amenable to configuration – the relevant options need to be available. The logical host for the discussion is a venue controlled by the policymaker, but it needs to be open to broad participation (including online and asynchronous participation) so that the relevant experts can participate. Transparency will be key, and I suspect that the decision making policy will be critical to get right – ideally something close to a consensus model, but the policymaker may need to reserve the right to overrule objections or handle appeals.</p>

<p>Needless to say, I’m excited to see how this forum will work out. If successful, it’s a pattern that could be useful elsewhere.</p>]]></content>
    </entry>
</feed>
Raw text
<?xml version="1.0" encoding="utf-8"?>
<feed xmlns="http://www.w3.org/2005/Atom">
  <title>Mark Nottingham</title>
  <link rel="alternate" type="text/html" href="https://www.mnot.net/blog/" />
  <link rel="self" type="application/atom+xml" href="https://www.mnot.net/blog/index.atom" />
  <id>tag:www.mnot.net,2010-11-11:/blog//1</id>
  <updated>2026-03-06T05:49:48Z</updated>
  <subtitle></subtitle>

  <entry>
    <title>The Internet Isn’t Facebook: How Openness Changes Everything</title>
    <link rel="alternate" type="text/html" href="https://www.mnot.net/blog/2026/02/20/open_systems" />
    <id>https://www.mnot.net/blog/2026/02/20/open_systems</id>
    <updated>2026-02-20T00:00:00Z</updated>
    <author>
        <name>Mark Nottingham</name>
        <uri>https://www.mnot.net/personal/</uri>
    </author>
    <summary>Openness makes the Internet harder to govern — but also makes it resilient, innovative, and difficult to capture. Let&apos;s look at how the openness of the Internet both defines it and ensures its success.</summary>
    
	<category term="Tech Regulation" />
    
	<category term="Web and Internet" />
    
    <content type="html" xml:lang="en" xml:base="https://www.mnot.net/blog/2026/02/20/open_systems">
  	  <![CDATA[<p class="intro">“Open” tends to get thrown around a lot when talking about the Internet: Open Source, <a href="https://www.mnot.net/blog/2024/07/05/open_internet_standards">Open Standards</a>, Open APIs. However, one of the most important senses of the Internet’s openness doesn’t get discussed as much: its openness <em>as a system</em>. It turns out this has profound effects on both the Internet’s design and how it might be regulated.</p>

<p>This critical aspect of the Internet’s architecture needs to be understood more now than ever. For many, digital sovereignty is top-of-mind in the geopolitics of 2026, but some conceptions of it treat openness as a bug, not a feature. The other hot topic – regulation to address legitimately-perceived harms on the Internet – can put both policy goals and the value we get from the Internet at risk if it’s undertaken in a way that doesn’t account for the openness of the Internet. Properly utilised, though, the power of openness can actually help democracies contribute to the Internet (and other technologies like AI) in a constructive way that reinforces their shared values.</p>

<h3 id="open-and-shut">Open and Shut</h3>

<p>Most often, people think and work within <em>closed systems</em> – those whose boundaries are fixed, where internal processes can be isolated from external forces, and where power is concentrated hierarchically. That single scope can still embed considerable complexity, but the assumptions that its closed nature allows make certain skills, tools, and mindsets advantageous. This simplification helps compartmentalise effects and reduces interactions; it’s easier when you don’t have to deal with things you don’t (and can’t) know, much less control.</p>

<p>Many things we interact with daily are closed – for example, a single company, a project group, or even a legal jurisdiction. The Apple App Store, air traffic control, bank clearing systems, and cable television networks are closed; so are many of the emerging AI ecosystems.</p>

<p>The Internet is not like that.</p>

<p>That’s because it’s not possible to know or control all of the actors and forces that influence and interact with the Internet. New applications and networks appear daily, without administrative hoops; often, this is referred to as “<a href="https://www.internetsociety.org/blog/2014/04/permissionless-innovation-openness-not-anarchy/">permissionless innovation</a>,” which allowed things like the Web and real-time video to be built on top of the network without asking telecom operators for approval. New protocols and services are constantly proposed, implemented and deployed – sometimes through an <abbr title="Standards Developing Organisation">SDO</abbr> like the <abbr title="Internet Engineering Task Force">IETF</abbr>, but often without any formal coordination.</p>

<p>This is an open system, and it’s important to understand how that openness constrains the nature of what’s possible on the Internet. What works in a closed system falls apart when you try to apply it to the Internet. Openness as a system makes introducing new participants and services very easy – and that’s a huge benefit – but that open nature makes other aspects of managing the ecosystem very different (and sometimes difficult). Let’s look at a few.</p>

<h3 id="designing-for-openness">Designing for Openness</h3>

<p>Designing an Internet service like an online shop is easy if you assume it’s a closed ecosystem with an authority that ‘runs’ the shop. Yes, you have to deal with accounts, and payments, and abuse, and all of the other aspects, but the issues are known and can be addressed with the right amount of capital and a set of appropriate professionals.</p>

<p>In contrast, designing an open trading ecosystem where there is no single authority lurking in the background and making sure everything runs well is an entirely different proposition. You need to consider how all of the components will interact, and at the same time ensure that none is inappropriately dominated by a single actor or even a small set, unless there are appropriate constraints on their power. You need to make sure that the amount of effort needed to join the system is low, while at the same time fighting the abusive behaviours that leverage that low barrier, such as spam.</p>

<p class="callout">This is why regulatory efforts that are focused on reforming currently closed systems – “opening them up” by compelling them to expose APIs and allow competitors access to their systems – are unlikely to be successful, because those platforms are designed with assumptions that you can’t take for granted when building an open system. I’ve <a href="https://www.mnot.net/blog/2024/11/29/platforms">written previously</a> about Carliss Baldwin’s excellent work in this area, primarily from an economic standpoint. An open system is not just a closed one with a few APIs grafted onto it.</p>

<p>For example, you’re likely to need a reputation system for vendors and users, but it can’t rely on a single authority making judgment calls about how to assign reputation, handle disputes, and so forth. Instead, you’ll want to make it more modular, where different reputation systems can compete. That’s a very different design task, and it is undoubtedly harder to achieve a good outcome.</p>

<p>At the same time, an open system like the Internet needs to be more pessimistic in its assumptions about who is using it. While closed systems can take drastic steps like excluding bad actors from them, this is much more difficult (and problematic) in an open system. For example, a closed shopping site will have a definitive list of all of its users (both buyer and seller) and what they have done, so it can ascertain how trustworthy they are based upon that complete view. In an open system, there is no such luxury – each actor only has a partial view of the system.</p>

<h3 id="introducing-change-in-open-systems">Introducing Change in Open Systems</h3>

<p>An operator of a proprietary, closed service like Amazon, Google, or Facebook has a view of its entire state and is able to deploy changes across it, even if they break assumptions its users have previously relied upon. Their privileged position gives them this ability, and even though these services run on top of the Internet, they don’t inherit its openness.</p>

<p>In contrast, an open system like e-mail, federated messaging, or Internet routing is much harder to evolve, because you can’t create a list of who’s implementing or using a protocol with any certainty; you can’t even know all of the <em>ways</em> it’s being used. This makes introducing changes tricky; as is often said in the <abbr title="Internet Engineering Task Force">IETF</abbr>, <strong>you can’t have a protocol ‘flag day’ where everyone changes how they behave at the same time</strong>.  Instead, mechanisms for gradual evolution (extensibility and versioning) need to be carefully built into the protocols themselves.</p>
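The “must-ignore” rule that underlies much of this gradual-evolution machinery can be sketched in a few lines: a receiver processes the fields it recognises and silently skips the rest, so newer senders can deploy extensions without a flag day. This is a toy illustration with hypothetical field names, not the wire format of any real protocol:

```python
# Toy sketch of the "must-ignore" extensibility pattern: old
# receivers skip fields they don't recognise, so new fields can
# be rolled out gradually -- no flag day required.
# (Hypothetical field names; not a real protocol.)

KNOWN_FIELDS = {"host", "length"}

def parse_message(lines):
    """Parse 'name: value' fields, ignoring unknown names."""
    fields = {}
    for line in lines:
        name, _, value = line.partition(":")
        name = name.strip().lower()
        if name in KNOWN_FIELDS:
            fields[name] = value.strip()
        # Unknown fields fall through here untouched: an old
        # receiver still interoperates with a newer sender.
    return fields

old_receiver = parse_message([
    "Host: example.net",
    "Length: 42",
    "Shiny-New-Extension: on",   # deployed only by newer senders
])
# old_receiver -> {"host": "example.net", "length": "42"}
```

The design choice matters: if receivers instead rejected messages containing unknown fields, every extension would require all participants to upgrade simultaneously – exactly the flag day that open systems can’t have.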

<p>The Web is another example of an open system.<sup id="fnref:1"><a href="#fn:1" class="footnote" rel="footnote" role="doc-noteref">1</a></sup> No one can enumerate all of the Web servers in the world – there are just too many, some hidden behind firewalls and logins. There are whole social networks and commerce sites that you’ve never heard of in other parts of the world. While search engines make us feel like we see the whole Web (and have every incentive to make us believe that), it’s a small fraction of the real thing that misses the so-called ‘deep’ Web. This vastness is why browsers have to be so conservative in introducing changes, and why we have to be so careful when we update the HTTP protocol.</p>

<h3 id="governing-open-systems">Governing Open Systems</h3>

<p>Openness also has significant implications for governance. Command-and-control techniques that work well when governing closed systems are ineffective on an open one, and can often be counterproductive.</p>

<p>At the most basic level, this is because there is no single party to assign responsibility to in an open system – its governance structure is polycentric (i.e., has multiple and often diffuse centres of power). Compounding that effect is the fact that large open systems like the Internet span multiple jurisdictions, so a single jurisdiction is always going to be playing “whack-a-mole” if it tries to enforce compliance on one party. As a result, decisions in open systems tend to take much more time and effort than anticipated if you’re used to dealing with closed, hierarchical systems.</p>

<p>On the Internet, another impact of openness is seen in the tendency to create “building block” technology components that focus on enabling communication, not limiting it. That means that they are designed to support broad requirements from many kinds of users, not constrain them, and that they’re composed into layers which are distinct and separate. So trying to use open protocols to regulate the behaviour of Internet users is often like trying to pin spaghetti to the wall.</p>

<p>Consider, for example, the UK’s attempts to regulate user behaviour by regulating lower-layer general-purpose technologies like <abbr title="Domain Name System">DNS</abbr> resolvers. Yes, they can make it more difficult for those using common technology to do certain things, but actually stopping such behaviour is very hard, due to the flexible, layered nature of the Internet; determined people can do the work and use alternative <abbr title="Domain Name System">DNS</abbr> servers, encrypted <abbr title="Domain Name System">DNS</abbr>, <abbr title="Virtual Private Networks">VPNs</abbr>, and other technologies to work around filters. This is considered a feature of a global communications architecture, not a bug.</p>

<p>That’s not to say that all Internet regulation is a fool’s errand. The EU’s Digital Markets Act is targeting a few well-identified entities who have (very successfully) built closed ecosystems on top of the open Internet. At least from the perspective of Internet openness, that isn’t problematic (and indeed might result in more openness).</p>

<p>On the other hand, the Australian eSafety Regulator’s effort to improve online safety – itself a goal not at odds with Internet openness – falls on its face by <a href="https://www.mnot.net/blog/2022/09/11/esafety-industry-codes">applying its regulatory mechanisms to <em>all</em> actors on the Internet</a>, not just a targeted few. This is an extension of the “Facebook is the Internet” mindset – acting as if the entire Internet is defined by a handful of big tech companies. Not only does that create significant injustice and extensive collateral damage, it also creates the conditions for making that outcome more likely (surely a competition concern). While these closed systems might be the most legible part of the Internet to regulators, they shouldn’t be mistaken for the Internet itself.</p>

<p>Similarly, blanket requirements to expose encrypted messages have the effect of ‘chasing’ criminals to alternative services, making their activity even less legible to authorities and severely impacting the security and rights of law-abiding citizens in the process. That’s because there is no magical list of all of the applications that use encryption on the Internet: instead, regulators end up playing whack-a-mole. Cryptography relies on mathematical concepts realised in open protocols; treating encryption as a switch that companies can simply turn off misses the point.</p>

<p>None of this is new or unique to the Internet; cross-border institutions are by nature open systems, and these issues come up often in discussions of global public goods (whether it is oceans, the climate, or the Internet). They thrive under governance that focuses on collaboration, diversity, and collective decision-making. For those who are used to top-down, hierarchical styles of governance, this can be jarring, but it produces systems that are far more resilient and less vulnerable to capture.</p>

<h3 id="why-the-internet-must-stay-open">Why the Internet Must Stay Open</h3>

<p>If you’ve read this far, you might wonder why we bother: if openness brings so many complications, why not just change the Internet so that it’s a simpler, closed system that is easier to design and manage?  Certainly, it’s <em>possible</em> for large, world-spanning systems to be closed. For example, both the international postal and telephony systems are effectively closed (although the latter has opened up a bit). They are reliable and successful (for some definition of success).</p>

<p>I’d argue that those examples are both highly constrained and well-defined; the services they provide don’t change much, and for the most part new participants are introduced only on one ‘side’ – new end users. Keeping these networks going requires considerable overhead and resources from governments around the world, both internally and at the international coordination layer.</p>

<p>The Internet (in a broader definition) is not nearly so constrained, and the bulk of its value is defined by the ability to introduce new participants of all kinds (not just users) <em>without</em> permission or overhead. This isn’t just a philosophical preference; it’s embedded in the architecture itself via the <a href="https://en.wikipedia.org/wiki/End-to-end_principle">end-to-end principle</a>. Governing major aspects of the Internet by international treaty is simply unworkable, and if the outcome of that agreement is to limit the ability of new services or participants to be introduced (e.g., “no new search engines without permission”), it’s going to have a material effect on the benefits that humanity has come to expect from the Internet. In many ways, it’s just another pathway to <a href="https://www.rfc-editor.org/rfc/rfc9518.html">centralization</a>.</p>

<p>Again, all of this is not to say that closed systems on <em>top</em> of the Internet shouldn’t be regulated – just that it needs to be done in a way that’s mindful of the open nature of the Internet itself. The guiding principle is clear: regulate the endpoints (applications, hosts, and specific commercial entities), not the transit mechanisms (the protocols and infrastructure). From what’s happened so far, it looks like many governments understand that, but some are still learning.</p>

<p>Likewise, the many harms associated with the Internet need both technical and regulatory solutions; botnets, <abbr title="Distributed Denial of Service Attack">DDoS</abbr>, online abuse, “cybercrime” and much more can’t be ignored. However, solutions to these issues must respect the open nature of the Internet; even though their impact on society is heavy, the collective benefits of openness – both social and economic – <em>still</em> outweigh them; low barriers to entry ensure global market access, drive innovation, and prevent infrastructure monopolies from stifling competition.</p>

<p>Those points acknowledged, I and many others are concerned that regulating ‘big tech’ companies may have the unintended side effect of ossifying their power – that is, blessing their place in the ecosystem and making it harder for more open systems to displace them. This concentration of power isn’t an accident; commercial entities have a strong economic incentive to build proprietary walled gardens on top of open protocols to extract rent. For example, we’d much rather see global commerce based upon open protocols, well-thought-out legal protections, and cooperation, rather than overseen (and exploited) by the Amazon/eBay/Temu/etc. gang.</p>

<p>Of course, some jurisdictions can and will try to force certain aspects of the Internet to be closed, from their perspective. They may succeed in achieving their local goals, but such systems won’t offer the same properties as the Internet. Closed systems can be bought, coerced, lobbied into compliance, or simply fail: their hierarchical nature makes them vulnerable to failures of leadership. The Internet’s openness makes it harder to maintain and govern, but also makes it far more resilient and resistant to capture.</p>

<p>Openness is what makes the Internet the Internet. It needs to be actively pursued if we want the Internet to continue providing the value that society has come to depend upon from it.</p>

<p><em>Thanks to <a href="https://www.komaitis.org">Konstantinos Komaitis</a> for his suggestions.</em></p>

<div class="footnotes" role="doc-endnotes">
  <ol>
    <li id="fn:1">
      <p>Albeit one that is the foundation for a number of very large closed systems. <a href="#fnref:1" class="reversefootnote" role="doc-backlink">&#8617;</a></p>
    </li>
  </ol>
</div>]]>
    </content>
  </entry>

  <entry>
    <title>The Power of &apos;No&apos; in Internet Standards</title>
    <link rel="alternate" type="text/html" href="https://www.mnot.net/blog/2026/02/13/no" />
    <id>https://www.mnot.net/blog/2026/02/13/no</id>
    <updated>2026-02-13T00:00:00Z</updated>
    <author>
        <name>Mark Nottingham</name>
        <uri>https://www.mnot.net/personal/</uri>
    </author>
    <summary>The voluntary nature of Internet standards means that the biggest power move may be to avoid playing the game. Let&apos;s take a look.</summary>
    
	<category term="Tech Regulation" />
    
	<category term="Standards" />
    
	<category term="Web and Internet" />
    
    <content type="html" xml:lang="en" xml:base="https://www.mnot.net/blog/2026/02/13/no">
  	  <![CDATA[<p class="intro">Fairly regularly, I hear someone ask whether a particular company is expressing undue amounts of power in Internet standards, seemingly with the implication that they’re getting away with murder (or at least the Internet governance equivalent).</p>

<p>While it’s not uncommon for powerful entities to try to steer the direction that the work goes in, they don’t have free rein: the <a href="https://www.mnot.net/blog/2024/07/05/open_internet_standards">open nature of Internet standards processes</a> assures that their proposals are subjected to considerable scrutiny from their competitors, technical experts, civil society representatives, and on occasion, governments. Of course there are counterexamples, but in general that’s not something I worry about <em>too</em> much.</p>

<p>The truth is that there is very little power expressed in standards themselves. Instead, it resides in the implementation, deployment, and use of a particular technology, no matter whether it was standardised in a committee or is a <em>de facto</em> standard. Open standards processes provide some useful properties, but they are <strong>not</strong> a guarantee of quality or suitability and there are many standards that have zero impact.</p>

<p>That implication of <a href="https://www.mnot.net/blog/2024/03/13/voluntary">voluntary adoption</a> is why I believe that <strong>the most undiluted expression of power in Internet standards is saying ‘no’</strong> – in particular, when a company declines to participate in or implement a specification, feature, or function. Especially if that company is central to a ‘choke point’ with already embedded power due to adoption of related technologies like an Operating System or Web browser. In the most egregious cases, this is effectively saying ‘we want that to stay proprietary.’</p>

<p>Sometimes the ‘no’ is explicit. I’ve heard an engineer from a Very Big Tech Company publicly declare that their product would not implement a specification, with the very clear implication that the working group shouldn’t bother adopting the spec as a result. That’s using their embedded power to steer the outcome, hard.</p>

<p>Usually though, it’s a lot more subtle. Concerns are raised. Review of a specification is de-prioritised. Maybe a standard is published, but it never gets to implementation. Or maybe the scope of the standard or its implementation is watered down too much to deliver anything actually interoperable or functional.</p>

<p>To be very clear, engineers often have very good reasons for declining to implement something. There are a <em>lot</em> of bad ideas out there, and Internet engineering imposes a lot of constraints on what is possible. Proposals have to run a gamut of technical reviews, architectural considerations, and carefully staked-out fiefdoms to see the light of day. Proponents are often convinced of the value of their contributions, only to find that they fail to get traction for reasons that can be hard to understand. The number of people who understand the nuances is small: usually, just a handful in any given field.</p>

<p>But when the ‘no’ comes about because it doesn’t suit the agendas of powerful parties, something is wrong. Even people who want to see a better Internet reduce their expectations, because they lose faith in the possibility of success.</p>

<h3 id="a-failure-of-ambition">A Failure of Ambition</h3>
<p>To me, the evidence of this phenomenon is clearest in how little ambition we’re seeing from the Web. The Web should be a constantly rising sea of commoditised technology, cherry-picking successful proprietary applications – marketplaces like Amazon and eBay, social networks like LinkedIn and Facebook, chat on WhatsApp and iMessage, search on Google, and so on – and reinventing them as public-good-oriented features without a centralised owner. Robin Berjon dives into this view of the Web in <a href="https://berjon.com/bigger-browser/">You’re Going to Need a Bigger Browser</a>.<sup id="fnref:1"><a href="#fn:1" class="footnote" rel="footnote" role="doc-noteref">1</a></sup></p>

<p>Instead, most current Web standards activity focuses on incremental, small features: tweaking around the edges and creating new ‘low level’ APIs that proprietary things can be built upon. This approach was codified a while back in the ‘<a href="https://github.com/extensibleweb/manifesto">Extensible Web Manifesto</a>’, which was intended to let the community focus its resources and let a ‘thousand flowers bloom’, but the effect has been to allow silo after silo to be built upon the Web, solidifying its role as the greatest centralisation technology ever.</p>

<p>There are small signs of life. Recent features like Web Payments, federated identity and the various (somewhat) decentralised social networking protocols show promise for extending the platform in important ways, but they’re exceptional, not the rule.</p>

<h3 id="creating-upward-pressure">Creating Upward Pressure</h3>
<p>How then, can we create higher-level capabilities that serve society but aren’t proprietary?</p>

<p>Remember that <a href="https://www.mnot.net/blog/2024/03/13/voluntary">the voluntary nature of Internet standards</a> is a feature – it allows us to fail, using the marketplace as a proving ground. Forcing tech companies to implement well-intentioned specifications that aren’t informed by experience is a recipe for broken, bad tech. Likewise, ‘standardising harder’ isn’t going to create better outcomes: the real influence of standards lies in their implementation and adoption.</p>

<p>What matters is not writing specifications, it’s getting to a place where it’s not possible for private concerns to express inappropriate power over the Internet. Or as Robin <a href="https://berjon.com/digital-sovereignty/">articulates</a>: “What matters is who has the structural power to deploy the standards they want to see and avoid those they dislike.” To me, that suggests a few areas where progress can be made:</p>

<p class="hero">First, we should remember that the market is the primary force shaping companies’ behaviour right now. It used to be that paid services like Proton were <a href="https://balkaninsight.com/2025/04/01/taking-aim-at-big-tech-proton-ceo-warns-democracy-depends-on-privacy/">mocked for competing with free Google services</a>. Now they’re viable because people realised the users are the product. If we want privacy-respecting, decentralised solutions and are willing to pay for them, that changes the incentives for companies, big and small. However, the solutions need to be bigger than any one company.</p>

<p class="hero">Second, where the market fails, competition regulators can and should step in. They’ve been increasingly active recently, but I’d like to see them go further: to provide <strong>stronger guidelines for open standards processes</strong>, and to give companies stronger incentives to participate and adopt open standards, such as a <strong>presumption that adopting a specification that goes through a high-quality process is not anticompetitive</strong>. Doing so would create natural pressure for companies to be interoperable (reducing those choke points) while also being more subject to public and expert review.</p>

<p class="hero">Third, private corporations are not the only source of innovation in the world. In fact, there are <a href="https://www.hbs.edu/faculty/Pages/item.aspx?num=36972">great arguments</a> that open collaboration is a much deeper source of innovation in the modern economy. My interest turns towards the possibilities of public sponsorship for development of the next generation of Internet technology: what’s now being called <strong>Digital Public Infrastructure</strong>. There are many challenging issues in this area – especially regarding governance and, frankly, viability – but if the needle can be threaded and the right model found, the benefits to the people who use the Internet could be massive.</p>

<div class="footnotes" role="doc-endnotes">
  <ol>
    <li id="fn:1">
      <p>Yes, as discussed before there are <a href="https://www.mnot.net/blog/2024/11/29/platforms">things that are harder to do without a single-company chokepoint</a>, but that shouldn’t preclude <em>trying</em>. <a href="#fnref:1" class="reversefootnote" role="doc-backlink">&#8617;</a></p>
    </li>
  </ol>
</div>]]>
    </content>
  </entry>

  <entry>
    <title>Some Thoughts on the Open Web</title>
    <link rel="alternate" type="text/html" href="https://www.mnot.net/blog/2026/01/20/open_web" />
    <id>https://www.mnot.net/blog/2026/01/20/open_web</id>
    <updated>2026-01-20T00:00:00Z</updated>
    <author>
        <name>Mark Nottingham</name>
        <uri>https://www.mnot.net/personal/</uri>
    </author>
    <summary>The Open Web means several things to different people, depending on context, but recently discussions have focused on the Web&apos;s Openness in terms of access to information -- how easy it is to publish and obtain information without barriers there.</summary>
    
	<category term="Web and Internet" />
    
    <content type="html" xml:lang="en" xml:base="https://www.mnot.net/blog/2026/01/20/open_web">
  	  <![CDATA[<p class="intro">“The Open Web” means several things to different people, depending on context, but recently discussions have focused on the Web’s Openness in terms of <strong>access to information</strong> -- how easy it is to publish and obtain information without barriers there.</p>

<p>David Schinazi and I hosted a pair of ad hoc sessions on this topic at the last IETF meeting in Montreal and the subsequent W3C Technical Plenary in Kobe; you can see the <a href="https://docs.google.com/document/d/1WaXDfwPP6olY-UVQxDZKNkUyqvmHt-u4kREJW4ys6ms/edit?usp=sharing">notes and summaries from those sessions</a>.  This post contains my thoughts on the topic so far, after some simmering.</p>

<h3 id="the-open-web-is-amazing">The Open Web is Amazing</h3>

<p>For most of human history, it’s been difficult to access information. As an average citizen, you had to work pretty hard to access academic texts, historical writings, literature, news, public information, and so on. Libraries were an amazing innovation, but locating and working with the information there was still a formidable challenge.</p>

<p>Likewise, publishing information for broad consumption required resources and relationships that were unavailable to most people. Gutenberg famously broke down some of those barriers, but many still remained: publishing and distributing books (or articles, music, art, films) required navigating extensive industries of gatekeepers, and often insurmountable costs and delays.</p>

<p>Tim Berners-Lee’s invention cut through all of that; it was now possible to communicate with the whole world at very low cost and almost instantaneously. Various media industries were disrupted (but not completely displaced) by this innovation, and reinterpreted roles for intermediaries (e.g., search engines for librarians, online marketplaces for ‘brick and mortar’ shops) were created.</p>

<p>Critically, a norm was also created: an expectation that content was easy to access and didn’t require paying or logging in. This was not enforced, and it was not always honoured: there were still subscription sites, and that’s OK, but they didn’t see the massive network effects that hyperlinks and browsers brought.</p>

<p>It is hard to overstate the benefits of this norm. Farmers in developing countries now have easy access to guidelines and data that help their crops succeed. Students around the world have access to resources that were unimaginable even a few decades ago. They can also contribute to that global commons of content, benefiting others as they build a reputation for themselves.</p>

<p>The Open Web is an amazing public good, both for those who consume information and those who produce it. By reducing costs and friction on both sides, it allows people all over the world to access and create information in a way -- and with an ease -- that would have been unimaginable to our predecessors. It’s worth fighting for.</p>

<h3 id="people-have-different-motivations-for-opening-content">People Have Different Motivations for Opening Content</h3>

<p>We talk about “The Open Web” in the singular, but in fact there are many motivations for making content available freely online.</p>

<p>Some people consciously make their content freely available on the Web because they want to contribute to the global commons, to help realise all of the benefits described above.</p>

<p>Many don’t, however.</p>

<p>Others do it because they want to be discovered and build a reputation. Or because they want to build human connections. Or because they want revenue from putting ads next to the content. Or because they want people to try their content out and then subscribe to it on the less-than-open Web.</p>

<p>Most commonly, it’s a blend of many (or even all) of these motivations.</p>

<p>Discussions of the Open Web need to consider all of them distinctly -- what about their environments is changing, and what might encourage or discourage different kinds of Open Web publishers. Focusing on only some motivations, or creating “purity tests” for content, isn’t helpful.</p>

<h3 id="there-are-many-degrees-of-open">There are Many Degrees of “Open”</h3>

<p>Likewise, there are many degrees of “open.” While some Open Web content doesn’t come with any strings, much of it does. You might have to allow tracking for ads. While an article might be available to search engines (to drive traffic), you might have to register for an account to view the content as an individual.</p>

<p>There are serious privacy considerations associated with both of these, but those concerns should be considered as distinct from those regarding open access to information. People sometimes need to get a library card to access information at their local library (in person or online), but that doesn’t make the information less open.</p>

<p class="callout">One of the most interesting assertions at the meetings we held was about advertising-supported content: that it was <em>more</em> equitable than “micro-transactions” and similar pay-to-view approaches, because it makes content available to those who would otherwise not be able to afford it.</p>

<p>At the same time, these ‘small’ barriers – for example, requirements to log in after reading three articles – add up, reducing the openness of the content. If the new norm is that everyone has to log in everywhere to get Web content (and we may be well on our way to that), the Open Web suffers.</p>

<p>Similarly, some open content is free to all comers and can be reused at will, while other content has technical barriers (such as bot blockers or other selective access schemes) and/or legal barriers (namely, copyright restrictions).</p>

<h3 id="it-has-to-be-voluntary">It Has to be Voluntary</h3>

<p>Everyone who publishes on the Open Web does so because they want to – because the benefits they realise (see above) outweigh any downsides.</p>

<p>Conversely, any content that isn’t on the Open Web is absent because its owner has judged that opening it up is not worthwhile for them. They cannot be forced to “open up” that content -- they can only be encouraged.</p>

<p>Affordances and changes in infrastructure, platforms, and other aspects of the ecosystem -- sometimes realised in technical standards, sometimes not -- might change that incentive structure and create the conditions for more or less content on the Open Web. They cannot, however, be forced or mandated.</p>

<p>To me, this means that attempts to coerce different parties into desired behaviours are unlikely to succeed – they have to <em>want</em> to provide their content. That includes strategies like withholding capabilities from them; they’ll just go elsewhere to obtain them, or put their content behind a paywall.</p>

<h3 id="its-changing-rapidly">It’s Changing Rapidly</h3>

<p>We’re talking about the Open Web now because of the introduction of AI -- a massive disruption to the incentives of many content creators and publishers, because AI both leverages their content (through scraping for training) and competes with it (because it is generative).</p>

<p>For those who opened up their content because they wanted to establish reputation and build connectivity, this feels exploitative. They made their content available to benefit people, and it turns out that it’s benefiting large corporations who claim to be helping humanity but have failed to convince many.</p>

<p>For those who want to sell ads next to their content or entice people to subscribe, this feels like betrayal. Search engines built an ecosystem that benefited publishers and the platforms, but publishers see those same platforms as continually taking more value from the relationship – as seen in efforts to force intermediation like AMP, and now AI, where sites get drastically reduced traffic in exchange for nothing at all.</p>

<p>And so people are blocking bots, putting up paywalls, changing business models, and yanking their content off the Open Web. The commons is suffering because technology (which always makes <em>something</em> easier) now makes content creation <em>and</em> consumption easier, so long as you trust your local AI vendor.</p>

<p>This change is unevenly distributed. There are still people happily publishing open content in formats like RSS, which doesn’t facilitate tracking or targeting, and is wide open to scraping and reuse. That said, there are large swathes of content that are disappearing from the Open Web because it’s no longer viable for the publisher; the balance of incentives for them has changed.</p>

<h3 id="open-is-not-free-to-provide">Open is Not Free to Provide</h3>

<p>Information may be a non-rivalrous good, but that doesn’t mean it’s free to provide. The people who produce it need to support themselves.</p>

<p>That doesn’t mean that their interests dominate all others, nor that the structures that have evolved are the best (or even a good) way to assure that they can do so; these are topics better suited for copyright discussions (where there is a very long history of such considerations being debated).</p>

<p>Furthermore, on a technical level, serving content to anyone who asks for it on a global scale might be a commodity service now – and so very inexpensive to do, in some cases – but it’s not free, and the costs add up at scale. These costs – again, alongside the perceived extractive nature of the relationship – are causing some to <a href="https://social.kernel.org/notice/B2JlhcxNTfI8oDVoyO">block or otherwise try to frustrate</a> these uses.</p>

<p>Underlying this factor is an argument about whether it’s legitimate to say you’re on ‘the Open Web’ while selectively blocking clients you don’t like – either because they’re abusive technically (over-crawling), or because you don’t like what they do with the data. My observation here is that however you feel about it, that practice is now very, very widespread – evidence of great demand on the publisher side. If that capability were taken away, I strongly suspect the net result would be very negative for the Open Web.</p>

<h3 id="its-about-control">It’s About Control</h3>

<p>Lurking beneath all of these arguments is a tension between the interests of those who produce and use content. Forgive me for resorting to hyperbole: some content producers want pixel-perfect control not only over how their information is presented but how it is used and who uses it, and some open access advocates want all information to be usable for any purpose any time and anywhere.</p>

<p>Either of these outcomes (hyperbolic as they are) would be bad for the Open Web.</p>

<p>The challenge, then, is finding the right balance – a Web where content producers have incentives to make their content available in a way that can be reused as much as is reasonable. That balance needs to be stable and sustainable, and take into account shocks like the introduction of AI.</p>

<h3 id="a-way-forward">A Way Forward</h3>

<p>Having an Open Web available for humanity is not a guaranteed outcome; we may end up in a future where easily available information is greatly diminished or even absent.</p>

<p>With that and all of the observations above in mind, what’s most apparent to me is that we should focus on finding ways to create and strengthen incentives to publish content that’s open (for some definition of open) – understanding that people might have a variety of motivations for doing so. If environmental factors like AI change their incentives, we need to understand why and address the underlying concerns if possible.</p>

<p>In other words, we have to create an Internet where people <em>want</em> to publish content openly – for some definition of “open.” Doing that may challenge the assumptions we’ve made about the Web as well as what we want “open” to be. What’s worked before may no longer create the incentive structure that leads to the greatest amount of content available to the greatest number of people for the greatest number of purposes.</p>]]>
    </content>
  </entry>

  <entry>
    <title>Principles for Global Online Meetings</title>
    <link rel="alternate" type="text/html" href="https://www.mnot.net/blog/2025/10/26/equitable-meetings" />
    <id>https://www.mnot.net/blog/2025/10/26/equitable-meetings</id>
    <updated>2025-10-26T00:00:00Z</updated>
    <author>
        <name>Mark Nottingham</name>
        <uri>https://www.mnot.net/personal/</uri>
    </author>
    <summary>Some thoughts about how to schedule online meetings for a global organisation in an equitable way.</summary>
    
    <content type="html" xml:lang="en" xml:base="https://www.mnot.net/blog/2025/10/26/equitable-meetings">
  	  <![CDATA[<p class="intro">One of the trickier problems for organisations that aspire to be global is scheduling a series of meetings. While the Internet has brought the ability to meet with colleagues and stakeholders all over the world, it hasn’t been able to get everyone on the same daily tempo – the earth is still not flat.</p>

<p>As someone who has participated in such organisations from Australia for nearly two decades, I’ve formed some fairly strong opinions about how their meetings should be arranged. What follows is an attempt to distill those thoughts into a set of principles that’s flexible enough to apply to a variety of situations.</p>

<p>Keep in mind the intended application is a series of global meetings, not a single one-off event. Also, if everyone who needs to attend a given meeting is in timezones that allow an agreed “good” time, you should use that time – but then I question whether your organisation is really global. For the rest, read on.</p>

<h3 id="0-its-about-equity">0. It’s About Equity</h3>
<p>For global organisations, meeting scheduling is an equity issue. Arranging a meeting where some people can attend from the convenience of their office in normal business hours while others have to stay up into the middle of the night is not equitable – the former have very low friction for attending, while the latter have to disrupt their lives, families, relationships, and sleep cycles to attend.</p>

<p>When a person does make the extra effort to attend at a less-than-ideal hour, they will not be at their best. Being awake outside your normal hours means that you aren’t thinking as clearly and might react more emotionally than otherwise. Interrupting an evening after a long day can impact your focus. Effective participation is difficult under these conditions.</p>

<p>I cast this as an equity issue because I’ve observed that many don’t perceive it that way. This is often the case if someone’s experience is that most meetings are scheduled at reasonable hours, they don’t have to think about it, and people in other parts of the world staying up late or getting up early to talk to them is normal. It’s only when people realise this privilege and challenge what’s normal that progress can be made. If you want a truly global organisation, people need to be able to participate on equal footing, and that means that some people will need to make what looks like – to them – sacrifices, because they’re used to things being a certain way.</p>

<h3 id="1-share-pain-with-rotation">1. Share Pain with Rotation</h3>
<p>With that framing as an equity issue in mind, it becomes clear what must be done: the ‘pain’ of participating needs to be shared in a way that’s equitable. The focus then becomes characterising what pain is, and how to dole it out in a fair way while still holding functional meetings.</p>

<p>The most common method for scheduling a meeting that involves people from all over the globe involves picking “winners” and “losers”. Mary and Joe in North America get a meeting in their daytime; the Europeans have something in their evening, and Asia/Pacific folks have to get up early. Australians fare worst – they’re usually up past midnight, but sometimes get roused at 5am or so, depending on the fluctuations of daylight savings. Often, this will be justified with a poll or survey asking for preferences, but one where all options are reasonable for a privileged set of participants, and most are unreasonable for others.</p>

<p>This is all wrapped up in very logical explanations: it’s the constraints we work within, the locations of the participants narrow down the options, it doesn’t make sense to inconvenience a large number of people for the benefit of a few. Or the kicker: if we scheduled the meeting at that time, the folks who are used to having meetings at good times for them wouldn’t come.</p>

<p>All of those are poor excuses that should be challenged, but often aren’t because this privilege is so deeply embedded.</p>

<p>What can be done? The primary tool for pain-sharing is <strong>rotation</strong>. Schedule meetings in rotating time slots so that everyone has approximately the same number of “good”, “ok”, and “bad” time slots. This is how you put people on even footing.</p>

<p>It may even mean intentionally scheduling in a way that people will miss a slot – e.g., two out of three. In this case, you’ll need to build tools to make sure that information is shared between meetings (you should be keeping minutes, tracking action items, and creating summaries anyway!), that decisions don’t happen in any one meeting, and that people have a chance to see a variety of people, not just the same subset every time.</p>

<p>For example, imagine an organisation that needs to meet weekly, and has three members in different parts of Europe, five across North America, two in China, and one each in Australia and India. If they rotate between three time slots for their meetings, they might end up with:</p>

<ul>
  <li>UTC: 02:00 / 11:00 / 17:00</li>
  <li>Australia/Eastern: 12:00 / 21:00 / 03:00 (+1d)</li>
  <li>China/Shanghai: 10:00 / 19:00 / 01:00 (+1d)</li>
  <li>US/Eastern: 22:00 (-1d) / 07:00 / 13:00</li>
  <li>Europe/Central: 04:00 / 13:00 / 19:00</li>
  <li>India/Mumbai: 07:30 / 16:30 / 22:30</li>
</ul>

<p>Notice that everyone has approximately one “good” slot, one “ok” slot, and one “bad” slot – depending on each individual’s preferences, of course.</p>
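<p>As a rough sketch of how such a rotation works out, the table above can be computed mechanically with Python’s <code>zoneinfo</code>. The zone names and reference date below are illustrative assumptions, not part of the original example; because of daylight savings, the mapping drifts through the year, so it is worth recomputing per meeting rather than caching the local times.</p>

```python
from datetime import datetime, timezone
from zoneinfo import ZoneInfo


def local_slots(slots_utc, zone, ref):
    """Map 'HH:MM' UTC slots to local wall-clock times in `zone`,
    tagging any day shift, for the day of `ref` (an aware UTC datetime)."""
    out = []
    for slot in slots_utc:
        h, m = map(int, slot.split(":"))
        utc_dt = ref.replace(hour=h, minute=m)
        local = utc_dt.astimezone(ZoneInfo(zone))
        shift = (local.date() - utc_dt.date()).days  # -1, 0, or +1
        tag = {-1: " (-1d)", 0: "", 1: " (+1d)"}[shift]
        out.append(local.strftime("%H:%M") + tag)
    return out


# A July date: AEST (+10) and CEST (+2) are both in effect then.
ref = datetime(2025, 7, 7, tzinfo=timezone.utc)
for zone in ("Australia/Sydney", "Asia/Shanghai", "America/New_York",
             "Europe/Berlin", "Asia/Kolkata"):
    print(zone, "->", " / ".join(local_slots(["02:00", "11:00", "17:00"], zone, ref)))
```

<p>Running this around a daylight-savings transition in either hemisphere shows the local slots moving by an hour (or two, relative to the other hemisphere), which is exactly the kind of change worth re-checking against people’s stated preferences.</p>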

<p>One objection I’ve heard to this approach is that it would lead to a state where most of the people go to just one or two of the meetings, and the others are poorly attended. That kind of fragmentation is certainly possible, but in my opinion it says more about the diversity of your organisation and the commitment of the people attending the meeting – both factors that should be separately addressed, not loaded onto some of the participants as meeting pain. Doing so is saying that some people won’t attend if they’re exposed to the conditions that they ask of others.</p>

<h3 id="2-pain-is-individual">2. Pain is Individual</h3>
<p>A common approach to scheduling weighs decisions by how many people are in each timezone. For example, if you’ve got ten people in North America, three in Europe, and one in Asia, you should obviously arrange things to inconvenience the fewest number of people, right?</p>

<p>The problem is, each of those people experiences the pain individually – it is not a collective phenomenon. The person in Asia doesn’t experience 1/14th of the pain if they need to get up at 4:30am for a call. Making things slightly inconvenient for the North Americans doesn’t magnify the pain they experience times ten.</p>

<p>So, don’t weigh your decisions by how many people are in a particular timezone or region. Of course there are limits to this principle – if it’s 100:1 you need to be able to function as a group (e.g., be quorate). But again, I’m questioning whether you’re really a global organisation here; you’re effectively gaslighting the people who are trying to participate from elsewhere by calling yourself one.</p>

<h3 id="3-pain-is-specific">3. Pain is Specific</h3>
<p>It’s easy to fall into the trap of assuming that everyone’s circumstances are the same – that if a 7am meeting is painful for you, it’s equally painful for someone else.</p>

<p>In reality, some people are morning people, while others don’t mind staying up until 2am. Some people might have a family dinner every Thursday night that would be disrupted by your meeting, while others are happy to use that time because that’s when they have the house to themselves.</p>

<p>This means you need to ask what people’s preferences and conflicts are, rather than (for example) assume that 7am-9am is ok, 9am-5pm is good, 5pm-10pm is ok, and everything else is bad. The mechanics of how that information is gathered depends upon the nature of your group, but it needs to be sensitive to privacy and resistant to gaming.</p>

<h3 id="4-pain-is-relative">4. Pain is Relative</h3>
<p>One of the complications of scheduling meetings across timezones is balancing the various kinds of conflicts and inconveniences that they bring up for a proposed time slot. John has to pick up the kids in that timeslot; Hiro is eating breakfast. Marissa needs to have dinner with her family. And Mark just wants a good night’s sleep for once.</p>

<p>I propose a hierarchy of inconvenience and pain, from most to least impactful:</p>

<ol>
  <li>Rearranging your life - changing your sleep schedule, working on weekends (remember, Friday in North America is Saturday in other parts of the world)</li>
  <li>Rearranging family life - shifting meals, changing child or elderly care arrangements</li>
  <li>Moving other meetings - managing conflicts with other professional commitments</li>
</ol>

<p>When asking for conflicts for a given time slot, the higher items should always override the lower forms of pain. I’m sure this could be elaborated upon and extended, but it’s a good starting point.</p>

<p>I sometimes also hear about another kind of pain: that rotating meetings makes it hard for some people to keep their calendars. To me, this isn’t #4; it’s #100.</p>

<h3 id="5-circumstances-change">5. Circumstances Change</h3>
<p>People aren’t static. Their lives change, their families change, their health changes. If your meetings are scheduled over long periods of time, that means you need to be responsive to these changes, periodically checking to see if their preferences need updating.</p>

<p>I used to be a night person. I’d be up until at least midnight, sometimes two or three, and mornings would be a real struggle. However, as I’ve gotten older, I’m finding that many mornings I wake naturally at five or so, and I’m ready to sleep at around 10pm unless I’m out of the house. That change has fundamentally affected how I attend meetings.</p>

<p>And, of course, if you have participants in the Southern hemisphere (and you should!), you have to account for daylight savings shifting in opposite directions, because the seasons are reversed. It’s not just a one-hour shift – it can be two, and that can make a big difference to someone’s quality of life.</p>

<h3 id="6-respect-peoples-time">6. Respect People’s Time</h3>
<p>Appreciate that what’s just another meeting in the middle of your workday is a huge effort in the middle of the night for someone else; don’t fritter away a substantial portion on chitchat. Have an agenda and be prepared to make the meeting valuable. Use offline, asynchronous tools when they’re more appropriate.</p>

<p>Likewise, don’t cancel or re-schedule a meeting at the last minute (or even last day). Setting an alarm for an early meeting and struggling through getting presentable and caffeinated only to find it’s been axed is distinctly unpleasant.</p>]]>
    </content>
  </entry>

  <entry>
    <title>Bridging the Gap Between Standards and Policy</title>
    <link rel="alternate" type="text/html" href="https://www.mnot.net/blog/2025/09/20/configuration" />
    <id>https://www.mnot.net/blog/2025/09/20/configuration</id>
    <updated>2025-09-20T00:00:00Z</updated>
    <author>
        <name>Mark Nottingham</name>
        <uri>https://www.mnot.net/personal/</uri>
    </author>
    <summary>Achieving policymakers&apos; goals in coordination with Internet standards activity can be difficult. This post explores some of the options and considerations involved.</summary>
    
	<category term="Tech Regulation" />
    
	<category term="Standards" />
    
	<category term="Web and Internet" />
    
    <content type="html" xml:lang="en" xml:base="https://www.mnot.net/blog/2025/09/20/configuration">
  	  <![CDATA[<p>Internet standards bodies like the IETF and W3C are places where experts can come to agreement about the details of how technology should work. These communities have the deep experience that allows them to guide the evolution of the Internet towards common goals.</p>

<p>Policymakers have none of that technical expertise, but are the legitimate source of policy decisions in any functioning society. They don’t have the means to develop new technical proposals: while most countries have a national standards body, their products are a poor fit for a global Internet, and those bodies generally lack specific expertise.</p>

<p>So, it might seem logical for policymakers to turn to Internet standards bodies to develop the technical solutions for their policy goals, trusting the open process and community involvement to produce a good solution. Unfortunately, doing so can create problems that will cause such efforts to fail.</p>

<h3 id="whats-the-problem">What’s the Problem?</h3>

<p>A few different issues often become apparent when policymakers pre-emptively specify a standard.</p>

<p>First, as discussed previously, the <a href="https://www.mnot.net/blog/2024/03/13/voluntary">voluntary nature of Internet standards</a> acts as a proving function for them: if implementers don’t implement or users don’t use, the standard doesn’t matter. If a legal mandate to use a particular standard precedes that proof of viability, it distorts the incentives for participation in the process, because the power relationships between participants have changed – it’s no longer voluntary for the targets of the regulation, and the tone of the effort shifts from being <a href="https://www.mnot.net/blog/2024/07/16/collaborative_standards">collaborative</a> to competitive.</p>

<p>Second, Internet standards are created by <a href="https://www.mnot.net/blog/2024/05/24/consensus">consensus</a>. That approach to decision making is productive when there is reasonable alignment between participants’ motives, but it’s not well suited to handling fundamental conflicts about societal values. That’s because while technical experts might be good at weighing technical arguments and generally adhering to widely agreed-to principles (whether they be regarding Internet architecture or human rights), it’s much more difficult for them to adjudicate direct conflict between values outside their areas of expertise. In these circumstances, the outcome is often simply a lack of consensus.</p>

<p>Third, jurisdictions often have differences in their policy goals, but the Internet is global, and so are its standards bodies, who want the Internet to be interoperable regardless of borders. If policy goals aren’t widely shared and aligned between countries, it becomes even more difficult to come to consensus.</p>

<p>Fourth, making decisions with societal impact in a technical expert body raises fundamental legitimacy issues. That’s not to say that Internet standards can’t or shouldn’t (or don’t) change society in significant ways, but that’s done from the position of private actors coordinating to achieve a common goal through well-understood processes, within the practical boundaries of the commonalities of the applicable legal frameworks. It’s entirely different for a contentious policy decision to be delegated by policymakers to a non-representative technical body.</p>

<p>So, what’s a policymaker to do?</p>

<h3 id="patience-is-a-virtue">Patience is a Virtue</h3>

<p>One widely repeated recommendation for policymakers is to avoid specifying the work or even a venue for it in regulation or legislation until <em>after</em> it’s been created and its viability is proven by some amount of market adoption. Instead, the policymaker should just hint that an industry standard that serves a particular policy goal would be useful.</p>

<p>However, this approach comes with a few caveats:</p>
<ul>
  <li>A set of proponents that drives the standards work has to emerge, and they need to be at least somewhat aligned with the policy goal</li>
  <li>Consensus-based technical standards are slow, so policymakers have to have realistic expectations about the timeline</li>
  <li>If the targets of the regulation don’t participate in the standards process, they may be able to reasonably claim that what results can’t be implemented by them</li>
</ul>

<p>These issues aren’t impossible to address: they just require good communication, alignment of incentives, management of expectations, and careful diligence.</p>

<h3 id="add-a-configuration-layer">Add a Configuration Layer</h3>

<p>Even if the policymaker waits for the outcome of the standards process, it’s rare for the policy decisions to be cleanly separable from the technology that needed to be created. Choices need to be made about how the technology is used and how it maps to the policy goals of a specific jurisdiction.</p>

<p>One intriguing way to manage that gap is to span it with a new entity – one that creates neither technical specifications nor policy goals, but instead is explicitly constituted to define how to meet the stated policy goals using already available technology. That leaves policy formation in the hands of policymakers and technical design in the hands of technologists.</p>

<p>In technology terms, this is a configuration layer: clearly and cleanly separating the concerns of how the technology is designed from how it is used. It still requires the technology to exist and have the appropriate configuration “interfaces”, but promises to take a large part of the policy pressure off of the standards process.</p>

<p>The European Commission is just now starting an example of this approach. At IETF 123, they explained a proposal for a <a href="https://www.iepg.org/2025-07-20-ietf123/slides-123-iepg-sessa-multi-stakeholder-forum-on-internet-standards-deployment-00.pdf">Multi-stakeholder Forum on Internet Standards Deployment</a> that fills the gap between the definition of Internet security mechanisms and the policy intent of making European networks more secure. Policymakers have no desire to refer to specific RFCs in legislation, and Internet technologists don’t want to define regulatory requirements for Europe, so the idea is that this third entity will make those decisions without defining new technology <em>or</em> policy intent.</p>

<p>Getting this right requires the new forum to be constituted in a particular way. It has to be constrained by the policymaker’s intent, and can’t define new technology. That means that the technology has to be amenable to configuration – the relevant options need to be available. The logical host for the discussion is a venue controlled by the policymaker, but it needs to be open to broad participation (including online and asynchronous participation) so that the relevant experts can participate. Transparency will be key, and I suspect that the decision making policy will be critical to get right – ideally something close to a consensus model, but the policymaker may need to reserve the right to overrule objections or handle appeals.</p>

<p>Needless to say, I’m excited to see how this forum will work out. If successful, it’s a pattern that could be useful elsewhere.</p>]]>
    </content>
  </entry>

</feed>
Raw headers
{
  "cache-control": "max-age=43200",
  "cf-cache-status": "DYNAMIC",
  "cf-ray": "9dc5cde92cf45751-CMH",
  "content-language": "en",
  "content-length": "60442",
  "content-type": "application/atom+xml",
  "date": "Sat, 14 Mar 2026 19:49:54 GMT",
  "etag": "\"ec1a-64c549f425f20\"",
  "last-modified": "Fri, 06 Mar 2026 05:49:53 GMT",
  "server": "cloudflare",
  "strict-transport-security": "max-age=15552000"
}
Parsed with @rowanmanning/feed-parser
{
  "meta": {
    "type": "atom",
    "version": "1.0"
  },
  "language": null,
  "title": "Mark Nottingham",
  "description": null,
  "copyright": null,
  "url": "https://www.mnot.net/blog/",
  "self": "https://www.mnot.net/blog/index.atom",
  "published": null,
  "updated": "2026-03-06T05:49:48.000Z",
  "generator": null,
  "image": null,
  "authors": [],
  "categories": [],
  "items": [
    {
      "id": "https://www.mnot.net/blog/2026/02/20/open_systems",
      "title": "The Internet Isn’t Facebook: How Openness Changes Everything",
      "description": "Openness makes the Internet harder to govern — but also makes it resilient, innovative, and difficult to capture. Let's look at how the openness of the Internet both defines it and ensures its success.",
      "url": "https://www.mnot.net/blog/2026/02/20/open_systems",
      "published": null,
      "updated": "2026-02-20T00:00:00.000Z",
      "content": "<p class=\"intro\">“Open” tends to get thrown around a lot when talking about the Internet: Open Source, <a href=\"https://www.mnot.net/blog/2024/07/05/open_internet_standards\">Open Standards</a>, Open APIs. However, one of the most important senses of the Internet’s openness doesn’t get discussed as much: its openness <em>as a system</em>. It turns out this has profound effects on both the Internet’s design and how it might be regulated.</p>\n\n<p>This critical aspect of the Internet’s architecture needs to be understood more now than ever. For many, digital sovereignty is top-of-mind in the geopolitics of 2026, but some conceptions of it treat openness as a bug, not a feature. The other hot topic – regulation to address legitimately-perceived harms on the Internet – can put both policy goals and the value we get from the Internet at risk if it’s undertaken in a way that doesn’t account for the openness of the Internet. Properly utilised, though, the power of openness can actually help democracies contribute to the Internet (and other technologies like AI) in a constructive way that reinforces their shared values.</p>\n\n<h3 id=\"open-and-shut\">Open and Shut</h3>\n\n<p>Most often, people think and work within <em>closed systems</em> – those whose boundaries are fixed, where internal processes can be isolated from external forces, and where power is concentrated hierarchically. That single scope can still embed considerable complexity, but the assumptions that its closed nature allows make certain skills, tools, and mindsets advantageous. This simplification helps compartmentalise effects and reduces interactions; it’s easier when you don’t have to deal with things you don’t (and can’t) know, much less control.</p>\n\n<p>Many things we interact with daily are closed – for example, a single company, a project group, or even a legal jurisdiction. 
The Apple App Store, air traffic control, bank clearing systems, and cable television networks are closed; so are many of the emerging AI ecosystems.</p>\n\n<p>The Internet is not like that.</p>\n\n<p>That’s because it’s not possible to know or control all of the actors and forces that influence and interact with the Internet. New applications and networks appear daily, without administrative hoops; often, this is referred to as “<a href=\"https://www.internetsociety.org/blog/2014/04/permissionless-innovation-openness-not-anarchy/\">permissionless innovation</a>,” which allowed things the Web and real-time video to be built on top of the network without asking telecom operators for approval. New protocols and services are constantly proposed, implemented and deployed – sometimes through an <abbr title=\"Standards Developing Organisation\">SDO</abbr> like the <abbr title=\"Internet Engineering Task Force\">IETF</abbr>, but often without any formal coordination.</p>\n\n<p>This is an open system, and it’s important to understand how that openness constrains the nature of what’s possible on the Internet. What works in a closed system falls apart when you try to apply it to the Internet. Openness as a system makes introducing new participants and services very easy – and that’s a huge benefit – but that open nature makes other aspects of managing the ecosystem very different (and sometimes difficult). Let’s look at a few.</p>\n\n<h3 id=\"designing-for-openness\">Designing for Openness</h3>\n\n<p>Designing an Internet service like an online shop is easy if you assume it’s a closed ecosystem with an authority that ‘runs’ the shop. 
Yes, you have to deal with accounts, and payments, and abuse, and all of the other aspects, but the issues are known and can be addressed with the right amount of capital and a set of appropriate professionals.</p>\n\n<p>For example, designing an open trading ecosystem where there is no single authority lurking in the background and making sure everything runs well is an entirely different proposition. You need to consider how all of the components will interact and at the same time assure that none is inappropriately dominated by a single actor or even a small set, unless there are appropriate constraints on their power. You need to make sure that the amount of effort needed to join the system is low, while at the same time fighting the abusive behaviours that leverage that low barrier, such as spam.</p>\n\n<p class=\"callout\">This is why regulatory efforts that are focused on reforming currently closed systems – “opening them up” by compelling them to expose APIs and allow competitors access to their systems – are unlikely to be successful, because those platforms are designed with assumptions that you can’t take for granted when building an open system. I’ve <a href=\"https://www.mnot.net/blog/2024/11/29/platforms\">written previously</a> about Carliss Baldwin’s excellent work in this area, primarily from an economic standpoint. An open system is not just a closed one with a few APIs grafted onto it.</p>\n\n<p>For example, you’re likely to need a reputation system for vendors and users, but it can’t rely on a single authority making judgment calls about how to assign reputation, handle disputes, and so forth. Instead, you’ll want to make it more modular, where different reputation systems can compete. That’s a very different design task, and it is undoubtedly harder to achieve a good outcome.</p>\n\n<p>At the same time, an open system like the Internet needs to be more pessimistic in its assumptions about who is using it. 
While closed systems can take drastic steps like excluding bad actors from them, this is much more difficult (and problematic) in an open system. For example, a closed shopping site will have a definitive list of all of its users (both buyer and seller) and what they have done, so it can ascertain how trustworthy they are based upon that complete view. In an open system, there is no such luxury – each actor only has a partial view of the system.</p>\n\n<h3 id=\"introducing-change-in-open-systems\">Introducing Change in Open Systems</h3>\n\n<p>An operator of a proprietary, closed service like Amazon, Google, or Facebook has a view of its entire state and is able to deploy changes across it, even if they break assumptions its users have previously relied upon. Their privileged position gives them this ability, and even though these services run on top of the Internet, they don’t inherit its openness.</p>\n\n<p>In contrast, an open system like e-mail, federated messaging, or Internet routing is much harder to evolve, because you can’t create a list of who’s implementing or using a protocol with any certainty; you can’t even know all of the <em>ways</em> it’s being used. This makes introducing changes tricky; as is often said in the <abbr title=\"Internet Engineering Task Force\">IETF</abbr>, <strong>you can’t have a protocol ‘flag day’ where everyone changes how they behave at the same time</strong>.  Instead, mechanisms for gradual evolution (extensibility and versioning) need to be carefully built into the protocols themselves.</p>\n\n<p>The Web is another example of an open system.<sup id=\"fnref:1\"><a href=\"#fn:1\" class=\"footnote\" rel=\"footnote\" role=\"doc-noteref\">1</a></sup> No one can enumerate all of the Web servers in the world – there are just too many, some hidden behind firewalls and logins. There are whole social networks and commerce sites that you’ve never heard of in other parts of the world. 
While search engines make us feel like we see the whole Web (and have every incentive to make us believe that), it’s a small fraction of the real thing that misses the so-called ‘deep’ Web. This vastness is why browsers have to be so conservative in introducing changes, and why we have to be so careful when we update the HTTP protocol.</p>\n\n<h3 id=\"governing-open-systems\">Governing Open Systems</h3>\n\n<p>Openness also has significant implications for governance. Command-and-control techniques that work well when governing closed systems are ineffective on an open one, and can often be counterproductive.</p>\n\n<p>At the most basic level, this is because there is no single party to assign responsibility to in an open system – its governance structure is polycentric (i.e., has multiple and often diffuse centres of power). Compounding that effect is the fact that large open systems like the Internet span multiple jurisdictions, so a single jurisdiction is always going to be playing “whack-a-mole” if it tries to enforce compliance on one party. As a result, decisions in open systems tend to take much more time and effort than anticipated if you’re used to dealing with closed, hierarchical systems.</p>\n\n<p>On the Internet, another impact of openness is seen in the tendency to create “building block” technology components that focus on enabling communication, not limiting it. That means that they are designed to support broad requirements from many kinds of users, not constrain them, and that they’re composed into layers which are distinct and separate. So trying to use open protocols to regulate behaviour of Internet users is often like trying to pin spaghetti to the wall.</p>\n\n<p>Consider, for example, the UK’s attempts to regulate user behaviour by regulating lower-layer general-purpose technologies like <abbr title=\"Domain Name System\">DNS</abbr> resolvers. 
Yes, they can make it more difficult for those using common technology to do certain things, but actually stopping such behaviour is very hard, due to the flexible, layered nature of the Internet; determined people can do the work and use alternative <abbr title=\"Domain Name System\">DNS</abbr> servers, encrypted <abbr title=\"Domain Name System\">DNS</abbr>, <abbr title=\"Virtual Private Networks\">VPNs</abbr>, and other technologies to work around filters. This is considered a feature of a global communications architecture, not a bug.</p>\n\n<p>That’s not to say that all Internet regulation is a fool’s errand. The EU’s Digital Markets Act is targeting a few well-identified entities who have (very successfully) built closed ecosystems on top of the open Internet. At least from the perspective of Internet openness, that isn’t problematic (and indeed might result in more openness).</p>\n\n<p>On the other hand, the Australian eSafety Regulator’s effort to improve online safety – itself a goal not at odds with Internet openness – falls on its face by <a href=\"https://www.mnot.net/blog/2022/09/11/esafety-industry-codes\">applying its regulatory mechanisms to <em>all</em> actors on the Internet</a>, not just a targeted few. This is an extension of the “Facebook is the Internet” mindset – acting as if the entire Internet is defined by a handful of big tech companies. Not only does that create significant injustice and extensive collateral damage, it also creates the conditions for making that outcome more likely (surely a competition concern). While these closed systems might be the most legible part of the Internet to regulators, they shouldn’t be mistaken for the Internet itself.</p>\n\n<p>Similarly, blanket requirements to expose encrypted messages have the effect of ‘chasing’ criminals to alternative services, making their activity even less legible to authorities and severely impacting the security and rights of law-abiding citizens in the process. 
That’s because there is no magical list of all of the applications that use encryption on the Internet: instead, regulators end up playing whack-a-mole. Cryptography relies on mathematical concepts realised in open protocols; treating encryption as a switch that companies can simply turn off misses the point.</p>\n\n<p>None of this is new or unique to the Internet; cross-border institutions are by nature open systems, and these issues come up often in discussions of global public goods (whether it is oceans, the climate, or the Internet). They thrive under governance that focuses on collaboration, diversity, and collective decision-making. For those that are used to top-down, hierarchical styles of governance, this can be jarring, but it produces systems that are far more resilient and less vulnerable to capture.</p>\n\n<h3 id=\"why-the-internet-must-stay-open\">Why the Internet Must Stay Open</h3>\n\n<p>If you’ve read this far, you might wonder why we bother: if openness brings so many complications, why not just change the Internet so that it’s a simpler, closed system that is easier to design and manage?  Certainly, it’s <em>possible</em> for large, world-spanning systems to be closed. For example, both the international postal and telephony systems are effectively closed (although the latter has opened up a bit). They are reliable and successful (for some definition of success).</p>\n\n<p>I’d argue that those examples are both highly constrained and well-defined; the services they provide don’t change much, and for the most part new participants are introduced only on one ‘side’ – new end users. 
Keeping these networks going requires considerable overhead and resources from governments around the world, both internally and at the international coordination layer.</p>\n\n<p>The Internet (in a broader definition) is not nearly so constrained, and the bulk of its value is defined by the ability to introduce new participants of all kinds (not just users) <em>without</em> permission or overhead. This isn’t just a philosophical preference; it’s embedded in the architecture itself via the <a href=\"https://en.wikipedia.org/wiki/End-to-end_principle\">end-to-end principle</a>. Governing major aspects of the Internet by international treaty is simply unworkable, and if the outcome of that agreement is to limit the ability of new services or participants to be introduced (e.g., “no new search engines without permission”), it’s going to have a material effect on the benefits that humanity has come to expect from the Internet. In many ways, it’s just another pathway to <a href=\"https://www.rfc-editor.org/rfc/rfc9518.html\">centralization</a>.</p>\n\n<p>Again, all of this is not to say that closed systems on <em>top</em> of the Internet shouldn’t be regulated – just that it needs to be done in a way that’s mindful of the open nature of the Internet itself. The guiding principle is clear: regulate the endpoints (applications, hosts, and specific commercial entities), not the transit mechanisms (the protocols and infrastructure). From what’s happened so far, it looks like many governments understand that, but some are still learning.</p>\n\n<p>Likewise, the many harms associated with the Internet need both technical and regulatory solutions; botnets, <abbr title=\"Distributed Denial of Service Attack\">DDoS</abbr>, online abuse, “cybercrime” and much more can’t be ignored. 
However, solutions to these issues must respect the open nature of the Internet; even though their impact on society is heavy, the collective benefits of openness – both social and economic – <em>still</em> outweigh them; low barriers to entry ensure global market access, drive innovation, and prevent infrastructure monopolies from stifling competition.</p>\n\n<p>Those points acknowledged, I and many others are concerned that regulating ‘big tech’ companies may have the unintended side effect of ossifying their power – that is, blessing their place in the ecosystem and making it harder for more open systems to displace them. This concentration of power isn’t an accident; commercial entities have a strong economic incentive to build proprietary walled gardens on top of open protocols to extract rent. For example, we’d much rather see global commerce based upon open protocols, well-thought-out legal protections, and cooperation, rather than overseen (and exploited) by the Amazon/eBay/Temu/etc. gang.</p>\n\n<p>Of course, some jurisdictions can and will try to force certain aspects of the Internet to be closed, from their perspective. They may succeed in achieving their local goals, but such systems won’t offer the same properties as the Internet. Closed systems can be bought, coerced, lobbied into compliance, or simply fail: their hierarchical nature makes them vulnerable to failures of leadership. The Internet’s openness makes it harder to maintain and govern, but also makes it far more resilient and resistant to capture.</p>\n\n<p>Openness is what makes the Internet the Internet. 
It needs to be actively pursued if we want the Internet to continue providing the value that society has come to depend upon from it.</p>\n\n<p><em>Thanks to <a href=\"https://www.komaitis.org\">Konstantinos Komaitis</a> for his suggestions.</em></p>\n\n<div class=\"footnotes\" role=\"doc-endnotes\">\n  <ol>\n    <li id=\"fn:1\">\n      <p>Albeit one that is the foundation for a number of very large closed systems. <a href=\"#fnref:1\" class=\"reversefootnote\" role=\"doc-backlink\">↩</a></p>\n    </li>\n  </ol>\n</div>",
      "image": null,
      "media": [],
      "authors": [
        {
          "name": "Mark Nottingham",
          "email": null,
          "url": "https://www.mnot.net/personal/"
        }
      ],
      "categories": [
        {
          "label": "Tech Regulation",
          "term": "Tech Regulation",
          "url": null
        },
        {
          "label": "Web and Internet",
          "term": "Web and Internet",
          "url": null
        }
      ]
    },
    {
      "id": "https://www.mnot.net/blog/2026/02/13/no",
      "title": "The Power of 'No' in Internet Standards",
      "description": "The voluntary nature of Internet standards means that the biggest power move may be to avoid playing the game. Let's take a look.",
      "url": "https://www.mnot.net/blog/2026/02/13/no",
      "published": null,
      "updated": "2026-02-13T00:00:00.000Z",
      "content": "<p class=\"intro\">Fairly regularly, I hear someone ask whether a particular company is expressing undue amounts of power in Internet standards, seemingly with the implication that they’re getting away with murder (or at least the Internet governance equivalent).</p>\n\n<p>While it’s not uncommon for powerful entities to try to steer the direction that the work goes in, they don’t have free rein: the <a href=\"https://www.mnot.net/blog/2024/07/05/open_internet_standards\">open nature of Internet standards processes</a> assures that their proposals are subjected to considerable scrutiny from their competitors, technical experts, civil society representatives, and on occasion, governments. Of course there are counterexamples, but in general that’s not something I worry about <em>too</em> much.</p>\n\n<p>The truth is that there is very little power expressed in standards themselves. Instead, it resides in the implementation, deployment, and use of a particular technology, no matter whether it was standardised in a committee or is a <em>de facto</em> standard. Open standards processes provide some useful properties, but they are <strong>not</strong> a guarantee of quality or suitability and there are many standards that have zero impact.</p>\n\n<p>That implication of <a href=\"https://www.mnot.net/blog/2024/03/13/voluntary\">voluntary adoption</a> is why I believe that <strong>the most undiluted expression of power in Internet standards is saying ‘no’</strong> – in particular, when a company declines to participate in or implement a specification, feature, or function. Especially if that company is central to a ‘choke point’ with already embedded power due to adoption of related technologies like an Operating System or Web browser. In the most egregious cases, this is effectively saying ‘we want that to stay proprietary.’</p>\n\n<p>Sometimes the no is explicit. 
I’ve heard an engineer from a Very Big Tech Company publicly declare that their product would not implement a specification, with the very clear implication that the working group shouldn’t bother adopting the spec as a result. That’s using their embedded power to steer the outcome, hard.</p>\n\n<p>Usually though, it’s a lot more subtle. Concerns are raised. Review of a specification is de-prioritised. Maybe a standard is published, but it never gets to implementation. Or maybe the scope of the standard or its implementation is watered down to the point that it fails to deliver something actually interoperable or functional.</p>\n\n<p>To be very clear, engineers often have very good reasons for declining to implement something. There are a <em>lot</em> of bad ideas out there, and Internet engineering imposes a lot of constraints on what is possible. Proposals have to run a gauntlet of technical reviews, architectural considerations, and carefully staked-out fiefdoms to see the light of day. Proponents are often convinced of the value of their contributions, only to find that they fail to get traction for reasons that can be hard to understand. The number of people who understand the nuances is small: usually, just a handful in any given field.</p>\n\n<p>But when the ‘no’ comes about because it doesn’t suit the agendas of powerful parties, something is wrong. Even people who want to see a better Internet reduce their expectations, because they lose faith in the possibility of success.</p>\n\n<h3 id=\"a-failure-of-ambition\">A Failure of Ambition</h3>\n<p>To me, the evidence of this phenomenon is clearest in how little ambition we’re seeing from the Web. 
The Web should be a constantly rising sea of commoditised technology, cherry-picking successful proprietary applications – marketplaces like Amazon and eBay, social networks like LinkedIn and Facebook, chat on WhatsApp and iMessage, search on Google, and so on – and reinventing them as public good oriented features without a centralised owner. Robin Berjon dives into this view of the Web in <a href=\"https://berjon.com/bigger-browser/\">You’re Going to Need a Bigger Browser</a>.<sup id=\"fnref:1\"><a href=\"#fn:1\" class=\"footnote\" rel=\"footnote\" role=\"doc-noteref\">1</a></sup></p>\n\n<p>Instead, most current Web standards activity focuses on incremental, small features: tweaking around the edges and creating new ‘low level’ APIs that proprietary things can be built upon. This approach was codified a while back in the ‘<a href=\"https://github.com/extensibleweb/manifesto\">Extensible Web Manifesto</a>’, which was intended to let the community focus its resources and let a ‘thousand flowers bloom’, but the effect has been to allow silo after silo to be built upon the Web, solidifying its role as the greatest centralisation technology ever.</p>\n\n<p>There are small signs of life. Recent features like Web Payments, federated identity and the various (somewhat) decentralised social networking protocols show promise for extending the platform in important ways, but they’re exceptional, not the rule.</p>\n\n<h3 id=\"creating-upward-pressure\">Creating Upward Pressure</h3>\n<p>How then, can we create higher-level capabilities that serve society but aren’t proprietary?</p>\n\n<p>Remember that <a href=\"https://www.mnot.net/blog/2024/03/13/voluntary\">the voluntary nature of Internet standards</a> is a feature – it allows us to fail by using the marketplace as a proving function. Forcing tech companies to implement well-intentioned specifications that aren’t informed by experience is a recipe for broken, bad tech. 
Likewise, ‘standardising harder’ isn’t going to create better outcomes: the real influence of what standards do is in their implementation and adoption.</p>\n\n<p>What matters is not writing specifications, it’s getting to a place where it’s not possible for private concerns to express inappropriate power over the Internet. Or as Robin <a href=\"https://berjon.com/digital-sovereignty/\">articulates</a>: “What matters is who has the structural power to deploy the standards they want to see and avoid those they dislike.” To me, that suggests a few areas where progress can be made:</p>\n\n<p class=\"hero\">First, we should remember that the market is the primary force shaping companies’ behaviour right now. It used to be that paid services like Proton were <a href=\"https://balkaninsight.com/2025/04/01/taking-aim-at-big-tech-proton-ceo-warns-democracy-depends-on-privacy/\">mocked for competing with free Google services</a>. Now they’re viable because people realised the users are the product. If we want privacy-respecting, decentralised solutions and are willing to pay for them, that changes the incentives for companies, big and small. However, the solutions need to be bigger than any one company.</p>\n\n<p class=\"hero\">Second, where the market fails, competition regulators can and should step in. They’ve been increasingly active recently, but I’d like to see them go further: to provide <strong>stronger guidelines for open standards processes</strong>, and to give companies stronger incentives to participate and adopt open standards, such as a <strong>presumption that adopting a specification that goes through a high-quality process is not anticompetitive</strong>. Doing so would create natural pressure for companies to be interoperable (reducing those choke points) while also being more subject to public and expert review.</p>\n\n<p class=\"hero\">Third, private corporations are not the only source of innovation in the world. 
In fact, there are <a href=\"https://www.hbs.edu/faculty/Pages/item.aspx?num=36972\">great arguments</a> that open collaboration is a much deeper source of innovation in the modern economy. My interest turns towards the possibilities of public sponsorship for development of the next generation of Internet technology: what’s now being called <strong>Digital Public Infrastructure</strong>. There are many challenging issues in this area – especially regarding governance and, frankly, viability – but if the needle can be threaded and the right model found, the benefits to the people who use the Internet could be massive.</p>\n\n<div class=\"footnotes\" role=\"doc-endnotes\">\n  <ol>\n    <li id=\"fn:1\">\n      <p>Yes, as discussed before there are <a href=\"https://www.mnot.net/blog/2024/11/29/platforms\">things that are harder to do without a single-company chokepoint</a>, but that shouldn’t preclude <em>trying</em>. <a href=\"#fnref:1\" class=\"reversefootnote\" role=\"doc-backlink\">↩</a></p>\n    </li>\n  </ol>\n</div>",
      "image": null,
      "media": [],
      "authors": [
        {
          "name": "Mark Nottingham",
          "email": null,
          "url": "https://www.mnot.net/personal/"
        }
      ],
      "categories": [
        {
          "label": "Tech Regulation",
          "term": "Tech Regulation",
          "url": null
        },
        {
          "label": "Standards",
          "term": "Standards",
          "url": null
        },
        {
          "label": "Web and Internet",
          "term": "Web and Internet",
          "url": null
        }
      ]
    },
    {
      "id": "https://www.mnot.net/blog/2026/01/20/open_web",
      "title": "Some Thoughts on the Open Web",
      "description": "The Open Web means several things to different people, depending on context, but recently discussions have focused on the Web's Openness in terms of access to information -- how easy it is to publish and obtain information without barriers there.",
      "url": "https://www.mnot.net/blog/2026/01/20/open_web",
      "published": null,
      "updated": "2026-01-20T00:00:00.000Z",
      "content": "<p class=\"intro\">“The Open Web” means several things to different people, depending on context, but recently discussions have focused on the Web’s Openness in terms of <strong>access to information</strong> -- how easy it is to publish and obtain information without barriers there.</p>\n\n<p>David Schinazi and I hosted a pair of ad hoc sessions on this topic at the last IETF meeting in Montreal and the subsequent W3C Technical Plenary in Kobe; you can see the <a href=\"https://docs.google.com/document/d/1WaXDfwPP6olY-UVQxDZKNkUyqvmHt-u4kREJW4ys6ms/edit?usp=sharing\">notes and summaries from those sessions</a>.  This post contains my thoughts on the topic so far, after some simmering.</p>\n\n<h3 id=\"the-open-web-is-amazing\">The Open Web is Amazing</h3>\n\n<p>For most of human history, it’s been difficult to access information. As an average citizen, you had to work pretty hard to access academic texts, historical writings, literature, news, public information, and so on. Libraries were an amazing innovation, but locating and working with the information there was still a formidable challenge.</p>\n\n<p>Likewise, publishing information for broad consumption required resources and relationships that were unavailable to most people. Gutenberg famously broke down some of those barriers, but many still remained: publishing and distributing books (or articles, music, art, films) required navigating extensive industries of gatekeepers, and often insurmountable costs and delays.</p>\n\n<p>Tim Berners-Lee’s invention cut through all of that; it was now possible to communicate with the whole world at very low cost and almost instantaneously. 
Various media industries were disrupted (but not completely displaced) by this innovation, and reinterpreted roles for intermediaries (e.g., search engines for librarians, online marketplaces for ‘brick and mortar’ shops) were created.</p>\n\n<p>Critically, a norm was also created: an expectation that content was easy to access and didn’t require paying or logging in. This was not enforced, and it was not always honoured: there were still subscription sites, and that’s OK, but they didn’t see the massive network effects that hyperlinks and browsers brought.</p>\n\n<p>It is hard to overstate the benefits of this norm. Farmers in developing countries now have easy access to guidelines and data that help their crops succeed. Students around the world have access to resources that were unimaginable even a few decades ago. They can also contribute to that global commons of content, benefiting others as they build a reputation for themselves.</p>\n\n<p>The Open Web is an amazing public good, both for those who consume information and those who produce it. By reducing costs and friction on both sides, it allows people all over the world to access and create information in a way -- and with an ease -- that would have been unimaginable to our predecessors. It’s worth fighting for.</p>\n\n<h3 id=\"people-have-different-motivations-for-opening-content\">People Have Different Motivations for Opening Content</h3>\n\n<p>We talk about “The Open Web” in the singular, but in fact there are many motivations for making content available freely online.</p>\n\n<p>Some people consciously make their content freely available on the Web because they want to contribute to the global commons, to help realise all of the benefits described above.</p>\n\n<p>Many don’t, however.</p>\n\n<p>Others do it because they want to be discovered and build a reputation. Or because they want to build human connections. Or because they want revenue from putting ads next to the content. 
Or because they want people to try their content out and then subscribe to it on the less-than-open Web.</p>\n\n<p>Most commonly, it’s a blend of many (or even all) of these motivations.</p>\n\n<p>Discussions of the Open Web need to consider all of them distinctly -- what is changing in their environments, and what might encourage or discourage different kinds of Open Web publishers. Only focusing on some motivations or creating “purity tests” for content isn’t helpful.</p>\n\n<h3 id=\"there-are-many-degrees-of-open\">There are Many Degrees of “Open”</h3>\n\n<p>Likewise, there are many degrees of “open.” While some Open Web content doesn’t come with any strings, much of it does. You might have to allow tracking for ads. While an article might be available to search engines (to drive traffic), you might have to register for an account to view the content as an individual.</p>\n\n<p>There are serious privacy considerations associated with both of these, but those concerns should be considered as distinct from those regarding open access to information. People sometimes need to get a library card to access information at their local library (in person or online), but that doesn’t make the information less open.</p>\n\n<p class=\"callout\">One of the most interesting assertions at the meetings we held was about advertising-supported content: that it was <em>more</em> equitable than “micro-transactions” and similar pay-to-view approaches, because it makes content available to those who would otherwise not be able to afford it.</p>\n\n<p>At the same time, these ‘small’ barriers – for example, requirements to log in after reading three articles – add up, reducing the openness of the content. 
If the new norm is that everyone has to log in everywhere to get Web content (and we may be well on our way to that), the Open Web suffers.</p>\n\n<p>Similarly, some open content is free to all comers and can be reused at will, where other examples have technical barriers (such as bot blockers or other selective access schemes) and/or legal barriers (namely, copyright restrictions).</p>\n\n<h3 id=\"it-has-to-be-voluntary\">It Has to be Voluntary</h3>\n\n<p>Everyone who publishes on the Open Web does so because they want to – because the benefits they realise (see above) outweigh any downsides.</p>\n\n<p>Conversely, any content not on the Open Web is not there because the owner has made the judgement that it is not worthwhile for them to do so. They cannot be forced to “open up” that content -- they can only be encouraged.</p>\n\n<p>Affordances and changes in infrastructure, platforms, and other aspects of the ecosystem -- sometimes realised in technical standards, sometimes not -- might change that incentive structure and create the conditions for more or less content on the Open Web. They cannot, however, be forced or mandated.</p>\n\n<p>To me, this means that attempts to coerce different parties into desired behaviours are unlikely to succeed – they have to <em>want</em> to provide their content. That includes strategies like withholding capabilities from them; they’ll just go elsewhere to obtain them, or put their content behind a paywall.</p>\n\n<h3 id=\"its-changing-rapidly\">It’s Changing Rapidly</h3>\n\n<p>We’re talking about the Open Web now because of the introduction of AI -- a massive disruption to the incentives of many content creators and publishers, because AI both leverages their content (through scraping for training) and competes with it (because it is generative).</p>\n\n<p>For those who opened up their content because they wanted to establish reputation and build connectivity, this feels exploitative. 
They made their content available to benefit people, and it turns out that it’s benefiting large corporations who claim to be helping humanity but have failed to convince many.</p>\n\n<p>For those who want to sell ads next to their content or entice people to subscribe, this feels like betrayal. Search engines built an ecosystem that benefited publishers and the platforms, but publishers see those same platforms as continually taking more value from the relationship -- as seen in efforts to force intermediation like AMP, and now AI, where sites get drastically reduced traffic in exchange for nothing at all.</p>\n\n<p>And so people are blocking bots, putting up paywalls, changing business models, and yanking their content off the Open Web. The commons is suffering because technology (which always makes <em>something</em> easier) now makes content creation <em>and</em> consumption easier, so long as you trust your local AI vendor.</p>\n\n<p>This change is unevenly distributed. There are still people happily publishing open content in formats like RSS, which doesn’t facilitate tracking or targeting, and is wide open to scraping and reuse. That said, there are large swathes of content that are disappearing from the Open Web because it’s no longer viable for the publisher; the balance of incentives for them has changed.</p>\n\n<h3 id=\"open-is-not-free-to-provide\">Open is Not Free to Provide</h3>\n\n<p>Information may be a non-rivalrous good, but that doesn’t mean it’s free to provide. 
The people who produce it need to support themselves.</p>\n\n<p>That doesn’t mean that their interests dominate all others, nor that the structures that have evolved are the best (or even a good) way to assure that they can do so; these are topics better suited for copyright discussions (where there is a very long history of such considerations being debated).</p>\n\n<p>Furthermore, on a technical level serving content to anyone who asks for it on a global scale might be a commodity service now -- and so very inexpensive to do, in some cases -- but it’s not free, and the costs add up at scale. These costs -- again, alongside the perceived extractive nature of the relationship -- are causing some to <a href=\"https://social.kernel.org/notice/B2JlhcxNTfI8oDVoyO\">block or otherwise try to frustrate</a> these uses.</p>\n\n<p>Underlying this factor is an argument about whether it’s legitimate to say you’re on ‘the Open Web’ while selectively blocking clients you don’t like – either because they’re abusive technically (over-crawling), or because you don’t like what they do with the data. My observation here is that however you feel about it, that practice is now very, very widespread – evidence of great demand on the publisher side. If that capability were taken away, I strongly suspect the net result would be very negative for the Open Web.</p>\n\n<h3 id=\"its-about-control\">It’s About Control</h3>\n\n<p>Lurking beneath all of these arguments is a tension between the interests of those who produce and use content. 
Forgive me for resorting to hyperbole: some content people want pixel-perfect control not only over how their information is presented but how it is used and who uses it, and some open access advocates want all information to be usable for any purpose any time and anywhere.</p>\n\n<p>Either of these outcomes (hyperbole as they are) would be bad for the Open Web.</p>\n\n<p>The challenge, then, is finding the right balance – a Web where content producers have incentives to make their content available in a way that can be reused as much as is reasonable. That balance needs to be stable and sustainable, and take into account shocks like the introduction of AI.</p>\n\n<h3 id=\"a-way-forward\">A Way Forward</h3>\n\n<p>Having an Open Web available for humanity is not a guaranteed outcome; we may end up in a future where easily available information is greatly diminished or even absent.</p>\n\n<p>With that and all of the observations above in mind, what’s most apparent to me is that we should focus on finding ways to create and strengthen incentives to publish content that’s open (for some definition of open) -- understanding that people might have a variety of motivations for doing so. If environmental factors like AI change their incentives, we need to understand why and address the underlying concerns if possible.</p>\n\n<p>In other words, we have to create an Internet where people <em>want</em> to publish content openly – for some definition of “open.” Doing that may challenge the assumptions we’ve made about the Web as well as what we want “open” to be. What’s worked before may no longer create the incentive structure that leads to the greatest amount of content available to the greatest number of people for the greatest number of purposes.</p>",
      "image": null,
      "media": [],
      "authors": [
        {
          "name": "Mark Nottingham",
          "email": null,
          "url": "https://www.mnot.net/personal/"
        }
      ],
      "categories": [
        {
          "label": "Web and Internet",
          "term": "Web and Internet",
          "url": null
        }
      ]
    },
    {
      "id": "https://www.mnot.net/blog/2025/10/26/equitable-meetings",
      "title": "Principles for Global Online Meetings",
      "description": "Some thoughts about how to schedule online meetings for a global organisation in an equitable way.",
      "url": "https://www.mnot.net/blog/2025/10/26/equitable-meetings",
      "published": null,
      "updated": "2025-10-26T00:00:00.000Z",
      "content": "<p class=\"intro\">One of the tricker problems for organisations that aspire to be global is scheduling a series of meetings. While the Internet has brought the ability to meet with colleagues and stakeholders all over the world, it hasn’t been able to get everyone on the same daily tempo – the earth is still not flat.</p>\n\n<p>As someone who has participated in such organisations from Australia for nearly two decades, I’ve formed some fairly strong opinions about how their meetings should be arranged. What follows is an attempt to distill those thoughts into a set of principles that’s flexible enough to apply to a variety of situations.</p>\n\n<p>Keep in mind the intended application is to a series of global meetings, not a single one-off event. Also, if the set of people who need to attend a given meeting are in timezones that lead to an agreed-to “good” time, you should use that time – but then I question if your organisation is really global. For the rest, read on.</p>\n\n<h3 id=\"0-its-about-equity\">0. It’s About Equity</h3>\n<p>For global organisations, meeting scheduling is an equity issue. Arranging a meeting where some people can attend from the convenience of their office in normal business hours while others have to stay up into the middle of the night is not equitable – the former have very low friction for attending, while the latter have to disrupt their lives, families, relationships, and sleep cycles to attend.</p>\n\n<p>When a person does make the extra effort to attend at a less-than-ideal hour, they will not be at their best. Being awake outside your normal hours means that you aren’t thinking as clearly and might react more emotionally than otherwise. Interrupting an evening after a long day can impact your focus. Effective participation is difficult under these conditions.</p>\n\n<p>I cast this as an equity issue because I’ve observed that many don’t perceive it that way. 
This is often the case if someone’s experience is that most meetings are scheduled at reasonable hours, they don’t have to think about it, and people in other parts of the world staying up late or getting up early to talk to them is normal. It’s only when people realise this privilege and challenge what’s normal that progress can be made. If you want a truly global organisation, people need to be able to participate on equal footing, and that means that some people will need to make what looks like – to them – sacrifices, because they’re used to things being a certain way.</p>\n\n<h3 id=\"1-share-pain-with-rotation\">1. Share Pain with Rotation</h3>\n<p>With that framing as an equity issue in mind, it becomes clear what must be done: the ‘pain’ of participating needs to be shared in a way that’s equitable. The focus then becomes characterising what pain is, and how to dole it out in a fair way while still holding functional meetings.</p>\n\n<p>The most common method for scheduling a meeting that involves people from all over the globe involves picking “winners” and “losers”. Mary and Joe in North America get a meeting in their daytime; the Europeans have something in their evening, and Asia/Pacific folks have to get up early. Australians get the hardest service – they’re usually up past midnight, but sometimes get roused at 5am or so, depending on the fluctuations of daylight savings. Often, this will be justified with a poll or survey asking for preferences, but one where all options are reasonable for a privileged set of participants, and most are unreasonable for others.</p>\n\n<p>This is all wrapped up in very logical explanations: it’s the constraints we work within, the locations of the participants narrow down the options, it doesn’t make sense to inconvenience a large number of people for the benefit of a few. 
Or the kicker: if we scheduled the meeting at that time, the folks who are used to having meetings at good times for them wouldn’t come.</p>\n\n<p>All of those are poor excuses that should be challenged, but often aren’t because this privilege is so deeply embedded.</p>\n\n<p>What can be done? The primary tool for pain-sharing is <strong>rotation</strong>. Schedule meetings in rotating time slots so that everyone has approximately the same number of “good”, “ok”, and “bad” time slots. This is how you put people on even footing.</p>\n\n<p>It may even mean intentionally scheduling in a way that people will miss a slot – e.g., two out of three. In this case, you’ll need to build tools to make sure that information is shared between meetings (you should be keeping minutes, tracking action items, and creating summaries anyway!), that decisions don’t happen in any one meeting, and that people have a chance to see a variety of people, not just the same subset every time.</p>\n\n<p>For example, imagine an organisation that needs to meet weekly, and has three members in different parts of Europe, five across North America, two in China, and one each in Australia and India. If they rotate between three time slots for their meetings, they might end up with:</p>\n\n<ul>\n  <li>UTC: 02:00 / 11:00 / 17:00</li>\n  <li>Australia/Eastern: 12:00 / 21:00 / 03:00 (+1d)</li>\n  <li>China/Shanghai: 10:00 / 19:00 / 01:00 (+1d)</li>\n  <li>US/Eastern: 22:00 (-1d) / 07:00 / 13:00</li>\n  <li>Europe/Central: 04:00 / 13:00 / 19:00</li>\n  <li>India/Mumbai: 07:30 / 16:30 / 22:30</li>\n</ul>\n\n<p>Notice that everyone has approximately one “good” slot, one “ok” slot, and one “bad” slot – depending on each individual’s preferences, of course.</p>\n\n<p>One objection I’ve heard to this approach is that it would lead to a state where most of the people go to just one or two of the meetings, and the others are poorly attended. 
That kind of fragmentation is certainly possible, but in my opinion it says more about the diversity of your organisation and the commitment of the people attending the meeting – both factors that should be separately addressed, not loaded onto some of the participants as meeting pain. Doing so is saying that some people won’t attend if they’re exposed to the conditions that they ask of others.</p>\n\n<h3 id=\"2-pain-is-individual\">2. Pain is Individual</h3>\n<p>A common approach to scheduling weighs decisions by how many people are in each timezone. For example, if you’ve got ten people in North America, three in Europe, and one in Asia, you should obviously arrange things to inconvenience the fewest number of people, right?</p>\n\n<p>The problem is, each of those people experiences the pain individually – it is not a collective phenomenon. The person in Asia doesn’t experience 1/14th of the pain if they need to get up at 4:30am for a call.  Making things slightly inconvenient for the North Americans doesn’t magnify the pain they experience times ten.</p>\n\n<p>So, don’t weigh your decisions by how many people are in a particular timezone or region. Of course there are limits to this principle – if it’s 100:1 you need to be able to function as a group (e.g., be quorate). But again, I’m questioning whether you’re really a global organisation here; you’re effectively gaslighting the people who are trying to participate from elsewhere by calling yourself one.</p>\n\n<h3 id=\"3-pain-is-specific\">3. Pain is Specific</h3>\n<p>It’s easy to fall into the trap of assuming that everyone’s circumstances are the same – that if a 7am meeting is painful for you, it’s equally painful for someone else.</p>\n\n<p>In reality, some people are morning people, while others don’t mind staying up until 2am. 
Some people might have a family dinner every Thursday night that would be disrupted by your meeting, while others are happy to use that time because that’s when they have the house to themselves.</p>\n\n<p>This means you need to ask what people’s preferences and conflicts are, rather than (for example) assume that 7am-9am is ok, 9am-5pm is good, 5pm-10pm is ok, and everything else is bad. The mechanics of how that information is gathered depends upon the nature of your group, but it needs to be sensitive to privacy and resistant to gaming.</p>\n\n<h3 id=\"4-pain-is-relative\">4. Pain is Relative</h3>\n<p>One of the complications of scheduling meetings across timezones is balancing the various kinds of conflicts and inconveniences that they bring up for a proposed time slot. John has to pick up the kids in that timeslot; Hiro is eating breakfast. Marissa needs to have dinner with her family. And Mark just wants a good night’s sleep for once.</p>\n\n<p>I propose a hierarchy of inconvenience and pain, from most to least impactful:</p>\n\n<ol>\n  <li>Rearranging your life - changing your sleep schedule, working on weekends (remember, Friday in North America is Saturday in other parts of the world)</li>\n  <li>Rearranging family life - shifting meals, changing child or elderly care arrangements</li>\n  <li>Moving other meetings - managing conflicts with other professional commitments</li>\n</ol>\n\n<p>When asking for conflicts for a given time slot, the higher items should always override the lower forms of pain. I’m sure this could be elaborated upon and extended, but it’s a good starting point.</p>\n\n<p>I sometimes also hear about another kind of pain: that rotating meetings makes it hard for some people to keep their calendars. To me, this isn’t #4; it’s #100.</p>\n\n<h3 id=\"5-circumstances-change\">5. Circumstances Change</h3>\n<p>People aren’t static. Their lives change, their families change, their health changes. 
If your meetings are scheduled over long periods of time, that means you need to be responsive to these changes, periodically checking to see if their preferences need updating.</p>\n\n<p>I used to be a night person. I’d be up until at least midnight, sometimes two or three, and mornings would be a real struggle. However, as I’ve gotten older, I’m finding that many mornings I wake naturally at five or so, and I’m ready to sleep at around 10pm unless I’m out of the house. That change has fundamentally affected how I attend meetings.</p>\n\n<p>And, of course, if you have participants in the Southern hemisphere (and you should!), you have to account for the differences in daylight savings, due to the differences in seasons. It’s not just a one-hour shift – it’s two, and that can make a big difference to someone’s quality of life.</p>\n\n<h3 id=\"6-respect-peoples-time\">6. Respect People’s Time</h3>\n<p>Appreciate that what’s just another meeting in the middle of your workday is a huge effort in the middle of the night for someone else; don’t fritter away a substantial portion on chitchat. Have an agenda and be prepared to make the meeting valuable. Use offline, asynchronous tools when they’re more appropriate.</p>\n\n<p>Likewise, don’t cancel or re-schedule a meeting at the last minute (or even last day). Setting an alarm for an early meeting and struggling through getting presentable and caffeinated only to find it’s been axed is distinctly unpleasant.</p>",
      "image": null,
      "media": [],
      "authors": [
        {
          "name": "Mark Nottingham",
          "email": null,
          "url": "https://www.mnot.net/personal/"
        }
      ],
      "categories": []
    },
    {
      "id": "https://www.mnot.net/blog/2025/09/20/configuration",
      "title": "Bridging the Gap Between Standards and Policy",
      "description": "Achieving policymakers' goals in coordination with Internet standards activity can be difficult. This post explores some of the options and considerations involved.",
      "url": "https://www.mnot.net/blog/2025/09/20/configuration",
      "published": null,
      "updated": "2025-09-20T00:00:00.000Z",
      "content": "<p>Internet standards bodies like the IETF and W3C are places where experts can come to agreement about the details of how technology should work. These communities have the deep experience that allows them to guide the evolution of the Internet towards common goals.</p>\n\n<p>Policymakers have none of that technical expertise, but are the legitimate source of policy decisions in any functioning society. They don’t have the means to develop new technical proposals: while most countries have a national standard body, their products are a poor fit for a global Internet, and those bodies generally lack specific expertise.</p>\n\n<p>So, it might seem logical for policymakers to turn to Internet standards bodies to develop the technical solutions for their policy goals, trusting the open process and community involvement to produce a good solution. Unfortunately, doing so can create problems that will cause such efforts to fail.</p>\n\n<h3 id=\"whats-the-problem\">What’s the Problem?</h3>\n\n<p>A few different issues often become apparent when policymakers pre-emptively specify a standard.</p>\n\n<p>First, as discussed previously the <a href=\"https://www.mnot.net/blog/2024/03/13/voluntary\">voluntary nature of Internet standards</a> acts as a proving function for them: if implementers don’t implement or users don’t use, the standard doesn’t matter. If a legal mandate to use a particular standard precedes that proof of viability, it distorts the incentives for participation in the process, because the power relationships between participants have changed – it’s no longer voluntary for the targets of the regulation, and the tone of the effort shifts from being <a href=\"https://www.mnot.net/blog/2024/07/16/collaborative_standards\">collaborative</a> to competitive.</p>\n\n<p>Second, Internet standards are created by <a href=\"https://www.mnot.net/blog/2024/05/24/consensus\">consensus</a>. 
That approach to decision making is productive when there is reasonable alignment between participants’ motives, but it’s not well suited to handling fundamental conflicts about societal values. That’s because while technical experts might be good at weighing technical arguments and generally adhering to widely agreed-to principles (whether they be regarding Internet architecture or human rights), it’s much more difficult for them to adjudicate direct conflict between values outside their areas of expertise. In these circumstances, the outcome is often simply a lack of consensus.</p>\n\n<p>Third, jurisdictions often have differences in their policy goals, but the Internet is global, and so are its standards bodies, who want the Internet to be interoperable regardless of borders. If policy goals aren’t widely shared and aligned between countries, it becomes even more difficult to come to consensus.</p>\n\n<p>Fourth, making decisions with societal impact in a technical expert body raises fundamental legitimacy issues. That’s not to say that Internet standards can’t or shouldn’t (or don’t) change society in significant ways, but that’s done from the position of private actors coordinating to achieve a common goal through well-understood processes, within the practical boundaries of the commonalities of the applicable legal frameworks. It’s entirely different for a contentious policy decision to be delegated by policymakers to a non-representative technical body.</p>\n\n<p>So, what’s a policymaker to do?</p>\n\n<h3 id=\"patience-is-a-virtue\">Patience is a Virtue</h3>\n\n<p>One widely repeated recommendation for policymakers is to avoid specifying the work or even a venue for it in regulation or legislation until <em>after</em> it’s been created and its viability is proven by some amount of market adoption. 
Instead, the policymaker should just hint that an industry standard that serves a particular policy goal would be useful.</p>\n\n<p>However, this approach comes with a few caveats:</p>\n<ul>\n  <li>A set of proponents that drives the standards work has to emerge, and they need to be at least somewhat aligned with the policy goal</li>\n  <li>Consensus-based technical standards are slow, so policymakers have to have realistic expectations about the timeline</li>\n  <li>If the targets of the regulation don’t participate in the standards process, they may be able to reasonably claim that what results can’t be implemented by them</li>\n</ul>\n\n<p>These issues aren’t impossible to address: they just require good communication, alignment of incentives, management of expectations, and careful diligence.</p>\n\n<h3 id=\"add-a-configuration-layer\">Add a Configuration Layer</h3>\n\n<p>Even if the policymaker waits for the outcome of the standards process, it’s rare for the policy decisions to be cleanly separable from the technology that needed to be created. Choices need to be made about how the technology is used and how it maps to the policy goals of a specific jurisdiction.</p>\n\n<p>One intriguing way to manage that gap is to span it with a new entity – one that creates neither technical specifications nor policy goals, but instead is explicitly constituted to define how to meet the stated policy goals using already available technology. That leaves policy formation in the hands of policymakers and technical design in the hands of technologists.</p>\n\n<p>In technology terms, this is a configuration layer: clearly and cleanly separating the concerns of how the technology is designed from how it is used. 
It still requires the technology to exist and have the appropriate configuration “interfaces”, but promises to take a large part of the policy pressure off of the standards process.</p>\n\n<p>An example of this approach is just being started by the European Commission now. At IETF 123, they explained a proposal for a <a href=\"https://www.iepg.org/2025-07-20-ietf123/slides-123-iepg-sessa-multi-stakeholder-forum-on-internet-standards-deployment-00.pdf\">Multi-stakeholder Forum on Internet Standards Deployment</a> that fills the gap between the definition of Internet security mechanisms and the policy intent of making European networks more secure. Policymakers have no desire to refer to specific RFCs in legislation, and Internet technologists don’t want to define regulatory requirements for Europe, so the idea is that this third entity will make those decisions without defining new technology <em>or</em> policy intent.</p>\n\n<p>Getting this right requires the new forum to be constituted in a particular way. It has to be constrained by the policymaker’s intent, and can’t define new technology. That means that the technology has to be amenable to configuration – the relevant options need to be available. The logical host for the discussion is a venue controlled by the policymaker, but it needs to be open to broad participation (including online and asynchronous participation) so that the relevant experts can participate. Transparency will be key, and I suspect that the decision making policy will be critical to get right – ideally something close to a consensus model, but the policymaker may need to reserve the right to overrule objections or handle appeals.</p>\n\n<p>Needless to say, I’m excited to see how this forum will work out. If successful, it’s a pattern that could be useful elsewhere.</p>",
      "image": null,
      "media": [],
      "authors": [
        {
          "name": "Mark Nottingham",
          "email": null,
          "url": "https://www.mnot.net/personal/"
        }
      ],
      "categories": [
        {
          "label": "Tech Regulation",
          "term": "Tech Regulation",
          "url": null
        },
        {
          "label": "Standards",
          "term": "Standards",
          "url": null
        },
        {
          "label": "Web and Internet",
          "term": "Web and Internet",
          "url": null
        }
      ]
    }
  ]
}
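The rotation table in the "Principles for Global Online Meetings" entry can be sanity-checked with a short script. This is a minimal sketch: the UTC slot times come from the article, but the IANA zone names chosen to stand in for each region (e.g. `Australia/Sydney` for "Australia/Eastern") and the `local_times` helper are assumptions for illustration.

```python
from datetime import datetime, timezone
from zoneinfo import ZoneInfo

# Rotating slots (UTC) from the article's example.
SLOTS_UTC = ["02:00", "11:00", "17:00"]

# Assumed IANA zone names for the regions named in the article.
ZONES = ["Australia/Sydney", "Asia/Shanghai", "America/New_York",
         "Europe/Berlin", "Asia/Kolkata"]

def local_times(date_str: str) -> dict[str, list[str]]:
    """Map each UTC slot to local wall-clock time in each zone on a given date.

    The date matters: zones with daylight saving shift their offset, which is
    exactly the seasonal effect the article warns about for the Southern hemisphere.
    """
    y, m, d = map(int, date_str.split("-"))
    out: dict[str, list[str]] = {}
    for zone in ZONES:
        tz = ZoneInfo(zone)
        times = []
        for slot in SLOTS_UTC:
            hh, mm = map(int, slot.split(":"))
            utc_dt = datetime(y, m, d, hh, mm, tzinfo=timezone.utc)
            local = utc_dt.astimezone(tz)
            # Flag day rollover relative to the UTC date, as the article does.
            suffix = ""
            if local.date() > utc_dt.date():
                suffix = " (+1d)"
            elif local.date() < utc_dt.date():
                suffix = " (-1d)"
            times.append(local.strftime("%H:%M") + suffix)
        out[zone] = times
    return out

if __name__ == "__main__":
    for zone, times in local_times("2025-11-03").items():
        print(f"{zone}: {' / '.join(times)}")
```

Running this for different dates shows why the table can only ever be approximate: Sydney and Berlin never observe daylight saving at the same time, so each zone's "good" and "bad" slots drift by an hour or two across the year.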