Analysis of https://www.anildash.com/feed.xml

Feed fetched in 242 ms.
Content type is application/xml.
Feed is 158,892 characters long.
Feed has an ETag of W/"4c238dd3bce552862f7bc8c19114547a-ssl-df".
Warning Feed is missing the Last-Modified HTTP header.
Feed is well-formed XML.
Warning Feed has no styling.
This is an Atom feed.
Feed title: Anil Dash
Error Feed self link: https://anildash.com/feed.xml does not match feed URL: https://www.anildash.com/feed.xml.
Warning Feed is missing an image.
Feed has 12 items.
First item published on 2025-11-14T00:00:00.000Z
Last item published on 2026-01-27T00:00:00.000Z
All items have published dates.
Newest item was published on 2026-01-27T00:00:00.000Z.
Home page URL: https://anildash.com/
Warning Home page URL redirected to https://www.anildash.com/.
Home page has feed discovery link in <head>.
Home page has a link to the feed in the <body>

Formatted XML
<?xml version="1.0" encoding="utf-8"?>
<feed xmlns="http://www.w3.org/2005/Atom" xmlns:xml="http://www.w3.org/XML/1998/namespace" xml:base="https://anildash.com/">
    <title>Anil Dash</title>
    <subtitle>A blog about making culture. Since 1999.</subtitle>
    <link href="https://anildash.com/feed.xml" rel="self"/>
    <link href="https://anildash.com/"/>
    <updated>2026-01-27T00:00:00Z</updated>
    <id>https://anildash.com</id>
    <author>
        <name>Anil Dash</name>
        <email>[email protected]</email>
    </author>
    <entry>
        <title>I know you don’t want them to want AI, but…</title>
        <link href="https://anildash.com/2025/11/14/wanting-not-to-want-ai/"/>
        <updated>2025-11-14T00:00:00Z</updated>
        <id>https://anildash.com/2025/11/14/wanting-not-to-want-ai/</id>
        <content type="html"><![CDATA[
      <p>Today, Rodrigo Ghedrin wrote the very well-intentioned, but incorrectly-titled,  “<a href="https://manualdousuario.net/en/mozilla-firefox-window-ai">I think nobody wants AI in Firefox, Mozilla</a>”. As he correctly summarizes, <a href="https://connect.mozilla.org/t5/discussions/building-ai-the-firefox-way-shaping-what-s-next-together/td-p/109922">sentiment on the Mozilla thread</a> about a potential new AI pane in the Firefox browser is overwhelmingly negative. That’s not surprising; the Big AI companies have given people numerous legitimate reasons to hate and reject “AI” products, ranging from undermining labor to appropriating content without consent to having egregious environmental impacts to eroding trust in public discourse.</p>
<p>I spent much of the last week having the distinct honor of serving as MC at the <a href="https://www.mozillafestival.org/">Mozilla Festival</a> in Barcelona, which gave me the extraordinary opportunity to talk to hundreds of the most engaged Mozilla community members in person, and to address thousands more from onstage or on the livestream during the event. No surprise, one of the biggest topics we talked about the entire time was AI, and the intense, complex, and passionate feelings so many have about these new tools. Virtually everyone shared some version of what I’d articulated as <a href="https://www.anildash.com/2025/10/17/the-majority-ai-view">the majority view</a> on AI, which is approximately that LLMs can be interesting as a technology, but that Big Tech, and <em>especially</em> Big AI, are decidedly awful and people are very motivated to stop them from committing their worst harms upon the vulnerable.</p>
<p>But.</p>
<p>Another reality that people were a little more quiet in acknowledging, and sometimes reluctant to engage with out loud, is the reality that <em>hundreds of millions of people are using the major AI tools every day</em>. When I would point this out, there was often an initial defensive reaction talking about how people are forced to use these tools at work, or how AI is being shoehorned into every tool and foisted upon users. This is all true! And also? Hundreds of millions of users are choosing to go to these websites, of their own volition, and engage with these tools.</p>
<p>Regular, non-expert internet users find it interesting, or even <em>amusing</em>, to generate images or videos using AI and to send that media to their friends. While sophisticated media aesthetics find those creations gauche or even offensive, a lot of other cultures find them perfectly acceptable. And it’s an inarguable reality that millions of people find AI-generated media images emotionally <em>moving</em>. Most people that see AI-generated content as tolerable folk art belong to demographics that are dismissed by those who shape the technology platforms that billions of people use every day.</p>
<p>Which brings us back to “nobody wants AI in Firefox”. (And its obligatory <a href="https://news.ycombinator.com/item?id=45926779">matching Hacker News thread</a>, which proceeds exactly as you might expect.) In the communities that frequent places like Hacker News and Mozilla forums, where everyone is hyper-fluent in concerns like intellectual property rights and the abuses of Big Tech, it’s received wisdom that “everyone” resists the encroachment of AI into tools, and therefore the only possible reason that Mozilla (or any organization) might add support for any kind of AI features would be to chase a trend that’s in fashion amongst tech tycoons. I don’t doubt that this is a factor; anytime a significant percentage of decision makers are alumni of Silicon Valley, its culture is going to seep into an organization.</p>
<h2>The War On Pop-Ups</h2>
<p>What people are ignoring, though, is that <em>using AI tools is an incredibly mainstream experience now</em>. Regular people do it all the time. And doing so in normal browsers, in a normal context, is less safe. We can look at an analogy from the early days of the browser wars, a generation ago.</p>
<p>Twenty years ago, millions and millions of people used Internet Explorer to get around the web, because it was the default browser that came with their computer. It was buggy and wildly insecure, and users would often find their screen littered with intrusive pop-up advertisements that had been spawned by various sites that they had visited across the web. We could have said, “well, those are simply fools with no taste using bad technology who get what they deserve”</p>
<p>Instead, countless enthusiasts and advocates across the web decided that <em>everyone</em> deserved to have an experience that was better and safer. And as it turned out, while getting those improvements, people could even get access to a cool new feature that nobody had seen before: tabs! Firefox wasn’t the first browser to invent all these little details, but it was the first to put them all together into one convenient little package. Even if the expert users weren’t personally visiting the sites riddled with pop-up ads themselves, they were glad to have spared their non-expert friends from the miseries they were enduring on the broken internet.</p>
<p>I don’t know why today’s Firefox users, even if they’re the most rabid anti-AI zealots in the world, don’t say, “well, even if I hate AI, I want to make sure Firefox is good at protecting the privacy of AI users so I can recommend it to my friends and family who use AI”. I have to assume it’s because they’re in denial about the fact that their friends and family are using these platforms. (Judging by the tenor of their comments on the topic, I’d have to guess their friends don’t want to engage with them on the topic at all.)</p>
<p>We see with tools like <a href="https://www.anildash.com/2025/10/22/atlas-anti-web-browser">ChatGPT’s Atlas</a> that there are now aggressively anti-web browsers coming to market, and even a sophisticated user might not be able to realize how nefarious some of the tactics of these new apps can be. I think those who are critical can certainly see that those enabling those harms are bad actors. And those critics are also aware that hundreds of millions of people are using ChatGPT. So, then… what browser do they think those users should use?</p>
<h2>What does good look like?</h2>
<p>Judging by what I see in the comments on the posts about Firefox’s potential AI feature integrations, the apparent path that critics are recommending as an alternative browser is “I’ll yell at you until you stop using ChatGPT”. Consider this post my official notice: that strategy hasn’t worked. And it is not <em>going</em> to work. The only thing that <em>will</em> work is to offer a better alternative to these users. That will involve <a href="https://www.anildash.com/2025/05/02/what-would-good-ai-look-like">defining what an acceptably “good” alternative AI looks like</a>, and then building and shipping it to these users, and convincing them to use it. I’m hoping such an effort succeeds. But I can guarantee that scolding people and trying to convince them that they’re not finding utility in the current platforms, or trying to make them feel guilty about the fact that they <em>are</em> finding utility in the current platforms, will not work.</p>
<p>And none of this is exculpatory for my friends at Mozilla. As I’ve said to the good people there, and will share again here, I don’t think the framing of the way this feature has been presented has done either the Firefox team or the community any favors. These big, emotional blow-ups are demoralizing, and take away time and energy and attention that could be better spent getting people excited and motivated to grow for the future.</p>
<p>My personal wishlist would be pretty simple:</p>
<p><em>Just give people the “shut off all AI features” button. It’s a tiny percentage of people who want it, but they’re never going to shut up about it, and they’re convinced they’re the whole world and they can’t distinguish between being mad at big companies and being mad at a technology so give them a toggle switch and write up a blog post explaining how extraordinarily expensive it is to maintain a configuration option over the lifespan of a global product.</em> Market Firefox as “The best AI browser for people who hate Big AI”. Regular users have <em>no idea</em> how creepy the Big AI companies are — they’ve just heard their local news talk about how AI is the inevitable future. If Mozilla can warn me <a href="https://www.mozillafoundation.org/en/privacynotincluded/articles/how-to-protect-your-privacy-from-chatgpt-and-other-ai-chatbots">how to protect my privacy from ChatGPT</a>, then it can also mention that ChatGPT tells children how to self-harm, and should be aggressive in engaging with the community on how to build tools that help mitigate those kinds of harms — how do we catalyze <em>that</em> innovation?</p>
<ul>
<li>Remind people that there isn’t “a Firefox” — everyone is Firefox. Whether it’s Zen, or your custom build of Firefox with your favorite extensions and skins, it’s all part of the same story. Got a local LLM that runs entirely as a Firefox extension? Great! That should be one of the many Firefoxes, too. Right now, so much of the drama and heightened emotions and tension are coming from people’s (well… dudes') egos about there being One True Firefox, and wanting to be the one who controls what’s in that version, as an expression of one set of values. This isn’t some blood-feud fork, there can just be a lot of different choices for different situations. Make it all work.</li>
</ul>
<p>So, that’s the answer. I think some people want AI in Firefox, Mozilla. And some people don’t. And some people don’t know what “AI” means. And some people forgot Firefox even exists. It’s that last category I’m most concerned about, frankly. Let’s go get ‘em.</p>

    ]]></content>
    </entry>
    <entry>
        <title>Vibe Coding: Empowering and Imprisoning</title>
        <link href="https://anildash.com/2025/12/02/vibe-coding-empowering-and-imprisoning/"/>
        <updated>2025-12-02T00:00:00Z</updated>
        <id>https://anildash.com/2025/12/02/vibe-coding-empowering-and-imprisoning/</id>
        <content type="html"><![CDATA[
      <p>In case you haven’t been following the world of software development closely, it’s good to know that vibe coding — using LLM tools to assist with writing code — can help enable many people to create apps or software that they wouldn’t otherwise be able to make. This has led to an extraordinarily rapid adoption curve amongst even experienced coders in many different disciplines within the world of coding. But there’s a very important threat posed by vibe coding that almost no one has been talking about, one that’s far more insidious and specific than just the risks and threats posted by AI or LLMs in general.</p>
<p>Here’s a quick summary:</p>
<p><em>One of the most effective uses of LLMs is in helping programmers write code</em> A huge reason VCs and tech tycoons put billions into funding LLMs was so they could undermine coders and depress wages</p>
<ul>
<li>Vibe coding might limit us to making simpler apps instead of the radical innovation we need to challenge Big Tech</li>
</ul>
<h2>Start vibing</h2>
<p>It may be useful to start by explaining how people use LLMs to assist with writing software. My background is that I’ve helped build multiple companies focused on enabling millions of people to create with code. And I’m personally an example of one common scenario with vibe coding. Since I don’t code regularly anymore, I’ve become much slower and less efficient at even the web development tasks that I used to do professionally, which I used to be fairly competent at performing. In software development, there are usually a nearly-continuous stream of new technologies being released (like when you upgrade your phone, or your computer downloads an update to your web browser), and when those things change, developers have to update <em>their</em> skills and knowledge to stay current with the latest tools and techniques. If you’re not staying on top of things, your skillset can rapidly decay into irrelevance, and it can be hard to get back up to speed, even though you understand the fundamentals completely, and the underlying logic of <em>how</em> to write code hasn’t changed at all. It’s like knowing how to be an electrician but suddenly you have to do all your work in French, and you don’t speak French.</p>
<p>This is the kind of problem that LLMs are really good at helping with. Before I had this kind of coding assistant, I couldn’t do any meaningful projects within the limited amount of free time that I have available on nights and weekends to build things. Now, with the assistance of contemporary tools, I can get help with things like routine boilerplate code and obscure syntax, speeding up my work enough to focus on the fun, creative parts of coding that I love.</p>
<p>Even professional coders who <em>are</em> up to date on the latest technologies use these LLM tools to do things like creating scripts, which are essentially small bits of code used to automate or process common tasks. This kind of code is disposable, meaning it may only ever be run once, and it’s not exposed to the internet, so security or privacy concerns aren’t usually much of an issue. In that context, having the LLM create a utility for you can feel like being truly liberated from grunt work, something like having a robot vacuum around to sweep up the floor.</p>
<h2>Surfing towards serfdom</h2>
<p>This all sounds pretty good, right? It certainly helps explain why so many in the tech world tend to see AI much more positively than almost everyone else does; there’s a clear-cut example of people finding value from these tools in a way that feels empowering or even freeing.</p>
<p>But there are far darker sides to this use of AI. Let me put aside the threats and risks of AI that are true of <em>all</em> uses of the Big AI platforms, like the environmental impact, the training on content without consent, the psychological manipulation of users, the undermining of legal regulations, and other significant harms. These are all real, and profound, but I want to focus on what’s specific to using AI to help write code here, because there are negative externalities that are unique to <em>this</em> context that people haven’t discussed enough. (For more on the larger AI discussion, see &quot;<a href="https://www.anildash.com/2025/05/01/what-would-good-ai-look-like/">What would good AI look like?</a>&quot;)</p>
<p>The first problem raised by vibe coding is an obvious one: the major tech investors focused on making AI good at writing code because they wanted to make coders less powerful and reduce their pay. If you go back a decade ago, nearly everyone in the world was saying “teach your kids to code” and being a software engineer was one of the highest paying, most powerful individual jobs in the history of labor. Pretty soon, coders were acting like it — using their power to improve workplace conditions for those around them at the major tech companies, and pushing their employers to be more socially responsible. Once workers began organizing in this way, the tech tycoons who founded the big tech companies, and the board members and venture capitalists who backed them, immediately began investing billions of dollars in building these technologies that would devalue the labor of millions of coders around the world.</p>
<p>It worked. More than <em>half a million</em> tech workers have been laid off in America since ChatGPT was released in November 2022.</p>
<p>That’s <em>just</em> in the private sector, and <em>just</em> the ones tracked by <a href="https://layoffs.fyi">layoffs.fyi</a>.  Software engineering job listings have <a href="https://blog.pragmaticengineer.com/software-engineer-jobs-five-year-low/">plummeted to a 5-year low</a>. This is during a period of time that nobody even describes as a recession. The same venture capitalists who funded the AI boom keep insisting that these trends are about macroeconomic abstractions like interest rates, a stark contrast to their rhetoric the rest of the time, when they insist that they are alpha males who make their own decisions based on their strong convictions and brave stances against woke culture. It is, in fact, the case that they are just greedy people who invested a ton of money into trying to put a lot of good people out of work, and they succeeded in doing so.</p>
<p>There is no reason why AI tools like this <em>couldn't</em> be used in the way that they're often described, where they increase productivity and enable workers to do more and generate more value. But instead we have the wealthiest people in the world telling the wealthiest companies in the world, while they generate record profits, to lay off workers who could be creating cool things for customers, and then blaming it on everyone but themselves.</p>
<h2>The past as prison</h2>
<p>Then there’s the second problem raised by vibe coding: You can’t make anything truly radical with it. By definition, LLMs are trained on what has come before. In addition to being already-discovered territory, existing code is buggy and broken and sloppy and, as anyone who has ever written code knows, absolutely embarrassing to look at. Worse, many of the people who are using vibe coding tools are increasingly those who <em>don’t</em> understand the code that is being generated by these systems. This means the people generating all of this newly-vibed code won’t even know when the output is insecure, or will perform poorly, or includes exploits that let others take over their system, or when it is simply incoherent nonsense that <em>looks</em> like code but doesn’t do anything.</p>
<p>All of those factors combine to encourage people to think of vibe coding tools as a sort of “black box” that just spits out an app <em>for</em> you. Even the giant tech companies are starting to encourage this mindset, tacitly endorsing the idea that people don’t need to know what their systems are doing under the hood. But obviously, somebody needs to know whether a system is <em>actually</em> secure. Somebody needs to know if a system is actually doing the tasks it says that it’s doing. The Big AI companies that make the most popular LLMs on the market today routinely design their products to induce emotional dependency in users by giving them positive feedback and encouragement, even when that requires generating false responses. Put more simply: they make the bot lie to you to make you feel good so you use the AI more. That’s terrible in a million ways, but one of them is that it sure does generate some bad code.</p>
<p>And a vibe coding tool absolutely won’t make something truly <em>new</em>. The most radical, disruptive, interesting, surprising, weird, fun innovations in technology have happened because people with a strange compulsion to do something cool had enough knowledge to get their code out into the world. The World Wide Web itself was <em>not</em> a huge technological leap over what came before — it took off because of a huge leap in <em>insight</em> into human nature and human behavior, that happened to be captured in code. The actual bits and bytes? They were mostly just plain text, much of which was in formats that had already been around for many years prior to Tim Berners-Lee assembling it all into the first web browser. That kind of surprising innovation could probably never be vibe coded, even though all of the raw materials might be scooped up by an LLM, because even if the human writing the prompt had that counterintuitive stroke of genius, the system would still be hemmed in by the constraints of the works it had been trained on. The past is a prison when you’re inventing the future.</p>
<p>What’s more, if you were going to use a vibe coding tool to make a truly radical new technology, do you think today’s Big AI companies would let their systems create that app? The same companies that made a platform that just put hundreds of thousands of coders out of work? The  same companies that make a platform that tells your kids to end their own lives? The same companies whose cronies in the White House are saying there should <em>never be any laws</em> reining them in? Those folks are going to help you make new tech that threatens to disrupt their power? I don’t think so.</p>
<h2>Putting power in people’s hands</h2>
<p>I’m deeply torn about what the future of LLMs for coding should be. I’ve spent decades of my life trying to make it easier for everyone to make software. I’ve seen, firsthand, the power of using AI tools to help coders — especially those new to coding — build their confidence in being able to create something new. I love that potential, and in many ways, it’s the most positive and optimistic possibility around LLMs that I’ve seen. It’s the thing that makes me think that maybe there is a part of all the AI hype that is not pure bullshit. Especially if we can find a version of these tools that’s genuinely open source and free and has been trained on people’s code with their consent and cooperation, perhaps in collaboration with some educational institutions, I’d be delighted to see that shared with the world in a thoughtful way.</p>
<p>But I also have seen the majority of the working coders I know (and the <em>non</em>-working coders I know, including myself) rush to integrate the commercial coding assistants from the Big AI companies into their workflow without necessarily giving proper consideration to the long-term implications of that choice. What happens when we’ve developed our dependencies on that assistance? How will people introduce <em>new</em> technologies like new programming languages and frameworks if we all consider the LLMs to be the canonical way of writing our code, and the training models don’t know the new tech exists? How does our imagination shrink when we consider our options of what we create with code to be choosing between the outputs of the LLM rather than starting from the blank slate of our imagination? How will we build the next generation of coders skilled enough to catch the glaring errors that LLMs create in their code?</p>
<p>There’s never been this stark a contrast between the negatives and positives of a new technology being so tightly coupled before when it comes to enabling developers. Generally change comes to coders incrementally. Historically, there was always a (wonderful!) default skepticism to coding culture, where anything that reeked of marketing or hype was looked at with a huge amount of doubt until there was a significant amount of proof to back it up.</p>
<p>But in recent years, as with everything else, the culture wars have come for tech. There’s now a cohort in the coding world that has adopted a cult of personality around a handful of big tech tycoons despite the fact that these men are deeply corrosive to society. Or perhaps <em>because</em> they are. As a result, there’s a built-in constituency for any new AI tool, regardless of its negative externalities, which gives them a sense of momentum even where there may not be any.</p>
<p>It’s worth us examining what’s really going on, and articulating explicitly what we’re trying to enable. Who are we trying to empower? What does success look like? What do we want people to be able to build? What do we <em>not</em> want people to be able to make? What price is too high to pay? What convenience is not worth the cost?</p>
<h2>What tools do we choose?</h2>
<p>I do, still, believe deeply in the power of technology to empower people. I believe firmly that you have to understand how to create technology if you want to understand how to control it. And I still believe that we have to democratize the power to create and control technology to as many people as possible so that technology can be something people can use as a tool, rather than something that happens _to_them.</p>
<p>We are now in a complex phase, though, where the promise of democratizing access to creating technology is suddenly fraught in a way that it has never been before. The answer can’t possibly be that technology remains inaccessible and difficult for those outside of a privileged class, and easy for those who are already comfortable in the existing power structure.</p>
<p>A lot is still very uncertain, but I come back to one key question that helps me frame the discussion of what’s next: What’s the most radical app that we could build? And which tools will enable me to build it? Even if all we can do is start having a more complicated conversation about what we’re doing when we’re vibe coding, we’ll be making progress towards a more empowered future.</p>

    ]]></content>
    </entry>
    <entry>
        <title>They have to be able to talk about us without us</title>
        <link href="https://anildash.com/2025/12/05/talk-about-us-without-us/"/>
        <updated>2025-12-05T00:00:00Z</updated>
        <id>https://anildash.com/2025/12/05/talk-about-us-without-us/</id>
        <content type="html"><![CDATA[
      <p>It’s absolutely vital to be able to communicate effectively and efficiently to large groups of people. I’ve been lucky enough to get to refine and test my skills in communicating at scale for a few decades now, and the power of talking to communities is the one area where I’d most like to pass on what I’ve learned, because it’s this set of skills that can have the biggest effect on deciding whether good ideas and good work can have their greatest impact.</p>
<p>My own work crosses many disparate areas. Over the years, I’ve gotten to cycle between domains as distinct as building technology platforms and products for developers and creators, enabling activism and policy advocacy in service of humanist ideals, and more visible external-facing work such as public speaking or writing in various venues like magazines or on this site. (And then sometimes I dabble in my other hobbies and fun stuff like scholarship or research into areas like pop culture and media.)</p>
<p>What’s amazing is, in <em>every single one</em> of these wildly different areas, the exact same demands apply when trying to communicate to broad groups of people. This is true despite the broadly divergent cultural norms across all of these different disciplines. It can be a profoundly challenging, even intimidating, job to make sure a message is being communicated accurately, and in high fidelity, to everyone that you need to reach.</p>
<p>That vital task of communicating to a large group gets even <em>more</em> daunting when you inevitably realize that, even if you <em>were</em> to find the perfect wording or phrasing for your message, you’d still never be able to deliver your story to every single person in your target audience by yourself anyway. There will always be another person whom you’re trying to reach that you just haven’t found yet. So, is it hopeless? Is it simply impossible to effectively tell a story at scale if you don’t have massive resources?</p>
<p>It doesn’t have to be. We can start with one key insight about what it takes to get your most important stories out into the world. It’s a perspective that seems incredibly simple at first, but can lead to a pretty profound set of insights.</p>
<h2>They have to be able to talk about us <em>without us</em>.</h2>
<p>They have to be able to talk about us without us. What this phrase means, in its simplest form,  is that you have to tell a story so clear, so concise, so <em>memorable and evocative</em> that people can repeat it for you even after you’ve left the room. And the people who hear it need to be able to do this the <em>first time</em> they hear the story. Whether it’s the idea behind a new product, the core promise of a political campaign, or the basic takeaway from a persuasive essay (guess what the point of this one is!) — not only do you have to explain your idea and make your case, you have to be teaching your listener how to do the same thing for themselves.</p>
<p>This is a tall order, to be sure. In pop music, the equivalent is writing a hit where people feel like they can sing along to the chorus by the time they get to the end of the song for the first time. Not everybody has it in them to write a hook that good, but if you do, that thing is going to become a classic. And when someone <em>else</em> has done it, you know it because it gets stuck in your head. Sometimes you end up humming it to yourself even if you didn’t want to. Your best ideas — your most <em>vital</em> ideas — need to rest on a messaging platform that solid.</p>
<p>Delivering this kind of story actually requires substance. If you’re trying to fake it, or to force a narrative out of fluff or fakery, that will very immediately become obvious. When you set out to craft a story that travels in your absence, it has to have a body if it’s going to have legs. Bullshit is slippery and smells terrible, and the first thing people want to do when you leave the room is run away from it, not carry it with them.</p>
<h2>The mission is the message</h2>
<p>There’s another challenge to making a story that can travel in your absence: your ego has to let that happen. If you make a story that is effective and compelling enough that others can tell it, then, well…. those other people are going to tell it.  Not you. They’ll do it in their own words, and in their own voices, and make it <em>theirs</em>. They may use a similar story, but in their own phrasing, so it will resonate better with their people. This is a <em>gift</em>! They are doing you a kindness, and extending you great generosity. Respond with gratitude, and be wary of anyone who balks at not getting to be the voice or the face of a message themselves. Everyone gets a turn telling the story.</p>
<p>Maybe the simple fact that others will be hearing a good story for the first time will draw them to it, regardless of <em>who</em> the messenger is. Sometimes people get attached to the idea that <em>they</em> have to be the one to deliver the one true message. But a core precept of “talk about us without us” is that there’s a larger mission and goal that everyone is bought into, and this demands that everyone stay aligned to their values rather than to their own personal ambitions around who tells the story.</p>
<p>The truth of whomever will be most <em>effective</em> is the factor used to decide who will be the person to tell the story in any context. And this is a forgiving environment, because even if someone doesn’t get to be the voice one day, they’ll get another shot, since repetition and consistency are also key parts of this strategy, thanks to the disciplined approach that it brings to communication.</p>
<h2>The joy of communications discipline</h2>
<p>At nearly every organization where I’ve been in charge of onboarding team members in the last decade or so, one of the first messages we’ve presented to our new colleagues is, “We are disciplined communicators!” It’s a message that they hopefully get to hear as a joyous declaration, and as an assertion of our shared values. I always try to explicitly instill this value into teams I work with because, first, it’s good to communicate values explicitly, but also because this is a concept that is very seldom directly stated.</p>
<p>It is ironic that this statement usually goes unsaid, because nearly everyone who pays attention to culture understands the vital importance of disciplined communications. Brands that are strictly consistent in their use of things like logos, type, colors, and imagery get such wildly-outsized cultural impact in exchange for relatively modest investment that it’s mind-boggling to me that more organizations don’t insist on following suit. Similarly, institutions that develop and strictly enforce a standard tone of voice and way of communicating (even if the tone itself is playful or casual) capture an incredibly valuable opportunity at minimal additional cost relative to how much everyone’s already spending on internal and external communications.</p>
<p>In an era where every channel is being flooded with AI-generated slop, and when most of the slop tools are woefully incapable of being consistent about anything, simply showing up with an obviously-human, obviously-consistent story is a phenomenal way of standing out. That discipline demonstrates all the best of humanity: a shared ethos, discerning taste, joyful expression, a sense of belonging, an appealing consistency. And best of all, it represents the chance to participate for yourself — because it’s a message that you now know how to repeat for yourself.</p>
<p>Providing messages that individuals can pick up and run with on their own is a profoundly human-centric and empowering thing to do in a moment of rising authoritarianism. When the fascists in power are shutting down prominent voices for leveling critiques that they would like to censor, and demanding control over an increasingly broad number of channels, there’s reassurance in people being empowered to tell their own stories together. Seeing stories bubble up from the grassroots in collaboration, rather than being forced down upon people from authoritarians at the top, has an emotional resonance that only strengthens the substance of whatever story you’re telling.</p>
<h2>How to do it</h2>
<p>Okay, so it sounds great: Let’s tell stories that other people want to share! Now, uh… how do we do it? There are simple principles we can follow that help shape a message or story into one that is likely to be carried forward by a community on its own.</p>
<ul>
<li><strong>Ground it in your values.</strong> When we began telling the story of my last company Glitch, the conventional wisdom was that we were building a developer tool, so people would describe it as an “IDE” — an “integrated development environment”, which is the normal developer jargon for the tool coders use to write their code in. We <em>never</em> described Glitch that way. From <a href=https://web.archive.org/web/20170504080445/https://glitch.com/>day one</a>, we always said “Glitch is the friendly community where you'll build the app of your dreams” (later, “the friendly community where everybody builds the internet”). By talking about the site as a <em>friendly community</em> instead of an <code>integrated development environment</code>, it was crystal clear what expectations and norms we were setting, and what our values were. Within a few months, even our <em>competitors</em> were describing Glitch as a “friendly community” while they were trying to talk about how they were better than us about some feature or the other. That still feels like a huge victory — even the competition was talking about us without us! Make sure your message evokes the values you want people to share with each other, either directly or indirectly.</li>
<li><strong>Start with the principle.</strong> This is a topic I’ve covered before, but <a href=https://www.anildash.com/2022/01/31/you-have-to-start-with-the-principle/>you can't win unless you know what you're fighting for</a>. Identify concrete, specific, perhaps even <em>measurable</em> goals that are tied directly to the values that motivate your efforts. As <a href=https://www.anildash.com/2025/11/05/turn-the-volume-up/>noted recently</a>, Zohran Mamdani did this masterfully when running for mayor of New York City. While the <em>values</em> were affordability and the dignity of ordinary New Yorkers, the clear, understandable, measurable principle could be something as simple as “free buses”. This is a goal that everyone can get in 5 seconds, and can explain to their neighbor <em>the first time they hear it</em>. It’s a story that travels effortlessly on its own — and that people will be able to verify very easily when it’s been delivered. That’s a perfect encapsulation of “talk about us without us”.</li>
<li>**Know what makes you unique.**Another way of putting this is to simply make sure that you have a sense of self-awareness. But the story you tell about your work or your movement has to be <em>specific</em>. There can’t be platitudes or generalities or vague assertions as a core part of the message, or it will never take off. One of the most common failure states for this mistake is when people lean on <em>slogans</em>. Slogans can have their use in a campaign, for reminding people about the existence of a brand, or supporting broader messaging. But very often, people think a slogan <em>is</em> a story. The problem is that, while slogans are definitely repeatable, slogans are almost definitionally too vague and broad to offer a specific and unique narrative that will resonate. There’s no point in having people share something if it doesn’t say something. I usually articulate the challenge here like this:<strong>Only say what only <em>you</em> can say.</strong></li>
<li><strong>Be evocative, not comprehensive.</strong> Many times, when people are passionate about a topic or a movement, the temptation they have in telling the story is to work in <em>every little detail</em> about the subject. They often think, “if I include every detail, it will persuade more people, because they’ll know that I’m an expert, or it will convince them that I’ve thought of everything!” In reality, when people are not subject matter experts on a topic, or if they’re not already intrinsically interested in that topic, hearing a bunch of extensive minutia about it will almost always leave them feeling bored, confused, intimidated, condescended-to, or some combination of all of these. Instead, pick a small subset of the most <em>emotionally gripping</em> parts of your story, the aspects that have the deepest human connection or greatest relevance and specificity to the broadest set of your audience, and focus on telling those parts of the story as passionately as possible. If you succeed in communicating that initial small subset of your story effectively, then you may <em>earn</em> the chance to tell the other more complex and nuanced details of your story.</li>
<li><strong>Your enemies are your friends.</strong> Very often, when people are creating messages about advocacy, they’re focused on competition or rivals. In the political realm, this can be literal opposing candidates, or the abstraction of another political party. In the corporate world, this can be (real or imagined) competitive products or companies. In many cases, these other organizations or products or competitors occupy so much more mental space in your mind, or your team’s mind, than they do in the mind of your potential audience. Some of your audience has never heard of them at all. And a <em>huge</em> part of your audience thinks of you and your biggest rival as… basically the same thing. In a business or commercial context, customers can barely keep straight the difference between you and your competition — you’re both just part of the same amorphous blob that exists as “the things that occupy that space”. Your competitor may be the only other organization in the world that’s fighting just as hard as you are to create a market for the product that you’re selling. The same is true in the political space; sometimes the biggest friction arises over the narcissism of small differences. What we can take away from these perspectives is that our stories have to focus on what distinguishes us, yes, but also on what we might have in common with those whom we might otherwise have perceived to have been aligned with the “enemy”. Those folks might not have sworn allegiance to an opposing force; they may simply have chosen another option out of convenience, and not even seen that choice as being in opposition to your story at all.</li>
<li><strong>Find joy in repetition.</strong> Done correctly, a disciplined, collaborative, evocative message can become a mantra for a community. There’s a pride and enthusiasm that can come from people becoming proficient in sharing their own version of the collective story. And that means enjoying when that refrain comes back around, or when a slight improvement in the core message is discovered, and everyone finds a way to refine the way they’re communicating about the narrative. A lot of times, people worry that their team will get bored if they’re “just telling the same story over and over all the time”. In reality, as a brilliant man once said, <a href=https://youtu.be/FgP5VRp_myE>there’s joy in repetition</a>.</li>
<li><strong>Don’t obsess over exact wording.</strong> This one is tricky; you might say, “but you said we have to be disciplined communicators!” And it’s true: it’s important to be disciplined. But that doesn’t mean you can’t leave room for people to put their own spin on things. Let them translate to their own languages or communities. Let them augment a general principle with a specific, personal connection. If they have their own authentic experience which will amplify a story or drive a point home, let them weave that context into the consistent narrative that’s been shared over time. As long as you’re not enabling a “telephone game” where the story starts to morph into an unrecognizable form, it’s perfectly okay to add a human touch by going slightly off script.</li>
</ul>
<h2>Share the story</h2>
<p>Few things are more rewarding than when you find a meaningful narrative that resonates with the world. Stories have the power to change things, to make people feel empowered, to galvanize entire communities into taking action and recognizing their own power. There’s also a quiet reward in the craft and creativity of working on a story that travels, in finding notes that resonate with others, and in challenging yourself to get far enough out of your own head to get into someone else’s heart.</p>
<p>I still have so much to learn about being able to tell stories effectively. I still screw it up so much of the time, and I can look back on many times when I wish I had better words at hand for moments that sorely needed them. But many of the most meaningful and rewarding moments of my life have been when I’ve gotten to be in community with others, as we were not just sharing stories together, but <em>telling</em> a united story together. It unlocks a special kind of creativity that’s a lot bigger than what any one of us can do alone.</p>

    ]]></content>
    </entry>
    <entry>
        <title>What about “Nothing about us without us?”</title>
        <link href="https://anildash.com/2025/12/08/what-about-nothing-about-us/"/>
        <updated>2025-12-08T00:00:00Z</updated>
        <id>https://anildash.com/2025/12/08/what-about-nothing-about-us/</id>
        <content type="html"><![CDATA[
      <p>As I was drafting my last piece on Friday, “<a href="https://www.anildash.com/2025/12/05/talk-about-us-without-us/">They have to be able to talk about us without us</a>”, my thoughts of course went to one of the most famous slogans of the disability rights movement, “<a href="https://en.wikipedia.org/wiki/Nothing_about_us_without_us">Nothing about us without us.</a>” I wasn’t unaware that there were similarities in the phrasing of what I wrote. But I think the topic of communicating effectively to groups, as I wrote about the other day, and ensuring that disabled people are centered in disability advocacy, are such different subjects that I didn’t want to just quickly gloss over the topic in a sidebar of a larger piece. They're very distinct topics that really only share a few words in common.</p>
<p>One of the great joys of becoming friends with a number of really thoughtful and experienced disability rights activists over the last several years has been their incredible generosity in teaching me about so much of the culture and history of the movements that they’ve built their work upon, and one of the most powerful slogans has been that refrain of “nothing about us without us”.</p>
<p>Here I should start by acknowledging Alice Wong, who we recently lost, who founded the <a href="https://disabilityvisibilityproject.com/about/">Disability Visibility Project</a>, and a MacArthur Fellow, and a tireless and inventive advocate for everyone in the disabled community. She was one of the first people to bring me in to learning about this history and these movements, more than a decade ago. She was also a patient and thoughtful teacher, and over our many conversations over the years, she did more than anyone else in my life to truly <em>personify</em> the spirit of “nothing about us without us” by fighting to ensure that disabled people led the work to make the world accessible for all. If you have the chance, learn about her work, and <a href="https://www.gofundme.com/f/Alice-Wongs-Legacy">support it</a>.</p>
<p>But a key inflection point in my own understanding of “nothing about us without us” came, unsurprisingly, in the context of how disabled people have been interacting with technology. I used to host a podcast called Function, and we did an episode about how inaccessible so much of contemporary technology has become, and how that kind of ruins things for everyone. (The episode is still up on <a href="https://open.spotify.com/episode/0IN2nQWUqmQnAMxNLN85WE">Spotify</a> and <a href="https://podcasts.apple.com/us/podcast/function-with-anil-dash/id1439658455?i=1000452883786">Apple Podcasts</a>.)  We had on <a href="https://emilyladau.com">Emily Ladau</a> of <a href="https://www.theaccessiblestall.com">The Accessible Stall</a> podcast, <a href="https://alexhaagaard.com">Alex Haagaard</a> of <a href="https://www.disabledlist.org">The Disabled List</a>, and <a href="https://www.vilissathompson.com">Vilissa Thompson</a> of <a href="https://www.rampyourvoice.com">Ramp Your Voice</a>. It’s well worth a listen, and Emily, Alex and Vilissa really do an amazing job of pointing to really specific, really evocative examples of <em>obvious</em> places where today’s tech world could be so much more useful and powerful for everyone if its creators were making just a few simple changes.</p>
<p>What’s striking to me now, listening to that conversation six years later, is how little has changed from the perspective of the technology world, but also how much my own lived experience has come to reflect so much of what I learned in those conversations.</p>
<p>Each of them was the &quot;us&quot; in the conversation, using their own personal experience, and the experience of other disabled people that they were in community with, to offer specific and personal insights that the creators of these technologies did not have. And whether it was for reasons of crass commercial opportunism — here's some money you could be making! — or simply because it was the right thing to do morally, it's obvious that the people making these technologies could benefit by honoring the principle of centering these users of their products.</p>
<h2>Taking our turn</h2>
<p>I’ve had this conversation on various social media channels in a number of ways over the years, but another key part of understanding the “us” in “nothing about us without us” when it comes to disability, is that the “us” is <em>all of us</em>, in time. It's very hard for many people who haven’t experienced it to understand that everyone should be accommodated and supported, because everyone is disabled; it’s only a question of when and for how long.</p>
<p>In contemporary society, we’re given all kinds of justifications for why we can’t support everyone’s needs, but so much of those are really grounded in simply trying to convince ourselves that a disabled person is <em>someone else</em>, an “other” who isn’t worthy or deserving of our support. I think deep down, everyone knows better. It’s just that people who don’t (yet) identify as disabled don’t really talk about it very much.</p>
<p>In reality, we'll all be disabled. Maybe you're in a moment of respite from it, or in that brief window before the truth of the inevitability of it has been revealed to you (sorry, spoiler warning!), but it's true for all of us — even when it's not visible. That means all of us have to default to supporting and uplifting and empowering the people who are disabled today. This was the key lesson that I didn’t really get personally until I started listening to those who were versed in the history and culture of disability advocacy, about how the patronizing solutions were often harmful, or competing for resources with the <em>right</em> answers.</p>
<p>I’ve had my glimpses of this myself. Back in 2021, I had Lyme disease. I didn’t get it as bad as some, but it did leave me physically and mentally unable to function as I had been used to, for several months. I had some frame of reference for physical weakness; I could roughly compare it to a bad illness like the flu, even if it wasn’t exactly the same. But a diminished <em>mental</em> capacity was unlike anything I had ever experienced before, and was profoundly unsettling, deeply challenging my sense of self. After the <a href="https://www.anildash.com/2022/07/18/i-went-to-a-coffee-shop/">incident I’d described in 2022</a>, I had a series of things to recover from physically and mentally that also presented a significant challenge, but were especially tough because so much of people’s willingness to accommodate others is based on any disability being <em>visible</em>. Anything that’s not immediately perceived at a superficial level, or legible to a stranger in a way that’s familiar to them, is generally dismissed or seen as invalid for support.</p>
<p>I point all of this out not to claim that I fully understand the experience of those who live with truly serious disabilities, or to act as if I know what it’s been like for those who have genuinely worked to advocate for disabled people. Instead, I think it can often be useful to show how porous the boundary is between people who <em>don’t</em> think of themselves as disabled and those who already know that they are. And of course this does <em>not</em> mean that people who aren't currently disabled can speak on behalf of those who are — that's the whole point of &quot;nothing about us without us&quot;! — but rather to point out that the time to begin building your empathy and solidarity is now, not when you suddenly have the realization that you're part of the community.</p>
<h2>Everything about us</h2>
<p>There’s a righteous rage that underlies the cry of “nothing about us without us”, stemming from so many attempts to address the needs of disabled people having come from those outside the community, arriving with plans that ranged from inept to evil. We’re in a moment when the authoritarians in charge in so much of the world are pushing openly-eugenicist agendas that will target disabled people first amongst the many vulnerable populations that they’ll attempt to attack. Challenging economic times like the one we’re in affect disabled people significantly harder as the job market disproportionately shrinks in opportunities for the disabled first.</p>
<p>So it’s going to take all of us standing in solidarity to ensure that the necessary advocacy and support are in place for what promises to be an extraordinarily difficult moment. But I take some solace and inspiration from the fact that there are so many disabled people who have provided us with the clear guidance and leadership we need to navigate this moment. And there is simple guidance we can follow when doing so to ensure that we’re centering the right leaders, by listening to those who said, “nothing about us without us.”</p>

    ]]></content>
    </entry>
    <entry>
        <title>How the hell are you supposed to have a career in tech in 2026?</title>
        <link href="https://anildash.com/2026/01/05/a-tech-career-in-2026/"/>
        <updated>2026-01-05T00:00:00Z</updated>
        <id>https://anildash.com/2026/01/05/a-tech-career-in-2026/</id>
        <content type="html"><![CDATA[
      <p>The number one question I get from my friends, acquaintances, and mentees in the technology industry these days is, by far, variations on the basic theme of, “what the hell are we supposed to do now?”</p>
<p>There have been mass layoffs that leave more tech workers than ever looking for new roles in the worst market we’ve ever seen. Many of the most talented, thoughtful and experienced people in the industry are feeling worried, confused, and ungrounded in a field that no longer looks familiar.</p>
<p>If you’re outside the industry, you may be confused — isn’t there an AI boom that’s getting hundreds of billions of dollars in investments? Doesn’t that mean the tech bros are doing great? What you may have missed is that half a million tech workers have been laid off in the years since ChatGPT was released; the same attacks on marginalized workers and DEI and “woke” that the tech robber barons launched against the rest of society were aimed at their own companies first.</p>
<p>So the good people who actually <em>make</em> the technology we use every day, the real innovators and creators and designers, are reacting to the unprecedented disconnect between the contemporary tech industry and the fundamentals that drew so many people toward it in the first place. Many of the biggest companies have abandoned the basic principle of making technology that actually <em>works</em>. So many new products fail to deliver on even the basic capabilities that the companies are promising that they will provide.</p>
<p>Many leaders at these companies have run full speed towards moral and social cowardice, abandoning their employees and customers to embrace rank hatred and discrimination in ways that they pretended to be fighting against just a few years ago. Meanwhile, unchecked consolidation has left markets wildly uncompetitive, leaving consumers suffering from the effects of categories without any competition or investment — which we know now as “enshittification”. And the full-scale shift into corruption and crony capitalism means that winners in business are decided by whoever is shameless enough to offer the biggest bribes and debase themselves with the <a href="https://www.anildash.com/2025/09/09/how-tim-cook-sold-out-steve-jobs/">most humiliating display</a> of groveling. It’s a depressing shift for people who, earlier in their careers, often actually <em>were</em> part of inventing the future.</p>
<p>So where do we go from here?</p>
<h2>You’re not crazy.</h2>
<p>The first, and most important, thing to know is that <em>it’s not just you</em>. Nearly everyone in tech I have this conversation with feels very isolated about it, and they’re often embarrassed or ashamed to discuss it. They think that everyone else who has a job in tech is happy or comfortable at their current employers, or that the other people looking for work are getting calls back or are being offered interviews in response to their job applications. But I’m here to tell you: it is grim right now. About as bad as I’ve seen. And I’ve been around a long time.</p>
<p>Every major tech company has watched their leadership abandon principles that were once thought sacrosanct. I’ve heard more people talk about losing respect for executives they trusted, respected, even <em>admired</em> in the last year than at any time I can remember. In smaller companies and other types of organizations, the challenges have been more about the hard choices that come from dire resource constraints or being forced to make ugly ethical compromises for pragmatic reasons. The net result is tons of people who have lost pride and conviction in their work. They’re going through the motions for a paycheck, because they know it’s a tough job market out there, which is a miserable state of affairs.</p>
<p>The public narrative is dominated by the loud minority of dudes who are content to appease the egos of their bosses, sucking up to the worse impulses of those in charge. An industry that used to pride itself on publicly reporting security issues and openly disclosing vulnerabilities now circles its wagons to gang up on people who suggest that an AI tool shouldn’t tell children to harm themselves, that perhaps it should be possible to write a law limiting schools from deploying AI platforms that are known to tell kids to end their own lives. People in tech endure their bosses using slurs at work, making jokes about sexual assault, consorting with leaders who have directly planned the murder of journalists, engaging in open bribery in blatant violation of federal law and their own corporate training on corruption, and have to act like it’s normal.</p>
<p>But it’s not the end of the world. The forces of evil have not yet triumphed, and all hope is not lost. There are still things we can do.</p>
<h2>Taking back control</h2>
<p>It can be easy to feel overwhelmed at such an unprecedented time in the industry, especially when there’s so much change happening. But there are concrete actions you can take to have agency over your own career, and to insulate yourself from the bad actors and maximize your own opportunities — even if some of those bad actors are your own bosses.</p>
<h3>Understanding systems</h3>
<p>One of the most important things you can do is to be clear about your own place, and your own role, within the systems that you are part of. A major factor in the changes that bosses are trying to effect with the deployment of AI is shifting the role of workers within the systems in their organization to make them more replaceable.</p>
<p>If you’re a coder, and you think your job is to make really good code in a particular programming language, you might double down on getting better at the details of that language. But that’s almost certainly misunderstanding the system that your company thinks you’re part of, where the code is just a means to the end of creating a final product. In that system-centric view, the programming language, and indeed all of the code itself, doesn’t really matter; the person who is productive at causing all of that code to be created reliably and efficiently is the person who is going to be valued, or at least who is most likely to be kept around. That may not be satisfying or reassuring if you truly love coding, but at least this perspective can help you make informed decisions about whether or not that organization is going to make choices that respect the things you value.</p>
<p>This same way of understanding systems can apply if you’re a designer or a product manager or a HR administrator or anything else. As I’ve covered before, <a href= "https://anildash.com/2024/05/28/systems-the-purpose-of-a-system/">the purpose of a system is what it does</a>, and that truth can provide some hard lessons if we find it’s in tension with the things we <em>want</em> to be doing for an organization. The system may not value the things we do, or it may not value them enough; the way they phrase this to avoid having to say it directly is by describing something as “inefficient”. Then, the question you have to ask yourself is, can you care about this kind of work or this kind of program at one level higher up in the system? Can it still be meaningful to you if it’s slightly more abstract? Because that may be the requirement for navigating the expectations that technology organizations will be foisting on everyone through the language of talking about “adopting AI”.</p>
<h3>Understanding power</h3>
<p>Just as important as understanding systems is understanding <em>power</em>. In the workplace, power is something real. It means being able to control how money is spent. It means being able to make decisions. It means being able to hire people, or fire them. Power is being able to say no.</p>
<p>You probably don’t have enough power; that’s why you have worries. But you almost certainly have more power than you think, it’s just not as obvious how to wield it. The most essential thing to understand is that you will need to collaborate with your peers to exercise collective power for many of the most significant things you may wish to achieve.</p>
<p>But even at an individual level, a key way of understanding power in your workplace is to consider the systems that you are part of, and then to reckon with which ones you can meaningfully change from your current position. Very often, people will, in a moment of frustration, say “this place couldn’t run without me!” And companies will almost always go out of their way to prove someone wrong if they hear that message.</p>
<p>On the other hand, if you identify a system for operating the organization that no one else has envisioned, you’ve already <em>demonstrated</em> that this part of the organization couldn’t run without you, and you don’t need to say it or prove it. There is power in the mere action of creating that system. But a lot depends on where you have both the positional authority and the social permission to actually accomplish that kind of thing.</p>
<p>So, if you’re dissatisfied with where you are, but have not decided to leave your current organization, then your first orders of business in this new year should be to consolidate power through building alliances with peers, and by understanding which fundamental systems of your organization you can define or influence, and thus be in control of. Once you’ve got power, you’ve got options.</p>
<h3>Most tech isn’t “tech”</h3>
<p>So far, we’re talking about very abstract stuff. What do we do if your job sucks right now, or if you don’t have a job today and you really need one? After vague things like systems and power, then what?</p>
<p>Well, an important thing to understand, if you care about innovation and technology, is that the vast majority of technology doesn’t happen in the startup world, or even in the “tech industry”. Startups are only a tiny fraction of the entire realm of companies that create or use technology, and the giant tech companies are only a small percentage of all jobs or hiring within the tech realm.</p>
<p>So much opportunity, inspiration, creativity, and possibility lies in applying the skills and experience that you may have from technological disciplines in other realms and industries that are often far less advanced in their deployment of technologies. In a lot of cases, these other businesses get taken advantage of for their lack of experience — and in the non-profit world, the lack of tech expertise or fluency is often exploited by both the technology vendors and bad actors who swoop in to capitalize on their vulnerability.</p>
<p>Many of the people I talk to who bring their technology experience to other fields also tell me that the culture in more traditional industries is often less toxic or broken than things in Silicon Valley (or Silicon Valley-based) companies are these days, since older or more established companies have had time to work out the more extreme aspects of their culture. It’s an extraordinary moment in history when people who work on Wall Street tell me that even <em>their</em> HR departments wouldn’t put up with the kind of bad behavior that we’re seeing within the ranks of tech company execs.</p>
<h3>Plan for the long term</h3>
<p>This too shall pass. One of the great gifts of working in technology is that it’s given so many of us the habit of constantly learning, of always being curious and paying attention to the new things worth discovering. That healthy and open-minded spirit is an important part of how to navigate a moment when lots of people are being laid off, or lots of energy and attention are being focused on products and initiatives that don’t have a lot of substance behind them.
Eventually, people will want to return to what’s real. The companies that focus on delivering products with meaning, and taking care of employees over time, will be the ones that are able to persist past the current moment. So building habits that enable resiliency at both a personal and professional level is going to be key.</p>
<p>As I’ve been fond of saying for a long time: don’t let your job get in the way of your career.</p>
<p>Build habits and routines that serve your own professional goals. As much as you can, participate in the things that get your name out into your professional community, whether that’s in-person events in your town, or writing on a regular basis about your area of expertise, or mentoring with those who are new to your field. You’ll never regret building relationships with people, or being generous with your knowledge in ways that remind others that you’re great at what you do.</p>
<p>If your time and budget permit, attend events in person or online where you can learn from others or respond to the ideas that others are sharing. The more people can see and remember that you’re engaged with the conversations about your discipline, the greater the likelihood that they’ll reach out when the next opportunity arises.</p>
<p>Similarly, take every chance you can to be generous to others when you see a door open that might be valuable for them. I can promise you, people will <em>never</em> forget that you thought of them in their time of need, even if they don’t end up getting that role or nabbing that interview.</p>
<h2>It’s an evolution, not a resolution</h2>
<p>New years are often a time when people make a promise to themselves about how they’re going to change everything. If I can just get this new notebook to write in, I’m suddenly going to become a person who keeps a journal, and that will make me a person who’s on top of everything all the time.</p>
<p>But hopefully you can see, many of the challenges that so many people are facing are systemic, and aren’t the result of any personal failings or shortcomings. So there isn’t some heroic individual change that you can make when you flip over to a new calendar month that will suddenly fix all the things.</p>
<p>What you can control, though, are small iterative things that make you feel better on a human scale, in little ways, when you can. You can help yourself maintain perspective, and you can do the same for those around you who share your values, and who care about the same personal or professional goals that you do.</p>
<p>A lot of us still care about things like the potential for technology to help people, or still believe in the idealistic and positive goals that got us into our careers in the first place. We weren’t wrong, or naive, or foolish to aspire to those goals simply because some bad actors sought to undermine them. And it’s okay to feel frustrated or scared in a time when it seems to many like those goals could be further away than they’ve been in a long time.</p>
<p>I do hope, though, that people can see that, by sticking together, and focusing on the things that are within our reach, things can begin to change. All it takes is remembering that the power in tech truly rests with all the people who actually <em>make</em> things, not with the loudmouths at the top who try to tear things down.</p>

    ]]></content>
    </entry>
    <entry>
        <title>500,000 tech workers have been laid off since ChatGPT was released</title>
        <link href="https://anildash.com/2026/01/06/500k-tech-workers-laid-off/"/>
        <updated>2026-01-06T00:00:00Z</updated>
        <id>https://anildash.com/2026/01/06/500k-tech-workers-laid-off/</id>
        <content type="html"><![CDATA[
      <p>One of the key points I repeated when <a href="https://www.anildash.com/2026/01/05/a-tech-career-in-2026/">talking about the state of the tech industry yesterday</a> was the salient fact that <em>half a million tech workers have been laid off since ChatGPT was released in late 2022</em>. Now, to be clear, those workers haven’t been laid off because their jobs are now being done by AI, and they’ve been replaced by bots. Instead, they’ve been laid off by execs who now have AI to use as an excuse for going after workers they’ve wanted to cut all along.</p>
<p>This is important to understand for a few reasons. First, it’s key just for having empathy for both the mindset and the working conditions of people in the tech industry. For so many outside of tech, their impression of what “tech” means is whatever is the most recent transgression they’ve heard about from the most obnoxious billionaire who’s made the news lately. But in many cases, it’s the rank and file workers at that person’s company who were the first victims of that billionaire’s ego.</p>
<p>Second, it’s important to understand the big tech companies as almost the testing grounds for the techniques and strategies that these guys want to roll out on the rest of the economy, and on the rest of the world. Before they started going on podcasts pretending to be extremely masculine while whining about their feelings, or overtly bribing politicians to give them government contracts, they beta-tested these manipulative strategies within their companies by cracking down on dissent and letting their most self-indulgent and egomaniacal tendencies run wild. Then, when people (reasonably!) began to object, they used that as an excuse to purge any dissenters for being uncooperative or “difficult”.</p>
<h2>It starts with tech, but doesn’t end there</h2>
<p>These are tactics they’ll be bringing to other industries and sectors of the economy, if they haven’t already. Sometimes they’ll be providing AI technologies and tools as an enabler or justification for the cultural and political agenda that they’re enacting, but often times, they don’t even need to. In many cases, they can simply make clear that they want to enforce psychological and social conformity within their organizations, and that any disagreement will not be tolerated, and the implicit threat of being replaced by automation (or by other workers who are willing to fall in line) is enough to get people to comply.</p>
<p>This is the subtext, and sometimes the explicit text, of the deployment of “AI” in a lot of organizations. That’s separate from what actual AI software or technology can do. And it explains a lot of why the <a href="https://www.anildash.com/2025/10/17/the-majority-ai-view/">majority AI view</a> within the tech industry is nothing like the hype cycle that’s being pushed by the loudest voices of the big-name CEOs.</p>
<p>Because people who work in tech still believe in the power of tech to do good things, many of us won’t just dismiss outright the possibility that any technology — even AI tools like LLMs — could yield some benefits. But the optimistic takes are tempered by the first-hand knowledge of how the tools are being used as an excuse to sideline or victimize good people.</p>
<p>This wave of layoffs and reductions has been described as “pursuing efficiencies” or “right-sizing”. But so many of us in tech can remember a few years back, when working in tech as an upwardly-mobile worker with a successful career felt like the best job in the world. When many people could buy nice presents for their kids at Christmas or they weren’t as worried about your car payments. When huge parts of society were promising young people that there was a great future ahead if they would just learn to code. When the promise of a tech career’s potential was used as the foundation for building infrastructure in our schools and cities to train a whole new generation of coders.</p>
<p>But the funders and tycoons in charge of the big tech companies <em>knew</em> that they did not want to keep paying enormous salaries to the people they were hiring. They certainly knew they didn’t want to keep paying huge hiring bonuses to young people just out of college, or to pay large staffs of recruiters to go find underrepresented candidates. Those niceties that everybody loved, like great healthcare and decent benefits, were identified by the people running the big tech companies as “market inefficiencies” which indicated some wealth was going to you that should have been going to <em>them</em>. So yes, part of the reason for the huge investment in AI coding tools was to make it easier to write code. But another huge reason that AI got so good at writing code was so that nobody would ever have to pay coders so well again.</p>
<p>You’re not wrong if you feel angry, resentful and overwhelmed by all of this; indeed, it would be absurd if you <em>didn’t</em> feel this way, since the wealthiest and most powerful people in the history of the world have been spending a few years trying to make you feel exactly this way. Constant rotating layoffs and a nonstop fear of further cuts, with a perpetual sense of precarity, are a deliberate strategy so that everyone will accept lower salaries and reduced benefits, and be too afraid to push for the exact same salaries that the company could afford to pay the year before.</p>
<h2>Why are we stirring the pot?</h2>
<p>Okay, so are we just trying to get each other all depressed? No. It’s just vitally important that we name a problem and identify it if we’re going to solve it.

Most people outside of the technology industry think that “tech” is a monolith, that the people who work in tech are the same as the people who <em>own</em> the technology companies. They don’t know that tech workers are in the same boat that they are, being buffeted by the economy, and being subject to the whims of their bosses, or being displaced by AI. They don’t know that the DEI backlash has gutted HR teams at tech companies, too, for example. So it’s key for everyone to understand that they’re starting from the same place.</p>
<p>Next, it’s key to tease apart things that are separate concerns. For example: AI is often an <em>excuse</em> for layoffs, not the cause of them. ChatGPT didn’t replace the tasks that recruiters were doing in attracting underrepresented candidates at big tech companies — the bosses just don’t care about trying to hire underrepresented candidates anymore! The tech story is being used to mask the political and social goal. And it’s important to understand that, because otherwise people waste their time fighting battles that might not matter, like the deployment of a technology system, and losing the ones that do, like the actual decisions that an organization is making about its future.</p>
<h2>Are they efficient, though?</h2>
<p>But what if, some people will ask, these companies just had <em>too many people</em>? What if they’d over-hired? The folks who want to feel really savvy will say, “I heard that they had all those employees because interest rates were low. It was a Zero Interest Rate Phenomenon.” This is, not to put too fine a point on it, bullshit. It’s not in any company’s best interests to cut their staffing down to the bone.</p>
<p>You actually <em>need</em> to have some reserve capacity for labor in order to reach maximum output for a large organization. This is the difference between a large-scale organization and a small one. People sitting around doing nothing is the epitome of waste or inefficiency in a small team, but in a large organization, it’s a lot more costly if you are about to start a new process or project and you don’t have labor capacity or expertise to deploy.</p>
<p>A good analogy is the oft-cited need these days for people to be bored more often. There’s a frequent lament that, because people are so distracted by things like social media and constant interruptions, they never have time to get bored and let their mind wander, and think new thoughts or discover their own creativity. Put another way, they never get the chance to tap into their own cognitive surplus.</p>
<p>The only advantage a large organization can have over a small one, other than sheer efficiencies of scale, is if it has a cognitive surplus that it can tap into. By destroying that cognitive surplus, and leaving those who remain behind in a state of constant emotional turmoil and duress, these organizations are permanently damaging both their competitive advantages and their potential future innovations.</p>
<h2>AI Spring</h2>
<p>When the dust clears, and people realize that extreme greed is never the path to maximum long-term reward, there is going to be a “peace dividend” of sorts from all the good talent that’s now on the market. Some of this will be smart, thoughtful people flowing to other industries or companies, bringing their experience and insights with them.</p>
<p>But I think a lot of this will be people starting their own new companies and organizations, informed by the broken economic models, and broken <em>human</em> models, of the companies they’ve left. We saw this a generation ago after the bust of the dot-com boom, when it was not only revealed that the economics of a lot of the companies didn’t work, but that so many of the people who had created the companies of that era didn’t even care about the markets or the industries that they’d entered. When the get-rich-quick folks left the scene, those of us who remained, who truly loved the web as a creative and expressive medium, found a ton of opportunity in being the little mammals amidst the sad dinosaurs trying to find funding for meteor dot com.</p>
<h2>What comes next</h2>
<p>I don’t think this all gets better very quickly. If you put aside the puffery of the AI companies scratching each others’ backs, it’s clear the economy is in a recession, even if this administration’s goons have shut down reporting on jobs and inflation in a vain attempt to hide that reality. But I do think there may be more resilience because of the sheer talent and entrepreneurial skill of the people who are now on the market as individuals.</p>

    ]]></content>
    </entry>
    <entry>
        <title>How Markdown took over the world</title>
        <link href="https://anildash.com/2026/01/09/how-markdown-took-over-the-world/"/>
        <updated>2026-01-09T00:00:00Z</updated>
        <id>https://anildash.com/2026/01/09/how-markdown-took-over-the-world/</id>
        <content type="html"><![CDATA[
      <p>Nearly every bit of the high-tech world, from the most cutting-edge AI systems at the biggest companies, to the casual scraps of code cobbled together by college students, is annotated and described by the same, simple plain text format. Whether you’re trying to give complex instructions to ChatGPT, or you want to be able to exchange a grocery list in Apple Notes or copy someone’s homework in Google Docs, that same format will do the trick. The wild part is, the format wasn’t created by a conglomerate of tech tycoons, it was created by a curmudgeonly guy with a kind heart who right this minute is probably rewatching a Kubrick film while cheering for an absolutely indefensible sports team.</p>
<p>But it’s worth understanding how these simple little text files were born, not just because I get to brag about how generous and clever my friends are, but also because it reminds us of how the Internet <em>really</em> works: smart people think of good things that are crazy enough that they <em>just might work</em>, and then they give them away, over and over, until they slowly take over the world and make things better for everyone.</p>
<h2>Making Their Mark</h2>
<p>Though it’s now a building block of the contemporary Internet, like so many great things, <a href="https://daringfireball.net/projects/markdown/">Markdown</a> just started out trying to solve a personal problem. In 2002, John Gruber made the unconventional decision to bet his online career on two completely irrational foundations: Apple, and blogs.</p>
<p>It’s hard to remember now, but in 2002, Apple was just a few years past having been on death’s door. As difficult as it may be to picture in today’s world where Apple keynotes are treated like major events, back then, almost nobody was covering Apple regularly, let alone writing <em>exclusively</em> about the company. There was barely even an “tech news” scene online at all, and virtually no one was blogging. So John’s decision to go all-in on Apple for his pioneering blog <a href="https://daringfireball.net">Daring Fireball</a> was, well, a daring one. At the time, Apple had only <em>just</em> launched its first iPod that worked with Windows computers, and the iPhone was still a full five years in the future. But that single-minded focus, not just on Apple, but on obsessive detail in everything he covered, eventually helped inspire much of the technology media landscape that we see today. John’s timing was also perfect — from the doldrums of that era, Apple’s stock price would rise by about 120,000% in the years after Daring Fireball started, and its cultural relevance probably increased by even more than that.</p>
<p>By 2004, it wasn’t just Apple that had begun to take off: blogs and social media themselves had moved from obscurity to the very center of culture, and <a href="https://cybercultural.com/p/internet-2004/">a new era of web technology had begun</a>. At the beginning of that year, few people in the world even knew what a “blog” was, but by the end of 2004, blogs had become not just ubiquitous, but downright <em>cool</em>. As unlikely as it seems now, that year’s largely uninspiring slate of U.S. presidential candidates like Wesley Clark, Gary Hart and, yes, <a href="https://en.wikipedia.org/wiki/Howard_Dean_2004_presidential_campaign">Howard Dean</a> helped propel blogs into mainstream awareness during the Democratic primaries, alongside online pundits who had begun weighing in on politics and the issues and cultural moments at a pace that newspapers and TV couldn’t keep up with. A lot has been written about the transformation of media during those years, but less has been written about how the media and tech of the time transformed <em>each other</em>.</p>
<p><img src="/images/gary-hart-blog.JPG" alt="A photo from 2004 of a TV screen showing CNN, with a ticker saying &quot;Gary Hart Cyber Campaign Starts blog for possible 2004 presidential bid&quot;"></p>
<p>That era of early blogging was interesting in that nearly everyone who was writing the first popular sites was also busy helping <em>create</em> the tools for publishing them. Just like Lucille Ball and Desi Arnaz had to pioneer combining studio-style flat lighting with 35mm filming in order to define the look of the modern sitcom, or Jimi Hendrix had to work with Roger Mayer to invent the signature guitar distortion pedals that defined the sound of rock and roll, the pioneers who defined the technical format and structures of blogging were often building the very tools of creation as they went along.</p>
<p>I got a front row seat to these acts of creation. At the time I was working on Movable Type, which was the most popular tool for publishing “serious” blogs, and helped popularize the medium. Two of my good friends had built the tool and quickly made it into the default choice for anybody who wanted to reach a big audience; it was kind of a combination of everything people do these days on WordPress and all the various email newsletter platforms and all of the “serious” podcasts (since podcasts wouldn’t be invented for another few months). But back in those early days, we’d watch people use our tools to set up Gawker or Huffington Post one day, and Daring Fireball or Waxy.org the next, and each of them would be the first of its kind, both in terms of its design and its voice. To this day, when I see something online that I love by Julianne Escobedo Shepherd or Ta-Nehisi Coates or Nilay Patel or Annalee Newitz or any one of dozens of other brilliant writers or creators, my first thought is often, “hey! They used to type in that app that I used to make!” Because sometimes those writers would inspire us to make a new feature in the publishing tools, and sometimes they would have hacked up a new feature all by themselves in between typing up their new blog posts.</p>
<p>A really clear, and very simple, early example of how we learned that lesson was when we changed the size of the box that people used to type in just to create the posts on their sites. We made the box a little bit taller, mostly for aesthetic reasons. Within a few weeks, we’d found that posts on sites like Gawker had gotten longer, <em>mostly because the box was bigger</em>. This seems obvious now, years after we saw tweets get longer when Twitter expanded from 140 characters to 280 characters, but at the time this was a terrifying glimpse at how much power a couple of young product managers in a conference room in California would have over the media consumption of the entire world every time they made a seemingly-insignificant decision.</p>
<p>The <em>other</em> dirty little secret was, typing in the box in that old blogging app could be… pretty wonky sometimes. People who wanted to do normal things like include an image or link in their blog post, or even just make some text bold, often had to learn somewhat-obscure HTML formatting, memorizing the actual language that’s used to make web pages. Not everybody knew all the details of how to make pages that way, and if they made even one small mistake, sometimes they could break the whole design of their site. It made things feel very fraught every time a writer went to publish something new online, and got in the way of the increasingly-fast pace of sharing ideas now that social media was taking over the public conversation.</p>
<p>Enter John and his magical text files.</p>
<p><img src="/images/markdown-text-hero-slice.jpg" alt=""></p>
<h2>Marking up and marking down</h2>
<p>The purpose of Markdown is really simple: It lets you use the regular characters on your keyboard which you already use while typing out things like emails, to make fancy formatting of text for the web. That HTML format that’s used to make web pages stands for HyperText Markup Language. The word “markup” there means you’re “marking up” your text with all kinds of special characters.
Only, the special characters can be kind of arcane. Want to put in a link to everybody’s favorite website? Well, you’re going to have to type in <code>&lt;a href=&quot;https://anildash.com/&quot;&gt;Anil Dash’s blog&lt;/a&gt;</code> I could explain why, and what it all means, but honestly, you get the point — it’s a lot! Too much. What if you could just write out the text and then the link, sort of like you might within an email? Like: <code>[Anil Dash’s blog](https://anildash.com)</code>! And then the right thing would happen. Seems great, right?</p>
<p>The same thing works for things like putting a header on a page. For example, as I’m writing this right now, if I want to put a big headline on this page, I can just type <code>#How Markdown Took Over the World</code> and the right thing will happen.</p>
<p>If mark_up_ is complicated, then the opposite of that complexity must be… markd_own_. This kind of solution, where it’s so smart it seems obvious in hindsight, is key to Markdown’s success. John worked to make a format that was so simple that anybody could pick it up in a few minutes, and powerful enough that it could help people express pretty much anything that they wanted to include while writing on the internet. At a technical level, it was also easy enough to implement that John could write the code himself to make it work with Movable Type, his publishing tool of choice. (Within days, people had implemented the same feature for most of the other blogging tools of the era; these days, virtually every app that you can type text into ships with Markdown support as a feature on day one.)</p>
<p>Prior to launch, John had enlisted our mutual friend, the late, dearly missed <a href="http://www.aaronsw.com">Aaron Swartz</a>, as a beta tester. In addition to being extremely fluent in every detail of the blogging technologies of the time, Aaron was, most notably, seventeen years old. And though Aaron’s activism and untimely passing have resulted in him having been turned into something of a mythological figure, one of the greatest things about Aaron was that he could be a total pain in the ass, which made him <em>terrific</em> at reporting bugs in your software. (One of the last email conversations I ever had with Aaron was him pointing out some obscure bugs in an open source app I was working on at the time.) No surprise, Aaron instantly understood both the potential and the power of Markdown, and was a top-tier beta tester for the technology as it was created. His astute feedback helped finely hone the final product so it was ready for the world, and when Markdown <a href="https://daringfireball.net/2004/03/introducing_markdown">quietly debuted in March of 2004</a>, it was clear that text files around the web were about to get a permanent upgrade.</p>
<p>The most surprising part of what happened next wasn’t that everybody immediately started using it to write their blogs; that was, after all, what the tool was designed to do. It’s that everybody started using Markdown to do <em>everything else</em>, too.</p>
<h2>Hitting the Mark</h2>
<p>It’s almost impossible to overstate the ubiquity of Markdown within the modern computer industry in the decades since its launch.</p>
<p>After being nagged about it by users for more than a decade, Google finally <a href="https://www.theverge.com/2022/3/29/23002138/google-docs-markdown-support-formatting-update">added support for Markdown to Google Docs</a>, though it took them years of fiddly improvements to make it truly usable. Just last year, Microsoft added support for Markdown to its <a href="https://www.theverge.com/news/677474/microsoft-windows-notepad-bold-italic-text-formatting-markdown-support">venerable Notepad app</a>, perhaps in attempt to assuage the tempers of users who were still in disbelief that Notepad had been bloated with AI features. Nearly every powerful group messaging app, from Slack to WhatsApp to Discord, has support for Markdown in messages. And even the company that indirectly inspired all of this in the first place finally got on board: the most recent version of Apple Notes <a href="https://apple.gadgethacks.com/how-to/ios-26-notes-app-finally-gets-markdown-support-this-fall/">finally added support</a> for Markdown. (It’s an especially striking launch by Apple due to its timing, shortly after John had used his platform as the most influential Apple writer in the world to <a href="https://daringfireball.net/2025/03/something_is_rotten_in_the_state_of_cupertino">blog about the utter failure</a> of the “Apple Intelligence” AI launch.)</p>
<p>But it’s not just the apps that you use on your phone or your laptop. For developers, Markdown has long been the lingua franca of the tools we string together to accomplish our work. On GitHub, the platform that nearly every developer in the world uses to share their code, nearly <em>every single repository of code</em> on the site has at least one Markdown file that’s used to describe its contents. Many have <em>dozens</em> of files describing all the different aspects of their project. And some of the repositories on GitHub consist of nothing <em>but</em> massive collections of Markdown files. The small tools and automations we run to perform routine tasks, the one-off reports that we generate to make sure something worked correctly, the confirmations that we have a system email out when something goes wrong, the temporary files we use when trying to recover some old data — all of these default to being Markdown files.</p>
<p>As a result, there are now <em>billions</em> of Markdown files lying around on hard drives around the world. Billions more are stashed in the cloud. There are some on the phone in your pocket. Programmers leave them lying around wherever their code might someday be running. Your kid’s Nintendo Switch has Markdown files on it. If you’re listening to music, there’s probably a Markdown file on the memory chip of the tiny system that controls the headphones stuck in your ears. <em>The Markdown is inside you right now!</em></p>
<h2>Down For Whatever</h2>
<p>So far, these were all things we could have foreseen when John first unleashed his little text tool on the world. I would have been surprised about how <em>many</em> people were using it, but not really the <em>ways</em> in which they were using it. If you’d have said “Twenty years in the future, all the different note-taking apps people use save their files using Markdown!”, I would have said, “Okay, that makes sense!”</p>
<p>What I <em>wouldn’t</em> have asked, though, was “Is John getting paid?” As hard as it may be to believe, back in 2004, the <em>default</em> was that people made new standards for open technologies like Markdown, and just shared them freely for the good of the internet, and the world, and then went on about their lives. If it happened to have unleashed billions of dollars of value for others, then so much the better. If they got some credit along the way, that was great, too. But mostly you just did it to solve a problem for yourself and for other like-minded people. And also, maybe, to help make sure that some jerk didn’t otherwise create some horrible proprietary alternative that would lock everybody into their terrible inferior version forever instead. (We didn’t have the word “enshittification” yet, but we did have Cory Doctorow and we did have plain text files, so we kind of knew where things were headed.)</p>
<p>To give a sense of the vibe of that era, the term “podcasting” had been coined just a month before Markdown was released, and went into wider use that fall, and was similarly <a href="https://www.anildash.com/2024/02/05/wherever-you-get-podcasts/">a radically open system</a> that wasn’t owned by any big company and that empowered people to do whatever they wanted to do to express themselves. (And podcasting was another technology that Aaron Swartz helped improve by being a brilliant pain in the ass. But I’ll save that story for another book-length essay.)</p>
<p>That attitude of being not-quite-_anti_commercial, but perhaps just not even really <em>concerned</em> with whether something was commercial or not seems downright quaint in an era when the tech tycoons are not just the wealthiest people in the world, but also some of the weirdest and most obnoxious as well. But the truth is, most people <em>today</em> who make technology are actually still exceedingly normal, and quite generous. It’s just that they’ve been overshadowed by their bosses who are out of their minds and building rocket ships and siring hundreds of children and embracing overt white supremacy instead of making fun tools for helping you type text, like regular people do.</p>
<p><img src="/images/markdown-text-hero-slice2.jpg" alt=""></p>
<h2>The Markdown Model</h2>
<p>The part about not doing this stuff solely for money matters, because even the <em>most</em> advanced LLM systems today, what the big AI companies call their “frontier” models, require complex orchestration that’s carefully scripted by people who’ve tuned their prompts for these systems through countless rounds of trial and error. They’ve iterated and tested and watched for the results as these systems hallucinated or failed or ran amok, chewing up countless resources  along the way. And sometimes, they generated genuinely astonishing outputs, things that are truly amazing to consider that modern technology can achieve. The rate of progress and evolution, even factoring in the mind-boggling amounts of investment that are going into these systems, is rivaled only by the initial development of the personal computer or the Internet, or the early space race.</p>
<p>And all of it — <em>all of it</em> — is controlled through Markdown files. When you see the brilliant work shown off from somebody who’s bragging about what they made ChatGPT generate for them, or someone is understandably proud about the code that they got Claude to create, all of the most advanced work has been prompted in Markdown. Though where the logic of Markdown was originally a very simple version of &quot;use human language to tell the machine what to do&quot;, the implications have gotten far more dire when they use a format designed to help expresss &quot;make this <code>**bold**</code>&quot; to tell the computer itself &quot;<code>make this imaginary girlfriend more compliant</code>&quot;.</p>
<p>But we already know that the Big AI companies are run by people who don't reckon with the implications of their work. They could never understand that every single project that's even moderately ambitious on these new AI platforms is being written up in files formatted according to this system created by one guy who has never asked for a dime for this work. An entire generation of AI coders has been born since Markdown was created who probably can’t even imagine that this technology even <em>has</em> an &quot;inventor&quot;. It’s just always been here, like the Moon, or Rihanna.</p>
<p>But it’s important for <em>everyone</em> to know that the Internet, and the tech industry, don’t run without the generosity and genius of regular people. It is not just billion-dollar checks and Silicon Valley boardrooms that enable creativity over years, decades, or generations — it’s often a guy with a day job who just gives a damn about doing something right, sweating the details and assuming that if he cares enough about what he makes then others will too. The <em>majority</em> of the technical infrastructure of the Internet was created in this way. For free, often by people in academia, or as part of their regular work, with no promise of some big payday or getting a ton of credit.</p>
<p>The people who make the <em>real</em> Internet and the real innovations also don’t look for ways to hurt the world around them, or the people around them. Sometimes, as in the case of Aaron, the world hurts them more than anyone should ever have to bear. I know not everybody cares that much about plain text files on the Internet; I will readily admit I am a huge nerd about this stuff in a way that maybe most normal people are not. But I do think everybody cares about <em>some</em> part of the wonderful stuff on the Internet in this way, and I want to fight to make sure that everybody can understand that it’s not just five terrible tycoons who built this shit. Real people did. Good people. I saw them do it.</p>
<p>The trillion-dollar AI industry's system for controlling their most advanced platforms is a plain text format one guy made up for his blog and then bounced off of a 17-year-old kid before sharing it with the world for free. You're welcome, Time Magazine's people of the year, <em>The Architects of AI</em>. Their achievement is every bit as impressive as yours.</p>
<p><img src="/images/markdown-text-hero-slice3.jpg" alt=""></p>
<h1 id="top-ten">The Ten Technical Reasons Markdown Won</h1>
<p>Okay, with some of the narrative covered, what can we <em>learn</em> from Markdown’s success? How did this thing really take off? What could we do if we wanted to replicate something like this in the modern era? Let’s consider a few key points:</p>
<h3>1. Had a great brand.</h3>
<p>Okay, let’s be real: “Markdown” as a name is clever as hell. Get it it’s not markup, it’s mark <em>down</em>. You just can’t argue with that kind of logic. People who knew what the “M” in “HTML” stood for could understand the reference, and to everyone else, it was just a clearly-understandable name for a useful utility.</p>
<h3>2. Solved a real problem.</h3>
<p>This one is not obvious, but it’s really important that a new technology have a <em>real</em> problem that it’s trying to solve, instead of just being an abstract attempt to do something vague, like “make text files better”. Millions of people were encountering the idea that it was too difficult or inconvenient to write out full HTML by hand, and even if one had the necessary skills, it was nice to be able to do so in a format that was legible as plain text as well.</p>
<h3>3. Built on behaviors that already existed.</h3>
<p>This is one of the most quietly genius parts of Markdown: The format is based on the ways people had been adding emphasis and formatting to their text for years or even decades. Some of the formatting choices dated back to the early days of email, so they’d been ingrained in the culture of the internet for a full generation before Markdown existed. It was so familiar, people could be writing Markdown <em>without even knowing it</em>.</p>
<h3>4. Mirrored RSS in its origin.</h3>
<p>Around the same time that Markdown was taking off, RSS was maturing into its ubiquitous form as well. The format had existed for some years already, enabling various kinds of content syndication, but at this time, it was adding support for the technologies that would come to be known as podcasting as well. And just like RSS, Markdown was spearheaded by a smart technologist who was also more than a little stubborn about defining a format that would go on to change the way we share content on the internet. In RSS’ case, it was pioneered by Dave Winer, and with Markdown it was John Gruber, and both were tireless in extolling the virtues of the plain text formats they’d helped pioneer. They could both leverage blogs to get the word out, and to get feedback on how to build on their wins.</p>
<h3>5. There was a community ready to help.</h3>
<p>One great thing about a format like Markdown is that its success is never just the result of one person. Vitally, Markdown was part of a community that could build on it right from the start. Right from the beginning, Markdown was inspired by earlier works like Textile, a formatting system for plain text created by <a href="https://web.archive.org/web/20021226035527/http://textism.com/tools/textile/">Dean Allen</a>. Many of us appreciated and were inspired by Dean, who was a pioneer of blogging tools in the early days of social media, but if there’s a bigger fan of Dean Allen on the internet than John Gruber, I’ve never met them. Similarly, <a href="http://www.rememberaaronsw.com/memories/">Aaron Swartz</a>, the brilliant young technologist who’s known best known as an activist for digital rights and access, was at that time just a super brilliant teenager that a lot of us loved hacking with. He was the most valuable beta tester of Markdown prior to its release, helping to shape it into a durable and flexible format that’s stood the test of time.</p>
<h3>6. Had the right flavor for every different context.</h3>
<p>Because Markdown’s format was frozen in place (and had some super-technical details that people could debate about) and people wanted to add features over time, various communities that were implementing Markdown could add their own “flavors” of it as they needed. Popular ones came to be called Commonmark and Github-Flavored, led by various companies or teams that had divergent needs for the tool. While tech geeks tend to obsess over needing everything to be “correct”, in reality it often just <em>doesn’t matter</em> that much, and in the real world, the entire Internet is made up of content that barely follows the technical rules that it’s supposed to.</p>
<h3>7. Released at a time of change in behaviors and habits.</h3>
<p>This is a subtle point, but an important one: Markdown came along at the right time in the evolution of its medium. You can get people to change their behaviors when they’re using a new tool, or adopting a new technology. In this case, blogging (and all of social media!) were new, so saying “here’s a new way of typing a list of bullet points” wasn’t much an additional learning curve to add to the mix. If you can take advantage of catching people while they’re already in a learning mood, you can really tap into the moment when they’re most open-minded to new things.</p>
<h3>8. Came right on the cusp of the “build tool era”.</h3>
<p>This one’s a bit more technical, but also important to understand. In the first era of building for the web, people often built the web’s languages of HTML, Javascript and CSS by hand, by themselves, or stitched these formats together from subsets or templates. But in many cases, these were fairly simple compositions, made up of smaller pieces that were written in the same languages. As things matured, the roles for web developers specialized (there started to be backend developers vs. front-end, or people who focused on performance vs. those who focused on visual design), and as a result the tooling for developers matured. On the other side of this transition, developers began to use many different programming languages, frameworks and tools, and the standard step before trying to deploy a website was to have an automated build process that transformed the “raw materials” of the site into the finished product. Since Markdown is a raw material that has to be transformed into HTML, it perfectly fit this new workflow as it became the de facto standard method of creation and collaboration.</p>
<h3>9. Worked with “View source”</h3>
<p>Most of the technologies that work best on the web enable creators to “view source” just like HTML originally did when the first web browsers were created. In this philosophy, one can look at the source code that makes up a web page, and understand how it was constructed so that you can make your own. With Markdown, it only takes one glimpse of a source Markdown file for anyone to understand how they might make a similar file of their own, or to extrapolate how they might apply analogous formatting to their own documents. There’s no teaching required when people can just see it for themselves.</p>
<h3>10. Not encumbered in IP</h3>
<p>This one’s obvious if you think about it, but it can’t go unsaid: There are no legal restrictions around Markdown. You wouldn’t <em>think</em> that anybody would be foolish or greedy enough to try to patent something as simple as Markdown, but there are many far worse examples of patent abuse in the tech industry. Fortunately, John Gruber is not an awful person, and nobody else has (yet) been brazen enough to try to usurp the format for their own misadventures in intellectual property law. As a result, nobody’s been afraid, either to use the format, or to support creating or reading the format in their apps.</p>

    ]]></content>
    </entry>
    <entry>
        <title>How to know if that job will crush your soul</title>
        <link href="https://anildash.com/2026/01/12/will-that-job-crush-your-soul/"/>
        <updated>2026-01-12T00:00:00Z</updated>
        <id>https://anildash.com/2026/01/12/will-that-job-crush-your-soul/</id>
        <content type="html"><![CDATA[
      <p>Last week, we talked about one huge question, “<a href="https://www.anildash.com/2026/01/05/a-tech-career-in-2026/">How the hell are you supposed to have a career in tech in 2026?</a>” That’s pretty specific to this current moment, but there are some timeless, more perennial questions I've been sharing with friends for years that I wanted to give to all of you. They're a short list of questions that help you judge whether a job that you’re considering is going to crush your soul or not.</p>
<p>Obviously, not everyone is going to get to work in an environment that has perfect answers to all of these questions; a lot of the time, we’re lucky just to get a place to work at all. But these questions are framed in this way to encourage us all to aspire towards roles that enable us to do our best work, to have the biggest impact, and to live according to our values.</p>
<h2>The Seven Questions</h2>
<ul>
<li>If what you do succeeds, will the world be better?</li>
</ul>
<p>This question originally started for me when I would talk to people about new startups, where people were judging the basic idea of the product or the company itself, but it actually applies to <em>any</em> institution, at <em>any</em> size. If the organization that you’re considering working for, or the team you’re considering joining, is able to achieve their stated goals, is it ultimately going to have a positive effect? Will you be proud of what it means? Will the people you love and care about respect you for making that choice, and will those with the least to gain feel like you’re the kind of person who cares about their impact on the world?</p>
<ul>
<li>Whose money do they have to take to stay in business?</li>
</ul>
<p>Where does the money in the organization <em>really</em> come from? You need to know this for a lot of reasons. First of all, you need to be sure that <em>they</em> know the answer. (You’d be surprised how often that’s not the case!) Even if they do know the answer, it may make you realize that those customers are not the people whose needs or wants you’d like to spend most of your waking hours catering to. This goes beyond the simple basics of the business model — it can be about whether they're profitable or not, and what the corporate ownership structure is like.</p>
<p>It’s also increasingly common for companies to mistake those who are <em>investing</em> in a company with those who are their <em>customers</em>. But there’s a world of difference between those who are paying you, and those who you have to pay back tenfold. Or thousandfold.</p>
<p>The same goes for nonprofits — do you know who has to stay happy and smiling in order for the institution to stay stable and successful? If you know those answers, you'll be far more confident about the motivations and incentives that will drive key decisions within the organization.</p>
<ul>
<li>What do you have to believe to think that they’re going to succeed? In what way does the world have to change or not change?</li>
</ul>
<p>Now we’re getting a little bit deeper into thinking about the systems that surround the organization that you’re evaluating. Every company, every institution, even every small team, is built around a set of invisible assumptions. Many times, they’re completely reasonable assumptions that are unlikely to change in the future. But <em>sometimes</em>, the world you’re working in is about to shift in a big way, or things are built on a foundation that’s speculative or even unrealistic.</p>
<p>Maybe they're assuming there aren't going to be any big new competitors. Perhaps they think they'll always remain the most popular product in their category. Or their assumptions could be about the stability of the rule of law, or a lack of corruption — more fundamental assumptions that they've never seen challenged in their lifetime or in their culture, but that turn out to be far more fragile than they'd imagined.</p>
<p>Thinking through the context that everyone is sharing, and reflecting on whether they’re really planning for any potential disruptions, is an essential part of judging the psychological health of an organization. It’s the equivalent of a person having self-awareness, and it’s just as much of a red flag if it’s missing.</p>
<ul>
<li>What’s the lived experience of the workers there whom you trust? Do you have evidence of leaders in the organization making hard choices to do the right thing?</li>
</ul>
<p>Here is how we can tell the culture and character of an organization. If you’ve got connections into the company, or a backchannel to workers there, finding out as much information as you can about the real story of its working conditions is often one of the best ways of understanding whether it’s a fit for your needs. Now, people can always have a bad day, but overall, workers are usually very good at providing helpful perspectives about their context.</p>
<p>And more broadly, if people can provide examples of those in power within an organization <em>using</em> that power to take care of their workers or customers, or to fight for the company to be more responsible, then you’ve got an extremely positive sign about the health of the place even before you’ve joined. It’s vital that these be stories you are able to find and discover on your own, not the ones amplified by the institution itself for PR purposes.</p>
<ul>
<li>What were you wrong about?</li>
</ul>
<p>And here we have perhaps one of the easiest and most obvious ways to judge the culture of an organization. This is even a question you can ask people while you’re in an interview process, and you can judge their responses to help form your opinion. A company, and <em>leadership culture</em>, that can change its mind when faced with new information and new circumstances is much more likely to adapt to challenges in a healthy way. (If you want to be nice, phrase it as &quot;What is a way in which the company has evolved or changed?&quot;)</p>
<ul>
<li>Does your actual compensation take care of what you need for all of your current goals and needs — from day one?</li>
</ul>
<p>This is where we go from the abstract and psychological goals to the practical and everyday concerns: can you pay your bills? The phrasing and framing here is very intentional: <em>are they really going to pay you enough</em>? I ask this question very specifically because you’d be surprised how often companies actually dance around this question, or how often we trick ourselves into hearing what we <em>want</em> to hear as the answer to this question when we’re in the exciting (or stressful) process of considering a new job, instead of looking at the facts of what’s actually written in black-and-white on an offer letter.</p>
<p>It's also important not to get distracted with potential, even if you're optimistic about the future. Don’t listen to promises about what might happen, or descriptions of what’s possible if you advance in your role. Think about what your real life will be like, after taxes, if you take the job that they’ve described.</p>
<ul>
<li>Is the role you’re being hired into one where you can credibly advance, and where there’s sufficient resources for success?</li>
</ul>
<p>This is where you can apply your optimism in a practical way: can the organization accurately describe how your career will proceed within the company? Does it have a specific and defined trajectory, or does it involve ambiguous processes or changes in teams or departments? Would you have to lobby for the support of leaders from other parts of the organization? Would making progress require acquiring new skills or knowledge? Have they committed to providing you with the investment and resources required to learn those skills?</p>
<p>These questions are essential to understand, because lacking these answers can lead to an ugly later realization that even an initially-exciting position may turn out to be a dead-end job over time.</p>
<h3>Towards better working worlds</h3>
<p>Sometimes it can really feel like the deck is stacked against you when you're trying to find a new job. It can feel even worse to be faced with an opportunity and have a nagging sense that something is <em>not quite right</em>. Much of the time, that feeling comes from the vague worry that we're taking a job that is going to make us miserable.</p>
<p>Even in a tough job market, there are some places that are trying to do their best to treat people decently. In larger organizations, there are often pockets of relative sanity, led by good leaders, who are trying to do the right thing. It can be a massive improvement in quality of life if you can find these places and use them as foundations for the next stage of your career.</p>
<p>The best way to navigate towards these better opportunities is to be systematic when evaluating all of your options, and to hold out for as high standards as possible when you're out there looking. These seven questions give you the tools to do exactly that.</p>

    ]]></content>
    </entry>
    <entry>
        <title>Wikipedia at 25: What the web can be</title>
        <link href="https://anildash.com/2026/01/15/wikipedia-at-25/"/>
        <updated>2026-01-15T00:00:00Z</updated>
        <id>https://anildash.com/2026/01/15/wikipedia-at-25/</id>
        <content type="html"><![CDATA[
      <p>When Wikipedia <a href="https://wikipedia25.org/en/">launched 25 years ago today</a>, I heard about it almost immediately, because the Internet was small back then, and I thought “Well… good luck to those guys.” Because there had been online encyclopedias before Wikipedia, and anybody who really <em>cared</em> about this stuff would, of course, buy Microsoft Encarta on CD-ROM, right? I’d been fascinated by the technology of wikis for a good while at that point, but was still not convinced about whether they could be deployed at such a large scale.</p>
<p>So, once Wikipedia got a little bit of traction, and I met Jimmy Wales the next year, I remember telling him (with all the arrogance that only a dude that age can bring to such an obvious point) “well, the <em>hard part</em> is going to be getting all the people to contribute”. As you may be aware, Jimmy, and a broad worldwide community of volunteers, did pretty well with the hard part.</p>
<p>Wikipedia has, of course, become vital to the world’s information ecosystem. Which is why everyone needs to be aware of the fact that it is currently under <a href="https://www.theverge.com/cs/features/717322/wikipedia-attacks-neutrality-history-jimmy-wales">existential threat</a> from those who see any reliable source of truth as an attack on their power. The same authoritarians in power who are trying to purchase every media outlet and social network where ordinary people might have a chance to share accurate information about their crimes or human rights violations are deeply threatened about a platform that they can’t control and can’t own.</p>
<p>Perhaps the greatest compliment to Wikipedia at 25 years old is the fact that, if the fascists can’t buy it, then they’re going to try to kill it.</p>
<h2>The Building Block</h2>
<p>What I couldn’t foresee in the early days, when so many were desperate to make sure that Wikipedia wasn’t treated as a credible source — there were <em>so many</em> panicked conversations about how to keep kids from citing the site in their school papers — was how the site would become infrastructure for so much of the commercial internet.</p>
<p>The first hint was when Google introduced their “Knowledge Panel”, the little box of info next to their search results that tried to explain what you were looking for, without you even having to click through to a website. For Google, this had a huge economic value, because it kept you on their search results page where all their ad links lived. The vast majority of the Knowledge Panel content for many major topics was basically just Wikipedia content, summarized and wrapped up in a nice little box. Here was the most valuable company of the new era of the Internet, and one of their signature experiences relied on the strength of the Wikipedia community’s work.</p>
<p>This was, of course, complemented by the fact that Wikipedia would also organically show up right near the top of so many search results just based on the strength of the content that the community was cranking out at a remarkable pace. Though it probably made Google bristle a little bit that those damn Wikipedia pages didn’t have any Google ads on them, and didn’t have any of Google’s tracking code on them, so they couldn’t surveil what you do when you were clicking around on the site, making it impossible for them to spy on you and improve the targeting of their advertising to you.</p>
<p>The same pattern played out later for the other major platforms; Apple’s Siri and Amazon’s Alexa both default to using Wikipedia data to answer common questions. During the few years when Facebook pretended to care about misinformation, they would show summaries of Wikipedia information in the news feed to help users fact-check misinformation that was being shared.</p>
<p>Unsurprisingly, a lot of the time when the big companies would try to use Wikipedia as the water to put out the fires that they’d started, they <a href="https://www.wired.com/story/youtube-wikipedia-content-moderation-internet/">didn’t even bother to let the organization know</a> before they started doing so, burdening the non-profit with the cost and complexity of handling their millions of users and billions of requests, without sharing any of their trillions of dollars. (At least until there was public uproar over the practice.) Eventually, Wikimedia Foundation (the organization that runs Wikipedia) made a way for <a href="https://enterprise.wikimedia.com">companies to make deals with them</a> and actually support the community instead of just extracting from the community without compensation.</p>
<h2>The culture war comes for Wikipedia</h2>
<p>Things had reached a bit of equilibrium for a few years, even as the larger media ecosystem started to crumble, because the world could see after a few decades that Wikipedia had become a vital and valuable foundation to the global knowledge ecology. It’s almost impossible to imagine how the modern internet would function without it.</p>
<p>But as the global fascist movement has risen in recent years, one of their first priorities, as in all previous such movements, has been undermining any sources of truth that can challenge their control over information and public sentiment. In the U.S., this has manifested from the top-down with the richest tycoons in the country, including Elon Musk, stoking sentiment against Wikipedia with vague innuendo and baseless attacks against the site. This is also why Musk has funded the creation of alternatives like Grokipedia, designed to undermine the centrality and success of Wikipedia. From the bottom-up, there have been individual bad actors who have attempted to infiltrate the ranks of editors on the site, or worked to deface articles, often working slowly or across broad swaths of content in order to attempt to avoid detection.</p>
<p>All of this has been carefully coordinated; as noted in <a href="https://www.theverge.com/cs/features/717322/wikipedia-attacks-neutrality-history-jimmy-wales">well-documented pieces like the Verge’s excellent coverage</a> of the story, the attack on Wikipedia is a campaign that has been led by voices like Christopher Rufo, who helped devise campaigns like the concerted effort to demonize trans kids as a cultural scapegoat, and the intentional targeting of Ivy League presidents as part of the war on DEI. The undermining of Wikipedia hasn’t yet gotten the same traction, but they also haven’t yet put the same time and resources into the fight.</p>
<p>There’s been such a constant stream of vitriol directed at Wikipedia and its editors and leadership that, when I heard about a <a href="https://gothamist.com/news/gunman-storms-stage-at-wikipedia-conference-in-manhattan-no-injuries-reported">gunman storming the stage</a> at the recent gathering of Wikipedia editors, I had <em>assumed</em> it was someone who had been incited by the baseless attacks from the extremists. (It turned out to have been someone who was disturbed on his own, which he said was tied to the editorial policies of the site.) But I would expect it’s only a matter of time until the attacks on Wikipedia’s staff and volunteers take on a far more serious tone much of the time — and it’s not as if this is an organization that has a massive security budget like the trillion-dollar tech companies.</p>
<p>The temperature keeps rising, and there isn’t yet sufficient awareness amongst good actors to protect the Wikipedia community and to guard its larger place in society.</p>
<h2>Enter the AI era</h2>
<p>Against this constant backdrop of increasing political escalation, there’s also been the astronomical ramp-up in demand for Wikipedia content from AI platforms. The very first source of data for many teams when training a new LLM system is Wikipedia, and the vast majority of the time, they gather that data not by paying to license the content, but by “scraping” it from the site — which uses both more technical resources and precludes the possibility of establishing any consensual paid relationship with the site.</p>
<p>A way to think about it is that, for the AI world, they’re music fans trading Wikipedia like it’s MP3s on Napster, and conveniently ignoring the fact there’s an Apple Music or Spotify offering a legitimate way to get that same data while supporting the artist. Hopefully the <a href="https://www.anildash.com/2025/09/18/the-taylors-version-generation/">“Taylor’s Version” generation</a> can see Wikipedia as being at least as worthy of supporting as a billionaire like Taylor Swift is.</p>
<p>But as people start going to their AI apps first, or chatting with bots instead of doing Google searches, they don’t <em>see</em> those Knowledge Panels anymore, and they don’t click through to Wikipedia anymore. At a surface level, this hurts traffic to the site, but at a deeper level, this hurts the flow of new contributors to the site. Interestingly, though I’ve been linking to <a href="https://www.anildash.com/2006/07/31/quitting-wikipe/">critiques of Wikipedia</a> on my site for at least twenty years, for most of the last few decades, my biggest criticism of Wikipedia has long been the lack of inclusion amongst its base of editorial volunteers. But this is, at least, a shortcoming that both the Wikimedia Foundation and the community itself readily acknowledge and have been working diligently on.</p>
<p>That lack of diversity in editors as a problem will pale in comparison to the challenge presented if people stop coming to the front door entirely because they’re too busy talking to their AI bots. They may not even <em>know</em> what parts of the answers they’re getting from AI are due to the bot having slurped up the content from Wikipedia. Worse, they’ll have been so used to constantly encountering hallucinations that the idea of joining a community that’s constantly trying to improve the accuracy of information will seem quaint, or even <em>absurd</em>, in a world where everything is wrong and made up all the time.</p>
<p>This means that it’s in the best interests of the AI platforms to not only pay to sustain Wikipedia and its community so that there’s a continuous source of new, accurate information over time, but that it’s also in their interest to keep teaching their community about the value of such a resource. The very fact that people are so desperate to chat with a bot shows how hungry they are for connection, and just imagine how excited they’d be to connect with the <em>actual humans</em> of the Wikipedia community!</p>
<h2>We can still build</h2>
<p>It’s easy to forget how radical Wikipedia was at its start. For the majority of people on the Internet, Wikipedia is just something that’s been omnipresent right from the start. But, as someone who got to watch it rise, take it from me: this was a thing that lots of regular people <em>built together</em>. And it was explicitly done as a collaboration meant to show the spirit of what the Internet is really about.</p>
<p><a href="https://wikimediafoundation.org/wikipedia25/">Take a look at its history</a>. Think about what it means that there is no advertising, and there never has been. It doesn’t track your activity. You can edit the site <em>without even logging in</em>. If you make an account, you don’t have to use your real name if you’d like to stay anonymous. When I wrote about <a href="https://www.anildash.com/2008/09/22/alan-leeds-and-who-writes-the-web/">being the creator</a> of an entirely <em>new</em> page on Wikipedia, it felt like magic, and it still does! You can be the person that births something onto the Internet that feels like it becomes a permanent part of the historical record, and then others around the world will help make it better, forever.</p>
<p>The site is still amongst the most popular sites on the web, bigger than almost every commercial website or app that has ever existed. There’s never been a single ad promoting it. It has unlocked <em>trillions</em> of dollars in value for the business world, and unmeasurable educational value for multiple generations of children. Did you know that for many, many topics, you can change your language from English to <em>Simple English</em> and get an <a href="https://simple.wikipedia.org/wiki/Quadratic_equation">easier-to-understand</a> version of an article that can often help explain a concept in much more approachable terms? Wikipedia has a <a href="https://www.wikivoyage.org">travel guide</a>! A <a href="https://www.wiktionary.org">dictionary</a>! A <a href="https://www.wikibooks.org">collection of textbooks and cookbooks</a>! Here are <a href="https://species.wikimedia.org/">all the species</a>! It’s unimaginably deep.</p>
<p>Whenever I worry about where the Internet is headed, I remember that this example of the collective generosity and goodness of people still exists. There are so many folks just working away, every day, to make something good and valuable for strangers out there, simply from the goodness of their hearts. They have no way of ever knowing who they’ve helped. But they believe in the simple power of doing a little bit of good using some of the most basic technologies of the internet. Twenty-five years later, all of the evidence has shown that they really have changed the world.</p>
<hr>
<p>If you are able, today is a very good day to <a href="https://donate.wikimedia.org/">support the Wikimedia Foundation</a>.</p>

    ]]></content>
    </entry>
    <entry>
        <title>Codeless: From idea to software</title>
        <link href="https://anildash.com/2026/01/22/codeless/"/>
        <updated>2026-01-22T00:00:00Z</updated>
        <id>https://anildash.com/2026/01/22/codeless/</id>
        <content type="html"><![CDATA[
      <h2>Something actually new?</h2>
<p>There’s finally been a big leap forward in coding tech unlocked by AI — not just “it’s doing some work for me”, but “we couldn’t do this before”. What’s new are a few smart systems that let coders control fleets of dozens of coding bots, all working in tandem, to swarm over a list of tasks and to deliver entire features, or even entire <em>sets</em> of features, just from a plain-English description of the strategic goal to be accomplished.</p>
<p>This isn’t a tutorial, this is just trying to understand that something cool is happening, and maybe we can figure out what it means, and where it’s going. Lots of new technologies and buzzwords with wacky names like Gas Town and Ralph Wiggum and loops and polecats are getting as much attention as, well, anything since vibe coding. So what’s really going on?</p>
<p>The breakthrough here came from using two familiar ideas in interesting new ways. The first idea is <em>orchestration</em>. Just like cloud computing got massively more powerful when it became routine for coders to be able to control entire fleets of servers, the ability to reliably configure and control entire fleets of coding bots unlocks a much higher scale of capability than any one person could have by chatting with a bot on their own.</p>
<p>The second big idea is <em>resilience</em>. Just like systems got more capable when designers started to assume that components like hard drives would fail, or that networks would lose connection, today’s coders are aware of the worst shortcoming of using LLMs: sometimes they create garbage code. This tendency used to be the biggest shortcoming about using LLMs to create code, but by <em>designing</em> for failure, testing outputs, and iterating rapidly, codeless systems enable a huge advancement in the ultimate reliability of the output code.</p>
<p>The codeless approach also addresses the other huge objection that many coders have to using LLMs for coding. The most common direct objection to using AI tools to assist in coding hasn’t just been the broken code — it’s been the many valid social and ethical concerns around the vendors who build the platforms. But codeless systems are open source, non-commercial, and free to deploy, while making it trivial to swap in alternatives for every part of the stack, including using open source or local options for all or part of the LLM workload. This isn’t software being sold by a Big AI vendor; these are tools being created by independent hackers in the community.</p>
<p>The ultimate result is the ability to create software at scale without directly writing any code, simply by providing strategic direction to a fleet of coding bots. Call it “codeless” software.</p>
<h2>Codeless in 10 points</h2>
<p>If you’re looking for a quick bullet-point summary, here’s something skimmable:</p>
<ol class="numbered-callout">
  <li>"Codeless" is a way to describe a new way of orchestrating large numbers of AI coding bots to build software at scale, controlled by a plain-English strategic plan for the bots to follow.</li>
  <li>In this approach, you don't write code directly. Instead, you write a plan for the end result or product that you want, and the system directs your bots to build code to deliver that product. (Codeless abstracts away directly writing code just like "<a href="https://en.wikipedia.org/wiki/Serverless_computing">serverless</a>" abstracted away directly managing servers.)</li>
  <li>This codeless approach is credible because it emerged organically from influential coders who don't work for the Big AI companies, and independent devs are already starting to make it easier and more approachable. It's not a pitch from a big company trying to sell a product, and in fact, codeless tools make it easy to swap out one LLM for another.</li>
  <li>Today, codeless tools themselves don't cost anything. The systems are entirely open source, though setting them up can be complicated and take some time. Actually running enough bots to generate all that code gets expensive quickly if you use cutting-edge commercial LLMs, but mixing in some lower-cost open tools can help defray costs. We can also expect that, as this approach gains momentum, more polished paid versions of the tools will emerge.</li>
  <li>Many coders didn't like using LLMs to generate code because they hallucinate. Codeless systems <em>assume</em> that the code they generate will be broken sometimes, and handle that failure. Just like other resilient systems assume that hard drives will fail, or that network connections will be unreliable, codeless systems are designed to handle unreliable code.</li>
  <li>This has nothing to do with the "no code" hype from years ago, because it's not locked-in to one commercial vendor or one proprietary platform. And codeless projects can be designed to output code that will run on any regular infrastructure, including your existing systems.</li>
  <li>Codeless changes power dynamics. People and teams who adopt a codeless approach have the potential to build a lot more under their own control. And those codeless makers won't necessarily have to ask for permission or resources in order to start creating. Putting this power in the hands of those individuals might have huge implications over time, as people realize that they may not have to raise funding or seek out sponsors to build the things that they imagine.</li>
  <li>The management and creation interfaces for codeless systems are radically more accessible than many other platforms because they're often controlled by simple plain text <a href="https://www.anildash.com/2026/01/09/how-markdown-took-over-the-world/">Markdown</a> files. This means it's likely that some of the most effective or successful codeless creators could end up being people who have had roles like product managers, designers, or systems architects, not just developers.</li>
  <li>Codeless approaches are probably <em>not</em> a great way to take over a big legacy codebase, since they rely on accurately describing an entire problem, which can often be difficult to completely capture. And coding bots may lack sufficient context to understand legacy codebases, especially since LLMs are sometimes weaker with legacy technologies.</li>
  <li>In many prior evolutions of coding, abstractions let coders work at higher levels, closer to the problem they were trying to solve. Low-level languages saved coders from having to write assembly language; high-level languages kept coders from having to write code to manage memory. Codeless systems abstract away directly writing code, continuing the long history of letting developers focus more on the problem to be solved than on manually creating every part of the code.</li>
</ol>
<h2>What does software look like when coders stop coding?</h2>
<p>As we’ve been saying for some time, for people who actually make and understand technology, the <a href="https://www.anildash.com/2025/10/17/the-majority-ai-view/">majority AI view</a> is that LLMs are just useful technologies that have their purposes, but we shouldn’t go overboard with all of the absurd hype. We’re seeing new examples of the deep moral failings and social harms of the Big AI companies every day.</p>
<p>Despite this, coders still haven’t completely written off the potential of LLMs. A big reason why coders are generally more optimistic about AI than writers or photographers is because, in creative spaces, AI smothers the human part of the process. But in coding, AI takes over the drudgery, and lets coders focus on the most human and expressive parts.</p>
<p>The shame, then, is that much of the adoption of AI for coding has been in top-down mandates at companies. Rather than enabling innovation, it’s been in deployments designed to undermine their workers’ job security. And, as we’ve seen, <a href="https://www.anildash.com/2026/01/06/500k-tech-workers-laid-off/">this has worked</a>. It’s no wonder that a lot of the research on enterprise use of AI for coding has shown little to no increase in productivity; obviously productivity improvements have not been the goal, much of the time.</p>
<p>Codeless tech has the potential to change that. Putting the power of orchestrating a fleet of coding bots in the hands of a smart and talented coder (or designer! or product manager! or writer! or…) upends a lot of the hierarchy about who’s able to call the shots on what gets created. The size of your nights-and-weekends project might be a lot bigger, the ambitions of your side gig could be a lot more grand.</p>
<p>It’s still early, of course. The bots themselves are expensive as hell if you’re running the latest versions of Claude Code for all of them. Getting this stuff running is hard; you’re bouncing between obscure references to Gas Town on <a href="https://github.com/steveyegge">Steve Yegge’s Github</a>, and a bunch of smart posts on <a href="https://simonwillison.net">Simon Willison’s blog</a>, and sifting through YouTube videos about <a href="https://www.youtube.com/watch?v=vIFD0YE29Fs">Ralph Wiggum</a> to see if they’re about the Simpsons or the software.</p>
<p>It’s gonna be like that for a while, a little bit of a mess. But that’s a lot better than Enterprise Certified Cloud AI Engineer, Level II, minimum 11 years LLM experience required. If history is any guide, the entire first wave of implementations will be discarded in favor of more elegant and/or powerful second versions, once we know what we actually want. <a href="https://wiki.c2.com/?PlanToThrowOneAway">Build one to throw away.</a> I mean, that’s kind of the spirit of the whole codeless thing, isn’t it?</p>
<p>This could all still sputter out, too. Maybe it’s another fad. I don’t love seeing some of the folks working on codeless tools pivot into asking folks to buy memecoins to support their expensive coding bot habits. The Big AI companies are gonna try to kill it or co-opt it, because tools that reduce the switching cost between LLMs to zero must terrify them.</p>
<p>But for the first time in a long time, this thing feels a little different. It’s emerging organically from people who don’t work for trillion dollar companies. It’s starting out janky and broken and interesting, instead of shiny and polished in a soulless live stream featuring five dudes wearing vests. This is tech made for people who <em>like making things</em>, not tech made for people who are trying to appease financiers. It’s <a href="https://www.anildash.com/2025/10/24/founders-over-funders/">for inventors, not investors</a>.</p>
<p>I truly, genuinely, don’t care if you call it “codeless”; it just needs a name that we can hang on it so people know wtf we’re talking about. I worked backwards from “what could we write on a whiteboard, and everyone would know what we were talking about?” If you point at the diagrams and say, “The legacy code is complicated, so we’re going to do that as usual, but the client apps and mobile are all new, so we could just do those codeless and see how it goes”, people would just sort of nod along and know what you meant, at least vaguely. If you’ve got a better name, have at it.</p>
<p>In the meantime, though, start hacking away. Make something more ambitious than you could do on your own. Sneak an army of bots into work. Build something that you would have needed funding for before, but don’t now. Build something that somebody has made a horrible proprietary version of, and release it for free. Share your Markdown files!</p>
<p>Maybe the distance from idea to app just got a little bit shorter? We're about to find out.</p>

    ]]></content>
    </entry>
    <entry>
        <title>Why We Speak</title>
        <link href="https://anildash.com/2026/01/26/why-we-speak/"/>
        <updated>2026-01-26T00:00:00Z</updated>
        <id>https://anildash.com/2026/01/26/why-we-speak/</id>
        <content type="html"><![CDATA[
      <p>I've been working in and around the technology industry for a long time. Depending on how you count, it's 20 or 30 years. (I first started getting paid to put together PCs with a screwdriver when I was a teenager, but there isn't a good way to list that on LinkedIn.) And as soon as I felt like I was pretty sure that I was going to be able to pay the next month's rent without having to eat ramen noodles for two weeks before it was due, I felt like I'd really made it.</p>
<p>And as soon as you've made it, you owe it to everybody else to help out as much as you can. I don't know how to put it more simply than that. But for maybe the first decade of being in the &quot;startup&quot; world, where everybody was worried about appealing to venture capital investors, or concerned about getting jobs with the big tech companies, I was pretty convinced that one of the things that you <em>couldn't</em> do to help people was to talk about some of the things that were wrong. Especially if the things that were wrong were problems that, when described, might piss off the guys who were in charge of the industry.</p>
<p>But eventually, I got a little bit of power, mostly due to becoming a little bit visible in the industry, and I started to get more comfortable speaking my mind. Then, surprisingly, it turned out that... nothing happened. The sky didn't fall. I didn't get fired from my jobs. I certainly got targeted for harassment by bad actors, but that was largely due to my presence on social media, not simply because of my views. (And also because I tend to take a pretty provocative or antagonistic tone on social media when trying to frame an argument.)  It probably helped that, in the workplace, I both tend to act like a normal person and am also generally good at my job.</p>
<p>I point all of this out not to pat myself on the back, or as if any of this is remarkable  — it's certainly not — but because it's useful context for the current moment.</p>
<h2>The cycle of backlash</h2>
<p>I have been around the technology industry, and the larger business world, long enough to have watched the practice of speaking up about moral issues go from completely unthinkable to briefly being given lip service to actively being persecuted both professionally and politically. The campaigns to stamp out issues of conscience amongst working people have vilified caring for others with names ranging from &quot;political correctness&quot; to &quot;radicalism&quot; to &quot;virtue signaling&quot; to &quot;woke&quot; and I'm sure I'm missing many more. This, despite the fact that there have always been thoughtful people in every organization who try to do the right thing; it's impossible to have a group of people of any significant size and not have <em>some</em> who have a shred of decency and humanity within them.</p>
<p>But the technology industry has an incredibly short memory, by design. We're always at the beginning of history, and so many people working in it have never encountered a time before this moment when there's been this kind of brutal backlash from their leaders against common decency. Many have never felt such pressure to tamp down their own impulses to be good to their colleagues, coworkers, collaborators and customers.</p>
<p>I want to encourage everyone who is afraid in this moment to find some comfort and some solace in the fact that we have been here before. Not in <em>exactly</em> this place, but in analogous ones. And also to know that there are many people who are also feeling the same combination of fear or trepidation about speaking up, but a compelling and irrepressible desire to do so. We've shifted the Overton window on what's acceptable multiple times before.</p>
<p>I am, plainly, exhorting you do to speak up about the current political moment and to call for action. There is some risk to this. There is less risk for everyone when more of us speak up.</p>
<h2>Where we are</h2>
<p>In the United States, our government is lying to us about an illegal occupation of a major city, which has so far led to multiple deaths of innocents who were murdered by agents of the state. We have video evidence of what happened, and the most senior officials in our country have deliberately, blatantly and unrepentantly lied about what the videos show, while besmirching the good names of the people who were murdered. Just as the administration's most senior officials spread these lies, several of the most powerful and influential executives in the tech industry voluntarily met with the President, screened a propaganda film made expressly as a bribe for him, and have said nothing about either the murders or the lies about the murders.</p>
<p>These are certainly not the first wrongs by our government. These are not even the first such killings in Minnesota in recent years. But they are a new phase, and this occupation is a new escalation. This degree of lawless authoritarianism <em>is</em> new — tech leaders were <em>not</em> <a href="https://www.anildash.com/2025/09/09/how-tim-cook-sold-out-steve-jobs/">crafting golden ingots</a> to bribe sitting leaders of the United States in the past. Military parades featuring banners bearing the face of Dear Leader, followed by ritual gift-giving in the throne room of the golden palace with the do-nothing failsons and conniving hangers-on of the aging strongman used to be the sort of thing we mocked about failing states, not things we emulated about them.</p>
<p>So, when our &quot;leaders&quot; have failed, and they have, we must become a leaderful community. This, I have a very positive feeling about. I've seen so many people who are willing to step up, to give of themselves, to use their voices. And I have all the patience in the world for those who may not be used to doing those things, because it can be hard to step into those shoes for the first time. If you're unfamiliar or uncomfortable with this work, or if the risk feels a little more scary because you carry the responsibility of caring for those around you, that's okay.</p>
<p>But I've been really heartened to see <a href="https://www.linkedin.com/posts/anildash_i-just-want-to-share-something-briefly-as-activity-7421306939055198209-272Z">how many people have responded</a> when I started talking about these ideas on LinkedIn — not usually the bastion of &quot;political&quot; speech. I don't write the usual hustle-bro career advice platitudes there, and instead laid out the argument for why people will need to choose a side, and should choose the side that their heart already knows that they're on. To my surprise, there's been near-universal agreement, even amongst many who don't agree with many of my other views.</p>
<p><a href="https://www.businessinsider.com/business-leader-ceo-silence-alex-pretti-killing-minneapolis-2026-1">It is already clear</a> that business leaders are going to be compelled to speak up. It would be ideal if it is their own workers who lead them towards the words (and actions) that they put out into the world.</p>
<h2>Where we go</h2>
<p>Those of us in the technology realm bear a unique responsibility here. It is the tools that we create which enable the surveillance and monitoring that agencies like ICE use to track down and threaten both their targets and those they attempt to intimidate away from holding them accountable. It is the wealth of our industry which isolates the tycoons who run our companies when they make irrational decisions like creating vanity films about the strongman's consort rather than pushing for the massive increase in ICE spending to instead go towards funding all of Section 8 housing, all of CHIP insurance, all school lunches, and 1/3 of all federal spending on K-12 education.</p>
<p>It takes practice to get comfortable using our voices. It takes repetition until leaders know we're not backing down. It takes perseverance until people in power understand they're going to have to act in response to the voices of their workers. <a href="https://iceout.tech">But everyone has a voice</a>. Now is your turn to use it.</p>
<p>When we speak, we make it easier for others to do so. When we all speak, we make change inevitable.</p>

    ]]></content>
    </entry>
    <entry>
        <title>A Codeless Ecosystem, or hacking beyond vibe coding</title>
        <link href="https://anildash.com/2026/01/27/codeless-ecosystem/"/>
        <updated>2026-01-27T00:00:00Z</updated>
        <id>https://anildash.com/2026/01/27/codeless-ecosystem/</id>
        <content type="html"><![CDATA[
      <p>There's been a <a href="https://www.anildash.com/2026/01/22/codeless/">remarkable leap forward</a> in the ability to orchestrate coding bots, making it possible for ordinary creators to command dozens of AI bots to build software without ever having to directly touch code. The implications of this kind of evolution are potentially extraordinary, as outlined in that first set of notes about what we could call &quot;codeless&quot; software. But now it's worth looking at the larger ecosystem to understand where all of this might be headed.</p>
<h2>&quot;Frontier minus six&quot;</h2>
<p>One idea that's come up in a host of different conversations around codeless software, both from supporters and skeptics, is how these new orchestration tools can enable coders to control coding bots that <em>aren't</em> from the Big AI companies. Skeptics say, &quot;won't everyone just use Claude Code, since that's the best coding bot?&quot;</p>
<p>The response that comes up is one that I keep articulating as &quot;frontier minus six&quot;, meaning the idea that many of the open source or open-weight AI models are often delivering results at a level equivalent to where frontier AI models were six months ago. Or, sometimes, where they were 9 months or a year ago. In any of these cases, these are still damn good results! These levels of performance are not merely acceptable, they are results that we were amazed by just months ago, and are more than serviceable for a large number of use cases — especially if those use cases can be run locally, at low cost, with lower power usage, without having to pay any vendor, and in environments where one can inspect what's happening with security and privacy.</p>
<p>When we consider that a frontier-minus-six fleet of bots can often run on cheap commodity hardware (instead of the latest, most costly, hard-to-get Nvidia GPUs) and we still have the backup option of escalating workloads to the paid services if and when a task is too challenging for them to complete, it seems inevitable that this will be part of the mix in future codeless implementations.</p>
<h2>Agent patterns and design</h2>
<p>The most thoughtful and fluent analysis of the new codeless approach has been <a href="https://maggieappleton.com/gastown">this wonderful essay by Maggie Appleton</a>, whose writing is always incisive and insightful. This one's a must-read! Speaking of Gas Town (Steve Yegge's signature orchestration tool, which has catalyzed much of the codeless revolution), Maggie captures the ethos of the entire space:</p>
<blockquote>
<p>We should take Yegge’s creation seriously not because it’s a serious, working tool for today’s developers (it isn’t). But because it’s a good piece of speculative design fiction that asks provocative questions and reveals the shape of constraints we’ll face as agentic coding systems mature and grow.</p>
</blockquote>
<h2>Code and legacy</h2>
<p>Once you've considered Maggie's piece, it's worth reading over Steve Krouse's essay, &quot;<a href="https://blog.val.town/vibe-code">Vibe code is legacy code</a>&quot;. Steve and his team build the delightful <a href="https://www.val.town">val town</a>, an incredibly accessible coding community that strikes a very careful balance between enabling coding and enabling AI assistance without overwriting the human, creative aspects of building with code. In many ways (including its aesthetic), it is the closest thing I've seen to a spiritual successor to the work we'd done for many years with <a href="https://en.wikipedia.org/wiki/Glitch,_Inc.">Glitch</a>, so it's no surprise that Steve would have a good intuition about the human relationship to creating with code.</p>
<p>There's an interesting point, however to the core point Steve makes about the disposability of vibe-coded (or AI-generated) code: <em>all</em> code is disposable. Every single line of code I wrote during the many years I was a professional developer has since been discarded. And it's not just because I was a singularly terrible coder; this is often the <em>normal</em> thing that happens with code bases after just a short period of time. As much as we lament the longevity of legacy code bases, or the impossibility of fixing some stubborn old systems based on dusty old languages, it's also very frequently the case that people happily rip out massive chunks of code that people toiled over for months or years and then discard it all without any sentimentality whatsoever.</p>
<p>Codeless tooling just happens to embrace this ephemerality and treat it as a feature instead of a bug. That kind of inversion of assumptions often leads to interesting innovations.</p>
<h2>To enterprise or not</h2>
<p>As I noted in my original piece on codeless software, we can expect any successful way of building software to be appropriated by companies that want to profiteer off of the technology, <em>especially</em> enterprise companies. This new realm is no different. Because these codeless orchestration systems have been percolating for some time, we've seen some of these efforts pop up already.</p>
<p>For example, the team at Every, which consults and builds tools around AI for businesses, calls a lot of these approaches <a href="https://every.to/chain-of-thought/compound-engineering-how-every-codes-with-agents">compound engineering</a> when their team uses them to create software. This name seems fine, and it's good to see that they maintain the ability to switch between models easily, even if they currently prefer Claude's Opus 4.5 for most of their work. The focus on planning and thinking through the end product holistically is a particularly important point to emphasize, and will be key to this approach succeeding as new organizations adopt it.</p>
<p>But where I'd quibble with some of what they've explained is the focus on tying the work to individual vendors. Those concerns should be abstracted away by those who are implementing the infrastructure, as much as possible. It's a bit like ensuring that most individual coders don't have to know exactly which optimizations a compiler is making when it targets a particular CPU architecture. Building that muscle where the specifics of different AI vendors become less important will help move the industry forward towards reducing platforms costs — and more importantly, empowering coders to make choices based on their priorities, not those of the AI platforms or their bosses.</p>
<h2>Meeting the codeless moment</h2>
<p>A good example of the &quot;normal&quot; developer ecosystem recognizing the groundswell around codeless workflows and moving quickly to integrate with them is the Tailscale team <em>already</em> shipping <a href="https://tailscale.com/blog/aperture-private-alpha">Aperture</a>. While this initial release is focused on routine tasks like managing API keys, it's really easy to see how the ability to manage gateways and usage into a heterogeneous mix of coding agents will start to enable, and encourage, adoption of new coding agents. (Especially if those &quot;frontier-minus-six&quot; scenarios start to take off.)</p>
<p>I've been on the record <a href="https://me.dm/@anildash/109719178280170032">for years</a> about being bullish on Tailscale, and nimbleness like this is a big reason why. That example of seeing where developers are going, and then building tooling to serve them, is always a sign that something is bubbling up that could actually become signficant.</p>
<p>It's still early, but these are the first few signs of a nascent ecosystem that give me more conviction that this whole thing might become real.</p>

    ]]></content>
    </entry>
</feed>
Raw text
<?xml version="1.0" encoding="utf-8"?>
<feed xmlns="http://www.w3.org/2005/Atom" xmlns:xml="http://www.w3.org/XML/1998/namespace" xml:base="https://anildash.com/">
  <title>Anil Dash</title>
  <subtitle>A blog about making culture. Since 1999.</subtitle>
  <link href="https://anildash.com/feed.xml" rel="self"/>
  <link href="https://anildash.com/"/>
  
    <updated>2026-01-27T00:00:00Z</updated>
  
  <id>https://anildash.com</id>
  <author>
    <name>Anil Dash</name>
    <email>[email protected]</email>
  </author>
  
    
    <entry>
      <title>I know you don’t want them to want AI, but…</title>
      <link href="https://anildash.com/2025/11/14/wanting-not-to-want-ai/"/>
      <updated>2025-11-14T00:00:00Z</updated>
      <id>https://anildash.com/2025/11/14/wanting-not-to-want-ai/</id>
      <content type="html">
        <![CDATA[
      <p>Today, Rodrigo Ghedrin wrote the very well-intentioned, but incorrectly-titled,  “<a href="https://manualdousuario.net/en/mozilla-firefox-window-ai">I think nobody wants AI in Firefox, Mozilla</a>”. As he correctly summarizes, <a href="https://connect.mozilla.org/t5/discussions/building-ai-the-firefox-way-shaping-what-s-next-together/td-p/109922">sentiment on the Mozilla thread</a> about a potential new AI pane in the Firefox browser is overwhelmingly negative. That’s not surprising; the Big AI companies have given people numerous legitimate reasons to hate and reject “AI” products, ranging from undermining labor to appropriating content without consent to having egregious environmental impacts to eroding trust in public discourse.</p>
<p>I spent much of the last week having the distinct honor of serving as MC at the <a href="https://www.mozillafestival.org/">Mozilla Festival</a> in Barcelona, which gave me the extraordinary opportunity to talk to hundreds of the most engaged Mozilla community members in person, and to address thousands more from onstage or on the livestream during the event. No surprise, one of the biggest topics we talked about the entire time was AI, and the intense, complex, and passionate feelings so many have about these new tools. Virtually everyone shared some version of what I’d articulated as <a href="https://www.anildash.com/2025/10/17/the-majority-ai-view">the majority view</a> on AI, which is approximately that LLMs can be interesting as a technology, but that Big Tech, and <em>especially</em> Big AI, are decidedly awful and people are very motivated to stop them from committing their worst harms upon the vulnerable.</p>
<p>But.</p>
<p>Another reality that people were a little more quiet in acknowledging, and sometimes reluctant to engage with out loud, is the reality that <em>hundreds of millions of people are using the major AI tools every day</em>. When I would point this out, there was often an initial defensive reaction talking about how people are forced to use these tools at work, or how AI is being shoehorned into every tool and foisted upon users. This is all true! And also? Hundreds of millions of users are choosing to go to these websites, of their own volition, and engage with these tools.</p>
<p>Regular, non-expert internet users find it interesting, or even <em>amusing</em>, to generate images or videos using AI and to send that media to their friends. While sophisticated media aesthetics find those creations gauche or even offensive, a lot of other cultures find them perfectly acceptable. And it’s an inarguable reality that millions of people find AI-generated media images emotionally <em>moving</em>. Most people that see AI-generated content as tolerable folk art belong to demographics that are dismissed by those who shape the technology platforms that billions of people use every day.</p>
<p>Which brings us back to “nobody wants AI in Firefox”. (And its obligatory <a href="https://news.ycombinator.com/item?id=45926779">matching Hacker News thread</a>, which proceeds exactly as you might expect.) In the communities that frequent places like Hacker News and Mozilla forums, where everyone is hyper-fluent in concerns like intellectual property rights and the abuses of Big Tech, it’s received wisdom that “everyone” resists the encroachment of AI into tools, and therefore the only possible reason that Mozilla (or any organization) might add support for any kind of AI features would be to chase a trend that’s in fashion amongst tech tycoons. I don’t doubt that this is a factor; anytime a significant percentage of decision makers are alumni of Silicon Valley, its culture is going to seep into an organization.</p>
<h2>The War On Pop-Ups</h2>
<p>What people are ignoring, though, is that <em>using AI tools is an incredibly mainstream experience now</em>. Regular people do it all the time. And doing so in normal browsers, in a normal context, is less safe. We can look at an analogy from the early days of the browser wars, a generation ago.</p>
<p>Twenty years ago, millions and millions of people used Internet Explorer to get around the web, because it was the default browser that came with their computer. It was buggy and wildly insecure, and users would often find their screen littered with intrusive pop-up advertisements that had been spawned by various sites that they had visited across the web. We could have said, “well, those are simply fools with no taste using bad technology who get what they deserve”</p>
<p>Instead, countless enthusiasts and advocates across the web decided that <em>everyone</em> deserved to have an experience that was better and safer. And as it turned out, while getting those improvements, people could even get access to a cool new feature that nobody had seen before: tabs! Firefox wasn’t the first browser to invent all these little details, but it was the first to put them all together into one convenient little package. Even if the expert users weren’t personally visiting the sites riddled with pop-up ads themselves, they were glad to have spared their non-expert friends from the miseries they were enduring on the broken internet.</p>
<p>I don’t know why today’s Firefox users, even if they’re the most rabid anti-AI zealots in the world, don’t say, “well, even if I hate AI, I want to make sure Firefox is good at protecting the privacy of AI users so I can recommend it to my friends and family who use AI”. I have to assume it’s because they’re in denial about the fact that their friends and family are using these platforms. (Judging by the tenor of their comments on the topic, I’d have to guess their friends don’t want to engage with them on the topic at all.)</p>
<p>We see with tools like <a href="https://www.anildash.com/2025/10/22/atlas-anti-web-browser">ChatGPT’s Atlas</a> that there are now aggressively anti-web browsers coming to market, and even a sophisticated user might not be able to realize how nefarious some of the tactics of these new apps can be. I think those who are critical can certainly see that those enabling those harms are bad actors. And those critics are also aware that hundreds of millions of people are using ChatGPT. So, then… what browser do they think those users should use?</p>
<h2>What does good look like?</h2>
<p>Judging by what I see in the comments on the posts about Firefox’s potential AI feature integrations, the apparent path that critics are recommending as an alternative browser is “I’ll yell at you until you stop using ChatGPT”. Consider this post my official notice: that strategy hasn’t worked. And it is not <em>going</em> to work. The only thing that <em>will</em> work is to offer a better alternative to these users. That will involve <a href="https://www.anildash.com/2025/05/02/what-would-good-ai-look-like">defining what an acceptably “good” alternative AI looks like</a>, and then building and shipping it to these users, and convincing them to use it. I’m hoping such an effort succeeds. But I can guarantee that scolding people and trying to convince them that they’re not finding utility in the current platforms, or trying to make them feel guilty about the fact that they <em>are</em> finding utility in the current platforms, will not work.</p>
<p>And none of this is exculpatory for my friends at Mozilla. As I’ve said to the good people there, and will share again here, I don’t think the framing of the way this feature has been presented has done either the Firefox team or the community any favors. These big, emotional blow-ups are demoralizing, and take away time and energy and attention that could be better spent getting people excited and motivated to grow for the future.</p>
<p>My personal wishlist would be pretty simple:</p>
<p><em>Just give people the “shut off all AI features” button. It’s a tiny percentage of people who want it, but they’re never going to shut up about it, and they’re convinced they’re the whole world and they can’t distinguish between being mad at big companies and being mad at a technology so give them a toggle switch and write up a blog post explaining how extraordinarily expensive it is to maintain a configuration option over the lifespan of a global product.</em> Market Firefox as “The best AI browser for people who hate Big AI”. Regular users have <em>no idea</em> how creepy the Big AI companies are — they’ve just heard their local news talk about how AI is the inevitable future. If Mozilla can warn me <a href="https://www.mozillafoundation.org/en/privacynotincluded/articles/how-to-protect-your-privacy-from-chatgpt-and-other-ai-chatbots">how to protect my privacy from ChatGPT</a>, then it can also mention that ChatGPT tells children how to self-harm, and should be aggressive in engaging with the community on how to build tools that help mitigate those kinds of harms — how do we catalyze <em>that</em> innovation?</p>
<ul>
<li>Remind people that there isn’t “a Firefox” — everyone is Firefox. Whether it’s Zen, or your custom build of Firefox with your favorite extensions and skins, it’s all part of the same story. Got a local LLM that runs entirely as a Firefox extension? Great! That should be one of the many Firefoxes, too. Right now, so much of the drama and heightened emotions and tension are coming from people’s (well… dudes') egos about there being One True Firefox, and wanting to be the one who controls what’s in that version, as an expression of one set of values. This isn’t some blood-feud fork, there can just be a lot of different choices for different situations. Make it all work.</li>
</ul>
<p>So, that’s the answer. I think some people want AI in Firefox, Mozilla. And some people don’t. And some people don’t know what “AI” means. And some people forgot Firefox even exists. It’s that last category I’m most concerned about, frankly. Let’s go get ‘em.</p>

    ]]>
      </content>
    </entry>
  
    
    <entry>
      <title>Vibe Coding: Empowering and Imprisoning</title>
      <link href="https://anildash.com/2025/12/02/vibe-coding-empowering-and-imprisoning/"/>
      <updated>2025-12-02T00:00:00Z</updated>
      <id>https://anildash.com/2025/12/02/vibe-coding-empowering-and-imprisoning/</id>
      <content type="html">
        <![CDATA[
      <p>In case you haven’t been following the world of software development closely, it’s good to know that vibe coding — using LLM tools to assist with writing code — can help enable many people to create apps or software that they wouldn’t otherwise be able to make. This has led to an extraordinarily rapid adoption curve amongst even experienced coders in many different disciplines within the world of coding. But there’s a very important threat posed by vibe coding that almost no one has been talking about, one that’s far more insidious and specific than just the risks and threats posted by AI or LLMs in general.</p>
<p>Here’s a quick summary:</p>
<p><em>One of the most effective uses of LLMs is in helping programmers write code</em> A huge reason VCs and tech tycoons put billions into funding LLMs was so they could undermine coders and depress wages</p>
<ul>
<li>Vibe coding might limit us to making simpler apps instead of the radical innovation we need to challenge Big Tech</li>
</ul>
<h2>Start vibing</h2>
<p>It may be useful to start by explaining how people use LLMs to assist with writing software. My background is that I’ve helped build multiple companies focused on enabling millions of people to create with code. And I’m personally an example of one common scenario with vibe coding. Since I don’t code regularly anymore, I’ve become much slower and less efficient at even the web development tasks that I used to do professionally, which I used to be fairly competent at performing. In software development, there are usually a nearly-continuous stream of new technologies being released (like when you upgrade your phone, or your computer downloads an update to your web browser), and when those things change, developers have to update <em>their</em> skills and knowledge to stay current with the latest tools and techniques. If you’re not staying on top of things, your skillset can rapidly decay into irrelevance, and it can be hard to get back up to speed, even though you understand the fundamentals completely, and the underlying logic of <em>how</em> to write code hasn’t changed at all. It’s like knowing how to be an electrician but suddenly you have to do all your work in French, and you don’t speak French.</p>
<p>This is the kind of problem that LLMs are really good at helping with. Before I had this kind of coding assistant, I couldn’t do any meaningful projects within the limited amount of free time that I have available on nights and weekends to build things. Now, with the assistance of contemporary tools, I can get help with things like routine boilerplate code and obscure syntax, speeding up my work enough to focus on the fun, creative parts of coding that I love.</p>
<p>Even professional coders who <em>are</em> up to date on the latest technologies use these LLM tools to do things like creating scripts, which are essentially small bits of code used to automate or process common tasks. This kind of code is disposable, meaning it may only ever be run once, and it’s not exposed to the internet, so security or privacy concerns aren’t usually much of an issue. In that context, having the LLM create a utility for you can feel like being truly liberated from grunt work, something like having a robot vacuum around to sweep up the floor.</p>
<h2>Surfing towards serfdom</h2>
<p>This all sounds pretty good, right? It certainly helps explain why so many in the tech world tend to see AI much more positively than almost everyone else does; there’s a clear-cut example of people finding value from these tools in a way that feels empowering or even freeing.</p>
<p>But there are far darker sides to this use of AI. Let me put aside the threats and risks of AI that are true of <em>all</em> uses of the Big AI platforms, like the environmental impact, the training on content without consent, the psychological manipulation of users, the undermining of legal regulations, and other significant harms. These are all real, and profound, but I want to focus on what’s specific to using AI to help write code here, because there are negative externalities that are unique to <em>this</em> context that people haven’t discussed enough. (For more on the larger AI discussion, see &quot;<a href="https://www.anildash.com/2025/05/01/what-would-good-ai-look-like/">What would good AI look like?</a>&quot;)</p>
<p>The first problem raised by vibe coding is an obvious one: the major tech investors focused on making AI good at writing code because they wanted to make coders less powerful and reduce their pay. If you go back a decade ago, nearly everyone in the world was saying “teach your kids to code” and being a software engineer was one of the highest paying, most powerful individual jobs in the history of labor. Pretty soon, coders were acting like it — using their power to improve workplace conditions for those around them at the major tech companies, and pushing their employers to be more socially responsible. Once workers began organizing in this way, the tech tycoons who founded the big tech companies, and the board members and venture capitalists who backed them, immediately began investing billions of dollars in building these technologies that would devalue the labor of millions of coders around the world.</p>
<p>It worked. More than <em>half a million</em> tech workers have been laid off in America since ChatGPT was released in November 2022.</p>
<p>That’s <em>just</em> in the private sector, and <em>just</em> the ones tracked by <a href="https://layoffs.fyi">layoffs.fyi</a>.  Software engineering job listings have <a href="https://blog.pragmaticengineer.com/software-engineer-jobs-five-year-low/">plummeted to a 5-year low</a>. This is during a period of time that nobody even describes as a recession. The same venture capitalists who funded the AI boom keep insisting that these trends are about macroeconomic abstractions like interest rates, a stark contrast to their rhetoric the rest of the time, when they insist that they are alpha males who make their own decisions based on their strong convictions and brave stances against woke culture. It is, in fact, the case that they are just greedy people who invested a ton of money into trying to put a lot of good people out of work, and they succeeded in doing so.</p>
<p>There is no reason why AI tools like this <em>couldn't</em> be used in the way that they're often described, where they increase productivity and enable workers to do more and generate more value. But instead we have the wealthiest people in the world telling the wealthiest companies in the world, while they generate record profits, to lay off workers who could be creating cool things for customers, and then blaming it on everyone but themselves.</p>
<h2>The past as prison</h2>
<p>Then there’s the second problem raised by vibe coding: You can’t make anything truly radical with it. By definition, LLMs are trained on what has come before. In addition to being already-discovered territory, existing code is buggy and broken and sloppy and, as anyone who has ever written code knows, absolutely embarrassing to look at. Worse, many of the people who are using vibe coding tools are increasingly those who <em>don’t</em> understand the code that is being generated by these systems. This means the people generating all of this newly-vibed code won’t even know when the output is insecure, or will perform poorly, or includes exploits that let others take over their system, or when it is simply incoherent nonsense that <em>looks</em> like code but doesn’t do anything.</p>
<p>All of those factors combine to encourage people to think of vibe coding tools as a sort of “black box” that just spits out an app <em>for</em> you. Even the giant tech companies are starting to encourage this mindset, tacitly endorsing the idea that people don’t need to know what their systems are doing under the hood. But obviously, somebody needs to know whether a system is <em>actually</em> secure. Somebody needs to know if a system is actually doing the tasks it says that it’s doing. The Big AI companies that make the most popular LLMs on the market today routinely design their products to induce emotional dependency in users by giving them positive feedback and encouragement, even when that requires generating false responses. Put more simply: they make the bot lie to you to make you feel good so you use the AI more. That’s terrible in a million ways, but one of them is that it sure does generate some bad code.</p>
<p>And a vibe coding tool absolutely won’t make something truly <em>new</em>. The most radical, disruptive, interesting, surprising, weird, fun innovations in technology have happened because people with a strange compulsion to do something cool had enough knowledge to get their code out into the world. The World Wide Web itself was <em>not</em> a huge technological leap over what came before — it took off because of a huge leap in <em>insight</em> into human nature and human behavior, that happened to be captured in code. The actual bits and bytes? They were mostly just plain text, much of which was in formats that had already been around for many years prior to Tim Berners-Lee assembling it all into the first web browser. That kind of surprising innovation could probably never be vibe coded, even though all of the raw materials might be scooped up by an LLM, because even if the human writing the prompt had that counterintuitive stroke of genius, the system would still be hemmed in by the constraints of the works it had been trained on. The past is a prison when you’re inventing the future.</p>
<p>What’s more, if you were going to use a vibe coding tool to make a truly radical new technology, do you think today’s Big AI companies would let their systems create that app? The same companies that made a platform that just put hundreds of thousands of coders out of work? The  same companies that make a platform that tells your kids to end their own lives? The same companies whose cronies in the White House are saying there should <em>never be any laws</em> reining them in? Those folks are going to help you make new tech that threatens to disrupt their power? I don’t think so.</p>
<h2>Putting power in people’s hands</h2>
<p>I’m deeply torn about what the future of LLMs for coding should be. I’ve spent decades of my life trying to make it easier for everyone to make software. I’ve seen, firsthand, the power of using AI tools to help coders — especially those new to coding — build their confidence in being able to create something new. I love that potential, and in many ways, it’s the most positive and optimistic possibility around LLMs that I’ve seen. It’s the thing that makes me think that maybe there is a part of all the AI hype that is not pure bullshit. Especially if we can find a version of these tools that’s genuinely open source and free and has been trained on people’s code with their consent and cooperation, perhaps in collaboration with some educational institutions, I’d be delighted to see that shared with the world in a thoughtful way.</p>
<p>But I also have seen the majority of the working coders I know (and the <em>non</em>-working coders I know, including myself) rush to integrate the commercial coding assistants from the Big AI companies into their workflow without necessarily giving proper consideration to the long-term implications of that choice. What happens when we’ve developed our dependencies on that assistance? How will people introduce <em>new</em> technologies like new programming languages and frameworks if we all consider the LLMs to be the canonical way of writing our code, and the training models don’t know the new tech exists? How does our imagination shrink when we consider our options of what we create with code to be choosing between the outputs of the LLM rather than starting from the blank slate of our imagination? How will we build the next generation of coders skilled enough to catch the glaring errors that LLMs create in their code?</p>
<p>There’s never been this stark a contrast between the negatives and positives of a new technology being so tightly coupled before when it comes to enabling developers. Generally change comes to coders incrementally. Historically, there was always a (wonderful!) default skepticism to coding culture, where anything that reeked of marketing or hype was looked at with a huge amount of doubt until there was a significant amount of proof to back it up.</p>
<p>But in recent years, as with everything else, the culture wars have come for tech. There’s now a cohort in the coding world that has adopted a cult of personality around a handful of big tech tycoons despite the fact that these men are deeply corrosive to society. Or perhaps <em>because</em> they are. As a result, there’s a built-in constituency for any new AI tool, regardless of its negative externalities, which gives them a sense of momentum even where there may not be any.</p>
<p>It’s worth us examining what’s really going on, and articulating explicitly what we’re trying to enable. Who are we trying to empower? What does success look like? What do we want people to be able to build? What do we <em>not</em> want people to be able to make? What price is too high to pay? What convenience is not worth the cost?</p>
<h2>What tools do we choose?</h2>
<p>I do, still, believe deeply in the power of technology to empower people. I believe firmly that you have to understand how to create technology if you want to understand how to control it. And I still believe that we have to democratize the power to create and control technology to as many people as possible so that technology can be something people can use as a tool, rather than something that happens _to_them.</p>
<p>We are now in a complex phase, though, where the promise of democratizing access to creating technology is suddenly fraught in a way that it has never been before. The answer can’t possibly be that technology remains inaccessible and difficult for those outside of a privileged class, and easy for those who are already comfortable in the existing power structure.</p>
<p>A lot is still very uncertain, but I come back to one key question that helps me frame the discussion of what’s next: What’s the most radical app that we could build? And which tools will enable me to build it? Even if all we can do is start having a more complicated conversation about what we’re doing when we’re vibe coding, we’ll be making progress towards a more empowered future.</p>

    ]]>
      </content>
    </entry>
  
    
    <entry>
      <title>They have to be able to talk about us without us</title>
      <link href="https://anildash.com/2025/12/05/talk-about-us-without-us/"/>
      <updated>2025-12-05T00:00:00Z</updated>
      <id>https://anildash.com/2025/12/05/talk-about-us-without-us/</id>
      <content type="html">
        <![CDATA[
      <p>It’s absolutely vital to be able to communicate effectively and efficiently to large groups of people. I’ve been lucky enough to get to refine and test my skills in communicating at scale for a few decades now, and the power of talking to communities is the one area where I’d most like to pass on what I’ve learned, because it’s this set of skills that can have the biggest effect on deciding whether good ideas and good work can have their greatest impact.</p>
<p>My own work crosses many disparate areas. Over the years, I’ve gotten to cycle between domains as distinct as building technology platforms and products for developers and creators, enabling activism and policy advocacy in service of humanist ideals, and more visible external-facing work such as public speaking or writing in various venues like magazines or on this site. (And then sometimes I dabble in my other hobbies and fun stuff like scholarship or research into areas like pop culture and media.)</p>
<p>What’s amazing is, in <em>every single one</em> of these wildly different areas, the exact same demands apply when trying to communicate to broad groups of people. This is true despite the broadly divergent cultural norms across all of these different disciplines. It can be a profoundly challenging, even intimidating, job to make sure a message is being communicated accurately, and in high fidelity, to everyone that you need to reach.</p>
<p>That vital task of communicating to a large group gets even <em>more</em> daunting when you inevitably realize that, even if you <em>were</em> to find the perfect wording or phrasing for your message, you’d still never be able to deliver your story to every single person in your target audience by yourself anyway. There will always be another person whom you’re trying to reach that you just haven’t found yet. So, is it hopeless? Is it simply impossible to effectively tell a story at scale if you don’t have massive resources?</p>
<p>It doesn’t have to be. We can start with one key insight about what it takes to get your most important stories out into the world. It’s a perspective that seems incredibly simple at first, but can lead to a pretty profound set of insights.</p>
<h2>They have to be able to talk about us <em>without us</em>.</h2>
<p>They have to be able to talk about us without us. What this phrase means, in its simplest form,  is that you have to tell a story so clear, so concise, so <em>memorable and evocative</em> that people can repeat it for you even after you’ve left the room. And the people who hear it need to be able to do this the <em>first time</em> they hear the story. Whether it’s the idea behind a new product, the core promise of a political campaign, or the basic takeaway from a persuasive essay (guess what the point of this one is!) — not only do you have to explain your idea and make your case, you have to be teaching your listener how to do the same thing for themselves.</p>
<p>This is a tall order, to be sure. In pop music, the equivalent is writing a hit where people feel like they can sing along to the chorus by the time they get to the end of the song for the first time. Not everybody has it in them to write a hook that good, but if you do, that thing is going to become a classic. And when someone <em>else</em> has done it, you know it because it gets stuck in your head. Sometimes you end up humming it to yourself even if you didn’t want to. Your best ideas — your most <em>vital</em> ideas — need to rest on a messaging platform that solid.</p>
<p>Delivering this kind of story actually requires substance. If you’re trying to fake it, or to force a narrative out of fluff or fakery, that will very immediately become obvious. When you set out to craft a story that travels in your absence, it has to have a body if it’s going to have legs. Bullshit is slippery and smells terrible, and the first thing people want to do when you leave the room is run away from it, not carry it with them.</p>
<h2>The mission is the message</h2>
<p>There’s another challenge to making a story that can travel in your absence: your ego has to let that happen. If you make a story that is effective and compelling enough that others can tell it, then, well…. those other people are going to tell it.  Not you. They’ll do it in their own words, and in their own voices, and make it <em>theirs</em>. They may use a similar story, but in their own phrasing, so it will resonate better with their people. This is a <em>gift</em>! They are doing you a kindness, and extending you great generosity. Respond with gratitude, and be wary of anyone who balks at not getting to be the voice or the face of a message themselves. Everyone gets a turn telling the story.</p>
<p>Maybe the simple fact that others will be hearing a good story for the first time will draw them to it, regardless of <em>who</em> the messenger is. Sometimes people get attached to the idea that <em>they</em> have to be the one to deliver the one true message. But a core precept of “talk about us without us” is that there’s a larger mission and goal that everyone is bought into, and this demands that everyone stay aligned to their values rather than to their own personal ambitions around who tells the story.</p>
<p>The truth of whomever will be most <em>effective</em> is the factor used to decide who will be the person to tell the story in any context. And this is a forgiving environment, because even if someone doesn’t get to be the voice one day, they’ll get another shot, since repetition and consistency are also key parts of this strategy, thanks to the disciplined approach that it brings to communication.</p>
<h2>The joy of communications discipline</h2>
<p>At nearly every organization where I’ve been in charge of onboarding team members in the last decade or so, one of the first messages we’ve presented to our new colleagues is, “We are disciplined communicators!” It’s a message that they hopefully get to hear as a joyous declaration, and as an assertion of our shared values. I always try to explicitly instill this value into teams I work with because, first, it’s good to communicate values explicitly, but also because this is a concept that is very seldom directly stated.</p>
<p>It is ironic that this statement usually goes unsaid, because nearly everyone who pays attention to culture understands the vital importance of disciplined communications. Brands that are strictly consistent in their use of things like logos, type, colors, and imagery get such wildly-outsized cultural impact in exchange for relatively modest investment that it’s mind-boggling to me that more organizations don’t insist on following suit. Similarly, institutions that develop and strictly enforce a standard tone of voice and way of communicating (even if the tone itself is playful or casual) capture an incredibly valuable opportunity at minimal additional cost relative to how much everyone’s already spending on internal and external communications.</p>
<p>In an era where every channel is being flooded with AI-generated slop, and when most of the slop tools are woefully incapable of being consistent about anything, simply showing up with an obviously-human, obviously-consistent story is a phenomenal way of standing out. That discipline demonstrates all the best of humanity: a shared ethos, discerning taste, joyful expression, a sense of belonging, an appealing consistency. And best of all, it represents the chance to participate for yourself — because it’s a message that you now know how to repeat for yourself.</p>
<p>Providing messages that individuals can pick up and run with on their own is a profoundly human-centric and empowering thing to do in a moment of rising authoritarianism. When the fascists in power are shutting down prominent voices for leveling critiques that they would like to censor, and demanding control over an increasingly broad number of channels, there’s reassurance in people being empowered to tell their own stories together. Seeing stories bubble up from the grassroots in collaboration, rather than being forced down upon people from authoritarians at the top, has an emotional resonance that only strengthens the substance of whatever story you’re telling.</p>
<h2>How to do it</h2>
<p>Okay, so it sounds great: Let’s tell stories that other people want to share! Now, uh… how do we do it? There are simple principles we can follow that help shape a message or story into one that is likely to be carried forward by a community on its own.</p>
<ul>
<li><strong>Ground it in your values.</strong> When we began telling the story of my last company Glitch, the conventional wisdom was that we were building a developer tool, so people would describe it as an “IDE” — an “integrated development environment”, which is the normal developer jargon for the tool coders use to write their code in. We <em>never</em> described Glitch that way. From <a href=https://web.archive.org/web/20170504080445/https://glitch.com/>day one</a>, we always said “Glitch is the friendly community where you'll build the app of your dreams” (later, “the friendly community where everybody builds the internet”). By talking about the site as a <em>friendly community</em> instead of an <code>integrated development environment</code>, it was crystal clear what expectations and norms we were setting, and what our values were. Within a few months, even our <em>competitors</em> were describing Glitch as a “friendly community” while they were trying to talk about how they were better than us about some feature or the other. That still feels like a huge victory — even the competition was talking about us without us! Make sure your message evokes the values you want people to share with each other, either directly or indirectly.</li>
<li><strong>Start with the principle.</strong> This is a topic I’ve covered before, but <a href=https://www.anildash.com/2022/01/31/you-have-to-start-with-the-principle/>you can't win unless you know what you're fighting for</a>. Identify concrete, specific, perhaps even <em>measurable</em> goals that are tied directly to the values that motivate your efforts. As <a href=https://www.anildash.com/2025/11/05/turn-the-volume-up/>noted recently</a>, Zohran Mamdani did this masterfully when running for mayor of New York City. While the <em>values</em> were affordability and the dignity of ordinary New Yorkers, the clear, understandable, measurable principle could be something as simple as “free buses”. This is a goal that everyone can get in 5 seconds, and can explain to their neighbor <em>the first time they hear it</em>. It’s a story that travels effortlessly on its own — and that people will be able to verify very easily when it’s been delivered. That’s a perfect encapsulation of “talk about us without us”.</li>
<li>**Know what makes you unique.**Another way of putting this is to simply make sure that you have a sense of self-awareness. But the story you tell about your work or your movement has to be <em>specific</em>. There can’t be platitudes or generalities or vague assertions as a core part of the message, or it will never take off. One of the most common failure states for this mistake is when people lean on <em>slogans</em>. Slogans can have their use in a campaign, for reminding people about the existence of a brand, or supporting broader messaging. But very often, people think a slogan <em>is</em> a story. The problem is that, while slogans are definitely repeatable, slogans are almost definitionally too vague and broad to offer a specific and unique narrative that will resonate. There’s no point in having people share something if it doesn’t say something. I usually articulate the challenge here like this:<strong>Only say what only <em>you</em> can say.</strong></li>
<li><strong>Be evocative, not comprehensive.</strong> Many times, when people are passionate about a topic or a movement, the temptation they have in telling the story is to work in <em>every little detail</em> about the subject. They often think, “if I include every detail, it will persuade more people, because they’ll know that I’m an expert, or it will convince them that I’ve thought of everything!” In reality, when people are not subject matter experts on a topic, or if they’re not already intrinsically interested in that topic, hearing a bunch of extensive minutia about it will almost always leave them feeling bored, confused, intimidated, condescended-to, or some combination of all of these. Instead, pick a small subset of the most <em>emotionally gripping</em> parts of your story, the aspects that have the deepest human connection or greatest relevance and specificity to the broadest set of your audience, and focus on telling those parts of the story as passionately as possible. If you succeed in communicating that initial small subset of your story effectively, then you may <em>earn</em> the chance to tell the other more complex and nuanced details of your story.</li>
<li><strong>Your enemies are your friends.</strong> Very often, when people are creating messages about advocacy, they’re focused on competition or rivals. In the political realm, this can be literal opposing candidates, or the abstraction of another political party. In the corporate world, this can be (real or imagined) competitive products or companies. In many cases, these other organizations or products or competitors occupy so much more mental space in your mind, or your team’s mind, than they do in the mind of your potential audience. Some of your audience has never heard of them at all. And a <em>huge</em> part of your audience thinks of you and your biggest rival as… basically the same thing. In a business or commercial context, customers can barely keep straight the difference between you and your competition — you’re both just part of the same amorphous blob that exists as “the things that occupy that space”. Your competitor may be the only other organization in the world that’s fighting just as hard as you are to create a market for the product that you’re selling. The same is true in the political space; sometimes the biggest friction arises over the narcissism of small differences. What we can take away from these perspectives is that our stories have to focus on what distinguishes us, yes, but also on what we might have in common with those whom we might otherwise have perceived to have been aligned with the “enemy”. Those folks might not have sworn allegiance to an opposing force; they may simply have chosen another option out of convenience, and not even seen that choice as being in opposition to your story at all.</li>
<li><strong>Find joy in repetition.</strong> Done correctly, a disciplined, collaborative, evocative message can become a mantra for a community. There’s a pride and enthusiasm that can come from people becoming proficient in sharing their own version of the collective story. And that means enjoying when that refrain comes back around, or when a slight improvement in the core message is discovered, and everyone finds a way to refine the way they’re communicating about the narrative. A lot of times, people worry that their team will get bored if they’re “just telling the same story over and over all the time”. In reality, as a brilliant man once said, <a href=https://youtu.be/FgP5VRp_myE>there’s joy in repetition</a>.</li>
<li><strong>Don’t obsess over exact wording.</strong> This one is tricky; you might say, “but you said we have to be disciplined communicators!” And it’s true: it’s important to be disciplined. But that doesn’t mean you can’t leave room for people to put their own spin on things. Let them translate to their own languages or communities. Let them augment a general principle with a specific, personal connection. If they have their own authentic experience which will amplify a story or drive a point home, let them weave that context into the consistent narrative that’s been shared over time. As long as you’re not enabling a “telephone game” where the story starts to morph into an unrecognizable form, it’s perfectly okay to add a human touch by going slightly off script.</li>
</ul>
<h2>Share the story</h2>
<p>Few things are more rewarding than when you find a meaningful narrative that resonates with the world. Stories have the power to change things, to make people feel empowered, to galvanize entire communities into taking action and recognizing their own power. There’s also a quiet reward in the craft and creativity of working on a story that travels, in finding notes that resonate with others, and in challenging yourself to get far enough out of your own head to get into someone else’s heart.</p>
<p>I still have so much to learn about being able to tell stories effectively. I still screw it up so much of the time, and I can look back on many times when I wish I had better words at hand for moments that sorely needed them. But many of the most meaningful and rewarding moments of my life have been when I’ve gotten to be in community with others, as we were not just sharing stories together, but <em>telling</em> a united story together. It unlocks a special kind of creativity that’s a lot bigger than what any one of us can do alone.</p>

    ]]>
      </content>
    </entry>
  
    
    <entry>
      <title>What about “Nothing about us without us?”</title>
      <link href="https://anildash.com/2025/12/08/what-about-nothing-about-us/"/>
      <updated>2025-12-08T00:00:00Z</updated>
      <id>https://anildash.com/2025/12/08/what-about-nothing-about-us/</id>
      <content type="html">
        <![CDATA[
      <p>As I was drafting my last piece on Friday, “<a href="https://www.anildash.com/2025/12/05/talk-about-us-without-us/">They have to be able to talk about us without us</a>”, my thoughts of course went to one of the most famous slogans of the disability rights movement, “<a href="https://en.wikipedia.org/wiki/Nothing_about_us_without_us">Nothing about us without us.</a>” I wasn’t unaware that there were similarities in the phrasing of what I wrote. But I think the topic of communicating effectively to groups, as I wrote about the other day, and ensuring that disabled people are centered in disability advocacy, are such different subjects that I didn’t want to just quickly gloss over the topic in a sidebar of a larger piece. They're very distinct topics that really only share a few words in common.</p>
<p>One of the great joys of becoming friends with a number of really thoughtful and experienced disability rights activists over the last several years has been their incredible generosity in teaching me about so much of the culture and history of the movements that they’ve built their work upon, and one of the most powerful slogans has been that refrain of “nothing about us without us”.</p>
<p>Here I should start by acknowledging Alice Wong, who we recently lost, who founded the <a href="https://disabilityvisibilityproject.com/about/">Disability Visibility Project</a>, and a MacArthur Fellow, and a tireless and inventive advocate for everyone in the disabled community. She was one of the first people to bring me in to learning about this history and these movements, more than a decade ago. She was also a patient and thoughtful teacher, and over our many conversations over the years, she did more than anyone else in my life to truly <em>personify</em> the spirit of “nothing about us without us” by fighting to ensure that disabled people led the work to make the world accessible for all. If you have the chance, learn about her work, and <a href="https://www.gofundme.com/f/Alice-Wongs-Legacy">support it</a>.</p>
<p>But a key inflection point in my own understanding of “nothing about us without us” came, unsurprisingly, in the context of how disabled people have been interacting with technology. I used to host a podcast called Function, and we did an episode about how inaccessible so much of contemporary technology has become, and how that kind of ruins things for everyone. (The episode is still up on <a href="https://open.spotify.com/episode/0IN2nQWUqmQnAMxNLN85WE">Spotify</a> and <a href="https://podcasts.apple.com/us/podcast/function-with-anil-dash/id1439658455?i=1000452883786">Apple Podcasts</a>.)  We had on <a href="https://emilyladau.com">Emily Ladau</a> of <a href="https://www.theaccessiblestall.com">The Accessible Stall</a> podcast, <a href="https://alexhaagaard.com">Alex Haagaard</a> of <a href="https://www.disabledlist.org">The Disabled List</a>, and <a href="https://www.vilissathompson.com">Vilissa Thompson</a> of <a href="https://www.rampyourvoice.com">Ramp Your Voice</a>. It’s well worth a listen, and Emily, Alex and Vilissa really do an amazing job of pointing to really specific, really evocative examples of <em>obvious</em> places where today’s tech world could be so much more useful and powerful for everyone if its creators were making just a few simple changes.</p>
<p>What’s striking to me now, listening to that conversation six years later, is how little has changed from the perspective of the technology world, but also how much my own lived experience has come to reflect so much of what I learned in those conversations.</p>
<p>Each of them was the &quot;us&quot; in the conversation, using their own personal experience, and the experience of other disabled people that they were in community with, to offer specific and personal insights that the creators of these technologies did not have. And whether it was for reasons of crass commercial opportunism — here's some money you could be making! — or simply because it was the right thing to do morally, it's obvious that the people making these technologies could benefit by honoring the principle of centering these users of their products.</p>
<h2>Taking our turn</h2>
<p>I’ve had this conversation on various social media channels in a number of ways over the years, but another key part of understanding the “us” in “nothing about us without us” when it comes to disability, is that the “us” is <em>all of us</em>, in time. It's very hard for many people who haven’t experienced it to understand that everyone should be accommodated and supported, because everyone is disabled; it’s only a question of when and for how long.</p>
<p>In contemporary society, we’re given all kinds of justifications for why we can’t support everyone’s needs, but so much of those are really grounded in simply trying to convince ourselves that a disabled person is <em>someone else</em>, an “other” who isn’t worthy or deserving of our support. I think deep down, everyone knows better. It’s just that people who don’t (yet) identify as disabled don’t really talk about it very much.</p>
<p>In reality, we'll all be disabled. Maybe you're in a moment of respite from it, or in that brief window before the truth of the inevitability of it has been revealed to you (sorry, spoiler warning!), but it's true for all of us — even when it's not visible. That means all of us have to default to supporting and uplifting and empowering the people who are disabled today. This was the key lesson that I didn’t really get personally until I started listening to those who were versed in the history and culture of disability advocacy, about how the patronizing solutions were often harmful, or competing for resources with the <em>right</em> answers.</p>
<p>I’ve had my glimpses of this myself. Back in 2021, I had Lyme disease. I didn’t get it as bad as some, but it did leave me physically and mentally unable to function as I had been used to, for several months. I had some frame of reference for physical weakness; I could roughly compare it to a bad illness like the flu, even if it wasn’t exactly the same. But a diminished <em>mental</em> capacity was unlike anything I had ever experienced before, and was profoundly unsettling, deeply challenging my sense of self. After the <a href="https://www.anildash.com/2022/07/18/i-went-to-a-coffee-shop/">incident I’d described in 2022</a>, I had a series of things to recover from physically and mentally that also presented a significant challenge, but were especially tough because so much of people’s willingness to accommodate others is based on any disability being <em>visible</em>. Anything that’s not immediately perceived at a superficial level, or legible to a stranger in a way that’s familiar to them, is generally dismissed or seen as invalid for support.</p>
<p>I point all of this out not to claim that I fully understand the experience of those who live with truly serious disabilities, or to act as if I know what it’s been like for those who have genuinely worked to advocate for disabled people. Instead, I think it can often be useful to show how porous the boundary is between people who <em>don’t</em> think of themselves as disabled and those who already know that they are. And of course this does <em>not</em> mean that people who aren't currently disabled can speak on behalf of those who are — that's the whole point of &quot;nothing about us without us&quot;! — but rather to point out that the time to begin building your empathy and solidarity is now, not when you suddenly have the realization that you're part of the community.</p>
<h2>Everything about us</h2>
<p>There’s a righteous rage that underlies the cry of “nothing about us without us”, stemming from so many attempts to address the needs of disabled people having come from those outside the community, arriving with plans that ranged from inept to evil. We’re in a moment when the authoritarians in charge in so much of the world are pushing openly-eugenicist agendas that will target disabled people first amongst the many vulnerable populations that they’ll attempt to attack. Challenging economic times like the one we’re in affect disabled people significantly harder as the job market disproportionately shrinks in opportunities for the disabled first.</p>
<p>So it’s going to take all of us standing in solidarity to ensure that the necessary advocacy and support are in place for what promises to be an extraordinarily difficult moment. But I take some solace and inspiration from the fact that there are so many disabled people who have provided us with the clear guidance and leadership we need to navigate this moment. And there is simple guidance we can follow when doing so to ensure that we’re centering the right leaders, by listening to those who said, “nothing about us without us.”</p>

    ]]>
      </content>
    </entry>
  
    
    <entry>
      <title>How the hell are you supposed to have a career in tech in 2026?</title>
      <link href="https://anildash.com/2026/01/05/a-tech-career-in-2026/"/>
      <updated>2026-01-05T00:00:00Z</updated>
      <id>https://anildash.com/2026/01/05/a-tech-career-in-2026/</id>
      <content type="html">
        <![CDATA[
      <p>The number one question I get from my friends, acquaintances, and mentees in the technology industry these days is, by far, variations on the basic theme of, “what the hell are we supposed to do now?”</p>
<p>There have been mass layoffs that leave more tech workers than ever looking for new roles in the worst market we’ve ever seen. Many of the most talented, thoughtful and experienced people in the industry are feeling worried, confused, and ungrounded in a field that no longer looks familiar.</p>
<p>If you’re outside the industry, you may be confused — isn’t there an AI boom that’s getting hundreds of billions of dollars in investments? Doesn’t that mean the tech bros are doing great? What you may have missed is that half a million tech workers have been laid off in the years since ChatGPT was released; the same attacks on marginalized workers and DEI and “woke” that the tech robber barons launched against the rest of society were aimed at their own companies first.</p>
<p>So the good people who actually <em>make</em> the technology we use every day, the real innovators and creators and designers, are reacting to the unprecedented disconnect between the contemporary tech industry and the fundamentals that drew so many people toward it in the first place. Many of the biggest companies have abandoned the basic principle of making technology that actually <em>works</em>. So many new products fail to deliver on even the basic capabilities that the companies are promising that they will provide.</p>
<p>Many leaders at these companies have run full speed towards moral and social cowardice, abandoning their employees and customers to embrace rank hatred and discrimination in ways that they pretended to be fighting against just a few years ago. Meanwhile, unchecked consolidation has left markets wildly uncompetitive, leaving consumers suffering from the effects of categories without any competition or investment — which we know now as “enshittification”. And the full-scale shift into corruption and crony capitalism means that winners in business are decided by whoever is shameless enough to offer the biggest bribes and debase themselves with the <a href="https://www.anildash.com/2025/09/09/how-tim-cook-sold-out-steve-jobs/">most humiliating display</a> of groveling. It’s a depressing shift for people who, earlier in their careers, often actually <em>were</em> part of inventing the future.</p>
<p>So where do we go from here?</p>
<h2>You’re not crazy.</h2>
<p>The first, and most important, thing to know is that <em>it’s not just you</em>. Nearly everyone in tech I have this conversation with feels very isolated about it, and they’re often embarrassed or ashamed to discuss it. They think that everyone else who has a job in tech is happy or comfortable at their current employers, or that the other people looking for work are getting calls back or are being offered interviews in response to their job applications. But I’m here to tell you: it is grim right now. About as bad as I’ve seen. And I’ve been around a long time.</p>
<p>Every major tech company has watched their leadership abandon principles that were once thought sacrosanct. I’ve heard more people talk about losing respect for executives they trusted, respected, even <em>admired</em> in the last year than at any time I can remember. In smaller companies and other types of organizations, the challenges have been more about the hard choices that come from dire resource constraints or being forced to make ugly ethical compromises for pragmatic reasons. The net result is tons of people who have lost pride and conviction in their work. They’re going through the motions for a paycheck, because they know it’s a tough job market out there, which is a miserable state of affairs.</p>
<p>The public narrative is dominated by the loud minority of dudes who are content to appease the egos of their bosses, sucking up to the worse impulses of those in charge. An industry that used to pride itself on publicly reporting security issues and openly disclosing vulnerabilities now circles its wagons to gang up on people who suggest that an AI tool shouldn’t tell children to harm themselves, that perhaps it should be possible to write a law limiting schools from deploying AI platforms that are known to tell kids to end their own lives. People in tech endure their bosses using slurs at work, making jokes about sexual assault, consorting with leaders who have directly planned the murder of journalists, engaging in open bribery in blatant violation of federal law and their own corporate training on corruption, and have to act like it’s normal.</p>
<p>But it’s not the end of the world. The forces of evil have not yet triumphed, and all hope is not lost. There are still things we can do.</p>
<h2>Taking back control</h2>
<p>It can be easy to feel overwhelmed at such an unprecedented time in the industry, especially when there’s so much change happening. But there are concrete actions you can take to have agency over your own career, and to insulate yourself from the bad actors and maximize your own opportunities — even if some of those bad actors are your own bosses.</p>
<h3>Understanding systems</h3>
<p>One of the most important things you can do is to be clear about your own place, and your own role, within the systems that you are part of. A major factor in the changes that bosses are trying to effect with the deployment of AI is shifting the role of workers within the systems in their organization to make them more replaceable.</p>
<p>If you’re a coder, and you think your job is to make really good code in a particular programming language, you might double down on getting better at the details of that language. But that’s almost certainly misunderstanding the system that your company thinks you’re part of, where the code is just a means to the end of creating a final product. In that system-centric view, the programming language, and indeed all of the code itself, doesn’t really matter; the person who is productive at causing all of that code to be created reliably and efficiently is the person who is going to be valued, or at least who is most likely to be kept around. That may not be satisfying or reassuring if you truly love coding, but at least this perspective can help you make informed decisions about whether or not that organization is going to make choices that respect the things you value.</p>
<p>This same way of understanding systems can apply if you’re a designer or a product manager or a HR administrator or anything else. As I’ve covered before, <a href= "https://anildash.com/2024/05/28/systems-the-purpose-of-a-system/">the purpose of a system is what it does</a>, and that truth can provide some hard lessons if we find it’s in tension with the things we <em>want</em> to be doing for an organization. The system may not value the things we do, or it may not value them enough; the way they phrase this to avoid having to say it directly is by describing something as “inefficient”. Then, the question you have to ask yourself is, can you care about this kind of work or this kind of program at one level higher up in the system? Can it still be meaningful to you if it’s slightly more abstract? Because that may be the requirement for navigating the expectations that technology organizations will be foisting on everyone through the language of talking about “adopting AI”.</p>
<h3>Understanding power</h3>
<p>Just as important as understanding systems is understanding <em>power</em>. In the workplace, power is something real. It means being able to control how money is spent. It means being able to make decisions. It means being able to hire people, or fire them. Power is being able to say no.</p>
<p>You probably don’t have enough power; that’s why you have worries. But you almost certainly have more power than you think, it’s just not as obvious how to wield it. The most essential thing to understand is that you will need to collaborate with your peers to exercise collective power for many of the most significant things you may wish to achieve.</p>
<p>But even at an individual level, a key way of understanding power in your workplace is to consider the systems that you are part of, and then to reckon with which ones you can meaningfully change from your current position. Very often, people will, in a moment of frustration, say “this place couldn’t run without me!” And companies will almost always go out of their way to prove someone wrong if they hear that message.</p>
<p>On the other hand, if you identify a system for operating the organization that no one else has envisioned, you’ve already <em>demonstrated</em> that this part of the organization couldn’t run without you, and you don’t need to say it or prove it. There is power in the mere action of creating that system. But a lot depends on where you have both the positional authority and the social permission to actually accomplish that kind of thing.</p>
<p>So, if you’re dissatisfied with where you are, but have not decided to leave your current organization, then your first orders of business in this new year should be to consolidate power through building alliances with peers, and by understanding which fundamental systems of your organization you can define or influence, and thus be in control of. Once you’ve got power, you’ve got options.</p>
<h3>Most tech isn’t “tech”</h3>
<p>So far, we’re talking about very abstract stuff. What do we do if your job sucks right now, or if you don’t have a job today and you really need one? After vague things like systems and power, then what?</p>
<p>Well, an important thing to understand, if you care about innovation and technology, is that the vast majority of technology doesn’t happen in the startup world, or even in the “tech industry”. Startups are only a tiny fraction of the entire realm of companies that create or use technology, and the giant tech companies are only a small percentage of all jobs or hiring within the tech realm.</p>
<p>So much opportunity, inspiration, creativity, and possibility lies in applying the skills and experience that you may have from technological disciplines in other realms and industries that are often far less advanced in their deployment of technologies. In a lot of cases, these other businesses get taken advantage of for their lack of experience — and in the non-profit world, the lack of tech expertise or fluency is often exploited by both the technology vendors and bad actors who swoop in to capitalize on their vulnerability.</p>
<p>Many of the people I talk to who bring their technology experience to other fields also tell me that the culture in more traditional industries is often less toxic or broken than things in Silicon Valley (or Silicon Valley-based) companies are these days, since older or more established companies have had time to work out the more extreme aspects of their culture. It’s an extraordinary moment in history when people who work on Wall Street tell me that even <em>their</em> HR departments wouldn’t put up with the kind of bad behavior that we’re seeing within the ranks of tech company execs.</p>
<h3>Plan for the long term</h3>
<p>This too shall pass. One of the great gifts of working in technology is that it’s given so many of us the habit of constantly learning, of always being curious and paying attention to the new things worth discovering. That healthy and open-minded spirit is an important part of how to navigate a moment when lots of people are being laid off, or lots of energy and attention are being focused on products and initiatives that don’t have a lot of substance behind them.
Eventually, people will want to return to what’s real. The companies that focus on delivering products with meaning, and taking care of employees over time, will be the ones that are able to persist past the current moment. So building habits that enable resiliency at both a personal and professional level is going to be key.</p>
<p>As I’ve been fond of saying for a long time: don’t let your job get in the way of your career.</p>
<p>Build habits and routines that serve your own professional goals. As much as you can, participate in the things that get your name out into your professional community, whether that’s in-person events in your town, or writing on a regular basis about your area of expertise, or mentoring with those who are new to your field. You’ll never regret building relationships with people, or being generous with your knowledge in ways that remind others that you’re great at what you do.</p>
<p>If your time and budget permit, attend events in person or online where you can learn from others or respond to the ideas that others are sharing. The more people can see and remember that you’re engaged with the conversations about your discipline, the greater the likelihood that they’ll reach out when the next opportunity arises.</p>
<p>Similarly, take every chance you can to be generous to others when you see a door open that might be valuable for them. I can promise you, people will <em>never</em> forget that you thought of them in their time of need, even if they don’t end up getting that role or nabbing that interview.</p>
<h2>It’s an evolution, not a resolution</h2>
<p>New years are often a time when people make a promise to themselves about how they’re going to change everything. If I can just get this new notebook to write in, I’m suddenly going to become a person who keeps a journal, and that will make me a person who’s on top of everything all the time.</p>
<p>But hopefully you can see, many of the challenges that so many people are facing are systemic, and aren’t the result of any personal failings or shortcomings. So there isn’t some heroic individual change that you can make when you flip over to a new calendar month that will suddenly fix all the things.</p>
<p>What you can control, though, are small iterative things that make you feel better on a human scale, in little ways, when you can. You can help yourself maintain perspective, and you can do the same for those around you who share your values, and who care about the same personal or professional goals that you do.</p>
<p>A lot of us still care about things like the potential for technology to help people, or still believe in the idealistic and positive goals that got us into our careers in the first place. We weren’t wrong, or naive, or foolish to aspire to those goals simply because some bad actors sought to undermine them. And it’s okay to feel frustrated or scared in a time when it seems to many like those goals could be further away than they’ve been in a long time.</p>
<p>I do hope, though, that people can see that, by sticking together, and focusing on the things that are within our reach, things can begin to change. All it takes is remembering that the power in tech truly rests with all the people who actually <em>make</em> things, not with the loudmouths at the top who try to tear things down.</p>

    ]]>
      </content>
    </entry>
  
    
    <entry>
      <title>500,000 tech workers have been laid off since ChatGPT was released</title>
      <link href="https://anildash.com/2026/01/06/500k-tech-workers-laid-off/"/>
      <updated>2026-01-06T00:00:00Z</updated>
      <id>https://anildash.com/2026/01/06/500k-tech-workers-laid-off/</id>
      <content type="html">
        <![CDATA[
      <p>One of the key points I repeated when <a href="https://www.anildash.com/2026/01/05/a-tech-career-in-2026/">talking about the state of the tech industry yesterday</a> was the salient fact that <em>half a million tech workers have been laid off since ChatGPT was released in late 2022</em>. Now, to be clear, those workers haven’t been laid off because their jobs are now being done by AI, and they’ve been replaced by bots. Instead, they’ve been laid off by execs who now have AI to use as an excuse for going after workers they’ve wanted to cut all along.</p>
<p>This is important to understand for a few reasons. First, it’s key just for having empathy for both the mindset and the working conditions of people in the tech industry. For so many outside of tech, their impression of what “tech” means is whatever is the most recent transgression they’ve heard about from the most obnoxious billionaire who’s made the news lately. But in many cases, it’s the rank and file workers at that person’s company who were the first victims of that billionaire’s ego.</p>
<p>Second, it’s important to understand the big tech companies as almost the testing grounds for the techniques and strategies that these guys want to roll out on the rest of the economy, and on the rest of the world. Before they started going on podcasts pretending to be extremely masculine while whining about their feelings, or overtly bribing politicians to give them government contracts, they beta-tested these manipulative strategies within their companies by cracking down on dissent and letting their most self-indulgent and egomaniacal tendencies run wild. Then, when people (reasonably!) began to object, they used that as an excuse to purge any dissenters for being uncooperative or “difficult”.</p>
<h2>It starts with tech, but doesn’t end there</h2>
<p>These are tactics they’ll be bringing to other industries and sectors of the economy, if they haven’t already. Sometimes they’ll be providing AI technologies and tools as an enabler or justification for the cultural and political agenda that they’re enacting, but often times, they don’t even need to. In many cases, they can simply make clear that they want to enforce psychological and social conformity within their organizations, and that any disagreement will not be tolerated, and the implicit threat of being replaced by automation (or by other workers who are willing to fall in line) is enough to get people to comply.</p>
<p>This is the subtext, and sometimes the explicit text, of the deployment of “AI” in a lot of organizations. That’s separate from what actual AI software or technology can do. And it explains a lot of why the <a href="https://www.anildash.com/2025/10/17/the-majority-ai-view/">majority AI view</a> within the tech industry is nothing like the hype cycle that’s being pushed by the loudest voices of the big-name CEOs.</p>
<p>Because people who work in tech still believe in the power of tech to do good things, many of us won’t just dismiss outright the possibility that any technology — even AI tools like LLMs — could yield some benefits. But the optimistic takes are tempered by the first-hand knowledge of how the tools are being used as an excuse to sideline or victimize good people.</p>
<p>This wave of layoffs and reductions has been described as “pursuing efficiencies” or “right-sizing”. But so many of us in tech can remember a few years back, when working in tech as an upwardly-mobile worker with a successful career felt like the best job in the world. When many people could buy nice presents for their kids at Christmas or they weren’t as worried about your car payments. When huge parts of society were promising young people that there was a great future ahead if they would just learn to code. When the promise of a tech career’s potential was used as the foundation for building infrastructure in our schools and cities to train a whole new generation of coders.</p>
<p>But the funders and tycoons in charge of the big tech companies <em>knew</em> that they did not want to keep paying enormous salaries to the people they were hiring. They certainly knew they didn’t want to keep paying huge hiring bonuses to young people just out of college, or to pay large staffs of recruiters to go find underrepresented candidates. Those niceties that everybody loved, like great healthcare and decent benefits, were identified by the people running the big tech companies as “market inefficiencies” which indicated some wealth was going to you that should have been going to <em>them</em>. So yes, part of the reason for the huge investment in AI coding tools was to make it easier to write code. But another huge reason that AI got so good at writing code was so that nobody would ever have to pay coders so well again.</p>
<p>You’re not wrong if you feel angry, resentful and overwhelmed by all of this; indeed, it would be absurd if you <em>didn’t</em> feel this way, since the wealthiest and most powerful people in the history of the world have been spending a few years trying to make you feel exactly this way. Constant rotating layoffs and a nonstop fear of further cuts, with a perpetual sense of precarity, are a deliberate strategy so that everyone will accept lower salaries and reduced benefits, and be too afraid to push for the exact same salaries that the company could afford to pay the year before.</p>
<h2>Why are we stirring the pot?</h2>
<p>Okay, so are we just trying to get each other all depressed? No. It’s just vitally important that we name a problem and identify it if we’re going to solve it.

Most people outside of the technology industry think that “tech” is a monolith, that the people who work in tech are the same as the people who <em>own</em> the technology companies. They don’t know that tech workers are in the same boat that they are, being buffeted by the economy, and being subject to the whims of their bosses, or being displaced by AI. They don’t know that the DEI backlash has gutted HR teams at tech companies, too, for example. So it’s key for everyone to understand that they’re starting from the same place.</p>
<p>Next, it’s key to tease apart things that are separate concerns. For example: AI is often an <em>excuse</em> for layoffs, not the cause of them. ChatGPT didn’t replace the tasks that recruiters were doing in attracting underrepresented candidates at big tech companies — the bosses just don’t care about trying to hire underrepresented candidates anymore! The tech story is being used to mask the political and social goal. And it’s important to understand that, because otherwise people waste their time fighting battles that might not matter, like the deployment of a technology system, and losing the ones that do, like the actual decisions that an organization is making about its future.</p>
<h2>Are they efficient, though?</h2>
<p>But what if, some people will ask, these companies just had <em>too many people</em>? What if they’d over-hired? The folks who want to feel really savvy will say, “I heard that they had all those employees because interest rates were low. It was a Zero Interest Rate Phenomenon.” This is, not to put too fine a point on it, bullshit. It’s not in any company’s best interests to cut their staffing down to the bone.</p>
<p>You actually <em>need</em> to have some reserve capacity for labor in order to reach maximum output for a large organization. This is the difference between a large-scale organization and a small one. People sitting around doing nothing is the epitome of waste or inefficiency in a small team, but in a large organization, it’s a lot more costly if you are about to start a new process or project and you don’t have labor capacity or expertise to deploy.</p>
<p>A good analogy is the oft-cited need these days for people to be bored more often. There’s a frequent lament that, because people are so distracted by things like social media and constant interruptions, they never have time to get bored and let their mind wander, and think new thoughts or discover their own creativity. Put another way, they never get the chance to tap into their own cognitive surplus.</p>
<p>The only advantage a large organization can have over a small one, other than sheer efficiencies of scale, is if it has a cognitive surplus that it can tap into. By destroying that cognitive surplus, and leaving those who remain behind in a state of constant emotional turmoil and duress, these organizations are permanently damaging both their competitive advantages and their potential future innovations.</p>
<h2>AI Spring</h2>
<p>When the dust clears, and people realize that extreme greed is never the path to maximum long-term reward, there is going to be a “peace dividend” of sorts from all the good talent that’s now on the market. Some of this will be smart, thoughtful people flowing to other industries or companies, bringing their experience and insights with them.</p>
<p>But I think a lot of this will be people starting their own new companies and organizations, informed by the broken economic models, and broken <em>human</em> models, of the companies they’ve left. We saw this a generation ago after the bust of the dot-com boom, when it was not only revealed that the economics of a lot of the companies didn’t work, but that so many of the people who had created the companies of that era didn’t even care about the markets or the industries that they’d entered. When the get-rich-quick folks left the scene, those of us who remained, who truly loved the web as a creative and expressive medium, found a ton of opportunity in being the little mammals amidst the sad dinosaurs trying to find funding for meteor dot com.</p>
<h2>What comes next</h2>
<p>I don’t think this all gets better very quickly. If you put aside the puffery of the AI companies scratching each others’ backs, it’s clear the economy is in a recession, even if this administration’s goons have shut down reporting on jobs and inflation in a vain attempt to hide that reality. But I do think there may be more resilience because of the sheer talent and entrepreneurial skill of the people who are now on the market as individuals.</p>

    ]]>
      </content>
    </entry>
  
    
    <entry>
      <title>How Markdown took over the world</title>
      <link href="https://anildash.com/2026/01/09/how-markdown-took-over-the-world/"/>
      <updated>2026-01-09T00:00:00Z</updated>
      <id>https://anildash.com/2026/01/09/how-markdown-took-over-the-world/</id>
      <content type="html">
        <![CDATA[
      <p>Nearly every bit of the high-tech world, from the most cutting-edge AI systems at the biggest companies, to the casual scraps of code cobbled together by college students, is annotated and described by the same, simple plain text format. Whether you’re trying to give complex instructions to ChatGPT, or you want to be able to exchange a grocery list in Apple Notes or copy someone’s homework in Google Docs, that same format will do the trick. The wild part is, the format wasn’t created by a conglomerate of tech tycoons, it was created by a curmudgeonly guy with a kind heart who right this minute is probably rewatching a Kubrick film while cheering for an absolutely indefensible sports team.</p>
<p>But it’s worth understanding how these simple little text files were born, not just because I get to brag about how generous and clever my friends are, but also because it reminds us of how the Internet <em>really</em> works: smart people think of good things that are crazy enough that they <em>just might work</em>, and then they give them away, over and over, until they slowly take over the world and make things better for everyone.</p>
<h2>Making Their Mark</h2>
<p>Though it’s now a building block of the contemporary Internet, like so many great things, <a href="https://daringfireball.net/projects/markdown/">Markdown</a> just started out trying to solve a personal problem. In 2002, John Gruber made the unconventional decision to bet his online career on two completely irrational foundations: Apple, and blogs.</p>
<p>It’s hard to remember now, but in 2002, Apple was just a few years past having been on death’s door. As difficult as it may be to picture in today’s world where Apple keynotes are treated like major events, back then, almost nobody was covering Apple regularly, let alone writing <em>exclusively</em> about the company. There was barely even an “tech news” scene online at all, and virtually no one was blogging. So John’s decision to go all-in on Apple for his pioneering blog <a href="https://daringfireball.net">Daring Fireball</a> was, well, a daring one. At the time, Apple had only <em>just</em> launched its first iPod that worked with Windows computers, and the iPhone was still a full five years in the future. But that single-minded focus, not just on Apple, but on obsessive detail in everything he covered, eventually helped inspire much of the technology media landscape that we see today. John’s timing was also perfect — from the doldrums of that era, Apple’s stock price would rise by about 120,000% in the years after Daring Fireball started, and its cultural relevance probably increased by even more than that.</p>
<p>By 2004, it wasn’t just Apple that had begun to take off: blogs and social media themselves had moved from obscurity to the very center of culture, and <a href="https://cybercultural.com/p/internet-2004/">a new era of web technology had begun</a>. At the beginning of that year, few people in the world even knew what a “blog” was, but by the end of 2004, blogs had become not just ubiquitous, but downright <em>cool</em>. As unlikely as it seems now, that year’s largely uninspiring slate of U.S. presidential candidates like Wesley Clark, Gary Hart and, yes, <a href="https://en.wikipedia.org/wiki/Howard_Dean_2004_presidential_campaign">Howard Dean</a> helped propel blogs into mainstream awareness during the Democratic primaries, alongside online pundits who had begun weighing in on politics and the issues and cultural moments at a pace that newspapers and TV couldn’t keep up with. A lot has been written about the transformation of media during those years, but less has been written about how the media and tech of the time transformed <em>each other</em>.</p>
<p><img src="/images/gary-hart-blog.JPG" alt="A photo from 2004 of a TV screen showing CNN, with a ticker saying &quot;Gary Hart Cyber Campaign Starts blog for possible 2004 presidential bid&quot;"></p>
<p>That era of early blogging was interesting in that nearly everyone who was writing the first popular sites was also busy helping <em>create</em> the tools for publishing them. Just like Lucille Ball and Desi Arnaz had to pioneer combining studio-style flat lighting with 35mm filming in order to define the look of the modern sitcom, or Jimi Hendrix had to work with Roger Mayer to invent the signature guitar distortion pedals that defined the sound of rock and roll, the pioneers who defined the technical format and structures of blogging were often building the very tools of creation as they went along.</p>
<p>I got a front row seat to these acts of creation. At the time I was working on Movable Type, which was the most popular tool for publishing “serious” blogs, and helped popularize the medium. Two of my good friends had built the tool and quickly made it into the default choice for anybody who wanted to reach a big audience; it was kind of a combination of everything people do these days on WordPress and all the various email newsletter platforms and all of the “serious” podcasts (since podcasts wouldn’t be invented for another few months). But back in those early days, we’d watch people use our tools to set up Gawker or Huffington Post one day, and Daring Fireball or Waxy.org the next, and each of them would be the first of its kind, both in terms of its design and its voice. To this day, when I see something online that I love by Julianne Escobedo Shepherd or Ta-Nehisi Coates or Nilay Patel or Annalee Newitz or any one of dozens of other brilliant writers or creators, my first thought is often, “hey! They used to type in that app that I used to make!” Because sometimes those writers would inspire us to make a new feature in the publishing tools, and sometimes they would have hacked up a new feature all by themselves in between typing up their new blog posts.</p>
<p>A really clear, and very simple, early example of how we learned that lesson was when we changed the size of the box that people used to type in just to create the posts on their sites. We made the box a little bit taller, mostly for aesthetic reasons. Within a few weeks, we’d found that posts on sites like Gawker had gotten longer, <em>mostly because the box was bigger</em>. This seems obvious now, years after we saw tweets get longer when Twitter expanded from 140 characters to 280 characters, but at the time this was a terrifying glimpse at how much power a couple of young product managers in a conference room in California would have over the media consumption of the entire world every time they made a seemingly-insignificant decision.</p>
<p>The <em>other</em> dirty little secret was, typing in the box in that old blogging app could be… pretty wonky sometimes. People who wanted to do normal things like include an image or link in their blog post, or even just make some text bold, often had to learn somewhat-obscure HTML formatting, memorizing the actual language that’s used to make web pages. Not everybody knew all the details of how to make pages that way, and if they made even one small mistake, sometimes they could break the whole design of their site. It made things feel very fraught every time a writer went to publish something new online, and got in the way of the increasingly-fast pace of sharing ideas now that social media was taking over the public conversation.</p>
<p>Enter John and his magical text files.</p>
<p><img src="/images/markdown-text-hero-slice.jpg" alt=""></p>
<h2>Marking up and marking down</h2>
<p>The purpose of Markdown is really simple: It lets you use the regular characters on your keyboard which you already use while typing out things like emails, to make fancy formatting of text for the web. That HTML format that’s used to make web pages stands for HyperText Markup Language. The word “markup” there means you’re “marking up” your text with all kinds of special characters.
Only, the special characters can be kind of arcane. Want to put in a link to everybody’s favorite website? Well, you’re going to have to type in <code>&lt;a href=&quot;https://anildash.com/&quot;&gt;Anil Dash’s blog&lt;/a&gt;</code> I could explain why, and what it all means, but honestly, you get the point — it’s a lot! Too much. What if you could just write out the text and then the link, sort of like you might within an email? Like: <code>[Anil Dash’s blog](https://anildash.com)</code>! And then the right thing would happen. Seems great, right?</p>
<p>The same thing works for things like putting a header on a page. For example, as I’m writing this right now, if I want to put a big headline on this page, I can just type <code>#How Markdown Took Over the World</code> and the right thing will happen.</p>
<p>If mark_up_ is complicated, then the opposite of that complexity must be… markd_own_. This kind of solution, where it’s so smart it seems obvious in hindsight, is key to Markdown’s success. John worked to make a format that was so simple that anybody could pick it up in a few minutes, and powerful enough that it could help people express pretty much anything that they wanted to include while writing on the internet. At a technical level, it was also easy enough to implement that John could write the code himself to make it work with Movable Type, his publishing tool of choice. (Within days, people had implemented the same feature for most of the other blogging tools of the era; these days, virtually every app that you can type text into ships with Markdown support as a feature on day one.)</p>
<p>Prior to launch, John had enlisted our mutual friend, the late, dearly missed <a href="http://www.aaronsw.com">Aaron Swartz</a>, as a beta tester. In addition to being extremely fluent in every detail of the blogging technologies of the time, Aaron was, most notably, seventeen years old. And though Aaron’s activism and untimely passing have resulted in him having been turned into something of a mythological figure, one of the greatest things about Aaron was that he could be a total pain in the ass, which made him <em>terrific</em> at reporting bugs in your software. (One of the last email conversations I ever had with Aaron was him pointing out some obscure bugs in an open source app I was working on at the time.) No surprise, Aaron instantly understood both the potential and the power of Markdown, and was a top-tier beta tester for the technology as it was created. His astute feedback helped finely hone the final product so it was ready for the world, and when Markdown <a href="https://daringfireball.net/2004/03/introducing_markdown">quietly debuted in March of 2004</a>, it was clear that text files around the web were about to get a permanent upgrade.</p>
<p>The most surprising part of what happened next wasn’t that everybody immediately started using it to write their blogs; that was, after all, what the tool was designed to do. It’s that everybody started using Markdown to do <em>everything else</em>, too.</p>
<h2>Hitting the Mark</h2>
<p>It’s almost impossible to overstate the ubiquity of Markdown within the modern computer industry in the decades since its launch.</p>
<p>After being nagged about it by users for more than a decade, Google finally <a href="https://www.theverge.com/2022/3/29/23002138/google-docs-markdown-support-formatting-update">added support for Markdown to Google Docs</a>, though it took them years of fiddly improvements to make it truly usable. Just last year, Microsoft added support for Markdown to its <a href="https://www.theverge.com/news/677474/microsoft-windows-notepad-bold-italic-text-formatting-markdown-support">venerable Notepad app</a>, perhaps in attempt to assuage the tempers of users who were still in disbelief that Notepad had been bloated with AI features. Nearly every powerful group messaging app, from Slack to WhatsApp to Discord, has support for Markdown in messages. And even the company that indirectly inspired all of this in the first place finally got on board: the most recent version of Apple Notes <a href="https://apple.gadgethacks.com/how-to/ios-26-notes-app-finally-gets-markdown-support-this-fall/">finally added support</a> for Markdown. (It’s an especially striking launch by Apple due to its timing, shortly after John had used his platform as the most influential Apple writer in the world to <a href="https://daringfireball.net/2025/03/something_is_rotten_in_the_state_of_cupertino">blog about the utter failure</a> of the “Apple Intelligence” AI launch.)</p>
<p>But it’s not just the apps that you use on your phone or your laptop. For developers, Markdown has long been the lingua franca of the tools we string together to accomplish our work. On GitHub, the platform that nearly every developer in the world uses to share their code, nearly <em>every single repository of code</em> on the site has at least one Markdown file that’s used to describe its contents. Many have <em>dozens</em> of files describing all the different aspects of their project. And some of the repositories on GitHub consist of nothing <em>but</em> massive collections of Markdown files. The small tools and automations we run to perform routine tasks, the one-off reports that we generate to make sure something worked correctly, the confirmations that we have a system email out when something goes wrong, the temporary files we use when trying to recover some old data — all of these default to being Markdown files.</p>
<p>As a result, there are now <em>billions</em> of Markdown files lying around on hard drives around the world. Billions more are stashed in the cloud. There are some on the phone in your pocket. Programmers leave them lying around wherever their code might someday be running. Your kid’s Nintendo Switch has Markdown files on it. If you’re listening to music, there’s probably a Markdown file on the memory chip of the tiny system that controls the headphones stuck in your ears. <em>The Markdown is inside you right now!</em></p>
<h2>Down For Whatever</h2>
<p>So far, these were all things we could have foreseen when John first unleashed his little text tool on the world. I would have been surprised about how <em>many</em> people were using it, but not really the <em>ways</em> in which they were using it. If you’d have said “Twenty years in the future, all the different note-taking apps people use save their files using Markdown!”, I would have said, “Okay, that makes sense!”</p>
<p>What I <em>wouldn’t</em> have asked, though, was “Is John getting paid?” As hard as it may be to believe, back in 2004, the <em>default</em> was that people made new standards for open technologies like Markdown, and just shared them freely for the good of the internet, and the world, and then went on about their lives. If it happened to have unleashed billions of dollars of value for others, then so much the better. If they got some credit along the way, that was great, too. But mostly you just did it to solve a problem for yourself and for other like-minded people. And also, maybe, to help make sure that some jerk didn’t otherwise create some horrible proprietary alternative that would lock everybody into their terrible inferior version forever instead. (We didn’t have the word “enshittification” yet, but we did have Cory Doctorow and we did have plain text files, so we kind of knew where things were headed.)</p>
<p>To give a sense of the vibe of that era, the term “podcasting” had been coined just a month before Markdown was released, and went into wider use that fall, and was similarly <a href="https://www.anildash.com/2024/02/05/wherever-you-get-podcasts/">a radically open system</a> that wasn’t owned by any big company and that empowered people to do whatever they wanted to do to express themselves. (And podcasting was another technology that Aaron Swartz helped improve by being a brilliant pain in the ass. But I’ll save that story for another book-length essay.)</p>
<p>That attitude of being not-quite-_anti_commercial, but perhaps just not even really <em>concerned</em> with whether something was commercial or not seems downright quaint in an era when the tech tycoons are not just the wealthiest people in the world, but also some of the weirdest and most obnoxious as well. But the truth is, most people <em>today</em> who make technology are actually still exceedingly normal, and quite generous. It’s just that they’ve been overshadowed by their bosses who are out of their minds and building rocket ships and siring hundreds of children and embracing overt white supremacy instead of making fun tools for helping you type text, like regular people do.</p>
<p><img src="/images/markdown-text-hero-slice2.jpg" alt=""></p>
<h2>The Markdown Model</h2>
<p>The part about not doing this stuff solely for money matters, because even the <em>most</em> advanced LLM systems today, what the big AI companies call their “frontier” models, require complex orchestration that’s carefully scripted by people who’ve tuned their prompts for these systems through countless rounds of trial and error. They’ve iterated and tested and watched for the results as these systems hallucinated or failed or ran amok, chewing up countless resources  along the way. And sometimes, they generated genuinely astonishing outputs, things that are truly amazing to consider that modern technology can achieve. The rate of progress and evolution, even factoring in the mind-boggling amounts of investment that are going into these systems, is rivaled only by the initial development of the personal computer or the Internet, or the early space race.</p>
<p>And all of it — <em>all of it</em> — is controlled through Markdown files. When you see the brilliant work shown off from somebody who’s bragging about what they made ChatGPT generate for them, or someone is understandably proud about the code that they got Claude to create, all of the most advanced work has been prompted in Markdown. Though where the logic of Markdown was originally a very simple version of &quot;use human language to tell the machine what to do&quot;, the implications have gotten far more dire when they use a format designed to help expresss &quot;make this <code>**bold**</code>&quot; to tell the computer itself &quot;<code>make this imaginary girlfriend more compliant</code>&quot;.</p>
<p>But we already know that the Big AI companies are run by people who don't reckon with the implications of their work. They could never understand that every single project that's even moderately ambitious on these new AI platforms is being written up in files formatted according to this system created by one guy who has never asked for a dime for this work. An entire generation of AI coders has been born since Markdown was created who probably can’t even imagine that this technology even <em>has</em> an &quot;inventor&quot;. It’s just always been here, like the Moon, or Rihanna.</p>
<p>But it’s important for <em>everyone</em> to know that the Internet, and the tech industry, don’t run without the generosity and genius of regular people. It is not just billion-dollar checks and Silicon Valley boardrooms that enable creativity over years, decades, or generations — it’s often a guy with a day job who just gives a damn about doing something right, sweating the details and assuming that if he cares enough about what he makes then others will too. The <em>majority</em> of the technical infrastructure of the Internet was created in this way. For free, often by people in academia, or as part of their regular work, with no promise of some big payday or getting a ton of credit.</p>
<p>The people who make the <em>real</em> Internet and the real innovations also don’t look for ways to hurt the world around them, or the people around them. Sometimes, as in the case of Aaron, the world hurts them more than anyone should ever have to bear. I know not everybody cares that much about plain text files on the Internet; I will readily admit I am a huge nerd about this stuff in a way that maybe most normal people are not. But I do think everybody cares about <em>some</em> part of the wonderful stuff on the Internet in this way, and I want to fight to make sure that everybody can understand that it’s not just five terrible tycoons who built this shit. Real people did. Good people. I saw them do it.</p>
<p>The trillion-dollar AI industry's system for controlling their most advanced platforms is a plain text format one guy made up for his blog and then bounced off of a 17-year-old kid before sharing it with the world for free. You're welcome, Time Magazine's people of the year, <em>The Architects of AI</em>. Their achievement is every bit as impressive as yours.</p>
<p><img src="/images/markdown-text-hero-slice3.jpg" alt=""></p>
<h1 id="top-ten">The Ten Technical Reasons Markdown Won</h1>
<p>Okay, with some of the narrative covered, what can we <em>learn</em> from Markdown’s success? How did this thing really take off? What could we do if we wanted to replicate something like this in the modern era? Let’s consider a few key points:</p>
<h3>1. Had a great brand.</h3>
<p>Okay, let’s be real: “Markdown” as a name is clever as hell. Get it it’s not markup, it’s mark <em>down</em>. You just can’t argue with that kind of logic. People who knew what the “M” in “HTML” stood for could understand the reference, and to everyone else, it was just a clearly-understandable name for a useful utility.</p>
<h3>2. Solved a real problem.</h3>
<p>This one is not obvious, but it’s really important that a new technology have a <em>real</em> problem that it’s trying to solve, instead of just being an abstract attempt to do something vague, like “make text files better”. Millions of people were encountering the idea that it was too difficult or inconvenient to write out full HTML by hand, and even if one had the necessary skills, it was nice to be able to do so in a format that was legible as plain text as well.</p>
<h3>3. Built on behaviors that already existed.</h3>
<p>This is one of the most quietly genius parts of Markdown: The format is based on the ways people had been adding emphasis and formatting to their text for years or even decades. Some of the formatting choices dated back to the early days of email, so they’d been ingrained in the culture of the internet for a full generation before Markdown existed. It was so familiar, people could be writing Markdown <em>without even knowing it</em>.</p>
<h3>4. Mirrored RSS in its origin.</h3>
<p>Around the same time that Markdown was taking off, RSS was maturing into its ubiquitous form as well. The format had existed for some years already, enabling various kinds of content syndication, but at this time, it was adding support for the technologies that would come to be known as podcasting as well. And just like RSS, Markdown was spearheaded by a smart technologist who was also more than a little stubborn about defining a format that would go on to change the way we share content on the internet. In RSS’ case, it was pioneered by Dave Winer, and with Markdown it was John Gruber, and both were tireless in extolling the virtues of the plain text formats they’d helped pioneer. They could both leverage blogs to get the word out, and to get feedback on how to build on their wins.</p>
<h3>5. There was a community ready to help.</h3>
<p>One great thing about a format like Markdown is that its success is never just the result of one person. Vitally, Markdown was part of a community that could build on it right from the start. Right from the beginning, Markdown was inspired by earlier works like Textile, a formatting system for plain text created by <a href="https://web.archive.org/web/20021226035527/http://textism.com/tools/textile/">Dean Allen</a>. Many of us appreciated and were inspired by Dean, who was a pioneer of blogging tools in the early days of social media, but if there’s a bigger fan of Dean Allen on the internet than John Gruber, I’ve never met them. Similarly, <a href="http://www.rememberaaronsw.com/memories/">Aaron Swartz</a>, the brilliant young technologist who’s known best known as an activist for digital rights and access, was at that time just a super brilliant teenager that a lot of us loved hacking with. He was the most valuable beta tester of Markdown prior to its release, helping to shape it into a durable and flexible format that’s stood the test of time.</p>
<h3>6. Had the right flavor for every different context.</h3>
<p>Because Markdown’s format was frozen in place (and had some super-technical details that people could debate about) and people wanted to add features over time, various communities that were implementing Markdown could add their own “flavors” of it as they needed. Popular ones came to be called Commonmark and Github-Flavored, led by various companies or teams that had divergent needs for the tool. While tech geeks tend to obsess over needing everything to be “correct”, in reality it often just <em>doesn’t matter</em> that much, and in the real world, the entire Internet is made up of content that barely follows the technical rules that it’s supposed to.</p>
<h3>7. Released at a time of change in behaviors and habits.</h3>
<p>This is a subtle point, but an important one: Markdown came along at the right time in the evolution of its medium. You can get people to change their behaviors when they’re using a new tool, or adopting a new technology. In this case, blogging (and all of social media!) were new, so saying “here’s a new way of typing a list of bullet points” wasn’t much an additional learning curve to add to the mix. If you can take advantage of catching people while they’re already in a learning mood, you can really tap into the moment when they’re most open-minded to new things.</p>
<h3>8. Came right on the cusp of the “build tool era”.</h3>
<p>This one’s a bit more technical, but also important to understand. In the first era of building for the web, people often built the web’s languages of HTML, Javascript and CSS by hand, by themselves, or stitched these formats together from subsets or templates. But in many cases, these were fairly simple compositions, made up of smaller pieces that were written in the same languages. As things matured, the roles for web developers specialized (there started to be backend developers vs. front-end, or people who focused on performance vs. those who focused on visual design), and as a result the tooling for developers matured. On the other side of this transition, developers began to use many different programming languages, frameworks and tools, and the standard step before trying to deploy a website was to have an automated build process that transformed the “raw materials” of the site into the finished product. Since Markdown is a raw material that has to be transformed into HTML, it perfectly fit this new workflow as it became the de facto standard method of creation and collaboration.</p>
<h3>9. Worked with “View source”</h3>
<p>Most of the technologies that work best on the web enable creators to “view source” just like HTML originally did when the first web browsers were created. In this philosophy, one can look at the source code that makes up a web page, and understand how it was constructed so that you can make your own. With Markdown, it only takes one glimpse of a source Markdown file for anyone to understand how they might make a similar file of their own, or to extrapolate how they might apply analogous formatting to their own documents. There’s no teaching required when people can just see it for themselves.</p>
<h3>10. Not encumbered in IP</h3>
<p>This one’s obvious if you think about it, but it can’t go unsaid: There are no legal restrictions around Markdown. You wouldn’t <em>think</em> that anybody would be foolish or greedy enough to try to patent something as simple as Markdown, but there are many far worse examples of patent abuse in the tech industry. Fortunately, John Gruber is not an awful person, and nobody else has (yet) been brazen enough to try to usurp the format for their own misadventures in intellectual property law. As a result, nobody’s been afraid, either to use the format, or to support creating or reading the format in their apps.</p>

    ]]>
      </content>
    </entry>
  
    
    <entry>
      <title>How to know if that job will crush your soul</title>
      <link href="https://anildash.com/2026/01/12/will-that-job-crush-your-soul/"/>
      <updated>2026-01-12T00:00:00Z</updated>
      <id>https://anildash.com/2026/01/12/will-that-job-crush-your-soul/</id>
      <content type="html">
        <![CDATA[
      <p>Last week, we talked about one huge question, “<a href="https://www.anildash.com/2026/01/05/a-tech-career-in-2026/">How the hell are you supposed to have a career in tech in 2026?</a>” That’s pretty specific to this current moment, but there are some timeless, more perennial questions I've been sharing with friends for years that I wanted to give to all of you. They're a short list of questions that help you judge whether a job that you’re considering is going to crush your soul or not.</p>
<p>Obviously, not everyone is going to get to work in an environment that has perfect answers to all of these questions; a lot of the time, we’re lucky just to get a place to work at all. But these questions are framed in this way to encourage us all to aspire towards roles that enable us to do our best work, to have the biggest impact, and to live according to our values.</p>
<h2>The Seven Questions</h2>
<ul>
<li>If what you do succeeds, will the world be better?</li>
</ul>
<p>This question originally started for me when I would talk to people about new startups, where people were judging the basic idea of the product or the company itself, but it actually applies to <em>any</em> institution, at <em>any</em> size. If the organization that you’re considering working for, or the team you’re considering joining, is able to achieve their stated goals, is it ultimately going to have a positive effect? Will you be proud of what it means? Will the people you love and care about respect you for making that choice, and will those with the least to gain feel like you’re the kind of person who cares about their impact on the world?</p>
<ul>
<li>Whose money do they have to take to stay in business?</li>
</ul>
<p>Where does the money in the organization <em>really</em> come from? You need to know this for a lot of reasons. First of all, you need to be sure that <em>they</em> know the answer. (You’d be surprised how often that’s not the case!) Even if they do know the answer, it may make you realize that those customers are not the people whose needs or wants you’d like to spend most of your waking hours catering to. This goes beyond the simple basics of the business model — it can be about whether they're profitable or not, and what the corporate ownership structure is like.</p>
<p>It’s also increasingly common for companies to mistake those who are <em>investing</em> in a company with those who are their <em>customers</em>. But there’s a world of difference between those who are paying you, and those who you have to pay back tenfold. Or thousandfold.</p>
<p>The same goes for nonprofits — do you know who has to stay happy and smiling in order for the institution to stay stable and successful? If you know those answers, you'll be far more confident about the motivations and incentives that will drive key decisions within the organization.</p>
<ul>
<li>What do you have to believe to think that they’re going to succeed? In what way does the world have to change or not change?</li>
</ul>
<p>Now we’re getting a little bit deeper into thinking about the systems that surround the organization that you’re evaluating. Every company, every institution, even every small team, is built around a set of invisible assumptions. Many times, they’re completely reasonable assumptions that are unlikely to change in the future. But <em>sometimes</em>, the world you’re working in is about to shift in a big way, or things are built on a foundation that’s speculative or even unrealistic.</p>
<p>Maybe they're assuming there aren't going to be any big new competitors. Perhaps they think they'll always remain the most popular product in their category. Or their assumptions could be about the stability of the rule of law, or a lack of corruption — more fundamental assumptions that they've never seen challenged in their lifetime or in their culture, but that turn out to be far more fragile than they'd imagined.</p>
<p>Thinking through the context that everyone is sharing, and reflecting on whether they’re really planning for any potential disruptions, is an essential part of judging the psychological health of an organization. It’s the equivalent of a person having self-awareness, and it’s just as much of a red flag if it’s missing.</p>
<ul>
<li>What’s the lived experience of the workers there whom you trust? Do you have evidence of leaders in the organization making hard choices to do the right thing?</li>
</ul>
<p>Here is how we can tell the culture and character of an organization. If you’ve got connections into the company, or a backchannel to workers there, finding out as much information as you can about the real story of its working conditions is often one of the best ways of understanding whether it’s a fit for your needs. Now, people can always have a bad day, but overall, workers are usually very good at providing helpful perspectives about their context.</p>
<p>And more broadly, if people can provide examples of those in power within an organization <em>using</em> that power to take care of their workers or customers, or to fight for the company to be more responsible, then you’ve got an extremely positive sign about the health of the place even before you’ve joined. It’s vital that these be stories you are able to find and discover on your own, not the ones amplified by the institution itself for PR purposes.</p>
<ul>
<li>What were you wrong about?</li>
</ul>
<p>And here we have perhaps one of the easiest and most obvious ways to judge the culture of an organization. This is even a question you can ask people while you’re in an interview process, and you can judge their responses to help form your opinion. A company, and <em>leadership culture</em>, that can change its mind when faced with new information and new circumstances is much more likely to adapt to challenges in a healthy way. (If you want to be nice, phrase it as &quot;What is a way in which the company has evolved or changed?&quot;)</p>
<ul>
<li>Does your actual compensation take care of what you need for all of your current goals and needs — from day one?</li>
</ul>
<p>This is where we go from the abstract and psychological goals to the practical and everyday concerns: can you pay your bills? The phrasing and framing here is very intentional: <em>are they really going to pay you enough</em>? I ask this question very specifically because you’d be surprised how often companies actually dance around this question, or how often we trick ourselves into hearing what we <em>want</em> to hear as the answer to this question when we’re in the exciting (or stressful) process of considering a new job, instead of looking at the facts of what’s actually written in black-and-white on an offer letter.</p>
<p>It's also important not to get distracted with potential, even if you're optimistic about the future. Don’t listen to promises about what might happen, or descriptions of what’s possible if you advance in your role. Think about what your real life will be like, after taxes, if you take the job that they’ve described.</p>
<ul>
<li>Is the role you’re being hired into one where you can credibly advance, and where there’s sufficient resources for success?</li>
</ul>
<p>This is where you can apply your optimism in a practical way: can the organization accurately describe how your career will proceed within the company? Does it have a specific and defined trajectory, or does it involve ambiguous processes or changes in teams or departments? Would you have to lobby for the support of leaders from other parts of the organization? Would making progress require acquiring new skills or knowledge? Have they committed to providing you with the investment and resources required to learn those skills?</p>
<p>These questions are essential to understand, because lacking these answers can lead to an ugly later realization that even an initially-exciting position may turn out to be a dead-end job over time.</p>
<h3>Towards better working worlds</h3>
<p>Sometimes it can really feel like the deck is stacked against you when you're trying to find a new job. It can feel even worse to be faced with an opportunity and have a nagging sense that something is <em>not quite right</em>. Much of the time, that feeling comes from the vague worry that we're taking a job that is going to make us miserable.</p>
<p>Even in a tough job market, there are some places that are trying to do their best to treat people decently. In larger organizations, there are often pockets of relative sanity, led by good leaders, who are trying to do the right thing. It can be a massive improvement in quality of life if you can find these places and use them as foundations for the next stage of your career.</p>
<p>The best way to navigate towards these better opportunities is to be systematic when evaluating all of your options, and to hold out for as high standards as possible when you're out there looking. These seven questions give you the tools to do exactly that.</p>

    ]]>
      </content>
    </entry>
  
    
    <entry>
      <title>Wikipedia at 25: What the web can be</title>
      <link href="https://anildash.com/2026/01/15/wikipedia-at-25/"/>
      <updated>2026-01-15T00:00:00Z</updated>
      <id>https://anildash.com/2026/01/15/wikipedia-at-25/</id>
      <content type="html">
        <![CDATA[
      <p>When Wikipedia <a href="https://wikipedia25.org/en/">launched 25 years ago today</a>, I heard about it almost immediately, because the Internet was small back then, and I thought “Well… good luck to those guys.” Because there had been online encyclopedias before Wikipedia, and anybody who really <em>cared</em> about this stuff would, of course, buy Microsoft Encarta on CD-ROM, right? I’d been fascinated by the technology of wikis for a good while at that point, but was still not convinced about whether they could be deployed at such a large scale.</p>
<p>So, once Wikipedia got a little bit of traction, and I met Jimmy Wales the next year, I remember telling him (with all the arrogance that only a dude that age can bring to such an obvious point) “well, the <em>hard part</em> is going to be getting all the people to contribute”. As you may be aware, Jimmy, and a broad worldwide community of volunteers, did pretty well with the hard part.</p>
<p>Wikipedia has, of course, become vital to the world’s information ecosystem. Which is why everyone needs to be aware of the fact that it is currently under <a href="https://www.theverge.com/cs/features/717322/wikipedia-attacks-neutrality-history-jimmy-wales">existential threat</a> from those who see any reliable source of truth as an attack on their power. The same authoritarians in power who are trying to purchase every media outlet and social network where ordinary people might have a chance to share accurate information about their crimes or human rights violations are deeply threatened about a platform that they can’t control and can’t own.</p>
<p>Perhaps the greatest compliment to Wikipedia at 25 years old is the fact that, if the fascists can’t buy it, then they’re going to try to kill it.</p>
<h2>The Building Block</h2>
<p>What I couldn’t foresee in the early days, when so many were desperate to make sure that Wikipedia wasn’t treated as a credible source — there were <em>so many</em> panicked conversations about how to keep kids from citing the site in their school papers — was how the site would become infrastructure for so much of the commercial internet.</p>
<p>The first hint was when Google introduced their “Knowledge Panel”, the little box of info next to their search results that tried to explain what you were looking for, without you even having to click through to a website. For Google, this had a huge economic value, because it kept you on their search results page where all their ad links lived. The vast majority of the Knowledge Panel content for many major topics was basically just Wikipedia content, summarized and wrapped up in a nice little box. Here was the most valuable company of the new era of the Internet, and one of their signature experiences relied on the strength of the Wikipedia community’s work.</p>
<p>This was, of course, complemented by the fact that Wikipedia would also organically show up right near the top of so many search results just based on the strength of the content that the community was cranking out at a remarkable pace. Though it probably made Google bristle a little bit that those damn Wikipedia pages didn’t have any Google ads on them, and didn’t have any of Google’s tracking code on them, so they couldn’t surveil what you do when you were clicking around on the site, making it impossible for them to spy on you and improve the targeting of their advertising to you.</p>
<p>The same pattern played out later for the other major platforms; Apple’s Siri and Amazon’s Alexa both default to using Wikipedia data to answer common questions. During the few years when Facebook pretended to care about misinformation, they would show summaries of Wikipedia information in the news feed to help users fact-check misinformation that was being shared.</p>
<p>Unsurprisingly, a lot of the time when the big companies would try to use Wikipedia as the water to put out the fires that they’d started, they <a href="https://www.wired.com/story/youtube-wikipedia-content-moderation-internet/">didn’t even bother to let the organization know</a> before they started doing so, burdening the non-profit with the cost and complexity of handling their millions of users and billions of requests, without sharing any of their trillions of dollars. (At least until there was public uproar over the practice.) Eventually, Wikimedia Foundation (the organization that runs Wikipedia) made a way for <a href="https://enterprise.wikimedia.com">companies to make deals with them</a> and actually support the community instead of just extracting from the community without compensation.</p>
<h2>The culture war comes for Wikipedia</h2>
<p>Things had reached a bit of equilibrium for a few years, even as the larger media ecosystem started to crumble, because the world could see after a few decades that Wikipedia had become a vital and valuable foundation to the global knowledge ecology. It’s almost impossible to imagine how the modern internet would function without it.</p>
<p>But as the global fascist movement has risen in recent years, one of their first priorities, as in all previous such movements, has been undermining any sources of truth that can challenge their control over information and public sentiment. In the U.S., this has manifested from the top-down with the richest tycoons in the country, including Elon Musk, stoking sentiment against Wikipedia with vague innuendo and baseless attacks against the site. This is also why Musk has funded the creation of alternatives like Grokipedia, designed to undermine the centrality and success of Wikipedia. From the bottom-up, there have been individual bad actors who have attempted to infiltrate the ranks of editors on the site, or worked to deface articles, often working slowly or across broad swaths of content in order to attempt to avoid detection.</p>
<p>All of this has been carefully coordinated; as noted in <a href="https://www.theverge.com/cs/features/717322/wikipedia-attacks-neutrality-history-jimmy-wales">well-documented pieces like the Verge’s excellent coverage</a> of the story, the attack on Wikipedia is a campaign that has been led by voices like Christopher Rufo, who helped devise campaigns like the concerted effort to demonize trans kids as a cultural scapegoat, and the intentional targeting of Ivy League presidents as part of the war on DEI. The undermining of Wikipedia hasn’t yet gotten the same traction, but they also haven’t yet put the same time and resources into the fight.</p>
<p>There’s been such a constant stream of vitriol directed at Wikipedia and its editors and leadership that, when I heard about a <a href="https://gothamist.com/news/gunman-storms-stage-at-wikipedia-conference-in-manhattan-no-injuries-reported">gunman storming the stage</a> at the recent gathering of Wikipedia editors, I had <em>assumed</em> it was someone who had been incited by the baseless attacks from the extremists. (It turned out to have been someone who was disturbed on his own, which he said was tied to the editorial policies of the site.) But I would expect it’s only a matter of time until the attacks on Wikipedia’s staff and volunteers take on a far more serious tone much of the time — and it’s not as if this is an organization that has a massive security budget like the trillion-dollar tech companies.</p>
<p>The temperature keeps rising, and there isn’t yet sufficient awareness amongst good actors to protect the Wikipedia community and to guard its larger place in society.</p>
<h2>Enter the AI era</h2>
<p>Against this constant backdrop of increasing political escalation, there’s also been the astronomical ramp-up in demand for Wikipedia content from AI platforms. The very first source of data for many teams when training a new LLM system is Wikipedia, and the vast majority of the time, they gather that data not by paying to license the content, but by “scraping” it from the site — which uses both more technical resources and precludes the possibility of establishing any consensual paid relationship with the site.</p>
<p>A way to think about it is that, for the AI world, they’re music fans trading Wikipedia like it’s MP3s on Napster, and conveniently ignoring the fact there’s an Apple Music or Spotify offering a legitimate way to get that same data while supporting the artist. Hopefully the <a href="https://www.anildash.com/2025/09/18/the-taylors-version-generation/">“Taylor’s Version” generation</a> can see Wikipedia as being at least as worthy of supporting as a billionaire like Taylor Swift is.</p>
<p>But as people start going to their AI apps first, or chatting with bots instead of doing Google searches, they don’t <em>see</em> those Knowledge Panels anymore, and they don’t click through to Wikipedia anymore. At a surface level, this hurts traffic to the site, but at a deeper level, this hurts the flow of new contributors to the site. Interestingly, though I’ve been linking to <a href="https://www.anildash.com/2006/07/31/quitting-wikipe/">critiques of Wikipedia</a> on my site for at least twenty years, for most of the last few decades, my biggest criticism of Wikipedia has long been the lack of inclusion amongst its base of editorial volunteers. But this is, at least, a shortcoming that both the Wikimedia Foundation and the community itself readily acknowledge and have been working diligently on.</p>
<p>That lack of diversity in editors as a problem will pale in comparison to the challenge presented if people stop coming to the front door entirely because they’re too busy talking to their AI bots. They may not even <em>know</em> what parts of the answers they’re getting from AI are due to the bot having slurped up the content from Wikipedia. Worse, they’ll have been so used to constantly encountering hallucinations that the idea of joining a community that’s constantly trying to improve the accuracy of information will seem quaint, or even <em>absurd</em>, in a world where everything is wrong and made up all the time.</p>
<p>This means that it’s in the best interests of the AI platforms to not only pay to sustain Wikipedia and its community so that there’s a continuous source of new, accurate information over time, but that it’s also in their interest to keep teaching their community about the value of such a resource. The very fact that people are so desperate to chat with a bot shows how hungry they are for connection, and just imagine how excited they’d be to connect with the <em>actual humans</em> of the Wikipedia community!</p>
<h2>We can still build</h2>
<p>It’s easy to forget how radical Wikipedia was at its start. For the majority of people on the Internet, Wikipedia is just something that’s been omnipresent right from the start. But, as someone who got to watch it rise, take it from me: this was a thing that lots of regular people <em>built together</em>. And it was explicitly done as a collaboration meant to show the spirit of what the Internet is really about.</p>
<p><a href="https://wikimediafoundation.org/wikipedia25/">Take a look at its history</a>. Think about what it means that there is no advertising, and there never has been. It doesn’t track your activity. You can edit the site <em>without even logging in</em>. If you make an account, you don’t have to use your real name if you’d like to stay anonymous. When I wrote about <a href="https://www.anildash.com/2008/09/22/alan-leeds-and-who-writes-the-web/">being the creator</a> of an entirely <em>new</em> page on Wikipedia, it felt like magic, and it still does! You can be the person that births something onto the Internet that feels like it becomes a permanent part of the historical record, and then others around the world will help make it better, forever.</p>
<p>The site is still amongst the most popular sites on the web, bigger than almost every commercial website or app that has ever existed. There’s never been a single ad promoting it. It has unlocked <em>trillions</em> of dollars in value for the business world, and unmeasurable educational value for multiple generations of children. Did you know that for many, many topics, you can change your language from English to <em>Simple English</em> and get an <a href="https://simple.wikipedia.org/wiki/Quadratic_equation">easier-to-understand</a> version of an article that can often help explain a concept in much more approachable terms? Wikipedia has a <a href="https://www.wikivoyage.org">travel guide</a>! A <a href="https://www.wiktionary.org">dictionary</a>! A <a href="https://www.wikibooks.org">collection of textbooks and cookbooks</a>! Here are <a href="https://species.wikimedia.org/">all the species</a>! It’s unimaginably deep.</p>
<p>Whenever I worry about where the Internet is headed, I remember that this example of the collective generosity and goodness of people still exists. There are so many folks just working away, every day, to make something good and valuable for strangers out there, simply from the goodness of their hearts. They have no way of ever knowing who they’ve helped. But they believe in the simple power of doing a little bit of good using some of the most basic technologies of the internet. Twenty-five years later, all of the evidence has shown that they really have changed the world.</p>
<hr>
<p>If you are able, today is a very good day to <a href="https://donate.wikimedia.org/">support the Wikimedia Foundation</a>.</p>

    ]]>
      </content>
    </entry>
  
    
    <entry>
      <title>Codeless: From idea to software</title>
      <link href="https://anildash.com/2026/01/22/codeless/"/>
      <updated>2026-01-22T00:00:00Z</updated>
      <id>https://anildash.com/2026/01/22/codeless/</id>
      <content type="html">
        <![CDATA[
      <h2>Something actually new?</h2>
<p>There’s finally been a big leap forward in coding tech unlocked by AI — not just “it’s doing some work for me”, but “we couldn’t do this before”. What’s new are a few smart systems that let coders control fleets of dozens of coding bots, all working in tandem, to swarm over a list of tasks and to deliver entire features, or even entire <em>sets</em> of features, just from a plain-English description of the strategic goal to be accomplished.</p>
<p>This isn’t a tutorial, this is just trying to understand that something cool is happening, and maybe we can figure out what it means, and where it’s going. Lots of new technologies and buzzwords with wacky names like Gas Town and Ralph Wiggum and loops and polecats are getting as much attention as, well, anything since vibe coding. So what’s really going on?</p>
<p>The breakthrough here came from using two familiar ideas in interesting new ways. The first idea is <em>orchestration</em>. Just like cloud computing got massively more powerful when it became routine for coders to be able to control entire fleets of servers, the ability to reliably configure and control entire fleets of coding bots unlocks a much higher scale of capability than any one person could have by chatting with a bot on their own.</p>
<p>The second big idea is <em>resilience</em>. Just like systems got more capable when designers started to assume that components like hard drives would fail, or that networks would lose connection, today’s coders are aware of the worst shortcoming of using LLMs: sometimes they create garbage code. This tendency used to be the biggest shortcoming about using LLMs to create code, but by <em>designing</em> for failure, testing outputs, and iterating rapidly, codeless systems enable a huge advancement in the ultimate reliability of the output code.</p>
<p>The codeless approach also addresses the other huge objection that many coders have to using LLMs for coding. The most common direct objection to using AI tools to assist in coding hasn’t just been the broken code — it’s been the many valid social and ethical concerns around the vendors who build the platforms. But codeless systems are open source, non-commercial, and free to deploy, while making it trivial to swap in alternatives for every part of the stack, including using open source or local options for all or part of the LLM workload. This isn’t software being sold by a Big AI vendor; these are tools being created by independent hackers in the community.</p>
<p>The ultimate result is the ability to create software at scale without directly writing any code, simply by providing strategic direction to a fleet of coding bots. Call it “codeless” software.</p>
<h2>Codeless in 10 points</h2>
<p>If you’re looking for a quick bullet-point summary, here’s something skimmable:</p>
<ol class="numbered-callout">
  <li>"Codeless" is a way to describe a new way of orchestrating large numbers of AI coding bots to build software at scale, controlled by a plain-English strategic plan for the bots to follow.</li>
  <li>In this approach, you don't write code directly. Instead, you write a plan for the end result or product that you want, and the system directs your bots to build code to deliver that product. (Codeless abstracts away directly writing code just like "<a href="https://en.wikipedia.org/wiki/Serverless_computing">serverless</a>" abstracted away directly managing servers.)</li>
  <li>This codeless approach is credible because it emerged organically from influential coders who don't work for the Big AI companies, and independent devs are already starting to make it easier and more approachable. It's not a pitch from a big company trying to sell a product, and in fact, codeless tools make it easy to swap out one LLM for another.</li>
  <li>Today, codeless tools themselves don't cost anything. The systems are entirely open source, though setting them up can be complicated and take some time. Actually running enough bots to generate all that code gets expensive quickly if you use cutting-edge commercial LLMs, but mixing in some lower-cost open tools can help defray costs. We can also expect that, as this approach gains momentum, more polished paid versions of the tools will emerge.</li>
  <li>Many coders didn't like using LLMs to generate code because they hallucinate. Codeless systems <em>assume</em> that the code they generate will be broken sometimes, and handle that failure. Just like other resilient systems assume that hard drives will fail, or that network connections will be unreliable, codeless systems are designed to handle unreliable code.</li>
  <li>This has nothing to do with the "no code" hype from years ago, because it's not locked-in to one commercial vendor or one proprietary platform. And codeless projects can be designed to output code that will run on any regular infrastructure, including your existing systems.</li>
  <li>Codeless changes power dynamics. People and teams who adopt a codeless approach have the potential to build a lot more under their own control. And those codeless makers won't necessarily have to ask for permission or resources in order to start creating. Putting this power in the hands of those individuals might have huge implications over time, as people realize that they may not have to raise funding or seek out sponsors to build the things that they imagine.</li>
  <li>The management and creation interfaces for codeless systems are radically more accessible than many other platforms because they're often controlled by simple plain text <a href="https://www.anildash.com/2026/01/09/how-markdown-took-over-the-world/">Markdown</a> files. This means it's likely that some of the most effective or successful codeless creators could end up being people who have had roles like product managers, designers, or systems architects, not just developers.</li>
  <li>Codeless approaches are probably <em>not</em> a great way to take over a big legacy codebase, since they rely on accurately describing an entire problem, which can often be difficult to completely capture. And coding bots may lack sufficient context to understand legacy codebases, especially since LLMs are sometimes weaker with legacy technologies.</li>
  <li>In many prior evolutions of coding, abstractions let coders work at higher levels, closer to the problem they were trying to solve. Low-level languages saved coders from having to write assembly language; high-level languages kept coders from having to write code to manage memory. Codeless systems abstract away directly writing code, continuing the long history of letting developers focus more on the problem to be solved than on manually creating every part of the code.</li>
</ol>
<h2>What does software look like when coders stop coding?</h2>
<p>As we’ve been saying for some time, for people who actually make and understand technology, the <a href="https://www.anildash.com/2025/10/17/the-majority-ai-view/">majority AI view</a> is that LLMs are just useful technologies that have their purposes, but we shouldn’t go overboard with all of the absurd hype. We’re seeing new examples of the deep moral failings and social harms of the Big AI companies every day.</p>
<p>Despite this, coders still haven’t completely written off the potential of LLMs. A big reason why coders are generally more optimistic about AI than writers or photographers is because, in creative spaces, AI smothers the human part of the process. But in coding, AI takes over the drudgery, and lets coders focus on the most human and expressive parts.</p>
<p>The shame, then, is that much of the adoption of AI for coding has been in top-down mandates at companies. Rather than enabling innovation, it’s been in deployments designed to undermine their workers’ job security. And, as we’ve seen, <a href="https://www.anildash.com/2026/01/06/500k-tech-workers-laid-off/">this has worked</a>. It’s no wonder that a lot of the research on enterprise use of AI for coding has shown little to no increase in productivity; obviously productivity improvements have not been the goal, much of the time.</p>
<p>Codeless tech has the potential to change that. Putting the power of orchestrating a fleet of coding bots in the hands of a smart and talented coder (or designer! or product manager! or writer! or…) upends a lot of the hierarchy about who’s able to call the shots on what gets created. The size of your nights-and-weekends project might be a lot bigger, the ambitions of your side gig could be a lot more grand.</p>
<p>It’s still early, of course. The bots themselves are expensive as hell if you’re running the latest versions of Claude Code for all of them. Getting this stuff running is hard; you’re bouncing between obscure references to Gas Town on <a href="https://github.com/steveyegge">Steve Yegge’s Github</a>, and a bunch of smart posts on <a href="https://simonwillison.net">Simon Willison’s blog</a>, and sifting through YouTube videos about <a href="https://www.youtube.com/watch?v=vIFD0YE29Fs">Ralph Wiggum</a> to see if they’re about the Simpsons or the software.</p>
<p>It’s gonna be like that for a while, a little bit of a mess. But that’s a lot better than Enterprise Certified Cloud AI Engineer, Level II, minimum 11 years LLM experience required. If history is any guide, the entire first wave of implementations will be discarded in favor of more elegant and/or powerful second versions, once we know what we actually want. <a href="https://wiki.c2.com/?PlanToThrowOneAway">Build one to throw away.</a> I mean, that’s kind of the spirit of the whole codeless thing, isn’t it?</p>
<p>This could all still sputter out, too. Maybe it’s another fad. I don’t love seeing some of the folks working on codeless tools pivot into asking folks to buy memecoins to support their expensive coding bot habits. The Big AI companies are gonna try to kill it or co-opt it, because tools that reduce the switching cost between LLMs to zero must terrify them.</p>
<p>But for the first time in a long time, this thing feels a little different. It’s emerging organically from people who don’t work for trillion dollar companies. It’s starting out janky and broken and interesting, instead of shiny and polished in a soulless live stream featuring five dudes wearing vests. This is tech made for people who <em>like making things</em>, not tech made for people who are trying to appease financiers. It’s <a href="https://www.anildash.com/2025/10/24/founders-over-funders/">for inventors, not investors</a>.</p>
<p>I truly, genuinely, don’t care if you call it “codeless”; it just needs a name that we can hang on it so people know wtf we’re talking about. I worked backwards from “what could we write on a whiteboard, and everyone would know what we were talking about?” If you point at the diagrams and say, “The legacy code is complicated, so we’re going to do that as usual, but the client apps and mobile are all new, so we could just do those codeless and see how it goes”, people would just sort of nod along and know what you meant, at least vaguely. If you’ve got a better name, have at it.</p>
<p>In the meantime, though, start hacking away. Make something more ambitious than you could do on your own. Sneak an army of bots into work. Build something that you would have needed funding for before, but don’t now. Build something that somebody has made a horrible proprietary version of, and release it for free. Share your Markdown files!</p>
<p>Maybe the distance from idea to app just got a little bit shorter? We're about to find out.</p>

    ]]>
      </content>
    </entry>
  
    
    <entry>
      <title>Why We Speak</title>
      <link href="https://anildash.com/2026/01/26/why-we-speak/"/>
      <updated>2026-01-26T00:00:00Z</updated>
      <id>https://anildash.com/2026/01/26/why-we-speak/</id>
      <content type="html">
        <![CDATA[
      <p>I've been working in and around the technology industry for a long time. Depending on how you count, it's 20 or 30 years. (I first started getting paid to put together PCs with a screwdriver when I was a teenager, but there isn't a good way to list that on LinkedIn.) And as soon as I felt like I was pretty sure that I was going to be able to pay the next month's rent without having to eat ramen noodles for two weeks before it was due, I felt like I'd really made it.</p>
<p>And as soon as you've made it, you owe it to everybody else to help out as much as you can. I don't know how to put it more simply than that. But for maybe the first decade of being in the &quot;startup&quot; world, where everybody was worried about appealing to venture capital investors, or concerned about getting jobs with the big tech companies, I was pretty convinced that one of the things that you <em>couldn't</em> do to help people was to talk about some of the things that were wrong. Especially if the things that were wrong were problems that, when described, might piss off the guys who were in charge of the industry.</p>
<p>But eventually, I got a little bit of power, mostly due to becoming a little bit visible in the industry, and I started to get more comfortable speaking my mind. Then, surprisingly, it turned out that... nothing happened. The sky didn't fall. I didn't get fired from my jobs. I certainly got targeted for harassment by bad actors, but that was largely due to my presence on social media, not simply because of my views. (And also because I tend to take a pretty provocative or antagonistic tone on social media when trying to frame an argument.)  It probably helped that, in the workplace, I both tend to act like a normal person and am also generally good at my job.</p>
<p>I point all of this out not to pat myself on the back, or as if any of this is remarkable  — it's certainly not — but because it's useful context for the current moment.</p>
<h2>The cycle of backlash</h2>
<p>I have been around the technology industry, and the larger business world, long enough to have watched the practice of speaking up about moral issues go from completely unthinkable to briefly being given lip service to actively being persecuted both professionally and politically. The campaigns to stamp out issues of conscience amongst working people have vilified caring for others with names ranging from &quot;political correctness&quot; to &quot;radicalism&quot; to &quot;virtue signaling&quot; to &quot;woke&quot; and I'm sure I'm missing many more. This, despite the fact that there have always been thoughtful people in every organization who try to do the right thing; it's impossible to have a group of people of any significant size and not have <em>some</em> who have a shred of decency and humanity within them.</p>
<p>But the technology industry has an incredibly short memory, by design. We're always at the beginning of history, and so many people working in it have never encountered a time before this moment when there's been this kind of brutal backlash from their leaders against common decency. Many have never felt such pressure to tamp down their own impulses to be good to their colleagues, coworkers, collaborators and customers.</p>
<p>I want to encourage everyone who is afraid in this moment to find some comfort and some solace in the fact that we have been here before. Not in <em>exactly</em> this place, but in analogous ones. And also to know that there are many people who are also feeling the same combination of fear or trepidation about speaking up, but a compelling and irrepressible desire to do so. We've shifted the Overton window on what's acceptable multiple times before.</p>
<p>I am, plainly, exhorting you do to speak up about the current political moment and to call for action. There is some risk to this. There is less risk for everyone when more of us speak up.</p>
<h2>Where we are</h2>
<p>In the United States, our government is lying to us about an illegal occupation of a major city, which has so far led to multiple deaths of innocents who were murdered by agents of the state. We have video evidence of what happened, and the most senior officials in our country have deliberately, blatantly and unrepentantly lied about what the videos show, while besmirching the good names of the people who were murdered. Just as the administration's most senior officials spread these lies, several of the most powerful and influential executives in the tech industry voluntarily met with the President, screened a propaganda film made expressly as a bribe for him, and have said nothing about either the murders or the lies about the murders.</p>
<p>These are certainly not the first wrongs by our government. These are not even the first such killings in Minnesota in recent years. But they are a new phase, and this occupation is a new escalation. This degree of lawless authoritarianism <em>is</em> new — tech leaders were <em>not</em> <a href="https://www.anildash.com/2025/09/09/how-tim-cook-sold-out-steve-jobs/">crafting golden ingots</a> to bribe sitting leaders of the United States in the past. Military parades featuring banners bearing the face of Dear Leader, followed by ritual gift-giving in the throne room of the golden palace with the do-nothing failsons and conniving hangers-on of the aging strongman used to be the sort of thing we mocked about failing states, not things we emulated about them.</p>
<p>So, when our &quot;leaders&quot; have failed, and they have, we must become a leaderful community. This, I have a very positive feeling about. I've seen so many people who are willing to step up, to give of themselves, to use their voices. And I have all the patience in the world for those who may not be used to doing those things, because it can be hard to step into those shoes for the first time. If you're unfamiliar or uncomfortable with this work, or if the risk feels a little more scary because you carry the responsibility of caring for those around you, that's okay.</p>
<p>But I've been really heartened to see <a href="https://www.linkedin.com/posts/anildash_i-just-want-to-share-something-briefly-as-activity-7421306939055198209-272Z">how many people have responded</a> when I started talking about these ideas on LinkedIn — not usually the bastion of &quot;political&quot; speech. I don't write the usual hustle-bro career advice platitudes there, and instead laid out the argument for why people will need to choose a side, and should choose the side that their heart already knows that they're on. To my surprise, there's been near-universal agreement, even amongst many who don't agree with many of my other views.</p>
<p><a href="https://www.businessinsider.com/business-leader-ceo-silence-alex-pretti-killing-minneapolis-2026-1">It is already clear</a> that business leaders are going to be compelled to speak up. It would be ideal if it is their own workers who lead them towards the words (and actions) that they put out into the world.</p>
<h2>Where we go</h2>
<p>Those of us in the technology realm bear a unique responsibility here. It is the tools that we create which enable the surveillance and monitoring that agencies like ICE use to track down and threaten both their targets and those they attempt to intimidate away from holding them accountable. It is the wealth of our industry which isolates the tycoons who run our companies when they make irrational decisions like creating vanity films about the strongman's consort rather than pushing for the massive increase in ICE spending to instead go towards funding all of Section 8 housing, all of CHIP insurance, all school lunches, and 1/3 of all federal spending on K-12 education.</p>
<p>It takes practice to get comfortable using our voices. It takes repetition until leaders know we're not backing down. It takes perseverance until people in power understand they're going to have to act in response to the voices of their workers. <a href="https://iceout.tech">But everyone has a voice</a>. Now is your turn to use it.</p>
<p>When we speak, we make it easier for others to do so. When we all speak, we make change inevitable.</p>

    ]]>
      </content>
    </entry>
  
    
    <entry>
      <title>A Codeless Ecosystem, or hacking beyond vibe coding</title>
      <link href="https://anildash.com/2026/01/27/codeless-ecosystem/"/>
      <updated>2026-01-27T00:00:00Z</updated>
      <id>https://anildash.com/2026/01/27/codeless-ecosystem/</id>
      <content type="html">
        <![CDATA[
      <p>There's been a <a href="https://www.anildash.com/2026/01/22/codeless/">remarkable leap forward</a> in the ability to orchestrate coding bots, making it possible for ordinary creators to command dozens of AI bots to build software without ever having to directly touch code. The implications of this kind of evolution are potentially extraordinary, as outlined in that first set of notes about what we could call &quot;codeless&quot; software. But now it's worth looking at the larger ecosystem to understand where all of this might be headed.</p>
<h2>&quot;Frontier minus six&quot;</h2>
<p>One idea that's come up in a host of different conversations around codeless software, both from supporters and skeptics, is how these new orchestration tools can enable coders to control coding bots that <em>aren't</em> from the Big AI companies. Skeptics say, &quot;won't everyone just use Claude Code, since that's the best coding bot?&quot;</p>
<p>The response that comes up is one that I keep articulating as &quot;frontier minus six&quot;, meaning the idea that many of the open source or open-weight AI models are often delivering results at a level equivalent to where frontier AI models were six months ago. Or, sometimes, where they were 9 months or a year ago. In any of these cases, these are still damn good results! These levels of performance are not merely acceptable, they are results that we were amazed by just months ago, and are more than serviceable for a large number of use cases — especially if those use cases can be run locally, at low cost, with lower power usage, without having to pay any vendor, and in environments where one can inspect what's happening with security and privacy.</p>
<p>When we consider that a frontier-minus-six fleet of bots can often run on cheap commodity hardware (instead of the latest, most costly, hard-to-get Nvidia GPUs) and we still have the backup option of escalating workloads to the paid services if and when a task is too challenging for them to complete, it seems inevitable that this will be part of the mix in future codeless implementations.</p>
<h2>Agent patterns and design</h2>
<p>The most thoughtful and fluent analysis of the new codeless approach has been <a href="https://maggieappleton.com/gastown">this wonderful essay by Maggie Appleton</a>, whose writing is always incisive and insightful. This one's a must-read! Speaking of Gas Town (Steve Yegge's signature orchestration tool, which has catalyzed much of the codeless revolution), Maggie captures the ethos of the entire space:</p>
<blockquote>
<p>We should take Yegge’s creation seriously not because it’s a serious, working tool for today’s developers (it isn’t). But because it’s a good piece of speculative design fiction that asks provocative questions and reveals the shape of constraints we’ll face as agentic coding systems mature and grow.</p>
</blockquote>
<h2>Code and legacy</h2>
<p>Once you've considered Maggie's piece, it's worth reading over Steve Krouse's essay, &quot;<a href="https://blog.val.town/vibe-code">Vibe code is legacy code</a>&quot;. Steve and his team build the delightful <a href="https://www.val.town">val town</a>, an incredibly accessible coding community that strikes a very careful balance between enabling coding and enabling AI assistance without overwriting the human, creative aspects of building with code. In many ways (including its aesthetic), it is the closest thing I've seen to a spiritual successor to the work we'd done for many years with <a href="https://en.wikipedia.org/wiki/Glitch,_Inc.">Glitch</a>, so it's no surprise that Steve would have a good intuition about the human relationship to creating with code.</p>
<p>There's an interesting point, however to the core point Steve makes about the disposability of vibe-coded (or AI-generated) code: <em>all</em> code is disposable. Every single line of code I wrote during the many years I was a professional developer has since been discarded. And it's not just because I was a singularly terrible coder; this is often the <em>normal</em> thing that happens with code bases after just a short period of time. As much as we lament the longevity of legacy code bases, or the impossibility of fixing some stubborn old systems based on dusty old languages, it's also very frequently the case that people happily rip out massive chunks of code that people toiled over for months or years and then discard it all without any sentimentality whatsoever.</p>
<p>Codeless tooling just happens to embrace this ephemerality and treat it as a feature instead of a bug. That kind of inversion of assumptions often leads to interesting innovations.</p>
<h2>To enterprise or not</h2>
<p>As I noted in my original piece on codeless software, we can expect any successful way of building software to be appropriated by companies that want to profiteer off of the technology, <em>especially</em> enterprise companies. This new realm is no different. Because these codeless orchestration systems have been percolating for some time, we've seen some of these efforts pop up already.</p>
<p>For example, the team at Every, which consults and builds tools around AI for businesses, calls a lot of these approaches <a href="https://every.to/chain-of-thought/compound-engineering-how-every-codes-with-agents">compound engineering</a> when their team uses them to create software. This name seems fine, and it's good to see that they maintain the ability to switch between models easily, even if they currently prefer Claude's Opus 4.5 for most of their work. The focus on planning and thinking through the end product holistically is a particularly important point to emphasize, and will be key to this approach succeeding as new organizations adopt it.</p>
<p>But where I'd quibble with some of what they've explained is the focus on tying the work to individual vendors. Those concerns should be abstracted away by those who are implementing the infrastructure, as much as possible. It's a bit like ensuring that most individual coders don't have to know exactly which optimizations a compiler is making when it targets a particular CPU architecture. Building that muscle where the specifics of different AI vendors become less important will help move the industry forward towards reducing platforms costs — and more importantly, empowering coders to make choices based on their priorities, not those of the AI platforms or their bosses.</p>
<h2>Meeting the codeless moment</h2>
<p>A good example of the &quot;normal&quot; developer ecosystem recognizing the groundswell around codeless workflows and moving quickly to integrate with them is the Tailscale team <em>already</em> shipping <a href="https://tailscale.com/blog/aperture-private-alpha">Aperture</a>. While this initial release is focused on routine tasks like managing API keys, it's really easy to see how the ability to manage gateways and usage into a heterogeneous mix of coding agents will start to enable, and encourage, adoption of new coding agents. (Especially if those &quot;frontier-minus-six&quot; scenarios start to take off.)</p>
<p>I've been on the record <a href="https://me.dm/@anildash/109719178280170032">for years</a> about being bullish on Tailscale, and nimbleness like this is a big reason why. That example of seeing where developers are going, and then building tooling to serve them, is always a sign that something is bubbling up that could actually become signficant.</p>
<p>It's still early, but these are the first few signs of a nascent ecosystem that give me more conviction that this whole thing might become real.</p>

    ]]>
      </content>
    </entry>
  
</feed>
Raw headers
{
  "age": "51533",
  "cache-control": "public,max-age=0,must-revalidate",
  "cache-status": "\"Netlify Edge\"; hit",
  "cf-cache-status": "DYNAMIC",
  "content-type": "application/xml",
  "date": "Wed, 28 Jan 2026 12:34:27 GMT",
  "etag": "W/\"4c238dd3bce552862f7bc8c19114547a-ssl-df\"",
  "referrer-policy": "strict-origin-when-cross-origin",
  "server": "cloudflare",
  "strict-transport-security": "max-age=31536000",
  "transfer-encoding": "chunked",
  "vary": "Accept-Encoding",
  "x-content-type-options": "nosniff",
  "x-frame-options": "SAMEORIGIN",
  "x-nf-request-id": "01KG29J5CK0HN3762HVZGAN0NT",
  "x-xss-protection": "1; mode=block"
}
Parsed with @rowanmanning/feed-parser
{
  "meta": {
    "type": "atom",
    "version": "1.0"
  },
  "language": null,
  "title": "Anil Dash",
  "description": "A blog about making culture. Since 1999.",
  "copyright": null,
  "url": "https://anildash.com/",
  "self": "https://anildash.com/feed.xml",
  "published": null,
  "updated": "2026-01-27T00:00:00.000Z",
  "generator": null,
  "image": null,
  "authors": [
    {
      "name": "Anil Dash",
      "email": "[email protected]",
      "url": null
    }
  ],
  "categories": [],
  "items": [
    {
      "id": "https://anildash.com/2025/11/14/wanting-not-to-want-ai/",
      "title": "I know you don’t want them to want AI, but…",
      "description": null,
      "url": "https://anildash.com/2025/11/14/wanting-not-to-want-ai/",
      "published": null,
      "updated": "2025-11-14T00:00:00.000Z",
      "content": "<p>Today, Rodrigo Ghedrin wrote the very well-intentioned, but incorrectly-titled,  “<a href=\"https://manualdousuario.net/en/mozilla-firefox-window-ai\">I think nobody wants AI in Firefox, Mozilla</a>”. As he correctly summarizes, <a href=\"https://connect.mozilla.org/t5/discussions/building-ai-the-firefox-way-shaping-what-s-next-together/td-p/109922\">sentiment on the Mozilla thread</a> about a potential new AI pane in the Firefox browser is overwhelmingly negative. That’s not surprising; the Big AI companies have given people numerous legitimate reasons to hate and reject “AI” products, ranging from undermining labor to appropriating content without consent to having egregious environmental impacts to eroding trust in public discourse.</p>\n<p>I spent much of the last week having the distinct honor of serving as MC at the <a href=\"https://www.mozillafestival.org/\">Mozilla Festival</a> in Barcelona, which gave me the extraordinary opportunity to talk to hundreds of the most engaged Mozilla community members in person, and to address thousands more from onstage or on the livestream during the event. No surprise, one of the biggest topics we talked about the entire time was AI, and the intense, complex, and passionate feelings so many have about these new tools. Virtually everyone shared some version of what I’d articulated as <a href=\"https://www.anildash.com/2025/10/17/the-majority-ai-view\">the majority view</a> on AI, which is approximately that LLMs can be interesting as a technology, but that Big Tech, and <em>especially</em> Big AI, are decidedly awful and people are very motivated to stop them from committing their worst harms upon the vulnerable.</p>\n<p>But.</p>\n<p>Another reality that people were a little more quiet in acknowledging, and sometimes reluctant to engage with out loud, is the reality that <em>hundreds of millions of people are using the major AI tools every day</em>. When I would point this out, there was often an initial defensive reaction talking about how people are forced to use these tools at work, or how AI is being shoehorned into every tool and foisted upon users. This is all true! And also? Hundreds of millions of users are choosing to go to these websites, of their own volition, and engage with these tools.</p>\n<p>Regular, non-expert internet users find it interesting, or even <em>amusing</em>, to generate images or videos using AI and to send that media to their friends. While sophisticated media aesthetics find those creations gauche or even offensive, a lot of other cultures find them perfectly acceptable. And it’s an inarguable reality that millions of people find AI-generated media images emotionally <em>moving</em>. Most people that see AI-generated content as tolerable folk art belong to demographics that are dismissed by those who shape the technology platforms that billions of people use every day.</p>\n<p>Which brings us back to “nobody wants AI in Firefox”. (And its obligatory <a href=\"https://news.ycombinator.com/item?id=45926779\">matching Hacker News thread</a>, which proceeds exactly as you might expect.) In the communities that frequent places like Hacker News and Mozilla forums, where everyone is hyper-fluent in concerns like intellectual property rights and the abuses of Big Tech, it’s received wisdom that “everyone” resists the encroachment of AI into tools, and therefore the only possible reason that Mozilla (or any organization) might add support for any kind of AI features would be to chase a trend that’s in fashion amongst tech tycoons. I don’t doubt that this is a factor; anytime a significant percentage of decision makers are alumni of Silicon Valley, its culture is going to seep into an organization.</p>\n<h2>The War On Pop-Ups</h2>\n<p>What people are ignoring, though, is that <em>using AI tools is an incredibly mainstream experience now</em>. Regular people do it all the time. And doing so in normal browsers, in a normal context, is less safe. We can look at an analogy from the early days of the browser wars, a generation ago.</p>\n<p>Twenty years ago, millions and millions of people used Internet Explorer to get around the web, because it was the default browser that came with their computer. It was buggy and wildly insecure, and users would often find their screen littered with intrusive pop-up advertisements that had been spawned by various sites that they had visited across the web. We could have said, “well, those are simply fools with no taste using bad technology who get what they deserve”</p>\n<p>Instead, countless enthusiasts and advocates across the web decided that <em>everyone</em> deserved to have an experience that was better and safer. And as it turned out, while getting those improvements, people could even get access to a cool new feature that nobody had seen before: tabs! Firefox wasn’t the first browser to invent all these little details, but it was the first to put them all together into one convenient little package. Even if the expert users weren’t personally visiting the sites riddled with pop-up ads themselves, they were glad to have spared their non-expert friends from the miseries they were enduring on the broken internet.</p>\n<p>I don’t know why today’s Firefox users, even if they’re the most rabid anti-AI zealots in the world, don’t say, “well, even if I hate AI, I want to make sure Firefox is good at protecting the privacy of AI users so I can recommend it to my friends and family who use AI”. I have to assume it’s because they’re in denial about the fact that their friends and family are using these platforms. (Judging by the tenor of their comments on the topic, I’d have to guess their friends don’t want to engage with them on the topic at all.)</p>\n<p>We see with tools like <a href=\"https://www.anildash.com/2025/10/22/atlas-anti-web-browser\">ChatGPT’s Atlas</a> that there are now aggressively anti-web browsers coming to market, and even a sophisticated user might not be able to realize how nefarious some of the tactics of these new apps can be. I think those who are critical can certainly see that those enabling those harms are bad actors. And those critics are also aware that hundreds of millions of people are using ChatGPT. So, then… what browser do they think those users should use?</p>\n<h2>What does good look like?</h2>\n<p>Judging by what I see in the comments on the posts about Firefox’s potential AI feature integrations, the apparent path that critics are recommending as an alternative browser is “I’ll yell at you until you stop using ChatGPT”. Consider this post my official notice: that strategy hasn’t worked. And it is not <em>going</em> to work. The only thing that <em>will</em> work is to offer a better alternative to these users. That will involve <a href=\"https://www.anildash.com/2025/05/02/what-would-good-ai-look-like\">defining what an acceptably “good” alternative AI looks like</a>, and then building and shipping it to these users, and convincing them to use it. I’m hoping such an effort succeeds. But I can guarantee that scolding people and trying to convince them that they’re not finding utility in the current platforms, or trying to make them feel guilty about the fact that they <em>are</em> finding utility in the current platforms, will not work.</p>\n<p>And none of this is exculpatory for my friends at Mozilla. As I’ve said to the good people there, and will share again here, I don’t think the framing of the way this feature has been presented has done either the Firefox team or the community any favors. These big, emotional blow-ups are demoralizing, and take away time and energy and attention that could be better spent getting people excited and motivated to grow for the future.</p>\n<p>My personal wishlist would be pretty simple:</p>\n<p><em>Just give people the “shut off all AI features” button. It’s a tiny percentage of people who want it, but they’re never going to shut up about it, and they’re convinced they’re the whole world and they can’t distinguish between being mad at big companies and being mad at a technology so give them a toggle switch and write up a blog post explaining how extraordinarily expensive it is to maintain a configuration option over the lifespan of a global product.</em> Market Firefox as “The best AI browser for people who hate Big AI”. Regular users have <em>no idea</em> how creepy the Big AI companies are — they’ve just heard their local news talk about how AI is the inevitable future. If Mozilla can warn me <a href=\"https://www.mozillafoundation.org/en/privacynotincluded/articles/how-to-protect-your-privacy-from-chatgpt-and-other-ai-chatbots\">how to protect my privacy from ChatGPT</a>, then it can also mention that ChatGPT tells children how to self-harm, and should be aggressive in engaging with the community on how to build tools that help mitigate those kinds of harms — how do we catalyze <em>that</em> innovation?</p>\n<ul>\n<li>Remind people that there isn’t “a Firefox” — everyone is Firefox. Whether it’s Zen, or your custom build of Firefox with your favorite extensions and skins, it’s all part of the same story. Got a local LLM that runs entirely as a Firefox extension? Great! That should be one of the many Firefoxes, too. Right now, so much of the drama and heightened emotions and tension are coming from people’s (well… dudes') egos about there being One True Firefox, and wanting to be the one who controls what’s in that version, as an expression of one set of values. This isn’t some blood-feud fork, there can just be a lot of different choices for different situations. Make it all work.</li>\n</ul>\n<p>So, that’s the answer. I think some people want AI in Firefox, Mozilla. And some people don’t. And some people don’t know what “AI” means. And some people forgot Firefox even exists. It’s that last category I’m most concerned about, frankly. Let’s go get ‘em.</p>",
      "image": null,
      "media": [],
      "authors": [
        {
          "name": "Anil Dash",
          "email": "[email protected]",
          "url": null
        }
      ],
      "categories": []
    },
    {
      "id": "https://anildash.com/2025/12/02/vibe-coding-empowering-and-imprisoning/",
      "title": "Vibe Coding: Empowering and Imprisoning",
      "description": null,
      "url": "https://anildash.com/2025/12/02/vibe-coding-empowering-and-imprisoning/",
      "published": null,
      "updated": "2025-12-02T00:00:00.000Z",
      "content": "<p>In case you haven’t been following the world of software development closely, it’s good to know that vibe coding — using LLM tools to assist with writing code — can help enable many people to create apps or software that they wouldn’t otherwise be able to make. This has led to an extraordinarily rapid adoption curve amongst even experienced coders in many different disciplines within the world of coding. But there’s a very important threat posed by vibe coding that almost no one has been talking about, one that’s far more insidious and specific than just the risks and threats posted by AI or LLMs in general.</p>\n<p>Here’s a quick summary:</p>\n<p><em>One of the most effective uses of LLMs is in helping programmers write code</em> A huge reason VCs and tech tycoons put billions into funding LLMs was so they could undermine coders and depress wages</p>\n<ul>\n<li>Vibe coding might limit us to making simpler apps instead of the radical innovation we need to challenge Big Tech</li>\n</ul>\n<h2>Start vibing</h2>\n<p>It may be useful to start by explaining how people use LLMs to assist with writing software. My background is that I’ve helped build multiple companies focused on enabling millions of people to create with code. And I’m personally an example of one common scenario with vibe coding. Since I don’t code regularly anymore, I’ve become much slower and less efficient at even the web development tasks that I used to do professionally, which I used to be fairly competent at performing. In software development, there are usually a nearly-continuous stream of new technologies being released (like when you upgrade your phone, or your computer downloads an update to your web browser), and when those things change, developers have to update <em>their</em> skills and knowledge to stay current with the latest tools and techniques. If you’re not staying on top of things, your skillset can rapidly decay into irrelevance, and it can be hard to get back up to speed, even though you understand the fundamentals completely, and the underlying logic of <em>how</em> to write code hasn’t changed at all. It’s like knowing how to be an electrician but suddenly you have to do all your work in French, and you don’t speak French.</p>\n<p>This is the kind of problem that LLMs are really good at helping with. Before I had this kind of coding assistant, I couldn’t do any meaningful projects within the limited amount of free time that I have available on nights and weekends to build things. Now, with the assistance of contemporary tools, I can get help with things like routine boilerplate code and obscure syntax, speeding up my work enough to focus on the fun, creative parts of coding that I love.</p>\n<p>Even professional coders who <em>are</em> up to date on the latest technologies use these LLM tools to do things like creating scripts, which are essentially small bits of code used to automate or process common tasks. This kind of code is disposable, meaning it may only ever be run once, and it’s not exposed to the internet, so security or privacy concerns aren’t usually much of an issue. In that context, having the LLM create a utility for you can feel like being truly liberated from grunt work, something like having a robot vacuum around to sweep up the floor.</p>\n<h2>Surfing towards serfdom</h2>\n<p>This all sounds pretty good, right? It certainly helps explain why so many in the tech world tend to see AI much more positively than almost everyone else does; there’s a clear-cut example of people finding value from these tools in a way that feels empowering or even freeing.</p>\n<p>But there are far darker sides to this use of AI. Let me put aside the threats and risks of AI that are true of <em>all</em> uses of the Big AI platforms, like the environmental impact, the training on content without consent, the psychological manipulation of users, the undermining of legal regulations, and other significant harms. These are all real, and profound, but I want to focus on what’s specific to using AI to help write code here, because there are negative externalities that are unique to <em>this</em> context that people haven’t discussed enough. (For more on the larger AI discussion, see \"<a href=\"https://www.anildash.com/2025/05/01/what-would-good-ai-look-like/\">What would good AI look like?</a>\")</p>\n<p>The first problem raised by vibe coding is an obvious one: the major tech investors focused on making AI good at writing code because they wanted to make coders less powerful and reduce their pay. If you go back a decade ago, nearly everyone in the world was saying “teach your kids to code” and being a software engineer was one of the highest paying, most powerful individual jobs in the history of labor. Pretty soon, coders were acting like it — using their power to improve workplace conditions for those around them at the major tech companies, and pushing their employers to be more socially responsible. Once workers began organizing in this way, the tech tycoons who founded the big tech companies, and the board members and venture capitalists who backed them, immediately began investing billions of dollars in building these technologies that would devalue the labor of millions of coders around the world.</p>\n<p>It worked. More than <em>half a million</em> tech workers have been laid off in America since ChatGPT was released in November 2022.</p>\n<p>That’s <em>just</em> in the private sector, and <em>just</em> the ones tracked by <a href=\"https://layoffs.fyi\">layoffs.fyi</a>.  Software engineering job listings have <a href=\"https://blog.pragmaticengineer.com/software-engineer-jobs-five-year-low/\">plummeted to a 5-year low</a>. This is during a period of time that nobody even describes as a recession. The same venture capitalists who funded the AI boom keep insisting that these trends are about macroeconomic abstractions like interest rates, a stark contrast to their rhetoric the rest of the time, when they insist that they are alpha males who make their own decisions based on their strong convictions and brave stances against woke culture. It is, in fact, the case that they are just greedy people who invested a ton of money into trying to put a lot of good people out of work, and they succeeded in doing so.</p>\n<p>There is no reason why AI tools like this <em>couldn't</em> be used in the way that they're often described, where they increase productivity and enable workers to do more and generate more value. But instead we have the wealthiest people in the world telling the wealthiest companies in the world, while they generate record profits, to lay off workers who could be creating cool things for customers, and then blaming it on everyone but themselves.</p>\n<h2>The past as prison</h2>\n<p>Then there’s the second problem raised by vibe coding: You can’t make anything truly radical with it. By definition, LLMs are trained on what has come before. In addition to being already-discovered territory, existing code is buggy and broken and sloppy and, as anyone who has ever written code knows, absolutely embarrassing to look at. Worse, many of the people who are using vibe coding tools are increasingly those who <em>don’t</em> understand the code that is being generated by these systems. This means the people generating all of this newly-vibed code won’t even know when the output is insecure, or will perform poorly, or includes exploits that let others take over their system, or when it is simply incoherent nonsense that <em>looks</em> like code but doesn’t do anything.</p>\n<p>All of those factors combine to encourage people to think of vibe coding tools as a sort of “black box” that just spits out an app <em>for</em> you. Even the giant tech companies are starting to encourage this mindset, tacitly endorsing the idea that people don’t need to know what their systems are doing under the hood. But obviously, somebody needs to know whether a system is <em>actually</em> secure. Somebody needs to know if a system is actually doing the tasks it says that it’s doing. The Big AI companies that make the most popular LLMs on the market today routinely design their products to induce emotional dependency in users by giving them positive feedback and encouragement, even when that requires generating false responses. Put more simply: they make the bot lie to you to make you feel good so you use the AI more. That’s terrible in a million ways, but one of them is that it sure does generate some bad code.</p>\n<p>And a vibe coding tool absolutely won’t make something truly <em>new</em>. The most radical, disruptive, interesting, surprising, weird, fun innovations in technology have happened because people with a strange compulsion to do something cool had enough knowledge to get their code out into the world. The World Wide Web itself was <em>not</em> a huge technological leap over what came before — it took off because of a huge leap in <em>insight</em> into human nature and human behavior, that happened to be captured in code. The actual bits and bytes? They were mostly just plain text, much of which was in formats that had already been around for many years prior to Tim Berners-Lee assembling it all into the first web browser. That kind of surprising innovation could probably never be vibe coded, even though all of the raw materials might be scooped up by an LLM, because even if the human writing the prompt had that counterintuitive stroke of genius, the system would still be hemmed in by the constraints of the works it had been trained on. The past is a prison when you’re inventing the future.</p>\n<p>What’s more, if you were going to use a vibe coding tool to make a truly radical new technology, do you think today’s Big AI companies would let their systems create that app? The same companies that made a platform that just put hundreds of thousands of coders out of work? The  same companies that make a platform that tells your kids to end their own lives? The same companies whose cronies in the White House are saying there should <em>never be any laws</em> reining them in? Those folks are going to help you make new tech that threatens to disrupt their power? I don’t think so.</p>\n<h2>Putting power in people’s hands</h2>\n<p>I’m deeply torn about what the future of LLMs for coding should be. I’ve spent decades of my life trying to make it easier for everyone to make software. I’ve seen, firsthand, the power of using AI tools to help coders — especially those new to coding — build their confidence in being able to create something new. I love that potential, and in many ways, it’s the most positive and optimistic possibility around LLMs that I’ve seen. It’s the thing that makes me think that maybe there is a part of all the AI hype that is not pure bullshit. Especially if we can find a version of these tools that’s genuinely open source and free and has been trained on people’s code with their consent and cooperation, perhaps in collaboration with some educational institutions, I’d be delighted to see that shared with the world in a thoughtful way.</p>\n<p>But I also have seen the majority of the working coders I know (and the <em>non</em>-working coders I know, including myself) rush to integrate the commercial coding assistants from the Big AI companies into their workflow without necessarily giving proper consideration to the long-term implications of that choice. What happens when we’ve developed our dependencies on that assistance? How will people introduce <em>new</em> technologies like new programming languages and frameworks if we all consider the LLMs to be the canonical way of writing our code, and the training models don’t know the new tech exists? How does our imagination shrink when we consider our options of what we create with code to be choosing between the outputs of the LLM rather than starting from the blank slate of our imagination? How will we build the next generation of coders skilled enough to catch the glaring errors that LLMs create in their code?</p>\n<p>There’s never been this stark a contrast between the negatives and positives of a new technology being so tightly coupled before when it comes to enabling developers. Generally change comes to coders incrementally. Historically, there was always a (wonderful!) default skepticism to coding culture, where anything that reeked of marketing or hype was looked at with a huge amount of doubt until there was a significant amount of proof to back it up.</p>\n<p>But in recent years, as with everything else, the culture wars have come for tech. There’s now a cohort in the coding world that has adopted a cult of personality around a handful of big tech tycoons despite the fact that these men are deeply corrosive to society. Or perhaps <em>because</em> they are. As a result, there’s a built-in constituency for any new AI tool, regardless of its negative externalities, which gives them a sense of momentum even where there may not be any.</p>\n<p>It’s worth us examining what’s really going on, and articulating explicitly what we’re trying to enable. Who are we trying to empower? What does success look like? What do we want people to be able to build? What do we <em>not</em> want people to be able to make? What price is too high to pay? What convenience is not worth the cost?</p>\n<h2>What tools do we choose?</h2>\n<p>I do, still, believe deeply in the power of technology to empower people. I believe firmly that you have to understand how to create technology if you want to understand how to control it. And I still believe that we have to democratize the power to create and control technology to as many people as possible so that technology can be something people can use as a tool, rather than something that happens _to_them.</p>\n<p>We are now in a complex phase, though, where the promise of democratizing access to creating technology is suddenly fraught in a way that it has never been before. The answer can’t possibly be that technology remains inaccessible and difficult for those outside of a privileged class, and easy for those who are already comfortable in the existing power structure.</p>\n<p>A lot is still very uncertain, but I come back to one key question that helps me frame the discussion of what’s next: What’s the most radical app that we could build? And which tools will enable me to build it? Even if all we can do is start having a more complicated conversation about what we’re doing when we’re vibe coding, we’ll be making progress towards a more empowered future.</p>",
      "image": null,
      "media": [],
      "authors": [
        {
          "name": "Anil Dash",
          "email": "[email protected]",
          "url": null
        }
      ],
      "categories": []
    },
    {
      "id": "https://anildash.com/2025/12/05/talk-about-us-without-us/",
      "title": "They have to be able to talk about us without us",
      "description": null,
      "url": "https://anildash.com/2025/12/05/talk-about-us-without-us/",
      "published": null,
      "updated": "2025-12-05T00:00:00.000Z",
      "content": "<p>It’s absolutely vital to be able to communicate effectively and efficiently to large groups of people. I’ve been lucky enough to get to refine and test my skills in communicating at scale for a few decades now, and the power of talking to communities is the one area where I’d most like to pass on what I’ve learned, because it’s this set of skills that can have the biggest effect on deciding whether good ideas and good work can have their greatest impact.</p>\n<p>My own work crosses many disparate areas. Over the years, I’ve gotten to cycle between domains as distinct as building technology platforms and products for developers and creators, enabling activism and policy advocacy in service of humanist ideals, and more visible external-facing work such as public speaking or writing in various venues like magazines or on this site. (And then sometimes I dabble in my other hobbies and fun stuff like scholarship or research into areas like pop culture and media.)</p>\n<p>What’s amazing is, in <em>every single one</em> of these wildly different areas, the exact same demands apply when trying to communicate to broad groups of people. This is true despite the broadly divergent cultural norms across all of these different disciplines. It can be a profoundly challenging, even intimidating, job to make sure a message is being communicated accurately, and in high fidelity, to everyone that you need to reach.</p>\n<p>That vital task of communicating to a large group gets even <em>more</em> daunting when you inevitably realize that, even if you <em>were</em> to find the perfect wording or phrasing for your message, you’d still never be able to deliver your story to every single person in your target audience by yourself anyway. There will always be another person whom you’re trying to reach that you just haven’t found yet. So, is it hopeless? Is it simply impossible to effectively tell a story at scale if you don’t have massive resources?</p>\n<p>It doesn’t have to be. We can start with one key insight about what it takes to get your most important stories out into the world. It’s a perspective that seems incredibly simple at first, but can lead to a pretty profound set of insights.</p>\n<h2>They have to be able to talk about us <em>without us</em>.</h2>\n<p>They have to be able to talk about us without us. What this phrase means, in its simplest form,  is that you have to tell a story so clear, so concise, so <em>memorable and evocative</em> that people can repeat it for you even after you’ve left the room. And the people who hear it need to be able to do this the <em>first time</em> they hear the story. Whether it’s the idea behind a new product, the core promise of a political campaign, or the basic takeaway from a persuasive essay (guess what the point of this one is!) — not only do you have to explain your idea and make your case, you have to be teaching your listener how to do the same thing for themselves.</p>\n<p>This is a tall order, to be sure. In pop music, the equivalent is writing a hit where people feel like they can sing along to the chorus by the time they get to the end of the song for the first time. Not everybody has it in them to write a hook that good, but if you do, that thing is going to become a classic. And when someone <em>else</em> has done it, you know it because it gets stuck in your head. Sometimes you end up humming it to yourself even if you didn’t want to. Your best ideas — your most <em>vital</em> ideas — need to rest on a messaging platform that solid.</p>\n<p>Delivering this kind of story actually requires substance. If you’re trying to fake it, or to force a narrative out of fluff or fakery, that will very immediately become obvious. When you set out to craft a story that travels in your absence, it has to have a body if it’s going to have legs. Bullshit is slippery and smells terrible, and the first thing people want to do when you leave the room is run away from it, not carry it with them.</p>\n<h2>The mission is the message</h2>\n<p>There’s another challenge to making a story that can travel in your absence: your ego has to let that happen. If you make a story that is effective and compelling enough that others can tell it, then, well…. those other people are going to tell it.  Not you. They’ll do it in their own words, and in their own voices, and make it <em>theirs</em>. They may use a similar story, but in their own phrasing, so it will resonate better with their people. This is a <em>gift</em>! They are doing you a kindness, and extending you great generosity. Respond with gratitude, and be wary of anyone who balks at not getting to be the voice or the face of a message themselves. Everyone gets a turn telling the story.</p>\n<p>Maybe the simple fact that others will be hearing a good story for the first time will draw them to it, regardless of <em>who</em> the messenger is. Sometimes people get attached to the idea that <em>they</em> have to be the one to deliver the one true message. But a core precept of “talk about us without us” is that there’s a larger mission and goal that everyone is bought into, and this demands that everyone stay aligned to their values rather than to their own personal ambitions around who tells the story.</p>\n<p>The truth of whomever will be most <em>effective</em> is the factor used to decide who will be the person to tell the story in any context. And this is a forgiving environment, because even if someone doesn’t get to be the voice one day, they’ll get another shot, since repetition and consistency are also key parts of this strategy, thanks to the disciplined approach that it brings to communication.</p>\n<h2>The joy of communications discipline</h2>\n<p>At nearly every organization where I’ve been in charge of onboarding team members in the last decade or so, one of the first messages we’ve presented to our new colleagues is, “We are disciplined communicators!” It’s a message that they hopefully get to hear as a joyous declaration, and as an assertion of our shared values. I always try to explicitly instill this value into teams I work with because, first, it’s good to communicate values explicitly, but also because this is a concept that is very seldom directly stated.</p>\n<p>It is ironic that this statement usually goes unsaid, because nearly everyone who pays attention to culture understands the vital importance of disciplined communications. Brands that are strictly consistent in their use of things like logos, type, colors, and imagery get such wildly-outsized cultural impact in exchange for relatively modest investment that it’s mind-boggling to me that more organizations don’t insist on following suit. Similarly, institutions that develop and strictly enforce a standard tone of voice and way of communicating (even if the tone itself is playful or casual) capture an incredibly valuable opportunity at minimal additional cost relative to how much everyone’s already spending on internal and external communications.</p>\n<p>In an era where every channel is being flooded with AI-generated slop, and when most of the slop tools are woefully incapable of being consistent about anything, simply showing up with an obviously-human, obviously-consistent story is a phenomenal way of standing out. That discipline demonstrates all the best of humanity: a shared ethos, discerning taste, joyful expression, a sense of belonging, an appealing consistency. And best of all, it represents the chance to participate for yourself — because it’s a message that you now know how to repeat for yourself.</p>\n<p>Providing messages that individuals can pick up and run with on their own is a profoundly human-centric and empowering thing to do in a moment of rising authoritarianism. When the fascists in power are shutting down prominent voices for leveling critiques that they would like to censor, and demanding control over an increasingly broad number of channels, there’s reassurance in people being empowered to tell their own stories together. Seeing stories bubble up from the grassroots in collaboration, rather than being forced down upon people from authoritarians at the top, has an emotional resonance that only strengthens the substance of whatever story you’re telling.</p>\n<h2>How to do it</h2>\n<p>Okay, so it sounds great: Let’s tell stories that other people want to share! Now, uh… how do we do it? There are simple principles we can follow that help shape a message or story into one that is likely to be carried forward by a community on its own.</p>\n<ul>\n<li><strong>Ground it in your values.</strong> When we began telling the story of my last company Glitch, the conventional wisdom was that we were building a developer tool, so people would describe it as an “IDE” — an “integrated development environment”, which is the normal developer jargon for the tool coders use to write their code in. We <em>never</em> described Glitch that way. From <a href=https://web.archive.org/web/20170504080445/https://glitch.com/>day one</a>, we always said “Glitch is the friendly community where you'll build the app of your dreams” (later, “the friendly community where everybody builds the internet”). By talking about the site as a <em>friendly community</em> instead of an <code>integrated development environment</code>, it was crystal clear what expectations and norms we were setting, and what our values were. Within a few months, even our <em>competitors</em> were describing Glitch as a “friendly community” while they were trying to talk about how they were better than us about some feature or the other. That still feels like a huge victory — even the competition was talking about us without us! Make sure your message evokes the values you want people to share with each other, either directly or indirectly.</li>\n<li><strong>Start with the principle.</strong> This is a topic I’ve covered before, but <a href=https://www.anildash.com/2022/01/31/you-have-to-start-with-the-principle/>you can't win unless you know what you're fighting for</a>. Identify concrete, specific, perhaps even <em>measurable</em> goals that are tied directly to the values that motivate your efforts. As <a href=https://www.anildash.com/2025/11/05/turn-the-volume-up/>noted recently</a>, Zohran Mamdani did this masterfully when running for mayor of New York City. While the <em>values</em> were affordability and the dignity of ordinary New Yorkers, the clear, understandable, measurable principle could be something as simple as “free buses”. This is a goal that everyone can get in 5 seconds, and can explain to their neighbor <em>the first time they hear it</em>. It’s a story that travels effortlessly on its own — and that people will be able to verify very easily when it’s been delivered. That’s a perfect encapsulation of “talk about us without us”.</li>\n<li>**Know what makes you unique.**Another way of putting this is to simply make sure that you have a sense of self-awareness. But the story you tell about your work or your movement has to be <em>specific</em>. There can’t be platitudes or generalities or vague assertions as a core part of the message, or it will never take off. One of the most common failure states for this mistake is when people lean on <em>slogans</em>. Slogans can have their use in a campaign, for reminding people about the existence of a brand, or supporting broader messaging. But very often, people think a slogan <em>is</em> a story. The problem is that, while slogans are definitely repeatable, slogans are almost definitionally too vague and broad to offer a specific and unique narrative that will resonate. There’s no point in having people share something if it doesn’t say something. I usually articulate the challenge here like this:<strong>Only say what only <em>you</em> can say.</strong></li>\n<li><strong>Be evocative, not comprehensive.</strong> Many times, when people are passionate about a topic or a movement, the temptation they have in telling the story is to work in <em>every little detail</em> about the subject. They often think, “if I include every detail, it will persuade more people, because they’ll know that I’m an expert, or it will convince them that I’ve thought of everything!” In reality, when people are not subject matter experts on a topic, or if they’re not already intrinsically interested in that topic, hearing a bunch of extensive minutia about it will almost always leave them feeling bored, confused, intimidated, condescended-to, or some combination of all of these. Instead, pick a small subset of the most <em>emotionally gripping</em> parts of your story, the aspects that have the deepest human connection or greatest relevance and specificity to the broadest set of your audience, and focus on telling those parts of the story as passionately as possible. If you succeed in communicating that initial small subset of your story effectively, then you may <em>earn</em> the chance to tell the other more complex and nuanced details of your story.</li>\n<li><strong>Your enemies are your friends.</strong> Very often, when people are creating messages about advocacy, they’re focused on competition or rivals. In the political realm, this can be literal opposing candidates, or the abstraction of another political party. In the corporate world, this can be (real or imagined) competitive products or companies. In many cases, these other organizations or products or competitors occupy so much more mental space in your mind, or your team’s mind, than they do in the mind of your potential audience. Some of your audience has never heard of them at all. And a <em>huge</em> part of your audience thinks of you and your biggest rival as… basically the same thing. In a business or commercial context, customers can barely keep straight the difference between you and your competition — you’re both just part of the same amorphous blob that exists as “the things that occupy that space”. Your competitor may be the only other organization in the world that’s fighting just as hard as you are to create a market for the product that you’re selling. The same is true in the political space; sometimes the biggest friction arises over the narcissism of small differences. What we can take away from these perspectives is that our stories have to focus on what distinguishes us, yes, but also on what we might have in common with those whom we might otherwise have perceived to have been aligned with the “enemy”. Those folks might not have sworn allegiance to an opposing force; they may simply have chosen another option out of convenience, and not even seen that choice as being in opposition to your story at all.</li>\n<li><strong>Find joy in repetition.</strong> Done correctly, a disciplined, collaborative, evocative message can become a mantra for a community. There’s a pride and enthusiasm that can come from people becoming proficient in sharing their own version of the collective story. And that means enjoying when that refrain comes back around, or when a slight improvement in the core message is discovered, and everyone finds a way to refine the way they’re communicating about the narrative. A lot of times, people worry that their team will get bored if they’re “just telling the same story over and over all the time”. In reality, as a brilliant man once said, <a href=https://youtu.be/FgP5VRp_myE>there’s joy in repetition</a>.</li>\n<li><strong>Don’t obsess over exact wording.</strong> This one is tricky; you might say, “but you said we have to be disciplined communicators!” And it’s true: it’s important to be disciplined. But that doesn’t mean you can’t leave room for people to put their own spin on things. Let them translate to their own languages or communities. Let them augment a general principle with a specific, personal connection. If they have their own authentic experience which will amplify a story or drive a point home, let them weave that context into the consistent narrative that’s been shared over time. As long as you’re not enabling a “telephone game” where the story starts to morph into an unrecognizable form, it’s perfectly okay to add a human touch by going slightly off script.</li>\n</ul>\n<h2>Share the story</h2>\n<p>Few things are more rewarding than when you find a meaningful narrative that resonates with the world. Stories have the power to change things, to make people feel empowered, to galvanize entire communities into taking action and recognizing their own power. There’s also a quiet reward in the craft and creativity of working on a story that travels, in finding notes that resonate with others, and in challenging yourself to get far enough out of your own head to get into someone else’s heart.</p>\n<p>I still have so much to learn about being able to tell stories effectively. I still screw it up so much of the time, and I can look back on many times when I wish I had better words at hand for moments that sorely needed them. But many of the most meaningful and rewarding moments of my life have been when I’ve gotten to be in community with others, as we were not just sharing stories together, but <em>telling</em> a united story together. It unlocks a special kind of creativity that’s a lot bigger than what any one of us can do alone.</p>",
      "image": null,
      "media": [],
      "authors": [
        {
          "name": "Anil Dash",
          "email": "[email protected]",
          "url": null
        }
      ],
      "categories": []
    },
    {
      "id": "https://anildash.com/2025/12/08/what-about-nothing-about-us/",
      "title": "What about “Nothing about us without us?”",
      "description": null,
      "url": "https://anildash.com/2025/12/08/what-about-nothing-about-us/",
      "published": null,
      "updated": "2025-12-08T00:00:00.000Z",
      "content": "<p>As I was drafting my last piece on Friday, “<a href=\"https://www.anildash.com/2025/12/05/talk-about-us-without-us/\">They have to be able to talk about us without us</a>”, my thoughts of course went to one of the most famous slogans of the disability rights movement, “<a href=\"https://en.wikipedia.org/wiki/Nothing_about_us_without_us\">Nothing about us without us.</a>” I wasn’t unaware that there were similarities in the phrasing of what I wrote. But I think the topic of communicating effectively to groups, as I wrote about the other day, and ensuring that disabled people are centered in disability advocacy, are such different subjects that I didn’t want to just quickly gloss over the topic in a sidebar of a larger piece. They're very distinct topics that really only share a few words in common.</p>\n<p>One of the great joys of becoming friends with a number of really thoughtful and experienced disability rights activists over the last several years has been their incredible generosity in teaching me about so much of the culture and history of the movements that they’ve built their work upon, and one of the most powerful slogans has been that refrain of “nothing about us without us”.</p>\n<p>Here I should start by acknowledging Alice Wong, who we recently lost, who founded the <a href=\"https://disabilityvisibilityproject.com/about/\">Disability Visibility Project</a>, and a MacArthur Fellow, and a tireless and inventive advocate for everyone in the disabled community. She was one of the first people to bring me in to learning about this history and these movements, more than a decade ago. She was also a patient and thoughtful teacher, and over our many conversations over the years, she did more than anyone else in my life to truly <em>personify</em> the spirit of “nothing about us without us” by fighting to ensure that disabled people led the work to make the world accessible for all. If you have the chance, learn about her work, and <a href=\"https://www.gofundme.com/f/Alice-Wongs-Legacy\">support it</a>.</p>\n<p>But a key inflection point in my own understanding of “nothing about us without us” came, unsurprisingly, in the context of how disabled people have been interacting with technology. I used to host a podcast called Function, and we did an episode about how inaccessible so much of contemporary technology has become, and how that kind of ruins things for everyone. (The episode is still up on <a href=\"https://open.spotify.com/episode/0IN2nQWUqmQnAMxNLN85WE\">Spotify</a> and <a href=\"https://podcasts.apple.com/us/podcast/function-with-anil-dash/id1439658455?i=1000452883786\">Apple Podcasts</a>.)  We had on <a href=\"https://emilyladau.com\">Emily Ladau</a> of <a href=\"https://www.theaccessiblestall.com\">The Accessible Stall</a> podcast, <a href=\"https://alexhaagaard.com\">Alex Haagaard</a> of <a href=\"https://www.disabledlist.org\">The Disabled List</a>, and <a href=\"https://www.vilissathompson.com\">Vilissa Thompson</a> of <a href=\"https://www.rampyourvoice.com\">Ramp Your Voice</a>. It’s well worth a listen, and Emily, Alex and Vilissa really do an amazing job of pointing to really specific, really evocative examples of <em>obvious</em> places where today’s tech world could be so much more useful and powerful for everyone if its creators were making just a few simple changes.</p>\n<p>What’s striking to me now, listening to that conversation six years later, is how little has changed from the perspective of the technology world, but also how much my own lived experience has come to reflect so much of what I learned in those conversations.</p>\n<p>Each of them was the \"us\" in the conversation, using their own personal experience, and the experience of other disabled people that they were in community with, to offer specific and personal insights that the creators of these technologies did not have. And whether it was for reasons of crass commercial opportunism — here's some money you could be making! — or simply because it was the right thing to do morally, it's obvious that the people making these technologies could benefit by honoring the principle of centering these users of their products.</p>\n<h2>Taking our turn</h2>\n<p>I’ve had this conversation on various social media channels in a number of ways over the years, but another key part of understanding the “us” in “nothing about us without us” when it comes to disability, is that the “us” is <em>all of us</em>, in time. It's very hard for many people who haven’t experienced it to understand that everyone should be accommodated and supported, because everyone is disabled; it’s only a question of when and for how long.</p>\n<p>In contemporary society, we’re given all kinds of justifications for why we can’t support everyone’s needs, but so much of those are really grounded in simply trying to convince ourselves that a disabled person is <em>someone else</em>, an “other” who isn’t worthy or deserving of our support. I think deep down, everyone knows better. It’s just that people who don’t (yet) identify as disabled don’t really talk about it very much.</p>\n<p>In reality, we'll all be disabled. Maybe you're in a moment of respite from it, or in that brief window before the truth of the inevitability of it has been revealed to you (sorry, spoiler warning!), but it's true for all of us — even when it's not visible. That means all of us have to default to supporting and uplifting and empowering the people who are disabled today. This was the key lesson that I didn’t really get personally until I started listening to those who were versed in the history and culture of disability advocacy, about how the patronizing solutions were often harmful, or competing for resources with the <em>right</em> answers.</p>\n<p>I’ve had my glimpses of this myself. Back in 2021, I had Lyme disease. I didn’t get it as bad as some, but it did leave me physically and mentally unable to function as I had been used to, for several months. I had some frame of reference for physical weakness; I could roughly compare it to a bad illness like the flu, even if it wasn’t exactly the same. But a diminished <em>mental</em> capacity was unlike anything I had ever experienced before, and was profoundly unsettling, deeply challenging my sense of self. After the <a href=\"https://www.anildash.com/2022/07/18/i-went-to-a-coffee-shop/\">incident I’d described in 2022</a>, I had a series of things to recover from physically and mentally that also presented a significant challenge, but were especially tough because so much of people’s willingness to accommodate others is based on any disability being <em>visible</em>. Anything that’s not immediately perceived at a superficial level, or legible to a stranger in a way that’s familiar to them, is generally dismissed or seen as invalid for support.</p>\n<p>I point all of this out not to claim that I fully understand the experience of those who live with truly serious disabilities, or to act as if I know what it’s been like for those who have genuinely worked to advocate for disabled people. Instead, I think it can often be useful to show how porous the boundary is between people who <em>don’t</em> think of themselves as disabled and those who already know that they are. And of course this does <em>not</em> mean that people who aren't currently disabled can speak on behalf of those who are — that's the whole point of \"nothing about us without us\"! — but rather to point out that the time to begin building your empathy and solidarity is now, not when you suddenly have the realization that you're part of the community.</p>\n<h2>Everything about us</h2>\n<p>There’s a righteous rage that underlies the cry of “nothing about us without us”, stemming from so many attempts to address the needs of disabled people having come from those outside the community, arriving with plans that ranged from inept to evil. We’re in a moment when the authoritarians in charge in so much of the world are pushing openly-eugenicist agendas that will target disabled people first amongst the many vulnerable populations that they’ll attempt to attack. Challenging economic times like the one we’re in affect disabled people significantly harder as the job market disproportionately shrinks in opportunities for the disabled first.</p>\n<p>So it’s going to take all of us standing in solidarity to ensure that the necessary advocacy and support are in place for what promises to be an extraordinarily difficult moment. But I take some solace and inspiration from the fact that there are so many disabled people who have provided us with the clear guidance and leadership we need to navigate this moment. And there is simple guidance we can follow when doing so to ensure that we’re centering the right leaders, by listening to those who said, “nothing about us without us.”</p>",
      "image": null,
      "media": [],
      "authors": [
        {
          "name": "Anil Dash",
          "email": "[email protected]",
          "url": null
        }
      ],
      "categories": []
    },
    {
      "id": "https://anildash.com/2026/01/05/a-tech-career-in-2026/",
      "title": "How the hell are you supposed to have a career in tech in 2026?",
      "description": null,
      "url": "https://anildash.com/2026/01/05/a-tech-career-in-2026/",
      "published": null,
      "updated": "2026-01-05T00:00:00.000Z",
      "content": "<p>The number one question I get from my friends, acquaintances, and mentees in the technology industry these days is, by far, variations on the basic theme of, “what the hell are we supposed to do now?”</p>\n<p>There have been mass layoffs that leave more tech workers than ever looking for new roles in the worst market we’ve ever seen. Many of the most talented, thoughtful and experienced people in the industry are feeling worried, confused, and ungrounded in a field that no longer looks familiar.</p>\n<p>If you’re outside the industry, you may be confused — isn’t there an AI boom that’s getting hundreds of billions of dollars in investments? Doesn’t that mean the tech bros are doing great? What you may have missed is that half a million tech workers have been laid off in the years since ChatGPT was released; the same attacks on marginalized workers and DEI and “woke” that the tech robber barons launched against the rest of society were aimed at their own companies first.</p>\n<p>So the good people who actually <em>make</em> the technology we use every day, the real innovators and creators and designers, are reacting to the unprecedented disconnect between the contemporary tech industry and the fundamentals that drew so many people toward it in the first place. Many of the biggest companies have abandoned the basic principle of making technology that actually <em>works</em>. So many new products fail to deliver on even the basic capabilities that the companies are promising that they will provide.</p>\n<p>Many leaders at these companies have run full speed towards moral and social cowardice, abandoning their employees and customers to embrace rank hatred and discrimination in ways that they pretended to be fighting against just a few years ago. Meanwhile, unchecked consolidation has left markets wildly uncompetitive, leaving consumers suffering from the effects of categories without any competition or investment — which we know now as “enshittification”. And the full-scale shift into corruption and crony capitalism means that winners in business are decided by whoever is shameless enough to offer the biggest bribes and debase themselves with the <a href=\"https://www.anildash.com/2025/09/09/how-tim-cook-sold-out-steve-jobs/\">most humiliating display</a> of groveling. It’s a depressing shift for people who, earlier in their careers, often actually <em>were</em> part of inventing the future.</p>\n<p>So where do we go from here?</p>\n<h2>You’re not crazy.</h2>\n<p>The first, and most important, thing to know is that <em>it’s not just you</em>. Nearly everyone in tech I have this conversation with feels very isolated about it, and they’re often embarrassed or ashamed to discuss it. They think that everyone else who has a job in tech is happy or comfortable at their current employers, or that the other people looking for work are getting calls back or are being offered interviews in response to their job applications. But I’m here to tell you: it is grim right now. About as bad as I’ve seen. And I’ve been around a long time.</p>\n<p>Every major tech company has watched their leadership abandon principles that were once thought sacrosanct. I’ve heard more people talk about losing respect for executives they trusted, respected, even <em>admired</em> in the last year than at any time I can remember. In smaller companies and other types of organizations, the challenges have been more about the hard choices that come from dire resource constraints or being forced to make ugly ethical compromises for pragmatic reasons. The net result is tons of people who have lost pride and conviction in their work. They’re going through the motions for a paycheck, because they know it’s a tough job market out there, which is a miserable state of affairs.</p>\n<p>The public narrative is dominated by the loud minority of dudes who are content to appease the egos of their bosses, sucking up to the worse impulses of those in charge. An industry that used to pride itself on publicly reporting security issues and openly disclosing vulnerabilities now circles its wagons to gang up on people who suggest that an AI tool shouldn’t tell children to harm themselves, that perhaps it should be possible to write a law limiting schools from deploying AI platforms that are known to tell kids to end their own lives. People in tech endure their bosses using slurs at work, making jokes about sexual assault, consorting with leaders who have directly planned the murder of journalists, engaging in open bribery in blatant violation of federal law and their own corporate training on corruption, and have to act like it’s normal.</p>\n<p>But it’s not the end of the world. The forces of evil have not yet triumphed, and all hope is not lost. There are still things we can do.</p>\n<h2>Taking back control</h2>\n<p>It can be easy to feel overwhelmed at such an unprecedented time in the industry, especially when there’s so much change happening. But there are concrete actions you can take to have agency over your own career, and to insulate yourself from the bad actors and maximize your own opportunities — even if some of those bad actors are your own bosses.</p>\n<h3>Understanding systems</h3>\n<p>One of the most important things you can do is to be clear about your own place, and your own role, within the systems that you are part of. A major factor in the changes that bosses are trying to effect with the deployment of AI is shifting the role of workers within the systems in their organization to make them more replaceable.</p>\n<p>If you’re a coder, and you think your job is to make really good code in a particular programming language, you might double down on getting better at the details of that language. But that’s almost certainly misunderstanding the system that your company thinks you’re part of, where the code is just a means to the end of creating a final product. In that system-centric view, the programming language, and indeed all of the code itself, doesn’t really matter; the person who is productive at causing all of that code to be created reliably and efficiently is the person who is going to be valued, or at least who is most likely to be kept around. That may not be satisfying or reassuring if you truly love coding, but at least this perspective can help you make informed decisions about whether or not that organization is going to make choices that respect the things you value.</p>\n<p>This same way of understanding systems can apply if you’re a designer or a product manager or a HR administrator or anything else. As I’ve covered before, <a href= \"https://anildash.com/2024/05/28/systems-the-purpose-of-a-system/\">the purpose of a system is what it does</a>, and that truth can provide some hard lessons if we find it’s in tension with the things we <em>want</em> to be doing for an organization. The system may not value the things we do, or it may not value them enough; the way they phrase this to avoid having to say it directly is by describing something as “inefficient”. Then, the question you have to ask yourself is, can you care about this kind of work or this kind of program at one level higher up in the system? Can it still be meaningful to you if it’s slightly more abstract? Because that may be the requirement for navigating the expectations that technology organizations will be foisting on everyone through the language of talking about “adopting AI”.</p>\n<h3>Understanding power</h3>\n<p>Just as important as understanding systems is understanding <em>power</em>. In the workplace, power is something real. It means being able to control how money is spent. It means being able to make decisions. It means being able to hire people, or fire them. Power is being able to say no.</p>\n<p>You probably don’t have enough power; that’s why you have worries. But you almost certainly have more power than you think, it’s just not as obvious how to wield it. The most essential thing to understand is that you will need to collaborate with your peers to exercise collective power for many of the most significant things you may wish to achieve.</p>\n<p>But even at an individual level, a key way of understanding power in your workplace is to consider the systems that you are part of, and then to reckon with which ones you can meaningfully change from your current position. Very often, people will, in a moment of frustration, say “this place couldn’t run without me!” And companies will almost always go out of their way to prove someone wrong if they hear that message.</p>\n<p>On the other hand, if you identify a system for operating the organization that no one else has envisioned, you’ve already <em>demonstrated</em> that this part of the organization couldn’t run without you, and you don’t need to say it or prove it. There is power in the mere action of creating that system. But a lot depends on where you have both the positional authority and the social permission to actually accomplish that kind of thing.</p>\n<p>So, if you’re dissatisfied with where you are, but have not decided to leave your current organization, then your first orders of business in this new year should be to consolidate power through building alliances with peers, and by understanding which fundamental systems of your organization you can define or influence, and thus be in control of. Once you’ve got power, you’ve got options.</p>\n<h3>Most tech isn’t “tech”</h3>\n<p>So far, we’re talking about very abstract stuff. What do we do if your job sucks right now, or if you don’t have a job today and you really need one? After vague things like systems and power, then what?</p>\n<p>Well, an important thing to understand, if you care about innovation and technology, is that the vast majority of technology doesn’t happen in the startup world, or even in the “tech industry”. Startups are only a tiny fraction of the entire realm of companies that create or use technology, and the giant tech companies are only a small percentage of all jobs or hiring within the tech realm.</p>\n<p>So much opportunity, inspiration, creativity, and possibility lies in applying the skills and experience that you may have from technological disciplines in other realms and industries that are often far less advanced in their deployment of technologies. In a lot of cases, these other businesses get taken advantage of for their lack of experience — and in the non-profit world, the lack of tech expertise or fluency is often exploited by both the technology vendors and bad actors who swoop in to capitalize on their vulnerability.</p>\n<p>Many of the people I talk to who bring their technology experience to other fields also tell me that the culture in more traditional industries is often less toxic or broken than things in Silicon Valley (or Silicon Valley-based) companies are these days, since older or more established companies have had time to work out the more extreme aspects of their culture. It’s an extraordinary moment in history when people who work on Wall Street tell me that even <em>their</em> HR departments wouldn’t put up with the kind of bad behavior that we’re seeing within the ranks of tech company execs.</p>\n<h3>Plan for the long term</h3>\n<p>This too shall pass. One of the great gifts of working in technology is that it’s given so many of us the habit of constantly learning, of always being curious and paying attention to the new things worth discovering. That healthy and open-minded spirit is an important part of how to navigate a moment when lots of people are being laid off, or lots of energy and attention are being focused on products and initiatives that don’t have a lot of substance behind them.\nEventually, people will want to return to what’s real. The companies that focus on delivering products with meaning, and taking care of employees over time, will be the ones that are able to persist past the current moment. So building habits that enable resiliency at both a personal and professional level is going to be key.</p>\n<p>As I’ve been fond of saying for a long time: don’t let your job get in the way of your career.</p>\n<p>Build habits and routines that serve your own professional goals. As much as you can, participate in the things that get your name out into your professional community, whether that’s in-person events in your town, or writing on a regular basis about your area of expertise, or mentoring with those who are new to your field. You’ll never regret building relationships with people, or being generous with your knowledge in ways that remind others that you’re great at what you do.</p>\n<p>If your time and budget permit, attend events in person or online where you can learn from others or respond to the ideas that others are sharing. The more people can see and remember that you’re engaged with the conversations about your discipline, the greater the likelihood that they’ll reach out when the next opportunity arises.</p>\n<p>Similarly, take every chance you can to be generous to others when you see a door open that might be valuable for them. I can promise you, people will <em>never</em> forget that you thought of them in their time of need, even if they don’t end up getting that role or nabbing that interview.</p>\n<h2>It’s an evolution, not a resolution</h2>\n<p>New years are often a time when people make a promise to themselves about how they’re going to change everything. If I can just get this new notebook to write in, I’m suddenly going to become a person who keeps a journal, and that will make me a person who’s on top of everything all the time.</p>\n<p>But hopefully you can see, many of the challenges that so many people are facing are systemic, and aren’t the result of any personal failings or shortcomings. So there isn’t some heroic individual change that you can make when you flip over to a new calendar month that will suddenly fix all the things.</p>\n<p>What you can control, though, are small iterative things that make you feel better on a human scale, in little ways, when you can. You can help yourself maintain perspective, and you can do the same for those around you who share your values, and who care about the same personal or professional goals that you do.</p>\n<p>A lot of us still care about things like the potential for technology to help people, or still believe in the idealistic and positive goals that got us into our careers in the first place. We weren’t wrong, or naive, or foolish to aspire to those goals simply because some bad actors sought to undermine them. And it’s okay to feel frustrated or scared in a time when it seems to many like those goals could be further away than they’ve been in a long time.</p>\n<p>I do hope, though, that people can see that, by sticking together, and focusing on the things that are within our reach, things can begin to change. All it takes is remembering that the power in tech truly rests with all the people who actually <em>make</em> things, not with the loudmouths at the top who try to tear things down.</p>",
      "image": null,
      "media": [],
      "authors": [
        {
          "name": "Anil Dash",
          "email": "[email protected]",
          "url": null
        }
      ],
      "categories": []
    },
    {
      "id": "https://anildash.com/2026/01/06/500k-tech-workers-laid-off/",
      "title": "500,000 tech workers have been laid off since ChatGPT was released",
      "description": null,
      "url": "https://anildash.com/2026/01/06/500k-tech-workers-laid-off/",
      "published": null,
      "updated": "2026-01-06T00:00:00.000Z",
      "content": "<p>One of the key points I repeated when <a href=\"https://www.anildash.com/2026/01/05/a-tech-career-in-2026/\">talking about the state of the tech industry yesterday</a> was the salient fact that <em>half a million tech workers have been laid off since ChatGPT was released in late 2022</em>. Now, to be clear, those workers haven’t been laid off because their jobs are now being done by AI, and they’ve been replaced by bots. Instead, they’ve been laid off by execs who now have AI to use as an excuse for going after workers they’ve wanted to cut all along.</p>\n<p>This is important to understand for a few reasons. First, it’s key just for having empathy for both the mindset and the working conditions of people in the tech industry. For so many outside of tech, their impression of what “tech” means is whatever is the most recent transgression they’ve heard about from the most obnoxious billionaire who’s made the news lately. But in many cases, it’s the rank and file workers at that person’s company who were the first victims of that billionaire’s ego.</p>\n<p>Second, it’s important to understand the big tech companies as almost the testing grounds for the techniques and strategies that these guys want to roll out on the rest of the economy, and on the rest of the world. Before they started going on podcasts pretending to be extremely masculine while whining about their feelings, or overtly bribing politicians to give them government contracts, they beta-tested these manipulative strategies within their companies by cracking down on dissent and letting their most self-indulgent and egomaniacal tendencies run wild. Then, when people (reasonably!) began to object, they used that as an excuse to purge any dissenters for being uncooperative or “difficult”.</p>\n<h2>It starts with tech, but doesn’t end there</h2>\n<p>These are tactics they’ll be bringing to other industries and sectors of the economy, if they haven’t already. Sometimes they’ll be providing AI technologies and tools as an enabler or justification for the cultural and political agenda that they’re enacting, but often times, they don’t even need to. In many cases, they can simply make clear that they want to enforce psychological and social conformity within their organizations, and that any disagreement will not be tolerated, and the implicit threat of being replaced by automation (or by other workers who are willing to fall in line) is enough to get people to comply.</p>\n<p>This is the subtext, and sometimes the explicit text, of the deployment of “AI” in a lot of organizations. That’s separate from what actual AI software or technology can do. And it explains a lot of why the <a href=\"https://www.anildash.com/2025/10/17/the-majority-ai-view/\">majority AI view</a> within the tech industry is nothing like the hype cycle that’s being pushed by the loudest voices of the big-name CEOs.</p>\n<p>Because people who work in tech still believe in the power of tech to do good things, many of us won’t just dismiss outright the possibility that any technology — even AI tools like LLMs — could yield some benefits. But the optimistic takes are tempered by the first-hand knowledge of how the tools are being used as an excuse to sideline or victimize good people.</p>\n<p>This wave of layoffs and reductions has been described as “pursuing efficiencies” or “right-sizing”. But so many of us in tech can remember a few years back, when working in tech as an upwardly-mobile worker with a successful career felt like the best job in the world. When many people could buy nice presents for their kids at Christmas or they weren’t as worried about your car payments. When huge parts of society were promising young people that there was a great future ahead if they would just learn to code. When the promise of a tech career’s potential was used as the foundation for building infrastructure in our schools and cities to train a whole new generation of coders.</p>\n<p>But the funders and tycoons in charge of the big tech companies <em>knew</em> that they did not want to keep paying enormous salaries to the people they were hiring. They certainly knew they didn’t want to keep paying huge hiring bonuses to young people just out of college, or to pay large staffs of recruiters to go find underrepresented candidates. Those niceties that everybody loved, like great healthcare and decent benefits, were identified by the people running the big tech companies as “market inefficiencies” which indicated some wealth was going to you that should have been going to <em>them</em>. So yes, part of the reason for the huge investment in AI coding tools was to make it easier to write code. But another huge reason that AI got so good at writing code was so that nobody would ever have to pay coders so well again.</p>\n<p>You’re not wrong if you feel angry, resentful and overwhelmed by all of this; indeed, it would be absurd if you <em>didn’t</em> feel this way, since the wealthiest and most powerful people in the history of the world have been spending a few years trying to make you feel exactly this way. Constant rotating layoffs and a nonstop fear of further cuts, with a perpetual sense of precarity, are a deliberate strategy so that everyone will accept lower salaries and reduced benefits, and be too afraid to push for the exact same salaries that the company could afford to pay the year before.</p>\n<h2>Why are we stirring the pot?</h2>\n<p>Okay, so are we just trying to get each other all depressed? No. It’s just vitally important that we name a problem and identify it if we’re going to solve it.\n
Most people outside of the technology industry think that “tech” is a monolith, that the people who work in tech are the same as the people who <em>own</em> the technology companies. They don’t know that tech workers are in the same boat that they are, being buffeted by the economy, and being subject to the whims of their bosses, or being displaced by AI. They don’t know that the DEI backlash has gutted HR teams at tech companies, too, for example. So it’s key for everyone to understand that they’re starting from the same place.</p>\n<p>Next, it’s key to tease apart things that are separate concerns. For example: AI is often an <em>excuse</em> for layoffs, not the cause of them. ChatGPT didn’t replace the tasks that recruiters were doing in attracting underrepresented candidates at big tech companies — the bosses just don’t care about trying to hire underrepresented candidates anymore! The tech story is being used to mask the political and social goal. And it’s important to understand that, because otherwise people waste their time fighting battles that might not matter, like the deployment of a technology system, and losing the ones that do, like the actual decisions that an organization is making about its future.</p>\n<h2>Are they efficient, though?</h2>\n<p>But what if, some people will ask, these companies just had <em>too many people</em>? What if they’d over-hired? The folks who want to feel really savvy will say, “I heard that they had all those employees because interest rates were low. It was a Zero Interest Rate Phenomenon.” This is, not to put too fine a point on it, bullshit. It’s not in any company’s best interests to cut their staffing down to the bone.</p>\n<p>You actually <em>need</em> to have some reserve capacity for labor in order to reach maximum output for a large organization. This is the difference between a large-scale organization and a small one. People sitting around doing nothing is the epitome of waste or inefficiency in a small team, but in a large organization, it’s a lot more costly if you are about to start a new process or project and you don’t have labor capacity or expertise to deploy.</p>\n<p>A good analogy is the oft-cited need these days for people to be bored more often. There’s a frequent lament that, because people are so distracted by things like social media and constant interruptions, they never have time to get bored and let their mind wander, and think new thoughts or discover their own creativity. Put another way, they never get the chance to tap into their own cognitive surplus.</p>\n<p>The only advantage a large organization can have over a small one, other than sheer efficiencies of scale, is if it has a cognitive surplus that it can tap into. By destroying that cognitive surplus, and leaving those who remain behind in a state of constant emotional turmoil and duress, these organizations are permanently damaging both their competitive advantages and their potential future innovations.</p>\n<h2>AI Spring</h2>\n<p>When the dust clears, and people realize that extreme greed is never the path to maximum long-term reward, there is going to be a “peace dividend” of sorts from all the good talent that’s now on the market. Some of this will be smart, thoughtful people flowing to other industries or companies, bringing their experience and insights with them.</p>\n<p>But I think a lot of this will be people starting their own new companies and organizations, informed by the broken economic models, and broken <em>human</em> models, of the companies they’ve left. We saw this a generation ago after the bust of the dot-com boom, when it was not only revealed that the economics of a lot of the companies didn’t work, but that so many of the people who had created the companies of that era didn’t even care about the markets or the industries that they’d entered. When the get-rich-quick folks left the scene, those of us who remained, who truly loved the web as a creative and expressive medium, found a ton of opportunity in being the little mammals amidst the sad dinosaurs trying to find funding for meteor dot com.</p>\n<h2>What comes next</h2>\n<p>I don’t think this all gets better very quickly. If you put aside the puffery of the AI companies scratching each others’ backs, it’s clear the economy is in a recession, even if this administration’s goons have shut down reporting on jobs and inflation in a vain attempt to hide that reality. But I do think there may be more resilience because of the sheer talent and entrepreneurial skill of the people who are now on the market as individuals.</p>",
      "image": null,
      "media": [],
      "authors": [
        {
          "name": "Anil Dash",
          "email": "[email protected]",
          "url": null
        }
      ],
      "categories": []
    },
    {
      "id": "https://anildash.com/2026/01/09/how-markdown-took-over-the-world/",
      "title": "How Markdown took over the world",
      "description": null,
      "url": "https://anildash.com/2026/01/09/how-markdown-took-over-the-world/",
      "published": null,
      "updated": "2026-01-09T00:00:00.000Z",
      "content": "<p>Nearly every bit of the high-tech world, from the most cutting-edge AI systems at the biggest companies, to the casual scraps of code cobbled together by college students, is annotated and described by the same, simple plain text format. Whether you’re trying to give complex instructions to ChatGPT, or you want to be able to exchange a grocery list in Apple Notes or copy someone’s homework in Google Docs, that same format will do the trick. The wild part is, the format wasn’t created by a conglomerate of tech tycoons, it was created by a curmudgeonly guy with a kind heart who right this minute is probably rewatching a Kubrick film while cheering for an absolutely indefensible sports team.</p>\n<p>But it’s worth understanding how these simple little text files were born, not just because I get to brag about how generous and clever my friends are, but also because it reminds us of how the Internet <em>really</em> works: smart people think of good things that are crazy enough that they <em>just might work</em>, and then they give them away, over and over, until they slowly take over the world and make things better for everyone.</p>\n<h2>Making Their Mark</h2>\n<p>Though it’s now a building block of the contemporary Internet, like so many great things, <a href=\"https://daringfireball.net/projects/markdown/\">Markdown</a> just started out trying to solve a personal problem. In 2002, John Gruber made the unconventional decision to bet his online career on two completely irrational foundations: Apple, and blogs.</p>\n<p>It’s hard to remember now, but in 2002, Apple was just a few years past having been on death’s door. As difficult as it may be to picture in today’s world where Apple keynotes are treated like major events, back then, almost nobody was covering Apple regularly, let alone writing <em>exclusively</em> about the company. There was barely even an “tech news” scene online at all, and virtually no one was blogging. So John’s decision to go all-in on Apple for his pioneering blog <a href=\"https://daringfireball.net\">Daring Fireball</a> was, well, a daring one. At the time, Apple had only <em>just</em> launched its first iPod that worked with Windows computers, and the iPhone was still a full five years in the future. But that single-minded focus, not just on Apple, but on obsessive detail in everything he covered, eventually helped inspire much of the technology media landscape that we see today. John’s timing was also perfect — from the doldrums of that era, Apple’s stock price would rise by about 120,000% in the years after Daring Fireball started, and its cultural relevance probably increased by even more than that.</p>\n<p>By 2004, it wasn’t just Apple that had begun to take off: blogs and social media themselves had moved from obscurity to the very center of culture, and <a href=\"https://cybercultural.com/p/internet-2004/\">a new era of web technology had begun</a>. At the beginning of that year, few people in the world even knew what a “blog” was, but by the end of 2004, blogs had become not just ubiquitous, but downright <em>cool</em>. As unlikely as it seems now, that year’s largely uninspiring slate of U.S. presidential candidates like Wesley Clark, Gary Hart and, yes, <a href=\"https://en.wikipedia.org/wiki/Howard_Dean_2004_presidential_campaign\">Howard Dean</a> helped propel blogs into mainstream awareness during the Democratic primaries, alongside online pundits who had begun weighing in on politics and the issues and cultural moments at a pace that newspapers and TV couldn’t keep up with. A lot has been written about the transformation of media during those years, but less has been written about how the media and tech of the time transformed <em>each other</em>.</p>\n<p><img src=\"/images/gary-hart-blog.JPG\" alt=\"A photo from 2004 of a TV screen showing CNN, with a ticker saying \"Gary Hart Cyber Campaign Starts blog for possible 2004 presidential bid\"\"></p>\n<p>That era of early blogging was interesting in that nearly everyone who was writing the first popular sites was also busy helping <em>create</em> the tools for publishing them. Just like Lucille Ball and Desi Arnaz had to pioneer combining studio-style flat lighting with 35mm filming in order to define the look of the modern sitcom, or Jimi Hendrix had to work with Roger Mayer to invent the signature guitar distortion pedals that defined the sound of rock and roll, the pioneers who defined the technical format and structures of blogging were often building the very tools of creation as they went along.</p>\n<p>I got a front row seat to these acts of creation. At the time I was working on Movable Type, which was the most popular tool for publishing “serious” blogs, and helped popularize the medium. Two of my good friends had built the tool and quickly made it into the default choice for anybody who wanted to reach a big audience; it was kind of a combination of everything people do these days on WordPress and all the various email newsletter platforms and all of the “serious” podcasts (since podcasts wouldn’t be invented for another few months). But back in those early days, we’d watch people use our tools to set up Gawker or Huffington Post one day, and Daring Fireball or Waxy.org the next, and each of them would be the first of its kind, both in terms of its design and its voice. To this day, when I see something online that I love by Julianne Escobedo Shepherd or Ta-Nehisi Coates or Nilay Patel or Annalee Newitz or any one of dozens of other brilliant writers or creators, my first thought is often, “hey! They used to type in that app that I used to make!” Because sometimes those writers would inspire us to make a new feature in the publishing tools, and sometimes they would have hacked up a new feature all by themselves in between typing up their new blog posts.</p>\n<p>A really clear, and very simple, early example of how we learned that lesson was when we changed the size of the box that people used to type in just to create the posts on their sites. We made the box a little bit taller, mostly for aesthetic reasons. Within a few weeks, we’d found that posts on sites like Gawker had gotten longer, <em>mostly because the box was bigger</em>. This seems obvious now, years after we saw tweets get longer when Twitter expanded from 140 characters to 280 characters, but at the time this was a terrifying glimpse at how much power a couple of young product managers in a conference room in California would have over the media consumption of the entire world every time they made a seemingly-insignificant decision.</p>\n<p>The <em>other</em> dirty little secret was, typing in the box in that old blogging app could be… pretty wonky sometimes. People who wanted to do normal things like include an image or link in their blog post, or even just make some text bold, often had to learn somewhat-obscure HTML formatting, memorizing the actual language that’s used to make web pages. Not everybody knew all the details of how to make pages that way, and if they made even one small mistake, sometimes they could break the whole design of their site. It made things feel very fraught every time a writer went to publish something new online, and got in the way of the increasingly-fast pace of sharing ideas now that social media was taking over the public conversation.</p>\n<p>Enter John and his magical text files.</p>\n<p><img src=\"/images/markdown-text-hero-slice.jpg\" alt=\"\"></p>\n<h2>Marking up and marking down</h2>\n<p>The purpose of Markdown is really simple: It lets you use the regular characters on your keyboard which you already use while typing out things like emails, to make fancy formatting of text for the web. That HTML format that’s used to make web pages stands for HyperText Markup Language. The word “markup” there means you’re “marking up” your text with all kinds of special characters.\nOnly, the special characters can be kind of arcane. Want to put in a link to everybody’s favorite website? Well, you’re going to have to type in <code><a href=\"https://anildash.com/\">Anil Dash’s blog</a></code> I could explain why, and what it all means, but honestly, you get the point — it’s a lot! Too much. What if you could just write out the text and then the link, sort of like you might within an email? Like: <code>[Anil Dash’s blog](https://anildash.com)</code>! And then the right thing would happen. Seems great, right?</p>\n<p>The same thing works for things like putting a header on a page. For example, as I’m writing this right now, if I want to put a big headline on this page, I can just type <code>#How Markdown Took Over the World</code> and the right thing will happen.</p>\n<p>If mark_up_ is complicated, then the opposite of that complexity must be… markd_own_. This kind of solution, where it’s so smart it seems obvious in hindsight, is key to Markdown’s success. John worked to make a format that was so simple that anybody could pick it up in a few minutes, and powerful enough that it could help people express pretty much anything that they wanted to include while writing on the internet. At a technical level, it was also easy enough to implement that John could write the code himself to make it work with Movable Type, his publishing tool of choice. (Within days, people had implemented the same feature for most of the other blogging tools of the era; these days, virtually every app that you can type text into ships with Markdown support as a feature on day one.)</p>\n<p>Prior to launch, John had enlisted our mutual friend, the late, dearly missed <a href=\"http://www.aaronsw.com\">Aaron Swartz</a>, as a beta tester. In addition to being extremely fluent in every detail of the blogging technologies of the time, Aaron was, most notably, seventeen years old. And though Aaron’s activism and untimely passing have resulted in him having been turned into something of a mythological figure, one of the greatest things about Aaron was that he could be a total pain in the ass, which made him <em>terrific</em> at reporting bugs in your software. (One of the last email conversations I ever had with Aaron was him pointing out some obscure bugs in an open source app I was working on at the time.) No surprise, Aaron instantly understood both the potential and the power of Markdown, and was a top-tier beta tester for the technology as it was created. His astute feedback helped finely hone the final product so it was ready for the world, and when Markdown <a href=\"https://daringfireball.net/2004/03/introducing_markdown\">quietly debuted in March of 2004</a>, it was clear that text files around the web were about to get a permanent upgrade.</p>\n<p>The most surprising part of what happened next wasn’t that everybody immediately started using it to write their blogs; that was, after all, what the tool was designed to do. It’s that everybody started using Markdown to do <em>everything else</em>, too.</p>\n<h2>Hitting the Mark</h2>\n<p>It’s almost impossible to overstate the ubiquity of Markdown within the modern computer industry in the decades since its launch.</p>\n<p>After being nagged about it by users for more than a decade, Google finally <a href=\"https://www.theverge.com/2022/3/29/23002138/google-docs-markdown-support-formatting-update\">added support for Markdown to Google Docs</a>, though it took them years of fiddly improvements to make it truly usable. Just last year, Microsoft added support for Markdown to its <a href=\"https://www.theverge.com/news/677474/microsoft-windows-notepad-bold-italic-text-formatting-markdown-support\">venerable Notepad app</a>, perhaps in attempt to assuage the tempers of users who were still in disbelief that Notepad had been bloated with AI features. Nearly every powerful group messaging app, from Slack to WhatsApp to Discord, has support for Markdown in messages. And even the company that indirectly inspired all of this in the first place finally got on board: the most recent version of Apple Notes <a href=\"https://apple.gadgethacks.com/how-to/ios-26-notes-app-finally-gets-markdown-support-this-fall/\">finally added support</a> for Markdown. (It’s an especially striking launch by Apple due to its timing, shortly after John had used his platform as the most influential Apple writer in the world to <a href=\"https://daringfireball.net/2025/03/something_is_rotten_in_the_state_of_cupertino\">blog about the utter failure</a> of the “Apple Intelligence” AI launch.)</p>\n<p>But it’s not just the apps that you use on your phone or your laptop. For developers, Markdown has long been the lingua franca of the tools we string together to accomplish our work. On GitHub, the platform that nearly every developer in the world uses to share their code, nearly <em>every single repository of code</em> on the site has at least one Markdown file that’s used to describe its contents. Many have <em>dozens</em> of files describing all the different aspects of their project. And some of the repositories on GitHub consist of nothing <em>but</em> massive collections of Markdown files. The small tools and automations we run to perform routine tasks, the one-off reports that we generate to make sure something worked correctly, the confirmations that we have a system email out when something goes wrong, the temporary files we use when trying to recover some old data — all of these default to being Markdown files.</p>\n<p>As a result, there are now <em>billions</em> of Markdown files lying around on hard drives around the world. Billions more are stashed in the cloud. There are some on the phone in your pocket. Programmers leave them lying around wherever their code might someday be running. Your kid’s Nintendo Switch has Markdown files on it. If you’re listening to music, there’s probably a Markdown file on the memory chip of the tiny system that controls the headphones stuck in your ears. <em>The Markdown is inside you right now!</em></p>\n<h2>Down For Whatever</h2>\n<p>So far, these were all things we could have foreseen when John first unleashed his little text tool on the world. I would have been surprised about how <em>many</em> people were using it, but not really the <em>ways</em> in which they were using it. If you’d have said “Twenty years in the future, all the different note-taking apps people use save their files using Markdown!”, I would have said, “Okay, that makes sense!”</p>\n<p>What I <em>wouldn’t</em> have asked, though, was “Is John getting paid?” As hard as it may be to believe, back in 2004, the <em>default</em> was that people made new standards for open technologies like Markdown, and just shared them freely for the good of the internet, and the world, and then went on about their lives. If it happened to have unleashed billions of dollars of value for others, then so much the better. If they got some credit along the way, that was great, too. But mostly you just did it to solve a problem for yourself and for other like-minded people. And also, maybe, to help make sure that some jerk didn’t otherwise create some horrible proprietary alternative that would lock everybody into their terrible inferior version forever instead. (We didn’t have the word “enshittification” yet, but we did have Cory Doctorow and we did have plain text files, so we kind of knew where things were headed.)</p>\n<p>To give a sense of the vibe of that era, the term “podcasting” had been coined just a month before Markdown was released, and went into wider use that fall, and was similarly <a href=\"https://www.anildash.com/2024/02/05/wherever-you-get-podcasts/\">a radically open system</a> that wasn’t owned by any big company and that empowered people to do whatever they wanted to do to express themselves. (And podcasting was another technology that Aaron Swartz helped improve by being a brilliant pain in the ass. But I’ll save that story for another book-length essay.)</p>\n<p>That attitude of being not-quite-_anti_commercial, but perhaps just not even really <em>concerned</em> with whether something was commercial or not seems downright quaint in an era when the tech tycoons are not just the wealthiest people in the world, but also some of the weirdest and most obnoxious as well. But the truth is, most people <em>today</em> who make technology are actually still exceedingly normal, and quite generous. It’s just that they’ve been overshadowed by their bosses who are out of their minds and building rocket ships and siring hundreds of children and embracing overt white supremacy instead of making fun tools for helping you type text, like regular people do.</p>\n<p><img src=\"/images/markdown-text-hero-slice2.jpg\" alt=\"\"></p>\n<h2>The Markdown Model</h2>\n<p>The part about not doing this stuff solely for money matters, because even the <em>most</em> advanced LLM systems today, what the big AI companies call their “frontier” models, require complex orchestration that’s carefully scripted by people who’ve tuned their prompts for these systems through countless rounds of trial and error. They’ve iterated and tested and watched for the results as these systems hallucinated or failed or ran amok, chewing up countless resources  along the way. And sometimes, they generated genuinely astonishing outputs, things that are truly amazing to consider that modern technology can achieve. The rate of progress and evolution, even factoring in the mind-boggling amounts of investment that are going into these systems, is rivaled only by the initial development of the personal computer or the Internet, or the early space race.</p>\n<p>And all of it — <em>all of it</em> — is controlled through Markdown files. When you see the brilliant work shown off from somebody who’s bragging about what they made ChatGPT generate for them, or someone is understandably proud about the code that they got Claude to create, all of the most advanced work has been prompted in Markdown. Though where the logic of Markdown was originally a very simple version of \"use human language to tell the machine what to do\", the implications have gotten far more dire when they use a format designed to help expresss \"make this <code>**bold**</code>\" to tell the computer itself \"<code>make this imaginary girlfriend more compliant</code>\".</p>\n<p>But we already know that the Big AI companies are run by people who don't reckon with the implications of their work. They could never understand that every single project that's even moderately ambitious on these new AI platforms is being written up in files formatted according to this system created by one guy who has never asked for a dime for this work. An entire generation of AI coders has been born since Markdown was created who probably can’t even imagine that this technology even <em>has</em> an \"inventor\". It’s just always been here, like the Moon, or Rihanna.</p>\n<p>But it’s important for <em>everyone</em> to know that the Internet, and the tech industry, don’t run without the generosity and genius of regular people. It is not just billion-dollar checks and Silicon Valley boardrooms that enable creativity over years, decades, or generations — it’s often a guy with a day job who just gives a damn about doing something right, sweating the details and assuming that if he cares enough about what he makes then others will too. The <em>majority</em> of the technical infrastructure of the Internet was created in this way. For free, often by people in academia, or as part of their regular work, with no promise of some big payday or getting a ton of credit.</p>\n<p>The people who make the <em>real</em> Internet and the real innovations also don’t look for ways to hurt the world around them, or the people around them. Sometimes, as in the case of Aaron, the world hurts them more than anyone should ever have to bear. I know not everybody cares that much about plain text files on the Internet; I will readily admit I am a huge nerd about this stuff in a way that maybe most normal people are not. But I do think everybody cares about <em>some</em> part of the wonderful stuff on the Internet in this way, and I want to fight to make sure that everybody can understand that it’s not just five terrible tycoons who built this shit. Real people did. Good people. I saw them do it.</p>\n<p>The trillion-dollar AI industry's system for controlling their most advanced platforms is a plain text format one guy made up for his blog and then bounced off of a 17-year-old kid before sharing it with the world for free. You're welcome, Time Magazine's people of the year, <em>The Architects of AI</em>. Their achievement is every bit as impressive as yours.</p>\n<p><img src=\"/images/markdown-text-hero-slice3.jpg\" alt=\"\"></p>\n<h1 id=\"top-ten\">The Ten Technical Reasons Markdown Won</h1>\n<p>Okay, with some of the narrative covered, what can we <em>learn</em> from Markdown’s success? How did this thing really take off? What could we do if we wanted to replicate something like this in the modern era? Let’s consider a few key points:</p>\n<h3>1. Had a great brand.</h3>\n<p>Okay, let’s be real: “Markdown” as a name is clever as hell. Get it it’s not markup, it’s mark <em>down</em>. You just can’t argue with that kind of logic. People who knew what the “M” in “HTML” stood for could understand the reference, and to everyone else, it was just a clearly-understandable name for a useful utility.</p>\n<h3>2. Solved a real problem.</h3>\n<p>This one is not obvious, but it’s really important that a new technology have a <em>real</em> problem that it’s trying to solve, instead of just being an abstract attempt to do something vague, like “make text files better”. Millions of people were encountering the idea that it was too difficult or inconvenient to write out full HTML by hand, and even if one had the necessary skills, it was nice to be able to do so in a format that was legible as plain text as well.</p>\n<h3>3. Built on behaviors that already existed.</h3>\n<p>This is one of the most quietly genius parts of Markdown: The format is based on the ways people had been adding emphasis and formatting to their text for years or even decades. Some of the formatting choices dated back to the early days of email, so they’d been ingrained in the culture of the internet for a full generation before Markdown existed. It was so familiar, people could be writing Markdown <em>without even knowing it</em>.</p>\n<h3>4. Mirrored RSS in its origin.</h3>\n<p>Around the same time that Markdown was taking off, RSS was maturing into its ubiquitous form as well. The format had existed for some years already, enabling various kinds of content syndication, but at this time, it was adding support for the technologies that would come to be known as podcasting as well. And just like RSS, Markdown was spearheaded by a smart technologist who was also more than a little stubborn about defining a format that would go on to change the way we share content on the internet. In RSS’ case, it was pioneered by Dave Winer, and with Markdown it was John Gruber, and both were tireless in extolling the virtues of the plain text formats they’d helped pioneer. They could both leverage blogs to get the word out, and to get feedback on how to build on their wins.</p>\n<h3>5. There was a community ready to help.</h3>\n<p>One great thing about a format like Markdown is that its success is never just the result of one person. Vitally, Markdown was part of a community that could build on it right from the start. Right from the beginning, Markdown was inspired by earlier works like Textile, a formatting system for plain text created by <a href=\"https://web.archive.org/web/20021226035527/http://textism.com/tools/textile/\">Dean Allen</a>. Many of us appreciated and were inspired by Dean, who was a pioneer of blogging tools in the early days of social media, but if there’s a bigger fan of Dean Allen on the internet than John Gruber, I’ve never met them. Similarly, <a href=\"http://www.rememberaaronsw.com/memories/\">Aaron Swartz</a>, the brilliant young technologist who’s known best known as an activist for digital rights and access, was at that time just a super brilliant teenager that a lot of us loved hacking with. He was the most valuable beta tester of Markdown prior to its release, helping to shape it into a durable and flexible format that’s stood the test of time.</p>\n<h3>6. Had the right flavor for every different context.</h3>\n<p>Because Markdown’s format was frozen in place (and had some super-technical details that people could debate about) and people wanted to add features over time, various communities that were implementing Markdown could add their own “flavors” of it as they needed. Popular ones came to be called Commonmark and Github-Flavored, led by various companies or teams that had divergent needs for the tool. While tech geeks tend to obsess over needing everything to be “correct”, in reality it often just <em>doesn’t matter</em> that much, and in the real world, the entire Internet is made up of content that barely follows the technical rules that it’s supposed to.</p>\n<h3>7. Released at a time of change in behaviors and habits.</h3>\n<p>This is a subtle point, but an important one: Markdown came along at the right time in the evolution of its medium. You can get people to change their behaviors when they’re using a new tool, or adopting a new technology. In this case, blogging (and all of social media!) were new, so saying “here’s a new way of typing a list of bullet points” wasn’t much an additional learning curve to add to the mix. If you can take advantage of catching people while they’re already in a learning mood, you can really tap into the moment when they’re most open-minded to new things.</p>\n<h3>8. Came right on the cusp of the “build tool era”.</h3>\n<p>This one’s a bit more technical, but also important to understand. In the first era of building for the web, people often built the web’s languages of HTML, Javascript and CSS by hand, by themselves, or stitched these formats together from subsets or templates. But in many cases, these were fairly simple compositions, made up of smaller pieces that were written in the same languages. As things matured, the roles for web developers specialized (there started to be backend developers vs. front-end, or people who focused on performance vs. those who focused on visual design), and as a result the tooling for developers matured. On the other side of this transition, developers began to use many different programming languages, frameworks and tools, and the standard step before trying to deploy a website was to have an automated build process that transformed the “raw materials” of the site into the finished product. Since Markdown is a raw material that has to be transformed into HTML, it perfectly fit this new workflow as it became the de facto standard method of creation and collaboration.</p>\n<h3>9. Worked with “View source”</h3>\n<p>Most of the technologies that work best on the web enable creators to “view source” just like HTML originally did when the first web browsers were created. In this philosophy, one can look at the source code that makes up a web page, and understand how it was constructed so that you can make your own. With Markdown, it only takes one glimpse of a source Markdown file for anyone to understand how they might make a similar file of their own, or to extrapolate how they might apply analogous formatting to their own documents. There’s no teaching required when people can just see it for themselves.</p>\n<h3>10. Not encumbered in IP</h3>\n<p>This one’s obvious if you think about it, but it can’t go unsaid: There are no legal restrictions around Markdown. You wouldn’t <em>think</em> that anybody would be foolish or greedy enough to try to patent something as simple as Markdown, but there are many far worse examples of patent abuse in the tech industry. Fortunately, John Gruber is not an awful person, and nobody else has (yet) been brazen enough to try to usurp the format for their own misadventures in intellectual property law. As a result, nobody’s been afraid, either to use the format, or to support creating or reading the format in their apps.</p>",
      "image": null,
      "media": [],
      "authors": [
        {
          "name": "Anil Dash",
          "email": "[email protected]",
          "url": null
        }
      ],
      "categories": []
    },
    {
      "id": "https://anildash.com/2026/01/12/will-that-job-crush-your-soul/",
      "title": "How to know if that job will crush your soul",
      "description": null,
      "url": "https://anildash.com/2026/01/12/will-that-job-crush-your-soul/",
      "published": null,
      "updated": "2026-01-12T00:00:00.000Z",
      "content": "<p>Last week, we talked about one huge question, “<a href=\"https://www.anildash.com/2026/01/05/a-tech-career-in-2026/\">How the hell are you supposed to have a career in tech in 2026?</a>” That’s pretty specific to this current moment, but there are some timeless, more perennial questions I've been sharing with friends for years that I wanted to give to all of you. They're a short list of questions that help you judge whether a job that you’re considering is going to crush your soul or not.</p>\n<p>Obviously, not everyone is going to get to work in an environment that has perfect answers to all of these questions; a lot of the time, we’re lucky just to get a place to work at all. But these questions are framed in this way to encourage us all to aspire towards roles that enable us to do our best work, to have the biggest impact, and to live according to our values.</p>\n<h2>The Seven Questions</h2>\n<ul>\n<li>If what you do succeeds, will the world be better?</li>\n</ul>\n<p>This question originally started for me when I would talk to people about new startups, where people were judging the basic idea of the product or the company itself, but it actually applies to <em>any</em> institution, at <em>any</em> size. If the organization that you’re considering working for, or the team you’re considering joining, is able to achieve their stated goals, is it ultimately going to have a positive effect? Will you be proud of what it means? Will the people you love and care about respect you for making that choice, and will those with the least to gain feel like you’re the kind of person who cares about their impact on the world?</p>\n<ul>\n<li>Whose money do they have to take to stay in business?</li>\n</ul>\n<p>Where does the money in the organization <em>really</em> come from? You need to know this for a lot of reasons. First of all, you need to be sure that <em>they</em> know the answer. (You’d be surprised how often that’s not the case!) Even if they do know the answer, it may make you realize that those customers are not the people whose needs or wants you’d like to spend most of your waking hours catering to. This goes beyond the simple basics of the business model — it can be about whether they're profitable or not, and what the corporate ownership structure is like.</p>\n<p>It’s also increasingly common for companies to mistake those who are <em>investing</em> in a company with those who are their <em>customers</em>. But there’s a world of difference between those who are paying you, and those who you have to pay back tenfold. Or thousandfold.</p>\n<p>The same goes for nonprofits — do you know who has to stay happy and smiling in order for the institution to stay stable and successful? If you know those answers, you'll be far more confident about the motivations and incentives that will drive key decisions within the organization.</p>\n<ul>\n<li>What do you have to believe to think that they’re going to succeed? In what way does the world have to change or not change?</li>\n</ul>\n<p>Now we’re getting a little bit deeper into thinking about the systems that surround the organization that you’re evaluating. Every company, every institution, even every small team, is built around a set of invisible assumptions. Many times, they’re completely reasonable assumptions that are unlikely to change in the future. But <em>sometimes</em>, the world you’re working in is about to shift in a big way, or things are built on a foundation that’s speculative or even unrealistic.</p>\n<p>Maybe they're assuming there aren't going to be any big new competitors. Perhaps they think they'll always remain the most popular product in their category. Or their assumptions could be about the stability of the rule of law, or a lack of corruption — more fundamental assumptions that they've never seen challenged in their lifetime or in their culture, but that turn out to be far more fragile than they'd imagined.</p>\n<p>Thinking through the context that everyone is sharing, and reflecting on whether they’re really planning for any potential disruptions, is an essential part of judging the psychological health of an organization. It’s the equivalent of a person having self-awareness, and it’s just as much of a red flag if it’s missing.</p>\n<ul>\n<li>What’s the lived experience of the workers there whom you trust? Do you have evidence of leaders in the organization making hard choices to do the right thing?</li>\n</ul>\n<p>Here is how we can tell the culture and character of an organization. If you’ve got connections into the company, or a backchannel to workers there, finding out as much information as you can about the real story of its working conditions is often one of the best ways of understanding whether it’s a fit for your needs. Now, people can always have a bad day, but overall, workers are usually very good at providing helpful perspectives about their context.</p>\n<p>And more broadly, if people can provide examples of those in power within an organization <em>using</em> that power to take care of their workers or customers, or to fight for the company to be more responsible, then you’ve got an extremely positive sign about the health of the place even before you’ve joined. It’s vital that these be stories you are able to find and discover on your own, not the ones amplified by the institution itself for PR purposes.</p>\n<ul>\n<li>What were you wrong about?</li>\n</ul>\n<p>And here we have perhaps one of the easiest and most obvious ways to judge the culture of an organization. This is even a question you can ask people while you’re in an interview process, and you can judge their responses to help form your opinion. A company, and <em>leadership culture</em>, that can change its mind when faced with new information and new circumstances is much more likely to adapt to challenges in a healthy way. (If you want to be nice, phrase it as \"What is a way in which the company has evolved or changed?\")</p>\n<ul>\n<li>Does your actual compensation take care of what you need for all of your current goals and needs — from day one?</li>\n</ul>\n<p>This is where we go from the abstract and psychological goals to the practical and everyday concerns: can you pay your bills? The phrasing and framing here is very intentional: <em>are they really going to pay you enough</em>? I ask this question very specifically because you’d be surprised how often companies actually dance around this question, or how often we trick ourselves into hearing what we <em>want</em> to hear as the answer to this question when we’re in the exciting (or stressful) process of considering a new job, instead of looking at the facts of what’s actually written in black-and-white on an offer letter.</p>\n<p>It's also important not to get distracted with potential, even if you're optimistic about the future. Don’t listen to promises about what might happen, or descriptions of what’s possible if you advance in your role. Think about what your real life will be like, after taxes, if you take the job that they’ve described.</p>\n<ul>\n<li>Is the role you’re being hired into one where you can credibly advance, and where there’s sufficient resources for success?</li>\n</ul>\n<p>This is where you can apply your optimism in a practical way: can the organization accurately describe how your career will proceed within the company? Does it have a specific and defined trajectory, or does it involve ambiguous processes or changes in teams or departments? Would you have to lobby for the support of leaders from other parts of the organization? Would making progress require acquiring new skills or knowledge? Have they committed to providing you with the investment and resources required to learn those skills?</p>\n<p>These questions are essential to understand, because lacking these answers can lead to an ugly later realization that even an initially-exciting position may turn out to be a dead-end job over time.</p>\n<h3>Towards better working worlds</h3>\n<p>Sometimes it can really feel like the deck is stacked against you when you're trying to find a new job. It can feel even worse to be faced with an opportunity and have a nagging sense that something is <em>not quite right</em>. Much of the time, that feeling comes from the vague worry that we're taking a job that is going to make us miserable.</p>\n<p>Even in a tough job market, there are some places that are trying to do their best to treat people decently. In larger organizations, there are often pockets of relative sanity, led by good leaders, who are trying to do the right thing. It can be a massive improvement in quality of life if you can find these places and use them as foundations for the next stage of your career.</p>\n<p>The best way to navigate towards these better opportunities is to be systematic when evaluating all of your options, and to hold out for as high standards as possible when you're out there looking. These seven questions give you the tools to do exactly that.</p>",
      "image": null,
      "media": [],
      "authors": [
        {
          "name": "Anil Dash",
          "email": "[email protected]",
          "url": null
        }
      ],
      "categories": []
    },
    {
      "id": "https://anildash.com/2026/01/15/wikipedia-at-25/",
      "title": "Wikipedia at 25: What the web can be",
      "description": null,
      "url": "https://anildash.com/2026/01/15/wikipedia-at-25/",
      "published": null,
      "updated": "2026-01-15T00:00:00.000Z",
      "content": "<p>When Wikipedia <a href=\"https://wikipedia25.org/en/\">launched 25 years ago today</a>, I heard about it almost immediately, because the Internet was small back then, and I thought “Well… good luck to those guys.” Because there had been online encyclopedias before Wikipedia, and anybody who really <em>cared</em> about this stuff would, of course, buy Microsoft Encarta on CD-ROM, right? I’d been fascinated by the technology of wikis for a good while at that point, but was still not convinced about whether they could be deployed at such a large scale.</p>\n<p>So, once Wikipedia got a little bit of traction, and I met Jimmy Wales the next year, I remember telling him (with all the arrogance that only a dude that age can bring to such an obvious point) “well, the <em>hard part</em> is going to be getting all the people to contribute”. As you may be aware, Jimmy, and a broad worldwide community of volunteers, did pretty well with the hard part.</p>\n<p>Wikipedia has, of course, become vital to the world’s information ecosystem. Which is why everyone needs to be aware of the fact that it is currently under <a href=\"https://www.theverge.com/cs/features/717322/wikipedia-attacks-neutrality-history-jimmy-wales\">existential threat</a> from those who see any reliable source of truth as an attack on their power. The same authoritarians in power who are trying to purchase every media outlet and social network where ordinary people might have a chance to share accurate information about their crimes or human rights violations are deeply threatened about a platform that they can’t control and can’t own.</p>\n<p>Perhaps the greatest compliment to Wikipedia at 25 years old is the fact that, if the fascists can’t buy it, then they’re going to try to kill it.</p>\n<h2>The Building Block</h2>\n<p>What I couldn’t foresee in the early days, when so many were desperate to make sure that Wikipedia wasn’t treated as a credible source — there were <em>so many</em> panicked conversations about how to keep kids from citing the site in their school papers — was how the site would become infrastructure for so much of the commercial internet.</p>\n<p>The first hint was when Google introduced their “Knowledge Panel”, the little box of info next to their search results that tried to explain what you were looking for, without you even having to click through to a website. For Google, this had a huge economic value, because it kept you on their search results page where all their ad links lived. The vast majority of the Knowledge Panel content for many major topics was basically just Wikipedia content, summarized and wrapped up in a nice little box. Here was the most valuable company of the new era of the Internet, and one of their signature experiences relied on the strength of the Wikipedia community’s work.</p>\n<p>This was, of course, complemented by the fact that Wikipedia would also organically show up right near the top of so many search results just based on the strength of the content that the community was cranking out at a remarkable pace. Though it probably made Google bristle a little bit that those damn Wikipedia pages didn’t have any Google ads on them, and didn’t have any of Google’s tracking code on them, so they couldn’t surveil what you do when you were clicking around on the site, making it impossible for them to spy on you and improve the targeting of their advertising to you.</p>\n<p>The same pattern played out later for the other major platforms; Apple’s Siri and Amazon’s Alexa both default to using Wikipedia data to answer common questions. During the few years when Facebook pretended to care about misinformation, they would show summaries of Wikipedia information in the news feed to help users fact-check misinformation that was being shared.</p>\n<p>Unsurprisingly, a lot of the time when the big companies would try to use Wikipedia as the water to put out the fires that they’d started, they <a href=\"https://www.wired.com/story/youtube-wikipedia-content-moderation-internet/\">didn’t even bother to let the organization know</a> before they started doing so, burdening the non-profit with the cost and complexity of handling their millions of users and billions of requests, without sharing any of their trillions of dollars. (At least until there was public uproar over the practice.) Eventually, Wikimedia Foundation (the organization that runs Wikipedia) made a way for <a href=\"https://enterprise.wikimedia.com\">companies to make deals with them</a> and actually support the community instead of just extracting from the community without compensation.</p>\n<h2>The culture war comes for Wikipedia</h2>\n<p>Things had reached a bit of equilibrium for a few years, even as the larger media ecosystem started to crumble, because the world could see after a few decades that Wikipedia had become a vital and valuable foundation to the global knowledge ecology. It’s almost impossible to imagine how the modern internet would function without it.</p>\n<p>But as the global fascist movement has risen in recent years, one of their first priorities, as in all previous such movements, has been undermining any sources of truth that can challenge their control over information and public sentiment. In the U.S., this has manifested from the top-down with the richest tycoons in the country, including Elon Musk, stoking sentiment against Wikipedia with vague innuendo and baseless attacks against the site. This is also why Musk has funded the creation of alternatives like Grokipedia, designed to undermine the centrality and success of Wikipedia. From the bottom-up, there have been individual bad actors who have attempted to infiltrate the ranks of editors on the site, or worked to deface articles, often working slowly or across broad swaths of content in order to attempt to avoid detection.</p>\n<p>All of this has been carefully coordinated; as noted in <a href=\"https://www.theverge.com/cs/features/717322/wikipedia-attacks-neutrality-history-jimmy-wales\">well-documented pieces like the Verge’s excellent coverage</a> of the story, the attack on Wikipedia is a campaign that has been led by voices like Christopher Rufo, who helped devise campaigns like the concerted effort to demonize trans kids as a cultural scapegoat, and the intentional targeting of Ivy League presidents as part of the war on DEI. The undermining of Wikipedia hasn’t yet gotten the same traction, but they also haven’t yet put the same time and resources into the fight.</p>\n<p>There’s been such a constant stream of vitriol directed at Wikipedia and its editors and leadership that, when I heard about a <a href=\"https://gothamist.com/news/gunman-storms-stage-at-wikipedia-conference-in-manhattan-no-injuries-reported\">gunman storming the stage</a> at the recent gathering of Wikipedia editors, I had <em>assumed</em> it was someone who had been incited by the baseless attacks from the extremists. (It turned out to have been someone who was disturbed on his own, which he said was tied to the editorial policies of the site.) But I would expect it’s only a matter of time until the attacks on Wikipedia’s staff and volunteers take on a far more serious tone much of the time — and it’s not as if this is an organization that has a massive security budget like the trillion-dollar tech companies.</p>\n<p>The temperature keeps rising, and there isn’t yet sufficient awareness amongst good actors to protect the Wikipedia community and to guard its larger place in society.</p>\n<h2>Enter the AI era</h2>\n<p>Against this constant backdrop of increasing political escalation, there’s also been the astronomical ramp-up in demand for Wikipedia content from AI platforms. The very first source of data for many teams when training a new LLM system is Wikipedia, and the vast majority of the time, they gather that data not by paying to license the content, but by “scraping” it from the site — which uses both more technical resources and precludes the possibility of establishing any consensual paid relationship with the site.</p>\n<p>A way to think about it is that, for the AI world, they’re music fans trading Wikipedia like it’s MP3s on Napster, and conveniently ignoring the fact there’s an Apple Music or Spotify offering a legitimate way to get that same data while supporting the artist. Hopefully the <a href=\"https://www.anildash.com/2025/09/18/the-taylors-version-generation/\">“Taylor’s Version” generation</a> can see Wikipedia as being at least as worthy of supporting as a billionaire like Taylor Swift is.</p>\n<p>But as people start going to their AI apps first, or chatting with bots instead of doing Google searches, they don’t <em>see</em> those Knowledge Panels anymore, and they don’t click through to Wikipedia anymore. At a surface level, this hurts traffic to the site, but at a deeper level, this hurts the flow of new contributors to the site. Interestingly, though I’ve been linking to <a href=\"https://www.anildash.com/2006/07/31/quitting-wikipe/\">critiques of Wikipedia</a> on my site for at least twenty years, for most of the last few decades, my biggest criticism of Wikipedia has long been the lack of inclusion amongst its base of editorial volunteers. But this is, at least, a shortcoming that both the Wikimedia Foundation and the community itself readily acknowledge and have been working diligently on.</p>\n<p>That lack of diversity in editors as a problem will pale in comparison to the challenge presented if people stop coming to the front door entirely because they’re too busy talking to their AI bots. They may not even <em>know</em> what parts of the answers they’re getting from AI are due to the bot having slurped up the content from Wikipedia. Worse, they’ll have been so used to constantly encountering hallucinations that the idea of joining a community that’s constantly trying to improve the accuracy of information will seem quaint, or even <em>absurd</em>, in a world where everything is wrong and made up all the time.</p>\n<p>This means that it’s in the best interests of the AI platforms to not only pay to sustain Wikipedia and its community so that there’s a continuous source of new, accurate information over time, but that it’s also in their interest to keep teaching their community about the value of such a resource. The very fact that people are so desperate to chat with a bot shows how hungry they are for connection, and just imagine how excited they’d be to connect with the <em>actual humans</em> of the Wikipedia community!</p>\n<h2>We can still build</h2>\n<p>It’s easy to forget how radical Wikipedia was at its start. For the majority of people on the Internet, Wikipedia is just something that’s been omnipresent right from the start. But, as someone who got to watch it rise, take it from me: this was a thing that lots of regular people <em>built together</em>. And it was explicitly done as a collaboration meant to show the spirit of what the Internet is really about.</p>\n<p><a href=\"https://wikimediafoundation.org/wikipedia25/\">Take a look at its history</a>. Think about what it means that there is no advertising, and there never has been. It doesn’t track your activity. You can edit the site <em>without even logging in</em>. If you make an account, you don’t have to use your real name if you’d like to stay anonymous. When I wrote about <a href=\"https://www.anildash.com/2008/09/22/alan-leeds-and-who-writes-the-web/\">being the creator</a> of an entirely <em>new</em> page on Wikipedia, it felt like magic, and it still does! You can be the person that births something onto the Internet that feels like it becomes a permanent part of the historical record, and then others around the world will help make it better, forever.</p>\n<p>The site is still amongst the most popular sites on the web, bigger than almost every commercial website or app that has ever existed. There’s never been a single ad promoting it. It has unlocked <em>trillions</em> of dollars in value for the business world, and unmeasurable educational value for multiple generations of children. Did you know that for many, many topics, you can change your language from English to <em>Simple English</em> and get an <a href=\"https://simple.wikipedia.org/wiki/Quadratic_equation\">easier-to-understand</a> version of an article that can often help explain a concept in much more approachable terms? Wikipedia has a <a href=\"https://www.wikivoyage.org\">travel guide</a>! A <a href=\"https://www.wiktionary.org\">dictionary</a>! A <a href=\"https://www.wikibooks.org\">collection of textbooks and cookbooks</a>! Here are <a href=\"https://species.wikimedia.org/\">all the species</a>! It’s unimaginably deep.</p>\n<p>Whenever I worry about where the Internet is headed, I remember that this example of the collective generosity and goodness of people still exists. There are so many folks just working away, every day, to make something good and valuable for strangers out there, simply from the goodness of their hearts. They have no way of ever knowing who they’ve helped. But they believe in the simple power of doing a little bit of good using some of the most basic technologies of the internet. Twenty-five years later, all of the evidence has shown that they really have changed the world.</p>\n<hr>\n<p>If you are able, today is a very good day to <a href=\"https://donate.wikimedia.org/\">support the Wikimedia Foundation</a>.</p>",
      "image": null,
      "media": [],
      "authors": [
        {
          "name": "Anil Dash",
          "email": "[email protected]",
          "url": null
        }
      ],
      "categories": []
    },
    {
      "id": "https://anildash.com/2026/01/22/codeless/",
      "title": "Codeless: From idea to software",
      "description": null,
      "url": "https://anildash.com/2026/01/22/codeless/",
      "published": null,
      "updated": "2026-01-22T00:00:00.000Z",
      "content": "<h2>Something actually new?</h2>\n<p>There’s finally been a big leap forward in coding tech unlocked by AI — not just “it’s doing some work for me”, but “we couldn’t do this before”. What’s new are a few smart systems that let coders control fleets of dozens of coding bots, all working in tandem, to swarm over a list of tasks and to deliver entire features, or even entire <em>sets</em> of features, just from a plain-English description of the strategic goal to be accomplished.</p>\n<p>This isn’t a tutorial, this is just trying to understand that something cool is happening, and maybe we can figure out what it means, and where it’s going. Lots of new technologies and buzzwords with wacky names like Gas Town and Ralph Wiggum and loops and polecats are getting as much attention as, well, anything since vibe coding. So what’s really going on?</p>\n<p>The breakthrough here came from using two familiar ideas in interesting new ways. The first idea is <em>orchestration</em>. Just like cloud computing got massively more powerful when it became routine for coders to be able to control entire fleets of servers, the ability to reliably configure and control entire fleets of coding bots unlocks a much higher scale of capability than any one person could have by chatting with a bot on their own.</p>\n<p>The second big idea is <em>resilience</em>. Just like systems got more capable when designers started to assume that components like hard drives would fail, or that networks would lose connection, today’s coders are aware of the worst shortcoming of using LLMs: sometimes they create garbage code. This tendency used to be the biggest shortcoming about using LLMs to create code, but by <em>designing</em> for failure, testing outputs, and iterating rapidly, codeless systems enable a huge advancement in the ultimate reliability of the output code.</p>\n<p>The codeless approach also addresses the other huge objection that many coders have to using LLMs for coding. The most common direct objection to using AI tools to assist in coding hasn’t just been the broken code — it’s been the many valid social and ethical concerns around the vendors who build the platforms. But codeless systems are open source, non-commercial, and free to deploy, while making it trivial to swap in alternatives for every part of the stack, including using open source or local options for all or part of the LLM workload. This isn’t software being sold by a Big AI vendor; these are tools being created by independent hackers in the community.</p>\n<p>The ultimate result is the ability to create software at scale without directly writing any code, simply by providing strategic direction to a fleet of coding bots. Call it “codeless” software.</p>\n<h2>Codeless in 10 points</h2>\n<p>If you’re looking for a quick bullet-point summary, here’s something skimmable:</p>\n<ol class=\"numbered-callout\">\n  <li>\"Codeless\" is a way to describe a new way of orchestrating large numbers of AI coding bots to build software at scale, controlled by a plain-English strategic plan for the bots to follow.</li>\n  <li>In this approach, you don't write code directly. Instead, you write a plan for the end result or product that you want, and the system directs your bots to build code to deliver that product. (Codeless abstracts away directly writing code just like \"<a href=\"https://en.wikipedia.org/wiki/Serverless_computing\">serverless</a>\" abstracted away directly managing servers.)</li>\n  <li>This codeless approach is credible because it emerged organically from influential coders who don't work for the Big AI companies, and independent devs are already starting to make it easier and more approachable. It's not a pitch from a big company trying to sell a product, and in fact, codeless tools make it easy to swap out one LLM for another.</li>\n  <li>Today, codeless tools themselves don't cost anything. The systems are entirely open source, though setting them up can be complicated and take some time. Actually running enough bots to generate all that code gets expensive quickly if you use cutting-edge commercial LLMs, but mixing in some lower-cost open tools can help defray costs. We can also expect that, as this approach gains momentum, more polished paid versions of the tools will emerge.</li>\n  <li>Many coders didn't like using LLMs to generate code because they hallucinate. Codeless systems <em>assume</em> that the code they generate will be broken sometimes, and handle that failure. Just like other resilient systems assume that hard drives will fail, or that network connections will be unreliable, codeless systems are designed to handle unreliable code.</li>\n  <li>This has nothing to do with the \"no code\" hype from years ago, because it's not locked-in to one commercial vendor or one proprietary platform. And codeless projects can be designed to output code that will run on any regular infrastructure, including your existing systems.</li>\n  <li>Codeless changes power dynamics. People and teams who adopt a codeless approach have the potential to build a lot more under their own control. And those codeless makers won't necessarily have to ask for permission or resources in order to start creating. Putting this power in the hands of those individuals might have huge implications over time, as people realize that they may not have to raise funding or seek out sponsors to build the things that they imagine.</li>\n  <li>The management and creation interfaces for codeless systems are radically more accessible than many other platforms because they're often controlled by simple plain text <a href=\"https://www.anildash.com/2026/01/09/how-markdown-took-over-the-world/\">Markdown</a> files. This means it's likely that some of the most effective or successful codeless creators could end up being people who have had roles like product managers, designers, or systems architects, not just developers.</li>\n  <li>Codeless approaches are probably <em>not</em> a great way to take over a big legacy codebase, since they rely on accurately describing an entire problem, which can often be difficult to completely capture. And coding bots may lack sufficient context to understand legacy codebases, especially since LLMs are sometimes weaker with legacy technologies.</li>\n  <li>In many prior evolutions of coding, abstractions let coders work at higher levels, closer to the problem they were trying to solve. Low-level languages saved coders from having to write assembly language; high-level languages kept coders from having to write code to manage memory. Codeless systems abstract away directly writing code, continuing the long history of letting developers focus more on the problem to be solved than on manually creating every part of the code.</li>\n</ol>\n<h2>What does software look like when coders stop coding?</h2>\n<p>As we’ve been saying for some time, for people who actually make and understand technology, the <a href=\"https://www.anildash.com/2025/10/17/the-majority-ai-view/\">majority AI view</a> is that LLMs are just useful technologies that have their purposes, but we shouldn’t go overboard with all of the absurd hype. We’re seeing new examples of the deep moral failings and social harms of the Big AI companies every day.</p>\n<p>Despite this, coders still haven’t completely written off the potential of LLMs. A big reason why coders are generally more optimistic about AI than writers or photographers is because, in creative spaces, AI smothers the human part of the process. But in coding, AI takes over the drudgery, and lets coders focus on the most human and expressive parts.</p>\n<p>The shame, then, is that much of the adoption of AI for coding has been in top-down mandates at companies. Rather than enabling innovation, it’s been in deployments designed to undermine their workers’ job security. And, as we’ve seen, <a href=\"https://www.anildash.com/2026/01/06/500k-tech-workers-laid-off/\">this has worked</a>. It’s no wonder that a lot of the research on enterprise use of AI for coding has shown little to no increase in productivity; obviously productivity improvements have not been the goal, much of the time.</p>\n<p>Codeless tech has the potential to change that. Putting the power of orchestrating a fleet of coding bots in the hands of a smart and talented coder (or designer! or product manager! or writer! or…) upends a lot of the hierarchy about who’s able to call the shots on what gets created. The size of your nights-and-weekends project might be a lot bigger, the ambitions of your side gig could be a lot more grand.</p>\n<p>It’s still early, of course. The bots themselves are expensive as hell if you’re running the latest versions of Claude Code for all of them. Getting this stuff running is hard; you’re bouncing between obscure references to Gas Town on <a href=\"https://github.com/steveyegge\">Steve Yegge’s Github</a>, and a bunch of smart posts on <a href=\"https://simonwillison.net\">Simon Willison’s blog</a>, and sifting through YouTube videos about <a href=\"https://www.youtube.com/watch?v=vIFD0YE29Fs\">Ralph Wiggum</a> to see if they’re about the Simpsons or the software.</p>\n<p>It’s gonna be like that for a while, a little bit of a mess. But that’s a lot better than Enterprise Certified Cloud AI Engineer, Level II, minimum 11 years LLM experience required. If history is any guide, the entire first wave of implementations will be discarded in favor of more elegant and/or powerful second versions, once we know what we actually want. <a href=\"https://wiki.c2.com/?PlanToThrowOneAway\">Build one to throw away.</a> I mean, that’s kind of the spirit of the whole codeless thing, isn’t it?</p>\n<p>This could all still sputter out, too. Maybe it’s another fad. I don’t love seeing some of the folks working on codeless tools pivot into asking folks to buy memecoins to support their expensive coding bot habits. The Big AI companies are gonna try to kill it or co-opt it, because tools that reduce the switching cost between LLMs to zero must terrify them.</p>\n<p>But for the first time in a long time, this thing feels a little different. It’s emerging organically from people who don’t work for trillion dollar companies. It’s starting out janky and broken and interesting, instead of shiny and polished in a soulless live stream featuring five dudes wearing vests. This is tech made for people who <em>like making things</em>, not tech made for people who are trying to appease financiers. It’s <a href=\"https://www.anildash.com/2025/10/24/founders-over-funders/\">for inventors, not investors</a>.</p>\n<p>I truly, genuinely, don’t care if you call it “codeless”; it just needs a name that we can hang on it so people know wtf we’re talking about. I worked backwards from “what could we write on a whiteboard, and everyone would know what we were talking about?” If you point at the diagrams and say, “The legacy code is complicated, so we’re going to do that as usual, but the client apps and mobile are all new, so we could just do those codeless and see how it goes”, people would just sort of nod along and know what you meant, at least vaguely. If you’ve got a better name, have at it.</p>\n<p>In the meantime, though, start hacking away. Make something more ambitious than you could do on your own. Sneak an army of bots into work. Build something that you would have needed funding for before, but don’t now. Build something that somebody has made a horrible proprietary version of, and release it for free. Share your Markdown files!</p>\n<p>Maybe the distance from idea to app just got a little bit shorter? We're about to find out.</p>",
      "image": null,
      "media": [],
      "authors": [
        {
          "name": "Anil Dash",
          "email": "[email protected]",
          "url": null
        }
      ],
      "categories": []
    },
    {
      "id": "https://anildash.com/2026/01/26/why-we-speak/",
      "title": "Why We Speak",
      "description": null,
      "url": "https://anildash.com/2026/01/26/why-we-speak/",
      "published": null,
      "updated": "2026-01-26T00:00:00.000Z",
      "content": "<p>I've been working in and around the technology industry for a long time. Depending on how you count, it's 20 or 30 years. (I first started getting paid to put together PCs with a screwdriver when I was a teenager, but there isn't a good way to list that on LinkedIn.) And as soon as I felt like I was pretty sure that I was going to be able to pay the next month's rent without having to eat ramen noodles for two weeks before it was due, I felt like I'd really made it.</p>\n<p>And as soon as you've made it, you owe it to everybody else to help out as much as you can. I don't know how to put it more simply than that. But for maybe the first decade of being in the \"startup\" world, where everybody was worried about appealing to venture capital investors, or concerned about getting jobs with the big tech companies, I was pretty convinced that one of the things that you <em>couldn't</em> do to help people was to talk about some of the things that were wrong. Especially if the things that were wrong were problems that, when described, might piss off the guys who were in charge of the industry.</p>\n<p>But eventually, I got a little bit of power, mostly due to becoming a little bit visible in the industry, and I started to get more comfortable speaking my mind. Then, surprisingly, it turned out that... nothing happened. The sky didn't fall. I didn't get fired from my jobs. I certainly got targeted for harassment by bad actors, but that was largely due to my presence on social media, not simply because of my views. (And also because I tend to take a pretty provocative or antagonistic tone on social media when trying to frame an argument.)  It probably helped that, in the workplace, I both tend to act like a normal person and am also generally good at my job.</p>\n<p>I point all of this out not to pat myself on the back, or as if any of this is remarkable  — it's certainly not — but because it's useful context for the current moment.</p>\n<h2>The cycle of backlash</h2>\n<p>I have been around the technology industry, and the larger business world, long enough to have watched the practice of speaking up about moral issues go from completely unthinkable to briefly being given lip service to actively being persecuted both professionally and politically. The campaigns to stamp out issues of conscience amongst working people have vilified caring for others with names ranging from \"political correctness\" to \"radicalism\" to \"virtue signaling\" to \"woke\" and I'm sure I'm missing many more. This, despite the fact that there have always been thoughtful people in every organization who try to do the right thing; it's impossible to have a group of people of any significant size and not have <em>some</em> who have a shred of decency and humanity within them.</p>\n<p>But the technology industry has an incredibly short memory, by design. We're always at the beginning of history, and so many people working in it have never encountered a time before this moment when there's been this kind of brutal backlash from their leaders against common decency. Many have never felt such pressure to tamp down their own impulses to be good to their colleagues, coworkers, collaborators and customers.</p>\n<p>I want to encourage everyone who is afraid in this moment to find some comfort and some solace in the fact that we have been here before. Not in <em>exactly</em> this place, but in analogous ones. And also to know that there are many people who are also feeling the same combination of fear or trepidation about speaking up, but a compelling and irrepressible desire to do so. We've shifted the Overton window on what's acceptable multiple times before.</p>\n<p>I am, plainly, exhorting you do to speak up about the current political moment and to call for action. There is some risk to this. There is less risk for everyone when more of us speak up.</p>\n<h2>Where we are</h2>\n<p>In the United States, our government is lying to us about an illegal occupation of a major city, which has so far led to multiple deaths of innocents who were murdered by agents of the state. We have video evidence of what happened, and the most senior officials in our country have deliberately, blatantly and unrepentantly lied about what the videos show, while besmirching the good names of the people who were murdered. Just as the administration's most senior officials spread these lies, several of the most powerful and influential executives in the tech industry voluntarily met with the President, screened a propaganda film made expressly as a bribe for him, and have said nothing about either the murders or the lies about the murders.</p>\n<p>These are certainly not the first wrongs by our government. These are not even the first such killings in Minnesota in recent years. But they are a new phase, and this occupation is a new escalation. This degree of lawless authoritarianism <em>is</em> new — tech leaders were <em>not</em> <a href=\"https://www.anildash.com/2025/09/09/how-tim-cook-sold-out-steve-jobs/\">crafting golden ingots</a> to bribe sitting leaders of the United States in the past. Military parades featuring banners bearing the face of Dear Leader, followed by ritual gift-giving in the throne room of the golden palace with the do-nothing failsons and conniving hangers-on of the aging strongman used to be the sort of thing we mocked about failing states, not things we emulated about them.</p>\n<p>So, when our \"leaders\" have failed, and they have, we must become a leaderful community. This, I have a very positive feeling about. I've seen so many people who are willing to step up, to give of themselves, to use their voices. And I have all the patience in the world for those who may not be used to doing those things, because it can be hard to step into those shoes for the first time. If you're unfamiliar or uncomfortable with this work, or if the risk feels a little more scary because you carry the responsibility of caring for those around you, that's okay.</p>\n<p>But I've been really heartened to see <a href=\"https://www.linkedin.com/posts/anildash_i-just-want-to-share-something-briefly-as-activity-7421306939055198209-272Z\">how many people have responded</a> when I started talking about these ideas on LinkedIn — not usually the bastion of \"political\" speech. I don't write the usual hustle-bro career advice platitudes there, and instead laid out the argument for why people will need to choose a side, and should choose the side that their heart already knows that they're on. To my surprise, there's been near-universal agreement, even amongst many who don't agree with many of my other views.</p>\n<p><a href=\"https://www.businessinsider.com/business-leader-ceo-silence-alex-pretti-killing-minneapolis-2026-1\">It is already clear</a> that business leaders are going to be compelled to speak up. It would be ideal if it is their own workers who lead them towards the words (and actions) that they put out into the world.</p>\n<h2>Where we go</h2>\n<p>Those of us in the technology realm bear a unique responsibility here. It is the tools that we create which enable the surveillance and monitoring that agencies like ICE use to track down and threaten both their targets and those they attempt to intimidate away from holding them accountable. It is the wealth of our industry which isolates the tycoons who run our companies when they make irrational decisions like creating vanity films about the strongman's consort rather than pushing for the massive increase in ICE spending to instead go towards funding all of Section 8 housing, all of CHIP insurance, all school lunches, and 1/3 of all federal spending on K-12 education.</p>\n<p>It takes practice to get comfortable using our voices. It takes repetition until leaders know we're not backing down. It takes perseverance until people in power understand they're going to have to act in response to the voices of their workers. <a href=\"https://iceout.tech\">But everyone has a voice</a>. Now is your turn to use it.</p>\n<p>When we speak, we make it easier for others to do so. When we all speak, we make change inevitable.</p>",
      "image": null,
      "media": [],
      "authors": [
        {
          "name": "Anil Dash",
          "email": "[email protected]",
          "url": null
        }
      ],
      "categories": []
    },
    {
      "id": "https://anildash.com/2026/01/27/codeless-ecosystem/",
      "title": "A Codeless Ecosystem, or hacking beyond vibe coding",
      "description": null,
      "url": "https://anildash.com/2026/01/27/codeless-ecosystem/",
      "published": null,
      "updated": "2026-01-27T00:00:00.000Z",
      "content": "<p>There's been a <a href=\"https://www.anildash.com/2026/01/22/codeless/\">remarkable leap forward</a> in the ability to orchestrate coding bots, making it possible for ordinary creators to command dozens of AI bots to build software without ever having to directly touch code. The implications of this kind of evolution are potentially extraordinary, as outlined in that first set of notes about what we could call \"codeless\" software. But now it's worth looking at the larger ecosystem to understand where all of this might be headed.</p>\n<h2>\"Frontier minus six\"</h2>\n<p>One idea that's come up in a host of different conversations around codeless software, both from supporters and skeptics, is how these new orchestration tools can enable coders to control coding bots that <em>aren't</em> from the Big AI companies. Skeptics say, \"won't everyone just use Claude Code, since that's the best coding bot?\"</p>\n<p>The response that comes up is one that I keep articulating as \"frontier minus six\", meaning the idea that many of the open source or open-weight AI models are often delivering results at a level equivalent to where frontier AI models were six months ago. Or, sometimes, where they were 9 months or a year ago. In any of these cases, these are still damn good results! These levels of performance are not merely acceptable, they are results that we were amazed by just months ago, and are more than serviceable for a large number of use cases — especially if those use cases can be run locally, at low cost, with lower power usage, without having to pay any vendor, and in environments where one can inspect what's happening with security and privacy.</p>\n<p>When we consider that a frontier-minus-six fleet of bots can often run on cheap commodity hardware (instead of the latest, most costly, hard-to-get Nvidia GPUs) and we still have the backup option of escalating workloads to the paid services if and when a task is too challenging for them to complete, it seems inevitable that this will be part of the mix in future codeless implementations.</p>\n<h2>Agent patterns and design</h2>\n<p>The most thoughtful and fluent analysis of the new codeless approach has been <a href=\"https://maggieappleton.com/gastown\">this wonderful essay by Maggie Appleton</a>, whose writing is always incisive and insightful. This one's a must-read! Speaking of Gas Town (Steve Yegge's signature orchestration tool, which has catalyzed much of the codeless revolution), Maggie captures the ethos of the entire space:</p>\n<blockquote>\n<p>We should take Yegge’s creation seriously not because it’s a serious, working tool for today’s developers (it isn’t). But because it’s a good piece of speculative design fiction that asks provocative questions and reveals the shape of constraints we’ll face as agentic coding systems mature and grow.</p>\n</blockquote>\n<h2>Code and legacy</h2>\n<p>Once you've considered Maggie's piece, it's worth reading over Steve Krouse's essay, \"<a href=\"https://blog.val.town/vibe-code\">Vibe code is legacy code</a>\". Steve and his team build the delightful <a href=\"https://www.val.town\">val town</a>, an incredibly accessible coding community that strikes a very careful balance between enabling coding and enabling AI assistance without overwriting the human, creative aspects of building with code. In many ways (including its aesthetic), it is the closest thing I've seen to a spiritual successor to the work we'd done for many years with <a href=\"https://en.wikipedia.org/wiki/Glitch,_Inc.\">Glitch</a>, so it's no surprise that Steve would have a good intuition about the human relationship to creating with code.</p>\n<p>There's an interesting point, however to the core point Steve makes about the disposability of vibe-coded (or AI-generated) code: <em>all</em> code is disposable. Every single line of code I wrote during the many years I was a professional developer has since been discarded. And it's not just because I was a singularly terrible coder; this is often the <em>normal</em> thing that happens with code bases after just a short period of time. As much as we lament the longevity of legacy code bases, or the impossibility of fixing some stubborn old systems based on dusty old languages, it's also very frequently the case that people happily rip out massive chunks of code that people toiled over for months or years and then discard it all without any sentimentality whatsoever.</p>\n<p>Codeless tooling just happens to embrace this ephemerality and treat it as a feature instead of a bug. That kind of inversion of assumptions often leads to interesting innovations.</p>\n<h2>To enterprise or not</h2>\n<p>As I noted in my original piece on codeless software, we can expect any successful way of building software to be appropriated by companies that want to profiteer off of the technology, <em>especially</em> enterprise companies. This new realm is no different. Because these codeless orchestration systems have been percolating for some time, we've seen some of these efforts pop up already.</p>\n<p>For example, the team at Every, which consults and builds tools around AI for businesses, calls a lot of these approaches <a href=\"https://every.to/chain-of-thought/compound-engineering-how-every-codes-with-agents\">compound engineering</a> when their team uses them to create software. This name seems fine, and it's good to see that they maintain the ability to switch between models easily, even if they currently prefer Claude's Opus 4.5 for most of their work. The focus on planning and thinking through the end product holistically is a particularly important point to emphasize, and will be key to this approach succeeding as new organizations adopt it.</p>\n<p>But where I'd quibble with some of what they've explained is the focus on tying the work to individual vendors. Those concerns should be abstracted away by those who are implementing the infrastructure, as much as possible. It's a bit like ensuring that most individual coders don't have to know exactly which optimizations a compiler is making when it targets a particular CPU architecture. Building that muscle where the specifics of different AI vendors become less important will help move the industry forward towards reducing platforms costs — and more importantly, empowering coders to make choices based on their priorities, not those of the AI platforms or their bosses.</p>\n<h2>Meeting the codeless moment</h2>\n<p>A good example of the \"normal\" developer ecosystem recognizing the groundswell around codeless workflows and moving quickly to integrate with them is the Tailscale team <em>already</em> shipping <a href=\"https://tailscale.com/blog/aperture-private-alpha\">Aperture</a>. While this initial release is focused on routine tasks like managing API keys, it's really easy to see how the ability to manage gateways and usage into a heterogeneous mix of coding agents will start to enable, and encourage, adoption of new coding agents. (Especially if those \"frontier-minus-six\" scenarios start to take off.)</p>\n<p>I've been on the record <a href=\"https://me.dm/@anildash/109719178280170032\">for years</a> about being bullish on Tailscale, and nimbleness like this is a big reason why. That example of seeing where developers are going, and then building tooling to serve them, is always a sign that something is bubbling up that could actually become signficant.</p>\n<p>It's still early, but these are the first few signs of a nascent ecosystem that give me more conviction that this whole thing might become real.</p>",
      "image": null,
      "media": [],
      "authors": [
        {
          "name": "Anil Dash",
          "email": "[email protected]",
          "url": null
        }
      ],
      "categories": []
    }
  ]
}
Analyze Another View with RSS.Style