Analysis of https://www.anildash.com/feed.xml

Feed fetched in 231 ms.
Content type is application/xml.
Feed is 120,358 characters long.
Feed has an ETag of W/"aeaf5e27fcb03849108eca78456961f7-ssl-df".
Warning Feed is missing the Last-Modified HTTP header.
Feed is well-formed XML.
Warning Feed has no styling.
This is an Atom feed.
Feed title: Anil Dash
Error Feed self link: https://anildash.com/feed.xml does not match feed URL: https://www.anildash.com/feed.xml.
Warning Feed is missing an image.
Feed has 12 items.
First item published on 2026-01-27T00:00:00.000Z
Last item published on 2026-03-13T00:00:00.000Z
All items have published dates.
Newest item was published on 2026-03-13T00:00:00.000Z.
Home page URL: https://anildash.com/
Warning Home page URL redirected to https://www.anildash.com/.
Home page has feed discovery link in <head>.
Home page has a link to the feed in the <body>

Formatted XML
<?xml version="1.0" encoding="utf-8"?>
<feed xmlns="http://www.w3.org/2005/Atom" xmlns:xml="http://www.w3.org/XML/1998/namespace" xml:base="https://anildash.com/">
    <title>Anil Dash</title>
    <subtitle>A blog about making culture. Since 1999.</subtitle>
    <link href="https://anildash.com/feed.xml" rel="self"/>
    <link href="https://anildash.com/"/>
    <updated>2026-03-13T00:00:00Z</updated>
    <id>https://anildash.com</id>
    <author>
        <name>Anil Dash</name>
        <email>[email protected]</email>
    </author>
    <entry>
        <title>A Codeless Ecosystem, or hacking beyond vibe coding</title>
        <link href="https://anildash.com/2026/01/27/codeless-ecosystem/"/>
        <updated>2026-01-27T00:00:00Z</updated>
        <id>https://anildash.com/2026/01/27/codeless-ecosystem/</id>
        <content type="html"><![CDATA[
      <p>There's been a <a href="https://www.anildash.com/2026/01/22/codeless/">remarkable leap forward</a> in the ability to orchestrate coding bots, making it possible for ordinary creators to command dozens of AI bots to build software without ever having to directly touch code. The implications of this kind of evolution are potentially extraordinary, as outlined in that first set of notes about what we could call &quot;codeless&quot; software. But now it's worth looking at the larger ecosystem to understand where all of this might be headed.</p>
<h2>&quot;Frontier minus six&quot;</h2>
<p>One idea that's come up in a host of different conversations around codeless software, both from supporters and skeptics, is how these new orchestration tools can enable coders to control coding bots that <em>aren't</em> from the Big AI companies. Skeptics say, &quot;won't everyone just use Claude Code, since that's the best coding bot?&quot;</p>
<p>The response that comes up is one that I keep articulating as &quot;frontier minus six&quot;, meaning the idea that many of the open source or open-weight AI models are often delivering results at a level equivalent to where frontier AI models were six months ago. Or, sometimes, where they were 9 months or a year ago. In any of these cases, these are still damn good results! These levels of performance are not merely acceptable, they are results that we were amazed by just months ago, and are more than serviceable for a large number of use cases — especially if those use cases can be run locally, at low cost, with lower power usage, without having to pay any vendor, and in environments where one can inspect what's happening with security and privacy.</p>
<p>When we consider that a frontier-minus-six fleet of bots can often run on cheap commodity hardware (instead of the latest, most costly, hard-to-get Nvidia GPUs) and we still have the backup option of escalating workloads to the paid services if and when a task is too challenging for them to complete, it seems inevitable that this will be part of the mix in future codeless implementations.</p>
<h2>Agent patterns and design</h2>
<p>The most thoughtful and fluent analysis of the new codeless approach has been <a href="https://maggieappleton.com/gastown">this wonderful essay by Maggie Appleton</a>, whose writing is always incisive and insightful. This one's a must-read! Speaking of Gas Town (Steve Yegge's signature orchestration tool, which has catalyzed much of the codeless revolution), Maggie captures the ethos of the entire space:</p>
<blockquote>
<p>We should take Yegge’s creation seriously not because it’s a serious, working tool for today’s developers (it isn’t). But because it’s a good piece of speculative design fiction that asks provocative questions and reveals the shape of constraints we’ll face as agentic coding systems mature and grow.</p>
</blockquote>
<h2>Code and legacy</h2>
<p>Once you've considered Maggie's piece, it's worth reading over Steve Krouse's essay, &quot;<a href="https://blog.val.town/vibe-code">Vibe code is legacy code</a>&quot;. Steve and his team build the delightful <a href="https://www.val.town">val town</a>, an incredibly accessible coding community that strikes a very careful balance between enabling coding and enabling AI assistance without overwriting the human, creative aspects of building with code. In many ways (including its aesthetic), it is the closest thing I've seen to a spiritual successor to the work we'd done for many years with <a href="https://en.wikipedia.org/wiki/Glitch,_Inc.">Glitch</a>, so it's no surprise that Steve would have a good intuition about the human relationship to creating with code.</p>
<p>There's an interesting point, however to the core point Steve makes about the disposability of vibe-coded (or AI-generated) code: <em>all</em> code is disposable. Every single line of code I wrote during the many years I was a professional developer has since been discarded. And it's not just because I was a singularly terrible coder; this is often the <em>normal</em> thing that happens with code bases after just a short period of time. As much as we lament the longevity of legacy code bases, or the impossibility of fixing some stubborn old systems based on dusty old languages, it's also very frequently the case that people happily rip out massive chunks of code that people toiled over for months or years and then discard it all without any sentimentality whatsoever.</p>
<p>Codeless tooling just happens to embrace this ephemerality and treat it as a feature instead of a bug. That kind of inversion of assumptions often leads to interesting innovations.</p>
<h2>To enterprise or not</h2>
<p>As I noted in my original piece on codeless software, we can expect any successful way of building software to be appropriated by companies that want to profiteer off of the technology, <em>especially</em> enterprise companies. This new realm is no different. Because these codeless orchestration systems have been percolating for some time, we've seen some of these efforts pop up already.</p>
<p>For example, the team at Every, which consults and builds tools around AI for businesses, calls a lot of these approaches <a href="https://every.to/chain-of-thought/compound-engineering-how-every-codes-with-agents">compound engineering</a> when their team uses them to create software. This name seems fine, and it's good to see that they maintain the ability to switch between models easily, even if they currently prefer Claude's Opus 4.5 for most of their work. The focus on planning and thinking through the end product holistically is a particularly important point to emphasize, and will be key to this approach succeeding as new organizations adopt it.</p>
<p>But where I'd quibble with some of what they've explained is the focus on tying the work to individual vendors. Those concerns should be abstracted away by those who are implementing the infrastructure, as much as possible. It's a bit like ensuring that most individual coders don't have to know exactly which optimizations a compiler is making when it targets a particular CPU architecture. Building that muscle where the specifics of different AI vendors become less important will help move the industry forward towards reducing platforms costs — and more importantly, empowering coders to make choices based on their priorities, not those of the AI platforms or their bosses.</p>
<h2>Meeting the codeless moment</h2>
<p>A good example of the &quot;normal&quot; developer ecosystem recognizing the groundswell around codeless workflows and moving quickly to integrate with them is the Tailscale team <em>already</em> shipping <a href="https://tailscale.com/blog/aperture-private-alpha">Aperture</a>. While this initial release is focused on routine tasks like managing API keys, it's really easy to see how the ability to manage gateways and usage into a heterogeneous mix of coding agents will start to enable, and encourage, adoption of new coding agents. (Especially if those &quot;frontier-minus-six&quot; scenarios start to take off.)</p>
<p>I've been on the record <a href="https://me.dm/@anildash/109719178280170032">for years</a> about being bullish on Tailscale, and nimbleness like this is a big reason why. That example of seeing where developers are going, and then building tooling to serve them, is always a sign that something is bubbling up that could actually become signficant.</p>
<p>It's still early, but these are the first few signs of a nascent ecosystem that give me more conviction that this whole thing might become real.</p>

    ]]></content>
    </entry>
    <entry>
        <title>New York Tech at 30: the Crossroads</title>
        <link href="https://anildash.com/2026/02/03/nye-tech-30/"/>
        <updated>2026-02-04T00:00:00Z</updated>
        <id>https://anildash.com/2026/02/03/nye-tech-30/</id>
        <content type="html"><![CDATA[
      <p>This past week, over a series of events, the New York tech community celebrated the 30th anniversary of a nebulous idea described as “Silicon Alley”, the catch-all term for our greater collective of creators and collaborators, founders and funders, inventors and investors, educators and entrepreneurs and electeds, activists and architects and artists. Some of the parties or mixers have been typical industry affairs, the usual glad-handing about deal-making and pleasantries. But a lot have been deeper, reflecting on what’s special and meaningful about the community we’ve built in New York. <a href="https://www.mediapost.com/publications/article/412470/">Steven Rosenbaum’s reflection</a> on the anniversary captures this well from someone who’s been there, and <a href="https://finance.yahoo.com/news/silicon-alley-turns-30-york-114752768.html">Leo Schwartz’s piece for Fortune</a> covers the more conventional business angle.</p>
<p>Beyond the celebrations, though, I wanted to reflect on a number of the deeper conversations I’ve had over these last few days. These are conversations grounded in the reality of where our country and city are today, far beyond spaces where wealthy techies are going to parties and celebrating each other. The hard questions raised in these conversations are the ones that determine where this community goes in the future, and they’re the ones that <em>every</em> tech community is going to face in the current moment.</p>
<p>I know what the New York City tech community has been; there was a time when I was one of its most prominent voices. The question now is what it will be in the future. Because we are at a profound crossroads.</p>
<iframe title="vimeo-player" src="https://player.vimeo.com/video/1159273059?h=b6fe26d204" width="640" height="360" frameborder="0" referrerpolicy="strict-origin-when-cross-origin" allow="autoplay; fullscreen; picture-in-picture; clipboard-write; encrypted-media; web-share"   allowfullscreen></iframe>
<h1>What community can be</h1>
<p>Nobody better exemplifies the best of what New York tech has been than Aaron Swartz. As I’d <a href="https://www.anildash.com/2026/01/09/how-markdown-took-over-the-world/">written about</a> recently, he was brilliant and delightfully impossible. At an incredibly young age, <a href="https://www.eff.org/deeplinks/2017/01/everyone-made-themselves-hero-remembering-aaron-swartz">he led our community</a> in the battle to push back against a pair of ill-considered bills that threatened free expression on the Internet. (These bills would have done to the web what the current administration has done to broadcast television, having a chilling effect on free speech and putting large swaths of content under government control.) As we stood outside Chuck Schumer’s office and demanded that big business take their hands off our internet, we got our first glimpse of the immense power that our community could wield. And <a href="https://www.eff.org/deeplinks/2017/01/5-years-later-victory-over-sopa-means-more-ever">we won</a>, at least for a while.</p>
<p>My own path within the New York tech community was nowhere near as dramatic, but I was just as motivated in wanting to serve the community. When I became the first person <a href="https://www.anildash.com/2010/12/13/im-running-for-the-new-york-tech-meetup-board/">elected to the board of the New York Tech Meetup</a> (later the New York Tech Alliance), it was the largest member-led organization of tech industry workers in the country. By the time it reached its peak, we were over 100,000 members strong, and could sell out one of our monthly events (at a venue of over 1000 attendees) in minutes. The collective power and impact of that cohort was immense. So, when I say “community”, I mean <em>community</em>. I’m not talking about the contemporary usage of the word, when people call their TikTok followers a “community”. I mean people who care about each other and show up for each other so that they can achieve meaningful things.</p>
<p>New York tech demonstrated its values time and again, and not just in organizing around policy that served its self-interest. When the city was still reeling from 9/11, these were people who not only chose to stay in the city, or who simply talked about how New York ought to rebuild, but actually took the risk and rebuilt the economy of the city — the <em>majority</em> of the economic regrowth and new jobs in New York City in the quarter-century since the attacks of 9/11 have happened thanks to the technology sector.</p>
<p>When Hurricane Sandy hit, these were people who <a href="https://www.nbcnews.com/id/wbna49663102">were amongst the first to step up</a> to help their neighbors dig out. When our city began to <a href="https://www.anildash.com/2011/03/05/nyc-mta-ftw/">open up its data</a>, the community responded in kind by building an entire ecosystem of new tools that laid the groundwork for the tech we now take for granted when navigating around our neighborhoods. There was no reluctance to talk about the importance of diversity and inclusion, and no apology in saying that tech was failing to do its job in hiring and promoting equitably, because we know how much talent is available in our city. Hackers would come to meetups to show off their startups, sure, but just as often to show off how they’d built cool new technology to <a href="https://www.wbur.org/hereandnow/2021/12/28/heat-seek-tool-tenants">help make sure our neighbors in public housing had heat in the winter</a>. This was <a href="https://www.anildash.com/2016/07/15/new-york-style-tech/">New York-style tech</a>.</p>
<p>What’s more, the work of this community happened with remarkable solidarity; the SOPA/PIPA protests that Aaron Swartz spoke at had him standing next to some of the most powerful venture capitalists in the city. When it was time to take action, a number of the most influential tech CEOs in New York took Amtrak down to Washington, D.C. to talk to elected officials and their staffers about the importance of defending free expression online, advocating for the same issue that had been so important to the broke college kids who’d been at the rally just a few days earlier. People had actually gathered around <em>principles</em>. I don’t say this as a Pollyanna who thinks everything was perfect, or that things would have always stayed so idealistically aligned, but simply to point out that <em>this did happen</em>. I don’t have to assert that it is theoretically possible, because I have already seen a community which functions in this way.</p>
<h2>From bottoms-up to big business</h2>
<p>But things have changed in recent years for New York’s tech community. What used to often be about extending a hand to neighbors has, much of the time, become about simply focusing on who’s getting funded to chase the trends defined by Silicon Valley. The vibrancy of the New York Tech Meetup took a huge hit from covid, preventing the ability for the community to gather in person, and the organization’s evolution from a Meetup to an Alliance to being part of Civic Hall shifted its focus in recent years, though there has been a recent push to revitalize its signature events. In its place, much of the public narrative for the community is led by Tech:NYC, which has active and able leadership, but is a far more conventional trade group. There's a focus on pragmatic tools like job listings (their <a href="https://technycdigest.beehiiv.com/subscribe?ref=kdPsdXErYd">email newsletter</a> is excellent), but they're unlikely to lead a rally in front of a Senator's office. An organization whose founding members include Google and Meta is necessarily going to be different than one with 100,000 individual members.</p>
<p>When I <a href="https://web.archive.org/web/20150601041007/https://www.wsj.com/articles/SB10001424127887324624404578255752537705008">spoke to the Wall Street Journal</a> back in 2013 about the political and social power of our community, at a far different time, I called out the breadth of who our community includes:</p>
<blockquote>
<p>The tech constituency encompasses a range of potential voters who remain unlikely to behave as a traditional bloc. &quot;It's venture capitalists and 23-year-old graphic designers in Bushwick,&quot; Mr. Dash said. &quot;It's labor and management. It's not traditional allies.&quot;</p>
</blockquote>
<p>I wanted to make sure people understood that tech in New York is much broader than just, well, what the bosses and the big companies want. It is important to understand that New York is about <a href="https://www.anildash.com/2025/10/24/founders-over-funders/">founders, not just funders</a>.</p>
<p>The distinction between these groups and their goals was never clearer to me than in the 2017 battle around Amazon’s proposed <a href="https://en.wikipedia.org/wiki/Amazon_HQ2">HQ2 headquarters</a>. The public narrative was that Amazon was trying to make a few cities jump through hoops to make the best possible set of bribes to the company so that they would build a new headquarters complex in the host city. The reality was, New York City offered $1.5 billion dollars to the richest man in the world in order to open up an office in a city where the company was inevitably going to do business regardless, and the contract that Amazon would have to sign in exchange only obligated them to hire 500 new workers in the city — <strong>fewer</strong> people than their typical hiring plan would expect in that timeframe. In addition, the proposed plan would have taken over land intended for 6,000 homes, including 1500 affordable units, would have defunded the mass transit system through years of tax breaks for the company while putting massive additional burden on the transit system, and raised housing prices. (Amazon has since signed a lease for 335,000 square feet and hired over 1000 employees, without any subsidies.)</p>
<p>At the time, I was CEO of a company that two entrepreneurs had founded in 2000 and bootstrapped to success, leading to them spinning out multiple companies which would go on to exit for over $2.2 billion, providing over 500 jobs and creating dozens of millionaires out of the workers who joined the companies over the years. Several of the people who had worked at those companies went on to form their own companies, and <em>those</em> companies are now collectively worth over $5 billion. All of these companies, combined, have gotten a total of <em>zero billion dollars</em> from the state and city of New York. In addition, none of those companies have ever had working conditions anywhere close to <a href="https://en.wikipedia.org/wiki/Criticism_of_Amazon#Treatment_of_workers">those Amazon has been criticized for</a>.</p>
<p>But the <em>story</em> of the time was that “New York tech wants HQ2!” Media like newspapers and TV were firmly convinced that techies were in support of Amazon getting a massive unnecessary handout, and I had genuinely struggled to figure out why for a long time. After a while, it became obvious. Everyone that they had spoken to, and all the voices that were considered canonical and credible when talking about “New York tech”, were investors or giant publicly-traded companies.</p>
<p>People who actually <em>built</em> things were no longer the voice of the community. Those who showed up when the power was out, or when the community was hurting, or when there was an issue that called for someone to bravely stand up and lead the crowd even if there was some social or political risk — they were not considered valid. People liked the <em>myth</em> of Aaron Swartz by then, but they would have ignored the fact that he almost certainly would have objected to corporate subsidy for the company.</p>
<h2>New York tech today, and tomorrow</h2>
<p>I am still proud of the New York tech community. But that’s because I get to see what happens in person. Last week, I was reminded at every one of the in-person commemorations of the community that there are so many generous, kind-hearted, thoughtful people who will fight to do the right thing. The challenge today, though, is that those are no longer the people who define the story of the community. That’s not who a <em>new</em> person thinks of when they’re introduced to our community.</p>
<p>When I talk to young people who are new to the industry, or people who are changing careers who are curious about tech, they have heard of things like Tech Week, or they read trade press. In those venues, a big name is generally not our home-grown founders, or even the “big” success stories of New York tech. That’s especially true as once high-flying New York tech companies like Tumblr and Foursquare and Kickstarter and Etsy and Buzzfeed either faded or got acquired, and newer successful startups are more prosaic and less attention-grabbing. Who’s left to tell them a story of what “tech” means in New York? Where will they find community?</p>
<p>One possible future is that they try to build a startup, doing everything you’re “supposed” to do. They pitch the VC firms in town, and the big name firms that they’ve heard of. If they’re looking for community, they go to the events that get the most promotion, which might be Tech Week events. And all of these paths lead the same way — the most prominent VC firm is Andreessen Horowitz, and they run Tech Week too, even though they’re not from NYC.</p>
<p>On that path, New York tech puts you across the table from <a href="https://fortune.com/2025/02/05/daniel-penny-andreessen-horowitz-a16z-investing-david-ulevitch/">the man who strangled my neighbor to death</a>.</p>
<p>Another possible future is that we rebuild the kind of community that we used to have. We start to get together the people who actually <em>make</em> things, and show off what we’ve built for one another. It’s going to require re-centering the hundreds of thousands of people who create and invent, rather than the dozens of people who write checks. It’s going to mean that the stories start with New York City (and maybe even… <em>in the outer boroughs</em>!), rather than taking dictation from those in Silicon Valley who hate our city. And it’s going to require understanding that technology is a set of tools and tactics we can use in service of goals — ideally positive social goals — and not just an economic opportunity to be extracted from.</p>
<p>We would never talk about education by only talking to those who invest in making pencils. We’d never consider a story about a new movie to be complete if we only talked to those who funded the film. And certainly our policymakers would balk if we skipped speaking with them and instead aimed our policy questions directly at their financial backers, though that might result in more accurate responses. Yet somehow, with technology, we’ve given over the narrative entirely to the money men.</p>
<p>In New York, we’ve borne the brunt of that error. A tech community with heart and soul is in danger of being snuffed out by those who will only let its most base instincts survive. Even our <em>investors</em> here are more thoughtful than these stories would make it seem! But we can change it, and maybe even change the larger tech story, if we’re diligent in never letting the bad actors control the narrative of what tech is in the world.</p>
<p>Like so many good things, it can all start with New York City.</p>

    ]]></content>
    </entry>
    <entry>
        <title>There&#39;s no such thing as &quot;tech&quot; (Ten years later)</title>
        <link href="https://anildash.com/2026/02/06/no-such-thing-as-tech/"/>
        <updated>2026-02-06T00:00:00Z</updated>
        <id>https://anildash.com/2026/02/06/no-such-thing-as-tech/</id>
        <content type="html"><![CDATA[
      <p>Ten years ago I wrote that <a href="https://www.anildash.com/2016/08/19/there-is-no-technology-industry/">there is no “technology industry”</a>. It’s more true than ever.</p>
<p>There is no “tech”. There’s no such thing as “a FAANG company”. There is almost nothing in common between the very largest tech companies and the next several hundred biggest companies that happen to create tech platforms. Whatever shorthand we use for the biggest tech companies, they almost never have much in common—whether it's how they make money, what products they make, how they make decisions, who leads them, or what drives their cultures.</p>
<p>It’s important to make these distinctions because the false categorization of wildly dissimilar organizations into one grouping leads to absurdly inappropriate decisions being made. Let’s look at some simple examples to understand why.</p>
<p>Take the once-ubiquitous shorthand of “FAANG” to describe big tech. (It stood, at one time, for Facebook, Amazon, Apple, Netflix and Google. Then Facebook became Meta and Google became Alphabet and Microsoft became upset about not being included, and people started trying to use other more unwieldy, less-popular sobriquets.) This abbreviation still persists because of the mindset it represents, and it is still useful in capturing a certain vision of how the industry functions. I often encounter early-career tech workers who describe their ambitions as “working at a FAANG company”.</p>
<p>But let’s look at <em>what these different companies actually do</em>. For all its complexity, Netflix is, at its heart, about streaming video to people. Meta runs a number of communications platforms and social networks. Apple sells hardware devices. They all have very large side businesses that do other things, but this is what these companies are at their core — and they’re wildly different businesses in their core essence!</p>
<p>If someone said, “I want to be an executive at Walmart, or maybe at A24,” you would think, “This person has no idea what the hell they want to be, or what they’re talking about.” If they were to say, “I want to work for nVidia, or maybe Deloitte,&quot; you would think, “This person is just confused, and that’s kind of sad.” But this is <em>exactly</em> equivalent to asserting “I want to work at a FAANG company” or “I want to work at a startup” or, worse, “I want to work in tech”.</p>
<p>So many have been caught off guard as tech has grabbed massive power over nearly every aspect of society—from individuals who can't figure out their career paths to policy makers who've been bamboozled by tech tycoons. It's no secret how it happened: everyone underestimated the impact because they judged tech by the same rules as other industries.</p>
<h2>Everything and nothing</h2>
<p>These distinctions matter even more because today, <em>everything</em> is tech. Or, if you prefer, nothing is technology. Instead, every area is suffused with tech — and every discipline needs people who are fluent in the concerns of technology, and familiar with the tradeoffs and risks and opportunities that come with the adoption of, and creation of, new technologies.</p>
<p>Now, of course, I know why it’s useful to have the shorthand of being able to say “the tech industry” when talking about a particular sector. But the sleight of hand that comes from being able to hide the enormous, outsized impact that this small number of companies has across a vast number of different sectors of society is possible, in part, because we <em>treat</em> them like they’re one narrow part of the business world. In many cases, an individual division of a giant tech company dwarfs the entirety of other industries. Apple’s AirPods business isn’t even one of the first products one would think of when listing their most important, most influential, or most profitable lines of business, and yet <em>AirPods alone</em> are bigger than the entire domestic radio advertising business in the United States. Google’s ad business alone is larger than the entire U.S. domestic airline industry combined. Things that are considered an “industry” in other categories are smaller than things that are considered a <em>product</em> in “tech”.</p>
<p>That sense of scale is important to keep in mind as we push for accountability and to understand how to plan for what’s ahead. Even building a path for one’s own career — whether that’s inside or outside of the companies we consider to be in the tech sector — requires having a proper perspective on the relative influence of these organizations, and also on the distorting effect it can have when we don’t look at them in their full complexity.</p>
<p>One example from a completely different realm that I find useful in contextualizing this challenge is from the world of retail: Ikea is one of the top 10 restaurants in the world. (By many reports, it’s the 6th largest chain of restaurants.) That is, of course, incidental to its role as a furniture retailer. But this is the nature of massive scale. The second-order impacts are still enough to have outsized effects in the larger world.</p>
<p>At a moment when we have seen that so many of the biggest tech companies are led by people who don’t know how to act responsibly with all of the power that they’ve been given, it’s important that we complicate our views of their companies, and consider that they are <em>much</em> more than just part of the “tech industry”. They are functioning as communications, media, finance, education, infrastructure, transportation, commerce, defense, policing, and government much of the time. And very often, they’re doing it without our awareness or consent.</p>
<p>So, when you hear conversations in society about tech companies, or tech execs, or tech platforms, make sure you push those who are involved in the dialogue to be specific about what they mean. You may find that they haven’t stopped to reflect on the fact that this simple label has long since stopped accurately describing the extraordinary amount of power and control that this handful of companies exert over our daily lives, and over society as a whole.</p>

    ]]></content>
    </entry>
    <entry>
        <title>Coding agents as the new compilers</title>
        <link href="https://anildash.com/2026/02/11/coding-agents-as-the-new-compilers/"/>
        <updated>2026-02-12T00:00:00Z</updated>
        <id>https://anildash.com/2026/02/11/coding-agents-as-the-new-compilers/</id>
        <content type="html"><![CDATA[
      <p>In each successive generation of code creation thus far, we’ve abstracted away the prior generation over time. Usually, only a small percentage of coders still work on the lower layers of the stack that used to be the space where everyone was working. I’ve been coding long enough that people were still creating code in assembly when I started (though I was never any good at it!), though I started with BASIC. Since BASIC was an interpreted language, its interpreter would write the assembly language for me, and I never had to see exactly what assembly language code was being created.</p>
<p>I definitely <em>did</em> know old-school coders who used to, at first, check that assembly code to see if they liked the output. But eventually, over time, they just learned to trust the system and stopped looking at what happened after the system finished compiling. Even people using more “close to the metal” languages like C generally trust that their compilers have been optimized enough that they seldom inspect the output of the compiler to make sure it was perfectly optimized for their particular processor or configuration. The benefits of delegating those concerns to the teams that create compilers, and coding tools in general, yielded so many advantages that that tradeoff was easily worth it, once you got over the slightly uncomfortable feeling.</p>
<p>In the years that followed, though a small cohort of expert coders who would hand-tune assembly code for things like getting the most extreme performance out of a gaming console, most folks stopped writing it, and very few <em>new</em> coders learned assembly at all. The vast majority of working coders treat the output from the compiler layer as a black box, trusting the tools to do the right thing and delegating the concerns below that to the toolmakers.</p>
<p>We may be seeing that pattern repeat itself. Only this time, the abstraction is happening through AI tools abstracting away <em>all</em> the code. Which can feel a little scary.</p>
<h2>Squashing the stack</h2>
<p>Just as interpreted languages took away chores like memory management, and high-level languages took away the tedium of writing assembly code, we’re starting to see the first wave of tools that completely abstract away the writing of code. (I described this in more detail in the piece about <a href="https://www.anildash.com/2026/01/22/codeless/">codeless software</a>recently.</p>
<p>The individual practice of professionalizing the writing of software with LLMs seems to have settled on the term “<a href="https://simonwillison.net/2026/Feb/11/glm-5/">agentic engineering</a>”, as Simon Willison recently noted.</p>
<p>But the next step beyond that is when teams <em>don’t</em> write any of the code themselves, instead moving to an entirely abstracted way of creating code. In this model, teams (or even individual coders):</p>
<ul>
<li>Define the specifications for how the code should work</li>
<li>Ensure that the system is provided with enough context at all times that it can succeed in creating code that is successful as often as possible</li>
<li>Provide sufficient resources that a redundant and resilient set of code outputs can be created to accommodate failures while in iteration</li>
<li>Enforce execution of tests and conformance systems against the code — <a href="https://simonwillison.net/2025/Dec/18/code-proven-to-work/">including human tests with a named, accountable party</a>, not just automated software tests</li>
</ul>
<p>With this kind of model deployed, the software that is created can essentially be output from the system in the way that assembly code or bytecode is output from compilers today, with no direct inspection from the people who are directing its creation. Another way of thinking about this is that we’re abstracting away many different specific programming languages and detailed syntaxes to more human-written Markdown files, created much of the time in <strong>collaboration</strong> with these LLM tools.</p>
<p>Presently, most people and teams who are pursuing this path are doing so with costly commercial LLMs. I would strongly advocate that most organizations, and <em>especially</em> most professional coders, be very fluent in ways of accomplishing these tasks with a fleet of low-cost, locally-hosted, open source/open-weight models contributing to the workload. I don’t think they are performant enough yet to accomplish all of the coding tasks needed for a non-trivial application yet, but there are a significant number of sub-tasks that could reasonably be delegated. More importantly, it will be increasingly vital to ensure that this entire “codeless compilation” stack for agentic engineering works in a vendor-neutral way that can be decoupled from the major LLM vendors, as they get more irresponsible in their business practices and more aggressive towards today’s working coders and creators.</p>
<p>For many, those worries about Big AI are why their reaction to these developments in agentic coding make them want to recoil. But in reality, these issues are exactly why we desperately need to <em>engage</em>.</p>
<h2>Seizing the means</h2>
<p>Many of the smartest coders I know have a lot of legitimate and understandable misgivings about the impact that LLMs are having on the coding world, especially as they’re often being evangelized by companies that plainly have ill intent towards working coders. It is reasonable, and even smart, to be skeptical of their motivations and incentives.</p>
<p>But the response to that skepticism is not to reject the category of technology, but rather to capture it and seize control over its direction, away from the Big AI companies. This shift to a new level of coding abstraction is exactly the kind of platform shift that presents that sort of opportunity. It’s potentially a chance for coders to be in control of some part of their destiny, at a time when a lot of bosses clearly want to <a href="https://www.anildash.com/2026/01/06/500k-tech-workers-laid-off/">get rid of as many coders as they can</a>.</p>
<p>At the very least, this is one area where the people who actually <em>make things</em> are ahead of the big platforms that want to cash in on it.</p>
<h2>What if I think this is all bullshit?</h2>
<p>I think a lot of coders are going to be understandably skeptical. The most common concern is, “I write really great code, how could it possibly be good news that we’re going to abstract away the writing of code?”. Or, “How the hell could a software factory be good news for people who make software?”</p>
<p>For that first question, the answer is going to involve some grieving, at first. It may be the case that writing really clean, elegant, idiomatic Python code is a skill that will be reduced in demand in the same way that writing incredibly performant, highly-tuned assembly code is. There <em>is</em> a market for it, but it’s on the edges, in specific scenarios. People ask for it when they need it, but they don’t usually <em>start</em> by saying they need it.</p>
<p>But for the deeper question, we may have a more hopeful answer. By elevating our focus up from the individual lines of code to the more ambitious focus on the overall problem we’re trying to solve, we may reconnect with the “why” that brought us to creating software and tech in the first place. We can raise our gaze from the steps right in front of us to the horizon a bit further ahead, and think more deeply about the problem we’re trying to solve. Or maybe even about the <em>people</em> who we’re trying to solve that problem for.</p>
<p>I think people who create code today, if they have access to super-efficient code-creation tools, will make better and more thoughtful products than the financiers who are currently carrying out mass layoffs of the best and most thoughtful people in the tech industry.</p>
<p>I also know there’s a history of worker-owned factories being safer and more successful than others in their industries, while often making better, longer-lasting products and being better neighbors in their communities. Maybe it’s possible that there’s an internet where agentic engineering tools could enable smart creators to build their own software factories that could work the same way.</p>

    ]]></content>
    </entry>
    <entry>
        <title>Launch it 3 times</title>
        <link href="https://anildash.com/2026/02/13/launch-it-three-times/"/>
        <updated>2026-02-14T00:00:00Z</updated>
        <id>https://anildash.com/2026/02/13/launch-it-three-times/</id>
        <content type="html"><![CDATA[
      <p>I wanted to share one of the bits of advice that I find myself most frequently giving to teams when they’re working on a product, or founders who are creating a new company: launch it three times.</p>
<p>What I mean by that is, it often takes more than one time before your idea actually resonates or sticks with the people you’re trying to reach. Sometimes it takes more than twice! And when I say that you might need to launch again, that can mean a lot of different things. It might just be little tweaks to what you originally put out in the world, It might even be less than that — I’ve worked with teams that put out <strong>literally the exact same thing again</strong> and found success, because the issue they had the first time was about timing. That’s increasingly an issue as people are distracted by the deeply disturbing social and political events going on in the world, and so sometimes they just need you to put things in front of them again so that they can reassess what you were trying to say.</p>
<p>Many relaunches are a little more ambitious, of course. Being a Prince fan, I am of course very partial to strategies that involve changing your name. Re-launching under a new name can be a key strategic move if you think that you’re not effectively reaching your target audience. As I’d written recently, one of the most important goals in getting a message out is that <a href="https://www.anildash.com/2025/12/05/talk-about-us-without-us/">they have to be able to talk about you without you</a>. But if you want people to tell your story even when you’re not around, the most important prerequisite is that they have to remember your name. With Glitch, that was the <em>third</em> name we actually launched the community under, a fact that I was a little bit embarrassed about at the time. But having a memorable name that resonated ended up being almost as much a factor in our early success as our user experience or the deeper technological innovations.</p>
<p>There are other ways of making changes for a successful re-launch. One thing I often suggest is to <em>subtract</em> things (or just de-emphasize them) and use that reduction in complexity to simplify a story. Or you can try to re-center your narrative on your users or community instead of on your product — the emotion and connection of seeing someone succeed often resonates far more than simply reciting a litany of features or technical capabilities.  Any of these small iterations allow you to take another swing at putting something out into the world without having to make a massive change to the core offering.</p>
<p>Often times, people are afraid or embarrassed to make changes to things like branding or design because they’re some of the more visible aspects of a product or service. Instead, they retreat to “safe” areas, like tweaking the pricing or copy on a web page that nobody reads. But the vast majority of the time, the single biggest problem you have is that <em>nobody knows you exist, and nobody gives a damn about what you do</em>. Everything else pales in comparison to that. I’ve seen so many teams trying to figure out how to optimize the engagement of the three users on their app, or the five people who come to their site, while forgetting about the other eight billion people who have no idea they exist.</p>
<h2>What about <em>not</em>  failing?</h2>
<p>This idea of launching again is really important to keep in mind because so much of the narrative in the startup world is about “fail fast” and “90% of startups fail”. When the conventional narrative from VCs prompts you to pivot right away, or an investor is pressuring everyone to grow, grow, grow at all costs, it can be hard to think about slowing down and taking the time to revisit and refine an idea.</p>
<p>But if you’re moving with conviction, and you’ve created something meaningful, and if you’re serving a real community that you have a deep understanding of, then it may  be the case that you simply need to try again. If you are <em>not</em> moving with conviction to create something meaningful for a real community, then you don’t need to do it three times, because you don’t even need to do it once.</p>
<p>So many of the creators and innovators that inspire me most often end up working on their best ideas for years or even decades, iterating and revisiting those ideas with an almost-obsessive passion. Most of the time, they’re doing it because of a combination of their own personal mission and the deep belief that what they’re doing is going to help change people’s lives for the better. For those kinds of people, one of the things I want most is to ensure that they don’t give up before their ideas have had a full and fair chance to succeed, even if that means that sometimes you have to try, try again.</p>

    ]]></content>
    </entry>
    <entry>
        <title>How did we end up threatening our kids’ lives with AI?</title>
        <link href="https://anildash.com/2026/02/18/threatening-kids-with-AI/"/>
        <updated>2026-02-18T00:00:00Z</updated>
        <id>https://anildash.com/2026/02/18/threatening-kids-with-AI/</id>
        <content type="html"><![CDATA[
      <p>I have to begin by warning you about the content in this piece; while I won’t be dwelling on any specifics, this will necessarily be a broad discussion about some of the most disturbing topics imaginable. I resent that I have to give you that warning, but I’m forced to because of the choices that the Big AI companies have made that affect children. I don’t say this lightly. But this is the point we must reckon with if we are having an honest conversation about contemporary technology.</p>
<p>Let me get the worst of it out of the way right up front, and then we can move on to understanding how this happened. ChatGPT has repeatedly produced output that <a href="https://www.nytimes.com/2025/08/26/technology/chatgpt-openai-suicide.html?unlocked_article_code=1.M1A.S4zx.M-CdIbTK0GGI&amp;smid=url-share">encouraged</a> and <a href="https://www.nytimes.com/2025/11/06/technology/chatgpt-lawsuit-suicides-delusions.html?unlocked_article_code=1.M1A.-92e.rGfKZMgP6nE9&amp;smid=url-share">incited</a> children to end their own lives. Grok’s AI <a href="https://www.cnbc.com/2026/01/05/india-eu-investigate-musks-x-after-grok-created-deepfake-child-porn.html">generates sexualized imagery of children</a>, which the company makes available commercially to paid subscribers.</p>
<p>It used to be that encouraging children to self-harm, or producing sexualized imagery of children, were universally agreed upon as being amongst the worst things one could do in society. These were among the rare truly non-partisan, unifying moral agreements that transcended all social and cultural barriers. And now, some of the world’s biggest and most powerful companies, led by a few of the wealthiest and most powerful men who have ever lived, are violating these rules, <em>for profit</em>, and not only is there little public uproar, it seems as if very few have even noticed.</p>
<p>How did we get here?</p>
<h2>The ideas behind a crisis</h2>
<p>A perfect storm of factors have combined to lead us towards the worst case scenario for AI. There is now an entire market of commercial products that attack our children, and to understand why, we need to look at the mindset of the people who are creating those products. Here are some of the key motivations that drove them to this point.</p>
<h3>1. Everyone feels desperately behind and wants to catch up</h3>
<p>There’s an old adage from Intel’s founder Andy Grove that people in Silicon Valley used to love to quote: “Only the paranoid survive”. This attitude persists, with leaders absolutely <em>convinced</em> that everything is a zero-sum game, and any perceived success by another company is an existential threat to one’s own future.</p>
<p>At Google, the company’s researchers had published the <a href="https://en.wikipedia.org/wiki/Attention_Is_All_You_Need">fundamental paper</a> underlying the creation of LLMs in 2017, but hadn’t capitalized on that invention by making a successful consumer product by 2022, when OpenAI released ChatGPT. Within Google leadership (and amongst the big tech tycoons), the fact that OpenAI was able to have a hit product with this technology was seen as a grave failure by Google, despite the fact that even OpenAI’s own leadership hadn’t expected ChatGPT to be a big hit upon launch. A <a href="https://www.cnet.com/tech/services-and-software/chatgpt-caused-code-red-at-google-report-says/">crisis ensued</a> within Google in the months that followed.</p>
<p>These kinds of industry narratives have more weight than reality in driving decision-making and investment, and the refrain of “move fast and break things” is still burned into people’s heads, so the end result these days is that <em>shipping any product</em> is okay, as long as it helps you catch up to your competitor. Thus, since Grok is seriously behind its competitors in usage, and of course Grok's CEO Elon Musk is always desperate for attention, they have every incentive to ship a product with a catastrophically toxic design — including one that creates abusive imagery.</p>
<h3>2. Accountability is “woke” and must be crushed</h3>
<p>Another fundamental article of faith in the last decade amongst tech tycoons (and their fanboys) is that woke culture must be destroyed. They have an amorphous and ever-evolving definition of what “woke” means, but it always includes any measures of accountability. One key example is the trust and safety teams that had been trying to keep all of the major technology platforms from committing the worst harms that their products were capable of producing.</p>
<p>Here, again, Google provides us with useful context. The company had one of the most mature and experienced AI safety research teams in the world at the time when the first paper on the transformer model (LLMs) was published. Right around the time that paper was published, Google <em>also</em> saw one of its engineers <a href="https://en.wikipedia.org/wiki/Google%27s_Ideological_Echo_Chamber">publish a sexist screed</a> on gender essentialism designed to bait the company into becoming part of the culture war, which it ham-handedly stumbled directly into. Like so much of Silicon Valley, Google’s leadership did not understand that these campaigns are always attempts to game the refs, and they let themselves be played by these bad actors; within a few years, a backlash had built and they began cutting everyone who had warned about risks around the new AI platforms, including some of the <a href="https://www.theverge.com/2021/4/13/22370158/google-ai-ethics-timnit-gebru-margaret-mitchell-firing-reputation">most credible and respected voices</a> in the industry on these issues.</p>
<p>Eliminating those roles was considered <em>vital</em> because these people were blamed for having “slowed down” the company with their silly concerns about things like people’s lives, or the health of the world’s information ecosystem. A lot of the wealthy execs across the industry were absolutely convinced that the reason Google had ended up behind in AI, despite having invented LLMs, was because they had too many “woke” employees, and those employees were too worried about esoteric concerns like people’s well-being.</p>
<p>It does not ever enter the conversation that 1. executives are accountable for the failures that happen at a company, 2. Google had a million other failures during these same years (including those <a href="https://arstechnica.com/gadgets/2021/08/a-decade-and-a-half-of-instability-the-history-of-google-messaging-apps/">countless redundant messaging apps</a> they kept launching!) that may have had far more to do with their inability to seize the market opportunity and 3. <em>it may be a good thing</em> that Google didn’t rush to market with a product that tells children to harm themselves, and those workers who ended up being fired may have saved Google from that fate!</p>
<h3>3. Product managers are veterans of genocidal regimes</h3>
<p>The third fact that enabled the creation of pernicious AI products is more subtle, but has more wide-ranging implications once we face it. In the tech industry, product managers are often quietly amongst the most influential figures in determining the influence a company has on culture. (At least until all the product managers are replaced by an LLM being run by their CEO.) At their best, product managers are the people who decide exactly what features and functionality go into a product, synthesizing and coordinating between the disciplines of engineering, marketing, sales, support, research, design, and many other specialties. I’m a product person, so I have a lot of empathy for the challenges of the role, and a healthy respect for the power it can often hold.</p>
<p>But in today’s Silicon Valley, a huge number of the people who act as product managers spent the formative years of their careers in companies like Facebook (now Meta). If those PMs now work at OpenAI, then the moments when they were learning how to practice their craft were spent at a company that <a href="https://www.amnesty.org/en/latest/news/2022/09/myanmar-facebooks-systems-promoted-violence-against-rohingya-meta-owes-reparations-new-report/">made products that directly enabled and accelerated a genocide</a>. That’s not according to me, that’s the opinion of multiple respected international human rights organizations. If you <em>chose</em> to go work at Facebook after the Rohingya genocide had happened, then you were certainly not going to learn from your manager that you should not make products that encourage or incite people to commit violence.</p>
<p>Even when they’re not enabling the worst things in the world, product managers who spend time in these cultures learn more destructive habits, like strategic line-stepping. This is the habit of repeatedly violating their own policies on things like privacy and security, or allowing users to violate platform policies on things like abuse and harassment. This tactic is followed by then feigning surprise when the behavior is caught. After sending out an obligatory apology, they repeat the behavior again a few more times until everyone either gets so used to it that they stop complaining or the continued bad actions drives off the good people, which makes it seem to the media or outside observers that the problem has gone away. Then, they amend their terms of service to say that the formerly-disallowed behavior is now permissible, so that in the future they can say, “See? It doesn’t violate our policy.”</p>
<p>Because so many people in the industry now have these kind of credential on their LinkedIn profiles, their peers can’t easily mention many kinds of ethical concerns when designing a product without implicitly condemning their coworkers. This becomes even more fraught when someone might potentially be unknowingly offending one of their leaders. As a result, it becomes a race to the bottom, where the person with the worst ethical standards on the team determines the standards to which everyone designs their work. As a result, if the prevailing sentiment about creating products at a company is that having millions of users just inevitably means killing some of them (“you’ve got to break a few eggs to make an omelet”), there can be risk to contradicting that idea. Pointing out that, in fact, <em>most</em> platforms on the internet do not harm users in these ways and their creators work very hard to ensure that tech products don’t present a risk to their communities, can end up being a career-limiting move.</p>
<h3>4. Compensation is tied to feature adoption</h3>
<p>This is a more subtle point, but explains a lot of the incentives and motivations behind so much of what happens with today’s major technology platforms. The introduction or rollout of new capabilities is measured when these companies launch new features, and the success of those rollouts or launches are often tied to the measurements of individual performance for the people who were responsible for those features. These will be measured using metrics like “KPIs” (key performance indicators) or other similar corporate acronyms, all of which basically represent the concept of being rewarded for whether the thing you made was adopted by users in the real world. In the abstract, it makes sense to reward employees based on whether the things they create actually succeed in the market, so that their work is aligned with whatever makes the company succeed.</p>
<p>In practice, people’s incentives and motivations get incredibly distorted over time by these kinds of gamified systems being used to measure their work, especially as it becomes a larger and larger part of their compensation. If you’ve ever wondered why some intrusive AI feature that you never asked for is jumping in front of your cursor when you’re just trying to do a normal task the same way that you’ve been doing it for years, it’s because someone’s KPI was measuring whether you were going to click on that AI button. Much of the time, the system doesn’t distinguish between “I accidentally clicked on this feature while trying to get rid of it” and “I enthusiastically chose to click on this button”. This is what I mean when I say we need <a href="https://www.anildash.com/2025/05/27/internet-of-consent/">an internet of consent</a>.</p>
<p>But you see the grim end game of this kind of thinking, and these kinds of reward systems, when kids’ well-being is on the line. Someone’s compensation may well be tied to a metric or measurement of “how many people used the image generation feature?” without regard to whether that feature was being used to generate imagery of children without consent. Getting a user addicted to a product, even to the point where they’re getting positive reinforcement when discussing the most self-destructive behaviors, will show up in a measurement system as increased engagement — exactly the kind of behavior that most compensation systems reward employees for producing.</p>
<h3>5. Their cronies have made it impossible to regulate them</h3>
<p>A strange reality of the United States’ sad decline into authoritarianism is that it is presently impossible to create federal regulation to stop the harms that these large AI platforms are causing. Most Americans are not familiar with this level of corruption and crony capitalism, but Trump’s AI Czar David Sacks has an <a href="https://www.nytimes.com/2025/11/30/technology/david-sacks-white-house-profits.html?unlocked_article_code=1.NFA.8q0L.ierVRTr9iVbw&amp;smid=url-share">unbelievably broad number of conflicts of interest</a> from his investments across the AI spectrum; it’s impossible to know how many because nobody in the Trump administration follows even the basic legal requirements around disclosure or disinvestment, and the entire corrupt Republican Party in Congress refuses to do their constitutionally-required duty to hold the executive branch accountable for these failures.</p>
<p>As a result, at the behest of the most venal power brokers in Silicon Valley, the Trump administration is insisting on trying to stop all AI regulations at the state level, and of course will have the collusion of the captive Supreme Court to assist in this endeavor. Because they regularly have completely unaccountable and unrecorded conversations, the leaders of the Big AI companies (all of whom attended the Inauguration of this President and support the rampant lawbreaking of this administration with rewards like <a href="https://www.anildash.com/2025/09/09/how-tim-cook-sold-out-steve-jobs/">open bribery</a>) know that there will be no constraints on the products that they launch, and no punishments or accountability if those products cause harm.</p>
<p>All of the pertinent regulatory bodies, from the Federal Trade Commission to the Consumer Financial Protection Bureau have had their competent leadership replaced by Trump cronies as well, meaning that their agendas are captured and they will not be able to protect citizens from these companies, either.</p>
<p>There will, of course, still be attempts at accountability at the state and local level, and these will wind their way through the courts over time. But the harms will continue in the meantime. And there will be attempts to push back on the international level, both from regulators overseas, and increasingly by governments and consumers outside the United States refusing to use technologies developed in this country. But again, these remedies will take time to mature, and in the meantime, children will still be in harm’s way.</p>
<h2>What about the kids?</h2>
<p>It used to be such a trope of political campaigns and social movements to say “what about the children?” that it is almost beyond parody. I personally have mocked the phrase because it’s so often deployed in bad faith, to short-circuit complicated topics and suppress debate. But this is that rare circumstance where things are actually not that complicated. Simply discussing the reality of what these products do should be enough.</p>
<p>People will say, “but it’s inevitable! These products will just have these problems sometimes!” And that is simply false. There are <em>already</em> products on the market that don’t have these egregious moral failings. More to the point, even if it were true that these products couldn’t exist without killing or harming children — then that’s a reason not to ship them at all.</p>
<p>If it is, indeed absolutely unavoidable that, for example, ChatGPT has to advocate violence, then let’s simply attach a rule in the code that modifies it to change the object of the violence to be Sam Altman. Or your boss. I suspect that if, suddenly, the chatbot deployed to every laptop at your company had a chance of suggesting that people cause bodily harm to your CEO, people would suddenly figure out a way to fix that bug. But somehow when it makes that suggestion about your 12-year-old, this is an insurmountably complex challenge.</p>
<p>We can expect things to get worse before they get better. OpenAI has already announced that it is going to be allowing people to generate sexual content on its service for a fee later this year. To their credit, when doing so, they stated <a href="https://openai.com/index/combating-online-child-sexual-exploitation-abuse/">their policy</a> prohibiting the use of the service to generate images that sexualize children. But the service they’re using to ensure compliance, <a href="https://www.thorn.org">Thorn</a>, whose product is meant to help protect against such content, was conspicuously silent about Musk’s recent foray into generating sexualized imagery of children. An organization whose <em>entire purpose</em> is preventing this kind of material, where every public message they have put out is decrying this content, somehow falls mute when the world’s richest man carries out the most blatant launch of this capability ever? If even the watchdogs have lost their voice, how are regular people supposed to feel like they have a chance at fighting back?</p>
<p>And then, if no one is reining in OpenAI, and they have to keep up with their competitors, and the competition isn’t worried about silly concerns like ethics, and the other platforms are selling child exploitation material, and all of the product mangers are Meta alumni who know that they can just keep gaming the terms of service if they need to, and laws aren’t being enforced, and all the product managers making the product learned to make decisions while they were at Meta… well, will you be surprised?</p>
<h2>How do we move forward?</h2>
<p>It should be an industry-stopping scandal that this is the current state of two of the biggest players in the most-hyped, most-funded, most consequential area of the entire business world right now. It should be <em>unfathomable</em> that people are thinking about deploying these technologies in their businesses — in their schools! — or integrating these products into their own platforms. And yet I would bet that the vast majority of people using these products have no idea about these risks or realities of these platforms at all. Even the vast majority of people who <em>work in tech</em> probably are barely aware.</p>
<p>What’s worse is, the majority of people I’ve talked to in tech, who <em>do</em> know about this have not taken a single action about it. Not one.</p>
<p>I’ll be following up with an entire list of suggestions about actions we can take, and ways we can push for accountability for the bad actors who are endangering kids every day. In the meantime, reflect for yourself about this reality. Who will you share this information with? How will this change your view of what these companies are? How will this change the way you make decisions about using these products? Now that you know: what will you do?</p>

    ]]></content>
    </entry>
    <entry>
        <title>Taking action against AI harms</title>
        <link href="https://anildash.com/2026/02/23/taking-action-ai-harms/"/>
        <updated>2026-02-24T00:00:00Z</updated>
        <id>https://anildash.com/2026/02/23/taking-action-ai-harms/</id>
        <content type="html"><![CDATA[
      <p>In my last piece, I talked about <a href="https://www.anildash.com/2026/02/18/threatening-kids-with-ai/">the harms that AI is visiting on children</a> through the irresponsible choices made by the platforms creating those products. While we dove a bit into the incentives and institutional pressures that cause those companies to make such wildly irresponsible decisions, what we haven’t yet reckoned with is how we hold these companies accountable.</p>
<p>Often, people tell me they feel overwhelmed at the idea of trying to engage with getting laws passed, or fighting a big political campaign to rein in the giant tech companies that are causing so much harm. And grassroots, local organizing can be <a href="https://patch.com/new-jersey/newbrunswick/new-brunswick-city-council-kills-proposal-build-ai-data-center-100-jersey">extraordinarily effective</a> in standing up for the values of your community against the agenda of the Big AI companies.</p>
<p>But while I think it’s vital that we pursue systemic justice (and it’s the only way to stop many kinds of harm), I do understand the desire for something more immediate and human-scale. So, I wanted to share some direct, personal actions that you can take to respond to the threats that Big AI has made against kids. Each of these tactics have been proven effective by others who have used the same strategies, so you can feel confident when adapting these for your own use.</p>
<h2>Get your company off of Twitter / X</h2>
<p>If your company or organization maintains a presence on Twitter (or X, as they have tried to rename themselves), it is important to protect yourself, your coworkers, and also your employer from the risks of being on the platform. Many times, leadership in organizations have an outdated view of the platform that is uninformed about the current level of danger and harm presented by participating on the social network, and an accurate description of the problem can often be effective in driving a decision to make a change.</p>
<p>Here is some dialogue you can use or modify to catalyze a productive conversation at work:</p>
<blockquote>
<p>Hi, [name]. I saw a while ago that Twitter is being investigated in multiple countries around the world for having generated explicit imagery of women and children. The story even said that their CEO reinstated the account of a user who had shared child exploitation pictures on the site, and monetized the account that had shared the pictures.</p>
</blockquote>
<blockquote>
<p>Can you verify that our team is required to be on the service even though there is child abuse imagery on the site? I know that Musk’s account is shown to everyone on Twitter, so I’m concerned we’ll see whatever content he shares or retweets. Should I forward any of the child abuse material that I encounter in the course of carrying out the duties of my role to HR or legal, or both? And what is our reporting process for reporting this kind of material to the authorities, as I haven’t been trained in any procedures around these kinds of sensitive materials?</p>
</blockquote>
<p>That should be enough to trigger a useful conversation at your workplace. (You can share <a href="https://www.cnbc.com/2026/01/05/india-eu-investigate-musks-x-after-grok-created-deepfake-child-porn.html">this link</a> if they want a credible, business-minded link to reference.)  If they need more context about the burden on workers, you can also mention the fact that content moderators who have to interact with this kind of content have had <a href="https://citizensandtech.org/2024/02/measuring-trauma-among-the-internets-first-responders/">serious issues with trauma</a>, according to many academic studies. There is also the risk of employees and partners having concerns about nonconsensual imagery being generated from their images if the company posts anything on Twitter that features their faces or bodies. As <a href="https://www.liberalcurrents.com/the-new-epstein-island-is-right-in-your-pocket-its-time-to-abandon-elon-musks-paradise-of-abuse/">some articles have noted</a>, the Grok AI tool that Twitter uses is even designed to permit the creation of imagery that makes its targets look like the victims of violence, including targets who are underage.</p>
<p>As a result, your emails to your manager should CC your HR team, and should make explicit that you don’t wish to be liable for the risks the company is taking on by remaining on the platform. Talk to your coworkers, and share this information with them, and see if they will join you in the conversation. If you’re able to, it’s not a bad idea to look up a local labor lawyer and see if they’re willing to talk to you for free in case you need someone to CC on an email while discussing these topics. Make your employers say to you, explicitly, that the decision to remain on the platform is theirs, that they’re aware of the risks, that they indemnify you of those risks. You should ask that they take on accountability for burdens like legal costs or even psychological counseling for the real and severe impacts that come from enduring the harms that crimes like those enabled by Twitter can cause.</p>
<p>All of these strategies can also apply to products that integrate with Twitter’s service at a technical level, for sharing content or posting tweets, or for technical platforms that try to use Grok’s AI features. If you are a product manager, or know a product manager, that is considering connecting to a platform that makes child abuse material, you have failed at the most fundamental tenet of your craft. If you work at a company that has incorporated these technologies, file a bug mentioning the issues listed above, and again, CC your legal team and mention these concerns. “Our product might plug in to a platform that generates CSAM” is a show-stopping bug for any product, and any organization that doesn’t understand that is fundamentally broken.</p>
<p>Once you catalyze this conversation, you can begin mapping out a broader communication strategy that takes advantage of the many excellent options for replacing this legacy social media channel.</p>
<h2>Stop your school from using ChatGPT</h2>
<p>An increasing number of schools are falling prey to the “AI is inevitable!” rhetoric and desperately chasing the idea of putting AI tools into kids’ hands. Worse, a lot of schools think that the only kinds of technology that exist are the kinds made by giant tech companies. And because many of the adults making the decisions about AI are not necessarily experts in every detail of every technology, the decision about <em>which</em> AI platforms to use often comes down to which ones people have heard about the most. For most people, that means ChatGPT, since it’s gotten the most free hype from media.</p>
<p>As a result, many schools and educational institutions are considering the deployment of a platform that has told multiple children to self-harm, including several who have taken their own lives. This is something that you can take action about at your kid’s school.</p>
<p>First, you can begin simply by gathering resources. There are <a href="https://www.nytimes.com/2025/08/26/technology/chatgpt-openai-suicide.html?unlocked_article_code=1.M1A.S4zx.M-CdIbTK0GGI&amp;smid=url-share">many</a> <a href="https://www.nytimes.com/2025/11/06/technology/chatgpt-lawsuit-suicides-delusions.html?unlocked_article_code=1.M1A.-92e.rGfKZMgP6nE9&amp;smid=url-share">credible</a> stories which you can share to illustrate the risk to administrators, and to other parents. Typically, apologists for this product will raise a few objections, which you can respond to in a thoughtful way:</p>
<ul>
<li>“Maybe those kids were already depressed?” Several of the children who have been impacted by these tools were introduced to them as homework assistants, and only evolved into using them as emotional crutches at the prompting of the responses from the tool. Also: your school has children in it who are depressed, why are you willing to endanger them?</li>
<li>“Doesn’t every tool cause this?” No, this is extreme and unusual behavior. Your email software or word processor have never incited your children to commit violence against anyone, let alone themselves. Not even other LLMs prompt this behavior. And again, even if this <em>did</em> happen with every tool in this category, why would that make it okay? If every pill in a bottle is poisonous, does that make it okay to give the bottle of pills to our kids?</li>
<li>“They’ll be missing out on the future.” Ask the parents of the children impacted in these stories about their kids’ futures.</li>
<li>“We should just roll it out as a test.” Who will pay for monitoring all usage by all students in the test?</li>
<li>“It’s a parent’s responsibility.” Forcing a parent to invest hours of time into learning a cutting-edge technology that is being constantly updated is a full-time job. If you are going to burden them with that level of responsibility, how will you provide resources to support them? What is your plan to communicate this responsibility to them and get their consent so they can agree to take on this responsibility?</li>
<li>“The company said it’s working on the problem.” They can change their technology so that it only incites violence against their executives, or publish a notice when it has gone a full year without costing any children their lives. At that point, they may be considered for re-evaluation.</li>
</ul>
<p>With these responses in hand, you can provide some basic facts about the risks of the specific tool or platform that is being recommended, and help present a cogent argument against its deployment. It’s important to frame the argument in terms of child safety — the conventional arguments against LLMs, grounded in concerns like environmental impact, labor impact, intellectual property rights, or other similar issues tend to be dismissed out of hand due to effective propagandizing by Big AI advocates.</p>
<p>If, instead, you ignore the debate about LLMs and focus on real-world safety concerns based on actual threats that have happened to actual children, you should be able to have a very direct impact. And these are messages that others will generally pick up and amplify as well, whether they are fellow parents, or local media.</p>
<p>From here, you can begin a conversation that re-evaluates the <em>goals</em> of the initiative from first principles. &quot;Everyone else is doing it&quot; is not a valid way of advocating for technology, and even if they feel that LLMs are a technology that students should become familiar with, they should begin by engaging with the many resources on the topic created by academics who are not tied to the Big AI companies.</p>
<h2>You have power</h2>
<p>The key reason I wanted to capture some specific actions that people can take around responding to the harms that Big AI poses towards children is to remind us all that the power to take action lies in everyone’s hands. It’s not an abstract concept, or a theoretical thing that we have to wait for someone else to do.</p>
<p>We are in an outrageous place, where the actions of some of the biggest and most influential technology companies in the world are so beyond the pale that we can’t even discuss the things that they are doing in polite company. The actions that take place on these platforms used to mean that simply <em>accessing</em> these kinds of sites during one’s workday would be a firing offense. Now we have employers and schools trying to <em>require</em> people to use these things.</p>
<p>The pushback has to come at every level. Do talk to your elected officials. Do organize with others at your local level. If you work in tech, make sure to resist every attempt at normalizing these platforms, or incorporating their technologies into your own.</p>
<p>Finally, use your voice and your courage, and trust in your sense of basic decency. It might only take you a few minutes to draft up an email and send it to the right people. If you need help figuring out who to send it to, or how to phrase it, let me know and I’ll help! But these things that feel small can be quite enormous when they all add up together. And that’s exactly what our kids deserve.</p>

    ]]></content>
    </entry>
    <entry>
        <title>Talking through the tech reckoning</title>
        <link href="https://anildash.com/2026/02/25/talking-through-the-tech-reckoning/"/>
        <updated>2026-02-26T00:00:00Z</updated>
        <id>https://anildash.com/2026/02/25/talking-through-the-tech-reckoning/</id>
        <content type="html"><![CDATA[
      <p>Many of the topics that we’ve all been discussing about technology these days seem to matter so much more, and the stakes have never been higher. So, I’ve been trying to engage with more conversations out in the world, in hopes of communicating some of the ideas that might not get shared from more traditional voices in technology. These recent conversations have been pretty well received, and I hope you’ll take a minute to give them a listen when you have a moment.</p>
<h2>Galaxy Brain</h2>
<p>First, it was nice to sit down with Charlie Warzel, as he invited me to speak with him on <a href="https://www.theatlantic.com/podcasts/2026/02/the-ai-panic-cycle-and-whats-actually-different-now/686077/?gift=apxH5R6bxFb7BY7F-EpWnOKasXuqQ1RVEcCy4QH0pq8">Galaxy Brain</a> (full transcript at that link), his excellent podcast for The Atlantic. The initial topic was some of the alarmist hype being raised around AI within the tech industry right now, but we had a much more far-ranging conversation, and I was particularly glad that I got to articulate my (somewhat nuanced) take on the rhetoric that many of the Big AI companies push about their LLM products being “inevitable”.</p>
<p>In short, while I think it’s important to fight their narrative that treats big commercial AI products as inevitable, I don’t think it will be effective or successful to do so by trying to stop regular people from using LLMs at all. Instead, I think we have to pursue a third option, which is a multiplicity of small, independent, accountable and purpose-built LLMs. By analogy, the answer to unhealthy fast food is good, home-cooked meals and neighborhood restaurants all using local ingredients.</p>
<p>The full conversation is almost 45 minutes, but I’ve cued up the section on inevitability here:</p>
<iframe src="https://www.youtube-nocookie.com/embed/kNdjLf4f0uU?t=2053 s" title="YouTube video player" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture; web-share" allowfullscreen class="video"></iframe>
<h2>Revolution Social</h2>
<p>Next up, I got to reconnect with Rabble, whom I’ve known since the earliest days of social media, for his podcast <a href="https://revolution.social/episodes/silicon-valley-has-lost-its-moral-compass-with-ani/">Revolution.Social</a>. The framing for this episode was “Silicon Valley has lost its moral compass” (did it have one? Ayyyyy) but this was another chance to have a wide-ranging conversation, and I was particularly glad to get into the reckoning that I think is coming around intellectual property in the AI era. Put simply, I think that the current practice of wholesale appropriation of content from creators without consent or compensation by the AI companies is simply untenable. If nothing else, as normal companies start using data and content, they’re going to <em>want</em> to pay for it just so they don’t get sued and so that the quality of the content they’re using is of a known reliability. That will start to change things from he current Wild West “steal all the stuff and sort it out later” mentality.

It will not surprise you to find out that I illustrated this point by using examples that included… Prince and Taylor Swift. But there’s lots of other good stuff in the conversation too! Let me know what you think.</p>
<iframe src="https://www.youtube-nocookie.com/embed/NhBykJqOqAc?t=1560s" title="YouTube video player" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture; web-share" allowfullscreen class="video"></iframe>
<h2>What’s next?</h2>
<p>As I’ve been writing more here on my site again, many of these topics seem to have resonated, and there have been some more opportunities to guest on podcasts, or invitations to speak at various events. For the last several years, I had largely declined all such invitations, both out of some fatigue over where the industry was at, and also because I didn’t think I had anything in particular to say.</p>
<p>In all honesty, these days it feels like the stakes are too high, and there are too few people who are addressing some of these issues, so I changed my mind and started to re-engage. I may well be an imperfect messenger, and I would eagerly pass the microphone to others who want to use their voices to talk about how tech can be more accountable and more humanist (if that’s you, let me know!). But if you think there’s value to these kinds of things, let me know, or if you think there are places where I should be getting the message out, do let them know, and I’ll try to do my best to dedicate as much time and energy as I can to doing so. And, as always, if there’s something I could be doing better in communicating in these kinds of platforms, your critique and comments are always welcome!</p>

    ]]></content>
    </entry>
    <entry>
        <title>A Cookie for Dario? — Anthropic and selling death</title>
        <link href="https://anildash.com/2026/02/27/a-cookie-for-dario/"/>
        <updated>2026-02-28T00:00:00Z</updated>
        <id>https://anildash.com/2026/02/27/a-cookie-for-dario/</id>
        <content type="html"><![CDATA[
      <p>A big tech headline this week is Anthropic (makers of Claude, widely regarded as one of the best LLM platforms) resisting Secretary of Defense Pete Hegseth’s calls to modify their platform in order to enable it to support <a href="https://www.politico.com/news/2025/11/30/war-crimes-hegseth-venezuela-strikes-00671160">his commission</a> of <a href="https://www.newyorker.com/news/q-and-a/the-legal-consequences-of-pete-hegseths-kill-them-all-order">war crimes</a>. As has become clear this week, Anthropic CEO Dario Amodei has <a href="https://www.nytimes.com/2026/02/26/technology/anthropic-pentagon-talks-ai.html?unlocked_article_code=1.PVA.ao-a.26AX1P-gLWlH&amp;smid=url-share">declined to do so</a>. The administration couches the request as an attempt to use the technology for “lawful purposes”, but given that they’ve also described their recent crimes as legal, this is obviously not a description that can be trusted.</p>
<p>Many people have, understandably, rushed to praise Dario and Anthropic’s leadership for this decision. I’m not so sure we should be handing out a cookie just because someone is saying they’re not going to let their tech be used to cause extrajudicial deaths.</p>
<p>To be clear: I am glad that Dario, and presumably the entire Anthropic board of directors, have made this choice. However, I don’t think we need to be overly effusive in our praise. The bar cannot be set so impossibly low that we celebrate merely refusing to directly, intentionally enable war crimes like the repeated bombing of unknown targets in international waters, in direct violation of both U.S. and international law. This is, in fact, basic common sense, and it’s shocking and inexcusable that any other technology platform <em>would</em> enable a sitting official of any government to knowingly commit such crimes.</p>
<p>We have to hold the line on normalizing this stuff, and remind people where reality still lives. This means we can recognize it as a positive move when companies do the reasonable thing, but also know that <em>this is what we should expect</em>. It’s also good to note that companies may have <em>many</em> reasons that they don’t want to sell to the Pentagon in addition to the obvious moral qualms about enabling an unqualified TV host who’s <a href="https://www.newyorker.com/news/news-desk/pete-hegseths-secret-history">drunkenly stumbling</a> his way through playacting as Secretary of Defense (which they insist on dressing up as the “Department of War” — <a href="https://www.wired.com/story/department-of-defense-department-of-war/">another lie</a>).</p>
<h2>Selling to the Pentagon sucks</h2>
<p>Being on <em>any</em> federal procurement schedule as a technology vendor is a tedious nightmare. There’s endless paperwork and process, all falling squarely into the types of procedures that a fast-moving technology startup is likely to be particularly bad at completing, with very few staff members having had prior familiarity handling such challenges. Right now, Anthropic handles most of the worst parts of these issues through partners like Amazon and Palantir. Addressing more of these unique and tedious needs for a demanding customer like the Pentagon themselves would almost certainly require blowing up the product roadmap or hiring focus within Anthropic for months or more, potentially delaying the release of cool and interesting features in service of boring (or just plain evil) capabilities that would be of little interest to 99.9% of normal users. Worse, if they have to <em>build</em> these features, it could exhaust or antagonize a significant percentage of the very expensive, very finicky employees of the company.</p>
<p>This is a key part of the calculus for Anthropic. A big part of their entire brand within the tech industry, and a huge part of why they’re appreciated by coders (in addition to the capabilities of their technology), is that they’re the “we don’t totally suck” LLM company. Think of them as “woke-light”. Within tech, as there have been <a href="https://www.anildash.com/2026/01/06/500k-tech-workers-laid-off/">massive waves of rolling layoffs</a> over the last few years, people have felt terrified and unsettled about their future job prospects, even at the biggest tech companies. The only opportunities that feel relatively stable are on big AI teams, and most people of conscience don’t want to work for the ones that <a href="https://www.anildash.com/2026/02/18/threatening-kids-with-ai/">threaten kids’ lives or well-being</a>. That leaves Anthropic alone amongst the big names, other than maybe Google. And Google has <a href="https://layoffs.fyi">laid off people <em>at least 17 times</em></a> in the last three years alone.</p>
<p>So, if you’re Dario, and you want to keep your employees happy, and maintain your brand as the AI company that doesn’t suck, and you don’t want to blow up your roadmap, and you don’t want to have to hire a bunch of pricey procurement consultants, and you can stay focused on your core enterprise market, <em>and</em> you can take the right moral stand? It’s a pretty straightforward decision. It’s almost, I would suggest, an easy decision.</p>
<h2>How did we get here?</h2>
<p>We’ve only allowed ourselves to lower the bar this far because so many of the most powerful voices in Silicon Valley have so completely embraced the authoritarian administration currently in power in the United States. Facebook’s role in <a href="https://www.amnesty.org/en/latest/news/2022/09/myanmar-facebooks-systems-promoted-violence-against-rohingya-meta-owes-reparations-new-report/">enabling the Rohingya genocide</a> truly served as a tipping point in the contemporary normalization of major tech companies enabling crimes against humanity that would have been unthinkable just a few years prior; we can’t picture a world where MySpace helped accelerate the Darfur genocide, because the Silicon Valley tech companies we know about today didn’t yet aspire to that level of political and social control. But there are deeper precedents: IBM provided technology that helped enable the horrors of <a href="https://en.wikipedia.org/wiki/IBM_and_World_War_II">the holocaust in Germany</a> in the 1940s, and that served as the template for their work implementing <a href="https://www.eff.org/deeplinks/2015/02/eff-files-amicus-brief-case-seeks-hold-ibm-responsible-facilitating-apartheid">apartheid in South Africa</a> in the 1970s. IBM actually <em>bid</em> for the contract to build these products for the South African government. And the systems IBM built were still in place when Elon Musk, Peter Thiel, David Sacks and a number of other Silicon Valley tycoons all lived there during their formative years. Later, as they became the vaunted “PayPal Mafia”, today’s generation of Silicon Valley product managers were taught to look up to them, so it’s no surprise that their acolytes have helped create companies that enable mass persecution and surveillance. But it’s also why one of the first big displays of worker power in tech was when many across the industry <a href="https://www.vox.com/recode/2019/10/9/20906605/github-ice-contract-immigration-ice-dan-friedman">stood up against contracts with ICE</a>. That moment was also one of the catalyzing events that drove the tech tycoons into <a href="https://www.anildash.com/2023/07/07/vc-qanon/">their group chats</a> where they collectively decided that they needed to bring their workers to heel.</p>
<p>And they’ve escalated since then. Now, the richest man in the world, who is CEO of a few of the biggest tech companies, including one of the most influential social networks — and a major defense vendor to the United States government — has been <a href="https://www.bbc.com/news/articles/c5ydddy3qzgo">openly inciting</a> <a href="https://caliber.az/en/post/elon-musk-warns-america-on-brink-of-second-civil-war">civil war</a> <a href="https://www.nbcnews.com/tech/internet/elon-musk-predicting-civil-war-europe-nearly-year-rcna165469"><em>for years</em></a> on the basis of his racist conspiracy theories. The other tech tycoons, who look to him as a role model, think they’re being reasonable by comparison in the fact that they’re only enabling mass violence indirectly. That’s shifted the public conversation into such an extreme direction that we think it’s a <em>debate</em> as to whether or not companies should be party to crimes against humanity, or whether they should automate war crimes. No, they shouldn’t. This isn’t hard.</p>
<p>We don’t have to set the bar this low. We have to remind each other that this isn’t <em>normal</em> for the world, and doesn’t have to be normal for tech. We have to keep repeating the truth about where things stand, because too many people have taken this twisted narrative and accepted it as being real. The majority of tech’s biggest leaders are acting and speaking far beyond the boundaries of decency or basic humanity, and it’s time to stop coddling their behavior or acting as if it’s tolerable.

In the meantime, yes, we can note when one has the temerity to finally, finally do the right thing. And then? Let’s get back to work.</p>

    ]]></content>
    </entry>
    <entry>
        <title>Why Apple’s move to video could endanger podcasting&#39;s greatest power</title>
        <link href="https://anildash.com/2026/02/28/apple-video-podcast-power/"/>
        <updated>2026-02-28T00:00:00Z</updated>
        <id>https://anildash.com/2026/02/28/apple-video-podcast-power/</id>
        <content type="html"><![CDATA[
      <p>TL;DR:</p>
<ul>
<li>Apple is adding support for video podcasts to their podcast app</li>
<li>Podcasts are built on an open standard, which is why they aren’t controlled by a bad algorithm and don’t have ads that spy on you</li>
<li>Apple’s new system for video podcasts breaks with the old podcast standard, and forces creators to host their video clips with a few selected companies</li>
<li>The stakes are even higher because all the indie video infrastructure companies have been bought by private equity, while Trump’s goons go after TV and consolidate the big studios</li>
<li>If Apple doesn’t open this up, it could lead to podcasts getting enshittified like all the other media</li>
</ul>
<h2>Podcasts are a radical gift</h2>
<p>As I noted back in 2024, the common phrase “wherever you get your podcasts” masks a subtle point, which is that podcasts are built on an open technology — a design which has radical implications on today’s internet. This is the reason that the podcasts most people consume aren’t skewed by creators chasing an algorithm that dictates what content they should create, aren’t full of surveillance-based advertising, and aren’t locked down to one app or platform that traps both creators and their audience within the walled garden of a single giant tech company.</p>
<p>Many of those merits of the contemporary podcast ecosystem are possible because of choices Apple made almost two decades ago when they embraced open standards in iTunes when adding podcasting features. Their outsized market influence (the term “podcast” itself came from the name iPod) pushed everyone else in the ecosystem to follow their lead, and as a result, we have a major media format that isn’t as poisoned, in some ways, as the rest of social media or even mainstream media.</p>
<p>Sure, there are individual podcast creators one might object to, but notice how you don’t see bad actors like FCC chairman Brendan Carr illegally throwing his weight around to try to censor and persecute podcasters in the same way that he’s been silencing television broadcasters, and you don’t see MAGA legislators trying to game the refs about the algorithm the way they have with Facebook and Twitter. Even the Elon Musks of the world <em>can’t</em> just buy up the whole world of podcasting like he was able to with Twitter, because the ecosystem is decentralized and not controlled by any one player. This is how the Internet was supposed to work. As early Internet advocates were fond of saying, the architecture of the Internet was designed to see censorship as damage, and route around it.</p>
<h2>The move to video</h2>
<p>All of this is at much higher risk now due to the technical decisions Apple has made with its <a href="https://www.apple.com/newsroom/2026/02/apple-introduces-a-new-video-podcast-experience-on-apple-podcasts/">move to support video podcasts</a> in its latest software versions that are about to launch. The motivations for their move are obvious: in recent years, many podcasters have moved to embrace new platforms to increase their distribution, reach, engagement and sponsorship dollars, and that has driven them to add video, which has meant moving to YouTube, and more recently, platforms like Netflix. That is also typically accompanied by putting out promotional clips of the video portion of the podcast on platforms like TikTok and Instagram. Combined with Spotify’s acquisition of multiple studios in order to produce proprietary shows that are not podcasts, but exclusive content locked into their apps, and Apple has faced a significant number of threats to their once-dominant position in the space.</p>
<p>So it was inevitable that Apple would add video support to their podcasting apps. And it makes sense for Apple to update the technical underpinnings; the assumptions that were made when designing podcasts over two decades ago aren’t really appropriate for many contemporary uses.  For example, back then, by default an entire podcast episode would be downloaded to your iPod for convenient listening on the go, just like songs in your music library. But downloading a giant 4K video clip of an hour-long podcast show that you might not even watch, just in case you might want to see it, would be a huge waste of resources and bandwidth. Modern users are used to streaming everything. Thus, Apple updated their apps to support just grabbing snippets of video as they’re needed, and to their credit, Apple is embracing an open video format when doing so, instead of some proprietary system that requires podcasters to pay a fee or get permission.</p>
<p>The problem, though, is that Apple is only allowing these new video streams to be served by <a href="https://podcasters.apple.com/partner-search">a small number of pre-approved commercial providers</a> that they’ve hand-selected. In the podcasting world, there are no gatekeepers; if I want to start a podcast today, I can publish a podcast feed here on <code>anildash.com</code> and put up some MP3s with my episodes, and anyone anywhere in the world can subscribe to that podcast, I don’t have to ask anyone’s permission, tell anyone about it, or agree to anyone’s terms of service.</p>
<p>If I want to publish a <em>video</em> podcast to Apple’s new system, though, I can’t just put up a video file on my site and tell people to subscribe to my podcast. I have to sign up for one of the approved partner services, agree to their terms of service, pay their monthly fee, watch them get acquired by Facebook, wait for the stupid corporate battle between Facebook and Apple, endure the service being enshittified, have them put their thumb on the scale about which content they want to promote, deal with my subscribers being spied on when they watch my show, see Brendan Carr make up a pretense to attack the platform I’m on, watch the service use my show to cross-promote violent attacks on vulnerable people, and the entire rest of <a href="https://www.anildash.com/2022/02/09/the-stupid-tech-content-culture-cycle/">that broken tech/content culture cycle</a>.</p>
<p>We <em>don’t have to do this</em>, Apple!</p>
<h2>How this plays out</h2>
<p>What will happen, by default, if Apple doesn’t change course and add support for open video hosting for podcasts is a land grab for control of the infrastructure of the new, closed video podcast technology platform. Some of the bidders may be players that want to own podcasting (Spotify, Netflix, maybe legacy media companies like Disney and Paramount), or a roll-up from a cloud provider like AWS or Google Cloud. Either way, the services will get way more expensive for creators, and far more conservative about what content they allow, while being far more consumer-hostile in terms of privacy and monetization. We’ve seen this play out already — video shows on YouTube give advertisers massive amounts of data about viewers, while podcasts can be delivered to an audience while almost totally preserving their privacy, if a creator wants to help them preserve their anonymity. The reason you see podcasters always talking about “use our promo code” in their sponsor reads is because <em>advertisers can’t track you</em> going from their show to their website.</p>
<p>This will also start to impact content. You <em>don’t</em> hear podcasters saying “unalive” or censoring normal words because there is no algorithm that skews the distribution of their content. The promotional graphics for their shows are often downright boring, and don’t feature the hosts making weird faces like on YouTube thumbnails, because they haven’t been optimized to within an inch of their lives in hopes of getting 12-year-olds to click on them instead of Mr. Beast — because they’re not trying to chase algorithmic amplification. The closest thing that podcasters have to those kinds of games is when they ask you to rate them in Apple’s Podcasts app, because <em>that</em> has an algorithm for making recommendations, but even that is mediated by real humans making actual choices.</p>
<p>But once we’ve got a layer of paid intermediaries distributing video content, and Apple leans more heavily into the visual aspects of their podcast app, incentives are going to start to shift rapidly. Today, other than on laptops, phones and tablets, Apple Podcasts app only exists on their Apple TV hardware, and doesn’t even have a video playback feature. By contrast, a <em>lot</em> of video podcast consumption happens in YouTube’s TV apps in the living room. Apple Podcasts will soon have to be on every set top device like Roku sticks and Amazon Fire TVs and Google’s Chromecasts, as well as on smart TVs like Samsungs and LGs, with a robust video playback feature that can compete with YouTube’s own capabilities. Once that’s happened — which will take at least a year, if not multiple years — creators will immediately begin jockeying for ways to get promoted or amplified within that ecosystem. Even if Apple <em>has</em> allowed independent publishers to make their own video podcast feeds, it’s easy to imagine them treating them as second-class citizens when distributing those podcasts to all of the Apple Podcast users across all of these platforms.</p>
<p>The stakes for all of this are even higher because nearly all of the independent online platforms for video creation outside of YouTube have been <a href="https://youtu.be/bx5bD7F8zvE">bought up by a single private equity firm</a>. In short: even if you don’t know it, if you’re trying to do video off of YouTube, all of your eggs are in one, very precarious, basket.</p>
<h2>What to do</h2>
<p>Apple can mitigate the risks of closing up podcasts by moving as quickly as possible to reassure the entire podcasting ecosystem that they’ll allow creators to use <em>any</em> source for hosting video. Right now, there’s a “fallback” video system where creators can deliver video through the traditional podcast standard, and other podcasting apps will show that video to audiences, but Apple’s apps don’t recognize it. If Apple said they’d support that specification as a second option for those who don’t want to, or can’t, use their video hosting partners, that would go a huge way towards mitigating the ecosystem risk that they’re introducing with this new shift.</p>
<p>If Apple can engage with a wide swath of creators and understand the concerns that are bubbling up, and articulate that they’re aware of the real, significant risks that can arise from the path that they’re currently on, they still have a chance to course-correct.</p>
<p>Some of these decisions can seem like arcane technical discussions. It’s easy to roll your eyes when people talk about specifications and formats and the minutiae of what happens behind the scenes when we click on a link. But the history of the Internet has shown us that, sometimes, even some of what seem like the most inconsequential choices end up leading to massive shifts in a larger ecosystem, or even in culture overall.</p>
<p>A generation ago, a few people at Apple made a choice to embrace an open ecosystem that was in its infancy, and in so doing, they enabled an entire culture of creators to flourish for decades. Podcasting is perhaps the last major media format that is open, free, and not easily able to be captured by authoritarians. The stakes couldn’t be higher. All it takes now is a few decision makers pushing to do the right thing, not just the easy thing, to protect an entire vital medium.</p>

    ]]></content>
    </entry>
    <entry>
        <title>The Neo solves Apple’s embarrassment</title>
        <link href="https://anildash.com/2026/03/08/neo-apple-embarassment/"/>
        <updated>2026-03-08T00:00:00Z</updated>
        <id>https://anildash.com/2026/03/08/neo-apple-embarassment/</id>
        <content type="html"><![CDATA[
      <p>Last week, Apple released a parade of hardware announcements, and the one that captured the most attention across the industry was the $600 ($500 if you’re in education!) <a href="https://amzn.to/46K9mbt">MacBook Neo</a>, the brightly-colored low-end laptop that they launched to great fanfare. The conventional wisdom is that this product opens up Apple to the low end of the laptop market for the first time, radically changing the dynamics of the entire market, and throwing down the gauntlet to the garbage Windows laptop market, as well as challenging a huge swath of Chromebooks which tend to dominate in the education market. This is incorrect.</p>
<p>Apple has, in fact, sold a MacBook Air with an M1 chip <a href="https://www.macworld.com/article/2986234/walmart-m1-macbook-air-too-good-to-be-true.html">at Walmart</a> for <em>years</em>, which it has intermittently discounted to $499 at key times like Black Friday and Cyber Monday. The single-core performance of that laptop (meaning, how it works for most normal tasks that people do, like browsing the web or writing email or watching YouTube videos), is very nearly equivalent to the newly-released MacBook Neo.</p>
<p>But. A laptop with an old design, using a chip that has an old number (the M1 chip came out six years ago!), sold exclusively through a mass-market retailer that is perceived as anything but premium, presents an enormous brand challenge for Apple. It is, to put it simply, <em>embarrassing</em>. Apple can have low-end products in its range. They invest lots of effort in that segment of their product line, as the new iPhone 17e shows, making a new basic entrant to their most recent series of phones. But Apple <em>can’t</em> have old, basic-looking products that people aren’t even able to buy at an Apple Store.</p>
<p>And that’s what Neo solves. It’s a smart reframing of a product that is nearly the same offering as the old M1 Air: the Neo and that old M1 machine both have 13” screens, both weigh just under 3 pounds, both have 8GB of RAM, both start at 256GB of storage, both have about 16 hours of battery life, are both about 8”x12”, both have 2 USB ports and a headphone jack, and both of course cost almost exactly the same. They did add a new yellow (citrus!) color for the Neo, though.</p>
<h2>Wake up, Neo</h2>
<p>What was more striking to me was <a href="https://www.youtube.com/watch?v=u3SIKAmPXY4">Apple’s introductory video</a>, which clearly seems aimed at people who are new to Apple computers, or maybe people who are new to laptop computers entirely. They’re imagining a user base who’s only ever had their smartphones and are buying computers for the first time — which might describe a lot of students. There’s no discussion here of the chamfers of the aluminum, or the pipelines in the GPU cores, and there’s barely even the slightest mention of AI; instead, they describe the basics of what the laptop includes, and even go out of their way to explain how it interoperates with an iPhone.</p>
<p>There’s also a very clear attempt to distinguish Neo’s branding from the rest of Apple’s design language. The type for the “MacBook Neo” name in the launch video, and the “Hello, Neo” text on the <a href="https://www.apple.com/macbook-neo/">product homepage</a> are a rounded typeface that’s so new that it’s not actually even an actual font that Apple’s using; they’ve rendered it as an image instead of a variation of their usual “<a href="https://developer.apple.com/fonts/">San Francisco</a>” font that Apple uses for everything else in their standard marketing materials. The throwback to 2000s-era design (terminal green, the word “Neo” — are we entering the Matrix?) couldn’t be more different from the “it looks expensive” vibes of something like the <a href="https://www.apple.com/apple-watch-hermes/">Apple Watch Hermès</a> branding.</p>
<p>In all, it’s pretty impressive to see Apple use its marketing strengths to take a product that is remarkably similar to something that they’ve had for sale for years at the largest retailer in the world, and position it as a brand-new, category-defining new entry into a space. To me, the biggest thing this shows is the blind spot that traditional tech trade press has to the actual buying patterns and lived experience of normal people who shop at Walmart all the time; it would be pretty hard to see Neo as particularly novel if you had walked by a Walmart tech section any time in the last three years.</p>
<p>At a time when Apple has <a href="https://www.anildash.com/2025/09/09/how-tim-cook-sold-out-steve-jobs/">lost whatever moral compass it had</a>, even though its machines still say “privacy is a human right” when you turn them on, we still want to see positive signs from the company. And a good one is that Apple is engaging with the reality that the current moment calls for products that are far more affordable. It is a good thing indeed when affordable products are presented as being desirable, when most of the product’s enclosure is made of recycled material, and when the lifespan of a product can be expected to be significantly longer than most in its category, instead of simply being treated as disposable. All it took was removing the stigma over the existing affordable laptop that Apple’s been selling for years.</p>

    ]]></content>
    </entry>
    <entry>
        <title>What do coders do after AI?</title>
        <link href="https://anildash.com/2026/03/13/coders-after-ai/"/>
        <updated>2026-03-13T00:00:00Z</updated>
        <id>https://anildash.com/2026/03/13/coders-after-ai/</id>
        <content type="html"><![CDATA[
      <p>For the New York Times Magazine this Sunday, <a href="https://www.nytimes.com/2026/03/12/magazine/ai-coding-programming-jobs-claude-chatgpt.html?unlocked_article_code=1.SlA.gzDD.giRxmN2oQFcF&amp;smid=url-share">I talked to Clive Thompson</a> about one of the conversations that I'm having most often these days: What happens to coders in this current moment of extraordinarily rapid evolution in AI? LLMs are now quickly advancing to where they can virtually become entire software factories, radically changing both the economics and the power dynamics of software creation — which has so far mostly been used to displace massive numbers of tech workers.</p>
<p>But it's not so simple as &quot;bosses are firing coders now that AI can write code&quot;.</p>
<p>For one thing, though there are certainly a lot of companies where executives are forcing teams to churn out slop code, and using that as an excuse to carry out mass layoffs, there are plenty of companies where &quot;AI&quot; is just a buzzword being used as a pretense for layoffs that owners have wanted to do anyway. And more importantly, there are a growing number of coders who are having a very <em>different</em> experience with the tools than those bosses may have expected — and a very different outcome than the Big AI labs may have intended. As I said in the story:</p>
<blockquote>
<p>“The reason that tech generally — and coders in particular — see LLMs differently than everyone else is that in the creative disciplines, LLMs take away the most soulful human parts of the work and leave the drudgery to you,” Dash says. “And in coding, LLMs take away the drudgery and leave the human, soulful parts to you.”</p>
</blockquote>
<p>This is a point that's hard for a lot of my artist friends to understand: how come so many coders don't just hate LLMs for stealing their work the way that most writers and photographers and musicians do? The answer boils down to three things:</p>
<ul>
<li>Coders have long had a history of openly sharing code with each other, as part of an open source, collaborative culture that goes back for more than half a century.</li>
<li>Tools for writing and creating code have almost always offered a certain degree of automation and reuse of work, so generating code doesn't feel like as radical a departure from past practices.</li>
<li>Software development is one of the fields with the least-advanced cultures around labor, as workers have almost no history of organizing, and many coders tend to side much more with management as they've been conditioned to think of themselves as &quot;future founders&quot; rather than being in solidarity with other workers.</li>
</ul>
<p>What this means is, attitudes about automation and worker displacement in tech are radically different than they would be in something like the auto industry, and in many cases, I've found that being part of a coder workforce has meant witnessing a level of literacy about past labor movements that is shockingly low, even though their technical knowledge is obviously extremely high.</p>
<h2>Coders, in their heads and hearts</h2>
<p>To be somewhat reductive about it, there are two main cohorts of coders. A larger, less vocal, group who see coding as a stable, well-paying career that they got into in order to support themselves and their families, and to partake in the upward economic mobility that the tech sector has represented for the last few decades. Then there is the smaller, more visible, group who have seen coding as an avocation, which they were drawn to as a form of creative expression and problem-solving just as much as a career opportunity. They certainly haven't been reluctant to capitalize on the huge economic potential of working in tech — this is the group that most startup founders come from — but coding isn't simply something they do from 9 to 5 and then put away at the end of the day. For those of us in this group (yeah... I'm one of these folks), we usually started coding when we were kids, and we have usually kept doing it on nights and weekends ever since, even if it's not even part of our jobs anymore.</p>
<p>Both cohorts of coders are in for a hard time thanks to the new AI tools, but for completely different reasons.</p>
<h3>For the 9 to 5</h3>
<p>The people who started to write software just because it represented a stable job, but who don't see it as part of their own personal identity, are going to be devastated by the ruthlessness with which their bosses will swing the ax. These new LLM-powered software factories can generate orders of magnitude more of the standardized business code that tends to be the bread-and-butter work for these journeyman coders, and it's not the kind of displacement that can be solved by learning a new programming language on nights and weekends, or getting a new professional certification. Much of the &quot;working class&quot; tech industry (speaking of the roles they perform functionally within the system; these are obviously jobs that pay far more than working class salaries today) are seen as ripe targets for deskilling, where lower-paid product roles can delegate coding tasks to coding AI systems, or for being automated by management giving orders to those AI systems.</p>
<p>One of the hardest parts of reckoning with this change is not just the speed with which it is happening, but the level of cultural change that it reflects. Coders are generally very amenable to learning new skills; it's a necessary part of the work, and the mindset is almost never one of being change-averse. But the level at which the change is happening in this transition is one that gets closer to people's sense of self-worth and identity, rather than to their perceptions of simply having to acquire knowledge or skills. It doesn't help that the change is being catalyzed by some of the most venal and irresponsible leaders in the history of business, brazenly acting without any moral boundaries whatsoever.</p>
<h3>For the nights and weekends</h3>
<p>For the coders that see being a coder as part of their identity, the LLM transformation is going to represent an entirely different set of challenges. They may well survive the transition that is coming, but find themselves in an unrecognizable place on the other side of it. The way that these new LLM-based tools work is by turning into virtual software factories that essentially churn out nearly all of the code <em>for</em> you. The actual work of writing the code is abstracted away, with the creator essentially focused more on describing the desired end results, and making sure to test that everything is working correctly. You're more the conductor of the symphony than someone who's holding a violin.</p>
<p>But there are people who have spent decades honing their craft, committing to memory the most obscure vagaries of this computer processor or that web browser or that one gaming console, all in service of creating code that was particularly elegant or especially high-performing, or just <em>really satisfying</em> to write. There's a real art to it. When you get your code to run just so, you feel a quiet pride in yourself, and a sense of relief that there are still things in the world that work as they should. It's a little box that you can type in where things are fair. It's the same reason so many coders like to bake, or knit, or do woodworking — they're all hobbies where precisely doing the right thing is rewarded with a delightful result.</p>
<p>And now that's going away. You won't see the code yourself anymore, the robots will write it for you while falling around and clanking. Half the time, the code they write will be garbage, or nonsense. Slop. But it's so cheap to write that the computer can just throw it away and write some more, over and over, until it finally happens to work.  Is it elegant? Who cares? It's cheap. Ten thousand times cheaper than paying you to write it, so we can afford to waste a lot of code along the way.</p>
<p>Your job changes into <em>describing software</em>. Now, if you're the kind of person who only ever wanted to have the end result, maybe this is a liberation. Sometimes, that's what mattered — we wanted to fast-forward to the end result, elegance be damned. But if you were one of those crafters? The people who wrote idiomatic code that made that programming language sing? There's a real grief here. It's not as serious as when we know a human language is dying out, but it's not entirely dissimilar, either.</p>
<h2>If ... Then?</h2>
<p>What do we do about it? This horse is not going back in the barn. The billionaires wouldn't let it, anyway.</p>
<p>I've come to the personal conclusion that the only way forward is for more of the hackers with soul to seize this moment of flux and use these tools to build. The economics of creating code are changing, and it can't just be the worst billionaires in the world who benefit. The latest count is <em>700,000 people</em> laid off in the last few years in the tech industry. We'll be at a million soon, at the rate things are accelerating. Each new layoff announcement is now in the <em>thousands</em>.</p>
<p>It's not going to be a panacea for all the jobs lost, and it's not the only solution we're going to need, but one part of the answer can be coders who still give a damn looking out for each other, and building independent efforts without being reliant on the economics — or ethics — of the people who are laying off their colleagues by the hundreds of thousands.</p>
<p>I've spent my whole career working with communities of coders, building tools for the people who build with code. I don't imagine I'll ever stop doing it. This is the hardest moment that I've ever seen this community go through, and it makes me heartsick to see so many people enduring such stress and anxiety about what's to come. More than anything else, what I hope people can remember is that all of the great things that people love about technology weren't created by the money guys, or the bosses who make HR decisions — they were created by the people who actually build things. That's still an incredible superpower, and it will remain one no matter how much the actual tools of creation continue to change.</p>

    ]]></content>
    </entry>
</feed>
Raw text
<?xml version="1.0" encoding="utf-8"?>
<feed xmlns="http://www.w3.org/2005/Atom" xmlns:xml="http://www.w3.org/XML/1998/namespace" xml:base="https://anildash.com/">
  <title>Anil Dash</title>
  <subtitle>A blog about making culture. Since 1999.</subtitle>
  <link href="https://anildash.com/feed.xml" rel="self"/>
  <link href="https://anildash.com/"/>
  
    <updated>2026-03-13T00:00:00Z</updated>
  
  <id>https://anildash.com</id>
  <author>
    <name>Anil Dash</name>
    <email>[email protected]</email>
  </author>
  
    
    <entry>
      <title>A Codeless Ecosystem, or hacking beyond vibe coding</title>
      <link href="https://anildash.com/2026/01/27/codeless-ecosystem/"/>
      <updated>2026-01-27T00:00:00Z</updated>
      <id>https://anildash.com/2026/01/27/codeless-ecosystem/</id>
      <content type="html">
        <![CDATA[
      <p>There's been a <a href="https://www.anildash.com/2026/01/22/codeless/">remarkable leap forward</a> in the ability to orchestrate coding bots, making it possible for ordinary creators to command dozens of AI bots to build software without ever having to directly touch code. The implications of this kind of evolution are potentially extraordinary, as outlined in that first set of notes about what we could call &quot;codeless&quot; software. But now it's worth looking at the larger ecosystem to understand where all of this might be headed.</p>
<h2>&quot;Frontier minus six&quot;</h2>
<p>One idea that's come up in a host of different conversations around codeless software, both from supporters and skeptics, is how these new orchestration tools can enable coders to control coding bots that <em>aren't</em> from the Big AI companies. Skeptics say, &quot;won't everyone just use Claude Code, since that's the best coding bot?&quot;</p>
<p>The response that comes up is one that I keep articulating as &quot;frontier minus six&quot;, meaning the idea that many of the open source or open-weight AI models are often delivering results at a level equivalent to where frontier AI models were six months ago. Or, sometimes, where they were 9 months or a year ago. In any of these cases, these are still damn good results! These levels of performance are not merely acceptable, they are results that we were amazed by just months ago, and are more than serviceable for a large number of use cases — especially if those use cases can be run locally, at low cost, with lower power usage, without having to pay any vendor, and in environments where one can inspect what's happening with security and privacy.</p>
<p>When we consider that a frontier-minus-six fleet of bots can often run on cheap commodity hardware (instead of the latest, most costly, hard-to-get Nvidia GPUs) and we still have the backup option of escalating workloads to the paid services if and when a task is too challenging for them to complete, it seems inevitable that this will be part of the mix in future codeless implementations.</p>
<h2>Agent patterns and design</h2>
<p>The most thoughtful and fluent analysis of the new codeless approach has been <a href="https://maggieappleton.com/gastown">this wonderful essay by Maggie Appleton</a>, whose writing is always incisive and insightful. This one's a must-read! Speaking of Gas Town (Steve Yegge's signature orchestration tool, which has catalyzed much of the codeless revolution), Maggie captures the ethos of the entire space:</p>
<blockquote>
<p>We should take Yegge’s creation seriously not because it’s a serious, working tool for today’s developers (it isn’t). But because it’s a good piece of speculative design fiction that asks provocative questions and reveals the shape of constraints we’ll face as agentic coding systems mature and grow.</p>
</blockquote>
<h2>Code and legacy</h2>
<p>Once you've considered Maggie's piece, it's worth reading over Steve Krouse's essay, &quot;<a href="https://blog.val.town/vibe-code">Vibe code is legacy code</a>&quot;. Steve and his team build the delightful <a href="https://www.val.town">val town</a>, an incredibly accessible coding community that strikes a very careful balance between enabling coding and enabling AI assistance without overwriting the human, creative aspects of building with code. In many ways (including its aesthetic), it is the closest thing I've seen to a spiritual successor to the work we'd done for many years with <a href="https://en.wikipedia.org/wiki/Glitch,_Inc.">Glitch</a>, so it's no surprise that Steve would have a good intuition about the human relationship to creating with code.</p>
<p>There's an interesting point, however to the core point Steve makes about the disposability of vibe-coded (or AI-generated) code: <em>all</em> code is disposable. Every single line of code I wrote during the many years I was a professional developer has since been discarded. And it's not just because I was a singularly terrible coder; this is often the <em>normal</em> thing that happens with code bases after just a short period of time. As much as we lament the longevity of legacy code bases, or the impossibility of fixing some stubborn old systems based on dusty old languages, it's also very frequently the case that people happily rip out massive chunks of code that people toiled over for months or years and then discard it all without any sentimentality whatsoever.</p>
<p>Codeless tooling just happens to embrace this ephemerality and treat it as a feature instead of a bug. That kind of inversion of assumptions often leads to interesting innovations.</p>
<h2>To enterprise or not</h2>
<p>As I noted in my original piece on codeless software, we can expect any successful way of building software to be appropriated by companies that want to profiteer off of the technology, <em>especially</em> enterprise companies. This new realm is no different. Because these codeless orchestration systems have been percolating for some time, we've seen some of these efforts pop up already.</p>
<p>For example, the team at Every, which consults and builds tools around AI for businesses, calls a lot of these approaches <a href="https://every.to/chain-of-thought/compound-engineering-how-every-codes-with-agents">compound engineering</a> when their team uses them to create software. This name seems fine, and it's good to see that they maintain the ability to switch between models easily, even if they currently prefer Claude's Opus 4.5 for most of their work. The focus on planning and thinking through the end product holistically is a particularly important point to emphasize, and will be key to this approach succeeding as new organizations adopt it.</p>
<p>But where I'd quibble with some of what they've explained is the focus on tying the work to individual vendors. Those concerns should be abstracted away by those who are implementing the infrastructure, as much as possible. It's a bit like ensuring that most individual coders don't have to know exactly which optimizations a compiler is making when it targets a particular CPU architecture. Building that muscle where the specifics of different AI vendors become less important will help move the industry forward towards reducing platforms costs — and more importantly, empowering coders to make choices based on their priorities, not those of the AI platforms or their bosses.</p>
<h2>Meeting the codeless moment</h2>
<p>A good example of the &quot;normal&quot; developer ecosystem recognizing the groundswell around codeless workflows and moving quickly to integrate with them is the Tailscale team <em>already</em> shipping <a href="https://tailscale.com/blog/aperture-private-alpha">Aperture</a>. While this initial release is focused on routine tasks like managing API keys, it's really easy to see how the ability to manage gateways and usage into a heterogeneous mix of coding agents will start to enable, and encourage, adoption of new coding agents. (Especially if those &quot;frontier-minus-six&quot; scenarios start to take off.)</p>
<p>I've been on the record <a href="https://me.dm/@anildash/109719178280170032">for years</a> about being bullish on Tailscale, and nimbleness like this is a big reason why. That example of seeing where developers are going, and then building tooling to serve them, is always a sign that something is bubbling up that could actually become signficant.</p>
<p>It's still early, but these are the first few signs of a nascent ecosystem that give me more conviction that this whole thing might become real.</p>

    ]]>
      </content>
    </entry>
  
    
    <entry>
      <title>New York Tech at 30: the Crossroads</title>
      <link href="https://anildash.com/2026/02/03/nye-tech-30/"/>
      <updated>2026-02-04T00:00:00Z</updated>
      <id>https://anildash.com/2026/02/03/nye-tech-30/</id>
      <content type="html">
        <![CDATA[
      <p>This past week, over a series of events, the New York tech community celebrated the 30th anniversary of a nebulous idea described as “Silicon Alley”, the catch-all term for our greater collective of creators and collaborators, founders and funders, inventors and investors, educators and entrepreneurs and electeds, activists and architects and artists. Some of the parties or mixers have been typical industry affairs, the usual glad-handing about deal-making and pleasantries. But a lot have been deeper, reflecting on what’s special and meaningful about the community we’ve built in New York. <a href="https://www.mediapost.com/publications/article/412470/">Steven Rosenbaum’s reflection</a> on the anniversary captures this well from someone who’s been there, and <a href="https://finance.yahoo.com/news/silicon-alley-turns-30-york-114752768.html">Leo Schwartz’s piece for Fortune</a> covers the more conventional business angle.</p>
<p>Beyond the celebrations, though, I wanted to reflect on a number of the deeper conversations I’ve had over these last few days. These are conversations grounded in the reality of where our country and city are today, far beyond spaces where wealthy techies are going to parties and celebrating each other. The hard questions raised in these conversations are the ones that determine where this community goes in the future, and they’re the ones that <em>every</em> tech community is going to face in the current moment.</p>
<p>I know what the New York City tech community has been; there was a time when I was one of its most prominent voices. The question now is what it will be in the future. Because we are at a profound crossroads.</p>
<iframe title="vimeo-player" src="https://player.vimeo.com/video/1159273059?h=b6fe26d204" width="640" height="360" frameborder="0" referrerpolicy="strict-origin-when-cross-origin" allow="autoplay; fullscreen; picture-in-picture; clipboard-write; encrypted-media; web-share"   allowfullscreen></iframe>
<h1>What community can be</h1>
<p>Nobody better exemplifies the best of what New York tech has been than Aaron Swartz. As I’d <a href="https://www.anildash.com/2026/01/09/how-markdown-took-over-the-world/">written about</a> recently, he was brilliant and delightfully impossible. At an incredibly young age, <a href="https://www.eff.org/deeplinks/2017/01/everyone-made-themselves-hero-remembering-aaron-swartz">he led our community</a> in the battle to push back against a pair of ill-considered bills that threatened free expression on the Internet. (These bills would have done to the web what the current administration has done to broadcast television, having a chilling effect on free speech and putting large swaths of content under government control.) As we stood outside Chuck Schumer’s office and demanded that big business take their hands off our internet, we got our first glimpse of the immense power that our community could wield. And <a href="https://www.eff.org/deeplinks/2017/01/5-years-later-victory-over-sopa-means-more-ever">we won</a>, at least for a while.</p>
<p>My own path within the New York tech community was nowhere near as dramatic, but I was just as motivated in wanting to serve the community. When I became the first person <a href="https://www.anildash.com/2010/12/13/im-running-for-the-new-york-tech-meetup-board/">elected to the board of the New York Tech Meetup</a> (later the New York Tech Alliance), it was the largest member-led organization of tech industry workers in the country. By the time it reached its peak, we were over 100,000 members strong, and could sell out one of our monthly events (at a venue of over 1000 attendees) in minutes. The collective power and impact of that cohort was immense. So, when I say “community”, I mean <em>community</em>. I’m not talking about the contemporary usage of the word, when people call their TikTok followers a “community”. I mean people who care about each other and show up for each other so that they can achieve meaningful things.</p>
<p>New York tech demonstrated its values time and again, and not just in organizing around policy that served its self-interest. When the city was still reeling from 9/11, these were people who not only chose to stay in the city, or who simply talked about how New York ought to rebuild, but actually took the risk and rebuilt the economy of the city — the <em>majority</em> of the economic regrowth and new jobs in New York City in the quarter-century since the attacks of 9/11 have happened thanks to the technology sector.</p>
<p>When Hurricane Sandy hit, these were people who <a href="https://www.nbcnews.com/id/wbna49663102">were amongst the first to step up</a> to help their neighbors dig out. When our city began to <a href="https://www.anildash.com/2011/03/05/nyc-mta-ftw/">open up its data</a>, the community responded in kind by building an entire ecosystem of new tools that laid the groundwork for the tech we now take for granted when navigating around our neighborhoods. There was no reluctance to talk about the importance of diversity and inclusion, and no apology in saying that tech was failing to do its job in hiring and promoting equitably, because we know how much talent is available in our city. Hackers would come to meetups to show off their startups, sure, but just as often to show off how they’d built cool new technology to <a href="https://www.wbur.org/hereandnow/2021/12/28/heat-seek-tool-tenants">help make sure our neighbors in public housing had heat in the winter</a>. This was <a href="https://www.anildash.com/2016/07/15/new-york-style-tech/">New York-style tech</a>.</p>
<p>What’s more, the work of this community happened with remarkable solidarity; the SOPA/PIPA protests that Aaron Swartz spoke at had him standing next to some of the most powerful venture capitalists in the city. When it was time to take action, a number of the most influential tech CEOs in New York took Amtrak down to Washington, D.C. to talk to elected officials and their staffers about the importance of defending free expression online, advocating for the same issue that had been so important to the broke college kids who’d been at the rally just a few days earlier. People had actually gathered around <em>principles</em>. I don’t say this as a Pollyanna who thinks everything was perfect, or that things would have always stayed so idealistically aligned, but simply to point out that <em>this did happen</em>. I don’t have to assert that it is theoretically possible, because I have already seen a community which functions in this way.</p>
<h2>From bottoms-up to big business</h2>
<p>But things have changed in recent years for New York’s tech community. What used to often be about extending a hand to neighbors has, much of the time, become about simply focusing on who’s getting funded to chase the trends defined by Silicon Valley. The vibrancy of the New York Tech Meetup took a huge hit from covid, preventing the ability for the community to gather in person, and the organization’s evolution from a Meetup to an Alliance to being part of Civic Hall shifted its focus in recent years, though there has been a recent push to revitalize its signature events. In its place, much of the public narrative for the community is led by Tech:NYC, which has active and able leadership, but is a far more conventional trade group. There's a focus on pragmatic tools like job listings (their <a href="https://technycdigest.beehiiv.com/subscribe?ref=kdPsdXErYd">email newsletter</a> is excellent), but they're unlikely to lead a rally in front of a Senator's office. An organization whose founding members include Google and Meta is necessarily going to be different than one with 100,000 individual members.</p>
<p>When I <a href="https://web.archive.org/web/20150601041007/https://www.wsj.com/articles/SB10001424127887324624404578255752537705008">spoke to the Wall Street Journal</a> back in 2013 about the political and social power of our community, at a far different time, I called out the breadth of who our community includes:</p>
<blockquote>
<p>The tech constituency encompasses a range of potential voters who remain unlikely to behave as a traditional bloc. &quot;It's venture capitalists and 23-year-old graphic designers in Bushwick,&quot; Mr. Dash said. &quot;It's labor and management. It's not traditional allies.&quot;</p>
</blockquote>
<p>I wanted to make sure people understood that tech in New York is much broader than just, well, what the bosses and the big companies want. It is important to understand that New York is about <a href="https://www.anildash.com/2025/10/24/founders-over-funders/">founders, not just funders</a>.</p>
<p>The distinction between these groups and their goals was never clearer to me than in the 2017 battle around Amazon’s proposed <a href="https://en.wikipedia.org/wiki/Amazon_HQ2">HQ2 headquarters</a>. The public narrative was that Amazon was trying to make a few cities jump through hoops to make the best possible set of bribes to the company so that they would build a new headquarters complex in the host city. The reality was, New York City offered $1.5 billion dollars to the richest man in the world in order to open up an office in a city where the company was inevitably going to do business regardless, and the contract that Amazon would have to sign in exchange only obligated them to hire 500 new workers in the city — <strong>fewer</strong> people than their typical hiring plan would expect in that timeframe. In addition, the proposed plan would have taken over land intended for 6,000 homes, including 1500 affordable units, would have defunded the mass transit system through years of tax breaks for the company while putting massive additional burden on the transit system, and raised housing prices. (Amazon has since signed a lease for 335,000 square feet and hired over 1000 employees, without any subsidies.)</p>
<p>At the time, I was CEO of a company that two entrepreneurs had founded in 2000 and bootstrapped to success, leading to them spinning out multiple companies which would go on to exit for over $2.2 billion, providing over 500 jobs and creating dozens of millionaires out of the workers who joined the companies over the years. Several of the people who had worked at those companies went on to form their own companies, and <em>those</em> companies are now collectively worth over $5 billion. All of these companies, combined, have gotten a total of <em>zero billion dollars</em> from the state and city of New York. In addition, none of those companies have ever had working conditions anywhere close to <a href="https://en.wikipedia.org/wiki/Criticism_of_Amazon#Treatment_of_workers">those Amazon has been criticized for</a>.</p>
<p>But the <em>story</em> of the time was that “New York tech wants HQ2!” Media like newspapers and TV were firmly convinced that techies were in support of Amazon getting a massive unnecessary handout, and I had genuinely struggled to figure out why for a long time. After a while, it became obvious. Everyone that they had spoken to, and all the voices that were considered canonical and credible when talking about “New York tech”, were investors or giant publicly-traded companies.</p>
<p>People who actually <em>built</em> things were no longer the voice of the community. Those who showed up when the power was out, or when the community was hurting, or when there was an issue that called for someone to bravely stand up and lead the crowd even if there was some social or political risk — they were not considered valid. People liked the <em>myth</em> of Aaron Swartz by then, but they would have ignored the fact that he almost certainly would have objected to corporate subsidy for the company.</p>
<h2>New York tech today, and tomorrow</h2>
<p>I am still proud of the New York tech community. But that’s because I get to see what happens in person. Last week, I was reminded at every one of the in-person commemorations of the community that there are so many generous, kind-hearted, thoughtful people who will fight to do the right thing. The challenge today, though, is that those are no longer the people who define the story of the community. That’s not who a <em>new</em> person thinks of when they’re introduced to our community.</p>
<p>When I talk to young people who are new to the industry, or people who are changing careers who are curious about tech, they have heard of things like Tech Week, or they read trade press. In those venues, a big name is generally not our home-grown founders, or even the “big” success stories of New York tech. That’s especially true as once high-flying New York tech companies like Tumblr and Foursquare and Kickstarter and Etsy and Buzzfeed either faded or got acquired, and newer successful startups are more prosaic and less attention-grabbing. Who’s left to tell them a story of what “tech” means in New York? Where will they find community?</p>
<p>One possible future is that they try to build a startup, doing everything you’re “supposed” to do. They pitch the VC firms in town, and the big name firms that they’ve heard of. If they’re looking for community, they go to the events that get the most promotion, which might be Tech Week events. And all of these paths lead the same way — the most prominent VC firm is Andreessen Horowitz, and they run Tech Week too, even though they’re not from NYC.</p>
<p>On that path, New York tech puts you across the table from <a href="https://fortune.com/2025/02/05/daniel-penny-andreessen-horowitz-a16z-investing-david-ulevitch/">the man who strangled my neighbor to death</a>.</p>
<p>Another possible future is that we rebuild the kind of community that we used to have. We start to get together the people who actually <em>make</em> things, and show off what we’ve built for one another. It’s going to require re-centering the hundreds of thousands of people who create and invent, rather than the dozens of people who write checks. It’s going to mean that the stories start with New York City (and maybe even… <em>in the outer boroughs</em>!), rather than taking dictation from those in Silicon Valley who hate our city. And it’s going to require understanding that technology is a set of tools and tactics we can use in service of goals — ideally positive social goals — and not just an economic opportunity to be extracted from.</p>
<p>We would never talk about education by only talking to those who invest in making pencils. We’d never consider a story about a new movie to be complete if we only talked to those who funded the film. And certainly our policymakers would balk if we skipped speaking with them and instead aimed our policy questions directly at their financial backers, though that might result in more accurate responses. Yet somehow, with technology, we’ve given over the narrative entirely to the money men.</p>
<p>In New York, we’ve borne the brunt of that error. A tech community with heart and soul is in danger of being snuffed out by those who will only let its most base instincts survive. Even our <em>investors</em> here are more thoughtful than these stories would make it seem! But we can change it, and maybe even change the larger tech story, if we’re diligent in never letting the bad actors control the narrative of what tech is in the world.</p>
<p>Like so many good things, it can all start with New York City.</p>

    ]]>
      </content>
    </entry>
  
    
    <entry>
      <title>There&#39;s no such thing as &quot;tech&quot; (Ten years later)</title>
      <link href="https://anildash.com/2026/02/06/no-such-thing-as-tech/"/>
      <updated>2026-02-06T00:00:00Z</updated>
      <id>https://anildash.com/2026/02/06/no-such-thing-as-tech/</id>
      <content type="html">
        <![CDATA[
      <p>Ten years ago I wrote that <a href="https://www.anildash.com/2016/08/19/there-is-no-technology-industry/">there is no “technology industry”</a>. It’s more true than ever.</p>
<p>There is no “tech”. There’s no such thing as “a FAANG company”. There is almost nothing in common between the very largest tech companies and the next several hundred biggest companies that happen to create tech platforms. Whatever shorthand we use for the biggest tech companies, they almost never have much in common—whether it's how they make money, what products they make, how they make decisions, who leads them, or what drives their cultures.</p>
<p>It’s important to make these distinctions because the false categorization of wildly dissimilar organizations into one grouping leads to absurdly inappropriate decisions being made. Let’s look at some simple examples to understand why.</p>
<p>Take the once-ubiquitous shorthand of “FAANG” to describe big tech. (It stood, at one time, for Facebook, Amazon, Apple, Netflix and Google. Then Facebook became Meta and Google became Alphabet and Microsoft became upset about not being included, and people started trying to use other more unwieldy, less-popular sobriquets.) This abbreviation still persists because of the mindset it represents, and it is still useful in capturing a certain vision of how the industry functions. I often encounter early-career tech workers who describe their ambitions as “working at a FAANG company”.</p>
<p>But let’s look at <em>what these different companies actually do</em>. For all its complexity, Netflix is, at its heart, about streaming video to people. Meta runs a number of communications platforms and social networks. Apple sells hardware devices. They all have very large side businesses that do other things, but this is what these companies are at their core — and they’re wildly different businesses in their core essence!</p>
<p>If someone said, “I want to be an executive at Walmart, or maybe at A24,” you would think, “This person has no idea what the hell they want to be, or what they’re talking about.” If they were to say, “I want to work for nVidia, or maybe Deloitte,&quot; you would think, “This person is just confused, and that’s kind of sad.” But this is <em>exactly</em> equivalent to asserting “I want to work at a FAANG company” or “I want to work at a startup” or, worse, “I want to work in tech”.</p>
<p>So many have been caught off guard as tech has grabbed massive power over nearly every aspect of society—from individuals who can't figure out their career paths to policy makers who've been bamboozled by tech tycoons. It's no secret how it happened: everyone underestimated the impact because they judged tech by the same rules as other industries.</p>
<h2>Everything and nothing</h2>
<p>These distinctions matter even more because today, <em>everything</em> is tech. Or, if you prefer, nothing is technology. Instead, every area is suffused with tech — and every discipline needs people who are fluent in the concerns of technology, and familiar with the tradeoffs and risks and opportunities that come with the adoption of, and creation of, new technologies.</p>
<p>Now, of course, I know why it’s useful to have the shorthand of being able to say “the tech industry” when talking about a particular sector. But the sleight of hand that comes from being able to hide the enormous, outsized impact that this small number of companies has across a vast number of different sectors of society is possible, in part, because we <em>treat</em> them like they’re one narrow part of the business world. In many cases, an individual division of a giant tech company dwarfs the entirety of other industries. Apple’s AirPods business isn’t even one of the first products one would think of when listing their most important, most influential, or most profitable lines of business, and yet <em>AirPods alone</em> are bigger than the entire domestic radio advertising business in the United States. Google’s ad business alone is larger than the entire U.S. domestic airline industry combined. Things that are considered an “industry” in other categories are smaller than things that are considered a <em>product</em> in “tech”.</p>
<p>That sense of scale is important to keep in mind as we push for accountability and to understand how to plan for what’s ahead. Even building a path for one’s own career — whether that’s inside or outside of the companies we consider to be in the tech sector — requires having a proper perspective on the relative influence of these organizations, and also on the distorting effect it can have when we don’t look at them in their full complexity.</p>
<p>One example from a completely different realm that I find useful in contextualizing this challenge is from the world of retail: Ikea is one of the top 10 restaurants in the world. (By many reports, it’s the 6th largest chain of restaurants.) That is, of course, incidental to its role as a furniture retailer. But this is the nature of massive scale. The second-order impacts are still enough to have outsized effects in the larger world.</p>
<p>At a moment when we have seen that so many of the biggest tech companies are led by people who don’t know how to act responsibly with all of the power that they’ve been given, it’s important that we complicate our views of their companies, and consider that they are <em>much</em> more than just part of the “tech industry”. They are functioning as communications, media, finance, education, infrastructure, transportation, commerce, defense, policing, and government much of the time. And very often, they’re doing it without our awareness or consent.</p>
<p>So, when you hear conversations in society about tech companies, or tech execs, or tech platforms, make sure you push those who are involved in the dialogue to be specific about what they mean. You may find that they haven’t stopped to reflect on the fact that this simple label has long since stopped accurately describing the extraordinary amount of power and control that this handful of companies exert over our daily lives, and over society as a whole.</p>

    ]]>
      </content>
    </entry>
  
    
    <entry>
      <title>Coding agents as the new compilers</title>
      <link href="https://anildash.com/2026/02/11/coding-agents-as-the-new-compilers/"/>
      <updated>2026-02-12T00:00:00Z</updated>
      <id>https://anildash.com/2026/02/11/coding-agents-as-the-new-compilers/</id>
      <content type="html">
        <![CDATA[
      <p>In each successive generation of code creation thus far, we’ve abstracted away the prior generation over time. Usually, only a small percentage of coders still work on the lower layers of the stack that used to be the space where everyone was working. I’ve been coding long enough that people were still creating code in assembly when I started (though I was never any good at it!), though I started with BASIC. Since BASIC was an interpreted language, its interpreter would write the assembly language for me, and I never had to see exactly what assembly language code was being created.</p>
<p>I definitely <em>did</em> know old-school coders who used to, at first, check that assembly code to see if they liked the output. But eventually, over time, they just learned to trust the system and stopped looking at what happened after the system finished compiling. Even people using more “close to the metal” languages like C generally trust that their compilers have been optimized enough that they seldom inspect the output of the compiler to make sure it was perfectly optimized for their particular processor or configuration. The benefits of delegating those concerns to the teams that create compilers, and coding tools in general, yielded so many advantages that that tradeoff was easily worth it, once you got over the slightly uncomfortable feeling.</p>
<p>In the years that followed, though a small cohort of expert coders who would hand-tune assembly code for things like getting the most extreme performance out of a gaming console, most folks stopped writing it, and very few <em>new</em> coders learned assembly at all. The vast majority of working coders treat the output from the compiler layer as a black box, trusting the tools to do the right thing and delegating the concerns below that to the toolmakers.</p>
<p>We may be seeing that pattern repeat itself. Only this time, the abstraction is happening through AI tools abstracting away <em>all</em> the code. Which can feel a little scary.</p>
<h2>Squashing the stack</h2>
<p>Just as interpreted languages took away chores like memory management, and high-level languages took away the tedium of writing assembly code, we’re starting to see the first wave of tools that completely abstract away the writing of code. (I described this in more detail in the piece about <a href="https://www.anildash.com/2026/01/22/codeless/">codeless software</a>recently.</p>
<p>The individual practice of professionalizing the writing of software with LLMs seems to have settled on the term “<a href="https://simonwillison.net/2026/Feb/11/glm-5/">agentic engineering</a>”, as Simon Willison recently noted.</p>
<p>But the next step beyond that is when teams <em>don’t</em> write any of the code themselves, instead moving to an entirely abstracted way of creating code. In this model, teams (or even individual coders):</p>
<ul>
<li>Define the specifications for how the code should work</li>
<li>Ensure that the system is provided with enough context at all times that it can succeed in creating code that is successful as often as possible</li>
<li>Provide sufficient resources that a redundant and resilient set of code outputs can be created to accommodate failures while in iteration</li>
<li>Enforce execution of tests and conformance systems against the code — <a href="https://simonwillison.net/2025/Dec/18/code-proven-to-work/">including human tests with a named, accountable party</a>, not just automated software tests</li>
</ul>
<p>With this kind of model deployed, the software that is created can essentially be output from the system in the way that assembly code or bytecode is output from compilers today, with no direct inspection from the people who are directing its creation. Another way of thinking about this is that we’re abstracting away many different specific programming languages and detailed syntaxes to more human-written Markdown files, created much of the time in <strong>collaboration</strong> with these LLM tools.</p>
<p>Presently, most people and teams who are pursuing this path are doing so with costly commercial LLMs. I would strongly advocate that most organizations, and <em>especially</em> most professional coders, be very fluent in ways of accomplishing these tasks with a fleet of low-cost, locally-hosted, open source/open-weight models contributing to the workload. I don’t think they are performant enough yet to accomplish all of the coding tasks needed for a non-trivial application yet, but there are a significant number of sub-tasks that could reasonably be delegated. More importantly, it will be increasingly vital to ensure that this entire “codeless compilation” stack for agentic engineering works in a vendor-neutral way that can be decoupled from the major LLM vendors, as they get more irresponsible in their business practices and more aggressive towards today’s working coders and creators.</p>
<p>For many, those worries about Big AI are why their reaction to these developments in agentic coding make them want to recoil. But in reality, these issues are exactly why we desperately need to <em>engage</em>.</p>
<h2>Seizing the means</h2>
<p>Many of the smartest coders I know have a lot of legitimate and understandable misgivings about the impact that LLMs are having on the coding world, especially as they’re often being evangelized by companies that plainly have ill intent towards working coders. It is reasonable, and even smart, to be skeptical of their motivations and incentives.</p>
<p>But the response to that skepticism is not to reject the category of technology, but rather to capture it and seize control over its direction, away from the Big AI companies. This shift to a new level of coding abstraction is exactly the kind of platform shift that presents that sort of opportunity. It’s potentially a chance for coders to be in control of some part of their destiny, at a time when a lot of bosses clearly want to <a href="https://www.anildash.com/2026/01/06/500k-tech-workers-laid-off/">get rid of as many coders as they can</a>.</p>
<p>At the very least, this is one area where the people who actually <em>make things</em> are ahead of the big platforms that want to cash in on it.</p>
<h2>What if I think this is all bullshit?</h2>
<p>I think a lot of coders are going to be understandably skeptical. The most common concern is, “I write really great code, how could it possibly be good news that we’re going to abstract away the writing of code?”. Or, “How the hell could a software factory be good news for people who make software?”</p>
<p>For that first question, the answer is going to involve some grieving, at first. It may be the case that writing really clean, elegant, idiomatic Python code is a skill that will be reduced in demand in the same way that writing incredibly performant, highly-tuned assembly code is. There <em>is</em> a market for it, but it’s on the edges, in specific scenarios. People ask for it when they need it, but they don’t usually <em>start</em> by saying they need it.</p>
<p>But for the deeper question, we may have a more hopeful answer. By elevating our focus up from the individual lines of code to the more ambitious focus on the overall problem we’re trying to solve, we may reconnect with the “why” that brought us to creating software and tech in the first place. We can raise our gaze from the steps right in front of us to the horizon a bit further ahead, and think more deeply about the problem we’re trying to solve. Or maybe even about the <em>people</em> who we’re trying to solve that problem for.</p>
<p>I think people who create code today, if they have access to super-efficient code-creation tools, will make better and more thoughtful products than the financiers who are currently carrying out mass layoffs of the best and most thoughtful people in the tech industry.</p>
<p>I also know there’s a history of worker-owned factories being safer and more successful than others in their industries, while often making better, longer-lasting products and being better neighbors in their communities. Maybe it’s possible that there’s an internet where agentic engineering tools could enable smart creators to build their own software factories that could work the same way.</p>

    ]]>
      </content>
    </entry>
  
    
    <entry>
      <title>Launch it 3 times</title>
      <link href="https://anildash.com/2026/02/13/launch-it-three-times/"/>
      <updated>2026-02-14T00:00:00Z</updated>
      <id>https://anildash.com/2026/02/13/launch-it-three-times/</id>
      <content type="html">
        <![CDATA[
      <p>I wanted to share one of the bits of advice that I find myself most frequently giving to teams when they’re working on a product, or founders who are creating a new company: launch it three times.</p>
<p>What I mean by that is, it often takes more than one time before your idea actually resonates or sticks with the people you’re trying to reach. Sometimes it takes more than twice! And when I say that you might need to launch again, that can mean a lot of different things. It might just be little tweaks to what you originally put out in the world, It might even be less than that — I’ve worked with teams that put out <strong>literally the exact same thing again</strong> and found success, because the issue they had the first time was about timing. That’s increasingly an issue as people are distracted by the deeply disturbing social and political events going on in the world, and so sometimes they just need you to put things in front of them again so that they can reassess what you were trying to say.</p>
<p>Many relaunches are a little more ambitious, of course. Being a Prince fan, I am of course very partial to strategies that involve changing your name. Re-launching under a new name can be a key strategic move if you think that you’re not effectively reaching your target audience. As I’d written recently, one of the most important goals in getting a message out is that <a href="https://www.anildash.com/2025/12/05/talk-about-us-without-us/">they have to be able to talk about you without you</a>. But if you want people to tell your story even when you’re not around, the most important prerequisite is that they have to remember your name. With Glitch, that was the <em>third</em> name we actually launched the community under, a fact that I was a little bit embarrassed about at the time. But having a memorable name that resonated ended up being almost as much a factor in our early success as our user experience or the deeper technological innovations.</p>
<p>There are other ways of making changes for a successful re-launch. One thing I often suggest is to <em>subtract</em> things (or just de-emphasize them) and use that reduction in complexity to simplify a story. Or you can try to re-center your narrative on your users or community instead of on your product — the emotion and connection of seeing someone succeed often resonates far more than simply reciting a litany of features or technical capabilities.  Any of these small iterations allow you to take another swing at putting something out into the world without having to make a massive change to the core offering.</p>
<p>Often times, people are afraid or embarrassed to make changes to things like branding or design because they’re some of the more visible aspects of a product or service. Instead, they retreat to “safe” areas, like tweaking the pricing or copy on a web page that nobody reads. But the vast majority of the time, the single biggest problem you have is that <em>nobody knows you exist, and nobody gives a damn about what you do</em>. Everything else pales in comparison to that. I’ve seen so many teams trying to figure out how to optimize the engagement of the three users on their app, or the five people who come to their site, while forgetting about the other eight billion people who have no idea they exist.</p>
<h2>What about <em>not</em>  failing?</h2>
<p>This idea of launching again is really important to keep in mind because so much of the narrative in the startup world is about “fail fast” and “90% of startups fail”. When the conventional narrative from VCs prompts you to pivot right away, or an investor is pressuring everyone to grow, grow, grow at all costs, it can be hard to think about slowing down and taking the time to revisit and refine an idea.</p>
<p>But if you’re moving with conviction, and you’ve created something meaningful, and if you’re serving a real community that you have a deep understanding of, then it may  be the case that you simply need to try again. If you are <em>not</em> moving with conviction to create something meaningful for a real community, then you don’t need to do it three times, because you don’t even need to do it once.</p>
<p>So many of the creators and innovators that inspire me most often end up working on their best ideas for years or even decades, iterating and revisiting those ideas with an almost-obsessive passion. Most of the time, they’re doing it because of a combination of their own personal mission and the deep belief that what they’re doing is going to help change people’s lives for the better. For those kinds of people, one of the things I want most is to ensure that they don’t give up before their ideas have had a full and fair chance to succeed, even if that means that sometimes you have to try, try again.</p>

    ]]>
      </content>
    </entry>
  
    
    <entry>
      <title>How did we end up threatening our kids’ lives with AI?</title>
      <link href="https://anildash.com/2026/02/18/threatening-kids-with-AI/"/>
      <updated>2026-02-18T00:00:00Z</updated>
      <id>https://anildash.com/2026/02/18/threatening-kids-with-AI/</id>
      <content type="html">
        <![CDATA[
      <p>I have to begin by warning you about the content in this piece; while I won’t be dwelling on any specifics, this will necessarily be a broad discussion about some of the most disturbing topics imaginable. I resent that I have to give you that warning, but I’m forced to because of the choices that the Big AI companies have made that affect children. I don’t say this lightly. But this is the point we must reckon with if we are having an honest conversation about contemporary technology.</p>
<p>Let me get the worst of it out of the way right up front, and then we can move on to understanding how this happened. ChatGPT has repeatedly produced output that <a href="https://www.nytimes.com/2025/08/26/technology/chatgpt-openai-suicide.html?unlocked_article_code=1.M1A.S4zx.M-CdIbTK0GGI&amp;smid=url-share">encouraged</a> and <a href="https://www.nytimes.com/2025/11/06/technology/chatgpt-lawsuit-suicides-delusions.html?unlocked_article_code=1.M1A.-92e.rGfKZMgP6nE9&amp;smid=url-share">incited</a> children to end their own lives. Grok’s AI <a href="https://www.cnbc.com/2026/01/05/india-eu-investigate-musks-x-after-grok-created-deepfake-child-porn.html">generates sexualized imagery of children</a>, which the company makes available commercially to paid subscribers.</p>
<p>It used to be that encouraging children to self-harm, or producing sexualized imagery of children, were universally agreed upon as being amongst the worst things one could do in society. These were among the rare truly non-partisan, unifying moral agreements that transcended all social and cultural barriers. And now, some of the world’s biggest and most powerful companies, led by a few of the wealthiest and most powerful men who have ever lived, are violating these rules, <em>for profit</em>, and not only is there little public uproar, it seems as if very few have even noticed.</p>
<p>How did we get here?</p>
<h2>The ideas behind a crisis</h2>
<p>A perfect storm of factors have combined to lead us towards the worst case scenario for AI. There is now an entire market of commercial products that attack our children, and to understand why, we need to look at the mindset of the people who are creating those products. Here are some of the key motivations that drove them to this point.</p>
<h3>1. Everyone feels desperately behind and wants to catch up</h3>
<p>There’s an old adage from Intel’s founder Andy Grove that people in Silicon Valley used to love to quote: “Only the paranoid survive”. This attitude persists, with leaders absolutely <em>convinced</em> that everything is a zero-sum game, and any perceived success by another company is an existential threat to one’s own future.</p>
<p>At Google, the company’s researchers had published the <a href="https://en.wikipedia.org/wiki/Attention_Is_All_You_Need">fundamental paper</a> underlying the creation of LLMs in 2017, but hadn’t capitalized on that invention by making a successful consumer product by 2022, when OpenAI released ChatGPT. Within Google leadership (and amongst the big tech tycoons), the fact that OpenAI was able to have a hit product with this technology was seen as a grave failure by Google, despite the fact that even OpenAI’s own leadership hadn’t expected ChatGPT to be a big hit upon launch. A <a href="https://www.cnet.com/tech/services-and-software/chatgpt-caused-code-red-at-google-report-says/">crisis ensued</a> within Google in the months that followed.</p>
<p>These kinds of industry narratives have more weight than reality in driving decision-making and investment, and the refrain of “move fast and break things” is still burned into people’s heads, so the end result these days is that <em>shipping any product</em> is okay, as long as it helps you catch up to your competitor. Thus, since Grok is seriously behind its competitors in usage, and of course Grok's CEO Elon Musk is always desperate for attention, they have every incentive to ship a product with a catastrophically toxic design — including one that creates abusive imagery.</p>
<h3>2. Accountability is “woke” and must be crushed</h3>
<p>Another fundamental article of faith in the last decade amongst tech tycoons (and their fanboys) is that woke culture must be destroyed. They have an amorphous and ever-evolving definition of what “woke” means, but it always includes any measures of accountability. One key example is the trust and safety teams that had been trying to keep all of the major technology platforms from committing the worst harms that their products were capable of producing.</p>
<p>Here, again, Google provides us with useful context. The company had one of the most mature and experienced AI safety research teams in the world at the time when the first paper on the transformer model (LLMs) was published. Right around the time that paper was published, Google <em>also</em> saw one of its engineers <a href="https://en.wikipedia.org/wiki/Google%27s_Ideological_Echo_Chamber">publish a sexist screed</a> on gender essentialism designed to bait the company into becoming part of the culture war, which it ham-handedly stumbled directly into. Like so much of Silicon Valley, Google’s leadership did not understand that these campaigns are always attempts to game the refs, and they let themselves be played by these bad actors; within a few years, a backlash had built and they began cutting everyone who had warned about risks around the new AI platforms, including some of the <a href="https://www.theverge.com/2021/4/13/22370158/google-ai-ethics-timnit-gebru-margaret-mitchell-firing-reputation">most credible and respected voices</a> in the industry on these issues.</p>
<p>Eliminating those roles was considered <em>vital</em> because these people were blamed for having “slowed down” the company with their silly concerns about things like people’s lives, or the health of the world’s information ecosystem. A lot of the wealthy execs across the industry were absolutely convinced that the reason Google had ended up behind in AI, despite having invented LLMs, was because they had too many “woke” employees, and those employees were too worried about esoteric concerns like people’s well-being.</p>
<p>It does not ever enter the conversation that 1. executives are accountable for the failures that happen at a company, 2. Google had a million other failures during these same years (including those <a href="https://arstechnica.com/gadgets/2021/08/a-decade-and-a-half-of-instability-the-history-of-google-messaging-apps/">countless redundant messaging apps</a> they kept launching!) that may have had far more to do with their inability to seize the market opportunity and 3. <em>it may be a good thing</em> that Google didn’t rush to market with a product that tells children to harm themselves, and those workers who ended up being fired may have saved Google from that fate!</p>
<h3>3. Product managers are veterans of genocidal regimes</h3>
<p>The third fact that enabled the creation of pernicious AI products is more subtle, but has more wide-ranging implications once we face it. In the tech industry, product managers are often quietly amongst the most influential figures in determining the influence a company has on culture. (At least until all the product managers are replaced by an LLM being run by their CEO.) At their best, product managers are the people who decide exactly what features and functionality go into a product, synthesizing and coordinating between the disciplines of engineering, marketing, sales, support, research, design, and many other specialties. I’m a product person, so I have a lot of empathy for the challenges of the role, and a healthy respect for the power it can often hold.</p>
<p>But in today’s Silicon Valley, a huge number of the people who act as product managers spent the formative years of their careers in companies like Facebook (now Meta). If those PMs now work at OpenAI, then the moments when they were learning how to practice their craft were spent at a company that <a href="https://www.amnesty.org/en/latest/news/2022/09/myanmar-facebooks-systems-promoted-violence-against-rohingya-meta-owes-reparations-new-report/">made products that directly enabled and accelerated a genocide</a>. That’s not according to me, that’s the opinion of multiple respected international human rights organizations. If you <em>chose</em> to go work at Facebook after the Rohingya genocide had happened, then you were certainly not going to learn from your manager that you should not make products that encourage or incite people to commit violence.</p>
<p>Even when they’re not enabling the worst things in the world, product managers who spend time in these cultures learn more destructive habits, like strategic line-stepping. This is the habit of repeatedly violating their own policies on things like privacy and security, or allowing users to violate platform policies on things like abuse and harassment. This tactic is followed by then feigning surprise when the behavior is caught. After sending out an obligatory apology, they repeat the behavior again a few more times until everyone either gets so used to it that they stop complaining or the continued bad actions drives off the good people, which makes it seem to the media or outside observers that the problem has gone away. Then, they amend their terms of service to say that the formerly-disallowed behavior is now permissible, so that in the future they can say, “See? It doesn’t violate our policy.”</p>
<p>Because so many people in the industry now have these kind of credential on their LinkedIn profiles, their peers can’t easily mention many kinds of ethical concerns when designing a product without implicitly condemning their coworkers. This becomes even more fraught when someone might potentially be unknowingly offending one of their leaders. As a result, it becomes a race to the bottom, where the person with the worst ethical standards on the team determines the standards to which everyone designs their work. As a result, if the prevailing sentiment about creating products at a company is that having millions of users just inevitably means killing some of them (“you’ve got to break a few eggs to make an omelet”), there can be risk to contradicting that idea. Pointing out that, in fact, <em>most</em> platforms on the internet do not harm users in these ways and their creators work very hard to ensure that tech products don’t present a risk to their communities, can end up being a career-limiting move.</p>
<h3>4. Compensation is tied to feature adoption</h3>
<p>This is a more subtle point, but explains a lot of the incentives and motivations behind so much of what happens with today’s major technology platforms. The introduction or rollout of new capabilities is measured when these companies launch new features, and the success of those rollouts or launches are often tied to the measurements of individual performance for the people who were responsible for those features. These will be measured using metrics like “KPIs” (key performance indicators) or other similar corporate acronyms, all of which basically represent the concept of being rewarded for whether the thing you made was adopted by users in the real world. In the abstract, it makes sense to reward employees based on whether the things they create actually succeed in the market, so that their work is aligned with whatever makes the company succeed.</p>
<p>In practice, people’s incentives and motivations get incredibly distorted over time by these kinds of gamified systems being used to measure their work, especially as it becomes a larger and larger part of their compensation. If you’ve ever wondered why some intrusive AI feature that you never asked for is jumping in front of your cursor when you’re just trying to do a normal task the same way that you’ve been doing it for years, it’s because someone’s KPI was measuring whether you were going to click on that AI button. Much of the time, the system doesn’t distinguish between “I accidentally clicked on this feature while trying to get rid of it” and “I enthusiastically chose to click on this button”. This is what I mean when I say we need <a href="https://www.anildash.com/2025/05/27/internet-of-consent/">an internet of consent</a>.</p>
<p>But you see the grim end game of this kind of thinking, and these kinds of reward systems, when kids’ well-being is on the line. Someone’s compensation may well be tied to a metric or measurement of “how many people used the image generation feature?” without regard to whether that feature was being used to generate imagery of children without consent. Getting a user addicted to a product, even to the point where they’re getting positive reinforcement when discussing the most self-destructive behaviors, will show up in a measurement system as increased engagement — exactly the kind of behavior that most compensation systems reward employees for producing.</p>
<h3>5. Their cronies have made it impossible to regulate them</h3>
<p>A strange reality of the United States’ sad decline into authoritarianism is that it is presently impossible to create federal regulation to stop the harms that these large AI platforms are causing. Most Americans are not familiar with this level of corruption and crony capitalism, but Trump’s AI Czar David Sacks has an <a href="https://www.nytimes.com/2025/11/30/technology/david-sacks-white-house-profits.html?unlocked_article_code=1.NFA.8q0L.ierVRTr9iVbw&amp;smid=url-share">unbelievably broad number of conflicts of interest</a> from his investments across the AI spectrum; it’s impossible to know how many because nobody in the Trump administration follows even the basic legal requirements around disclosure or disinvestment, and the entire corrupt Republican Party in Congress refuses to do their constitutionally-required duty to hold the executive branch accountable for these failures.</p>
<p>As a result, at the behest of the most venal power brokers in Silicon Valley, the Trump administration is insisting on trying to stop all AI regulations at the state level, and of course will have the collusion of the captive Supreme Court to assist in this endeavor. Because they regularly have completely unaccountable and unrecorded conversations, the leaders of the Big AI companies (all of whom attended the Inauguration of this President and support the rampant lawbreaking of this administration with rewards like <a href="https://www.anildash.com/2025/09/09/how-tim-cook-sold-out-steve-jobs/">open bribery</a>) know that there will be no constraints on the products that they launch, and no punishments or accountability if those products cause harm.</p>
<p>All of the pertinent regulatory bodies, from the Federal Trade Commission to the Consumer Financial Protection Bureau have had their competent leadership replaced by Trump cronies as well, meaning that their agendas are captured and they will not be able to protect citizens from these companies, either.</p>
<p>There will, of course, still be attempts at accountability at the state and local level, and these will wind their way through the courts over time. But the harms will continue in the meantime. And there will be attempts to push back on the international level, both from regulators overseas, and increasingly by governments and consumers outside the United States refusing to use technologies developed in this country. But again, these remedies will take time to mature, and in the meantime, children will still be in harm’s way.</p>
<h2>What about the kids?</h2>
<p>It used to be such a trope of political campaigns and social movements to say “what about the children?” that it is almost beyond parody. I personally have mocked the phrase because it’s so often deployed in bad faith, to short-circuit complicated topics and suppress debate. But this is that rare circumstance where things are actually not that complicated. Simply discussing the reality of what these products do should be enough.</p>
<p>People will say, “but it’s inevitable! These products will just have these problems sometimes!” And that is simply false. There are <em>already</em> products on the market that don’t have these egregious moral failings. More to the point, even if it were true that these products couldn’t exist without killing or harming children — then that’s a reason not to ship them at all.</p>
<p>If it is, indeed absolutely unavoidable that, for example, ChatGPT has to advocate violence, then let’s simply attach a rule in the code that modifies it to change the object of the violence to be Sam Altman. Or your boss. I suspect that if, suddenly, the chatbot deployed to every laptop at your company had a chance of suggesting that people cause bodily harm to your CEO, people would suddenly figure out a way to fix that bug. But somehow when it makes that suggestion about your 12-year-old, this is an insurmountably complex challenge.</p>
<p>We can expect things to get worse before they get better. OpenAI has already announced that it is going to be allowing people to generate sexual content on its service for a fee later this year. To their credit, when doing so, they stated <a href="https://openai.com/index/combating-online-child-sexual-exploitation-abuse/">their policy</a> prohibiting the use of the service to generate images that sexualize children. But the service they’re using to ensure compliance, <a href="https://www.thorn.org">Thorn</a>, whose product is meant to help protect against such content, was conspicuously silent about Musk’s recent foray into generating sexualized imagery of children. An organization whose <em>entire purpose</em> is preventing this kind of material, where every public message they have put out is decrying this content, somehow falls mute when the world’s richest man carries out the most blatant launch of this capability ever? If even the watchdogs have lost their voice, how are regular people supposed to feel like they have a chance at fighting back?</p>
<p>And then, if no one is reining in OpenAI, and they have to keep up with their competitors, and the competition isn’t worried about silly concerns like ethics, and the other platforms are selling child exploitation material, and all of the product mangers are Meta alumni who know that they can just keep gaming the terms of service if they need to, and laws aren’t being enforced, and all the product managers making the product learned to make decisions while they were at Meta… well, will you be surprised?</p>
<h2>How do we move forward?</h2>
<p>It should be an industry-stopping scandal that this is the current state of two of the biggest players in the most-hyped, most-funded, most consequential area of the entire business world right now. It should be <em>unfathomable</em> that people are thinking about deploying these technologies in their businesses — in their schools! — or integrating these products into their own platforms. And yet I would bet that the vast majority of people using these products have no idea about these risks or realities of these platforms at all. Even the vast majority of people who <em>work in tech</em> probably are barely aware.</p>
<p>What’s worse is, the majority of people I’ve talked to in tech, who <em>do</em> know about this have not taken a single action about it. Not one.</p>
<p>I’ll be following up with an entire list of suggestions about actions we can take, and ways we can push for accountability for the bad actors who are endangering kids every day. In the meantime, reflect for yourself about this reality. Who will you share this information with? How will this change your view of what these companies are? How will this change the way you make decisions about using these products? Now that you know: what will you do?</p>

    ]]>
      </content>
    </entry>
  
    
    <entry>
      <title>Taking action against AI harms</title>
      <link href="https://anildash.com/2026/02/23/taking-action-ai-harms/"/>
      <updated>2026-02-24T00:00:00Z</updated>
      <id>https://anildash.com/2026/02/23/taking-action-ai-harms/</id>
      <content type="html">
        <![CDATA[
      <p>In my last piece, I talked about <a href="https://www.anildash.com/2026/02/18/threatening-kids-with-ai/">the harms that AI is visiting on children</a> through the irresponsible choices made by the platforms creating those products. While we dove a bit into the incentives and institutional pressures that cause those companies to make such wildly irresponsible decisions, what we haven’t yet reckoned with is how we hold these companies accountable.</p>
<p>Often, people tell me they feel overwhelmed at the idea of trying to engage with getting laws passed, or fighting a big political campaign to rein in the giant tech companies that are causing so much harm. And grassroots, local organizing can be <a href="https://patch.com/new-jersey/newbrunswick/new-brunswick-city-council-kills-proposal-build-ai-data-center-100-jersey">extraordinarily effective</a> in standing up for the values of your community against the agenda of the Big AI companies.</p>
<p>But while I think it’s vital that we pursue systemic justice (and it’s the only way to stop many kinds of harm), I do understand the desire for something more immediate and human-scale. So, I wanted to share some direct, personal actions that you can take to respond to the threats that Big AI has made against kids. Each of these tactics have been proven effective by others who have used the same strategies, so you can feel confident when adapting these for your own use.</p>
<h2>Get your company off of Twitter / X</h2>
<p>If your company or organization maintains a presence on Twitter (or X, as they have tried to rename themselves), it is important to protect yourself, your coworkers, and also your employer from the risks of being on the platform. Many times, leadership in organizations have an outdated view of the platform that is uninformed about the current level of danger and harm presented by participating on the social network, and an accurate description of the problem can often be effective in driving a decision to make a change.</p>
<p>Here is some dialogue you can use or modify to catalyze a productive conversation at work:</p>
<blockquote>
<p>Hi, [name]. I saw a while ago that Twitter is being investigated in multiple countries around the world for having generated explicit imagery of women and children. The story even said that their CEO reinstated the account of a user who had shared child exploitation pictures on the site, and monetized the account that had shared the pictures.</p>
</blockquote>
<blockquote>
<p>Can you verify that our team is required to be on the service even though there is child abuse imagery on the site? I know that Musk’s account is shown to everyone on Twitter, so I’m concerned we’ll see whatever content he shares or retweets. Should I forward any of the child abuse material that I encounter in the course of carrying out the duties of my role to HR or legal, or both? And what is our reporting process for reporting this kind of material to the authorities, as I haven’t been trained in any procedures around these kinds of sensitive materials?</p>
</blockquote>
<p>That should be enough to trigger a useful conversation at your workplace. (You can share <a href="https://www.cnbc.com/2026/01/05/india-eu-investigate-musks-x-after-grok-created-deepfake-child-porn.html">this link</a> if they want a credible, business-minded link to reference.)  If they need more context about the burden on workers, you can also mention the fact that content moderators who have to interact with this kind of content have had <a href="https://citizensandtech.org/2024/02/measuring-trauma-among-the-internets-first-responders/">serious issues with trauma</a>, according to many academic studies. There is also the risk of employees and partners having concerns about nonconsensual imagery being generated from their images if the company posts anything on Twitter that features their faces or bodies. As <a href="https://www.liberalcurrents.com/the-new-epstein-island-is-right-in-your-pocket-its-time-to-abandon-elon-musks-paradise-of-abuse/">some articles have noted</a>, the Grok AI tool that Twitter uses is even designed to permit the creation of imagery that makes its targets look like the victims of violence, including targets who are underage.</p>
<p>As a result, your emails to your manager should CC your HR team, and should make explicit that you don’t wish to be liable for the risks the company is taking on by remaining on the platform. Talk to your coworkers, and share this information with them, and see if they will join you in the conversation. If you’re able to, it’s not a bad idea to look up a local labor lawyer and see if they’re willing to talk to you for free in case you need someone to CC on an email while discussing these topics. Make your employers say to you, explicitly, that the decision to remain on the platform is theirs, that they’re aware of the risks, that they indemnify you of those risks. You should ask that they take on accountability for burdens like legal costs or even psychological counseling for the real and severe impacts that come from enduring the harms that crimes like those enabled by Twitter can cause.</p>
<p>All of these strategies can also apply to products that integrate with Twitter’s service at a technical level, for sharing content or posting tweets, or for technical platforms that try to use Grok’s AI features. If you are a product manager, or know a product manager, that is considering connecting to a platform that makes child abuse material, you have failed at the most fundamental tenet of your craft. If you work at a company that has incorporated these technologies, file a bug mentioning the issues listed above, and again, CC your legal team and mention these concerns. “Our product might plug in to a platform that generates CSAM” is a show-stopping bug for any product, and any organization that doesn’t understand that is fundamentally broken.</p>
<p>Once you catalyze this conversation, you can begin mapping out a broader communication strategy that takes advantage of the many excellent options for replacing this legacy social media channel.</p>
<h2>Stop your school from using ChatGPT</h2>
<p>An increasing number of schools are falling prey to the “AI is inevitable!” rhetoric and desperately chasing the idea of putting AI tools into kids’ hands. Worse, a lot of schools think that the only kinds of technology that exist are the kinds made by giant tech companies. And because many of the adults making the decisions about AI are not necessarily experts in every detail of every technology, the decision about <em>which</em> AI platforms to use often comes down to which ones people have heard about the most. For most people, that means ChatGPT, since it’s gotten the most free hype from media.</p>
<p>As a result, many schools and educational institutions are considering the deployment of a platform that has told multiple children to self-harm, including several who have taken their own lives. This is something that you can take action about at your kid’s school.</p>
<p>First, you can begin simply by gathering resources. There are <a href="https://www.nytimes.com/2025/08/26/technology/chatgpt-openai-suicide.html?unlocked_article_code=1.M1A.S4zx.M-CdIbTK0GGI&amp;smid=url-share">many</a> <a href="https://www.nytimes.com/2025/11/06/technology/chatgpt-lawsuit-suicides-delusions.html?unlocked_article_code=1.M1A.-92e.rGfKZMgP6nE9&amp;smid=url-share">credible</a> stories which you can share to illustrate the risk to administrators, and to other parents. Typically, apologists for this product will raise a few objections, which you can respond to in a thoughtful way:</p>
<ul>
<li>“Maybe those kids were already depressed?” Several of the children who have been impacted by these tools were introduced to them as homework assistants, and only evolved into using them as emotional crutches at the prompting of the responses from the tool. Also: your school has children in it who are depressed, why are you willing to endanger them?</li>
<li>“Doesn’t every tool cause this?” No, this is extreme and unusual behavior. Your email software or word processor have never incited your children to commit violence against anyone, let alone themselves. Not even other LLMs prompt this behavior. And again, even if this <em>did</em> happen with every tool in this category, why would that make it okay? If every pill in a bottle is poisonous, does that make it okay to give the bottle of pills to our kids?</li>
<li>“They’ll be missing out on the future.” Ask the parents of the children impacted in these stories about their kids’ futures.</li>
<li>“We should just roll it out as a test.” Who will pay for monitoring all usage by all students in the test?</li>
<li>“It’s a parent’s responsibility.” Forcing a parent to invest hours of time into learning a cutting-edge technology that is being constantly updated is a full-time job. If you are going to burden them with that level of responsibility, how will you provide resources to support them? What is your plan to communicate this responsibility to them and get their consent so they can agree to take on this responsibility?</li>
<li>“The company said it’s working on the problem.” They can change their technology so that it only incites violence against their executives, or publish a notice when it has gone a full year without costing any children their lives. At that point, they may be considered for re-evaluation.</li>
</ul>
<p>With these responses in hand, you can provide some basic facts about the risks of the specific tool or platform that is being recommended, and help present a cogent argument against its deployment. It’s important to frame the argument in terms of child safety — the conventional arguments against LLMs, grounded in concerns like environmental impact, labor impact, intellectual property rights, or other similar issues tend to be dismissed out of hand due to effective propagandizing by Big AI advocates.</p>
<p>If, instead, you ignore the debate about LLMs and focus on real-world safety concerns based on actual threats that have happened to actual children, you should be able to have a very direct impact. And these are messages that others will generally pick up and amplify as well, whether they are fellow parents, or local media.</p>
<p>From here, you can begin a conversation that re-evaluates the <em>goals</em> of the initiative from first principles. &quot;Everyone else is doing it&quot; is not a valid way of advocating for technology, and even if they feel that LLMs are a technology that students should become familiar with, they should begin by engaging with the many resources on the topic created by academics who are not tied to the Big AI companies.</p>
<h2>You have power</h2>
<p>The key reason I wanted to capture some specific actions that people can take around responding to the harms that Big AI poses towards children is to remind us all that the power to take action lies in everyone’s hands. It’s not an abstract concept, or a theoretical thing that we have to wait for someone else to do.</p>
<p>We are in an outrageous place, where the actions of some of the biggest and most influential technology companies in the world are so beyond the pale that we can’t even discuss the things that they are doing in polite company. The actions that take place on these platforms used to mean that simply <em>accessing</em> these kinds of sites during one’s workday would be a firing offense. Now we have employers and schools trying to <em>require</em> people to use these things.</p>
<p>The pushback has to come at every level. Do talk to your elected officials. Do organize with others at your local level. If you work in tech, make sure to resist every attempt at normalizing these platforms, or incorporating their technologies into your own.</p>
<p>Finally, use your voice and your courage, and trust in your sense of basic decency. It might only take you a few minutes to draft up an email and send it to the right people. If you need help figuring out who to send it to, or how to phrase it, let me know and I’ll help! But these things that feel small can be quite enormous when they all add up together. And that’s exactly what our kids deserve.</p>

    ]]>
      </content>
    </entry>
  
    
    <entry>
      <title>Talking through the tech reckoning</title>
      <link href="https://anildash.com/2026/02/25/talking-through-the-tech-reckoning/"/>
      <updated>2026-02-26T00:00:00Z</updated>
      <id>https://anildash.com/2026/02/25/talking-through-the-tech-reckoning/</id>
      <content type="html">
        <![CDATA[
      <p>Many of the topics that we’ve all been discussing about technology these days seem to matter so much more, and the stakes have never been higher. So, I’ve been trying to engage with more conversations out in the world, in hopes of communicating some of the ideas that might not get shared from more traditional voices in technology. These recent conversations have been pretty well received, and I hope you’ll take a minute to give them a listen when you have a moment.</p>
<h2>Galaxy Brain</h2>
<p>First, it was nice to sit down with Charlie Warzel, as he invited me to speak with him on <a href="https://www.theatlantic.com/podcasts/2026/02/the-ai-panic-cycle-and-whats-actually-different-now/686077/?gift=apxH5R6bxFb7BY7F-EpWnOKasXuqQ1RVEcCy4QH0pq8">Galaxy Brain</a> (full transcript at that link), his excellent podcast for The Atlantic. The initial topic was some of the alarmist hype being raised around AI within the tech industry right now, but we had a much more far-ranging conversation, and I was particularly glad that I got to articulate my (somewhat nuanced) take on the rhetoric that many of the Big AI companies push about their LLM products being “inevitable”.</p>
<p>In short, while I think it’s important to fight their narrative that treats big commercial AI products as inevitable, I don’t think it will be effective or successful to do so by trying to stop regular people from using LLMs at all. Instead, I think we have to pursue a third option, which is a multiplicity of small, independent, accountable and purpose-built LLMs. By analogy, the answer to unhealthy fast food is good, home-cooked meals and neighborhood restaurants all using local ingredients.</p>
<p>The full conversation is almost 45 minutes, but I’ve cued up the section on inevitability here:</p>
<iframe src="https://www.youtube-nocookie.com/embed/kNdjLf4f0uU?t=2053 s" title="YouTube video player" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture; web-share" allowfullscreen class="video"></iframe>
<h2>Revolution Social</h2>
<p>Next up, I got to reconnect with Rabble, whom I’ve known since the earliest days of social media, for his podcast <a href="https://revolution.social/episodes/silicon-valley-has-lost-its-moral-compass-with-ani/">Revolution.Social</a>. The framing for this episode was “Silicon Valley has lost its moral compass” (did it have one? Ayyyyy) but this was another chance to have a wide-ranging conversation, and I was particularly glad to get into the reckoning that I think is coming around intellectual property in the AI era. Put simply, I think that the current practice of wholesale appropriation of content from creators without consent or compensation by the AI companies is simply untenable. If nothing else, as normal companies start using data and content, they’re going to <em>want</em> to pay for it just so they don’t get sued and so that the quality of the content they’re using is of a known reliability. That will start to change things from he current Wild West “steal all the stuff and sort it out later” mentality.

It will not surprise you to find out that I illustrated this point by using examples that included… Prince and Taylor Swift. But there’s lots of other good stuff in the conversation too! Let me know what you think.</p>
<iframe src="https://www.youtube-nocookie.com/embed/NhBykJqOqAc?t=1560s" title="YouTube video player" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture; web-share" allowfullscreen class="video"></iframe>
<h2>What’s next?</h2>
<p>As I’ve been writing more here on my site again, many of these topics seem to have resonated, and there have been some more opportunities to guest on podcasts, or invitations to speak at various events. For the last several years, I had largely declined all such invitations, both out of some fatigue over where the industry was at, and also because I didn’t think I had anything in particular to say.</p>
<p>In all honesty, these days it feels like the stakes are too high, and there are too few people who are addressing some of these issues, so I changed my mind and started to re-engage. I may well be an imperfect messenger, and I would eagerly pass the microphone to others who want to use their voices to talk about how tech can be more accountable and more humanist (if that’s you, let me know!). But if you think there’s value to these kinds of things, let me know, or if you think there are places where I should be getting the message out, do let them know, and I’ll try to do my best to dedicate as much time and energy as I can to doing so. And, as always, if there’s something I could be doing better in communicating in these kinds of platforms, your critique and comments are always welcome!</p>

    ]]>
      </content>
    </entry>
  
    
    <entry>
      <title>A Cookie for Dario? — Anthropic and selling death</title>
      <link href="https://anildash.com/2026/02/27/a-cookie-for-dario/"/>
      <updated>2026-02-28T00:00:00Z</updated>
      <id>https://anildash.com/2026/02/27/a-cookie-for-dario/</id>
      <content type="html">
        <![CDATA[
      <p>A big tech headline this week is Anthropic (makers of Claude, widely regarded as one of the best LLM platforms) resisting Secretary of Defense Pete Hegseth’s calls to modify their platform in order to enable it to support <a href="https://www.politico.com/news/2025/11/30/war-crimes-hegseth-venezuela-strikes-00671160">his commission</a> of <a href="https://www.newyorker.com/news/q-and-a/the-legal-consequences-of-pete-hegseths-kill-them-all-order">war crimes</a>. As has become clear this week, Anthropic CEO Dario Amodei has <a href="https://www.nytimes.com/2026/02/26/technology/anthropic-pentagon-talks-ai.html?unlocked_article_code=1.PVA.ao-a.26AX1P-gLWlH&amp;smid=url-share">declined to do so</a>. The administration couches the request as an attempt to use the technology for “lawful purposes”, but given that they’ve also described their recent crimes as legal, this is obviously not a description that can be trusted.</p>
<p>Many people have, understandably, rushed to praise Dario and Anthropic’s leadership for this decision. I’m not so sure we should be handing out a cookie just because someone is saying they’re not going to let their tech be used to cause extrajudicial deaths.</p>
<p>To be clear: I am glad that Dario, and presumably the entire Anthropic board of directors, have made this choice. However, I don’t think we need to be overly effusive in our praise. The bar cannot be set so impossibly low that we celebrate merely refusing to directly, intentionally enable war crimes like the repeated bombing of unknown targets in international waters, in direct violation of both U.S. and international law. This is, in fact, basic common sense, and it’s shocking and inexcusable that any other technology platform <em>would</em> enable a sitting official of any government to knowingly commit such crimes.</p>
<p>We have to hold the line on normalizing this stuff, and remind people where reality still lives. This means we can recognize it as a positive move when companies do the reasonable thing, but also know that <em>this is what we should expect</em>. It’s also good to note that companies may have <em>many</em> reasons that they don’t want to sell to the Pentagon in addition to the obvious moral qualms about enabling an unqualified TV host who’s <a href="https://www.newyorker.com/news/news-desk/pete-hegseths-secret-history">drunkenly stumbling</a> his way through playacting as Secretary of Defense (which they insist on dressing up as the “Department of War” — <a href="https://www.wired.com/story/department-of-defense-department-of-war/">another lie</a>).</p>
<h2>Selling to the Pentagon sucks</h2>
<p>Being on <em>any</em> federal procurement schedule as a technology vendor is a tedious nightmare. There’s endless paperwork and process, all falling squarely into the types of procedures that a fast-moving technology startup is likely to be particularly bad at completing, with very few staff members having had prior familiarity handling such challenges. Right now, Anthropic handles most of the worst parts of these issues through partners like Amazon and Palantir. Addressing more of these unique and tedious needs for a demanding customer like the Pentagon themselves would almost certainly require blowing up the product roadmap or hiring focus within Anthropic for months or more, potentially delaying the release of cool and interesting features in service of boring (or just plain evil) capabilities that would be of little interest to 99.9% of normal users. Worse, if they have to <em>build</em> these features, it could exhaust or antagonize a significant percentage of the very expensive, very finicky employees of the company.</p>
<p>This is a key part of the calculus for Anthropic. A big part of their entire brand within the tech industry, and a huge part of why they’re appreciated by coders (in addition to the capabilities of their technology), is that they’re the “we don’t totally suck” LLM company. Think of them as “woke-light”. Within tech, as there have been <a href="https://www.anildash.com/2026/01/06/500k-tech-workers-laid-off/">massive waves of rolling layoffs</a> over the last few years, people have felt terrified and unsettled about their future job prospects, even at the biggest tech companies. The only opportunities that feel relatively stable are on big AI teams, and most people of conscience don’t want to work for the ones that <a href="https://www.anildash.com/2026/02/18/threatening-kids-with-ai/">threaten kids’ lives or well-being</a>. That leaves Anthropic alone amongst the big names, other than maybe Google. And Google has <a href="https://layoffs.fyi">laid off people <em>at least 17 times</em></a> in the last three years alone.</p>
<p>So, if you’re Dario, and you want to keep your employees happy, and maintain your brand as the AI company that doesn’t suck, and you don’t want to blow up your roadmap, and you don’t want to have to hire a bunch of pricey procurement consultants, and you can stay focused on your core enterprise market, <em>and</em> you can take the right moral stand? It’s a pretty straightforward decision. It’s almost, I would suggest, an easy decision.</p>
<h2>How did we get here?</h2>
<p>We’ve only allowed ourselves to lower the bar this far because so many of the most powerful voices in Silicon Valley have so completely embraced the authoritarian administration currently in power in the United States. Facebook’s role in <a href="https://www.amnesty.org/en/latest/news/2022/09/myanmar-facebooks-systems-promoted-violence-against-rohingya-meta-owes-reparations-new-report/">enabling the Rohingya genocide</a> truly served as a tipping point in the contemporary normalization of major tech companies enabling crimes against humanity that would have been unthinkable just a few years prior; we can’t picture a world where MySpace helped accelerate the Darfur genocide, because the Silicon Valley tech companies we know about today didn’t yet aspire to that level of political and social control. But there are deeper precedents: IBM provided technology that helped enable the horrors of <a href="https://en.wikipedia.org/wiki/IBM_and_World_War_II">the holocaust in Germany</a> in the 1940s, and that served as the template for their work implementing <a href="https://www.eff.org/deeplinks/2015/02/eff-files-amicus-brief-case-seeks-hold-ibm-responsible-facilitating-apartheid">apartheid in South Africa</a> in the 1970s. IBM actually <em>bid</em> for the contract to build these products for the South African government. And the systems IBM built were still in place when Elon Musk, Peter Thiel, David Sacks and a number of other Silicon Valley tycoons all lived there during their formative years. Later, as they became the vaunted “PayPal Mafia”, today’s generation of Silicon Valley product managers were taught to look up to them, so it’s no surprise that their acolytes have helped create companies that enable mass persecution and surveillance. But it’s also why one of the first big displays of worker power in tech was when many across the industry <a href="https://www.vox.com/recode/2019/10/9/20906605/github-ice-contract-immigration-ice-dan-friedman">stood up against contracts with ICE</a>. That moment was also one of the catalyzing events that drove the tech tycoons into <a href="https://www.anildash.com/2023/07/07/vc-qanon/">their group chats</a> where they collectively decided that they needed to bring their workers to heel.</p>
<p>And they’ve escalated since then. Now, the richest man in the world, who is CEO of a few of the biggest tech companies, including one of the most influential social networks — and a major defense vendor to the United States government — has been <a href="https://www.bbc.com/news/articles/c5ydddy3qzgo">openly inciting</a> <a href="https://caliber.az/en/post/elon-musk-warns-america-on-brink-of-second-civil-war">civil war</a> <a href="https://www.nbcnews.com/tech/internet/elon-musk-predicting-civil-war-europe-nearly-year-rcna165469"><em>for years</em></a> on the basis of his racist conspiracy theories. The other tech tycoons, who look to him as a role model, think they’re being reasonable by comparison in the fact that they’re only enabling mass violence indirectly. That’s shifted the public conversation into such an extreme direction that we think it’s a <em>debate</em> as to whether or not companies should be party to crimes against humanity, or whether they should automate war crimes. No, they shouldn’t. This isn’t hard.</p>
<p>We don’t have to set the bar this low. We have to remind each other that this isn’t <em>normal</em> for the world, and doesn’t have to be normal for tech. We have to keep repeating the truth about where things stand, because too many people have taken this twisted narrative and accepted it as being real. The majority of tech’s biggest leaders are acting and speaking far beyond the boundaries of decency or basic humanity, and it’s time to stop coddling their behavior or acting as if it’s tolerable.

In the meantime, yes, we can note when one has the temerity to finally, finally do the right thing. And then? Let’s get back to work.</p>

    ]]>
      </content>
    </entry>
  
    
    <entry>
      <title>Why Apple’s move to video could endanger podcasting&#39;s greatest power</title>
      <link href="https://anildash.com/2026/02/28/apple-video-podcast-power/"/>
      <updated>2026-02-28T00:00:00Z</updated>
      <id>https://anildash.com/2026/02/28/apple-video-podcast-power/</id>
      <content type="html">
        <![CDATA[
      <p>TL;DR:</p>
<ul>
<li>Apple is adding support for video podcasts to their podcast app</li>
<li>Podcasts are built on an open standard, which is why they aren’t controlled by a bad algorithm and don’t have ads that spy on you</li>
<li>Apple’s new system for video podcasts breaks with the old podcast standard, and forces creators to host their video clips with a few selected companies</li>
<li>The stakes are even higher because all the indie video infrastructure companies have been bought by private equity, while Trump’s goons go after TV and consolidate the big studios</li>
<li>If Apple doesn’t open this up, it could lead to podcasts getting enshittified like all the other media</li>
</ul>
<h2>Podcasts are a radical gift</h2>
<p>As I noted back in 2024, the common phrase “wherever you get your podcasts” masks a subtle point, which is that podcasts are built on an open technology — a design which has radical implications on today’s internet. This is the reason that the podcasts most people consume aren’t skewed by creators chasing an algorithm that dictates what content they should create, aren’t full of surveillance-based advertising, and aren’t locked down to one app or platform that traps both creators and their audience within the walled garden of a single giant tech company.</p>
<p>Many of those merits of the contemporary podcast ecosystem are possible because of choices Apple made almost two decades ago when they embraced open standards in iTunes when adding podcasting features. Their outsized market influence (the term “podcast” itself came from the name iPod) pushed everyone else in the ecosystem to follow their lead, and as a result, we have a major media format that isn’t as poisoned, in some ways, as the rest of social media or even mainstream media.</p>
<p>Sure, there are individual podcast creators one might object to, but notice how you don’t see bad actors like FCC chairman Brendan Carr illegally throwing his weight around to try to censor and persecute podcasters in the same way that he’s been silencing television broadcasters, and you don’t see MAGA legislators trying to game the refs about the algorithm the way they have with Facebook and Twitter. Even the Elon Musks of the world <em>can’t</em> just buy up the whole world of podcasting like he was able to with Twitter, because the ecosystem is decentralized and not controlled by any one player. This is how the Internet was supposed to work. As early Internet advocates were fond of saying, the architecture of the Internet was designed to see censorship as damage, and route around it.</p>
<h2>The move to video</h2>
<p>All of this is at much higher risk now due to the technical decisions Apple has made with its <a href="https://www.apple.com/newsroom/2026/02/apple-introduces-a-new-video-podcast-experience-on-apple-podcasts/">move to support video podcasts</a> in its latest software versions that are about to launch. The motivations for their move are obvious: in recent years, many podcasters have moved to embrace new platforms to increase their distribution, reach, engagement and sponsorship dollars, and that has driven them to add video, which has meant moving to YouTube, and more recently, platforms like Netflix. That is also typically accompanied by putting out promotional clips of the video portion of the podcast on platforms like TikTok and Instagram. Combined with Spotify’s acquisition of multiple studios in order to produce proprietary shows that are not podcasts, but exclusive content locked into their apps, and Apple has faced a significant number of threats to their once-dominant position in the space.</p>
<p>So it was inevitable that Apple would add video support to their podcasting apps. And it makes sense for Apple to update the technical underpinnings; the assumptions that were made when designing podcasts over two decades ago aren’t really appropriate for many contemporary uses.  For example, back then, by default an entire podcast episode would be downloaded to your iPod for convenient listening on the go, just like songs in your music library. But downloading a giant 4K video clip of an hour-long podcast show that you might not even watch, just in case you might want to see it, would be a huge waste of resources and bandwidth. Modern users are used to streaming everything. Thus, Apple updated their apps to support just grabbing snippets of video as they’re needed, and to their credit, Apple is embracing an open video format when doing so, instead of some proprietary system that requires podcasters to pay a fee or get permission.</p>
<p>The problem, though, is that Apple is only allowing these new video streams to be served by <a href="https://podcasters.apple.com/partner-search">a small number of pre-approved commercial providers</a> that they’ve hand-selected. In the podcasting world, there are no gatekeepers; if I want to start a podcast today, I can publish a podcast feed here on <code>anildash.com</code> and put up some MP3s with my episodes, and anyone anywhere in the world can subscribe to that podcast, I don’t have to ask anyone’s permission, tell anyone about it, or agree to anyone’s terms of service.</p>
<p>If I want to publish a <em>video</em> podcast to Apple’s new system, though, I can’t just put up a video file on my site and tell people to subscribe to my podcast. I have to sign up for one of the approved partner services, agree to their terms of service, pay their monthly fee, watch them get acquired by Facebook, wait for the stupid corporate battle between Facebook and Apple, endure the service being enshittified, have them put their thumb on the scale about which content they want to promote, deal with my subscribers being spied on when they watch my show, see Brendan Carr make up a pretense to attack the platform I’m on, watch the service use my show to cross-promote violent attacks on vulnerable people, and the entire rest of <a href="https://www.anildash.com/2022/02/09/the-stupid-tech-content-culture-cycle/">that broken tech/content culture cycle</a>.</p>
<p>We <em>don’t have to do this</em>, Apple!</p>
<h2>How this plays out</h2>
<p>What will happen, by default, if Apple doesn’t change course and add support for open video hosting for podcasts is a land grab for control of the infrastructure of the new, closed video podcast technology platform. Some of the bidders may be players that want to own podcasting (Spotify, Netflix, maybe legacy media companies like Disney and Paramount), or a roll-up from a cloud provider like AWS or Google Cloud. Either way, the services will get way more expensive for creators, and far more conservative about what content they allow, while being far more consumer-hostile in terms of privacy and monetization. We’ve seen this play out already — video shows on YouTube give advertisers massive amounts of data about viewers, while podcasts can be delivered to an audience while almost totally preserving their privacy, if a creator wants to help them preserve their anonymity. The reason you see podcasters always talking about “use our promo code” in their sponsor reads is because <em>advertisers can’t track you</em> going from their show to their website.</p>
<p>This will also start to impact content. You <em>don’t</em> hear podcasters saying “unalive” or censoring normal words because there is no algorithm that skews the distribution of their content. The promotional graphics for their shows are often downright boring, and don’t feature the hosts making weird faces like on YouTube thumbnails, because they haven’t been optimized to within an inch of their lives in hopes of getting 12-year-olds to click on them instead of Mr. Beast — because they’re not trying to chase algorithmic amplification. The closest thing that podcasters have to those kinds of games is when they ask you to rate them in Apple’s Podcasts app, because <em>that</em> has an algorithm for making recommendations, but even that is mediated by real humans making actual choices.</p>
<p>But once we’ve got a layer of paid intermediaries distributing video content, and Apple leans more heavily into the visual aspects of their podcast app, incentives are going to start to shift rapidly. Today, other than on laptops, phones and tablets, Apple Podcasts app only exists on their Apple TV hardware, and doesn’t even have a video playback feature. By contrast, a <em>lot</em> of video podcast consumption happens in YouTube’s TV apps in the living room. Apple Podcasts will soon have to be on every set top device like Roku sticks and Amazon Fire TVs and Google’s Chromecasts, as well as on smart TVs like Samsungs and LGs, with a robust video playback feature that can compete with YouTube’s own capabilities. Once that’s happened — which will take at least a year, if not multiple years — creators will immediately begin jockeying for ways to get promoted or amplified within that ecosystem. Even if Apple <em>has</em> allowed independent publishers to make their own video podcast feeds, it’s easy to imagine them treating them as second-class citizens when distributing those podcasts to all of the Apple Podcast users across all of these platforms.</p>
<p>The stakes for all of this are even higher because nearly all of the independent online platforms for video creation outside of YouTube have been <a href="https://youtu.be/bx5bD7F8zvE">bought up by a single private equity firm</a>. In short: even if you don’t know it, if you’re trying to do video off of YouTube, all of your eggs are in one, very precarious, basket.</p>
<h2>What to do</h2>
<p>Apple can mitigate the risks of closing up podcasts by moving as quickly as possible to reassure the entire podcasting ecosystem that they’ll allow creators to use <em>any</em> source for hosting video. Right now, there’s a “fallback” video system where creators can deliver video through the traditional podcast standard, and other podcasting apps will show that video to audiences, but Apple’s apps don’t recognize it. If Apple said they’d support that specification as a second option for those who don’t want to, or can’t, use their video hosting partners, that would go a huge way towards mitigating the ecosystem risk that they’re introducing with this new shift.</p>
<p>If Apple can engage with a wide swath of creators and understand the concerns that are bubbling up, and articulate that they’re aware of the real, significant risks that can arise from the path that they’re currently on, they still have a chance to course-correct.</p>
<p>Some of these decisions can seem like arcane technical discussions. It’s easy to roll your eyes when people talk about specifications and formats and the minutiae of what happens behind the scenes when we click on a link. But the history of the Internet has shown us that, sometimes, even some of what seem like the most inconsequential choices end up leading to massive shifts in a larger ecosystem, or even in culture overall.</p>
<p>A generation ago, a few people at Apple made a choice to embrace an open ecosystem that was in its infancy, and in so doing, they enabled an entire culture of creators to flourish for decades. Podcasting is perhaps the last major media format that is open, free, and not easily able to be captured by authoritarians. The stakes couldn’t be higher. All it takes now is a few decision makers pushing to do the right thing, not just the easy thing, to protect an entire vital medium.</p>

    ]]>
      </content>
    </entry>
  
    
    <entry>
      <title>The Neo solves Apple’s embarrassment</title>
      <link href="https://anildash.com/2026/03/08/neo-apple-embarassment/"/>
      <updated>2026-03-08T00:00:00Z</updated>
      <id>https://anildash.com/2026/03/08/neo-apple-embarassment/</id>
      <content type="html">
        <![CDATA[
      <p>Last week, Apple released a parade of hardware announcements, and the one that captured the most attention across the industry was the $600 ($500 if you’re in education!) <a href="https://amzn.to/46K9mbt">MacBook Neo</a>, the brightly-colored low-end laptop that they launched to great fanfare. The conventional wisdom is that this product opens up Apple to the low end of the laptop market for the first time, radically changing the dynamics of the entire market, and throwing down the gauntlet to the garbage Windows laptop market, as well as challenging a huge swath of Chromebooks which tend to dominate in the education market. This is incorrect.</p>
<p>Apple has, in fact, sold a MacBook Air with an M1 chip <a href="https://www.macworld.com/article/2986234/walmart-m1-macbook-air-too-good-to-be-true.html">at Walmart</a> for <em>years</em>, which it has intermittently discounted to $499 at key times like Black Friday and Cyber Monday. The single-core performance of that laptop (meaning, how it works for most normal tasks that people do, like browsing the web or writing email or watching YouTube videos), is very nearly equivalent to the newly-released MacBook Neo.</p>
<p>But. A laptop with an old design, using a chip that has an old number (the M1 chip came out six years ago!), sold exclusively through a mass-market retailer that is perceived as anything but premium, presents an enormous brand challenge for Apple. It is, to put it simply, <em>embarrassing</em>. Apple can have low-end products in its range. They invest lots of effort in that segment of their product line, as the new iPhone 17e shows, making a new basic entrant to their most recent series of phones. But Apple <em>can’t</em> have old, basic-looking products that people aren’t even able to buy at an Apple Store.</p>
<p>And that’s what Neo solves. It’s a smart reframing of a product that is nearly the same offering as the old M1 Air: the Neo and that old M1 machine both have 13” screens, both weigh just under 3 pounds, both have 8GB of RAM, both start at 256GB of storage, both have about 16 hours of battery life, are both about 8”x12”, both have 2 USB ports and a headphone jack, and both of course cost almost exactly the same. They did add a new yellow (citrus!) color for the Neo, though.</p>
<h2>Wake up, Neo</h2>
<p>What was more striking to me was <a href="https://www.youtube.com/watch?v=u3SIKAmPXY4">Apple’s introductory video</a>, which clearly seems aimed at people who are new to Apple computers, or maybe people who are new to laptop computers entirely. They’re imagining a user base who’s only ever had their smartphones and are buying computers for the first time — which might describe a lot of students. There’s no discussion here of the chamfers of the aluminum, or the pipelines in the GPU cores, and there’s barely even the slightest mention of AI; instead, they describe the basics of what the laptop includes, and even go out of their way to explain how it interoperates with an iPhone.</p>
<p>There’s also a very clear attempt to distinguish Neo’s branding from the rest of Apple’s design language. The type for the “MacBook Neo” name in the launch video, and the “Hello, Neo” text on the <a href="https://www.apple.com/macbook-neo/">product homepage</a> are a rounded typeface that’s so new that it’s not actually even an actual font that Apple’s using; they’ve rendered it as an image instead of a variation of their usual “<a href="https://developer.apple.com/fonts/">San Francisco</a>” font that Apple uses for everything else in their standard marketing materials. The throwback to 2000s-era design (terminal green, the word “Neo” — are we entering the Matrix?) couldn’t be more different from the “it looks expensive” vibes of something like the <a href="https://www.apple.com/apple-watch-hermes/">Apple Watch Hermès</a> branding.</p>
<p>In all, it’s pretty impressive to see Apple use its marketing strengths to take a product that is remarkably similar to something that they’ve had for sale for years at the largest retailer in the world, and position it as a brand-new, category-defining new entry into a space. To me, the biggest thing this shows is the blind spot that traditional tech trade press has to the actual buying patterns and lived experience of normal people who shop at Walmart all the time; it would be pretty hard to see Neo as particularly novel if you had walked by a Walmart tech section any time in the last three years.</p>
<p>At a time when Apple has <a href="https://www.anildash.com/2025/09/09/how-tim-cook-sold-out-steve-jobs/">lost whatever moral compass it had</a>, even though its machines still say “privacy is a human right” when you turn them on, we still want to see positive signs from the company. And a good one is that Apple is engaging with the reality that the current moment calls for products that are far more affordable. It is a good thing indeed when affordable products are presented as being desirable, when most of the product’s enclosure is made of recycled material, and when the lifespan of a product can be expected to be significantly longer than most in its category, instead of simply being treated as disposable. All it took was removing the stigma over the existing affordable laptop that Apple’s been selling for years.</p>

    ]]>
      </content>
    </entry>
  
    
    <entry>
      <title>What do coders do after AI?</title>
      <link href="https://anildash.com/2026/03/13/coders-after-ai/"/>
      <updated>2026-03-13T00:00:00Z</updated>
      <id>https://anildash.com/2026/03/13/coders-after-ai/</id>
      <content type="html">
        <![CDATA[
      <p>For the New York Times Magazine this Sunday, <a href="https://www.nytimes.com/2026/03/12/magazine/ai-coding-programming-jobs-claude-chatgpt.html?unlocked_article_code=1.SlA.gzDD.giRxmN2oQFcF&amp;smid=url-share">I talked to Clive Thompson</a> about one of the conversations that I'm having most often these days: What happens to coders in this current moment of extraordinarily rapid evolution in AI? LLMs are now quickly advancing to where they can virtually become entire software factories, radically changing both the economics and the power dynamics of software creation — which has so far mostly been used to displace massive numbers of tech workers.</p>
<p>But it's not so simple as &quot;bosses are firing coders now that AI can write code&quot;.</p>
<p>For one thing, though there are certainly a lot of companies where executives are forcing teams to churn out slop code, and using that as an excuse to carry out mass layoffs, there are plenty of companies where &quot;AI&quot; is just a buzzword being used as a pretense for layoffs that owners have wanted to do anyway. And more importantly, there are a growing number of coders who are having a very <em>different</em> experience with the tools than those bosses may have expected — and a very different outcome than the Big AI labs may have intended. As I said in the story:</p>
<blockquote>
<p>“The reason that tech generally — and coders in particular — see LLMs differently than everyone else is that in the creative disciplines, LLMs take away the most soulful human parts of the work and leave the drudgery to you,” Dash says. “And in coding, LLMs take away the drudgery and leave the human, soulful parts to you.”</p>
</blockquote>
<p>This is a point that's hard for a lot of my artist friends to understand: how come so many coders don't just hate LLMs for stealing their work the way that most writers and photographers and musicians do? The answer boils down to three things:</p>
<ul>
<li>Coders have long had a history of openly sharing code with each other, as part of an open source, collaborative culture that goes back for more than half a century.</li>
<li>Tools for writing and creating code have almost always offered a certain degree of automation and reuse of work, so generating code doesn't feel like as radical a departure from past practices.</li>
<li>Software development is one of the fields with the least-advanced cultures around labor, as workers have almost no history of organizing, and many coders tend to side much more with management as they've been conditioned to think of themselves as &quot;future founders&quot; rather than being in solidarity with other workers.</li>
</ul>
<p>What this means is, attitudes about automation and worker displacement in tech are radically different than they would be in something like the auto industry, and in many cases, I've found that being part of a coder workforce has meant witnessing a level of literacy about past labor movements that is shockingly low, even though their technical knowledge is obviously extremely high.</p>
<h2>Coders, in their heads and hearts</h2>
<p>To be somewhat reductive about it, there are two main cohorts of coders. A larger, less vocal, group who see coding as a stable, well-paying career that they got into in order to support themselves and their families, and to partake in the upward economic mobility that the tech sector has represented for the last few decades. Then there is the smaller, more visible, group who have seen coding as an avocation, which they were drawn to as a form of creative expression and problem-solving just as much as a career opportunity. They certainly haven't been reluctant to capitalize on the huge economic potential of working in tech — this is the group that most startup founders come from — but coding isn't simply something they do from 9 to 5 and then put away at the end of the day. For those of us in this group (yeah... I'm one of these folks), we usually started coding when we were kids, and we have usually kept doing it on nights and weekends ever since, even if it's not even part of our jobs anymore.</p>
<p>Both cohorts of coders are in for a hard time thanks to the new AI tools, but for completely different reasons.</p>
<h3>For the 9 to 5</h3>
<p>The people who started to write software just because it represented a stable job, but who don't see it as part of their own personal identity, are going to be devastated by the ruthlessness with which their bosses will swing the ax. These new LLM-powered software factories can generate orders of magnitude more of the standardized business code that tends to be the bread-and-butter work for these journeyman coders, and it's not the kind of displacement that can be solved by learning a new programming language on nights and weekends, or getting a new professional certification. Much of the &quot;working class&quot; tech industry (speaking of the roles they perform functionally within the system; these are obviously jobs that pay far more than working class salaries today) are seen as ripe targets for deskilling, where lower-paid product roles can delegate coding tasks to coding AI systems, or for being automated by management giving orders to those AI systems.</p>
<p>One of the hardest parts of reckoning with this change is not just the speed with which it is happening, but the level of cultural change that it reflects. Coders are generally very amenable to learning new skills; it's a necessary part of the work, and the mindset is almost never one of being change-averse. But the level at which the change is happening in this transition is one that gets closer to people's sense of self-worth and identity, rather than to their perceptions of simply having to acquire knowledge or skills. It doesn't help that the change is being catalyzed by some of the most venal and irresponsible leaders in the history of business, brazenly acting without any moral boundaries whatsoever.</p>
<h3>For the nights and weekends</h3>
<p>For the coders that see being a coder as part of their identity, the LLM transformation is going to represent an entirely different set of challenges. They may well survive the transition that is coming, but find themselves in an unrecognizable place on the other side of it. The way that these new LLM-based tools work is by turning into virtual software factories that essentially churn out nearly all of the code <em>for</em> you. The actual work of writing the code is abstracted away, with the creator essentially focused more on describing the desired end results, and making sure to test that everything is working correctly. You're more the conductor of the symphony than someone who's holding a violin.</p>
<p>But there are people who have spent decades honing their craft, committing to memory the most obscure vagaries of this computer processor or that web browser or that one gaming console, all in service of creating code that was particularly elegant or especially high-performing, or just <em>really satisfying</em> to write. There's a real art to it. When you get your code to run just so, you feel a quiet pride in yourself, and a sense of relief that there are still things in the world that work as they should. It's a little box that you can type in where things are fair. It's the same reason so many coders like to bake, or knit, or do woodworking — they're all hobbies where precisely doing the right thing is rewarded with a delightful result.</p>
<p>And now that's going away. You won't see the code yourself anymore, the robots will write it for you while falling around and clanking. Half the time, the code they write will be garbage, or nonsense. Slop. But it's so cheap to write that the computer can just throw it away and write some more, over and over, until it finally happens to work.  Is it elegant? Who cares? It's cheap. Ten thousand times cheaper than paying you to write it, so we can afford to waste a lot of code along the way.</p>
<p>Your job changes into <em>describing software</em>. Now, if you're the kind of person who only ever wanted to have the end result, maybe this is a liberation. Sometimes, that's what mattered — we wanted to fast-forward to the end result, elegance be damned. But if you were one of those crafters? The people who wrote idiomatic code that made that programming language sing? There's a real grief here. It's not as serious as when we know a human language is dying out, but it's not entirely dissimilar, either.</p>
<h2>If ... Then?</h2>
<p>What do we do about it? This horse is not going back in the barn. The billionaires wouldn't let it, anyway.</p>
<p>I've come to the personal conclusion that the only way forward is for more of the hackers with soul to seize this moment of flux and use these tools to build. The economics of creating code are changing, and it can't just be the worst billionaires in the world who benefit. The latest count is <em>700,000 people</em> laid off in the last few years in the tech industry. We'll be at a million soon, at the rate things are accelerating. Each new layoff announcement is now in the <em>thousands</em>.</p>
<p>It's not going to be a panacea for all the jobs lost, and it's not the only solution we're going to need, but one part of the answer can be coders who still give a damn looking out for each other, and building independent efforts without being reliant on the economics — or ethics — of the people who are laying off their colleagues by the hundreds of thousands.</p>
<p>I've spent my whole career working with communities of coders, building tools for the people who build with code. I don't imagine I'll ever stop doing it. This is the hardest moment that I've ever seen this community go through, and it makes me heartsick to see so many people enduring such stress and anxiety about what's to come. More than anything else, what I hope people can remember is that all of the great things that people love about technology weren't created by the money guys, or the bosses who make HR decisions — they were created by the people who actually build things. That's still an incredible superpower, and it will remain one no matter how much the actual tools of creation continue to change.</p>

    ]]>
      </content>
    </entry>
  
</feed>
Raw headers
{
  "age": "40630",
  "cache-control": "public,max-age=0,must-revalidate",
  "cache-status": "\"Netlify Edge\"; hit",
  "cf-cache-status": "DYNAMIC",
  "cf-ray": "9dc5c6d00b555751-CMH",
  "content-type": "application/xml",
  "date": "Sat, 14 Mar 2026 19:45:04 GMT",
  "etag": "W/\"aeaf5e27fcb03849108eca78456961f7-ssl-df\"",
  "referrer-policy": "strict-origin-when-cross-origin",
  "server": "cloudflare",
  "strict-transport-security": "max-age=31536000",
  "transfer-encoding": "chunked",
  "vary": "Accept-Encoding",
  "x-content-type-options": "nosniff",
  "x-frame-options": "SAMEORIGIN",
  "x-nf-request-id": "01KKPY2ZQ2B90NM2R74TZ2WVQ0",
  "x-xss-protection": "1; mode=block"
}
Parsed with @rowanmanning/feed-parser
{
  "meta": {
    "type": "atom",
    "version": "1.0"
  },
  "language": null,
  "title": "Anil Dash",
  "description": "A blog about making culture. Since 1999.",
  "copyright": null,
  "url": "https://anildash.com/",
  "self": "https://anildash.com/feed.xml",
  "published": null,
  "updated": "2026-03-13T00:00:00.000Z",
  "generator": null,
  "image": null,
  "authors": [
    {
      "name": "Anil Dash",
      "email": "[email protected]",
      "url": null
    }
  ],
  "categories": [],
  "items": [
    {
      "id": "https://anildash.com/2026/01/27/codeless-ecosystem/",
      "title": "A Codeless Ecosystem, or hacking beyond vibe coding",
      "description": null,
      "url": "https://anildash.com/2026/01/27/codeless-ecosystem/",
      "published": null,
      "updated": "2026-01-27T00:00:00.000Z",
      "content": "<p>There's been a <a href=\"https://www.anildash.com/2026/01/22/codeless/\">remarkable leap forward</a> in the ability to orchestrate coding bots, making it possible for ordinary creators to command dozens of AI bots to build software without ever having to directly touch code. The implications of this kind of evolution are potentially extraordinary, as outlined in that first set of notes about what we could call \"codeless\" software. But now it's worth looking at the larger ecosystem to understand where all of this might be headed.</p>\n<h2>\"Frontier minus six\"</h2>\n<p>One idea that's come up in a host of different conversations around codeless software, both from supporters and skeptics, is how these new orchestration tools can enable coders to control coding bots that <em>aren't</em> from the Big AI companies. Skeptics say, \"won't everyone just use Claude Code, since that's the best coding bot?\"</p>\n<p>The response that comes up is one that I keep articulating as \"frontier minus six\", meaning the idea that many of the open source or open-weight AI models are often delivering results at a level equivalent to where frontier AI models were six months ago. Or, sometimes, where they were 9 months or a year ago. In any of these cases, these are still damn good results! These levels of performance are not merely acceptable, they are results that we were amazed by just months ago, and are more than serviceable for a large number of use cases — especially if those use cases can be run locally, at low cost, with lower power usage, without having to pay any vendor, and in environments where one can inspect what's happening with security and privacy.</p>\n<p>When we consider that a frontier-minus-six fleet of bots can often run on cheap commodity hardware (instead of the latest, most costly, hard-to-get Nvidia GPUs) and we still have the backup option of escalating workloads to the paid services if and when a task is too challenging for them to complete, it seems inevitable that this will be part of the mix in future codeless implementations.</p>\n<h2>Agent patterns and design</h2>\n<p>The most thoughtful and fluent analysis of the new codeless approach has been <a href=\"https://maggieappleton.com/gastown\">this wonderful essay by Maggie Appleton</a>, whose writing is always incisive and insightful. This one's a must-read! Speaking of Gas Town (Steve Yegge's signature orchestration tool, which has catalyzed much of the codeless revolution), Maggie captures the ethos of the entire space:</p>\n<blockquote>\n<p>We should take Yegge’s creation seriously not because it’s a serious, working tool for today’s developers (it isn’t). But because it’s a good piece of speculative design fiction that asks provocative questions and reveals the shape of constraints we’ll face as agentic coding systems mature and grow.</p>\n</blockquote>\n<h2>Code and legacy</h2>\n<p>Once you've considered Maggie's piece, it's worth reading over Steve Krouse's essay, \"<a href=\"https://blog.val.town/vibe-code\">Vibe code is legacy code</a>\". Steve and his team build the delightful <a href=\"https://www.val.town\">val town</a>, an incredibly accessible coding community that strikes a very careful balance between enabling coding and enabling AI assistance without overwriting the human, creative aspects of building with code. In many ways (including its aesthetic), it is the closest thing I've seen to a spiritual successor to the work we'd done for many years with <a href=\"https://en.wikipedia.org/wiki/Glitch,_Inc.\">Glitch</a>, so it's no surprise that Steve would have a good intuition about the human relationship to creating with code.</p>\n<p>There's an interesting point, however to the core point Steve makes about the disposability of vibe-coded (or AI-generated) code: <em>all</em> code is disposable. Every single line of code I wrote during the many years I was a professional developer has since been discarded. And it's not just because I was a singularly terrible coder; this is often the <em>normal</em> thing that happens with code bases after just a short period of time. As much as we lament the longevity of legacy code bases, or the impossibility of fixing some stubborn old systems based on dusty old languages, it's also very frequently the case that people happily rip out massive chunks of code that people toiled over for months or years and then discard it all without any sentimentality whatsoever.</p>\n<p>Codeless tooling just happens to embrace this ephemerality and treat it as a feature instead of a bug. That kind of inversion of assumptions often leads to interesting innovations.</p>\n<h2>To enterprise or not</h2>\n<p>As I noted in my original piece on codeless software, we can expect any successful way of building software to be appropriated by companies that want to profiteer off of the technology, <em>especially</em> enterprise companies. This new realm is no different. Because these codeless orchestration systems have been percolating for some time, we've seen some of these efforts pop up already.</p>\n<p>For example, the team at Every, which consults and builds tools around AI for businesses, calls a lot of these approaches <a href=\"https://every.to/chain-of-thought/compound-engineering-how-every-codes-with-agents\">compound engineering</a> when their team uses them to create software. This name seems fine, and it's good to see that they maintain the ability to switch between models easily, even if they currently prefer Claude's Opus 4.5 for most of their work. The focus on planning and thinking through the end product holistically is a particularly important point to emphasize, and will be key to this approach succeeding as new organizations adopt it.</p>\n<p>But where I'd quibble with some of what they've explained is the focus on tying the work to individual vendors. Those concerns should be abstracted away by those who are implementing the infrastructure, as much as possible. It's a bit like ensuring that most individual coders don't have to know exactly which optimizations a compiler is making when it targets a particular CPU architecture. Building that muscle where the specifics of different AI vendors become less important will help move the industry forward towards reducing platforms costs — and more importantly, empowering coders to make choices based on their priorities, not those of the AI platforms or their bosses.</p>\n<h2>Meeting the codeless moment</h2>\n<p>A good example of the \"normal\" developer ecosystem recognizing the groundswell around codeless workflows and moving quickly to integrate with them is the Tailscale team <em>already</em> shipping <a href=\"https://tailscale.com/blog/aperture-private-alpha\">Aperture</a>. While this initial release is focused on routine tasks like managing API keys, it's really easy to see how the ability to manage gateways and usage into a heterogeneous mix of coding agents will start to enable, and encourage, adoption of new coding agents. (Especially if those \"frontier-minus-six\" scenarios start to take off.)</p>\n<p>I've been on the record <a href=\"https://me.dm/@anildash/109719178280170032\">for years</a> about being bullish on Tailscale, and nimbleness like this is a big reason why. That example of seeing where developers are going, and then building tooling to serve them, is always a sign that something is bubbling up that could actually become signficant.</p>\n<p>It's still early, but these are the first few signs of a nascent ecosystem that give me more conviction that this whole thing might become real.</p>",
      "image": null,
      "media": [],
      "authors": [
        {
          "name": "Anil Dash",
          "email": "[email protected]",
          "url": null
        }
      ],
      "categories": []
    },
    {
      "id": "https://anildash.com/2026/02/03/nye-tech-30/",
      "title": "New York Tech at 30: the Crossroads",
      "description": null,
      "url": "https://anildash.com/2026/02/03/nye-tech-30/",
      "published": null,
      "updated": "2026-02-04T00:00:00.000Z",
      "content": "<p>This past week, over a series of events, the New York tech community celebrated the 30th anniversary of a nebulous idea described as “Silicon Alley”, the catch-all term for our greater collective of creators and collaborators, founders and funders, inventors and investors, educators and entrepreneurs and electeds, activists and architects and artists. Some of the parties or mixers have been typical industry affairs, the usual glad-handing about deal-making and pleasantries. But a lot have been deeper, reflecting on what’s special and meaningful about the community we’ve built in New York. <a href=\"https://www.mediapost.com/publications/article/412470/\">Steven Rosenbaum’s reflection</a> on the anniversary captures this well from someone who’s been there, and <a href=\"https://finance.yahoo.com/news/silicon-alley-turns-30-york-114752768.html\">Leo Schwartz’s piece for Fortune</a> covers the more conventional business angle.</p>\n<p>Beyond the celebrations, though, I wanted to reflect on a number of the deeper conversations I’ve had over these last few days. These are conversations grounded in the reality of where our country and city are today, far beyond spaces where wealthy techies are going to parties and celebrating each other. The hard questions raised in these conversations are the ones that determine where this community goes in the future, and they’re the ones that <em>every</em> tech community is going to face in the current moment.</p>\n<p>I know what the New York City tech community has been; there was a time when I was one of its most prominent voices. The question now is what it will be in the future. Because we are at a profound crossroads.</p>\n<iframe title=\"vimeo-player\" src=\"https://player.vimeo.com/video/1159273059?h=b6fe26d204\" width=\"640\" height=\"360\" frameborder=\"0\" referrerpolicy=\"strict-origin-when-cross-origin\" allow=\"autoplay; fullscreen; picture-in-picture; clipboard-write; encrypted-media; web-share\"   allowfullscreen></iframe>\n<h1>What community can be</h1>\n<p>Nobody better exemplifies the best of what New York tech has been than Aaron Swartz. As I’d <a href=\"https://www.anildash.com/2026/01/09/how-markdown-took-over-the-world/\">written about</a> recently, he was brilliant and delightfully impossible. At an incredibly young age, <a href=\"https://www.eff.org/deeplinks/2017/01/everyone-made-themselves-hero-remembering-aaron-swartz\">he led our community</a> in the battle to push back against a pair of ill-considered bills that threatened free expression on the Internet. (These bills would have done to the web what the current administration has done to broadcast television, having a chilling effect on free speech and putting large swaths of content under government control.) As we stood outside Chuck Schumer’s office and demanded that big business take their hands off our internet, we got our first glimpse of the immense power that our community could wield. And <a href=\"https://www.eff.org/deeplinks/2017/01/5-years-later-victory-over-sopa-means-more-ever\">we won</a>, at least for a while.</p>\n<p>My own path within the New York tech community was nowhere near as dramatic, but I was just as motivated in wanting to serve the community. When I became the first person <a href=\"https://www.anildash.com/2010/12/13/im-running-for-the-new-york-tech-meetup-board/\">elected to the board of the New York Tech Meetup</a> (later the New York Tech Alliance), it was the largest member-led organization of tech industry workers in the country. By the time it reached its peak, we were over 100,000 members strong, and could sell out one of our monthly events (at a venue of over 1000 attendees) in minutes. The collective power and impact of that cohort was immense. So, when I say “community”, I mean <em>community</em>. I’m not talking about the contemporary usage of the word, when people call their TikTok followers a “community”. I mean people who care about each other and show up for each other so that they can achieve meaningful things.</p>\n<p>New York tech demonstrated its values time and again, and not just in organizing around policy that served its self-interest. When the city was still reeling from 9/11, these were people who not only chose to stay in the city, or who simply talked about how New York ought to rebuild, but actually took the risk and rebuilt the economy of the city — the <em>majority</em> of the economic regrowth and new jobs in New York City in the quarter-century since the attacks of 9/11 have happened thanks to the technology sector.</p>\n<p>When Hurricane Sandy hit, these were people who <a href=\"https://www.nbcnews.com/id/wbna49663102\">were amongst the first to step up</a> to help their neighbors dig out. When our city began to <a href=\"https://www.anildash.com/2011/03/05/nyc-mta-ftw/\">open up its data</a>, the community responded in kind by building an entire ecosystem of new tools that laid the groundwork for the tech we now take for granted when navigating around our neighborhoods. There was no reluctance to talk about the importance of diversity and inclusion, and no apology in saying that tech was failing to do its job in hiring and promoting equitably, because we know how much talent is available in our city. Hackers would come to meetups to show off their startups, sure, but just as often to show off how they’d built cool new technology to <a href=\"https://www.wbur.org/hereandnow/2021/12/28/heat-seek-tool-tenants\">help make sure our neighbors in public housing had heat in the winter</a>. This was <a href=\"https://www.anildash.com/2016/07/15/new-york-style-tech/\">New York-style tech</a>.</p>\n<p>What’s more, the work of this community happened with remarkable solidarity; the SOPA/PIPA protests that Aaron Swartz spoke at had him standing next to some of the most powerful venture capitalists in the city. When it was time to take action, a number of the most influential tech CEOs in New York took Amtrak down to Washington, D.C. to talk to elected officials and their staffers about the importance of defending free expression online, advocating for the same issue that had been so important to the broke college kids who’d been at the rally just a few days earlier. People had actually gathered around <em>principles</em>. I don’t say this as a Pollyanna who thinks everything was perfect, or that things would have always stayed so idealistically aligned, but simply to point out that <em>this did happen</em>. I don’t have to assert that it is theoretically possible, because I have already seen a community which functions in this way.</p>\n<h2>From bottoms-up to big business</h2>\n<p>But things have changed in recent years for New York’s tech community. What used to often be about extending a hand to neighbors has, much of the time, become about simply focusing on who’s getting funded to chase the trends defined by Silicon Valley. The vibrancy of the New York Tech Meetup took a huge hit from covid, preventing the ability for the community to gather in person, and the organization’s evolution from a Meetup to an Alliance to being part of Civic Hall shifted its focus in recent years, though there has been a recent push to revitalize its signature events. In its place, much of the public narrative for the community is led by Tech:NYC, which has active and able leadership, but is a far more conventional trade group. There's a focus on pragmatic tools like job listings (their <a href=\"https://technycdigest.beehiiv.com/subscribe?ref=kdPsdXErYd\">email newsletter</a> is excellent), but they're unlikely to lead a rally in front of a Senator's office. An organization whose founding members include Google and Meta is necessarily going to be different than one with 100,000 individual members.</p>\n<p>When I <a href=\"https://web.archive.org/web/20150601041007/https://www.wsj.com/articles/SB10001424127887324624404578255752537705008\">spoke to the Wall Street Journal</a> back in 2013 about the political and social power of our community, at a far different time, I called out the breadth of who our community includes:</p>\n<blockquote>\n<p>The tech constituency encompasses a range of potential voters who remain unlikely to behave as a traditional bloc. \"It's venture capitalists and 23-year-old graphic designers in Bushwick,\" Mr. Dash said. \"It's labor and management. It's not traditional allies.\"</p>\n</blockquote>\n<p>I wanted to make sure people understood that tech in New York is much broader than just, well, what the bosses and the big companies want. It is important to understand that New York is about <a href=\"https://www.anildash.com/2025/10/24/founders-over-funders/\">founders, not just funders</a>.</p>\n<p>The distinction between these groups and their goals was never clearer to me than in the 2017 battle around Amazon’s proposed <a href=\"https://en.wikipedia.org/wiki/Amazon_HQ2\">HQ2 headquarters</a>. The public narrative was that Amazon was trying to make a few cities jump through hoops to make the best possible set of bribes to the company so that they would build a new headquarters complex in the host city. The reality was, New York City offered $1.5 billion dollars to the richest man in the world in order to open up an office in a city where the company was inevitably going to do business regardless, and the contract that Amazon would have to sign in exchange only obligated them to hire 500 new workers in the city — <strong>fewer</strong> people than their typical hiring plan would expect in that timeframe. In addition, the proposed plan would have taken over land intended for 6,000 homes, including 1500 affordable units, would have defunded the mass transit system through years of tax breaks for the company while putting massive additional burden on the transit system, and raised housing prices. (Amazon has since signed a lease for 335,000 square feet and hired over 1000 employees, without any subsidies.)</p>\n<p>At the time, I was CEO of a company that two entrepreneurs had founded in 2000 and bootstrapped to success, leading to them spinning out multiple companies which would go on to exit for over $2.2 billion, providing over 500 jobs and creating dozens of millionaires out of the workers who joined the companies over the years. Several of the people who had worked at those companies went on to form their own companies, and <em>those</em> companies are now collectively worth over $5 billion. All of these companies, combined, have gotten a total of <em>zero billion dollars</em> from the state and city of New York. In addition, none of those companies have ever had working conditions anywhere close to <a href=\"https://en.wikipedia.org/wiki/Criticism_of_Amazon#Treatment_of_workers\">those Amazon has been criticized for</a>.</p>\n<p>But the <em>story</em> of the time was that “New York tech wants HQ2!” Media like newspapers and TV were firmly convinced that techies were in support of Amazon getting a massive unnecessary handout, and I had genuinely struggled to figure out why for a long time. After a while, it became obvious. Everyone that they had spoken to, and all the voices that were considered canonical and credible when talking about “New York tech”, were investors or giant publicly-traded companies.</p>\n<p>People who actually <em>built</em> things were no longer the voice of the community. Those who showed up when the power was out, or when the community was hurting, or when there was an issue that called for someone to bravely stand up and lead the crowd even if there was some social or political risk — they were not considered valid. People liked the <em>myth</em> of Aaron Swartz by then, but they would have ignored the fact that he almost certainly would have objected to corporate subsidy for the company.</p>\n<h2>New York tech today, and tomorrow</h2>\n<p>I am still proud of the New York tech community. But that’s because I get to see what happens in person. Last week, I was reminded at every one of the in-person commemorations of the community that there are so many generous, kind-hearted, thoughtful people who will fight to do the right thing. The challenge today, though, is that those are no longer the people who define the story of the community. That’s not who a <em>new</em> person thinks of when they’re introduced to our community.</p>\n<p>When I talk to young people who are new to the industry, or people who are changing careers who are curious about tech, they have heard of things like Tech Week, or they read trade press. In those venues, a big name is generally not our home-grown founders, or even the “big” success stories of New York tech. That’s especially true as once high-flying New York tech companies like Tumblr and Foursquare and Kickstarter and Etsy and Buzzfeed either faded or got acquired, and newer successful startups are more prosaic and less attention-grabbing. Who’s left to tell them a story of what “tech” means in New York? Where will they find community?</p>\n<p>One possible future is that they try to build a startup, doing everything you’re “supposed” to do. They pitch the VC firms in town, and the big name firms that they’ve heard of. If they’re looking for community, they go to the events that get the most promotion, which might be Tech Week events. And all of these paths lead the same way — the most prominent VC firm is Andreessen Horowitz, and they run Tech Week too, even though they’re not from NYC.</p>\n<p>On that path, New York tech puts you across the table from <a href=\"https://fortune.com/2025/02/05/daniel-penny-andreessen-horowitz-a16z-investing-david-ulevitch/\">the man who strangled my neighbor to death</a>.</p>\n<p>Another possible future is that we rebuild the kind of community that we used to have. We start to get together the people who actually <em>make</em> things, and show off what we’ve built for one another. It’s going to require re-centering the hundreds of thousands of people who create and invent, rather than the dozens of people who write checks. It’s going to mean that the stories start with New York City (and maybe even… <em>in the outer boroughs</em>!), rather than taking dictation from those in Silicon Valley who hate our city. And it’s going to require understanding that technology is a set of tools and tactics we can use in service of goals — ideally positive social goals — and not just an economic opportunity to be extracted from.</p>\n<p>We would never talk about education by only talking to those who invest in making pencils. We’d never consider a story about a new movie to be complete if we only talked to those who funded the film. And certainly our policymakers would balk if we skipped speaking with them and instead aimed our policy questions directly at their financial backers, though that might result in more accurate responses. Yet somehow, with technology, we’ve given over the narrative entirely to the money men.</p>\n<p>In New York, we’ve borne the brunt of that error. A tech community with heart and soul is in danger of being snuffed out by those who will only let its most base instincts survive. Even our <em>investors</em> here are more thoughtful than these stories would make it seem! But we can change it, and maybe even change the larger tech story, if we’re diligent in never letting the bad actors control the narrative of what tech is in the world.</p>\n<p>Like so many good things, it can all start with New York City.</p>",
      "image": null,
      "media": [],
      "authors": [
        {
          "name": "Anil Dash",
          "email": "[email protected]",
          "url": null
        }
      ],
      "categories": []
    },
    {
      "id": "https://anildash.com/2026/02/06/no-such-thing-as-tech/",
      "title": "There's no such thing as \"tech\" (Ten years later)",
      "description": null,
      "url": "https://anildash.com/2026/02/06/no-such-thing-as-tech/",
      "published": null,
      "updated": "2026-02-06T00:00:00.000Z",
      "content": "<p>Ten years ago I wrote that <a href=\"https://www.anildash.com/2016/08/19/there-is-no-technology-industry/\">there is no “technology industry”</a>. It’s more true than ever.</p>\n<p>There is no “tech”. There’s no such thing as “a FAANG company”. There is almost nothing in common between the very largest tech companies and the next several hundred biggest companies that happen to create tech platforms. Whatever shorthand we use for the biggest tech companies, they almost never have much in common—whether it's how they make money, what products they make, how they make decisions, who leads them, or what drives their cultures.</p>\n<p>It’s important to make these distinctions because the false categorization of wildly dissimilar organizations into one grouping leads to absurdly inappropriate decisions being made. Let’s look at some simple examples to understand why.</p>\n<p>Take the once-ubiquitous shorthand of “FAANG” to describe big tech. (It stood, at one time, for Facebook, Amazon, Apple, Netflix and Google. Then Facebook became Meta and Google became Alphabet and Microsoft became upset about not being included, and people started trying to use other more unwieldy, less-popular sobriquets.) This abbreviation still persists because of the mindset it represents, and it is still useful in capturing a certain vision of how the industry functions. I often encounter early-career tech workers who describe their ambitions as “working at a FAANG company”.</p>\n<p>But let’s look at <em>what these different companies actually do</em>. For all its complexity, Netflix is, at its heart, about streaming video to people. Meta runs a number of communications platforms and social networks. Apple sells hardware devices. They all have very large side businesses that do other things, but this is what these companies are at their core — and they’re wildly different businesses in their core essence!</p>\n<p>If someone said, “I want to be an executive at Walmart, or maybe at A24,” you would think, “This person has no idea what the hell they want to be, or what they’re talking about.” If they were to say, “I want to work for nVidia, or maybe Deloitte,\" you would think, “This person is just confused, and that’s kind of sad.” But this is <em>exactly</em> equivalent to asserting “I want to work at a FAANG company” or “I want to work at a startup” or, worse, “I want to work in tech”.</p>\n<p>So many have been caught off guard as tech has grabbed massive power over nearly every aspect of society—from individuals who can't figure out their career paths to policy makers who've been bamboozled by tech tycoons. It's no secret how it happened: everyone underestimated the impact because they judged tech by the same rules as other industries.</p>\n<h2>Everything and nothing</h2>\n<p>These distinctions matter even more because today, <em>everything</em> is tech. Or, if you prefer, nothing is technology. Instead, every area is suffused with tech — and every discipline needs people who are fluent in the concerns of technology, and familiar with the tradeoffs and risks and opportunities that come with the adoption of, and creation of, new technologies.</p>\n<p>Now, of course, I know why it’s useful to have the shorthand of being able to say “the tech industry” when talking about a particular sector. But the sleight of hand that comes from being able to hide the enormous, outsized impact that this small number of companies has across a vast number of different sectors of society is possible, in part, because we <em>treat</em> them like they’re one narrow part of the business world. In many cases, an individual division of a giant tech company dwarfs the entirety of other industries. Apple’s AirPods business isn’t even one of the first products one would think of when listing their most important, most influential, or most profitable lines of business, and yet <em>AirPods alone</em> are bigger than the entire domestic radio advertising business in the United States. Google’s ad business alone is larger than the entire U.S. domestic airline industry combined. Things that are considered an “industry” in other categories are smaller than things that are considered a <em>product</em> in “tech”.</p>\n<p>That sense of scale is important to keep in mind as we push for accountability and to understand how to plan for what’s ahead. Even building a path for one’s own career — whether that’s inside or outside of the companies we consider to be in the tech sector — requires having a proper perspective on the relative influence of these organizations, and also on the distorting effect it can have when we don’t look at them in their full complexity.</p>\n<p>One example from a completely different realm that I find useful in contextualizing this challenge is from the world of retail: Ikea is one of the top 10 restaurants in the world. (By many reports, it’s the 6th largest chain of restaurants.) That is, of course, incidental to its role as a furniture retailer. But this is the nature of massive scale. The second-order impacts are still enough to have outsized effects in the larger world.</p>\n<p>At a moment when we have seen that so many of the biggest tech companies are led by people who don’t know how to act responsibly with all of the power that they’ve been given, it’s important that we complicate our views of their companies, and consider that they are <em>much</em> more than just part of the “tech industry”. They are functioning as communications, media, finance, education, infrastructure, transportation, commerce, defense, policing, and government much of the time. And very often, they’re doing it without our awareness or consent.</p>\n<p>So, when you hear conversations in society about tech companies, or tech execs, or tech platforms, make sure you push those who are involved in the dialogue to be specific about what they mean. You may find that they haven’t stopped to reflect on the fact that this simple label has long since stopped accurately describing the extraordinary amount of power and control that this handful of companies exert over our daily lives, and over society as a whole.</p>",
      "image": null,
      "media": [],
      "authors": [
        {
          "name": "Anil Dash",
          "email": "[email protected]",
          "url": null
        }
      ],
      "categories": []
    },
    {
      "id": "https://anildash.com/2026/02/11/coding-agents-as-the-new-compilers/",
      "title": "Coding agents as the new compilers",
      "description": null,
      "url": "https://anildash.com/2026/02/11/coding-agents-as-the-new-compilers/",
      "published": null,
      "updated": "2026-02-12T00:00:00.000Z",
      "content": "<p>In each successive generation of code creation thus far, we’ve abstracted away the prior generation over time. Usually, only a small percentage of coders still work on the lower layers of the stack that used to be the space where everyone was working. I’ve been coding long enough that people were still creating code in assembly when I started (though I was never any good at it!), though I started with BASIC. Since BASIC was an interpreted language, its interpreter would write the assembly language for me, and I never had to see exactly what assembly language code was being created.</p>\n<p>I definitely <em>did</em> know old-school coders who used to, at first, check that assembly code to see if they liked the output. But eventually, over time, they just learned to trust the system and stopped looking at what happened after the system finished compiling. Even people using more “close to the metal” languages like C generally trust that their compilers have been optimized enough that they seldom inspect the output of the compiler to make sure it was perfectly optimized for their particular processor or configuration. The benefits of delegating those concerns to the teams that create compilers, and coding tools in general, yielded so many advantages that that tradeoff was easily worth it, once you got over the slightly uncomfortable feeling.</p>\n<p>In the years that followed, though a small cohort of expert coders who would hand-tune assembly code for things like getting the most extreme performance out of a gaming console, most folks stopped writing it, and very few <em>new</em> coders learned assembly at all. The vast majority of working coders treat the output from the compiler layer as a black box, trusting the tools to do the right thing and delegating the concerns below that to the toolmakers.</p>\n<p>We may be seeing that pattern repeat itself. Only this time, the abstraction is happening through AI tools abstracting away <em>all</em> the code. Which can feel a little scary.</p>\n<h2>Squashing the stack</h2>\n<p>Just as interpreted languages took away chores like memory management, and high-level languages took away the tedium of writing assembly code, we’re starting to see the first wave of tools that completely abstract away the writing of code. (I described this in more detail in the piece about <a href=\"https://www.anildash.com/2026/01/22/codeless/\">codeless software</a>recently.</p>\n<p>The individual practice of professionalizing the writing of software with LLMs seems to have settled on the term “<a href=\"https://simonwillison.net/2026/Feb/11/glm-5/\">agentic engineering</a>”, as Simon Willison recently noted.</p>\n<p>But the next step beyond that is when teams <em>don’t</em> write any of the code themselves, instead moving to an entirely abstracted way of creating code. In this model, teams (or even individual coders):</p>\n<ul>\n<li>Define the specifications for how the code should work</li>\n<li>Ensure that the system is provided with enough context at all times that it can succeed in creating code that is successful as often as possible</li>\n<li>Provide sufficient resources that a redundant and resilient set of code outputs can be created to accommodate failures while in iteration</li>\n<li>Enforce execution of tests and conformance systems against the code — <a href=\"https://simonwillison.net/2025/Dec/18/code-proven-to-work/\">including human tests with a named, accountable party</a>, not just automated software tests</li>\n</ul>\n<p>With this kind of model deployed, the software that is created can essentially be output from the system in the way that assembly code or bytecode is output from compilers today, with no direct inspection from the people who are directing its creation. Another way of thinking about this is that we’re abstracting away many different specific programming languages and detailed syntaxes to more human-written Markdown files, created much of the time in <strong>collaboration</strong> with these LLM tools.</p>\n<p>Presently, most people and teams who are pursuing this path are doing so with costly commercial LLMs. I would strongly advocate that most organizations, and <em>especially</em> most professional coders, be very fluent in ways of accomplishing these tasks with a fleet of low-cost, locally-hosted, open source/open-weight models contributing to the workload. I don’t think they are performant enough yet to accomplish all of the coding tasks needed for a non-trivial application yet, but there are a significant number of sub-tasks that could reasonably be delegated. More importantly, it will be increasingly vital to ensure that this entire “codeless compilation” stack for agentic engineering works in a vendor-neutral way that can be decoupled from the major LLM vendors, as they get more irresponsible in their business practices and more aggressive towards today’s working coders and creators.</p>\n<p>For many, those worries about Big AI are why their reaction to these developments in agentic coding make them want to recoil. But in reality, these issues are exactly why we desperately need to <em>engage</em>.</p>\n<h2>Seizing the means</h2>\n<p>Many of the smartest coders I know have a lot of legitimate and understandable misgivings about the impact that LLMs are having on the coding world, especially as they’re often being evangelized by companies that plainly have ill intent towards working coders. It is reasonable, and even smart, to be skeptical of their motivations and incentives.</p>\n<p>But the response to that skepticism is not to reject the category of technology, but rather to capture it and seize control over its direction, away from the Big AI companies. This shift to a new level of coding abstraction is exactly the kind of platform shift that presents that sort of opportunity. It’s potentially a chance for coders to be in control of some part of their destiny, at a time when a lot of bosses clearly want to <a href=\"https://www.anildash.com/2026/01/06/500k-tech-workers-laid-off/\">get rid of as many coders as they can</a>.</p>\n<p>At the very least, this is one area where the people who actually <em>make things</em> are ahead of the big platforms that want to cash in on it.</p>\n<h2>What if I think this is all bullshit?</h2>\n<p>I think a lot of coders are going to be understandably skeptical. The most common concern is, “I write really great code, how could it possibly be good news that we’re going to abstract away the writing of code?”. Or, “How the hell could a software factory be good news for people who make software?”</p>\n<p>For that first question, the answer is going to involve some grieving, at first. It may be the case that writing really clean, elegant, idiomatic Python code is a skill that will be reduced in demand in the same way that writing incredibly performant, highly-tuned assembly code is. There <em>is</em> a market for it, but it’s on the edges, in specific scenarios. People ask for it when they need it, but they don’t usually <em>start</em> by saying they need it.</p>\n<p>But for the deeper question, we may have a more hopeful answer. By elevating our focus up from the individual lines of code to the more ambitious focus on the overall problem we’re trying to solve, we may reconnect with the “why” that brought us to creating software and tech in the first place. We can raise our gaze from the steps right in front of us to the horizon a bit further ahead, and think more deeply about the problem we’re trying to solve. Or maybe even about the <em>people</em> who we’re trying to solve that problem for.</p>\n<p>I think people who create code today, if they have access to super-efficient code-creation tools, will make better and more thoughtful products than the financiers who are currently carrying out mass layoffs of the best and most thoughtful people in the tech industry.</p>\n<p>I also know there’s a history of worker-owned factories being safer and more successful than others in their industries, while often making better, longer-lasting products and being better neighbors in their communities. Maybe it’s possible that there’s an internet where agentic engineering tools could enable smart creators to build their own software factories that could work the same way.</p>",
      "image": null,
      "media": [],
      "authors": [
        {
          "name": "Anil Dash",
          "email": "[email protected]",
          "url": null
        }
      ],
      "categories": []
    },
    {
      "id": "https://anildash.com/2026/02/13/launch-it-three-times/",
      "title": "Launch it 3 times",
      "description": null,
      "url": "https://anildash.com/2026/02/13/launch-it-three-times/",
      "published": null,
      "updated": "2026-02-14T00:00:00.000Z",
      "content": "<p>I wanted to share one of the bits of advice that I find myself most frequently giving to teams when they’re working on a product, or founders who are creating a new company: launch it three times.</p>\n<p>What I mean by that is, it often takes more than one time before your idea actually resonates or sticks with the people you’re trying to reach. Sometimes it takes more than twice! And when I say that you might need to launch again, that can mean a lot of different things. It might just be little tweaks to what you originally put out in the world, It might even be less than that — I’ve worked with teams that put out <strong>literally the exact same thing again</strong> and found success, because the issue they had the first time was about timing. That’s increasingly an issue as people are distracted by the deeply disturbing social and political events going on in the world, and so sometimes they just need you to put things in front of them again so that they can reassess what you were trying to say.</p>\n<p>Many relaunches are a little more ambitious, of course. Being a Prince fan, I am of course very partial to strategies that involve changing your name. Re-launching under a new name can be a key strategic move if you think that you’re not effectively reaching your target audience. As I’d written recently, one of the most important goals in getting a message out is that <a href=\"https://www.anildash.com/2025/12/05/talk-about-us-without-us/\">they have to be able to talk about you without you</a>. But if you want people to tell your story even when you’re not around, the most important prerequisite is that they have to remember your name. With Glitch, that was the <em>third</em> name we actually launched the community under, a fact that I was a little bit embarrassed about at the time. But having a memorable name that resonated ended up being almost as much a factor in our early success as our user experience or the deeper technological innovations.</p>\n<p>There are other ways of making changes for a successful re-launch. One thing I often suggest is to <em>subtract</em> things (or just de-emphasize them) and use that reduction in complexity to simplify a story. Or you can try to re-center your narrative on your users or community instead of on your product — the emotion and connection of seeing someone succeed often resonates far more than simply reciting a litany of features or technical capabilities.  Any of these small iterations allow you to take another swing at putting something out into the world without having to make a massive change to the core offering.</p>\n<p>Often times, people are afraid or embarrassed to make changes to things like branding or design because they’re some of the more visible aspects of a product or service. Instead, they retreat to “safe” areas, like tweaking the pricing or copy on a web page that nobody reads. But the vast majority of the time, the single biggest problem you have is that <em>nobody knows you exist, and nobody gives a damn about what you do</em>. Everything else pales in comparison to that. I’ve seen so many teams trying to figure out how to optimize the engagement of the three users on their app, or the five people who come to their site, while forgetting about the other eight billion people who have no idea they exist.</p>\n<h2>What about <em>not</em>  failing?</h2>\n<p>This idea of launching again is really important to keep in mind because so much of the narrative in the startup world is about “fail fast” and “90% of startups fail”. When the conventional narrative from VCs prompts you to pivot right away, or an investor is pressuring everyone to grow, grow, grow at all costs, it can be hard to think about slowing down and taking the time to revisit and refine an idea.</p>\n<p>But if you’re moving with conviction, and you’ve created something meaningful, and if you’re serving a real community that you have a deep understanding of, then it may  be the case that you simply need to try again. If you are <em>not</em> moving with conviction to create something meaningful for a real community, then you don’t need to do it three times, because you don’t even need to do it once.</p>\n<p>So many of the creators and innovators that inspire me most often end up working on their best ideas for years or even decades, iterating and revisiting those ideas with an almost-obsessive passion. Most of the time, they’re doing it because of a combination of their own personal mission and the deep belief that what they’re doing is going to help change people’s lives for the better. For those kinds of people, one of the things I want most is to ensure that they don’t give up before their ideas have had a full and fair chance to succeed, even if that means that sometimes you have to try, try again.</p>",
      "image": null,
      "media": [],
      "authors": [
        {
          "name": "Anil Dash",
          "email": "[email protected]",
          "url": null
        }
      ],
      "categories": []
    },
    {
      "id": "https://anildash.com/2026/02/18/threatening-kids-with-AI/",
      "title": "How did we end up threatening our kids’ lives with AI?",
      "description": null,
      "url": "https://anildash.com/2026/02/18/threatening-kids-with-AI/",
      "published": null,
      "updated": "2026-02-18T00:00:00.000Z",
      "content": "<p>I have to begin by warning you about the content in this piece; while I won’t be dwelling on any specifics, this will necessarily be a broad discussion about some of the most disturbing topics imaginable. I resent that I have to give you that warning, but I’m forced to because of the choices that the Big AI companies have made that affect children. I don’t say this lightly. But this is the point we must reckon with if we are having an honest conversation about contemporary technology.</p>\n<p>Let me get the worst of it out of the way right up front, and then we can move on to understanding how this happened. ChatGPT has repeatedly produced output that <a href=\"https://www.nytimes.com/2025/08/26/technology/chatgpt-openai-suicide.html?unlocked_article_code=1.M1A.S4zx.M-CdIbTK0GGI&smid=url-share\">encouraged</a> and <a href=\"https://www.nytimes.com/2025/11/06/technology/chatgpt-lawsuit-suicides-delusions.html?unlocked_article_code=1.M1A.-92e.rGfKZMgP6nE9&smid=url-share\">incited</a> children to end their own lives. Grok’s AI <a href=\"https://www.cnbc.com/2026/01/05/india-eu-investigate-musks-x-after-grok-created-deepfake-child-porn.html\">generates sexualized imagery of children</a>, which the company makes available commercially to paid subscribers.</p>\n<p>It used to be that encouraging children to self-harm, or producing sexualized imagery of children, were universally agreed upon as being amongst the worst things one could do in society. These were among the rare truly non-partisan, unifying moral agreements that transcended all social and cultural barriers. And now, some of the world’s biggest and most powerful companies, led by a few of the wealthiest and most powerful men who have ever lived, are violating these rules, <em>for profit</em>, and not only is there little public uproar, it seems as if very few have even noticed.</p>\n<p>How did we get here?</p>\n<h2>The ideas behind a crisis</h2>\n<p>A perfect storm of factors have combined to lead us towards the worst case scenario for AI. There is now an entire market of commercial products that attack our children, and to understand why, we need to look at the mindset of the people who are creating those products. Here are some of the key motivations that drove them to this point.</p>\n<h3>1. Everyone feels desperately behind and wants to catch up</h3>\n<p>There’s an old adage from Intel’s founder Andy Grove that people in Silicon Valley used to love to quote: “Only the paranoid survive”. This attitude persists, with leaders absolutely <em>convinced</em> that everything is a zero-sum game, and any perceived success by another company is an existential threat to one’s own future.</p>\n<p>At Google, the company’s researchers had published the <a href=\"https://en.wikipedia.org/wiki/Attention_Is_All_You_Need\">fundamental paper</a> underlying the creation of LLMs in 2017, but hadn’t capitalized on that invention by making a successful consumer product by 2022, when OpenAI released ChatGPT. Within Google leadership (and amongst the big tech tycoons), the fact that OpenAI was able to have a hit product with this technology was seen as a grave failure by Google, despite the fact that even OpenAI’s own leadership hadn’t expected ChatGPT to be a big hit upon launch. A <a href=\"https://www.cnet.com/tech/services-and-software/chatgpt-caused-code-red-at-google-report-says/\">crisis ensued</a> within Google in the months that followed.</p>\n<p>These kinds of industry narratives have more weight than reality in driving decision-making and investment, and the refrain of “move fast and break things” is still burned into people’s heads, so the end result these days is that <em>shipping any product</em> is okay, as long as it helps you catch up to your competitor. Thus, since Grok is seriously behind its competitors in usage, and of course Grok's CEO Elon Musk is always desperate for attention, they have every incentive to ship a product with a catastrophically toxic design — including one that creates abusive imagery.</p>\n<h3>2. Accountability is “woke” and must be crushed</h3>\n<p>Another fundamental article of faith in the last decade amongst tech tycoons (and their fanboys) is that woke culture must be destroyed. They have an amorphous and ever-evolving definition of what “woke” means, but it always includes any measures of accountability. One key example is the trust and safety teams that had been trying to keep all of the major technology platforms from committing the worst harms that their products were capable of producing.</p>\n<p>Here, again, Google provides us with useful context. The company had one of the most mature and experienced AI safety research teams in the world at the time when the first paper on the transformer model (LLMs) was published. Right around the time that paper was published, Google <em>also</em> saw one of its engineers <a href=\"https://en.wikipedia.org/wiki/Google%27s_Ideological_Echo_Chamber\">publish a sexist screed</a> on gender essentialism designed to bait the company into becoming part of the culture war, which it ham-handedly stumbled directly into. Like so much of Silicon Valley, Google’s leadership did not understand that these campaigns are always attempts to game the refs, and they let themselves be played by these bad actors; within a few years, a backlash had built and they began cutting everyone who had warned about risks around the new AI platforms, including some of the <a href=\"https://www.theverge.com/2021/4/13/22370158/google-ai-ethics-timnit-gebru-margaret-mitchell-firing-reputation\">most credible and respected voices</a> in the industry on these issues.</p>\n<p>Eliminating those roles was considered <em>vital</em> because these people were blamed for having “slowed down” the company with their silly concerns about things like people’s lives, or the health of the world’s information ecosystem. A lot of the wealthy execs across the industry were absolutely convinced that the reason Google had ended up behind in AI, despite having invented LLMs, was because they had too many “woke” employees, and those employees were too worried about esoteric concerns like people’s well-being.</p>\n<p>It does not ever enter the conversation that 1. executives are accountable for the failures that happen at a company, 2. Google had a million other failures during these same years (including those <a href=\"https://arstechnica.com/gadgets/2021/08/a-decade-and-a-half-of-instability-the-history-of-google-messaging-apps/\">countless redundant messaging apps</a> they kept launching!) that may have had far more to do with their inability to seize the market opportunity and 3. <em>it may be a good thing</em> that Google didn’t rush to market with a product that tells children to harm themselves, and those workers who ended up being fired may have saved Google from that fate!</p>\n<h3>3. Product managers are veterans of genocidal regimes</h3>\n<p>The third fact that enabled the creation of pernicious AI products is more subtle, but has more wide-ranging implications once we face it. In the tech industry, product managers are often quietly amongst the most influential figures in determining the influence a company has on culture. (At least until all the product managers are replaced by an LLM being run by their CEO.) At their best, product managers are the people who decide exactly what features and functionality go into a product, synthesizing and coordinating between the disciplines of engineering, marketing, sales, support, research, design, and many other specialties. I’m a product person, so I have a lot of empathy for the challenges of the role, and a healthy respect for the power it can often hold.</p>\n<p>But in today’s Silicon Valley, a huge number of the people who act as product managers spent the formative years of their careers in companies like Facebook (now Meta). If those PMs now work at OpenAI, then the moments when they were learning how to practice their craft were spent at a company that <a href=\"https://www.amnesty.org/en/latest/news/2022/09/myanmar-facebooks-systems-promoted-violence-against-rohingya-meta-owes-reparations-new-report/\">made products that directly enabled and accelerated a genocide</a>. That’s not according to me, that’s the opinion of multiple respected international human rights organizations. If you <em>chose</em> to go work at Facebook after the Rohingya genocide had happened, then you were certainly not going to learn from your manager that you should not make products that encourage or incite people to commit violence.</p>\n<p>Even when they’re not enabling the worst things in the world, product managers who spend time in these cultures learn more destructive habits, like strategic line-stepping. This is the habit of repeatedly violating their own policies on things like privacy and security, or allowing users to violate platform policies on things like abuse and harassment. This tactic is followed by then feigning surprise when the behavior is caught. After sending out an obligatory apology, they repeat the behavior again a few more times until everyone either gets so used to it that they stop complaining or the continued bad actions drives off the good people, which makes it seem to the media or outside observers that the problem has gone away. Then, they amend their terms of service to say that the formerly-disallowed behavior is now permissible, so that in the future they can say, “See? It doesn’t violate our policy.”</p>\n<p>Because so many people in the industry now have these kind of credential on their LinkedIn profiles, their peers can’t easily mention many kinds of ethical concerns when designing a product without implicitly condemning their coworkers. This becomes even more fraught when someone might potentially be unknowingly offending one of their leaders. As a result, it becomes a race to the bottom, where the person with the worst ethical standards on the team determines the standards to which everyone designs their work. As a result, if the prevailing sentiment about creating products at a company is that having millions of users just inevitably means killing some of them (“you’ve got to break a few eggs to make an omelet”), there can be risk to contradicting that idea. Pointing out that, in fact, <em>most</em> platforms on the internet do not harm users in these ways and their creators work very hard to ensure that tech products don’t present a risk to their communities, can end up being a career-limiting move.</p>\n<h3>4. Compensation is tied to feature adoption</h3>\n<p>This is a more subtle point, but explains a lot of the incentives and motivations behind so much of what happens with today’s major technology platforms. The introduction or rollout of new capabilities is measured when these companies launch new features, and the success of those rollouts or launches are often tied to the measurements of individual performance for the people who were responsible for those features. These will be measured using metrics like “KPIs” (key performance indicators) or other similar corporate acronyms, all of which basically represent the concept of being rewarded for whether the thing you made was adopted by users in the real world. In the abstract, it makes sense to reward employees based on whether the things they create actually succeed in the market, so that their work is aligned with whatever makes the company succeed.</p>\n<p>In practice, people’s incentives and motivations get incredibly distorted over time by these kinds of gamified systems being used to measure their work, especially as it becomes a larger and larger part of their compensation. If you’ve ever wondered why some intrusive AI feature that you never asked for is jumping in front of your cursor when you’re just trying to do a normal task the same way that you’ve been doing it for years, it’s because someone’s KPI was measuring whether you were going to click on that AI button. Much of the time, the system doesn’t distinguish between “I accidentally clicked on this feature while trying to get rid of it” and “I enthusiastically chose to click on this button”. This is what I mean when I say we need <a href=\"https://www.anildash.com/2025/05/27/internet-of-consent/\">an internet of consent</a>.</p>\n<p>But you see the grim end game of this kind of thinking, and these kinds of reward systems, when kids’ well-being is on the line. Someone’s compensation may well be tied to a metric or measurement of “how many people used the image generation feature?” without regard to whether that feature was being used to generate imagery of children without consent. Getting a user addicted to a product, even to the point where they’re getting positive reinforcement when discussing the most self-destructive behaviors, will show up in a measurement system as increased engagement — exactly the kind of behavior that most compensation systems reward employees for producing.</p>\n<h3>5. Their cronies have made it impossible to regulate them</h3>\n<p>A strange reality of the United States’ sad decline into authoritarianism is that it is presently impossible to create federal regulation to stop the harms that these large AI platforms are causing. Most Americans are not familiar with this level of corruption and crony capitalism, but Trump’s AI Czar David Sacks has an <a href=\"https://www.nytimes.com/2025/11/30/technology/david-sacks-white-house-profits.html?unlocked_article_code=1.NFA.8q0L.ierVRTr9iVbw&smid=url-share\">unbelievably broad number of conflicts of interest</a> from his investments across the AI spectrum; it’s impossible to know how many because nobody in the Trump administration follows even the basic legal requirements around disclosure or disinvestment, and the entire corrupt Republican Party in Congress refuses to do their constitutionally-required duty to hold the executive branch accountable for these failures.</p>\n<p>As a result, at the behest of the most venal power brokers in Silicon Valley, the Trump administration is insisting on trying to stop all AI regulations at the state level, and of course will have the collusion of the captive Supreme Court to assist in this endeavor. Because they regularly have completely unaccountable and unrecorded conversations, the leaders of the Big AI companies (all of whom attended the Inauguration of this President and support the rampant lawbreaking of this administration with rewards like <a href=\"https://www.anildash.com/2025/09/09/how-tim-cook-sold-out-steve-jobs/\">open bribery</a>) know that there will be no constraints on the products that they launch, and no punishments or accountability if those products cause harm.</p>\n<p>All of the pertinent regulatory bodies, from the Federal Trade Commission to the Consumer Financial Protection Bureau have had their competent leadership replaced by Trump cronies as well, meaning that their agendas are captured and they will not be able to protect citizens from these companies, either.</p>\n<p>There will, of course, still be attempts at accountability at the state and local level, and these will wind their way through the courts over time. But the harms will continue in the meantime. And there will be attempts to push back on the international level, both from regulators overseas, and increasingly by governments and consumers outside the United States refusing to use technologies developed in this country. But again, these remedies will take time to mature, and in the meantime, children will still be in harm’s way.</p>\n<h2>What about the kids?</h2>\n<p>It used to be such a trope of political campaigns and social movements to say “what about the children?” that it is almost beyond parody. I personally have mocked the phrase because it’s so often deployed in bad faith, to short-circuit complicated topics and suppress debate. But this is that rare circumstance where things are actually not that complicated. Simply discussing the reality of what these products do should be enough.</p>\n<p>People will say, “but it’s inevitable! These products will just have these problems sometimes!” And that is simply false. There are <em>already</em> products on the market that don’t have these egregious moral failings. More to the point, even if it were true that these products couldn’t exist without killing or harming children — then that’s a reason not to ship them at all.</p>\n<p>If it is, indeed absolutely unavoidable that, for example, ChatGPT has to advocate violence, then let’s simply attach a rule in the code that modifies it to change the object of the violence to be Sam Altman. Or your boss. I suspect that if, suddenly, the chatbot deployed to every laptop at your company had a chance of suggesting that people cause bodily harm to your CEO, people would suddenly figure out a way to fix that bug. But somehow when it makes that suggestion about your 12-year-old, this is an insurmountably complex challenge.</p>\n<p>We can expect things to get worse before they get better. OpenAI has already announced that it is going to be allowing people to generate sexual content on its service for a fee later this year. To their credit, when doing so, they stated <a href=\"https://openai.com/index/combating-online-child-sexual-exploitation-abuse/\">their policy</a> prohibiting the use of the service to generate images that sexualize children. But the service they’re using to ensure compliance, <a href=\"https://www.thorn.org\">Thorn</a>, whose product is meant to help protect against such content, was conspicuously silent about Musk’s recent foray into generating sexualized imagery of children. An organization whose <em>entire purpose</em> is preventing this kind of material, where every public message they have put out is decrying this content, somehow falls mute when the world’s richest man carries out the most blatant launch of this capability ever? If even the watchdogs have lost their voice, how are regular people supposed to feel like they have a chance at fighting back?</p>\n<p>And then, if no one is reining in OpenAI, and they have to keep up with their competitors, and the competition isn’t worried about silly concerns like ethics, and the other platforms are selling child exploitation material, and all of the product mangers are Meta alumni who know that they can just keep gaming the terms of service if they need to, and laws aren’t being enforced, and all the product managers making the product learned to make decisions while they were at Meta… well, will you be surprised?</p>\n<h2>How do we move forward?</h2>\n<p>It should be an industry-stopping scandal that this is the current state of two of the biggest players in the most-hyped, most-funded, most consequential area of the entire business world right now. It should be <em>unfathomable</em> that people are thinking about deploying these technologies in their businesses — in their schools! — or integrating these products into their own platforms. And yet I would bet that the vast majority of people using these products have no idea about these risks or realities of these platforms at all. Even the vast majority of people who <em>work in tech</em> probably are barely aware.</p>\n<p>What’s worse is, the majority of people I’ve talked to in tech, who <em>do</em> know about this have not taken a single action about it. Not one.</p>\n<p>I’ll be following up with an entire list of suggestions about actions we can take, and ways we can push for accountability for the bad actors who are endangering kids every day. In the meantime, reflect for yourself about this reality. Who will you share this information with? How will this change your view of what these companies are? How will this change the way you make decisions about using these products? Now that you know: what will you do?</p>",
      "image": null,
      "media": [],
      "authors": [
        {
          "name": "Anil Dash",
          "email": "[email protected]",
          "url": null
        }
      ],
      "categories": []
    },
    {
      "id": "https://anildash.com/2026/02/23/taking-action-ai-harms/",
      "title": "Taking action against AI harms",
      "description": null,
      "url": "https://anildash.com/2026/02/23/taking-action-ai-harms/",
      "published": null,
      "updated": "2026-02-24T00:00:00.000Z",
      "content": "<p>In my last piece, I talked about <a href=\"https://www.anildash.com/2026/02/18/threatening-kids-with-ai/\">the harms that AI is visiting on children</a> through the irresponsible choices made by the platforms creating those products. While we dove a bit into the incentives and institutional pressures that cause those companies to make such wildly irresponsible decisions, what we haven’t yet reckoned with is how we hold these companies accountable.</p>\n<p>Often, people tell me they feel overwhelmed at the idea of trying to engage with getting laws passed, or fighting a big political campaign to rein in the giant tech companies that are causing so much harm. And grassroots, local organizing can be <a href=\"https://patch.com/new-jersey/newbrunswick/new-brunswick-city-council-kills-proposal-build-ai-data-center-100-jersey\">extraordinarily effective</a> in standing up for the values of your community against the agenda of the Big AI companies.</p>\n<p>But while I think it’s vital that we pursue systemic justice (and it’s the only way to stop many kinds of harm), I do understand the desire for something more immediate and human-scale. So, I wanted to share some direct, personal actions that you can take to respond to the threats that Big AI has made against kids. Each of these tactics have been proven effective by others who have used the same strategies, so you can feel confident when adapting these for your own use.</p>\n<h2>Get your company off of Twitter / X</h2>\n<p>If your company or organization maintains a presence on Twitter (or X, as they have tried to rename themselves), it is important to protect yourself, your coworkers, and also your employer from the risks of being on the platform. Many times, leadership in organizations have an outdated view of the platform that is uninformed about the current level of danger and harm presented by participating on the social network, and an accurate description of the problem can often be effective in driving a decision to make a change.</p>\n<p>Here is some dialogue you can use or modify to catalyze a productive conversation at work:</p>\n<blockquote>\n<p>Hi, [name]. I saw a while ago that Twitter is being investigated in multiple countries around the world for having generated explicit imagery of women and children. The story even said that their CEO reinstated the account of a user who had shared child exploitation pictures on the site, and monetized the account that had shared the pictures.</p>\n</blockquote>\n<blockquote>\n<p>Can you verify that our team is required to be on the service even though there is child abuse imagery on the site? I know that Musk’s account is shown to everyone on Twitter, so I’m concerned we’ll see whatever content he shares or retweets. Should I forward any of the child abuse material that I encounter in the course of carrying out the duties of my role to HR or legal, or both? And what is our reporting process for reporting this kind of material to the authorities, as I haven’t been trained in any procedures around these kinds of sensitive materials?</p>\n</blockquote>\n<p>That should be enough to trigger a useful conversation at your workplace. (You can share <a href=\"https://www.cnbc.com/2026/01/05/india-eu-investigate-musks-x-after-grok-created-deepfake-child-porn.html\">this link</a> if they want a credible, business-minded link to reference.)  If they need more context about the burden on workers, you can also mention the fact that content moderators who have to interact with this kind of content have had <a href=\"https://citizensandtech.org/2024/02/measuring-trauma-among-the-internets-first-responders/\">serious issues with trauma</a>, according to many academic studies. There is also the risk of employees and partners having concerns about nonconsensual imagery being generated from their images if the company posts anything on Twitter that features their faces or bodies. As <a href=\"https://www.liberalcurrents.com/the-new-epstein-island-is-right-in-your-pocket-its-time-to-abandon-elon-musks-paradise-of-abuse/\">some articles have noted</a>, the Grok AI tool that Twitter uses is even designed to permit the creation of imagery that makes its targets look like the victims of violence, including targets who are underage.</p>\n<p>As a result, your emails to your manager should CC your HR team, and should make explicit that you don’t wish to be liable for the risks the company is taking on by remaining on the platform. Talk to your coworkers, and share this information with them, and see if they will join you in the conversation. If you’re able to, it’s not a bad idea to look up a local labor lawyer and see if they’re willing to talk to you for free in case you need someone to CC on an email while discussing these topics. Make your employers say to you, explicitly, that the decision to remain on the platform is theirs, that they’re aware of the risks, that they indemnify you of those risks. You should ask that they take on accountability for burdens like legal costs or even psychological counseling for the real and severe impacts that come from enduring the harms that crimes like those enabled by Twitter can cause.</p>\n<p>All of these strategies can also apply to products that integrate with Twitter’s service at a technical level, for sharing content or posting tweets, or for technical platforms that try to use Grok’s AI features. If you are a product manager, or know a product manager, that is considering connecting to a platform that makes child abuse material, you have failed at the most fundamental tenet of your craft. If you work at a company that has incorporated these technologies, file a bug mentioning the issues listed above, and again, CC your legal team and mention these concerns. “Our product might plug in to a platform that generates CSAM” is a show-stopping bug for any product, and any organization that doesn’t understand that is fundamentally broken.</p>\n<p>Once you catalyze this conversation, you can begin mapping out a broader communication strategy that takes advantage of the many excellent options for replacing this legacy social media channel.</p>\n<h2>Stop your school from using ChatGPT</h2>\n<p>An increasing number of schools are falling prey to the “AI is inevitable!” rhetoric and desperately chasing the idea of putting AI tools into kids’ hands. Worse, a lot of schools think that the only kinds of technology that exist are the kinds made by giant tech companies. And because many of the adults making the decisions about AI are not necessarily experts in every detail of every technology, the decision about <em>which</em> AI platforms to use often comes down to which ones people have heard about the most. For most people, that means ChatGPT, since it’s gotten the most free hype from media.</p>\n<p>As a result, many schools and educational institutions are considering the deployment of a platform that has told multiple children to self-harm, including several who have taken their own lives. This is something that you can take action about at your kid’s school.</p>\n<p>First, you can begin simply by gathering resources. There are <a href=\"https://www.nytimes.com/2025/08/26/technology/chatgpt-openai-suicide.html?unlocked_article_code=1.M1A.S4zx.M-CdIbTK0GGI&smid=url-share\">many</a> <a href=\"https://www.nytimes.com/2025/11/06/technology/chatgpt-lawsuit-suicides-delusions.html?unlocked_article_code=1.M1A.-92e.rGfKZMgP6nE9&smid=url-share\">credible</a> stories which you can share to illustrate the risk to administrators, and to other parents. Typically, apologists for this product will raise a few objections, which you can respond to in a thoughtful way:</p>\n<ul>\n<li>“Maybe those kids were already depressed?” Several of the children who have been impacted by these tools were introduced to them as homework assistants, and only evolved into using them as emotional crutches at the prompting of the responses from the tool. Also: your school has children in it who are depressed, why are you willing to endanger them?</li>\n<li>“Doesn’t every tool cause this?” No, this is extreme and unusual behavior. Your email software or word processor have never incited your children to commit violence against anyone, let alone themselves. Not even other LLMs prompt this behavior. And again, even if this <em>did</em> happen with every tool in this category, why would that make it okay? If every pill in a bottle is poisonous, does that make it okay to give the bottle of pills to our kids?</li>\n<li>“They’ll be missing out on the future.” Ask the parents of the children impacted in these stories about their kids’ futures.</li>\n<li>“We should just roll it out as a test.” Who will pay for monitoring all usage by all students in the test?</li>\n<li>“It’s a parent’s responsibility.” Forcing a parent to invest hours of time into learning a cutting-edge technology that is being constantly updated is a full-time job. If you are going to burden them with that level of responsibility, how will you provide resources to support them? What is your plan to communicate this responsibility to them and get their consent so they can agree to take on this responsibility?</li>\n<li>“The company said it’s working on the problem.” They can change their technology so that it only incites violence against their executives, or publish a notice when it has gone a full year without costing any children their lives. At that point, they may be considered for re-evaluation.</li>\n</ul>\n<p>With these responses in hand, you can provide some basic facts about the risks of the specific tool or platform that is being recommended, and help present a cogent argument against its deployment. It’s important to frame the argument in terms of child safety — the conventional arguments against LLMs, grounded in concerns like environmental impact, labor impact, intellectual property rights, or other similar issues tend to be dismissed out of hand due to effective propagandizing by Big AI advocates.</p>\n<p>If, instead, you ignore the debate about LLMs and focus on real-world safety concerns based on actual threats that have happened to actual children, you should be able to have a very direct impact. And these are messages that others will generally pick up and amplify as well, whether they are fellow parents, or local media.</p>\n<p>From here, you can begin a conversation that re-evaluates the <em>goals</em> of the initiative from first principles. \"Everyone else is doing it\" is not a valid way of advocating for technology, and even if they feel that LLMs are a technology that students should become familiar with, they should begin by engaging with the many resources on the topic created by academics who are not tied to the Big AI companies.</p>\n<h2>You have power</h2>\n<p>The key reason I wanted to capture some specific actions that people can take around responding to the harms that Big AI poses towards children is to remind us all that the power to take action lies in everyone’s hands. It’s not an abstract concept, or a theoretical thing that we have to wait for someone else to do.</p>\n<p>We are in an outrageous place, where the actions of some of the biggest and most influential technology companies in the world are so beyond the pale that we can’t even discuss the things that they are doing in polite company. The actions that take place on these platforms used to mean that simply <em>accessing</em> these kinds of sites during one’s workday would be a firing offense. Now we have employers and schools trying to <em>require</em> people to use these things.</p>\n<p>The pushback has to come at every level. Do talk to your elected officials. Do organize with others at your local level. If you work in tech, make sure to resist every attempt at normalizing these platforms, or incorporating their technologies into your own.</p>\n<p>Finally, use your voice and your courage, and trust in your sense of basic decency. It might only take you a few minutes to draft up an email and send it to the right people. If you need help figuring out who to send it to, or how to phrase it, let me know and I’ll help! But these things that feel small can be quite enormous when they all add up together. And that’s exactly what our kids deserve.</p>",
      "image": null,
      "media": [],
      "authors": [
        {
          "name": "Anil Dash",
          "email": "[email protected]",
          "url": null
        }
      ],
      "categories": []
    },
    {
      "id": "https://anildash.com/2026/02/25/talking-through-the-tech-reckoning/",
      "title": "Talking through the tech reckoning",
      "description": null,
      "url": "https://anildash.com/2026/02/25/talking-through-the-tech-reckoning/",
      "published": null,
      "updated": "2026-02-26T00:00:00.000Z",
      "content": "<p>Many of the topics that we’ve all been discussing about technology these days seem to matter so much more, and the stakes have never been higher. So, I’ve been trying to engage with more conversations out in the world, in hopes of communicating some of the ideas that might not get shared from more traditional voices in technology. These recent conversations have been pretty well received, and I hope you’ll take a minute to give them a listen when you have a moment.</p>\n<h2>Galaxy Brain</h2>\n<p>First, it was nice to sit down with Charlie Warzel, as he invited me to speak with him on <a href=\"https://www.theatlantic.com/podcasts/2026/02/the-ai-panic-cycle-and-whats-actually-different-now/686077/?gift=apxH5R6bxFb7BY7F-EpWnOKasXuqQ1RVEcCy4QH0pq8\">Galaxy Brain</a> (full transcript at that link), his excellent podcast for The Atlantic. The initial topic was some of the alarmist hype being raised around AI within the tech industry right now, but we had a much more far-ranging conversation, and I was particularly glad that I got to articulate my (somewhat nuanced) take on the rhetoric that many of the Big AI companies push about their LLM products being “inevitable”.</p>\n<p>In short, while I think it’s important to fight their narrative that treats big commercial AI products as inevitable, I don’t think it will be effective or successful to do so by trying to stop regular people from using LLMs at all. Instead, I think we have to pursue a third option, which is a multiplicity of small, independent, accountable and purpose-built LLMs. By analogy, the answer to unhealthy fast food is good, home-cooked meals and neighborhood restaurants all using local ingredients.</p>\n<p>The full conversation is almost 45 minutes, but I’ve cued up the section on inevitability here:</p>\n<iframe src=\"https://www.youtube-nocookie.com/embed/kNdjLf4f0uU?t=2053 s\" title=\"YouTube video player\" frameborder=\"0\" allow=\"accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture; web-share\" allowfullscreen class=\"video\"></iframe>\n<h2>Revolution Social</h2>\n<p>Next up, I got to reconnect with Rabble, whom I’ve known since the earliest days of social media, for his podcast <a href=\"https://revolution.social/episodes/silicon-valley-has-lost-its-moral-compass-with-ani/\">Revolution.Social</a>. The framing for this episode was “Silicon Valley has lost its moral compass” (did it have one? Ayyyyy) but this was another chance to have a wide-ranging conversation, and I was particularly glad to get into the reckoning that I think is coming around intellectual property in the AI era. Put simply, I think that the current practice of wholesale appropriation of content from creators without consent or compensation by the AI companies is simply untenable. If nothing else, as normal companies start using data and content, they’re going to <em>want</em> to pay for it just so they don’t get sued and so that the quality of the content they’re using is of a known reliability. That will start to change things from he current Wild West “steal all the stuff and sort it out later” mentality.\n
It will not surprise you to find out that I illustrated this point by using examples that included… Prince and Taylor Swift. But there’s lots of other good stuff in the conversation too! Let me know what you think.</p>\n<iframe src=\"https://www.youtube-nocookie.com/embed/NhBykJqOqAc?t=1560s\" title=\"YouTube video player\" frameborder=\"0\" allow=\"accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture; web-share\" allowfullscreen class=\"video\"></iframe>\n<h2>What’s next?</h2>\n<p>As I’ve been writing more here on my site again, many of these topics seem to have resonated, and there have been some more opportunities to guest on podcasts, or invitations to speak at various events. For the last several years, I had largely declined all such invitations, both out of some fatigue over where the industry was at, and also because I didn’t think I had anything in particular to say.</p>\n<p>In all honesty, these days it feels like the stakes are too high, and there are too few people who are addressing some of these issues, so I changed my mind and started to re-engage. I may well be an imperfect messenger, and I would eagerly pass the microphone to others who want to use their voices to talk about how tech can be more accountable and more humanist (if that’s you, let me know!). But if you think there’s value to these kinds of things, let me know, or if you think there are places where I should be getting the message out, do let them know, and I’ll try to do my best to dedicate as much time and energy as I can to doing so. And, as always, if there’s something I could be doing better in communicating in these kinds of platforms, your critique and comments are always welcome!</p>",
      "image": null,
      "media": [],
      "authors": [
        {
          "name": "Anil Dash",
          "email": "[email protected]",
          "url": null
        }
      ],
      "categories": []
    },
    {
      "id": "https://anildash.com/2026/02/27/a-cookie-for-dario/",
      "title": "A Cookie for Dario? — Anthropic and selling death",
      "description": null,
      "url": "https://anildash.com/2026/02/27/a-cookie-for-dario/",
      "published": null,
      "updated": "2026-02-28T00:00:00.000Z",
      "content": "<p>A big tech headline this week is Anthropic (makers of Claude, widely regarded as one of the best LLM platforms) resisting Secretary of Defense Pete Hegseth’s calls to modify their platform in order to enable it to support <a href=\"https://www.politico.com/news/2025/11/30/war-crimes-hegseth-venezuela-strikes-00671160\">his commission</a> of <a href=\"https://www.newyorker.com/news/q-and-a/the-legal-consequences-of-pete-hegseths-kill-them-all-order\">war crimes</a>. As has become clear this week, Anthropic CEO Dario Amodei has <a href=\"https://www.nytimes.com/2026/02/26/technology/anthropic-pentagon-talks-ai.html?unlocked_article_code=1.PVA.ao-a.26AX1P-gLWlH&smid=url-share\">declined to do so</a>. The administration couches the request as an attempt to use the technology for “lawful purposes”, but given that they’ve also described their recent crimes as legal, this is obviously not a description that can be trusted.</p>\n<p>Many people have, understandably, rushed to praise Dario and Anthropic’s leadership for this decision. I’m not so sure we should be handing out a cookie just because someone is saying they’re not going to let their tech be used to cause extrajudicial deaths.</p>\n<p>To be clear: I am glad that Dario, and presumably the entire Anthropic board of directors, have made this choice. However, I don’t think we need to be overly effusive in our praise. The bar cannot be set so impossibly low that we celebrate merely refusing to directly, intentionally enable war crimes like the repeated bombing of unknown targets in international waters, in direct violation of both U.S. and international law. This is, in fact, basic common sense, and it’s shocking and inexcusable that any other technology platform <em>would</em> enable a sitting official of any government to knowingly commit such crimes.</p>\n<p>We have to hold the line on normalizing this stuff, and remind people where reality still lives. This means we can recognize it as a positive move when companies do the reasonable thing, but also know that <em>this is what we should expect</em>. It’s also good to note that companies may have <em>many</em> reasons that they don’t want to sell to the Pentagon in addition to the obvious moral qualms about enabling an unqualified TV host who’s <a href=\"https://www.newyorker.com/news/news-desk/pete-hegseths-secret-history\">drunkenly stumbling</a> his way through playacting as Secretary of Defense (which they insist on dressing up as the “Department of War” — <a href=\"https://www.wired.com/story/department-of-defense-department-of-war/\">another lie</a>).</p>\n<h2>Selling to the Pentagon sucks</h2>\n<p>Being on <em>any</em> federal procurement schedule as a technology vendor is a tedious nightmare. There’s endless paperwork and process, all falling squarely into the types of procedures that a fast-moving technology startup is likely to be particularly bad at completing, with very few staff members having had prior familiarity handling such challenges. Right now, Anthropic handles most of the worst parts of these issues through partners like Amazon and Palantir. Addressing more of these unique and tedious needs for a demanding customer like the Pentagon themselves would almost certainly require blowing up the product roadmap or hiring focus within Anthropic for months or more, potentially delaying the release of cool and interesting features in service of boring (or just plain evil) capabilities that would be of little interest to 99.9% of normal users. Worse, if they have to <em>build</em> these features, it could exhaust or antagonize a significant percentage of the very expensive, very finicky employees of the company.</p>\n<p>This is a key part of the calculus for Anthropic. A big part of their entire brand within the tech industry, and a huge part of why they’re appreciated by coders (in addition to the capabilities of their technology), is that they’re the “we don’t totally suck” LLM company. Think of them as “woke-light”. Within tech, as there have been <a href=\"https://www.anildash.com/2026/01/06/500k-tech-workers-laid-off/\">massive waves of rolling layoffs</a> over the last few years, people have felt terrified and unsettled about their future job prospects, even at the biggest tech companies. The only opportunities that feel relatively stable are on big AI teams, and most people of conscience don’t want to work for the ones that <a href=\"https://www.anildash.com/2026/02/18/threatening-kids-with-ai/\">threaten kids’ lives or well-being</a>. That leaves Anthropic alone amongst the big names, other than maybe Google. And Google has <a href=\"https://layoffs.fyi\">laid off people <em>at least 17 times</em></a> in the last three years alone.</p>\n<p>So, if you’re Dario, and you want to keep your employees happy, and maintain your brand as the AI company that doesn’t suck, and you don’t want to blow up your roadmap, and you don’t want to have to hire a bunch of pricey procurement consultants, and you can stay focused on your core enterprise market, <em>and</em> you can take the right moral stand? It’s a pretty straightforward decision. It’s almost, I would suggest, an easy decision.</p>\n<h2>How did we get here?</h2>\n<p>We’ve only allowed ourselves to lower the bar this far because so many of the most powerful voices in Silicon Valley have so completely embraced the authoritarian administration currently in power in the United States. Facebook’s role in <a href=\"https://www.amnesty.org/en/latest/news/2022/09/myanmar-facebooks-systems-promoted-violence-against-rohingya-meta-owes-reparations-new-report/\">enabling the Rohingya genocide</a> truly served as a tipping point in the contemporary normalization of major tech companies enabling crimes against humanity that would have been unthinkable just a few years prior; we can’t picture a world where MySpace helped accelerate the Darfur genocide, because the Silicon Valley tech companies we know about today didn’t yet aspire to that level of political and social control. But there are deeper precedents: IBM provided technology that helped enable the horrors of <a href=\"https://en.wikipedia.org/wiki/IBM_and_World_War_II\">the holocaust in Germany</a> in the 1940s, and that served as the template for their work implementing <a href=\"https://www.eff.org/deeplinks/2015/02/eff-files-amicus-brief-case-seeks-hold-ibm-responsible-facilitating-apartheid\">apartheid in South Africa</a> in the 1970s. IBM actually <em>bid</em> for the contract to build these products for the South African government. And the systems IBM built were still in place when Elon Musk, Peter Thiel, David Sacks and a number of other Silicon Valley tycoons all lived there during their formative years. Later, as they became the vaunted “PayPal Mafia”, today’s generation of Silicon Valley product managers were taught to look up to them, so it’s no surprise that their acolytes have helped create companies that enable mass persecution and surveillance. But it’s also why one of the first big displays of worker power in tech was when many across the industry <a href=\"https://www.vox.com/recode/2019/10/9/20906605/github-ice-contract-immigration-ice-dan-friedman\">stood up against contracts with ICE</a>. That moment was also one of the catalyzing events that drove the tech tycoons into <a href=\"https://www.anildash.com/2023/07/07/vc-qanon/\">their group chats</a> where they collectively decided that they needed to bring their workers to heel.</p>\n<p>And they’ve escalated since then. Now, the richest man in the world, who is CEO of a few of the biggest tech companies, including one of the most influential social networks — and a major defense vendor to the United States government — has been <a href=\"https://www.bbc.com/news/articles/c5ydddy3qzgo\">openly inciting</a> <a href=\"https://caliber.az/en/post/elon-musk-warns-america-on-brink-of-second-civil-war\">civil war</a> <a href=\"https://www.nbcnews.com/tech/internet/elon-musk-predicting-civil-war-europe-nearly-year-rcna165469\"><em>for years</em></a> on the basis of his racist conspiracy theories. The other tech tycoons, who look to him as a role model, think they’re being reasonable by comparison in the fact that they’re only enabling mass violence indirectly. That’s shifted the public conversation into such an extreme direction that we think it’s a <em>debate</em> as to whether or not companies should be party to crimes against humanity, or whether they should automate war crimes. No, they shouldn’t. This isn’t hard.</p>\n<p>We don’t have to set the bar this low. We have to remind each other that this isn’t <em>normal</em> for the world, and doesn’t have to be normal for tech. We have to keep repeating the truth about where things stand, because too many people have taken this twisted narrative and accepted it as being real. The majority of tech’s biggest leaders are acting and speaking far beyond the boundaries of decency or basic humanity, and it’s time to stop coddling their behavior or acting as if it’s tolerable.\n
In the meantime, yes, we can note when one has the temerity to finally, finally do the right thing. And then? Let’s get back to work.</p>",
      "image": null,
      "media": [],
      "authors": [
        {
          "name": "Anil Dash",
          "email": "[email protected]",
          "url": null
        }
      ],
      "categories": []
    },
    {
      "id": "https://anildash.com/2026/02/28/apple-video-podcast-power/",
      "title": "Why Apple’s move to video could endanger podcasting's greatest power",
      "description": null,
      "url": "https://anildash.com/2026/02/28/apple-video-podcast-power/",
      "published": null,
      "updated": "2026-02-28T00:00:00.000Z",
      "content": "<p>TL;DR:</p>\n<ul>\n<li>Apple is adding support for video podcasts to their podcast app</li>\n<li>Podcasts are built on an open standard, which is why they aren’t controlled by a bad algorithm and don’t have ads that spy on you</li>\n<li>Apple’s new system for video podcasts breaks with the old podcast standard, and forces creators to host their video clips with a few selected companies</li>\n<li>The stakes are even higher because all the indie video infrastructure companies have been bought by private equity, while Trump’s goons go after TV and consolidate the big studios</li>\n<li>If Apple doesn’t open this up, it could lead to podcasts getting enshittified like all the other media</li>\n</ul>\n<h2>Podcasts are a radical gift</h2>\n<p>As I noted back in 2024, the common phrase “wherever you get your podcasts” masks a subtle point, which is that podcasts are built on an open technology — a design which has radical implications on today’s internet. This is the reason that the podcasts most people consume aren’t skewed by creators chasing an algorithm that dictates what content they should create, aren’t full of surveillance-based advertising, and aren’t locked down to one app or platform that traps both creators and their audience within the walled garden of a single giant tech company.</p>\n<p>Many of those merits of the contemporary podcast ecosystem are possible because of choices Apple made almost two decades ago when they embraced open standards in iTunes when adding podcasting features. Their outsized market influence (the term “podcast” itself came from the name iPod) pushed everyone else in the ecosystem to follow their lead, and as a result, we have a major media format that isn’t as poisoned, in some ways, as the rest of social media or even mainstream media.</p>\n<p>Sure, there are individual podcast creators one might object to, but notice how you don’t see bad actors like FCC chairman Brendan Carr illegally throwing his weight around to try to censor and persecute podcasters in the same way that he’s been silencing television broadcasters, and you don’t see MAGA legislators trying to game the refs about the algorithm the way they have with Facebook and Twitter. Even the Elon Musks of the world <em>can’t</em> just buy up the whole world of podcasting like he was able to with Twitter, because the ecosystem is decentralized and not controlled by any one player. This is how the Internet was supposed to work. As early Internet advocates were fond of saying, the architecture of the Internet was designed to see censorship as damage, and route around it.</p>\n<h2>The move to video</h2>\n<p>All of this is at much higher risk now due to the technical decisions Apple has made with its <a href=\"https://www.apple.com/newsroom/2026/02/apple-introduces-a-new-video-podcast-experience-on-apple-podcasts/\">move to support video podcasts</a> in its latest software versions that are about to launch. The motivations for their move are obvious: in recent years, many podcasters have moved to embrace new platforms to increase their distribution, reach, engagement and sponsorship dollars, and that has driven them to add video, which has meant moving to YouTube, and more recently, platforms like Netflix. That is also typically accompanied by putting out promotional clips of the video portion of the podcast on platforms like TikTok and Instagram. Combined with Spotify’s acquisition of multiple studios in order to produce proprietary shows that are not podcasts, but exclusive content locked into their apps, and Apple has faced a significant number of threats to their once-dominant position in the space.</p>\n<p>So it was inevitable that Apple would add video support to their podcasting apps. And it makes sense for Apple to update the technical underpinnings; the assumptions that were made when designing podcasts over two decades ago aren’t really appropriate for many contemporary uses.  For example, back then, by default an entire podcast episode would be downloaded to your iPod for convenient listening on the go, just like songs in your music library. But downloading a giant 4K video clip of an hour-long podcast show that you might not even watch, just in case you might want to see it, would be a huge waste of resources and bandwidth. Modern users are used to streaming everything. Thus, Apple updated their apps to support just grabbing snippets of video as they’re needed, and to their credit, Apple is embracing an open video format when doing so, instead of some proprietary system that requires podcasters to pay a fee or get permission.</p>\n<p>The problem, though, is that Apple is only allowing these new video streams to be served by <a href=\"https://podcasters.apple.com/partner-search\">a small number of pre-approved commercial providers</a> that they’ve hand-selected. In the podcasting world, there are no gatekeepers; if I want to start a podcast today, I can publish a podcast feed here on <code>anildash.com</code> and put up some MP3s with my episodes, and anyone anywhere in the world can subscribe to that podcast, I don’t have to ask anyone’s permission, tell anyone about it, or agree to anyone’s terms of service.</p>\n<p>If I want to publish a <em>video</em> podcast to Apple’s new system, though, I can’t just put up a video file on my site and tell people to subscribe to my podcast. I have to sign up for one of the approved partner services, agree to their terms of service, pay their monthly fee, watch them get acquired by Facebook, wait for the stupid corporate battle between Facebook and Apple, endure the service being enshittified, have them put their thumb on the scale about which content they want to promote, deal with my subscribers being spied on when they watch my show, see Brendan Carr make up a pretense to attack the platform I’m on, watch the service use my show to cross-promote violent attacks on vulnerable people, and the entire rest of <a href=\"https://www.anildash.com/2022/02/09/the-stupid-tech-content-culture-cycle/\">that broken tech/content culture cycle</a>.</p>\n<p>We <em>don’t have to do this</em>, Apple!</p>\n<h2>How this plays out</h2>\n<p>What will happen, by default, if Apple doesn’t change course and add support for open video hosting for podcasts is a land grab for control of the infrastructure of the new, closed video podcast technology platform. Some of the bidders may be players that want to own podcasting (Spotify, Netflix, maybe legacy media companies like Disney and Paramount), or a roll-up from a cloud provider like AWS or Google Cloud. Either way, the services will get way more expensive for creators, and far more conservative about what content they allow, while being far more consumer-hostile in terms of privacy and monetization. We’ve seen this play out already — video shows on YouTube give advertisers massive amounts of data about viewers, while podcasts can be delivered to an audience while almost totally preserving their privacy, if a creator wants to help them preserve their anonymity. The reason you see podcasters always talking about “use our promo code” in their sponsor reads is because <em>advertisers can’t track you</em> going from their show to their website.</p>\n<p>This will also start to impact content. You <em>don’t</em> hear podcasters saying “unalive” or censoring normal words because there is no algorithm that skews the distribution of their content. The promotional graphics for their shows are often downright boring, and don’t feature the hosts making weird faces like on YouTube thumbnails, because they haven’t been optimized to within an inch of their lives in hopes of getting 12-year-olds to click on them instead of Mr. Beast — because they’re not trying to chase algorithmic amplification. The closest thing that podcasters have to those kinds of games is when they ask you to rate them in Apple’s Podcasts app, because <em>that</em> has an algorithm for making recommendations, but even that is mediated by real humans making actual choices.</p>\n<p>But once we’ve got a layer of paid intermediaries distributing video content, and Apple leans more heavily into the visual aspects of their podcast app, incentives are going to start to shift rapidly. Today, other than on laptops, phones and tablets, Apple Podcasts app only exists on their Apple TV hardware, and doesn’t even have a video playback feature. By contrast, a <em>lot</em> of video podcast consumption happens in YouTube’s TV apps in the living room. Apple Podcasts will soon have to be on every set top device like Roku sticks and Amazon Fire TVs and Google’s Chromecasts, as well as on smart TVs like Samsungs and LGs, with a robust video playback feature that can compete with YouTube’s own capabilities. Once that’s happened — which will take at least a year, if not multiple years — creators will immediately begin jockeying for ways to get promoted or amplified within that ecosystem. Even if Apple <em>has</em> allowed independent publishers to make their own video podcast feeds, it’s easy to imagine them treating them as second-class citizens when distributing those podcasts to all of the Apple Podcast users across all of these platforms.</p>\n<p>The stakes for all of this are even higher because nearly all of the independent online platforms for video creation outside of YouTube have been <a href=\"https://youtu.be/bx5bD7F8zvE\">bought up by a single private equity firm</a>. In short: even if you don’t know it, if you’re trying to do video off of YouTube, all of your eggs are in one, very precarious, basket.</p>\n<h2>What to do</h2>\n<p>Apple can mitigate the risks of closing up podcasts by moving as quickly as possible to reassure the entire podcasting ecosystem that they’ll allow creators to use <em>any</em> source for hosting video. Right now, there’s a “fallback” video system where creators can deliver video through the traditional podcast standard, and other podcasting apps will show that video to audiences, but Apple’s apps don’t recognize it. If Apple said they’d support that specification as a second option for those who don’t want to, or can’t, use their video hosting partners, that would go a huge way towards mitigating the ecosystem risk that they’re introducing with this new shift.</p>\n<p>If Apple can engage with a wide swath of creators and understand the concerns that are bubbling up, and articulate that they’re aware of the real, significant risks that can arise from the path that they’re currently on, they still have a chance to course-correct.</p>\n<p>Some of these decisions can seem like arcane technical discussions. It’s easy to roll your eyes when people talk about specifications and formats and the minutiae of what happens behind the scenes when we click on a link. But the history of the Internet has shown us that, sometimes, even some of what seem like the most inconsequential choices end up leading to massive shifts in a larger ecosystem, or even in culture overall.</p>\n<p>A generation ago, a few people at Apple made a choice to embrace an open ecosystem that was in its infancy, and in so doing, they enabled an entire culture of creators to flourish for decades. Podcasting is perhaps the last major media format that is open, free, and not easily able to be captured by authoritarians. The stakes couldn’t be higher. All it takes now is a few decision makers pushing to do the right thing, not just the easy thing, to protect an entire vital medium.</p>",
      "image": null,
      "media": [],
      "authors": [
        {
          "name": "Anil Dash",
          "email": "[email protected]",
          "url": null
        }
      ],
      "categories": []
    },
    {
      "id": "https://anildash.com/2026/03/08/neo-apple-embarassment/",
      "title": "The Neo solves Apple’s embarrassment",
      "description": null,
      "url": "https://anildash.com/2026/03/08/neo-apple-embarassment/",
      "published": null,
      "updated": "2026-03-08T00:00:00.000Z",
      "content": "<p>Last week, Apple released a parade of hardware announcements, and the one that captured the most attention across the industry was the $600 ($500 if you’re in education!) <a href=\"https://amzn.to/46K9mbt\">MacBook Neo</a>, the brightly-colored low-end laptop that they launched to great fanfare. The conventional wisdom is that this product opens up Apple to the low end of the laptop market for the first time, radically changing the dynamics of the entire market, and throwing down the gauntlet to the garbage Windows laptop market, as well as challenging a huge swath of Chromebooks which tend to dominate in the education market. This is incorrect.</p>\n<p>Apple has, in fact, sold a MacBook Air with an M1 chip <a href=\"https://www.macworld.com/article/2986234/walmart-m1-macbook-air-too-good-to-be-true.html\">at Walmart</a> for <em>years</em>, which it has intermittently discounted to $499 at key times like Black Friday and Cyber Monday. The single-core performance of that laptop (meaning, how it works for most normal tasks that people do, like browsing the web or writing email or watching YouTube videos), is very nearly equivalent to the newly-released MacBook Neo.</p>\n<p>But. A laptop with an old design, using a chip that has an old number (the M1 chip came out six years ago!), sold exclusively through a mass-market retailer that is perceived as anything but premium, presents an enormous brand challenge for Apple. It is, to put it simply, <em>embarrassing</em>. Apple can have low-end products in its range. They invest lots of effort in that segment of their product line, as the new iPhone 17e shows, making a new basic entrant to their most recent series of phones. But Apple <em>can’t</em> have old, basic-looking products that people aren’t even able to buy at an Apple Store.</p>\n<p>And that’s what Neo solves. It’s a smart reframing of a product that is nearly the same offering as the old M1 Air: the Neo and that old M1 machine both have 13” screens, both weigh just under 3 pounds, both have 8GB of RAM, both start at 256GB of storage, both have about 16 hours of battery life, are both about 8”x12”, both have 2 USB ports and a headphone jack, and both of course cost almost exactly the same. They did add a new yellow (citrus!) color for the Neo, though.</p>\n<h2>Wake up, Neo</h2>\n<p>What was more striking to me was <a href=\"https://www.youtube.com/watch?v=u3SIKAmPXY4\">Apple’s introductory video</a>, which clearly seems aimed at people who are new to Apple computers, or maybe people who are new to laptop computers entirely. They’re imagining a user base who’s only ever had their smartphones and are buying computers for the first time — which might describe a lot of students. There’s no discussion here of the chamfers of the aluminum, or the pipelines in the GPU cores, and there’s barely even the slightest mention of AI; instead, they describe the basics of what the laptop includes, and even go out of their way to explain how it interoperates with an iPhone.</p>\n<p>There’s also a very clear attempt to distinguish Neo’s branding from the rest of Apple’s design language. The type for the “MacBook Neo” name in the launch video, and the “Hello, Neo” text on the <a href=\"https://www.apple.com/macbook-neo/\">product homepage</a> are a rounded typeface that’s so new that it’s not actually even an actual font that Apple’s using; they’ve rendered it as an image instead of a variation of their usual “<a href=\"https://developer.apple.com/fonts/\">San Francisco</a>” font that Apple uses for everything else in their standard marketing materials. The throwback to 2000s-era design (terminal green, the word “Neo” — are we entering the Matrix?) couldn’t be more different from the “it looks expensive” vibes of something like the <a href=\"https://www.apple.com/apple-watch-hermes/\">Apple Watch Hermès</a> branding.</p>\n<p>In all, it’s pretty impressive to see Apple use its marketing strengths to take a product that is remarkably similar to something that they’ve had for sale for years at the largest retailer in the world, and position it as a brand-new, category-defining new entry into a space. To me, the biggest thing this shows is the blind spot that traditional tech trade press has to the actual buying patterns and lived experience of normal people who shop at Walmart all the time; it would be pretty hard to see Neo as particularly novel if you had walked by a Walmart tech section any time in the last three years.</p>\n<p>At a time when Apple has <a href=\"https://www.anildash.com/2025/09/09/how-tim-cook-sold-out-steve-jobs/\">lost whatever moral compass it had</a>, even though its machines still say “privacy is a human right” when you turn them on, we still want to see positive signs from the company. And a good one is that Apple is engaging with the reality that the current moment calls for products that are far more affordable. It is a good thing indeed when affordable products are presented as being desirable, when most of the product’s enclosure is made of recycled material, and when the lifespan of a product can be expected to be significantly longer than most in its category, instead of simply being treated as disposable. All it took was removing the stigma over the existing affordable laptop that Apple’s been selling for years.</p>",
      "image": null,
      "media": [],
      "authors": [
        {
          "name": "Anil Dash",
          "email": "[email protected]",
          "url": null
        }
      ],
      "categories": []
    },
    {
      "id": "https://anildash.com/2026/03/13/coders-after-ai/",
      "title": "What do coders do after AI?",
      "description": null,
      "url": "https://anildash.com/2026/03/13/coders-after-ai/",
      "published": null,
      "updated": "2026-03-13T00:00:00.000Z",
      "content": "<p>For the New York Times Magazine this Sunday, <a href=\"https://www.nytimes.com/2026/03/12/magazine/ai-coding-programming-jobs-claude-chatgpt.html?unlocked_article_code=1.SlA.gzDD.giRxmN2oQFcF&smid=url-share\">I talked to Clive Thompson</a> about one of the conversations that I'm having most often these days: What happens to coders in this current moment of extraordinarily rapid evolution in AI? LLMs are now quickly advancing to where they can virtually become entire software factories, radically changing both the economics and the power dynamics of software creation — which has so far mostly been used to displace massive numbers of tech workers.</p>\n<p>But it's not so simple as \"bosses are firing coders now that AI can write code\".</p>\n<p>For one thing, though there are certainly a lot of companies where executives are forcing teams to churn out slop code, and using that as an excuse to carry out mass layoffs, there are plenty of companies where \"AI\" is just a buzzword being used as a pretense for layoffs that owners have wanted to do anyway. And more importantly, there are a growing number of coders who are having a very <em>different</em> experience with the tools than those bosses may have expected — and a very different outcome than the Big AI labs may have intended. As I said in the story:</p>\n<blockquote>\n<p>“The reason that tech generally — and coders in particular — see LLMs differently than everyone else is that in the creative disciplines, LLMs take away the most soulful human parts of the work and leave the drudgery to you,” Dash says. “And in coding, LLMs take away the drudgery and leave the human, soulful parts to you.”</p>\n</blockquote>\n<p>This is a point that's hard for a lot of my artist friends to understand: how come so many coders don't just hate LLMs for stealing their work the way that most writers and photographers and musicians do? The answer boils down to three things:</p>\n<ul>\n<li>Coders have long had a history of openly sharing code with each other, as part of an open source, collaborative culture that goes back for more than half a century.</li>\n<li>Tools for writing and creating code have almost always offered a certain degree of automation and reuse of work, so generating code doesn't feel like as radical a departure from past practices.</li>\n<li>Software development is one of the fields with the least-advanced cultures around labor, as workers have almost no history of organizing, and many coders tend to side much more with management as they've been conditioned to think of themselves as \"future founders\" rather than being in solidarity with other workers.</li>\n</ul>\n<p>What this means is, attitudes about automation and worker displacement in tech are radically different than they would be in something like the auto industry, and in many cases, I've found that being part of a coder workforce has meant witnessing a level of literacy about past labor movements that is shockingly low, even though their technical knowledge is obviously extremely high.</p>\n<h2>Coders, in their heads and hearts</h2>\n<p>To be somewhat reductive about it, there are two main cohorts of coders. A larger, less vocal, group who see coding as a stable, well-paying career that they got into in order to support themselves and their families, and to partake in the upward economic mobility that the tech sector has represented for the last few decades. Then there is the smaller, more visible, group who have seen coding as an avocation, which they were drawn to as a form of creative expression and problem-solving just as much as a career opportunity. They certainly haven't been reluctant to capitalize on the huge economic potential of working in tech — this is the group that most startup founders come from — but coding isn't simply something they do from 9 to 5 and then put away at the end of the day. For those of us in this group (yeah... I'm one of these folks), we usually started coding when we were kids, and we have usually kept doing it on nights and weekends ever since, even if it's not even part of our jobs anymore.</p>\n<p>Both cohorts of coders are in for a hard time thanks to the new AI tools, but for completely different reasons.</p>\n<h3>For the 9 to 5</h3>\n<p>The people who started to write software just because it represented a stable job, but who don't see it as part of their own personal identity, are going to be devastated by the ruthlessness with which their bosses will swing the ax. These new LLM-powered software factories can generate orders of magnitude more of the standardized business code that tends to be the bread-and-butter work for these journeyman coders, and it's not the kind of displacement that can be solved by learning a new programming language on nights and weekends, or getting a new professional certification. Much of the \"working class\" tech industry (speaking of the roles they perform functionally within the system; these are obviously jobs that pay far more than working class salaries today) are seen as ripe targets for deskilling, where lower-paid product roles can delegate coding tasks to coding AI systems, or for being automated by management giving orders to those AI systems.</p>\n<p>One of the hardest parts of reckoning with this change is not just the speed with which it is happening, but the level of cultural change that it reflects. Coders are generally very amenable to learning new skills; it's a necessary part of the work, and the mindset is almost never one of being change-averse. But the level at which the change is happening in this transition is one that gets closer to people's sense of self-worth and identity, rather than to their perceptions of simply having to acquire knowledge or skills. It doesn't help that the change is being catalyzed by some of the most venal and irresponsible leaders in the history of business, brazenly acting without any moral boundaries whatsoever.</p>\n<h3>For the nights and weekends</h3>\n<p>For the coders that see being a coder as part of their identity, the LLM transformation is going to represent an entirely different set of challenges. They may well survive the transition that is coming, but find themselves in an unrecognizable place on the other side of it. The way that these new LLM-based tools work is by turning into virtual software factories that essentially churn out nearly all of the code <em>for</em> you. The actual work of writing the code is abstracted away, with the creator essentially focused more on describing the desired end results, and making sure to test that everything is working correctly. You're more the conductor of the symphony than someone who's holding a violin.</p>\n<p>But there are people who have spent decades honing their craft, committing to memory the most obscure vagaries of this computer processor or that web browser or that one gaming console, all in service of creating code that was particularly elegant or especially high-performing, or just <em>really satisfying</em> to write. There's a real art to it. When you get your code to run just so, you feel a quiet pride in yourself, and a sense of relief that there are still things in the world that work as they should. It's a little box that you can type in where things are fair. It's the same reason so many coders like to bake, or knit, or do woodworking — they're all hobbies where precisely doing the right thing is rewarded with a delightful result.</p>\n<p>And now that's going away. You won't see the code yourself anymore, the robots will write it for you while falling around and clanking. Half the time, the code they write will be garbage, or nonsense. Slop. But it's so cheap to write that the computer can just throw it away and write some more, over and over, until it finally happens to work.  Is it elegant? Who cares? It's cheap. Ten thousand times cheaper than paying you to write it, so we can afford to waste a lot of code along the way.</p>\n<p>Your job changes into <em>describing software</em>. Now, if you're the kind of person who only ever wanted to have the end result, maybe this is a liberation. Sometimes, that's what mattered — we wanted to fast-forward to the end result, elegance be damned. But if you were one of those crafters? The people who wrote idiomatic code that made that programming language sing? There's a real grief here. It's not as serious as when we know a human language is dying out, but it's not entirely dissimilar, either.</p>\n<h2>If ... Then?</h2>\n<p>What do we do about it? This horse is not going back in the barn. The billionaires wouldn't let it, anyway.</p>\n<p>I've come to the personal conclusion that the only way forward is for more of the hackers with soul to seize this moment of flux and use these tools to build. The economics of creating code are changing, and it can't just be the worst billionaires in the world who benefit. The latest count is <em>700,000 people</em> laid off in the last few years in the tech industry. We'll be at a million soon, at the rate things are accelerating. Each new layoff announcement is now in the <em>thousands</em>.</p>\n<p>It's not going to be a panacea for all the jobs lost, and it's not the only solution we're going to need, but one part of the answer can be coders who still give a damn looking out for each other, and building independent efforts without being reliant on the economics — or ethics — of the people who are laying off their colleagues by the hundreds of thousands.</p>\n<p>I've spent my whole career working with communities of coders, building tools for the people who build with code. I don't imagine I'll ever stop doing it. This is the hardest moment that I've ever seen this community go through, and it makes me heartsick to see so many people enduring such stress and anxiety about what's to come. More than anything else, what I hope people can remember is that all of the great things that people love about technology weren't created by the money guys, or the bosses who make HR decisions — they were created by the people who actually build things. That's still an incredible superpower, and it will remain one no matter how much the actual tools of creation continue to change.</p>",
      "image": null,
      "media": [],
      "authors": [
        {
          "name": "Anil Dash",
          "email": "[email protected]",
          "url": null
        }
      ],
      "categories": []
    }
  ]
}
Analyze Another View with RSS.Style