Analysis of https://apenwarr.ca/log/rss.php

Feed fetched in 290 ms.
Content type is text/xml; charset=UTF-8.
Feed is 117,940 characters long.
Feed has an ETag of W/"f0642961f87415adc57a2c79442af169f292429d".
Warning: Feed is missing the Last-Modified HTTP header.
Feed is well-formed XML.
Warning: Feed has no styling.
This is an RSS feed.
Feed title: apenwarr
Warning: Feed is missing a self link.
Warning: Feed is missing an image.
Feed has 6 items.
First item published on 2025-11-20T14:19:14.000Z.
Last item published on 2023-07-11T03:12:47.000Z.
All items have published dates.
Newest item was published on 2025-11-20T14:19:14.000Z.
Home page URL: https://apenwarr.ca/log/
Home page has a feed discovery link in <head>.
Home page has a link to the feed in the <body>.

Formatted XML
<rss version="2.0">
    <channel>
        <title>apenwarr</title>
        <description>apenwarr - NITLog</description>
        <link>https://apenwarr.ca/log/</link>
        <language>en-ca</language>
        <generator>PyNITLog</generator>
        <docs>http://blogs.law.harvard.edu/tech/rss</docs>
        <item>
            <title>Systems design 3: LLMs and the semantic revolution</title>
            <pubDate>Thu, 20 Nov 2025 14:19:14 +0000</pubDate>
            <link>https://apenwarr.ca/log/20251120</link>
            <guid isPermaLink="true">https://apenwarr.ca/log/20251120</guid>
            <description>&lt;p&gt;Long ago in the 1990s when I was in high school, my chemistry+physics
teacher pulled me aside. &quot;Avery, you know how the Internet works, right? I
have a question.&quot;&lt;/p&gt;
&lt;p&gt;I now know the correct response to that was, &quot;Does anyone &lt;em&gt;really&lt;/em&gt; know how
the Internet works?&quot; But as a naive young high schooler I did not have that
level of self-awareness. (Decades later, as a CEO, that&#39;s my answer to
almost everything.)&lt;/p&gt;
&lt;p&gt;Anyway, he asked his question, and it was simple but deep. How do they make
all the computers connect?&lt;/p&gt;
&lt;p&gt;We can&#39;t even get the world to agree on 60 Hz vs 50 Hz, 120V vs 240V, or
which kind of physical power plug to use. Communications equipment uses way
more frequencies, way more voltages, way more plug types. Phone companies
managed to federate with each other, eventually, barely, but the ring tones
were different everywhere, there was pulse dialing and tone dialing, and
some of them &lt;em&gt;still&lt;/em&gt; charge $3/minute for international long distance, and
connections take a long time to establish and humans seem to be involved in
suspiciously many places when things get messy, and every country has a
different long-distance dialing standard and phone number format.&lt;/p&gt;
&lt;p&gt;So Avery, he said, now they&#39;re telling me every computer in the world can
connect to every other computer, in milliseconds, for free, between Canada
and France and China and Russia. And they all use a single standardized
address format, and then you just log in and transfer files and stuff? How?
How did they make the whole world cooperate? And who?&lt;/p&gt;
&lt;p&gt;When he asked that question, it was a formative moment in my life that I&#39;ll
never forget, because as an early member of what would be the first Internet
generation…  I Had Simply Never Thought of That.&lt;/p&gt;
&lt;p&gt;I mean, I had to stop and think for a second. Wait, is protocol
standardization even a hard problem? Of course it is. Humans can&#39;t agree on
anything. We can&#39;t agree on a unit of length or the size of a pint, or which
side of the road to drive on. Humans in two regions of Europe no farther
apart than Thunder Bay and Toronto can&#39;t understand each other&#39;s speech. But
this Internet thing just, kinda, worked.&lt;/p&gt;
&lt;p&gt;&quot;There&#39;s… a layer on top,&quot; I uttered, unsatisfyingly. Nobody had taught me
yet that the OSI stack model existed, let alone that it was at best a weak
explanation of reality.&lt;/p&gt;
&lt;p&gt;&quot;When something doesn&#39;t talk to something else, someone makes an adapter.
Uh, and some of the adapters are just programs rather than physical things.
It&#39;s not like everyone in the world agrees. But as soon as one person makes
an adapter, the two things come together.&quot;&lt;/p&gt;
&lt;p&gt;I don&#39;t think he was impressed with my answer. Why would he be? Surely
nothing so comprehensively connected could be engineered with no central
architecture, by a loosely-knit cult of mostly-volunteers building an
endless series of whimsical half-considered &quot;adapters&quot; in their basements
and cramped university tech labs. Such a creation would be a monstrosity,
just as likely to topple over as to barely function.&lt;/p&gt;
&lt;p&gt;I didn&#39;t try to convince him, because honestly, how could I know? But the
question has dominated my life ever since.&lt;/p&gt;
&lt;p&gt;When things don&#39;t connect, why don&#39;t they connect? When they do, why? How?
…and who?&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Postel&#39;s Law&lt;/strong&gt;&lt;/p&gt;
&lt;p&gt;The closest clue I&#39;ve found is this thing called Postel&#39;s Law, one of the
foundational principles of the Internet. It was best stated by one of the
founders of the Internet, Jon Postel. &quot;Be conservative in what you send, and
liberal in what you accept.&quot;&lt;/p&gt;
&lt;p&gt;What it means to me is, if there&#39;s a standard, do your best to follow it,
when you&#39;re sending. And when you&#39;re receiving, uh, assume the best
intentions of your counterparty and do your best and if that doesn&#39;t work,
guess.&lt;/p&gt;
&lt;p&gt;A rephrasing I use sometimes is, &quot;It takes two to miscommunicate.&quot;
Communication works best and most smoothly if you have a good listener and a
clear speaker, sharing a language and context. But it can still bumble along
successfully if you have a poor speaker with a great listener, or even a
great speaker with a mediocre listener. Sometimes you have to say the same
thing five ways before it gets across (wifi packet retransmits), or ask way
too many clarifying questions, but if one side or the other is diligent
enough, you can almost always make it work.&lt;/p&gt;
&lt;p&gt;This asymmetry is key to all high-level communication. It makes network bugs
much less severe. Without Postel&#39;s Law, triggering a bug in the sender would
break the connection; so would triggering a bug in the receiver. With
Postel&#39;s Law, we acknowledge from the start that there are always bugs and
we have twice as many chances to work around them. Only if you trigger both
sets of bugs at once is the flaw fatal.&lt;/p&gt;
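&lt;p&gt;A toy sketch of that asymmetry in Python, using a made-up date exchange
(the formats and function names here are just for illustration): the sender
is conservative and emits exactly one canonical shape, the receiver is
liberal and tries a few likely shapes before giving up.&lt;/p&gt;
&lt;pre&gt;
# Toy sketch: a made-up &quot;date exchange,&quot; conservative out, liberal in.
from datetime import datetime

def send_date(d):
    # Conservative: always the same canonical shape, nothing clever.
    return d.strftime(&quot;%Y-%m-%dT%H:%M:%SZ&quot;)

def receive_date(s):
    # Liberal: accept the canonical form, plus a few likely mistakes.
    for fmt in (&quot;%Y-%m-%dT%H:%M:%SZ&quot;,   # what we send
                &quot;%Y-%m-%d %H:%M:%S&quot;,    # forgot the T and the Z
                &quot;%Y-%m-%d&quot;,             # date only
                &quot;%d/%m/%Y&quot;):            # someone&#39;s legacy system
        try:
            return datetime.strptime(s.strip(), fmt)
        except ValueError:
            continue
    raise ValueError(&quot;could not make sense of &quot; + repr(s))

# Each side has &quot;bugs&quot; relative to the spec, but the pair still works:
print(receive_date(send_date(datetime(2025, 11, 20))))
print(receive_date(&quot;20/11/2025&quot;))
&lt;/pre&gt;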
&lt;p&gt;…So okay, if you&#39;ve used the Internet, you&#39;ve probably observed that fatal
connection errors are nevertheless pretty common. But that misses how
&lt;em&gt;incredibly much more common&lt;/em&gt; they would be in a non-Postel world. That
world would be the one my physics teacher imagined, where nothing ever works
and it all topples over.&lt;/p&gt;
&lt;p&gt;And we know that&#39;s true because we&#39;ve tried it. Science! Let us digress.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;XML&lt;/strong&gt;&lt;/p&gt;
&lt;p&gt;We had the Internet (&quot;OSI Layer 3&quot;) mostly figured out by the time my era
began in the late 1900s, but higher layers of the stack still had work to
do. It was the early days of the web. We had these newfangled hypertext
(&quot;HTML&quot;) browsers that would connect to a server, download some stuff, and
then try their best to render it.&lt;/p&gt;
&lt;p&gt;Web browsers are and have always been an epic instantiation of Postel&#39;s Law.
From the very beginning, they assumed that the server (content author) had
absolutely no clue what they were doing and did their best to apply some
kind of meaning on top, despite every indication that this was a lost cause.
List items that never end? Sure. Tags you&#39;ve never heard of? Whatever.
Forgot some semicolons in your javascript? I&#39;ll interpolate some. Partially
overlapping italics and bold? Leave it to me. No indication what language or
encoding the page is in? I&#39;ll just guess.&lt;/p&gt;
&lt;p&gt;The evolution of browsers gives us some insight into why Postel&#39;s Law is a
law and not just, you know, Postel&#39;s Advice. The answer is: competition. It
works like this. If your browser interprets someone&#39;s mishmash subjectively
better than another browser, your browser wins.&lt;/p&gt;
&lt;p&gt;I think economists call this an iterated prisoner&#39;s dilemma. Over and over,
people write web pages (defect) and browsers try to render them (defect) and
absolutely nobody actually cares what the HTML standard says (stays loyal).
Because if there&#39;s a popular page that&#39;s wrong and you render it &quot;right&quot; and
it doesn&#39;t work? Straight to jail.&lt;/p&gt;
&lt;p&gt;(By now almost all the evolutionary lines of browsers have been sent to
jail, one by one, and the HTML standard is effectively whatever Chromium and
Safari say it is. Sorry.)&lt;/p&gt;
&lt;p&gt;This law offends engineers to the deepness of their soul. We went through a
period where loyalists would run their pages through &quot;validators&quot; and
proudly add a logo to the bottom of their page saying how valid their HTML
was. Browsers, of course, didn&#39;t care and continued to try their best.&lt;/p&gt;
&lt;p&gt;Another valiant effort was the definition of &quot;quirks mode&quot;: a legacy
rendering mode meant to document, normalize, and push aside all the legacy
wonko interpretations of old web pages. It was paired with a new,
standards-compliant rendering mode that everyone was supposed to agree on,
starting from scratch with an actual written spec and tests this time, and
public shaming if you made a browser that did it wrong. Of course, outside
of browser academia, nobody cares about the public shaming and everyone
cares if your browser can render the popular web sites, so there are still
plenty of quirks outside quirks mode. It&#39;s better and it was well worth the
effort, but it&#39;s not all the way there. It never can be.&lt;/p&gt;
&lt;p&gt;We can be sure it&#39;s not all the way there because there was another exciting
development, HTML Strict (and its fancier twin, XHTML), which was meant to
be the same thing, but with a special feature. Instead of sending browsers
to jail for rendering wrong pages wrong, we&#39;d send page authors to jail for
writing wrong pages!&lt;/p&gt;
&lt;p&gt;To mark your web page as HTML Strict was a vote against the iterated
prisoner&#39;s dilemma and Postel&#39;s Law. No, your vote said. No more. We cannot
accept this madness. We are going to be Correct. I certify this page is
correct. If it is not correct, you must sacrifice me, not all of society. My
honour demands it.&lt;/p&gt;
&lt;p&gt;Anyway, many page authors were thus sacrificed and now nobody uses HTML
Strict. Nobody wants to do tech support for a web page that asks browsers to
crash when parsing it, when you can just… not do that.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Excuse me, the above XML section didn&#39;t have any XML&lt;/strong&gt;&lt;/p&gt;
&lt;p&gt;Yes, I&#39;m getting to that. (And you&#39;re soon going to appreciate that meta
joke about schemas.)&lt;/p&gt;
&lt;p&gt;In parallel with that dead branch of HTML, a bunch of people had realized
that, more generally, HTML-like languages (technically SGML-like languages)
had turned out to be a surprisingly effective way to build interconnected
data systems.&lt;/p&gt;
&lt;p&gt;In retrospect we now know that the reason for HTML&#39;s resilience is Postel&#39;s
Law. It&#39;s simply easier to fudge your way through parsing incorrect
hypertext, than to fudge your way through parsing a Microsoft Word or Excel
file&#39;s hairball of binary OLE streams, which famously even Microsoft at one
point lost the knowledge of how to parse. But, that Postel&#39;s Law connection
wasn&#39;t really understood at the time.&lt;/p&gt;
&lt;p&gt;Instead we had a different hypothesis: &quot;separation of structure and
content.&quot; Syntax and semantics. Writing software to deal with structure is
repetitive overhead, and content is where the money is. Let&#39;s automate away
the structure so you can spend your time on the content: semantics.&lt;/p&gt;
&lt;p&gt;We can standardize the syntax with a single Extensible Markup Language
(XML). Write your content, then &quot;mark it up&quot; by adding structure right in
the doc, just like we did with plaintext human documents. Data, plus
self-describing metadata, all in one place. Never write a parser again!&lt;/p&gt;
&lt;p&gt;Of course, with 20/20 hindsight (or now 2025 hindsight), this is laughable.
Yes, we now have XML parser libraries. If you&#39;ve ever tried to use one, you
will find they indeed produce parse trees automatically… if you&#39;re lucky. If
you&#39;re not lucky, they produce a stream of &quot;tokens&quot; and leave it to you to
figure out how to arrange it in a tree, for reasons involving streaming,
performance, memory efficiency, and so on. Basically, if you use XML you now
have to &lt;em&gt;deeply&lt;/em&gt; care about structure, perhaps more than ever, but you also
have to include some giant external parsing library that, left in its normal
mode, &lt;a href=&quot;https://cheatsheetseries.owasp.org/cheatsheets/XML_External_Entity_Prevention_Cheat_Sheet.html&quot;&gt;might spontaneously start making a lot of uncached HTTP requests that
can also exploit remote code execution vulnerabilities haha
oops&lt;/a&gt;.&lt;/p&gt;
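&lt;p&gt;Here&#39;s a rough sketch of both paths using Python&#39;s built-in ElementTree
and a hypothetical catalog.xml (for untrusted input you&#39;d reach for a
hardened parser such as defusedxml, for exactly the reason in that link):&lt;/p&gt;
&lt;pre&gt;
# Rough sketch, Python standard library only, hypothetical catalog.xml.
import xml.etree.ElementTree as ET

# The lucky path: the library hands you a finished parse tree.
root = ET.parse(&quot;catalog.xml&quot;).getroot()
for item in root.iter(&quot;item&quot;):
    print(item.findtext(&quot;title&quot;))

# The unlucky (streaming) path: you get a stream of events, and it is
# up to you to decide what to keep, what to drop, and what it all means.
for event, elem in ET.iterparse(&quot;catalog.xml&quot;, events=(&quot;start&quot;, &quot;end&quot;)):
    if event == &quot;end&quot; and elem.tag == &quot;item&quot;:
        print(elem.findtext(&quot;title&quot;))
        elem.clear()   # forget finished subtrees to keep memory flat
&lt;/pre&gt;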
&lt;p&gt;If you&#39;ve ever taken a parser class, or even if you&#39;ve just barely tried to
write a parser, you&#39;ll know the truth: the value added by outsourcing
&lt;em&gt;parsing&lt;/em&gt; (or in some cases only tokenization) is not a lot. This is because
almost all the trouble of document processing (or compiling) is the
&lt;em&gt;semantic&lt;/em&gt; layer, the part where you make sense of the parse tree. The part
where you just read a stream of characters into a data structure is the
trivial, well-understood first step.&lt;/p&gt;
&lt;p&gt;Now, semantics is where it gets interesting. XML was all about separating
syntax from semantics. And they did some pretty neat stuff with that
separation, in a computer science sense. XML is neat because it&#39;s such a
regular and strict language that you can completely &lt;em&gt;validate&lt;/em&gt; the syntax
(text and tags) without knowing what any of the tags &lt;em&gt;mean&lt;/em&gt; or which tags
are intended to be valid at all.&lt;/p&gt;
&lt;p&gt;…aha! Did someone say &lt;em&gt;validate?!&lt;/em&gt; Like those old HTML validators we
talked about? Oh yes. Yes! And this time the validation will be completely
strict and baked into every implementation from day 1. And, the language
syntax itself will be so easy and consistent to validate (unlike SGML and
HTML, which are, in all fairness, bananas) that nobody can possibly screw it
up.&lt;/p&gt;
&lt;p&gt;A layer on top of this basic, highly validatable XML was a thing called XML
Schemas. These were documents (the original ones, DTDs, mysteriously not written in XML) that
described which tags were allowed in which places in a certain kind of
document. Not only could you parse and validate the basic XML syntax, you
could also then validate its XML schema as a separate step, to be totally
sure that every tag in the document was allowed where it was used, and
present if it was required. And if not? Well, straight to jail. We all
agreed on this, everyone. Day one. No exceptions. Every document validates.
Straight to jail.&lt;/p&gt;
&lt;p&gt;Anyway XML schema validation became an absolute farce. Just parsing or
understanding, let alone writing, the awful schema file format is an
unpleasant ordeal. To say nothing of complying with the schema, or (heaven
forbid) obtaining a copy of someone&#39;s custom schema and loading it into the
validator at the right time.&lt;/p&gt;
&lt;p&gt;The core XML syntax validation was easy enough to do while parsing.
Unfortunately, in a second violation of Postel&#39;s Law, almost no software
that &lt;em&gt;outputs&lt;/em&gt; XML runs it through a validator before sending. I mean, why
would they, the language is highly regular and easy to generate and thus the
output is already perfect. …Yeah, sure.&lt;/p&gt;
&lt;p&gt;Anyway we all use JSON now.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;JSON&lt;/strong&gt;&lt;/p&gt;
&lt;p&gt;Whoa, wait! I wasn&#39;t done!&lt;/p&gt;
&lt;p&gt;This is the part where I note, for posterity&#39;s sake, that XML became a
decade-long fad in the early 2000s that justified billions of dollars of
software investment. None of XML&#39;s technical promises played out; it is a
stain on the history of the computer industry. But, a lot of legacy software
got un-stuck because of those billions of dollars, and so we did make
progress.&lt;/p&gt;
&lt;p&gt;What was that progress? Interconnection.&lt;/p&gt;
&lt;p&gt;Before the Internet, we kinda didn&#39;t really need to interconnect software
together. I mean, we sort of did, like cut-and-pasting between apps on
Windows or macOS or X11, all of which were surprisingly difficult little
mini-Postel&#39;s Law protocol adventures in their own right and remain quite
useful when they work (&lt;a href=&quot;https://news.ycombinator.com/item?id=31356896&quot;&gt;except &quot;paste formatted text,&quot; wtf are you people
thinking&lt;/a&gt;). What makes
cut-and-paste possible is top-down standards imposed by each operating
system vendor.&lt;/p&gt;
&lt;p&gt;If you want the same kind of thing on the open Internet, ie. the ability to
&quot;copy&quot; information out of one server and &quot;paste&quot; it into another, you need
&lt;em&gt;some&lt;/em&gt; kind of standard. XML was a valiant effort to create one. It didn&#39;t
work, but it was valiant.&lt;/p&gt;
&lt;p&gt;Whereas all that money investment &lt;em&gt;did&lt;/em&gt; work. Companies spent billions of
dollars to update their servers to publish APIs that could serve not just
human-formatted HTML, but also something machine-readable. The great
innovation was not XML per se, it was serving data over HTTP that wasn&#39;t
always HTML. That was a big step, one whose importance didn&#39;t become obvious until afterward.&lt;/p&gt;
&lt;p&gt;The most common clients of HTTP were web browsers, and web browsers only
knew how to parse two things: HTML and javascript. To a first approximation,
valid XML is &quot;valid&quot; (please don&#39;t ask the validator) HTML, so we could do
that at first, and there were some Microsoft extensions. Later, after a few
billions of dollars, true standardized XML parsing arrived in browsers.
Similarly, to a first approximation, valid JSON is valid javascript, which
woo hoo, that&#39;s a story in itself (you could parse it with eval(), tee hee)
but that&#39;s why we got here.&lt;/p&gt;
&lt;p&gt;JSON (minus the rest of javascript) is a vastly simpler language than XML.
It&#39;s easy to consistently parse (&lt;a href=&quot;https://github.com/tailscale/hujson&quot;&gt;other than that pesky trailing
comma&lt;/a&gt;); browsers already did. It
represents only (a subset of) the data types normal programming languages
already have, unlike XML&#39;s weird mishmash of single attributes, multiply
occurring attributes, text content, and CDATA. It&#39;s obviously a tree and
everyone knows how that tree will map into their favourite programming
language. It inherently works with unicode and only unicode. You don&#39;t need
cumbersome and duplicative &quot;closing tags&quot; that double the size of every
node. And best of all, no guilt about skipping that overcomplicated and
impossible-to-get-right schema validator, because, well, nobody liked
schemas anyway so nobody added them to JSON
(&lt;a href=&quot;https://json-schema.org/&quot;&gt;almost&lt;/a&gt;).&lt;/p&gt;
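&lt;p&gt;The whole experience, sketched in Python (json ships in the standard
library; in the browser the equivalent is JSON.parse, the respectable
descendant of eval):&lt;/p&gt;
&lt;pre&gt;
# Sketch of why JSON feels like &quot;no parser at all.&quot;
import json

doc = json.loads(&#39;{&quot;title&quot;: &quot;apenwarr&quot;, &quot;tags&quot;: [&quot;rss&quot;, &quot;log&quot;], &quot;year&quot;: 2025}&#39;)
print(doc[&quot;title&quot;])            # plain dicts and lists, no tree-walking API
print(len(doc[&quot;tags&quot;]))

# The flip side of skipping schemas: missing fields are your problem now.
print(doc.get(&quot;author&quot;, &quot;unknown&quot;))

# And that pesky trailing comma is still a syntax error:
try:
    json.loads(&#39;{&quot;title&quot;: &quot;apenwarr&quot;,}&#39;)
except json.JSONDecodeError as err:
    print(&quot;rejected:&quot;, err.msg)
&lt;/pre&gt;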
&lt;p&gt;Today, if you look at APIs you need to call, you can tell which ones were a
result of the $billions invested in the 2000s, because it&#39;s all XML. And you
can tell which came in the 2010s and later after learning some hard lessons,
because it&#39;s all JSON. But either way, the big achievement is you can call
them all from javascript. That&#39;s pretty good.&lt;/p&gt;
&lt;p&gt;(Google is an interesting exception: they invented and used protobuf during
the same time period because they disliked XML&#39;s inefficiency, they did like
schemas, and they had the automated infrastructure to make schemas actually
work (mostly, after more hard lessons). But it mostly didn&#39;t spread beyond
Google… maybe because it&#39;s hard to do from javascript.)&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Blockchain&lt;/strong&gt;&lt;/p&gt;
&lt;p&gt;The 2010s were another decade of massive multi-billion dollar tech
investment. Once again it was triggered by an overwrought boondoggle
technology, and once again we benefited from systems finally getting updated
that really needed to be updated.&lt;/p&gt;
&lt;p&gt;Let&#39;s leave aside cryptocurrencies (which although used primarily for crime,
at least demonstrably have a functioning use case, ie. crime) and look at
the more general form of the technology.&lt;/p&gt;
&lt;p&gt;Blockchains in general make the promise of a &quot;distributed ledger&quot; which
allows everyone the ability to make claims and then later validate other
people&#39;s claims. The claims that &quot;real&quot; companies invested in were meant to
be about manufacturing, shipping, assembly, purchases, invoices, receipts,
ownership, and so on. What&#39;s the pattern? That&#39;s the stuff of businesses
doing business with other businesses. In other words, data exchange. Data
exchange is exactly what XML didn&#39;t really solve (although progress was made
by virtue of the dollars invested) in the previous decade.&lt;/p&gt;
&lt;p&gt;Blockchain tech was a more spectacular boondoggle than XML for a few
reasons. First, it didn&#39;t even have a purpose you could explain. Why do we
even need a purely distributed system for this? Why can&#39;t we just trust a
third party auditor? Who even wants their entire supply chain (including
number of widgets produced and where each one is right now) to be visible to
the whole world? What is the problem we&#39;re trying to solve with that?&lt;/p&gt;
&lt;p&gt;…and you know there really was no purpose, because after all the huge
 investment to rewrite all that stuff, which was itself valuable work, we
 simply dropped the useless blockchain part and then we were fine. I don&#39;t
 think even the people working on it felt like they needed a real
 distributed ledger. They just needed an &lt;em&gt;updated&lt;/em&gt; ledger and a budget to
 create one. If you make the &quot;ledger&quot; module pluggable in your big fancy
 supply chain system, you can later drop out the useless &quot;distributed&quot;
 ledger and use a regular old ledger. The protocols, the partnerships, the
 databases, the supply chain, and all the rest can stay the same.&lt;/p&gt;
&lt;p&gt;In XML&#39;s defense, at least it was not worth the effort to rip out once the
world came to its senses.&lt;/p&gt;
&lt;p&gt;Another interesting similarity between XML and blockchains was the computer
science appeal. A particular kind of person gets very excited about
&lt;em&gt;validation&lt;/em&gt; and &lt;em&gt;verifiability.&lt;/em&gt; Both times, the whole computer industry
followed those people down into the pits of despair and when we finally
emerged… still no validation, still no verifiability, still didn&#39;t matter.
Just some computers communicating with each other a little better than they
did before.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;LLMs&lt;/strong&gt;&lt;/p&gt;
&lt;p&gt;In the 2020s, our industry fad is LLMs. I&#39;m going to draw some comparisons
here to the last two fads, but there are some big differences too.&lt;/p&gt;
&lt;p&gt;One similarity is the computer science appeal: so much math! Just the
matrix sizes alone are a technological marvel the likes of which we have
never seen. Beautiful. Colossal. Monumental. An inspiration to nerds
everywhere.&lt;/p&gt;
&lt;p&gt;But a big difference is verification and validation. If there is one thing
LLMs absolutely are not, it&#39;s &lt;em&gt;verifiable.&lt;/em&gt; LLMs are the flakiest thing the
computer industry has ever produced! So far. And remember, this is the
industry that brought you HTML rendering.&lt;/p&gt;
&lt;p&gt;LLMs are an almost cartoonishly amplified realization of Postel&#39;s Law. They
write human grammar perfectly, or almost perfectly, or when they&#39;re not
perfect it&#39;s a bug and we train them harder. And, they can receive just
about any kind of gibberish and turn it into a data structure. In other
words, they&#39;re conservative in what they send and liberal in what they
accept.&lt;/p&gt;
&lt;p&gt;LLMs also solve the syntax problem, in the sense that they can figure out
how to transliterate (convert) basically any file syntax into any other.
Modulo flakiness. But if you need a CSV in the form of a limerick or a
quarterly financial report formatted as a mysql dump, sure, no problem, make
it so.&lt;/p&gt;
&lt;p&gt;In theory we already had syntax solved though. XML and JSON did that
already. We were even making progress interconnecting old school company
supply chain stuff the hard way, thanks to our nominally XML- and
blockchain- investment decades. We had to do every interconnection by hand –
by writing an adapter – but we could do it.&lt;/p&gt;
&lt;p&gt;What&#39;s really new is that LLMs address &lt;em&gt;semantics.&lt;/em&gt; Semantics are the
biggest remaining challenge in connecting one system to another. If XML
solved syntax, that was the first 10%. Semantics are the last 90%. When I
want to copy from one database to another, how do I map the fields? When I
want to scrape a series of uncooperative web pages and turn it into a table
of products and prices, how do I turn that HTML into something structured?
(Predictably &lt;a href=&quot;https://microformats.org/&quot;&gt;microformats&lt;/a&gt;, aka schemas, did not
work out.) If I want to query a database (or join a few disparate
databases!) using some language that isn&#39;t SQL, what options do I have?&lt;/p&gt;
&lt;p&gt;LLMs can do it all.&lt;/p&gt;
&lt;p&gt;Listen, we can argue forever about whether LLMs &quot;understand&quot; things, or will
achieve anything we might call intelligence, or will take over the world and
eradicate all humans, or are useful assistants, or just produce lots of text
sludge that will certainly clog up the web and social media, or will also be
able to filter the sludge, or what it means for capitalism that we willingly
invented a machine we pay to produce sludge that we also pay to remove the
sludge.&lt;/p&gt;
&lt;p&gt;But what we can&#39;t argue is that LLMs interconnect things. Anything. To
anything. Whether you like it or not. Whether it&#39;s bug free or not (spoiler:
it&#39;s not). Whether it gets the right answer or not (spoiler: erm…).&lt;/p&gt;
&lt;p&gt;This is the thing we have gone through at least two decades of hype cycles
desperately chasing. (Three, if you count java &quot;write once run anywhere&quot; in
the 1990s.) It&#39;s application-layer interconnection, the holy grail of the
Internet.&lt;/p&gt;
&lt;p&gt;And this time, it actually works! (mostly)&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;The curse of success&lt;/strong&gt;&lt;/p&gt;
&lt;p&gt;LLMs aren&#39;t going away. Really we should coin a term for this use case, call
it &quot;b2b AI&quot; or something. For this use case, LLMs work. And they&#39;re still
getting better and the precision will improve with practice. For example,
imagine asking an LLM to write a data translator in some conventional
programming language, instead of asking it to directly translate a dataset
on its own. We&#39;re still at the beginning.&lt;/p&gt;
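&lt;p&gt;A hypothetical sketch of that idea in Python (call_llm is a stand-in for
whatever model API you happen to use; none of these names are real): instead
of having the model translate each dataset directly, ask it once to write the
translator, review that code like any other code, then run it
deterministically from then on.&lt;/p&gt;
&lt;pre&gt;
# Hypothetical sketch: call_llm() is a placeholder, not a real API.
def call_llm(prompt):
    raise NotImplementedError(&quot;wire up your model of choice here&quot;)

prompt = (
    &quot;Here are five sample rows of the vendor CSV, and the JSON shape &quot;
    &quot;our inventory API expects. Write a Python function convert(csv_text) &quot;
    &quot;that returns a list of dicts in that shape. Output only the code.&quot;
)

translator_source = call_llm(prompt)   # the one flaky, semantic step

# From here on it is ordinary engineering: review the generated code,
# test it, check it in. The model pays the semantic cost once; the
# translator behaves the same way on every run after that.
&lt;/pre&gt;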
&lt;p&gt;But, this use case, which I predict is the big one, isn&#39;t what we expected.
We expected LLMs to write poetry or give strategic advice or whatever. We
didn&#39;t expect them to call APIs and immediately turn around and use what they
learned to call other APIs.&lt;/p&gt;
&lt;p&gt;After 30 years of trying and failing to connect one system to another, we
now have a literal universal translator. Plug it into any two things and
it&#39;ll just go, for better or worse, no matter how confused it becomes. And
everyone is doing it, fast, often with a corporate mandate to do it even
faster.&lt;/p&gt;
&lt;p&gt;This kind of scale and speed of (successful!) rollout is unprecedented,
even by the Internet itself, and especially in the glacially slow world of
enterprise system interconnections, where progress grinds to a halt once a
decade only to be finally dislodged by the next misguided technology wave.
Nobody was prepared for it, so nobody was prepared for the consequences.&lt;/p&gt;
&lt;p&gt;One of the odd features of Postel&#39;s Law is that it&#39;s irresistible. Big Central
Infrastructure projects rise and fall with funding, but Postel&#39;s Law
projects are powered by love. A little here, a little there, over time. One
more person plugging one more thing into one more other thing. We did it
once with the Internet, overcoming all the incompatibilities at OSI layers 1
and 2. It subsumed, it is still subsuming, everything.&lt;/p&gt;
&lt;p&gt;Now we&#39;re doing it again at the application layer, the information layer.
And just like we found out when we connected all the computers together the
first time, naively hyperconnected networks make it easy for bad actors to
spread and disrupt at superhuman speeds. We had to invent firewalls, NATs,
TLS, authentication systems, two-factor authentication systems,
phishing-resistant two-factor authentication systems, methodical software
patching, CVE tracking, sandboxing, antivirus systems, EDR systems, DLP
systems, everything. We&#39;ll have to do it all again, but faster and
different.&lt;/p&gt;
&lt;p&gt;Because this time, it&#39;s all software.&lt;/p&gt;</description>
        </item>
        <item>
            <title>Billionaire math</title>
            <pubDate>Fri, 11 Jul 2025 16:18:52 +0000</pubDate>
            <link>https://apenwarr.ca/log/20250711</link>
            <guid isPermaLink="true">https://apenwarr.ca/log/20250711</guid>
            <description>&lt;p&gt;I have a friend who exited his startup a few years ago and is now rich. How
rich is unclear. One day, we were discussing ways to expedite the delivery
of his superyacht and I suggested paying extra. His response, as to so
many of my suggestions, was, “Avery, I’m not &lt;em&gt;that&lt;/em&gt; rich.”&lt;/p&gt;
&lt;p&gt;Everyone has their limit.&lt;/p&gt;
&lt;p&gt;I, too, am not that rich. I have shares in a startup that has not exited,
and they seem to be gracefully ticking up in value as the years pass. But I
have to come to work each day, and if I make a few wrong medium-quality
choices (not even bad ones!), it could all be vaporized in an instant.
Meanwhile, I can’t spend it. So what I have is my accumulated savings from a
long career of writing software and modest tastes (I like hot dogs).&lt;/p&gt;
&lt;p&gt;Those accumulated savings and modest tastes are enough to retire
indefinitely. Is that bragging? It was true even before I started my
startup. Back in 2018, I calculated my “personal runway” to see how long I
could last if I started a company and we didn’t get funded, before I had to
go back to work. My conclusion was I should move from New York City back to
Montreal and then stop worrying about it forever.&lt;/p&gt;
&lt;p&gt;Of course, being in that position means I’m lucky and special. But I’m not
&lt;em&gt;that&lt;/em&gt; lucky and special. My numbers aren’t that different from the average
Canadian or (especially) American software developer nowadays. We all talk a
lot about how the “top 1%” are screwing up society, but software developers
nowadays fall mostly in the top 1-2%[1] of income earners in the US or
Canada. It doesn’t feel like we’re that rich, because we’re surrounded by
people who are about equally rich. And we occasionally bump into a few who
are much more rich, who in turn surround themselves with people who are
about equally rich, so they don’t feel that rich either.&lt;/p&gt;
&lt;p&gt;But, we’re rich.&lt;/p&gt;
&lt;p&gt;Based on my readership demographics, if you’re reading this, you’re probably
a software developer. Do you feel rich?&lt;/p&gt;
&lt;p&gt;&lt;b&gt;It’s all your fault&lt;/b&gt;&lt;/p&gt;
&lt;p&gt;So let’s trace this through. By the numbers, you’re probably a software
developer. So you’re probably in the top 1-2% of wage earners in your
country, and even better globally. So you’re one of those 1%ers ruining
society.&lt;/p&gt;
&lt;p&gt;I’m not the first person to notice this. When I read other posts about it,
they usually stop at this point and say, ha ha. Okay, obviously that’s not
what we meant. Most 1%ers are nice people who pay their taxes. Actually it’s
the top 0.1% screwing up society!&lt;/p&gt;
&lt;p&gt;No.&lt;/p&gt;
&lt;p&gt;I’m not letting us off that easily. Okay, the 0.1%ers are probably worse
(with apologies to my friend and his chronically delayed superyacht). But,
there aren’t that many of them[2] which means they aren’t as powerful as
they think. No one person has very much capacity to do bad things. They only
have the capacity to pay other people to do bad things.&lt;/p&gt;
&lt;p&gt;Some people have no choice but to take that money and do some bad things so
they can feed their families or whatever. But that’s not you. That’s not us.
We’re rich. If we do bad things, that’s entirely on us, no matter who’s
paying our bills.&lt;/p&gt;
&lt;p&gt;&lt;b&gt;What does the top 1% spend their money on?&lt;/b&gt;&lt;/p&gt;
&lt;p&gt;Mostly real estate, food, and junk. If they have kids, maybe they spend a
few hundred $k on overpriced university education (which in sensible
countries is free or cheap).&lt;/p&gt;
&lt;p&gt;What they &lt;em&gt;don’t&lt;/em&gt; spend their money on is making the world a better place.
Because they are convinced they are &lt;em&gt;not that rich&lt;/em&gt; and the world’s problems
are caused by &lt;em&gt;somebody else&lt;/em&gt;.&lt;/p&gt;
&lt;p&gt;When I worked at a megacorp, I spoke to highly paid software engineers who
were torn up about their declined promotion to L4 or L5 or L6, because they
needed to earn more money, because without more money they wouldn’t be able
to afford the mortgage payments on an &lt;a href=&quot;https://apenwarr.ca/log/20180918&quot;&gt;overpriced $1M+ run-down Bay Area
townhome&lt;/a&gt; which is a prerequisite to
starting a family and thus living a meaningful life. This treadmill started
the day after graduation.[3]&lt;/p&gt;
&lt;p&gt;I tried to tell some of these L3 and L4 engineers that they were already in
the top 5%, probably top 2% of wage earners, and their earning potential was
only going up. They didn’t believe me until I showed them the arithmetic and
the economic stats. And even then, facts didn’t help, because it didn’t make
their fears about money go away. They &lt;em&gt;needed&lt;/em&gt; more money before they could
feel safe, and in the meantime, they had no disposable income. Sort of.
Well, for the sort of definition of disposable income that rich people
use.[4]&lt;/p&gt;
&lt;p&gt;Anyway there are psychology studies about this phenomenon. “&lt;a href=&quot;https://www.cbc.ca/news/business/why-no-one-feels-rich-1.5138657&quot;&gt;What people
consider rich is about three times what they currently
make&lt;/a&gt;.” No
matter what they make. So, I’ll forgive you for falling into this trap. I’ll
even forgive me for falling into this trap.&lt;/p&gt;
&lt;p&gt;But it’s time to fall out of it.&lt;/p&gt;
&lt;p&gt;&lt;b&gt;The meaning of life&lt;/b&gt;&lt;/p&gt;
&lt;p&gt;My rich friend is a fountain of wisdom. Part of this wisdom came from the
shock effect of going from normal-software-developer rich to
founder-successful-exit rich, all at once. He described his existential
crisis: “Maybe you do find something you want to spend your money on. But,
I&#39;d bet you never will. It’s a rare problem. &lt;strong&gt;Money, which is the driver
for everyone, is no longer a thing in my life.&lt;/strong&gt;”&lt;/p&gt;
&lt;p&gt;Growing up, I really liked the saying, “Money is just a way of keeping
score.” I think that metaphor goes deeper than most people give it credit
for. Remember &lt;a href=&quot;https://www.reddit.com/r/Mario/comments/13v3hoc/what_even_is_the_point_of_the_score_counter/&quot;&gt;old Super Mario Brothers, which had a vestigial score
counter&lt;/a&gt;?
Do you know anybody who rated their Super Mario Brothers performance based
on the score? I don’t. I’m sure those people exist. They probably have
Twitch channels and are probably competitive to the point of being annoying.
Most normal people get some other enjoyment out of Mario that is not from
the score. Eventually, Nintendo stopped including a score system in Mario
games altogether. Most people have never noticed. The games are still fun.&lt;/p&gt;
&lt;p&gt;Back in the world of capitalism, we’re still keeping score, and we’re still
weirdly competitive about it. We programmers, we 1%ers, are in the top
percentile of capitalism high scores in the entire world - that’s the
literal definition - but we keep fighting with each other to get closer to
top place. Why?&lt;/p&gt;
&lt;p&gt;Because we forgot there’s anything else. Because someone convinced us that
the score even matters.&lt;/p&gt;
&lt;p&gt;The saying isn’t, “Money is &lt;em&gt;the way&lt;/em&gt; of keeping score.” Money is &lt;em&gt;just one
way&lt;/em&gt; of keeping score.&lt;/p&gt;
&lt;p&gt;It’s mostly a pretty good way. Capitalism, for all its flaws, mostly aligns
incentives so we’re motivated to work together and produce more stuff, and
more valuable stuff, than otherwise. Then it automatically gives more power
to people who empirically[5] seem to be good at organizing others to make
money. Rinse and repeat. Number goes up.&lt;/p&gt;
&lt;p&gt;But there are limits. And in the ever-accelerating feedback loop of modern
capitalism, more people reach those limits faster than ever. They might
realize, like my friend, that money is no longer a thing in their life. You
might realize that. We might.&lt;/p&gt;
&lt;p&gt;&lt;b&gt;There’s nothing more dangerous than a powerful person with nothing to prove&lt;/b&gt;&lt;/p&gt;
&lt;p&gt;Billionaires run into this existential crisis, that they obviously have to
have something to live for, and money just isn’t it. Once you can buy
anything you want, you quickly realize that what you want was not very
expensive all along. And then what?&lt;/p&gt;
&lt;p&gt;Some people, the less dangerous ones, retire to their superyacht (if it ever
finally gets delivered, come on already). The dangerous ones pick ever
loftier goals (colonize Mars) and then bet everything on it. Everything.
Their time, their reputation, their relationships, their fortune, their
companies, their morals, everything they’ve ever built. Because if there’s
nothing on the line, there’s no reason to wake up in the morning. And they
really &lt;em&gt;need&lt;/em&gt; to want to wake up in the morning. Even if the reason to wake
up is to deal with today’s unnecessary emergency. As long as, you know, the
emergency requires &lt;em&gt;them&lt;/em&gt; to &lt;em&gt;do something&lt;/em&gt;.&lt;/p&gt;
&lt;p&gt;Dear reader, statistically speaking, you are not a billionaire. But you have
this problem.&lt;/p&gt;
&lt;p&gt;&lt;b&gt;So what then&lt;/b&gt;&lt;/p&gt;
&lt;p&gt;Good question. We live at a moment in history when society is richer and
more productive than it has ever been, with opportunities for even more of
us to become even more rich and productive even more quickly than ever. And
yet, we live in existential fear: the fear that nothing we do matters.[6][7]&lt;/p&gt;
&lt;p&gt;I have bad news for you. This blog post is not going to solve that.&lt;/p&gt;
&lt;p&gt;I have worse news. 98% of society gets to wake up each day and go to work
because they have no choice, so at worst, for them this is a background
philosophical question, like the trolley problem.&lt;/p&gt;
&lt;p&gt;Not you.&lt;/p&gt;
&lt;p&gt;For you this unsolved philosophy problem is urgent &lt;em&gt;right now&lt;/em&gt;. There are
people tied to the tracks. You’re driving the metaphorical trolley. Maybe
nobody told you you’re driving the trolley. Maybe they lied to you and said
someone else is driving. Maybe you have no idea there are people on the
tracks. Maybe you do know, but you’ll get promoted to L6 if you pull the
right lever. Maybe you’re blind. Maybe you’re asleep. Maybe there are no
people on the tracks after all and you’re just destined to go around and
around in circles, forever.&lt;/p&gt;
&lt;p&gt;But whatever happens next: you chose it.&lt;/p&gt;
&lt;p&gt;We chose it.&lt;/p&gt;
&lt;p style=&quot;padding-top: 2em;&quot;&gt;&lt;b&gt;Footnotes&lt;/b&gt;&lt;/p&gt;

&lt;p&gt;[1] Beware of estimates of the “average income of the top 1%.” That average
includes all the richest people in the world. You only need to earn the very
bottom of the 1% bucket in order to be in the top 1%.&lt;/p&gt;
&lt;p&gt;[2] If the population of the US is 340 million, there are actually 340,000
people in the top 0.1%.&lt;/p&gt;
&lt;p&gt;[3] I’m Canadian so I’m disconnected from this phenomenon, but if TV and
movies are to be believed, in America the treadmill starts all the way back
in high school where you stress over getting into an elite university so
that you can land the megacorp job after graduation so that you can stress
about getting promoted. If that’s so, I send my sympathies. That’s not how
it was where I grew up.&lt;/p&gt;
&lt;p&gt;[4] Rich people like us methodically put money into savings accounts,
investments, life insurance, home equity, and so on, and only what’s left
counts as “disposable income.” This is not the definition normal people use.&lt;/p&gt;
&lt;p&gt;[5] Such an interesting double entendre.&lt;/p&gt;
&lt;p&gt;[6] This is what AI doomerism is about. A few people have worked themselves
into a terror that if AI becomes too smart, it will realize that humans are
not actually that useful, and eliminate us in the name of efficiency. That’s
not a story about AI. It’s a story about what we already worry is true.&lt;/p&gt;
&lt;p&gt;[7] I’m in favour of Universal Basic Income (UBI), but it has a big
problem: it reduces your need to wake up in the morning. If the alternative
is &lt;a href=&quot;https://en.wikipedia.org/wiki/Bullshit_Jobs&quot;&gt;bullshit jobs&lt;/a&gt; or suffering
then yeah, UBI is obviously better. And the people who think that if you
don’t work hard, you don’t deserve to live, are nuts. But it’s horribly
dystopian to imagine a society where lots of people wake up and have nothing
that motivates them. The utopian version is to wake up and be able to spend
all your time doing what gives your life meaning. Alas, so far science has
produced no evidence that anything gives your life meaning.&lt;/p&gt;</description>
        </item>
        <item>
            <title>The evasive evitability of enshittification</title>
            <pubDate>Sun, 15 Jun 2025 02:52:58 +0000</pubDate>
            <link>https://apenwarr.ca/log/20250530</link>
            <guid isPermaLink="true">https://apenwarr.ca/log/20250530</guid>
            <description>&lt;p&gt;Our company recently announced a fundraise.  We were grateful for all
the community support, but the Internet also raised a few of its collective
eyebrows, wondering whether this meant the dreaded “enshittification” was
coming next.&lt;/p&gt;
&lt;p&gt;That word describes a very real pattern we’ve all seen before: products
start great, grow fast, and then slowly become worse as the people running
them trade user love for short-term revenue.&lt;/p&gt;
&lt;p&gt;It’s a topic I find genuinely fascinating, and I&#39;ve seen the downward spiral
firsthand at companies I once admired. So I want to talk about why this
happens, and more importantly, why it won&#39;t happen to us. That&#39;s big talk, I
know. But it&#39;s a promise I&#39;m happy for people to hold us to.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;What is enshittification?&lt;/strong&gt;&lt;/p&gt;
&lt;p&gt;The term &quot;enshittification&quot; was first popularized in a &lt;a href=&quot;https://pluralistic.net/2023/01/21/potemkin-ai/#hey-guys&quot;&gt;blog post by Cory
Doctorow&lt;/a&gt;, who put
a catchy name to an effect we&#39;ve all experienced. Software starts off good,
then goes bad. How? Why?&lt;/p&gt;
&lt;p&gt;Enshittification proposes not just a name, but a mechanism. First, a product
is well loved and gains in popularity, market share, and revenue. In fact,
it gets so popular that it starts to defeat competitors. Eventually, it&#39;s
the primary product in the space: a monopoly, or as close as you can get.
And then, suddenly, the owners, who are Capitalists, have their evil nature
finally revealed and they exploit that monopoly to raise prices and make the
product worse, so the captive customers all have to pay more. Quality
doesn&#39;t matter anymore, only exploitation.&lt;/p&gt;
&lt;p&gt;I agree with most of that thesis. I think Doctorow has that mechanism
&lt;em&gt;mostly&lt;/em&gt; right. But, there&#39;s one thing that doesn&#39;t add up for me:&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Enshittification is not a success mechanism.&lt;/strong&gt;&lt;/p&gt;
&lt;p&gt;I can&#39;t think of any examples of companies that, in real life, enshittified
because they were &lt;em&gt;successful&lt;/em&gt;. What I&#39;ve seen is companies that made their
product worse because they were... scared.&lt;/p&gt;
&lt;p&gt;A company that&#39;s growing fast can afford to be optimistic. They create a
positive feedback loop: more user love, more word of mouth, more users, more
money, more product improvements, more user love, and so on. Everyone in the
company can align around that positive feedback loop. It&#39;s a beautiful
thing. It&#39;s also fragile: miss a beat and it flattens out, and soon it&#39;s a
downward spiral instead of an upward one.&lt;/p&gt;
&lt;p&gt;So, if I were, hypothetically, running a company, I think I would be pretty
hesitant to deliberately sacrifice any part of that positive feedback loop,
the loop I and the whole company spent so much time and energy building, to
see if I can grow faster. User love? Nah, I&#39;m sure we&#39;ll be fine, look how
much money and how many users we have! Time to switch strategies!&lt;/p&gt;
&lt;p&gt;Why would I do that? Switching strategies is always a tremendous risk. When
you switch strategies, it&#39;s triggered by passing a threshold, where something
fundamental changes, and your old strategy becomes wrong.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Threshold moments and control&lt;/strong&gt;&lt;/p&gt;
&lt;p&gt;In &lt;a href=&quot;https://en.wikipedia.org/wiki/Reversing_Falls&quot;&gt;Saint John, New Brunswick, there&#39;s a
river&lt;/a&gt; that flows one
direction at high tide, and the other way at low tide. Four times a day,
gravity equalizes, then crosses a threshold to gently start pulling the
other way, then accelerates. What &lt;em&gt;doesn&#39;t&lt;/em&gt; happen is a rapidly flowing
river in one direction &quot;suddenly&quot; shifts to rapidly flowing the other way.
Yes, there&#39;s an instant where the limit from the left is positive and the
limit from the right is negative. But you can see that threshold coming.
It&#39;s predictable.&lt;/p&gt;
&lt;p&gt;In my experience, for a company or a product, there are two kinds of
thresholds like this, that build up slowly and then when crossed, create a
sudden flow change.&lt;/p&gt;
&lt;p&gt;The first one is control: if the visionaries in charge lose control, chances
are high that their replacements won&#39;t &quot;get it.&quot;&lt;/p&gt;
&lt;p&gt;The new people didn&#39;t build the underlying feedback loop, and so they don&#39;t
realize how fragile it is. There are lots of reasons for a change in
control: financial mismanagement, boards of directors, hostile takeovers.&lt;/p&gt;
&lt;p&gt;The worst one is temptation. Being a founder is, well, it actually sucks.
It&#39;s oddly like being repeatedly punched in the face. When I look back at my
career, I guess I&#39;m surprised by how few times per day it feels like I was
punched in the face. But, the
constant face punching gets to you after a while. Once you&#39;ve established a
great product, and amazing customer love, and lots of money, and an upward
spiral, isn&#39;t your creation strong enough yet? Can&#39;t you step back and let
the professionals just run it, confident that they won&#39;t kill the golden
goose?&lt;/p&gt;
&lt;p&gt;Empirically, mostly no, you can&#39;t. Actually the success rate of control
changes, for well loved products, is abysmal.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;The saturation trap&lt;/strong&gt;&lt;/p&gt;
&lt;p&gt;The second trigger of a flow change comes from outside: saturation. Every
successful product, at some point, reaches approximately all the users it&#39;s
ever going to reach. Before that, you can watch its exponential growth rate
slow down: the &lt;a href=&quot;https://blog.apnic.net/2022/02/21/another-year-of-the-transition-to-ipv6/&quot;&gt;infamous
S-curve&lt;/a&gt;
of product adoption.&lt;/p&gt;
&lt;p&gt;Saturation can lead us back to control change: the founders get frustrated
and back out, or the board ousts them and puts in &quot;real business people&quot; who
know how to get growth going again. Generally that doesn&#39;t work. Modern VCs
consider founder replacement a truly desperate move. Maybe
a last-ditch effort to boost short term numbers in preparation for an
acquisition, if you&#39;re lucky.&lt;/p&gt;
&lt;p&gt;But sometimes the leaders stay on despite saturation, and they try on their
own to make things better. Sometimes that &lt;em&gt;does&lt;/em&gt; work. Actually, it&#39;s kind
of amazing how often it seems to work. Among successful companies,
it&#39;s rare to find one that sustained hypergrowth, nonstop, without suffering
through one of these dangerous periods.&lt;/p&gt;
&lt;p&gt;(That&#39;s called survivorship bias. All companies have dangerous periods.
The successful ones survived them. But of those survivors, suspiciously few
are ones that replaced their founders.)&lt;/p&gt;
&lt;p&gt;If you saturate and can&#39;t recover - either by growing more in a big-enough
current market, or by finding new markets to expand into - then the best you
can hope for is for your upward spiral to mature gently into decelerating
growth. If so, and you&#39;re a buddhist, then you hire less, you optimize
margins a bit, you resign yourself to being About This Rich And I Guess
That&#39;s All But It&#39;s Not So Bad.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;The devil&#39;s bargain&lt;/strong&gt;&lt;/p&gt;
&lt;p&gt;Alas, very few people reach that state of zen. Especially the kind of
ambitious people who were able to get that far in the first place. If you
can&#39;t accept saturation and you can&#39;t beat saturation, then you&#39;re down to
two choices: step away and let the new owners enshittify it, hopefully
slowly. Or take the devil&#39;s bargain: enshittify it yourself.&lt;/p&gt;
&lt;p&gt;I would not recommend the latter. If you&#39;re a founder and you find yourself
in that position, honestly, you won&#39;t enjoy doing it and you probably aren&#39;t
even good at it and it&#39;s getting enshittified either way. Let someone else
do the job.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Defenses against enshittification&lt;/strong&gt;&lt;/p&gt;
&lt;p&gt;Okay, maybe that section was not as uplifting as we might have hoped. I&#39;ve
gotta be honest with you here. Doctorow is, after all, mostly right. This
does happen all the time.&lt;/p&gt;
&lt;p&gt;Most founders aren&#39;t perfect for every stage of growth. Most product owners
stumble. Most markets saturate. Most VCs get board control pretty early on
and want hypergrowth or bust. In tech, a lot of the time, if you&#39;re choosing
a product or company to join, that kind of company is all you can get.&lt;/p&gt;
&lt;p&gt;As a founder, maybe you&#39;re okay with growing slowly. Then some copycat shows
up, steals your idea, grows super fast, squeezes you out along with your
moral high ground, and then runs headlong into all the same saturation
problems as everyone else. Tech incentives are awful.&lt;/p&gt;
&lt;p&gt;But, it&#39;s not a lost cause. There are companies (and open source projects)
that keep a good thing going, for decades or more. What do they have in
common?&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;An expansive vision that&#39;s not about money&lt;/strong&gt;, and which opens you up to
lots of users. A big addressable market means you don&#39;t have to
worry about saturation for a long time, even at hypergrowth speeds. Google
certainly never had an incentive to make Google Search worse.&lt;/p&gt;
&lt;p&gt;&lt;i&gt;(Update 2025-06-14: A few people disputed that last bit.  Okay. 
Perhaps Google has occasionally responded to what they thought were
incentives to make search worse -- I wasn&#39;t there, I don&#39;t know -- but it
seems clear in retrospect that when search gets worse, Google does worse. 
So I&#39;ll stick to my claim that their true incentives are to keep improving.)&lt;/i&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Keep control.&lt;/strong&gt; It&#39;s easy to lose control of a project or company at any
point. If you stumble, and you don&#39;t have a backup plan, and there&#39;s someone
waiting to jump on your mistake, then it&#39;s over. Too many companies &quot;bet it
all&quot; on nonstop hypergrowth and &lt;s&gt;&lt;a href=&quot;https://www.reddit.com/r/movies/comments/yuekuu/can_someone_explain_me_this_dialogue_from_gattaca/&quot;&gt;don&#39;t have any way
back&lt;/a&gt;&lt;/s&gt;
have no room in the budget, if results slow down even temporarily.&lt;/p&gt;
&lt;p&gt;Stories abound of companies that scraped close to bankruptcy before
finally pulling through. But far more companies scraped close to
bankruptcy and then went bankrupt. Those companies are forgotten. Avoid
it.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Track your data.&lt;/strong&gt; Part of control is predictability. If you know how
big your market is, and you monitor your growth carefully, you can detect
incoming saturation years before it happens. Knowing the telltale shape of
each part of that S-curve is a superpower. If you can see the future, you
can prevent your own future mistakes.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Believe in competition.&lt;/strong&gt; Google used to have this saying they lived by:
&quot;&lt;a href=&quot;https://9to5google.com/2012/04/05/larry-page-posts-update-from-the-ceo-2012%E2%80%B3-memo-detailing-googles-aspirations/&quot;&gt;the competition is only a click
away&lt;/a&gt;.&quot; That was
excellent framing, because it was true, and it will remain true even if
Google captures 99% of the search market. The key is to cultivate a healthy
fear of competing products, not of your investors or the end of
hypergrowth. Enshittification helps your competitors. That would be dumb.&lt;/p&gt;
&lt;p&gt;(And don&#39;t cheat by using lock-in to make competitors
not, anymore, &quot;only a click away.&quot; That&#39;s missing the whole point!)&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Inoculate yourself.&lt;/strong&gt; If you have to, create your own competition. Linus
  Torvalds, the creator of the Linux kernel, &lt;a href=&quot;https://git-scm.com/about&quot;&gt;famously also created
  Git&lt;/a&gt;, the greatest tool for forking (and maybe
  merging) open source projects that has ever existed. And then he said,
  this is my fork, the &lt;a href=&quot;https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/&quot;&gt;Linus fork&lt;/a&gt;; use it if you want; use someone else&#39;s if
  you want; and now if I want to win, I have to make mine the best. Git was
  created back in 2005, twenty years ago. To this day, Linus&#39;s fork is still
  the central one.&lt;/p&gt;
&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;If you combine these defenses, you can be safe from the decline that others
tell you is inevitable. If you look around for examples, you&#39;ll find that
this does actually work. You won&#39;t be the first. You&#39;ll just be rare.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Side note: Things that aren&#39;t enshittification&lt;/strong&gt;&lt;/p&gt;
&lt;p&gt;I often see people worry about enshittification that isn&#39;t. They might be
good or bad, wise or unwise, but that&#39;s a different topic. Tools aren&#39;t
inherently good or evil. They&#39;re just tools.&lt;/p&gt;
&lt;ol&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;&quot;Helpfulness.&quot;&lt;/strong&gt; There&#39;s a fine line between &quot;telling users about this
cool new feature we built&quot; in the spirit of helping them, and &quot;pestering
users about this cool new feature we built&quot; (typically a misguided AI
implementation) to improve some quarterly KPI. Sometimes it&#39;s hard to see
where that line is. But when you&#39;ve crossed it, you know.&lt;/p&gt;
&lt;p&gt;Are you trying to help a user do what &lt;em&gt;they&lt;/em&gt; want to do, or are you trying
to get them to do what &lt;em&gt;you&lt;/em&gt; want them to do?&lt;/p&gt;
&lt;p&gt;Look into your heart. Avoid the second one. I know you know how. Or you
knew how, once. Remember what that feels like.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Charging money for your product.&lt;/strong&gt; Charging money is okay. Get serious.
&lt;a href=&quot;https://apenwarr.ca/log/20211229&quot;&gt;Companies have to stay in business&lt;/a&gt;.&lt;/p&gt;
&lt;p&gt;That said, I personally really revile the &quot;we&#39;ll make it &lt;a href=&quot;https://tailscale.com/blog/free-plan&quot;&gt;free for
now&lt;/a&gt; and we&#39;ll start charging for the
exact same thing later&quot; strategy. Keep your promises.&lt;/p&gt;
&lt;p&gt;I&#39;m pretty sure nobody but drug dealers breaks those promises on purpose.
But, again, desperation is a powerful motivator. Growth slowing down?
Costs way higher than expected? Time to capture some of that value we
were giving away for free!&lt;/p&gt;
&lt;p&gt;In retrospect, that&#39;s a bait-and-switch, but most founders never planned
it that way. They just didn&#39;t do the math up front, or they were too
naive to know they would have to. And then they had to.&lt;/p&gt;
&lt;p&gt;Famously, Dropbox had a &quot;free forever&quot; plan that provided a certain
amount of free storage.  What they didn&#39;t count on was abandoned
accounts, accumulating every year, with stored stuff they could never
delete.  Even if a very good fixed fraction of users each year upgraded
to a paid plan, all the ones that didn&#39;t, kept piling up...  year after
year...  after year...  until they had to start &lt;a href=&quot;https://www.cnbc.com/2018/02/23/dropbox-shows-how-it-manages-costs-by-deleting-inactive-accounts.html&quot;&gt;deleting old free
accounts and the data in
them&lt;/a&gt;. 
A similar story &lt;a href=&quot;https://news.ycombinator.com/item?id=24143588&quot;&gt;happened with
Docker&lt;/a&gt;,
which used to host unlimited container downloads for free.  In hindsight
that was mathematically unsustainable.  Success guaranteed failure.&lt;/p&gt;
&lt;p&gt;Do the math up
front. If you&#39;re not sure how, find someone who can.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Value pricing.&lt;/strong&gt; (ie. charging different prices to different people.)
It&#39;s okay to charge money. It&#39;s even okay to charge money to some kinds of
people (say, corporate users) and not others. It&#39;s also okay to charge money
for an almost-the-same-but-slightly-better product. It&#39;s okay to charge
money for support for your open source tool (though I stay away from that;
it incentivizes you to make the product worse).&lt;/p&gt;
&lt;p&gt;It&#39;s even okay to charge immense amounts of money for a commercial
product that&#39;s barely better than your open source one! Or for a part of
your product that costs you almost nothing.&lt;/p&gt;
&lt;p&gt;But, you have to
do the rest of the work. Make sure the reason your users don&#39;t
switch away is that you&#39;re the best, not that you have the best lock-in.
Yeah, I&#39;m talking to you, cloud egress fees.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Copying competitors.&lt;/strong&gt; It&#39;s okay to copy features from competitors.
It&#39;s okay to position yourself against competitors. It&#39;s okay to win
customers away from competitors. But it&#39;s not okay to lie.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Bugs.&lt;/strong&gt; It&#39;s okay to fix bugs. It&#39;s okay to decide not to fix bugs;
&lt;a href=&quot;https://apenwarr.ca/log/20171213&quot;&gt;you&#39;ll have to sometimes, anyway&lt;/a&gt;. It&#39;s
okay to take out &lt;a href=&quot;https://apenwarr.ca/log/20230605&quot;&gt;technical debt&lt;/a&gt;. It&#39;s
okay to pay off technical debt. It&#39;s okay to let technical debt languish
forever.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Backward incompatible changes.&lt;/strong&gt; It&#39;s &lt;a href=&quot;https://tailscale.com/blog/community-projects&quot;&gt;dumb to release a new version
that breaks backward
compatibility&lt;/a&gt; with your old
version. It&#39;s tempting. It annoys your users. But it&#39;s not enshittification
for the simple reason that it&#39;s phenomenally ineffective at maintaining
or exploiting a monopoly, which is what enshittification is supposed to be
about. You know who&#39;s good at monopolies? Intel and Microsoft. They don&#39;t
break old versions.&lt;/p&gt;
&lt;/li&gt;
&lt;/ol&gt;
&lt;p&gt;Enshittification is real, and tragic. But let&#39;s protect a
useful term and its definition! Those things aren&#39;t it.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Epilogue: a special note to founders&lt;/strong&gt;&lt;/p&gt;
&lt;p&gt;If you&#39;re a founder or a product owner, I hope all this helps. I&#39;m sad to
say, you have a lot of potential pitfalls in your future. But, remember that
they&#39;re only &lt;em&gt;potential&lt;/em&gt; pitfalls. Not everyone falls into them.&lt;/p&gt;
&lt;p&gt;Plan ahead. Remember where you came from. Keep your integrity. Do your best.&lt;/p&gt;
&lt;p&gt;I will too.&lt;/p&gt;</description>
        </item>
        <item>
            <title>NPS, the good parts</title>
            <pubDate>Tue, 05 Dec 2023 05:01:12 +0000</pubDate>
            <link>https://apenwarr.ca/log/20231204</link>
            <guid isPermaLink="true">https://apenwarr.ca/log/20231204</guid>
            <description>&lt;p&gt;The Net Promoter Score (NPS) is a statistically questionable way to turn a
set of 10-point ratings into a single number you can compare with other
NPSes. That&#39;s not the good part.&lt;/p&gt;
&lt;p&gt;&lt;b&gt;Humans&lt;/b&gt;&lt;/p&gt;
&lt;p&gt;To understand the good parts, first we have to start with humans. Humans
have emotions, and those emotions are what they mostly use when asked to
rate things on a 10-point scale.&lt;/p&gt;
&lt;p&gt;Almost exactly twenty years ago, I wrote about sitting on a plane next to a
&lt;a href=&quot;/log/20031227&quot;&gt;musician who told me about music album reviews&lt;/a&gt;. The worst
rating an artist can receive, he said, is a lukewarm one. If people think
your music is neutral, it means you didn&#39;t make them feel anything at all.
You failed. Someone might buy music that reviewers hate, or buy music that
people love, but they aren&#39;t really that interested in music that is just
kinda meh. They listen to music because they want to feel something.&lt;/p&gt;
&lt;p&gt;(At the time I contrasted that with tech reviews in computer magazines
(remember those?), and how negative ratings were the worst thing for a tech
product, so magazines never produced them, lest they get fewer free samples.
All these years later, journalism is dead but we&#39;re still debating the
ethics of game companies sponsoring Twitch streams. You can bet there&#39;s no
sponsored game that gets an actively negative review during 5+ hours of
gameplay and still gets more money from that sponsor. If artists just want
you to feel something, but no vendor will pay for a game review that says it
sucks, I wonder what that says about video game companies and art?)&lt;/p&gt;
&lt;p&gt;Anyway, when you ask regular humans, who are not being sponsored, to rate
things on a 10-point scale, they will rate based on their emotions. Most
of the ratings will be just kinda meh, because most products are, if we&#39;re
honest, just kinda meh. I go through most of my days using a variety of
products and services that do not, on any more than the rarest basis, elicit
any emotion at all. Mostly I don&#39;t notice those. I notice when I have
experiences that are surprisingly good, or (less surprisingly but still
notably) bad. Or, I notice when one of the services in any of those three
categories asks me to rate them on a 10-point scale.&lt;/p&gt;
&lt;p&gt;&lt;b&gt;The moment&lt;/b&gt;&lt;/p&gt;
&lt;p&gt;The moment when they ask me is important. Many products and services are
just kinda invisibly meh, most of the time, so perhaps I&#39;d give them a meh
rating. But if my bluetooth headphones are currently failing to connect, or
I just had to use an airline&#39;s online international check-in system and it
once again rejected my passport for no reason, then maybe my score will be
extra low. Or if Apple releases a new laptop that finally brings back a
non-sucky keyboard after making laptops with sucky keyboards for literally
years because of some obscure internal political battle, maybe I&#39;ll give a
high rating for a while.&lt;/p&gt;
&lt;p&gt;If you&#39;re a person who likes manipulating ratings, you&#39;ll figure out what
moments are best for asking for the rating you want. But let&#39;s assume you&#39;re
above that sort of thing, because that&#39;s not one of the good parts.&lt;/p&gt;
&lt;p&gt;&lt;b&gt;The calibration&lt;/b&gt;&lt;/p&gt;
&lt;p&gt;Just now I said that if I&#39;m using an invisible meh product or service, I
would rate it with a meh rating. But that&#39;s not true in real life, because
even though I was having no emotion about, say, Google Meet during a call,
perhaps when they ask me (after every...single...call) how it was, that
makes me feel an emotion after all. Maybe that emotion is &quot;leave me alone,
you ask me this way too often.&quot; Or maybe I&#39;ve learned that if I pick
anything other than five stars, I get a clicky multi-tab questionnaire that
I don&#39;t have time to answer, so I almost always pick five stars unless the
experience was &lt;em&gt;so&lt;/em&gt; bad that I feel it&#39;s worth an extra minute because I
simply need to tell the unresponsive and uncaring machine how I really feel.&lt;/p&gt;
&lt;p&gt;Google Meet never gets a meh rating. It&#39;s designed not to. In Google Meet,
meh gets five stars.&lt;/p&gt;
&lt;p&gt;Or maybe I bought something from Amazon and it came with a thank-you card
begging for a 5-star rating (this happens). Or a restaurant offers free
stuff if I leave a 5-star rating and prove it (this happens). Or I ride in
an Uber and there&#39;s a sign on the back seat talking about how they really
need a 5-star rating because this job is essential so they can support their
family and too many 4-star ratings get them disqualified (this happens,
though apparently not at UberEats). Okay. As one of my high school teachers,
Physics I think, once said, &quot;A&#39;s don&#39;t cost me anything. What grade do you
want?&quot; (He was that kind of teacher. I learned a lot.)&lt;/p&gt;
&lt;p&gt;I&#39;m not a professional reviewer. Almost nobody you ask is a professional
reviewer. Most people don&#39;t actually care; they have no basis for
comparison; just about anything will influence their score. They will not
feel bad about this. They&#39;re just trying to exit your stupid popup
interruption as quickly as possible, and half the time they would have
mashed the X button but you hid it, so they mashed this one instead.
People&#39;s answers will be... untrustworthy at best.&lt;/p&gt;
&lt;p&gt;That&#39;s not the good part.&lt;/p&gt;
&lt;p&gt;&lt;b&gt;And yet&lt;/b&gt;&lt;/p&gt;
&lt;p&gt;And yet. As in so many things, randomness tends to average out, &lt;a href=&quot;https://en.wikipedia.org/wiki/Central_limit_theorem&quot;&gt;probably
into a Gaussian distribution, says the Central Limit
Theorem&lt;/a&gt;.&lt;/p&gt;
&lt;p&gt;The Central Limit Theorem is the fun-destroying reason that you can&#39;t just
average 10-point ratings or star ratings and get something useful: most
scores are meh, a few are extra bad, a few are extra good, and the next
thing you know, every Uber driver is a 4.997. Or you can &lt;a href=&quot;https://xkcd.com/325/&quot;&gt;ship a bobcat one
in 30 times&lt;/a&gt; and still get 97% positive feedback.&lt;/p&gt;
&lt;p&gt;There&#39;s some deep truth hidden in NPS calculations: that meh ratings mean
nothing, that the frequency of strong emotions matters a lot, and that
deliriously happy moments don&#39;t average out disastrous ones.&lt;/p&gt;
&lt;p&gt;Deming might call this &lt;a href=&quot;/log/20161226&quot;&gt;the continuous region and the &quot;special
causes&quot;&lt;/a&gt; (outliers). NPS is all about counting outliers, and
averages don&#39;t work on outliers.&lt;/p&gt;
&lt;p&gt;&lt;b&gt;The degrees of meh&lt;/b&gt;&lt;/p&gt;
&lt;p&gt;Just kidding, there are no degrees of meh. If you&#39;re not feeling anything,
you&#39;re just not. You&#39;re not feeling more nothing, or less nothing.&lt;/p&gt;
&lt;p&gt;One of my friends used to say, on a scale of 6 to 9, how good is this? It
was a joke about how nobody ever gives a score less than 6 out of 10, and
nothing ever deserves a 10. It was one of those jokes that was never funny
because they always had to explain it. But they seemed to enjoy explaining
it, and after hearing the explanation the first several times, that part was
kinda funny. Anyway, if you took the 6-to-9 instructions seriously, you&#39;d
end up rating almost everything between 7 and 8, just to save room for
something unimaginably bad or unimaginably good, just like you did with
1-to-10, so it didn&#39;t help at all.&lt;/p&gt;
&lt;p&gt;And so, the NPS people say, rather than changing the scale, let&#39;s just
define meaningful regions in the existing scale. Only very angry people
use scores like 1-6. Only very happy people use scores like 9 or 10. And if
you&#39;re not one of those you&#39;re meh. It doesn&#39;t matter how meh. And in fact,
it doesn&#39;t matter much whether you&#39;re &quot;5 angry&quot; or &quot;1 angry&quot;; that says more
about your internal rating system than about the degree of what you
experienced. Similarly with 9 vs 10; it seems like you&#39;re quite happy. Let&#39;s
not split hairs.&lt;/p&gt;
&lt;p&gt;So with NPS we take a 10-point scale and turn it into a 3-point scale. The
exact opposite of my old friend: you know people misuse the 10-point scale,
but instead of giving them a new 3-point scale to misuse, you just
postprocess the 10-point scale to clean it up. And now we have a 3-point
scale with 3 meaningful points. That&#39;s a good part.&lt;/p&gt;
&lt;p&gt;&lt;b&gt;Evangelism&lt;/b&gt;&lt;/p&gt;
&lt;p&gt;So then what? Average out the measurements on the newly calibrated 1-2-3
scale, right?&lt;/p&gt;
&lt;p&gt;Still no. It turns out there are three kinds of people: the ones so mad they
will tell everyone how mad they are about your thing; the ones who don&#39;t
care and will never think about you again if they can avoid it; and the ones
who had such an over-the-top amazing experience that they will tell everyone
how happy they are about your thing.&lt;/p&gt;
&lt;p&gt;NPS says, you really care about the 1s and the 3s, but averaging them makes
no sense. And the 2s have no effect on anything, so you can just leave them
out.&lt;/p&gt;
&lt;p&gt;Cool, right?&lt;/p&gt;
&lt;p&gt;Pretty cool. Unfortunately, that&#39;s still two valuable numbers but we
promised you one single score. So NPS says, let&#39;s subtract them! Yay! Okay,
no. That&#39;s not the good part.&lt;/p&gt;
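&lt;p&gt;(Here&#39;s a minimal sketch, in Python, of that postprocessing as described
above: bucket the 10-point scores into the three groups and keep all three
shares, rather than subtracting them the way classic NPS does. The cutoffs
are just the ones from the paragraphs above, not an official methodology.)&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;def bucket(score):
    # map a 1-10 rating onto the 3-point scale described above
    if score &lt;= 6:
        return &quot;angry&quot;
    if score &gt;= 9:
        return &quot;delighted&quot;
    return &quot;meh&quot;

def summarize(scores):
    # keep three separate shares instead of one all-in-one number
    counts = {&quot;angry&quot;: 0, &quot;meh&quot;: 0, &quot;delighted&quot;: 0}
    for s in scores:
        counts[bucket(s)] += 1
    return {k: v / len(scores) for k, v in counts.items()}

print(summarize([7, 8, 8, 10, 3, 7, 9, 8, 2, 8]))
# {&#39;angry&#39;: 0.2, &#39;meh&#39;: 0.6, &#39;delighted&#39;: 0.2}
# classic NPS would report 0.2 - 0.2 = 0 and throw the detail away
&lt;/code&gt;&lt;/pre&gt;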
&lt;p&gt;&lt;b&gt;The threefold path&lt;/b&gt;&lt;/p&gt;
&lt;p&gt;I like to look at it this way instead. First of all, we have computers now;
we&#39;re not tracking ratings on one of those 1980s desktop bookkeeping
printer-calculators, so you don&#39;t have to make every analysis into one single
all-encompassing number.&lt;/p&gt;
&lt;p&gt;Postprocessing a 10-point scale into a 3-point one, that seems pretty smart.
But you have to stop there. Maybe you now have three separate aggregate
numbers. That&#39;s tough, I&#39;m sorry. Here&#39;s a nickel, kid, go sell your
personal information in exchange for a spreadsheet app. (I don&#39;t know what
you&#39;ll do with the nickel. Anyway I don&#39;t need it. Here. Go.)&lt;/p&gt;
&lt;p&gt;Each of those three rating types gives you something different you can do in
response:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;The &lt;b&gt;ones&lt;/b&gt; had a very bad experience, which is hopefully an
  outlier, unless you&#39;re Comcast or the New York Times subscription
  department. Normally you want to get rid of every bad experience. The
  absence of awful isn&#39;t greatness, it&#39;s just meh, but meh is infinitely
  better than awful. Eliminating negative outliers is a whole job. It&#39;s a
  job filled with Deming&#39;s special causes. It&#39;s hard, and it requires
  creativity, but it really matters.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;The &lt;b&gt;twos&lt;/b&gt; had a meh experience. This is, most commonly, the
  majority. But perhaps they could have had a better experience. Perhaps
  even a great one? Deming would say you can and should work to improve the
  average experience and reduce the standard deviation. That&#39;s the dream;
  heck, what if the average experience could be an amazing one? That&#39;s
  rarely achieved, but a few products achieve it, especially luxury brands.
  And maybe that Broadway show, Hamilton? I don&#39;t know, I couldn&#39;t get tickets,
  because everyone said it was great so it was always sold out and I guess
  that&#39;s my point.&lt;/p&gt;
&lt;p&gt;If getting the average up to three is too hard or will
  take too long (and it will take a long time!), you could still try to at
  least randomly turn a few of them into threes. For example, they say
  users who have a great customer support experience often rate a product more
  highly than the ones who never needed to contact support at all, because
  the support interaction made the company feel more personal. Maybe you can&#39;t
  afford to interact with everyone, but if you have to interact anyway,
  perhaps you can use that chance to make it great instead of meh.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;The &lt;b&gt;threes&lt;/b&gt; already had an amazing experience. Nothing to do, right?
  No! These are the people who are, or who can become, your superfan
  evangelists. Sometimes that happens on its own, but often people don&#39;t
  know where to put that excess positive energy. You can help them. Pop
  stars and fashion brands know all about this; get some true believers
  really excited about your product, and the impact is huge. This is a
  completely different job than turning ones into twos, or twos into threes.&lt;/p&gt;
&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;&lt;b&gt;What not to do&lt;/b&gt;&lt;/p&gt;
&lt;p&gt;Those are all good parts. Let&#39;s ignore that unfortunately they
aren&#39;t part of NPS at all and we&#39;ve strayed way off topic.&lt;/p&gt;
&lt;p&gt;From here, there are several additional things you can do, but it turns out
you shouldn&#39;t.&lt;/p&gt;
&lt;p&gt;&lt;b&gt;Don&#39;t compare scores with other products.&lt;/b&gt; I guarantee you, your methodology
isn&#39;t the same as theirs. The slightest change in timing or presentation
will change the score in incomparable ways. You just can&#39;t. I&#39;m sorry.&lt;/p&gt;
&lt;p&gt;&lt;b&gt;Don&#39;t reward your team based on aggregate ratings.&lt;/b&gt; They will find a
way to change the ratings. Trust me, it&#39;s too easy.&lt;/p&gt;
&lt;p&gt;&lt;b&gt;Don&#39;t average or difference the bad with the great.&lt;/b&gt; The two groups have
nothing to do with each other, require completely different responses
(usually from different teams), and are often very small. They&#39;re outliers
after all. They&#39;re by definition not the mainstream. Outlier data is very
noisy and each terrible experience is different from the others; each
deliriously happy experience is special. As the famous writer said, &lt;a href=&quot;https://en.wikipedia.org/wiki/Anna_Karenina_principle&quot;&gt;all
meh families are
alike&lt;/a&gt;.&lt;/p&gt;
&lt;p&gt;&lt;b&gt;Don&#39;t fret about which &quot;standard&quot; rating ranges translate to
bad-meh-good.&lt;/b&gt; Your particular survey or product will have the bad
outliers, the big centre, and the great outliers. Run your survey enough and
you&#39;ll be able to find them.&lt;/p&gt;
&lt;p&gt;&lt;b&gt;Don&#39;t call it NPS.&lt;/b&gt; NPS nowadays has a bad reputation. Nobody can
really explain the bad reputation; I&#39;ve asked. But they&#39;ve all heard it&#39;s
bad and wrong and misguided and unscientific and &quot;not real statistics&quot; and
gives wrong answers and leads to bad incentives. You don&#39;t want that stigma
attached to your survey mechanic. But if you call it a &lt;em&gt;satisfaction
survey&lt;/em&gt; on a 10-point or 5-point scale, tada, clear skies and lush green fields ahead.&lt;/p&gt;
&lt;p&gt;&lt;b&gt;Bonus advice&lt;/b&gt;&lt;/p&gt;
&lt;p&gt;Perhaps the neatest thing about NPS is how much information you can get from
just one simple question that can be answered with the same effort it takes
to dismiss a popup.&lt;/p&gt;
&lt;p&gt;I joked about Google Meet earlier, but I wasn&#39;t
really kidding; after having a few meetings, if I had learned that I could
just rate from 1 to 5 stars and then &lt;em&gt;not&lt;/em&gt; get guilted for giving anything
other than 5, I would do it. It would be great science and pretty
unobtrusive. As it is, I lie instead. (I don&#39;t even skip, because it&#39;s
faster to get back to the menu by lying than by skipping.)&lt;/p&gt;
&lt;p&gt;While we&#39;re here, only the weirdest people want to answer a survey that says
it will take &quot;just 5 minutes&quot; or &quot;just 30 seconds.&quot; I don&#39;t have 30 seconds,
I&#39;m busy being mad/meh/excited about your product, I have other things to
do! But I can click just one single star rating, as long as I&#39;m 100%
confident that the survey will go the heck away after that. (And don&#39;t even
get me started about the extra layer in &quot;Can we ask you a few simple
questions about our website? Yes or no&quot;)&lt;/p&gt;
&lt;p&gt;Also, don&#39;t be the survey that promises one question and then asks &quot;just one
more question.&quot; Be the survey that gets a reputation for really truly asking
that one question. Then ask it, optionally, in more places and more often. A
good role model is those knowledgebases where every article offers just
thumbs up or thumbs down (or the default of no click, which means meh). That
way you can legitimately look at aggregates or even the same person&#39;s
answers over time, at different points in the app, after they have different
parts of the experience. And you can compare scores at the same point after
you update the experience.&lt;/p&gt;
&lt;p&gt;But for heaven&#39;s sake, not by just averaging them.&lt;/p&gt;</description>
        </item>
        <item>
            <title>Interesting</title>
            <pubDate>Fri, 06 Oct 2023 20:59:31 +0000</pubDate>
            <link>https://apenwarr.ca/log/20231006</link>
            <guid isPermaLink="true">https://apenwarr.ca/log/20231006</guid>
            <description>&lt;p&gt;A few conversations last week made me realize I use the word “interesting” in an unusual way.&lt;/p&gt;
&lt;p&gt;I rely heavily on mental models. Of course, everyone &lt;em&gt;relies&lt;/em&gt; on mental models. But I do it intentionally and I push it extra hard.&lt;/p&gt;
&lt;p&gt;What I mean by that is, when I’m making predictions about what will happen next, I mostly don’t look around me and make a judgement based on my immediate surroundings. Instead, I look at what I see, try to match it to something inside my mental model, and then let the mental model extrapolate what “should” happen from there.&lt;/p&gt;
&lt;p&gt;If this sounds predictably error prone: yes. It is.&lt;/p&gt;
&lt;p&gt;But it’s also powerful, when used the right way, which I try to do. Here’s my system.&lt;/p&gt;
&lt;p&gt;&lt;b&gt;Confirmation bias&lt;/b&gt;&lt;/p&gt;
&lt;p&gt;First of all, let’s acknowledge the problem with mental models: confirmation bias. Confirmation bias is the tendency of all people, including me and you, to consciously or subconsciously look for evidence to support what we already believe to be true, and try to ignore or reject evidence that disagrees with our beliefs.&lt;/p&gt;
&lt;p&gt;This is just something your brain does. If you believe you’re exempt from this, you’re wrong, and dangerously so. Confirmation bias gives you more certainty where certainty is not necessarily warranted, and we all act on that unwarranted certainty sometimes.&lt;/p&gt;
&lt;p&gt;On the one hand, we would all collapse from stress and probably die from bear attacks if we didn’t maintain some amount of certainty, even if it’s certainty about wrong things. But on the other hand, certainty about wrong things is pretty inefficient.&lt;/p&gt;
&lt;p&gt;There’s a word for the feeling of stress when your brain is working hard to ignore or reject evidence against your beliefs: cognitive dissonance. Certain Internet Dingbats have recently made entire careers talking about how to build and exploit cognitive dissonance, so I’ll try to change the subject quickly, but I’ll say this: cognitive dissonance is bad… if you don’t realize you’re having it.&lt;/p&gt;
&lt;p&gt;But your own cognitive dissonance is &lt;em&gt;amazingly useful&lt;/em&gt; if you notice the feeling and use it as a tool.&lt;/p&gt;
&lt;p&gt;&lt;b&gt;The search for dissonance&lt;/b&gt;&lt;/p&gt;
&lt;p&gt;Whether you like it or not, your brain is going to be working full time, on automatic pilot, in the background, looking for evidence to support your beliefs. But you know that; at least, you know it now because I just told you. You can be aware of this effect, but you can’t prevent it, which is annoying.&lt;/p&gt;
&lt;p&gt;But you can try to compensate for it. What that means is using the part of your brain you have control over — the supposedly rational part — to look for the opposite: things that don’t match what you believe.&lt;/p&gt;
&lt;p&gt;To take a slight detour, what’s the relationship between your beliefs and your mental model? For the purposes of this discussion, I’m going to say that mental models are a &lt;em&gt;system for generating beliefs.&lt;/em&gt; Beliefs are the output of mental models. And there’s a feedback loop: beliefs are also the things you generalize in order to produce your mental model. (Self-proclaimed ”Bayesians” will know what I’m talking about here.)&lt;/p&gt;
&lt;p&gt;So let’s put it this way: your mental model, combined with current observations, produce your set of beliefs about the world and about what will happen next.&lt;/p&gt;
&lt;p&gt;Now, what happens if what you expected to happen next, doesn’t happen? Or something happens that was entirely unexpected? Or even, what if someone tells you you’re wrong and they expect something else to happen?&lt;/p&gt;
&lt;p&gt;Those situations are some of the most useful ones in the world. They’re what I mean by &lt;em&gt;interesting&lt;/em&gt;. &lt;/p&gt;
&lt;p&gt;&lt;b&gt;The “aha” moment&lt;/b&gt;&lt;/p&gt;
&lt;ul&gt;
&lt;i&gt;The most exciting phrase to hear in science, the one that heralds new discoveries, is not “Eureka!” (I found it!) but “That’s funny…”&lt;/i&gt;
&lt;br&gt;
&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;— &lt;a
href=&quot;https://quoteinvestigator.com/2015/03/02/eureka-funny/&quot;&gt;possibly&lt;/a&gt; Isaac Asimov
&lt;/ul&gt;

&lt;p&gt;When you encounter evidence that your mental model mismatches someone else’s model, that’s an exciting opportunity to compare and figure out which one of you is wrong (or both). Not everybody is super excited about doing that with you, so you have to be respectful. But the most important people to surround yourself with, at least for mental model purposes, are the ones who will talk it through with you.&lt;/p&gt;
&lt;p&gt;Or, if you get really lucky, your predictions turn out to be demonstrably concretely wrong. That’s an even bigger opportunity, because now you get to figure out what part of your mental model is mistaken, and you don’t have to negotiate with a possibly-unwilling partner in order to do it. It’s you against reality. It’s science: you had a hypothesis, you did an experiment, your hypothesis was proven wrong. Neat! Now we’re getting somewhere.&lt;/p&gt;
&lt;p&gt;What follows is then the often-tedious process of figuring out what actual thing was wrong with your model, updating the model, generating new outputs that presumably match your current observations, and then generating new hypotheses that you can try out to see if the new model works better more generally.&lt;/p&gt;
&lt;p&gt;For physicists, this whole process can sometimes take decades and require building multiple supercolliders. For most of us, it often takes less time than that, so we should count ourselves fortunate even if sometimes we get frustrated.&lt;/p&gt;
&lt;p&gt;The reason we update our model, of course, is that most of the time, the update changes a lot more predictions than just the one you’re working with right now. Turning observations back into generalizable mental models allows you to learn things you’ve never been taught; perhaps things nobody has ever learned before. That’s a superpower.&lt;/p&gt;
&lt;p&gt;&lt;b&gt;Proceeding under uncertainty&lt;/b&gt;&lt;/p&gt;
&lt;p&gt;But we still have a problem: that pesky slowness. Observing outcomes, updating models, generating new hypotheses, and repeating the loop, although productive, can be very time consuming. My guess is that’s why we didn’t evolve to do that loop most of the time. Analysis paralysis is no good when a tiger is chasing you and you’re worried your preconceived notion that it wants to eat you may or may not be correct.&lt;/p&gt;
&lt;p&gt;Let’s tie this back to business for a moment.&lt;/p&gt;
&lt;p&gt;You have evidence that your mental model about your business is not correct. For example, let’s say you have two teams of people, both very smart and well-informed, who believe conflicting things about what you should do next. That’s &lt;em&gt;interesting&lt;/em&gt;, because first of all, your mental model is that these two groups of people are very smart and make right decisions almost all the time, or you wouldn’t have hired them. How can two conflicting things be the right decision? They probably can’t. That means we have a few possibilities:&lt;/p&gt;
&lt;ol&gt;
&lt;li&gt;The first group is right&lt;/li&gt;
&lt;li&gt;The second group is right&lt;/li&gt;
&lt;li&gt;Both groups are wrong&lt;/li&gt;
&lt;li&gt;The appearance of conflict is actually not correct, because you missed something critical&lt;/li&gt;
&lt;/ol&gt;
&lt;p&gt;There is also often a fifth possibility:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Okay, it’s probably one of the first four but I don’t have time to figure that out right now&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;In that case, there’s various wisdom out there involving &lt;a href=&quot;https://www.inc.com/jeff-haden/amazon-founder-jeff-bezos-this-is-how-successful-people-make-such-smart-decisions.html&quot;&gt;one- vs two-way doors&lt;/a&gt;, and oxen pulling in different directions, and so on. But it comes down to this: almost always, it’s better to get everyone aligned to the same direction, even if it’s a somewhat wrong direction, than to have different people going in different directions.&lt;/p&gt;
&lt;p&gt;To be honest, I quite dislike it when that’s necessary. But sometimes it is, and you might as well accept it in the short term.&lt;/p&gt;
&lt;p&gt;The way I make myself feel better about it is to choose the path that will allow us to learn as much as possible, as quickly as possible, in order to update our mental models as quickly as possible (without doing &lt;em&gt;too&lt;/em&gt; much damage) so we have fewer of these situations in the future. In other words, yes, we “bias toward action” — but maybe more of a “bias toward learning.” And even after the action has started, we don’t stop trying to figure out the truth.&lt;/p&gt;
&lt;p&gt;&lt;b&gt;Being wrong&lt;/b&gt;&lt;/p&gt;
&lt;p&gt;Leaving aside many philosophers’ objections to the idea that “the truth” exists, I think we can all agree that being wrong is pretty uncomfortable. Partly that’s cognitive dissonance again, and partly it’s just being embarrassed in front of your peers. But for me, what matters more is the objective operational expense of the bad decisions we make by being wrong.&lt;/p&gt;
&lt;p&gt;You know what’s even worse (and more embarrassing, and more expensive) than being wrong? Being wrong for &lt;em&gt;even longer&lt;/em&gt; because we ignored the evidence in front of our eyes.&lt;/p&gt;
&lt;p&gt;You might have to talk yourself into this point of view. For many of us, admitting wrongness hurts more than continuing wrongness. But if you can pull off that change in perspective, you’ll be able to do things few other people can.&lt;/p&gt;
&lt;p&gt;&lt;b&gt;Bonus: Strong opinions held weakly&lt;/b&gt;&lt;/p&gt;
&lt;p&gt;Like many young naive nerds, when I first heard of the idea of “strong opinions held weakly,” I thought it was a pretty good idea. At least, clearly more productive than weak opinions held weakly (which are fine if you want to keep your job), or weak opinions held strongly (which usually keep you out of the spotlight).&lt;/p&gt;
&lt;p&gt;The real competitor to strong opinions held weakly is, of course, strong opinions held strongly. We’ve all met those people. They are supremely confident and inspiring, until they inspire everyone to jump off a cliff with them.&lt;/p&gt;
&lt;p&gt;Strong opinions held weakly, on the other hand, is really an invitation to debate. If you disagree with me, why not try to convince me otherwise? Let the best idea win.&lt;/p&gt;
&lt;p&gt;After some decades of experience with this approach, however, I eventually learned that the problem with this framing is the word “debate.” Everyone has a mental model, but not everyone wants to debate it. And if you’re really good at debating — the thing they teach you to be, in debate club or whatever — then you learn how to “win” debates without uncovering actual truth.&lt;/p&gt;
&lt;p&gt;Some days it feels like most of the Internet today is people “debating” their weakly-held strong beliefs and pulling out every rhetorical trick they can find, in order to “win” some kind of low-stakes war of opinion where there was no right answer in the first place.&lt;/p&gt;
&lt;p&gt;Anyway, I don’t recommend it, it’s kind of a waste of time. The people who want to hang out with you at the debate club are the people who already, secretly, have the same mental models as you in all the ways that matter.&lt;/p&gt;
&lt;p&gt;What’s really useful, and way harder, is to find the people who are not interested in debating you at all, and figure out why.&lt;/p&gt;</description>
        </item>
        <item>
            <title>Tech debt metaphor maximalism</title>
            <pubDate>Tue, 11 Jul 2023 03:12:47 +0000</pubDate>
            <link>https://apenwarr.ca/log/20230605</link>
            <guid isPermaLink="true">https://apenwarr.ca/log/20230605</guid>
            <description>&lt;p&gt;I really like the &quot;tech debt&quot; metaphor. A lot of people don&#39;t,
but I think that&#39;s because they either don&#39;t extend the metaphor far enough,
or because they don&#39;t properly understand financial debt.&lt;/p&gt;
&lt;p&gt;So let&#39;s talk about debt!&lt;/p&gt;
&lt;p&gt;&lt;b&gt;Consumer debt vs capital investment&lt;/b&gt;&lt;/p&gt;
&lt;p&gt;Back in school my professor, &lt;a href=&quot;http://lwsmith.ca/&quot;&gt;Canadian economics superhero Larry
Smith&lt;/a&gt;, explained debt this way (paraphrased): debt is
stupid if it&#39;s for instant gratification that you pay for later, with
interest. But debt is great if it means you can make more money than the
interest payments.&lt;/p&gt;
&lt;p&gt;A family that takes on high-interest credit card debt
for a visit to Disneyland is wasting money. If you think you can pay it off
in a year, you&#39;ll pay 20%-ish interest for that year for no reason. You can
instead save up for a year and get the same gratification next year without
the 20% surcharge.&lt;/p&gt;
&lt;p&gt;But if you want to buy a $500k machine that will earn your factory an additional
$1M/year in revenue, it would be foolish &lt;em&gt;not&lt;/em&gt; to buy it now, even with 20%
interest ($100k/year). That&#39;s a profit of $900k in just the first year!
(excluding depreciation)&lt;/p&gt;
&lt;p&gt;There&#39;s a reason profitable companies with CFOs take on debt, and often the
total debt increases rather than decreases over time. They&#39;re not idiots.
They&#39;re making a rational choice that&#39;s win-win for everyone. (The
company earns more money faster, the banks earn interest, the interest gets
paid out to consumers&#39; deposit accounts.)&lt;/p&gt;
&lt;p&gt;Debt is bad when you take out the wrong kind, or you mismanage it, or it has
weird strings attached (hello Venture Debt that requires you to put all your
savings in &lt;a href=&quot;https://www.washingtonpost.com/business/2023/03/15/svb-billions-uninsured-assets-companies/&quot;&gt;one underinsured
place&lt;/a&gt;).
But done right, debt is a way to move faster instead of slower.&lt;/p&gt;
&lt;p&gt;&lt;b&gt;High-interest vs low-interest debt&lt;/b&gt;&lt;/p&gt;
&lt;p&gt;For a consumer, the highest interest rates are for &quot;store&quot; credit cards, the
kinds issued by Best Buy or Macy&#39;s or whatever that only work in that one
store. They aren&#39;t as picky about risk (thus have more defaults) because
it&#39;s the ultimate loyalty programme: it gets people to spend more at their
store instead of other stores, in some cases because it&#39;s the only place
that would issue those people debt in the first place.&lt;/p&gt;
&lt;p&gt;The second-highest interest rate is on a general-purpose credit card like
Visa or Mastercard. They can get away with high interest rates because
they&#39;re also the payment system and so they&#39;re very convenient.&lt;/p&gt;
&lt;p&gt;(Incidentally, when I looked at the stats a decade or so ago, in Canada
credit cards make &lt;em&gt;most&lt;/em&gt; of their income on payment fees because Canadians
are annoyingly persistent about paying off their cards; in the US it&#39;s the
opposite. The rumours are true: Canadians really are more cautious about
spending.)&lt;/p&gt;
&lt;p&gt;If you have a good credit rating, you can get better interest rates on a
bank-issued &quot;line of credit&quot; (LOC) (lower interest rate, but less convenient
than a card). In Canada, one reason many people pay off their credit card
each month is simply that they transfer the balance to a lower-interest LOC.&lt;/p&gt;
&lt;p&gt;Even lower interest rates can be obtained if you&#39;re willing to provide
collateral: most obviously, the equity in your home. This greatly reduces
the risk for the lender because they can repossess and then resell your home
if you don&#39;t pay up. Which is pretty good for them even if you don&#39;t pay,
but what&#39;s better is it makes you much more likely to pay rather
than lose your home.&lt;/p&gt;
&lt;p&gt;Some people argue that you should almost never plan to pay off your
mortgage: typical mortgage interest rates are lower than the rates you&#39;d get
long-term from investing in the S&amp;amp;P. The advice that you should &quot;always buy
the biggest home you can afford&quot; is often perversely accurate, especially if
you believe property values will keep going up. It&#39;s also subject to your risk
tolerance and lock-in preferences.&lt;/p&gt;
&lt;p&gt;What&#39;s the pattern here? Just this: high-interest debt is quick and
convenient but you should pay it off quickly. Sometimes you pay it off just
by converting to longer-term lower-rate debt. Sometimes debt is
collateralized and sometimes it isn&#39;t.&lt;/p&gt;
&lt;p&gt;&lt;b&gt;High-interest and low-interest tech debt&lt;/b&gt;&lt;/p&gt;
&lt;p&gt;Bringing that back to tech debt: a simple kind of high-interest short-term
debt would be committing code without tests or documentation. Yay, it works,
ship it! And truthfully, maybe you should, because the revenue (and customer
feedback) you get from shipping fast can outweigh how much more bug-prone
you made the code in the short term.&lt;/p&gt;
&lt;p&gt;But like all high-interest debt, you should plan to pay it back fast. Tech
debt generally manifests as a slowdown in your development velocity (ie.
overhead on everything else you do), which means fewer features
launched in the medium-long term, which means less revenue and customer
feedback.&lt;/p&gt;
&lt;p&gt;Whoa, weird, right? This short-term high-interest debt both &lt;em&gt;increases&lt;/em&gt;
revenue and feedback rate, and &lt;em&gt;decreases&lt;/em&gt; it. Why?&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;If you take a single pull request (PR) that adds a new feature, and launch
  it without tests or documentation, you will definitely get the benefits of
  that PR sooner.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Every PR you try to write after that, before adding the tests and docs
  (ie. repaying the debt) will be slower because you risk creating
  undetected bugs or running into undocumented edge cases.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;If you take a long time to pay off the debt, the slowdown in future
  launches will outweigh the speedup from the first launch.&lt;/p&gt;
&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;This is exactly how CFOs manage corporate financial debt. Debt is a drain on
your revenues; the thing you did to incur the debt is a boost to your
revenues; if you take too long to pay back the debt, it&#39;s an overall loss.&lt;/p&gt;
&lt;p&gt;CFOs can calculate that. Engineers don&#39;t like to. (Partly because tech debt
is less quantifiable. And partly because engineers are the sort of people who
pay off their loans sooner than they mathematically should, as a matter of
principle.)&lt;/p&gt;
&lt;p&gt;&lt;b&gt;Debt ceilings&lt;/b&gt;&lt;/p&gt;
&lt;p&gt;The US government has imposed a &lt;a href=&quot;https://www.reuters.com/world/us/biden-signs-bill-lifting-us-debt-limit-2023-06-03/&quot;&gt;famously ill-advised debt
ceiling&lt;/a&gt;
on itself, that mainly serves to cause drama and create a great place to
push through unrelated riders that nobody will read, because the bill to
raise the debt ceiling will always pass.&lt;/p&gt;
&lt;p&gt;Real-life debt ceilings are defined by your creditworthiness: banks simply
will not lend you more money if you&#39;ve got so much outstanding debt that
they don&#39;t believe you can handle the interest payments. That&#39;s your credit
limit, or the largest mortgage they&#39;ll let you have.&lt;/p&gt;
&lt;p&gt;Banks take a systematic approach to calculating the debt ceiling for each
client. How much can we lend you so that you take out the biggest loan you
possibly can, thus paying as much interest as possible, without starving to
death or (even worse) missing more than two consecutive payments? Also,
morbidly but honestly, since debts are generally not passed down to your
descendants, they would like you to be able to just barely pay it all off
(perhaps by selling off all your assets) right before you kick the bucket.&lt;/p&gt;
&lt;p&gt;They can math this, they&#39;re good at it. Remember, they don&#39;t want you to pay
it off early. If you have leftover money you might use it to pay down your
debt. That&#39;s no good, because less debt means lower interest payments.
They&#39;d rather you incur even more debt, then use that leftover monthly
income for even bigger interest payments. That&#39;s when you&#39;re trapped.&lt;/p&gt;
&lt;p&gt;The equivalent in tech debt is when you are so far behind that you can
barely keep the system running with no improvements at all; the perfect
balance. If things get worse over time, you&#39;re underwater and will
eventually fail. But if you reach this zen state of perfect equilibrium, you
can keep going forever, running in place. That&#39;s your tech debt ceiling.&lt;/p&gt;
&lt;p&gt;Unlike the banking world, I can&#39;t think of a way to anthropomorphize a
villain who wants you to go that far into debt. Maybe the CEO? I guess maybe
someone who is trying to juice revenues for a well-timed acquisition.
Private Equity firms also specialize in maximizing both financial and
technical debt so they can extract the assets while your company slowly
dies.&lt;/p&gt;
&lt;p&gt;Anyway, both in finance and tech, you want to stay well away from your
credit limit.&lt;/p&gt;
&lt;p&gt;&lt;b&gt;Debt to income ratios&lt;/b&gt;&lt;/p&gt;
&lt;p&gt;There are many imperfect rules of thumb for how much debt is healthy.
(Remember, some debt is very often healthy, and only people who don&#39;t
understand debt rush to pay it all off as fast as they can.)&lt;/p&gt;
&lt;p&gt;One measure is the debt to income ratio (or for governments, the
debt to GDP ratio). The problem with debt-to-income is debt and income are two
different things. The first produces a mostly-predictable repayment cost
spread over an undefined period of time; the other is a
possibly-fast-changing benefit measured annually. One is an amount, the
other is a rate.&lt;/p&gt;
&lt;p&gt;It would be better to measure interest payments as a fraction of revenue. At
least that encompasses the distinction between high-interest and
low-interest loans. And it compares two cashflow rates rather
than the nonsense comparison of a balance sheet measure vs a cashflow
measure. Banks love interest-to-income ratios; that&#39;s why your income level
has such a big impact on your debt ceiling.&lt;/p&gt;
&lt;p&gt;In the tech world, the interest-to-income equivalent is how much time you
spend dealing with overhead compared to building new revenue-generating
features. Again, getting to zero overhead is probably not worth it. I like
this &lt;a href=&quot;https://xkcd.com/1205/&quot;&gt;xkcd explanation&lt;/a&gt; of what is and is not worth
the time:&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;https://imgs.xkcd.com/comics/is_it_worth_the_time.png&quot;&gt;&lt;/p&gt;
&lt;p&gt;Tech debt, in its simplest form, is the time you didn&#39;t spend making tasks
more efficient. When you think of it that way, it&#39;s obvious that zero tech
debt is a silly choice.&lt;/p&gt;
&lt;p&gt;(Note that the interest-to-income ratio in this formulation has nothing to
do with financial income. &quot;Tech income&quot; in our metaphor is feature
development time, where &quot;tech debt&quot; is what eats up your development time.)&lt;/p&gt;
&lt;p&gt;(Also note that by this definition, nowadays tech stacks are so big, complex,
and irritable that every project starts with a giant pile of someone else&#39;s
tech debt on day 1. Enjoy!)&lt;/p&gt;
&lt;p&gt;&lt;b&gt;Debt to equity ratios&lt;/b&gt;&lt;/p&gt;
&lt;p&gt;Interest-to-income ratios compare two items from your cashflow statement.
Debt-to-equity ratios compare two items from your balance sheet. Which means
they, too, are at least not nonsense.&lt;/p&gt;
&lt;p&gt;&quot;Equity&quot; is unfortunately a lot fuzzier than income. How much is your
company worth? Or your product? The potential value of a factory isn&#39;t just
the value of the machines inside it; it&#39;s the amortized income stream you
(or a buyer) could get from continuing to operate that factory. Which means
it includes the built-up human and business expertise needed to operate the
factory.&lt;/p&gt;
&lt;p&gt;And of course, software is even worse; as many of us know but few
businesspeople admit, the value of proprietary software without the people
is zero. This is why you hear about acqui-hires (humans create value even if
they might quit tomorrow) but never about acqui-codes (code without
humans is worthless).&lt;/p&gt;
&lt;p&gt;Anyway, for a software company the &quot;equity&quot; comes from a variety of factors.
In the startup world, Venture Capitalists are -- and I know this is
depressing -- the best we have for valuing company equity. They are, of
course, not very good at it, but they make it up in volume. As software
companies get more mature, valuation becomes more quantifiable and comes
back to expectations for the future cashflow statement.&lt;/p&gt;
&lt;p&gt;Venture Debt is typically weighted heavily on equity (expected future value)
and somewhat less on revenue (ability to pay the interest).&lt;/p&gt;
&lt;p&gt;As the company builds up assets and shows faster growth, the assumed
equity value gets bigger and bigger. In the financial world, that means
people are willing to issue more debt.&lt;/p&gt;
&lt;p&gt;(Over in the consumer world: your home is equity. That&#39;s why you can get a
huge mortgage on a house but your unsecured loan limit is much smaller. So
Venture Debt is like a mortgage.)&lt;/p&gt;
&lt;p&gt;Anyway, back to tech debt: the debt-to-equity ratio is how much tech debt
you&#39;ve taken on compared to the accumulated value, and future growth rate,
of your product quality. If your product is acquiring lots of customers
fast, you can afford to take on more tech debt so you can acquire more
customers even faster.&lt;/p&gt;
&lt;p&gt;What&#39;s weirder is that as the absolute value of product equity increases,
you can take on a larger and larger absolute value of tech debt.&lt;/p&gt;
&lt;p&gt;That feels unexpected. If we&#39;re doing so well, why would we want to take on
&lt;em&gt;more&lt;/em&gt; tech debt? But think of it this way: if your product (thus company)
is really growing that fast, you will have more people to pay down the tech
debt next year than you do now. In theory, you could even take on so much
tech debt this year that your current team can&#39;t even pay the interest...&lt;/p&gt;
&lt;p&gt;...which brings us to leverage. And risk.&lt;/p&gt;
&lt;p&gt;&lt;b&gt;Leverage risk&lt;/b&gt;&lt;/p&gt;
&lt;p&gt;Earlier in this article, I mentioned the popular (and surprisingly, often
correct!) idea that you should &quot;buy the biggest house you can afford.&quot; Why
would I want a bigger house? My house is fine. I have a big enough house.
How is this good advice?&lt;/p&gt;
&lt;p&gt;The answer is the amazing multiplying power of leverage.&lt;/p&gt;
&lt;p&gt;Let&#39;s say housing goes up at 5%/year. (I wish it didn&#39;t because this rate is
fabulously unsustainable. But bear with me.)
And let&#39;s say you have $100k in savings and $100k in annual
income.&lt;/p&gt;
&lt;p&gt;You could pay cash and buy a house for $100k. Woo hoo, no mortgage! And
it&#39;ll go up in value by about $5k/year, which is not bad I guess.&lt;/p&gt;
&lt;p&gt;Or, you could buy a $200k house: a $100k down payment and a $100k mortgage
at, say, 3% (fairly common back in 2021), which means $3k/year
in interest. But your $200k house goes up by 5% = $10k/year. Now you have an
annual gain of $10k - $3k = $7k, much more than the $5k you were making
before, with the same money. Sweet!&lt;/p&gt;
&lt;p&gt;But don&#39;t stop there. If the bank will let you get away with it, why not a
$1M house with a $100k down payment? That&#39;s $1M x 5% = +$50k/year in value,
and $900k x 3% = $27k in interest, so a solid $23k in annual (unrealized)
capital gain. From the same initial bank balance! Omg we&#39;re printing money.&lt;/p&gt;
&lt;p&gt;(Obviously we&#39;re omitting maintenance costs and property tax here. Forgive
me. On the other hand, presumably you&#39;re getting intangible value from
living in a much bigger and fancier house. $AAPL shares don&#39;t have skylights
and rumpus rooms and that weird statue in bedroom number seven.)&lt;/p&gt;
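&lt;p&gt;(If you want to check that arithmetic yourself, here&#39;s a minimal sketch in
Python using the same assumed numbers: a $100k down payment, 5%/year
appreciation, and a 3% mortgage rate. Like the example above, it ignores
maintenance, taxes, and principal repayment.)&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;def annual_gain(house_price, down_payment=100_000,
                appreciation=0.05, mortgage_rate=0.03):
    gain = house_price * appreciation                         # the whole house appreciates
    interest = (house_price - down_payment) * mortgage_rate   # only the loan costs interest
    return gain - interest

for price in (100_000, 200_000, 1_000_000):
    print(price, annual_gain(price))
# 100000 5000.0
# 200000 7000.0
# 1000000 23000.0
&lt;/code&gt;&lt;/pre&gt;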
&lt;p&gt;What&#39;s the catch? Well, the catch is massively increasing risk.&lt;/p&gt;
&lt;p&gt;Let&#39;s say you lose your job and can&#39;t afford interest payments. If you
bought your $100k house with no mortgage, you&#39;re in luck: that house is
yours, free and clear. You might not have food but you have a place to live.&lt;/p&gt;
&lt;p&gt;If you bought the $1M house and have $900k worth of mortgage payments to
keep up, you&#39;re screwed. Get another job or get ready to move out and
disrupt your family and change everything about your standard of living, up
to and possibly including bankruptcy, which we&#39;ll get to in a bit.&lt;/p&gt;
&lt;p&gt;Similarly, let&#39;s imagine that your property value stops increasing, or (less
common in the US for stupid reasons, but common everywhere else) mortgage
rates go up. The leverage effect multiplies your potential losses just like
it multiplies your potential gains.&lt;/p&gt;
&lt;p&gt;Back to tech debt. What&#39;s the analogy?&lt;/p&gt;
&lt;p&gt;Remember that idea I had above, of incurring extra tech debt this year to
keep the revenue growth rolling, and then planning to pay it off next year
with the newer and bigger team? Yeah, that actually works... if you keep
growing. If you estimated your tech debt interest rate correctly. If that
future team materializes. (If you can even motivate that future team to work
on tech debt.) If you&#39;re rational, next year, about whether you borrow more
or not.&lt;/p&gt;
&lt;p&gt;Remember that thing I said about the perfect equilibrium running-in-place state,
when you spend all your time just keeping the machine operating and you have no
time to make it better? How do so many companies get themselves into that
state? In a word, leverage. They guessed wrong. The growth rate fell off,
the new team members didn&#39;t materialize or didn&#39;t ramp up fast enough.&lt;/p&gt;
&lt;p&gt;And if you go past equilibrium, you get the worst case: your tech debt
interest is greater than your tech production (income). Things get worse and
worse and you enter the downward spiral. This is where desperation sets in.
The only remaining option is &lt;strike&gt;bankruptcy&lt;/strike&gt; Tech Debt
Refinancing.&lt;/p&gt;
&lt;p&gt;&lt;b&gt;Refinancing&lt;/b&gt;&lt;/p&gt;
&lt;p&gt;Most people who can&#39;t afford the interest on their loans don&#39;t declare
bankruptcy. The step before that is to make an arrangement with your
creditors to lower your interest payments. Why would they accept such an
agreement? Because if they don&#39;t, you&#39;ll declare bankruptcy, which is annoying
for you but hugely unprofitable for them.&lt;/p&gt;
&lt;p&gt;The tech metaphor for refinancing is &lt;em&gt;premature deprecation&lt;/em&gt;. Yes, people
love both service A and service B. Yes, we are even running both services at
financial breakeven. But they are slipping, slipping, getting a little worse
every month and digging into a hole that I can&#39;t escape. In order to pull
out of this, I have to stop my payments on A so I can pay back more of B; by
then A will be unrecoverably broken. But at least B will live on, to fight
another day.&lt;/p&gt;
&lt;p&gt;Companies do this all the time. Even at huge profitable companies, in some
corners you&#39;ll occasionally find an understaffed project sliding deeper and
deeper into tech debt. Users may still love it, and it may even be net
profitable, but not profitable enough to pay for the additional engineering
time to dig it out. Such a project is destined to die, and the only
question is when. The answer is &quot;whenever some executive finally notices.&quot;&lt;/p&gt;
&lt;p&gt;&lt;b&gt;Bankruptcy&lt;/b&gt;&lt;/p&gt;
&lt;p&gt;The tech bankruptcy metaphor is an easy one: if refinancing doesn&#39;t work and
your tech debt continues to spiral downward, sooner or later your finances
will follow. When you run out of money you declare bankruptcy; what&#39;s
interesting is your tech debt disappears at the same time your financial
debt does.&lt;/p&gt;
&lt;p&gt;This is a really important point. You can incur all the tech debt in the
world, and while your company is still operating, you at least have some
chance of someday paying it back. When your company finally dies, you will
find yourself off the hook; the tech debt never needs to be repaid.&lt;/p&gt;
&lt;p&gt;Okay, for those of us grinding away at code all day, perhaps that sounds
perversely refreshing. But it explains lots of corporate behaviour. The more
desperate a company gets, the less they care about tech debt. &lt;em&gt;Anything&lt;/em&gt; to
turn a profit. They&#39;re not wrong to do so, but you can see how the downward
spiral begins to spiral downward. The more tech debt you incur, the slower
your development goes, and the harder it is to do something productive that
might make you profitable. You might still pull it off! But your luck will
get progressively worse.&lt;/p&gt;
&lt;p&gt;The reverse is also true. When your company is doing well, you have time to
pay back tech debt, or at least to control precisely how much debt you take
on and when. To maintain your interest-to-income ratio or debt-to-equity
ratio at a reasonable level.&lt;/p&gt;
&lt;p&gt;When you see a company managing their tech debt carefully, you see a company
that is planning for the long term rather than a quick exit. Again, that
doesn&#39;t mean paying it all back. It means being careful.&lt;/p&gt;
&lt;p&gt;&lt;b&gt;Student loans that are non-dischargeable in bankruptcy&lt;/b&gt;&lt;/p&gt;
&lt;p&gt;Since we&#39;re here anyway talking about finance, let&#39;s talk about the idiotic
US government policy of guaranteeing student loans, but also not allowing
people to discharge those loans (ie. zero them out) in bankruptcy.&lt;/p&gt;
&lt;p&gt;What&#39;s the effect of this? Well, of course, banks are extremely eager to
give these loans out to anybody, at any scale, as fast as they can, because
they can&#39;t lose. They have all the equity of the US government to back them
up. The debt-to-equity ratio is effectively zero.&lt;/p&gt;
&lt;p&gt;And of course, people who don&#39;t understand finance (which they don&#39;t teach
you until university; catch-22!) take on lots of these loans in the hope of
making money in the future.&lt;/p&gt;
&lt;p&gt;Since anyone who wants to go to university can get a student loan,
American universities keep raising their rates until they find the maximum amount
that lenders are willing to lend (unlimited!) or foolish borrowers are
willing to borrow in the name of the American Dream (so far we haven&#39;t found
the limit).&lt;/p&gt;
&lt;p&gt;Where was I? Oh right, tech metaphors.&lt;/p&gt;
&lt;p&gt;Well, there are two parts here. First, unlimited access to money. Well, the
tech world has had plenty of that, prior to the 2022 crash anyway. The
result is they hired way too many engineers (students) who did a lot of dumb
stuff (going to school) and incurred a lot of tech debt (student loans) that
they promised to pay back later when their team got bigger (they earned
their Bachelor&#39;s degree and got a job), which unfortunately didn&#39;t
materialize. Oops. They are worse off than if they had skipped all that.&lt;/p&gt;
&lt;p&gt;Second, inability to discharge the debt in bankruptcy. Okay, you got me.
Maybe we&#39;ve come to the end of our analogy. Maybe US government policies
actually, and this is quite an achievement, manage to be even dumber than
tech company management. In this one way. Maybe.&lt;/p&gt;
&lt;p&gt;OR MAYBE YOU &lt;a href=&quot;/log/20091224&quot;&gt;OPEN SOURCED WVDIAL&lt;/a&gt; AND PEOPLE STILL EMAIL YOU
FOR HELP DECADES AFTER YOUR FIRST STARTUP IS LONG GONE.&lt;/p&gt;
&lt;p&gt;Um, sorry for that outburst. I have no idea where that came from.&lt;/p&gt;
&lt;p&gt;&lt;b&gt;Bonus note: bug bankruptcy&lt;/b&gt;&lt;/p&gt;
&lt;p&gt;While we&#39;re here exploring financial metaphors, I might as well say
something about bug bankruptcy. Although I &lt;a href=&quot;/log/20171213&quot;&gt;have been known to make fun of
bug bankruptcy&lt;/a&gt;, it too is an excellent metaphor, but only if
you take it far enough.&lt;/p&gt;
&lt;p&gt;For those who haven&#39;t heard of this concept, bug bankruptcy happens when
your bug tracking database is so full of bugs that you give up and delete
them all and start over (&quot;declare bankruptcy&quot;).&lt;/p&gt;
&lt;p&gt;Like financial bankruptcy, it is very tempting: I have this big pile of
bills. Gosh, it is a big pile. Downright daunting, if we&#39;re honest. Chances
are, if I opened all these bills, I would find out that I owe more money
than I have, and moreover, next month a bunch more bills will come and I
won&#39;t be able to pay them either and this is hopeless. That would be
stressful. My solution, therefore, is to throw all the bills in the
dumpster, call up my friendly neighbourhood bankruptcy trustee, and
conveniently discharge all my debt once and for all.&lt;/p&gt;
&lt;p&gt;Right?&lt;/p&gt;
&lt;p&gt;Well, not so fast, buddy. Bankruptcy has consequences. First of all, it&#39;s
kind of annoying to arrange legally. Secondly, it sits on your financial
records for like 7 years afterwards, during which time probably nobody will
be willing to issue you any loans, because you&#39;re empirically the kind of
person who does not pay back their loans.&lt;/p&gt;
&lt;p&gt;And that, my friends, is also how bug bankruptcy works. Although the process
for declaring it is easier -- no lawyers or trustees required! -- the
long-term destruction of trust is real. If you run a project in which a lot
of people spent a bunch of effort filing and investigating bugs (ie. lent
you their time in the hope that you&#39;ll pay it back by fixing the bugs
later), and you just close them all wholesale, you can expect that those
people will eventually stop filing bugs. Which, you know, admittedly feels
better, just like the hydro company not sending you bills anymore feels
better until winter comes and your heater doesn&#39;t work and you can&#39;t figure
out why and you eventually remember &quot;oh, I think someone said this might
happen but I forget the details.&quot;&lt;/p&gt;
&lt;p&gt;Anyway, yes, you can do it. But refinancing is better.&lt;/p&gt;
&lt;p&gt;&lt;b&gt;Email bankruptcy&lt;/b&gt;&lt;/p&gt;
&lt;p&gt;Email bankruptcy is similar to bug bankruptcy, with one important
distinction: nobody ever expected you to answer your email anyway. I&#39;m
honestly not sure why people keep sending them.&lt;/p&gt;
&lt;p&gt;ESPECIALLY EMAILS ABOUT WVDIAL where does that voice keep coming from&lt;/p&gt;</description>
        </item>
    </channel>
</rss>
Raw text
<rss version="2.0">

<channel>
  <title>apenwarr</title>
  <description>apenwarr - NITLog</description>
  <link>https://apenwarr.ca/log/</link>
  <language>en-ca</language>
  <generator>PyNITLog</generator>
  <docs>http://blogs.law.harvard.edu/tech/rss</docs>

  
  
  <item>
    <title>
      Systems design 3: LLMs and the semantic revolution
    </title>
    <pubDate>Thu, 20 Nov 2025 14:19:14 +0000</pubDate>
    <link>https://apenwarr.ca/log/20251120</link>
    
    <guid isPermaLink="true">https://apenwarr.ca/log/20251120</guid>
    
    <description>
    &lt;p&gt;Long ago in the 1990s when I was in high school, my chemistry+physics
teacher pulled me aside. &quot;Avery, you know how the Internet works, right? I
have a question.&quot;&lt;/p&gt;
&lt;p&gt;I now know the correct response to that was, &quot;Does anyone &lt;em&gt;really&lt;/em&gt; know how
the Internet works?&quot; But as a naive young high schooler I did not have that
level of self-awareness. (Decades later, as a CEO, that&#39;s my answer to
almost everything.)&lt;/p&gt;
&lt;p&gt;Anyway, he asked his question, and it was simple but deep. How do they make
all the computers connect?&lt;/p&gt;
&lt;p&gt;We can&#39;t even get the world to agree on 60 Hz vs 50 Hz, 120V vs 240V, or
which kind of physical power plug to use. Communications equipment uses way
more frequencies, way more voltages, way more plug types. Phone companies
managed to federate with each other, eventually, barely, but the ring tones
were different everywhere, there was pulse dialing and tone dialing, and
some of them &lt;em&gt;still&lt;/em&gt; charge $3/minute for international long distance, and
connections take a long time to establish and humans seem to be involved in
suspiciously many places when things get messy, and every country has a
different long-distance dialing standard and phone number format.&lt;/p&gt;
&lt;p&gt;So Avery, he said, now they&#39;re telling me every computer in the world can
connect to every other computer, in milliseconds, for free, between Canada
and France and China and Russia. And they all use a single standardized
address format, and then you just log in and transfer files and stuff? How?
How did they make the whole world cooperate? And who?&lt;/p&gt;
&lt;p&gt;When he asked that question, it was a formative moment in my life that I&#39;ll
never forget, because as an early member of what would be the first Internet
generation…  I Had Simply Never Thought of That.&lt;/p&gt;
&lt;p&gt;I mean, I had to stop and think for a second. Wait, is protocol
standardization even a hard problem? Of course it is. Humans can&#39;t agree on
anything. We can&#39;t agree on a unit of length or the size of a pint, or which
side of the road to drive on. Humans in two regions of Europe no farther
apart than Thunder Bay and Toronto can&#39;t understand each other&#39;s speech. But
this Internet thing just, kinda, worked.&lt;/p&gt;
&lt;p&gt;&quot;There&#39;s… a layer on top,&quot; I uttered, unsatisfyingly. Nobody had taught me
yet that the OSI stack model existed, let alone that it was at best a weak
explanation of reality.&lt;/p&gt;
&lt;p&gt;&quot;When something doesn&#39;t talk to something else, someone makes an adapter.
Uh, and some of the adapters are just programs rather than physical things.
It&#39;s not like everyone in the world agrees. But as soon as one person makes
an adapter, the two things come together.&quot;&lt;/p&gt;
&lt;p&gt;I don&#39;t think he was impressed with my answer. Why would he be? Surely
nothing so comprehensively connected could be engineered with no central
architecture, by a loosely-knit cult of mostly-volunteers building an
endless series of whimsical half-considered &quot;adapters&quot; in their basements
and cramped university tech labs. Such a creation would be a monstrosity,
just as likely to topple over as to barely function.&lt;/p&gt;
&lt;p&gt;I didn&#39;t try to convince him, because honestly, how could I know? But the
question has dominated my life ever since.&lt;/p&gt;
&lt;p&gt;When things don&#39;t connect, why don&#39;t they connect? When they do, why? How?
…and who?&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Postel&#39;s Law&lt;/strong&gt;&lt;/p&gt;
&lt;p&gt;The closest clue I&#39;ve found is this thing called Postel&#39;s Law, one of the
foundational principles of the Internet. It was best stated by one of the
founders of the Internet, Jon Postel. &quot;Be conservative in what you send, and
liberal in what you accept.&quot;&lt;/p&gt;
&lt;p&gt;What it means to me is, if there&#39;s a standard, do your best to follow it,
when you&#39;re sending. And when you&#39;re receiving, uh, assume the best
intentions of your counterparty and do your best and if that doesn&#39;t work,
guess.&lt;/p&gt;
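&lt;p&gt;(If you want that in code: here&#39;s a rough Python sketch, with made-up
helper names and nothing protocol-specific about it. The sender emits exactly
one canonical spelling; the receiver trims, lowercases, tolerates the common
variants, and if all else fails, guesses.)&lt;/p&gt;
&lt;pre&gt;
TRUTHY = {&#39;yes&#39;, &#39;y&#39;, &#39;true&#39;, &#39;t&#39;, &#39;1&#39;, &#39;on&#39;}
FALSY = {&#39;no&#39;, &#39;n&#39;, &#39;false&#39;, &#39;f&#39;, &#39;0&#39;, &#39;off&#39;}

def send_flag(value):
    # Conservative in what you send: one canonical spelling, always.
    return &#39;true&#39; if value else &#39;false&#39;

def receive_flag(raw, default=False):
    # Liberal in what you accept: strip whitespace, ignore case,
    # tolerate common variants... and if that fails, guess.
    token = str(raw).strip().lower()
    if token in TRUTHY:
        return True
    if token in FALSY:
        return False
    return default
&lt;/pre&gt;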
&lt;p&gt;A rephrasing I use sometimes is, &quot;It takes two to miscommunicate.&quot;
Communication works best and most smoothly if you have a good listener and a
clear speaker, sharing a language and context. But it can still bumble along
successfully if you have a poor speaker with a great listener, or even a
great speaker with a mediocre listener. Sometimes you have to say the same
thing five ways before it gets across (wifi packet retransmits), or ask way
too many clarifying questions, but if one side or the other is diligent
enough, you can almost always make it work.&lt;/p&gt;
&lt;p&gt;This asymmetry is key to all high-level communication. It makes network bugs
much less severe. Without Postel&#39;s Law, triggering a bug in the sender would
break the connection; so would triggering a bug in the receiver. With
Postel&#39;s Law, we acknowledge from the start that there are always bugs and
we have twice as many chances to work around them. Only if you trigger both
sets of bugs at once is the flaw fatal.&lt;/p&gt;
&lt;p&gt;…So okay, if you&#39;ve used the Internet, you&#39;ve probably observed that fatal
connection errors are nevertheless pretty common. But that misses how
&lt;em&gt;incredibly much more common&lt;/em&gt; they would be in a non-Postel world. That
world would be the one my physics teacher imagined, where nothing ever works
and it all topples over.&lt;/p&gt;
&lt;p&gt;And we know that&#39;s true because we&#39;ve tried it. Science! Let us digress.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;XML&lt;/strong&gt;&lt;/p&gt;
&lt;p&gt;We had the Internet (&quot;OSI Layer 3&quot;) mostly figured out by the time my era
began in the late 1900s, but higher layers of the stack still had work to
do. It was the early days of the web. We had these newfangled hypertext
(&quot;HTML&quot;) browsers that would connect to a server, download some stuff, and
then try their best to render it.&lt;/p&gt;
&lt;p&gt;Web browsers are and have always been an epic instantiation of Postel&#39;s Law.
From the very beginning, they assumed that the server (content author) had
absolutely no clue what they were doing and did their best to apply some
kind of meaning on top, despite every indication that this was a lost cause.
List items that never end? Sure. Tags you&#39;ve never heard of? Whatever.
Forgot some semicolons in your javascript? I&#39;ll interpolate some. Partially
overlapping italics and bold? Leave it to me. No indication what language or
encoding the page is in? I&#39;ll just guess.&lt;/p&gt;
&lt;p&gt;The evolution of browsers gives us some insight into why Postel&#39;s Law is a
law and not just, you know, Postel&#39;s Advice. The answer is: competition. It
works like this. If your browser interprets someone&#39;s mishmash subjectively
better than another browser, your browser wins.&lt;/p&gt;
&lt;p&gt;I think economists call this an iterated prisoner&#39;s dilemma. Over and over,
people write web pages (defect) and browsers try to render them (defect) and
absolutely nobody actually cares what the HTML standard says (stays loyal).
Because if there&#39;s a popular page that&#39;s wrong and you render it &quot;right&quot; and
it doesn&#39;t work? Straight to jail.&lt;/p&gt;
&lt;p&gt;(By now almost all the evolutionary lines of browsers have been sent to
jail, one by one, and the HTML standard is effectively whatever Chromium and
Safari say it is. Sorry.)&lt;/p&gt;
&lt;p&gt;This law offends engineers to the deepness of their soul. We went through a
period where loyalists would run their pages through &quot;validators&quot; and
proudly add a logo to the bottom of their page saying how valid their HTML
was. Browsers, of course, didn&#39;t care and continued to try their best.&lt;/p&gt;
&lt;p&gt;Another valiant effort was the definition of &quot;quirks mode&quot;: a legacy
rendering mode meant to document, normalize, and push aside all the legacy
wonko interpretations of old web pages. It was paired with a new,
standards-compliant rendering mode that everyone was supposed to agree on,
starting from scratch with an actual written spec and tests this time, and
public shaming if you made a browser that did it wrong. Of course, outside
of browser academia, nobody cares about the public shaming and everyone
cares if your browser can render the popular web sites, so there are still
plenty of quirks outside quirks mode. It&#39;s better and it was well worth the
effort, but it&#39;s not all the way there. It never can be.&lt;/p&gt;
&lt;p&gt;We can be sure it&#39;s not all the way there because there was another exciting
development, HTML Strict (and its fancier twin, XHTML), which was meant to
be the same thing, but with a special feature. Instead of sending browsers
to jail for rendering wrong pages wrong, we&#39;d send page authors to jail for
writing wrong pages!&lt;/p&gt;
&lt;p&gt;To mark your web page as HTML Strict was a vote against the iterated
prisoner&#39;s dilemma and Postel&#39;s Law. No, your vote said. No more. We cannot
accept this madness. We are going to be Correct. I certify this page is
correct. If it is not correct, you must sacrifice me, not all of society. My
honour demands it.&lt;/p&gt;
&lt;p&gt;Anyway, many page authors were thus sacrificed and now nobody uses HTML
Strict. Nobody wants to do tech support for a web page that asks browsers to
crash when parsing it, when you can just… not do that.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Excuse me, the above XML section didn&#39;t have any XML&lt;/strong&gt;&lt;/p&gt;
&lt;p&gt;Yes, I&#39;m getting to that. (And you&#39;re soon going to appreciate that meta
joke about schemas.)&lt;/p&gt;
&lt;p&gt;In parallel with that dead branch of HTML, a bunch of people had realized
that, more generally, HTML-like languages (technically SGML-like languages)
had turned out to be a surprisingly effective way to build interconnected
data systems.&lt;/p&gt;
&lt;p&gt;In retrospect we now know that the reason for HTML&#39;s resilience is Postel&#39;s
Law. It&#39;s simply easier to fudge your way through parsing incorrect
hypertext, than to fudge your way through parsing a Microsoft Word or Excel
file&#39;s hairball of binary OLE streams, which famously even Microsoft at one
point lost the knowledge of how to parse. But, that Postel&#39;s Law connection
wasn&#39;t really understood at the time.&lt;/p&gt;
&lt;p&gt;Instead we had a different hypothesis: &quot;separation of structure and
content.&quot; Syntax and semantics. Writing software to deal with structure is
repetitive overhead, and content is where the money is. Let&#39;s automate away
the structure so you can spend your time on the content: semantics.&lt;/p&gt;
&lt;p&gt;We can standardize the syntax with a single Extensible Markup Language
(XML). Write your content, then &quot;mark it up&quot; by adding structure right in
the doc, just like we did with plaintext human documents. Data, plus
self-describing metadata, all in one place. Never write a parser again!&lt;/p&gt;
&lt;p&gt;Of course, with 20/20 hindsight (or now 2025 hindsight), this is laughable.
Yes, we now have XML parser libraries. If you&#39;ve ever tried to use one, you
will find they indeed produce parse trees automatically… if you&#39;re lucky. If
you&#39;re not lucky, they produce a stream of &quot;tokens&quot; and leave it to you to
figure out how to arrange it in a tree, for reasons involving streaming,
performance, memory efficiency, and so on. Basically, if you use XML you now
have to &lt;em&gt;deeply&lt;/em&gt; care about structure, perhaps more than ever, but you also
have to include some giant external parsing library that, left in its normal
mode, &lt;a href=&quot;https://cheatsheetseries.owasp.org/cheatsheets/XML_External_Entity_Prevention_Cheat_Sheet.html&quot;&gt;might spontaneously start making a lot of uncached HTTP requests that
can also exploit remote code execution vulnerabilities haha
oops&lt;/a&gt;.&lt;/p&gt;
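&lt;p&gt;(A minimal Python sketch of the &quot;lucky&quot; tree-shaped case, using a
hypothetical invoice.xml; the OWASP advice linked above boils down to using a
hardened parser such as the defusedxml package whenever the input is
untrusted.)&lt;/p&gt;
&lt;pre&gt;
import xml.etree.ElementTree as ET

tree = ET.parse(&#39;invoice.xml&#39;)   # hypothetical input file
root = tree.getroot()
print(root.findtext(&#39;total&#39;))    # getting the tree is the easy part;
                                 # deciding what &#39;total&#39; means is not
&lt;/pre&gt;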
&lt;p&gt;If you&#39;ve ever taken a parser class, or even if you&#39;ve just barely tried to
write a parser, you&#39;ll know the truth: the value added by outsourcing
&lt;em&gt;parsing&lt;/em&gt; (or in some cases only tokenization) is not a lot. This is because
almost all the trouble of document processing (or compiling) is the
&lt;em&gt;semantic&lt;/em&gt; layer, the part where you make sense of the parse tree. The part
where you just read a stream of characters into a data structure is the
trivial, well-understood first step.&lt;/p&gt;
&lt;p&gt;Now, semantics is where it gets interesting. XML was all about separating
syntax from semantics. And they did some pretty neat stuff with that
separation, in a computer science sense. XML is neat because it&#39;s such a
regular and strict language that you can completely &lt;em&gt;validate&lt;/em&gt; the syntax
(text and tags) without knowing what any of the tags &lt;em&gt;mean&lt;/em&gt; or which tags
are intended to be valid at all.&lt;/p&gt;
&lt;p&gt;…aha! Did someone say &lt;em&gt;validate?!&lt;/em&gt; Like those old HTML validators we
talked about? Oh yes. Yes! And this time the validation will be completely
strict and baked into every implementation from day 1. And, the language
syntax itself will be so easy and consistent to validate (unlike SGML and
HTML, which are, in all fairness, bananas) that nobody can possibly screw it
up.&lt;/p&gt;
&lt;p&gt;A layer on top of this basic, highly validatable XML, was a thing called XML
Schemas. These were documents (mysteriously not written in XML) that
described which tags were allowed in which places in a certain kind of
document. Not only could you parse and validate the basic XML syntax, you
could also then validate its XML schema as a separate step, to be totally
sure that every tag in the document was allowed where it was used, and
present if it was required. And if not? Well, straight to jail. We all
agreed on this, everyone. Day one. No exceptions. Every document validates.
Straight to jail.&lt;/p&gt;
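&lt;p&gt;(The extra validation step looks deceptively small on paper. A sketch,
assuming the Python lxml library and hypothetical invoice.xml / invoice.xsd
files:)&lt;/p&gt;
&lt;pre&gt;
from lxml import etree

schema = etree.XMLSchema(etree.parse(&#39;invoice.xsd&#39;))  # load someone&#39;s custom schema
doc = etree.parse(&#39;invoice.xml&#39;)                      # basic syntax validation happens here
if not schema.validate(doc):                          # schema validation, as a separate step
    print(schema.error_log)                           # ...straight to jail
&lt;/pre&gt;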
&lt;p&gt;Anyway XML schema validation became an absolute farce. Just parsing or
understanding, let alone writing, the awful schema file format is an
unpleasant ordeal. To say nothing of complying with the schema, or (heaven
forbid) obtaining a copy of someone&#39;s custom schema and loading it into the
validator at the right time.&lt;/p&gt;
&lt;p&gt;The core XML syntax validation was easy enough to do while parsing.
Unfortunately, in a second violation of Postel&#39;s Law, almost no software
that &lt;em&gt;outputs&lt;/em&gt; XML runs it through a validator before sending. I mean, why
would they, the language is highly regular and easy to generate and thus the
output is already perfect. …Yeah, sure.&lt;/p&gt;
&lt;p&gt;Anyway we all use JSON now.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;JSON&lt;/strong&gt;&lt;/p&gt;
&lt;p&gt;Whoa, wait! I wasn&#39;t done!&lt;/p&gt;
&lt;p&gt;This is the part where I note, for posterity&#39;s sake, that XML became a
decade-long fad in the early 2000s that justified billions of dollars of
software investment. None of XML&#39;s technical promises played out; it is a
stain on the history of the computer industry. But, a lot of legacy software
got un-stuck because of those billions of dollars, and so we did make
progress.&lt;/p&gt;
&lt;p&gt;What was that progress? Interconnection.&lt;/p&gt;
&lt;p&gt;Before the Internet, we kinda didn&#39;t really need to interconnect software
together. I mean, we sort of did, like cut-and-pasting between apps on
Windows or macOS or X11, all of which were surprisingly difficult little
mini-Postel&#39;s Law protocol adventures in their own right and remain quite
useful when they work (&lt;a href=&quot;https://news.ycombinator.com/item?id=31356896&quot;&gt;except &quot;paste formatted text,&quot; wtf are you people
thinking&lt;/a&gt;). What makes
cut-and-paste possible is top-down standards imposed by each operating
system vendor.&lt;/p&gt;
&lt;p&gt;If you want the same kind of thing on the open Internet, ie. the ability to
&quot;copy&quot; information out of one server and &quot;paste&quot; it into another, you need
&lt;em&gt;some&lt;/em&gt; kind of standard. XML was a valiant effort to create one. It didn&#39;t
work, but it was valiant.&lt;/p&gt;
&lt;p&gt;Whereas all that money investment &lt;em&gt;did&lt;/em&gt; work. Companies spent billions of
dollars to update their servers to publish APIs that could serve not just
human-formatted HTML, but also something machine-readable. The great
innovation was not XML per se, it was serving data over HTTP that wasn&#39;t
always HTML. That was a big step, and didn&#39;t become obvious until afterward.&lt;/p&gt;
&lt;p&gt;The most common clients of HTTP were web browsers, and web browsers only
knew how to parse two things: HTML and javascript. To a first approximation,
valid XML is &quot;valid&quot; (please don&#39;t ask the validator) HTML, so we could do
that at first, and there were some Microsoft extensions. Later, after a few
billions of dollars, true standardized XML parsing arrived in browsers.
Similarly, to a first approximation, valid JSON is valid javascript, which
woo hoo, that&#39;s a story in itself (you could parse it with eval(), tee hee)
but that&#39;s why we got here.&lt;/p&gt;
&lt;p&gt;JSON (minus the rest of javascript) is a vastly simpler language than XML.
It&#39;s easy to consistently parse (&lt;a href=&quot;https://github.com/tailscale/hujson&quot;&gt;other than that pesky trailing
comma&lt;/a&gt;); browsers already did. It
represents only (a subset of) the data types normal programming languages
already have, unlike XML&#39;s weird mishmash of single attributes, multiply
occurring attributes, text content, and CDATA. It&#39;s obviously a tree and
everyone knows how that tree will map into their favourite programming
language. It inherently works with unicode and only unicode. You don&#39;t need
cumbersome and duplicative &quot;closing tags&quot; that double the size of every
node. And best of all, no guilt about skipping that overcomplicated and
impossible-to-get-right schema validator, because, well, nobody liked
schemas anyway so nobody added them to JSON
(&lt;a href=&quot;https://json-schema.org/&quot;&gt;almost&lt;/a&gt;).&lt;/p&gt;
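&lt;p&gt;(The whole Python experience, trailing-comma gotcha included, is a few
lines; a sketch:)&lt;/p&gt;
&lt;pre&gt;
import json

doc = json.loads(&#39;{&quot;sku&quot;: 42, &quot;tags&quot;: [&quot;sale&quot;, &quot;clearance&quot;]}&#39;)
print(doc[&#39;tags&#39;][0])   # plain dicts, lists, strings, and numbers come out

# json.loads(&#39;{&quot;sku&quot;: 42,}&#39;)   # the pesky trailing comma: JSONDecodeError
&lt;/pre&gt;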
&lt;p&gt;Today, if you look at APIs you need to call, you can tell which ones were a
result of the $billions invested in the 2000s, because it&#39;s all XML. And you
can tell which came in the 2010s and later after learning some hard lessons,
because it&#39;s all JSON. But either way, the big achievement is you can call
them all from javascript. That&#39;s pretty good.&lt;/p&gt;
&lt;p&gt;(Google is an interesting exception: they invented and used protobuf during
the same time period because they disliked XML&#39;s inefficiency, they did like
schemas, and they had the automated infrastructure to make schemas actually
work (mostly, after more hard lessons). But it mostly didn&#39;t spread beyond
Google… maybe because it&#39;s hard to do from javascript.)&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Blockchain&lt;/strong&gt;&lt;/p&gt;
&lt;p&gt;The 2010s were another decade of massive multi-billion dollar tech
investment. Once again it was triggered by an overwrought boondoggle
technology, and once again we benefited from systems finally getting updated
that really needed to be updated.&lt;/p&gt;
&lt;p&gt;Let&#39;s leave aside cryptocurrencies (which although used primarily for crime,
at least demonstrably have a functioning use case, ie. crime) and look at
the more general form of the technology.&lt;/p&gt;
&lt;p&gt;Blockchains in general make the promise of a &quot;distributed ledger&quot; which
allows everyone the ability to make claims and then later validate other
people&#39;s claims. The claims that &quot;real&quot; companies invested in were meant to
be about manufacturing, shipping, assembly, purchases, invoices, receipts,
ownership, and so on. What&#39;s the pattern? That&#39;s the stuff of businesses
doing business with other businesses. In other words, data exchange. Data
exchange is exactly what XML didn&#39;t really solve (although progress was made
by virtue of the dollars invested) in the previous decade.&lt;/p&gt;
&lt;p&gt;Blockchain tech was a more spectacular boondoggle than XML for a few
reasons. First, it didn&#39;t even have a purpose you could explain. Why do we
even need a purely distributed system for this? Why can&#39;t we just trust a
third party auditor? Who even wants their entire supply chain (including
number of widgets produced and where each one is right now) to be visible to
the whole world? What is the problem we&#39;re trying to solve with that?&lt;/p&gt;
&lt;p&gt;…and you know there really was no purpose, because after all the huge
 investment to rewrite all that stuff, which was itself valuable work, we
 simply dropped the useless blockchain part and then we were fine. I don&#39;t
 think even the people working on it felt like they needed a real
 distributed ledger. They just needed an &lt;em&gt;updated&lt;/em&gt; ledger and a budget to
 create one. If you make the &quot;ledger&quot; module pluggable in your big fancy
 supply chain system, you can later drop out the useless &quot;distributed&quot;
 ledger and use a regular old ledger. The protocols, the partnerships, the
 databases, the supply chain, and all the rest can stay the same.&lt;/p&gt;
&lt;p&gt;In XML&#39;s defense, at least it was not worth the effort to rip out once the
world came to its senses.&lt;/p&gt;
&lt;p&gt;Another interesting similarity between XML and blockchains was the computer
science appeal. A particular kind of person gets very excited about
&lt;em&gt;validation&lt;/em&gt; and &lt;em&gt;verifiability.&lt;/em&gt; Both times, the whole computer industry
followed those people down into the pits of despair and when we finally
emerged… still no validation, still no verifiability, still didn&#39;t matter.
Just some computers communicating with each other a little better than they
did before.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;LLMs&lt;/strong&gt;&lt;/p&gt;
&lt;p&gt;In the 2020s, our industry fad is LLMs. I&#39;m going to draw some comparisons
here to the last two fads, but there are some big differences too.&lt;/p&gt;
&lt;p&gt;One similarity is the computer science appeal: so much math! Just the
matrix sizes alone are a technological marvel the likes of which we have
never seen. Beautiful. Colossal. Monumental. An inspiration to nerds
everywhere.&lt;/p&gt;
&lt;p&gt;But a big difference is verification and validation. If there is one thing
LLMs absolutely are not, it&#39;s &lt;em&gt;verifiable.&lt;/em&gt; LLMs are the flakiest thing the
computer industry has ever produced! So far. And remember, this is the
industry that brought you HTML rendering.&lt;/p&gt;
&lt;p&gt;LLMs are an almost cartoonishly amplified realization of Postel&#39;s Law. They
write human grammar perfectly, or almost perfectly, or when they&#39;re not
perfect it&#39;s a bug and we train them harder. And, they can receive just
about any kind of gibberish and turn it into a data structure. In other
words, they&#39;re conservative in what they send and liberal in what they
accept.&lt;/p&gt;
&lt;p&gt;LLMs also solve the syntax problem, in the sense that they can figure out
how to transliterate (convert) basically any file syntax into any other.
Modulo flakiness. But if you need a CSV in the form of a limerick or a
quarterly financial report formatted as a mysql dump, sure, no problem, make
it so.&lt;/p&gt;
&lt;p&gt;In theory we already had syntax solved though. XML and JSON did that
already. We were even making progress interconnecting old school company
supply chain stuff the hard way, thanks to our nominally XML- and
blockchain-investment decades. We had to do every interconnection by hand –
by writing an adapter – but we could do it.&lt;/p&gt;
&lt;p&gt;What&#39;s really new is that LLMs address &lt;em&gt;semantics.&lt;/em&gt; Semantics are the
biggest remaining challenge in connecting one system to another. If XML
solved syntax, that was the first 10%. Semantics are the last 90%. When I
want to copy from one database to another, how do I map the fields? When I
want to scrape a series of uncooperative web pages and turn it into a table
of products and prices, how do I turn that HTML into something structured?
(Predictably &lt;a href=&quot;https://microformats.org/&quot;&gt;microformats&lt;/a&gt;, aka schemas, did not
work out.) If I want to query a database (or join a few disparate
databases!) using some language that isn&#39;t SQL, what options do I have?&lt;/p&gt;
&lt;p&gt;LLMs can do it all.&lt;/p&gt;
&lt;p&gt;Listen, we can argue forever about whether LLMs &quot;understand&quot; things, or will
achieve anything we might call intelligence, or will take over the world and
eradicate all humans, or are useful assistants, or just produce lots of text
sludge that will certainly clog up the web and social media, or will also be
able to filter the sludge, or what it means for capitalism that we willingly
invented a machine we pay to produce sludge that we also pay to remove the
sludge.&lt;/p&gt;
&lt;p&gt;But what we can&#39;t argue is that LLMs interconnect things. Anything. To
anything. Whether you like it or not. Whether it&#39;s bug free or not (spoiler:
it&#39;s not). Whether it gets the right answer or not (spoiler: erm…).&lt;/p&gt;
&lt;p&gt;This is the thing we have gone through at least two decades of hype cycles
desperately chasing. (Three, if you count java &quot;write once run anywhere&quot; in
the 1990s.) It&#39;s application-layer interconnection, the holy grail of the
Internet.&lt;/p&gt;
&lt;p&gt;And this time, it actually works! (mostly)&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;The curse of success&lt;/strong&gt;&lt;/p&gt;
&lt;p&gt;LLMs aren&#39;t going away. Really we should coin a term for this use case, call
it &quot;b2b AI&quot; or something. For this use case, LLMs work. And they&#39;re still
getting better and the precision will improve with practice. For example,
imagine asking an LLM to write a data translator in some conventional
programming language, instead of asking it to directly translate a dataset
on its own. We&#39;re still at the beginning.&lt;/p&gt;
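&lt;p&gt;(A sketch of that pattern, where ask_llm is a hypothetical stand-in for
whatever chat-completion client you actually use, taking a prompt string and
returning the model&#39;s text reply; none of this is prescribed, it&#39;s just the
shape of the idea.)&lt;/p&gt;
&lt;pre&gt;
def build_translator(schema_a, schema_b, sample_rows, ask_llm):
    # Ask for a reusable translator function instead of a one-off translation:
    # the flaky step runs once, and the generated code can be reviewed and tested.
    prompt = (
        &#39;Write a Python function translate(row) that converts one record &#39;
        &#39;from schema A to schema B. Return only the code.\n&#39;
        &#39;Schema A: &#39; + schema_a + &#39;\n&#39;
        &#39;Schema B: &#39; + schema_b + &#39;\n&#39;
        &#39;Example rows: &#39; + repr(sample_rows)
    )
    return ask_llm(prompt)
&lt;/pre&gt;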
&lt;p&gt;But, this use case, which I predict is the big one, isn&#39;t what we expected.
We expected LLMs to write poetry or give strategic advice or whatever. We
didn&#39;t expect them to call APIs and immediately turn around and use what they
learned to call other APIs.&lt;/p&gt;
&lt;p&gt;After 30 years of trying and failing to connect one system to another, we
now have a literal universal translator. Plug it into any two things and
it&#39;ll just go, for better or worse, no matter how confused it becomes. And
everyone is doing it, fast, often with a corporate mandate to do it even
faster.&lt;/p&gt;
&lt;p&gt;This kind of scale and speed of (successful!) rollout is unprecedented,
even by the Internet itself, and especially in the glacially slow world of
enterprise system interconnections, where progress grinds to a halt once a
decade only to be finally dislodged by the next misguided technology wave.
Nobody was prepared for it, so nobody was prepared for the consequences.&lt;/p&gt;
&lt;p&gt;One of the odd features of Postel&#39;s Law is it&#39;s irresistible. Big Central
Infrastructure projects rise and fall with funding, but Postel&#39;s Law
projects are powered by love. A little here, a little there, over time. One
more person plugging one more thing into one more other thing. We did it
once with the Internet, overcoming all the incompatibilities at OSI layers 1
and 2. It subsumed, it is still subsuming, everything.&lt;/p&gt;
&lt;p&gt;Now we&#39;re doing it again at the application layer, the information layer.
And just like we found out when we connected all the computers together the
first time, naively hyperconnected networks make it easy for bad actors to
spread and disrupt at superhuman speeds. We had to invent firewalls, NATs,
TLS, authentication systems, two-factor authentication systems,
phishing-resistant two-factor authentication systems, methodical software
patching, CVE tracking, sandboxing, antivirus systems, EDR systems, DLP
systems, everything. We&#39;ll have to do it all again, but faster and
different.&lt;/p&gt;
&lt;p&gt;Because this time, it&#39;s all software.&lt;/p&gt;
    </description>
  </item>
  
  
  <item>
    <title>
      Billionaire math
    </title>
    <pubDate>Fri, 11 Jul 2025 16:18:52 +0000</pubDate>
    <link>https://apenwarr.ca/log/20250711</link>
    
    <guid isPermaLink="true">https://apenwarr.ca/log/20250711</guid>
    
    <description>
    &lt;p&gt;I have a friend who exited his startup a few years ago and is now rich. How
rich is unclear. One day, we were discussing ways to expedite the delivery
of his superyacht and I suggested paying extra. His response, as to so
many of my suggestions, was, “Avery, I’m not &lt;em&gt;that&lt;/em&gt; rich.”&lt;/p&gt;
&lt;p&gt;Everyone has their limit.&lt;/p&gt;
&lt;p&gt;I, too, am not that rich. I have shares in a startup that has not exited,
and they seem to be gracefully ticking up in value as the years pass. But I
have to come to work each day, and if I make a few wrong medium-quality
choices (not even bad ones!), it could all be vaporized in an instant.
Meanwhile, I can’t spend it. So what I have is my accumulated savings from a
long career of writing software and modest tastes (I like hot dogs).&lt;/p&gt;
&lt;p&gt;Those accumulated savings and modest tastes are enough to retire
indefinitely. Is that bragging? It was true even before I started my
startup. Back in 2018, I calculated my “personal runway” to see how long I
could last if I started a company and we didn’t get funded, before I had to
go back to work. My conclusion was I should move from New York City back to
Montreal and then stop worrying about it forever.&lt;/p&gt;
&lt;p&gt;Of course, being in that position means I’m lucky and special. But I’m not
&lt;em&gt;that&lt;/em&gt; lucky and special. My numbers aren’t that different from the average
Canadian or (especially) American software developer nowadays. We all talk a
lot about how the “top 1%” are screwing up society, but software developers
nowadays fall mostly in the top 1-2%[1] of income earners in the US or
Canada. It doesn’t feel like we’re that rich, because we’re surrounded by
people who are about equally rich. And we occasionally bump into a few who
are much more rich, who in turn surround themselves with people who are
about equally rich, so they don’t feel that rich either.&lt;/p&gt;
&lt;p&gt;But, we’re rich.&lt;/p&gt;
&lt;p&gt;Based on my readership demographics, if you’re reading this, you’re probably
a software developer. Do you feel rich?&lt;/p&gt;
&lt;p&gt;&lt;b&gt;It’s all your fault&lt;/b&gt;&lt;/p&gt;
&lt;p&gt;So let’s trace this through. By the numbers, you’re probably a software
developer. So you’re probably in the top 1-2% of wage earners in your
country, and even better globally. So you’re one of those 1%ers ruining
society.&lt;/p&gt;
&lt;p&gt;I’m not the first person to notice this. When I read other posts about it,
they usually stop at this point and say, ha ha. Okay, obviously that’s not
what we meant. Most 1%ers are nice people who pay their taxes. Actually it’s
the top 0.1% screwing up society!&lt;/p&gt;
&lt;p&gt;No.&lt;/p&gt;
&lt;p&gt;I’m not letting us off that easily. Okay, the 0.1%ers are probably worse
(with apologies to my friend and his chronically delayed superyacht). But,
there aren’t that many of them[2] which means they aren’t as powerful as
they think. No one person has very much capacity to do bad things. They only
have the capacity to pay other people to do bad things.&lt;/p&gt;
&lt;p&gt;Some people have no choice but to take that money and do some bad things so
they can feed their families or whatever. But that’s not you. That’s not us.
We’re rich. If we do bad things, that’s entirely on us, no matter who’s
paying our bills.&lt;/p&gt;
&lt;p&gt;&lt;b&gt;What does the top 1% spend their money on?&lt;/b&gt;&lt;/p&gt;
&lt;p&gt;Mostly real estate, food, and junk. If they have kids, maybe they spend a
few hundred $k on overpriced university education (which in sensible
countries is free or cheap).&lt;/p&gt;
&lt;p&gt;What they &lt;em&gt;don’t&lt;/em&gt; spend their money on is making the world a better place.
Because they are convinced they are &lt;em&gt;not that rich&lt;/em&gt; and the world’s problems
are caused by &lt;em&gt;somebody else&lt;/em&gt;.&lt;/p&gt;
&lt;p&gt;When I worked at a megacorp, I spoke to highly paid software engineers who
were torn up about their declined promotion to L4 or L5 or L6, because they
needed to earn more money, because without more money they wouldn’t be able
to afford the mortgage payments on an &lt;a href=&quot;https://apenwarr.ca/log/20180918&quot;&gt;overpriced $1M+ run-down Bay Area
townhome&lt;/a&gt; which is a prerequisite to
starting a family and thus living a meaningful life. This treadmill started
the day after graduation.[3]&lt;/p&gt;
&lt;p&gt;I tried to tell some of these L3 and L4 engineers that they were already in
the top 5%, probably top 2% of wage earners, and their earning potential was
only going up. They didn’t believe me until I showed them the arithmetic and
the economic stats. And even then, facts didn’t help, because it didn’t make
their fears about money go away. They &lt;em&gt;needed&lt;/em&gt; more money before they could
feel safe, and in the meantime, they had no disposable income. Sort of.
Well, for the sort of definition of disposable income that rich people
use.[4]&lt;/p&gt;
&lt;p&gt;Anyway there are psychology studies about this phenomenon. “&lt;a href=&quot;https://www.cbc.ca/news/business/why-no-one-feels-rich-1.5138657&quot;&gt;What people
consider rich is about three times what they currently
make&lt;/a&gt;.” No
matter what they make. So, I’ll forgive you for falling into this trap. I’ll
even forgive me for falling into this trap.&lt;/p&gt;
&lt;p&gt;But it’s time to fall out of it.&lt;/p&gt;
&lt;p&gt;&lt;b&gt;The meaning of life&lt;/b&gt;&lt;/p&gt;
&lt;p&gt;My rich friend is a fountain of wisdom. Part of this wisdom came from the
shock effect of going from normal-software-developer rich to
founder-successful-exit rich, all at once. He described his existential
crisis: “Maybe you do find something you want to spend your money on. But,
I&#39;d bet you never will. It’s a rare problem. &lt;strong&gt;Money, which is the driver
for everyone, is no longer a thing in my life.&lt;/strong&gt;”&lt;/p&gt;
&lt;p&gt;Growing up, I really liked the saying, “Money is just a way of keeping
score.” I think that metaphor goes deeper than most people give it credit
for. Remember &lt;a href=&quot;https://www.reddit.com/r/Mario/comments/13v3hoc/what_even_is_the_point_of_the_score_counter/&quot;&gt;old Super Mario Brothers, which had a vestigial score
counter&lt;/a&gt;?
Do you know anybody who rated their Super Mario Brothers performance based
on the score? I don’t. I’m sure those people exist. They probably have
Twitch channels and are probably competitive to the point of being annoying.
Most normal people get some other enjoyment out of Mario that is not from
the score. Eventually, Nintendo stopped including a score system in Mario
games altogether. Most people have never noticed. The games are still fun.&lt;/p&gt;
&lt;p&gt;Back in the world of capitalism, we’re still keeping score, and we’re still
weirdly competitive about it. We programmers, we 1%ers, are in the top
percentile of capitalism high scores in the entire world - that’s the
literal definition - but we keep fighting with each other to get closer to
top place. Why?&lt;/p&gt;
&lt;p&gt;Because we forgot there’s anything else. Because someone convinced us that
the score even matters.&lt;/p&gt;
&lt;p&gt;The saying isn’t, “Money is &lt;em&gt;the way&lt;/em&gt; of keeping score.” Money is &lt;em&gt;just one
way&lt;/em&gt; of keeping score.&lt;/p&gt;
&lt;p&gt;It’s mostly a pretty good way. Capitalism, for all its flaws, mostly aligns
incentives so we’re motivated to work together and produce more stuff, and
more valuable stuff, than otherwise. Then it automatically gives more power
to people who empirically[5] seem to be good at organizing others to make
money. Rinse and repeat. Number goes up.&lt;/p&gt;
&lt;p&gt;But there are limits. And in the ever-accelerating feedback loop of modern
capitalism, more people reach those limits faster than ever. They might
realize, like my friend, that money is no longer a thing in their life. You
might realize that. We might.&lt;/p&gt;
&lt;p&gt;&lt;b&gt;There’s nothing more dangerous than a powerful person with nothing to prove&lt;/b&gt;&lt;/p&gt;
&lt;p&gt;Billionaires run into this existential crisis, that they obviously have to
have something to live for, and money just isn’t it. Once you can buy
anything you want, you quickly realize that what you want was not very
expensive all along. And then what?&lt;/p&gt;
&lt;p&gt;Some people, the less dangerous ones, retire to their superyacht (if it ever
finally gets delivered, come on already). The dangerous ones pick ever
loftier goals (colonize Mars) and then bet everything on it. Everything.
Their time, their reputation, their relationships, their fortune, their
companies, their morals, everything they’ve ever built. Because if there’s
nothing on the line, there’s no reason to wake up in the morning. And they
really &lt;em&gt;need&lt;/em&gt; to want to wake up in the morning. Even if the reason to wake
up is to deal with today’s unnecessary emergency. As long as, you know, the
emergency requires &lt;em&gt;them&lt;/em&gt; to &lt;em&gt;do something&lt;/em&gt;.&lt;/p&gt;
&lt;p&gt;Dear reader, statistically speaking, you are not a billionaire. But you have
this problem.&lt;/p&gt;
&lt;p&gt;&lt;b&gt;So what then&lt;/b&gt;&lt;/p&gt;
&lt;p&gt;Good question. We live at a moment in history when society is richer and
more productive than it has ever been, with opportunities for even more of
us to become even more rich and productive even more quickly than ever. And
yet, we live in existential fear: the fear that nothing we do matters.[6][7]&lt;/p&gt;
&lt;p&gt;I have bad news for you. This blog post is not going to solve that.&lt;/p&gt;
&lt;p&gt;I have worse news. 98% of society gets to wake up each day and go to work
because they have no choice, so at worst, for them this is a background
philosophical question, like the trolley problem.&lt;/p&gt;
&lt;p&gt;Not you.&lt;/p&gt;
&lt;p&gt;For you this unsolved philosophy problem is urgent &lt;em&gt;right now&lt;/em&gt;. There are
people tied to the tracks. You’re driving the metaphorical trolley. Maybe
nobody told you you’re driving the trolley. Maybe they lied to you and said
someone else is driving. Maybe you have no idea there are people on the
tracks. Maybe you do know, but you’ll get promoted to L6 if you pull the
right lever. Maybe you’re blind. Maybe you’re asleep. Maybe there are no
people on the tracks after all and you’re just destined to go around and
around in circles, forever.&lt;/p&gt;
&lt;p&gt;But whatever happens next: you chose it.&lt;/p&gt;
&lt;p&gt;We chose it.&lt;/p&gt;
&lt;p style=&quot;padding-top: 2em;&quot;&gt;&lt;b&gt;Footnotes&lt;/b&gt;&lt;/p&gt;

&lt;p&gt;[1] Beware of estimates of the “average income of the top 1%.” That average
includes all the richest people in the world. You only need to earn the very
bottom of the 1% bucket in order to be in the top 1%.&lt;/p&gt;
&lt;p&gt;[2] If the population of the US is 340 million, there are actually 340,000
people in the top 0.1%.&lt;/p&gt;
&lt;p&gt;[3] I’m Canadian so I’m disconnected from this phenomenon, but if TV and
movies are to be believed, in America the treadmill starts all the way back
in high school where you stress over getting into an elite university so
that you can land the megacorp job after graduation so that you can stress
about getting promoted. If that’s so, I send my sympathies. That’s not how
it was where I grew up.&lt;/p&gt;
&lt;p&gt;[4] Rich people like us methodically put money into savings accounts,
investments, life insurance, home equity, and so on, and only what’s left
counts as “disposable income.” This is not the definition normal people use.&lt;/p&gt;
&lt;p&gt;[5] Such an interesting double entendre.&lt;/p&gt;
&lt;p&gt;[6] This is what AI doomerism is about. A few people have worked themselves
into a terror that if AI becomes too smart, it will realize that humans are
not actually that useful, and eliminate us in the name of efficiency. That’s
not a story about AI. It’s a story about what we already worry is true.&lt;/p&gt;
&lt;p&gt;[7] I’m in favour of Universal Basic Income (UBI), but it has a big
problem: it reduces your need to wake up in the morning. If the alternative
is &lt;a href=&quot;https://en.wikipedia.org/wiki/Bullshit_Jobs&quot;&gt;bullshit jobs&lt;/a&gt; or suffering
then yeah, UBI is obviously better. And the people who think that if you
don’t work hard, you don’t deserve to live, are nuts. But it’s horribly
dystopian to imagine a society where lots of people wake up and have nothing
that motivates them. The utopian version is to wake up and be able to spend
all your time doing what gives your life meaning. Alas, so far science has
produced no evidence that anything gives your life meaning.&lt;/p&gt;
    </description>
  </item>
  
  
  <item>
    <title>
      The evasive evitability of enshittification
    </title>
    <pubDate>Sun, 15 Jun 2025 02:52:58 +0000</pubDate>
    <link>https://apenwarr.ca/log/20250530</link>
    
    <guid isPermaLink="true">https://apenwarr.ca/log/20250530</guid>
    
    <description>
    &lt;p&gt;Our company recently announced a fundraise.  We were grateful for all
the community support, but the Internet also raised a few of its collective
eyebrows, wondering whether this meant the dreaded “enshittification” was
coming next.&lt;/p&gt;
&lt;p&gt;That word describes a very real pattern we’ve all seen before: products
start great, grow fast, and then slowly become worse as the people running
them trade user love for short-term revenue.&lt;/p&gt;
&lt;p&gt;It’s a topic I find genuinely fascinating, and I&#39;ve seen the downward spiral
firsthand at companies I once admired. So I want to talk about why this
happens, and more importantly, why it won&#39;t happen to us. That&#39;s big talk, I
know. But it&#39;s a promise I&#39;m happy for people to hold us to.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;What is enshittification?&lt;/strong&gt;&lt;/p&gt;
&lt;p&gt;The term &quot;enshittification&quot; was first popularized in a &lt;a href=&quot;https://pluralistic.net/2023/01/21/potemkin-ai/#hey-guys&quot;&gt;blog post by Cory
Doctorow&lt;/a&gt;, who put
a catchy name to an effect we&#39;ve all experienced. Software starts off good,
then goes bad. How? Why?&lt;/p&gt;
&lt;p&gt;Enshittification proposes not just a name, but a mechanism. First, a product
is well loved and gains in popularity, market share, and revenue. In fact,
it gets so popular that it starts to defeat competitors. Eventually, it&#39;s
the primary product in the space: a monopoly, or as close as you can get.
And then, suddenly, the owners, who are Capitalists, have their evil nature
finally revealed and they exploit that monopoly to raise prices and make the
product worse, so the captive customers all have to pay more. Quality
doesn&#39;t matter anymore, only exploitation.&lt;/p&gt;
&lt;p&gt;I agree with most of that thesis. I think Doctorow has that mechanism
&lt;em&gt;mostly&lt;/em&gt; right. But, there&#39;s one thing that doesn&#39;t add up for me:&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Enshittification is not a success mechanism.&lt;/strong&gt;&lt;/p&gt;
&lt;p&gt;I can&#39;t think of any examples of companies that, in real life, enshittified
because they were &lt;em&gt;successful&lt;/em&gt;. What I&#39;ve seen is companies that made their
product worse because they were... scared.&lt;/p&gt;
&lt;p&gt;A company that&#39;s growing fast can afford to be optimistic. They create a
positive feedback loop: more user love, more word of mouth, more users, more
money, more product improvements, more user love, and so on. Everyone in the
company can align around that positive feedback loop. It&#39;s a beautiful
thing. It&#39;s also fragile: miss a beat and it flattens out, and soon it&#39;s a
downward spiral instead of an upward one.&lt;/p&gt;
&lt;p&gt;So, if I were, hypothetically, running a company, I think I would be pretty
hesitant to deliberately sacrifice any part of that positive feedback loop,
the loop I and the whole company spent so much time and energy building, to
see if I can grow faster. User love? Nah, I&#39;m sure we&#39;ll be fine, look how
much money and how many users we have! Time to switch strategies!&lt;/p&gt;
&lt;p&gt;Why would I do that? Switching strategies is always a tremendous risk. When
you switch strategies, it&#39;s triggered by passing a threshold, where something
fundamental changes, and your old strategy becomes wrong.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Threshold moments and control&lt;/strong&gt;&lt;/p&gt;
&lt;p&gt;In &lt;a href=&quot;https://en.wikipedia.org/wiki/Reversing_Falls&quot;&gt;Saint John, New Brunswick, there&#39;s a
river&lt;/a&gt; that flows one
direction at high tide, and the other way at low tide. Four times a day,
gravity equalizes, then crosses a threshold to gently start pulling the
other way, then accelerates. What &lt;em&gt;doesn&#39;t&lt;/em&gt; happen is a rapidly flowing
river in one direction &quot;suddenly&quot; shifts to rapidly flowing the other way.
Yes, there&#39;s an instant where the limit from the left is positive and the
limit from the right is negative. But you can see that threshold coming.
It&#39;s predictable.&lt;/p&gt;
&lt;p&gt;In my experience, for a company or a product, there are two kinds of
thresholds like this, that build up slowly and then when crossed, create a
sudden flow change.&lt;/p&gt;
&lt;p&gt;The first one is control: if the visionaries in charge lose control, chances
are high that their replacements won&#39;t &quot;get it.&quot;&lt;/p&gt;
&lt;p&gt;The new people didn&#39;t build the underlying feedback loop, and so they don&#39;t
realize how fragile it is. There are lots of reasons for a change in
control: financial mismanagement, boards of directors, hostile takeovers.&lt;/p&gt;
&lt;p&gt;The worst one is temptation. Being a founder is, well, it actually sucks.
It&#39;s oddly like being repeatedly punched in the face. When I look back at my
career, I guess I&#39;m surprised by how few times per day it feels like I was
punched in the face. But, the
constant face punching gets to you after a while. Once you&#39;ve established a
great product, and amazing customer love, and lots of money, and an upward
spiral, isn&#39;t your creation strong enough yet? Can&#39;t you step back and let
the professionals just run it, confident that they won&#39;t kill the golden
goose?&lt;/p&gt;
&lt;p&gt;Empirically, mostly no, you can&#39;t. Actually the success rate of control
changes, for well loved products, is abysmal.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;The saturation trap&lt;/strong&gt;&lt;/p&gt;
&lt;p&gt;The second trigger of a flow change comes from outside: saturation. Every
successful product, at some point, reaches approximately all the users it&#39;s
ever going to reach. Before that, you can watch its exponential growth rate
slow down: the &lt;a href=&quot;https://blog.apnic.net/2022/02/21/another-year-of-the-transition-to-ipv6/&quot;&gt;infamous
S-curve&lt;/a&gt;
of product adoption.&lt;/p&gt;
&lt;p&gt;Saturation can lead us back to control change: the founders get frustrated
and back out, or the board ousts them and puts in &quot;real business people&quot; who
know how to get growth going again. Generally that doesn&#39;t work. Modern VCs
consider founder replacement a truly desperate move. Maybe
a last-ditch effort to boost short term numbers in preparation for an
acquisition, if you&#39;re lucky.&lt;/p&gt;
&lt;p&gt;But sometimes the leaders stay on despite saturation, and they try on their
own to make things better. Sometimes that &lt;em&gt;does&lt;/em&gt; work. Actually, it&#39;s kind
of amazing how often it seems to work. Among successful companies,
it&#39;s rare to find one that sustained hypergrowth, nonstop, without suffering
through one of these dangerous periods.&lt;/p&gt;
&lt;p&gt;(That&#39;s called survivorship bias. All companies have dangerous periods.
The successful ones survived them. But of those survivors, suspiciously few
are ones that replaced their founders.)&lt;/p&gt;
&lt;p&gt;If you saturate and can&#39;t recover - either by growing more in a big-enough
current market, or by finding new markets to expand into - then the best you
can hope for is for your upward spiral to mature gently into decelerating
growth. If so, and you&#39;re a buddhist, then you hire less, you optimize
margins a bit, you resign yourself to being About This Rich And I Guess
That&#39;s All But It&#39;s Not So Bad.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;The devil&#39;s bargain&lt;/strong&gt;&lt;/p&gt;
&lt;p&gt;Alas, very few people reach that state of zen. Especially the kind of
ambitious people who were able to get that far in the first place. If you
can&#39;t accept saturation and you can&#39;t beat saturation, then you&#39;re down to
two choices: step away and let the new owners enshittify it, hopefully
slowly. Or take the devil&#39;s bargain: enshittify it yourself.&lt;/p&gt;
&lt;p&gt;I would not recommend the latter. If you&#39;re a founder and you find yourself
in that position, honestly, you won&#39;t enjoy doing it and you probably aren&#39;t
even good at it and it&#39;s getting enshittified either way. Let someone else
do the job.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Defenses against enshittification&lt;/strong&gt;&lt;/p&gt;
&lt;p&gt;Okay, maybe that section was not as uplifting as we might have hoped. I&#39;ve
gotta be honest with you here. Doctorow is, after all, mostly right. This
does happen all the time.&lt;/p&gt;
&lt;p&gt;Most founders aren&#39;t perfect for every stage of growth. Most product owners
stumble. Most markets saturate. Most VCs get board control pretty early on
and want hypergrowth or bust. In tech, a lot of the time, if you&#39;re choosing
a product or company to join, that kind of company is all you can get.&lt;/p&gt;
&lt;p&gt;As a founder, maybe you&#39;re okay with growing slowly. Then some copycat shows
up, steals your idea, grows super fast, squeezes you out along with your
moral high ground, and then runs headlong into all the same saturation
problems as everyone else. Tech incentives are awful.&lt;/p&gt;
&lt;p&gt;But, it&#39;s not a lost cause. There are companies (and open source projects)
that keep a good thing going, for decades or more. What do they have in
common?&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;An expansive vision that&#39;s not about money&lt;/strong&gt;, and which opens you up to
lots of users. A big addressable market means you don&#39;t have to
worry about saturation for a long time, even at hypergrowth speeds. Google
certainly never had an incentive to make Google Search worse.&lt;/p&gt;
&lt;p&gt;&lt;i&gt;(Update 2025-06-14: A few people disputed that last bit.  Okay. 
Perhaps Google has occasionally responded to what they thought were
incentives to make search worse -- I wasn&#39;t there, I don&#39;t know -- but it
seems clear in retrospect that when search gets worse, Google does worse. 
So I&#39;ll stick to my claim that their true incentives are to keep improving.)&lt;/i&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Keep control.&lt;/strong&gt; It&#39;s easy to lose control of a project or company at any
point. If you stumble, and you don&#39;t have a backup plan, and there&#39;s someone
waiting to jump on your mistake, then it&#39;s over. Too many companies &quot;bet it
all&quot; on nonstop hypergrowth and &lt;s&gt;&lt;a href=&quot;https://www.reddit.com/r/movies/comments/yuekuu/can_someone_explain_me_this_dialogue_from_gattaca/&quot;&gt;don&#39;t have any way
back&lt;/a&gt;&lt;/s&gt;
have no room in the budget, if results slow down even temporarily.&lt;/p&gt;
&lt;p&gt;Stories abound of companies that scraped close to bankruptcy before
finally pulling through. But far more companies scraped close to
bankruptcy and then went bankrupt. Those companies are forgotten. Avoid
it.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Track your data.&lt;/strong&gt; Part of control is predictability. If you know how
big your market is, and you monitor your growth carefully, you can detect
incoming saturation years before it happens. Knowing the telltale shape of
each part of that S-curve is a superpower. If you can see the future, you
can prevent your own future mistakes.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Believe in competition.&lt;/strong&gt; Google used to have this saying they lived by:
&quot;&lt;a href=&quot;https://9to5google.com/2012/04/05/larry-page-posts-update-from-the-ceo-2012%E2%80%B3-memo-detailing-googles-aspirations/&quot;&gt;the competition is only a click
away&lt;/a&gt;.&quot; That was
excellent framing, because it was true, and it will remain true even if
Google captures 99% of the search market. The key is to cultivate a healthy
fear of competing products, not of your investors or the end of
hypergrowth. Enshittification helps your competitors. That would be dumb.&lt;/p&gt;
&lt;p&gt;(And don&#39;t cheat by using lock-in so that competitors are no longer
&quot;only a click away.&quot; That&#39;s missing the whole point!)&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Inoculate yourself.&lt;/strong&gt; If you have to, create your own competition. Linus
  Torvalds, the creator of the Linux kernel, &lt;a href=&quot;https://git-scm.com/about&quot;&gt;famously also created
  Git&lt;/a&gt;, the greatest tool for forking (and maybe
  merging) open source projects that has ever existed. And then he said,
  this is my fork, the &lt;a href=&quot;https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/&quot;&gt;Linus fork&lt;/a&gt;; use it if you want; use someone else&#39;s if
  you want; and now if I want to win, I have to make mine the best. Git was
  created back in 2005, twenty years ago. To this day, Linus&#39;s fork is still
  the central one.&lt;/p&gt;
&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;If you combine these defenses, you can be safe from the decline that others
tell you is inevitable. If you look around for examples, you&#39;ll find that
this does actually work. You won&#39;t be the first. You&#39;ll just be rare.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Side note: Things that aren&#39;t enshittification&lt;/strong&gt;&lt;/p&gt;
&lt;p&gt;I often see people worry about things that aren&#39;t enshittification. Those
things might be good or bad, wise or unwise, but that&#39;s a different topic. Tools aren&#39;t
inherently good or evil. They&#39;re just tools.&lt;/p&gt;
&lt;ol&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;&quot;Helpfulness.&quot;&lt;/strong&gt; There&#39;s a fine line between &quot;telling users about this
cool new feature we built&quot; in the spirit of helping them, and &quot;pestering
users about this cool new feature we built&quot; (typically a misguided AI
implementation) to improve some quarterly KPI. Sometimes it&#39;s hard to see
where that line is. But when you&#39;ve crossed it, you know.&lt;/p&gt;
&lt;p&gt;Are you trying to help a user do what &lt;em&gt;they&lt;/em&gt; want to do, or are you trying
to get them to do what &lt;em&gt;you&lt;/em&gt; want them to do?&lt;/p&gt;
&lt;p&gt;Look into your heart. Avoid the second one. I know you know how. Or you
knew how, once. Remember what that feels like.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Charging money for your product.&lt;/strong&gt; Charging money is okay. Get serious.
&lt;a href=&quot;https://apenwarr.ca/log/20211229&quot;&gt;Companies have to stay in business&lt;/a&gt;.&lt;/p&gt;
&lt;p&gt;That said, I personally really revile the &quot;we&#39;ll make it &lt;a href=&quot;https://tailscale.com/blog/free-plan&quot;&gt;free for
now&lt;/a&gt; and we&#39;ll start charging for the
exact same thing later&quot; strategy. Keep your promises.&lt;/p&gt;
&lt;p&gt;I&#39;m pretty sure nobody but drug dealers breaks those promises on purpose.
But, again, desperation is a powerful motivator. Growth slowing down?
Costs way higher than expected? Time to capture some of that value we
were giving away for free!&lt;/p&gt;
&lt;p&gt;In retrospect, that&#39;s a bait-and-switch, but most founders never planned
it that way. They just didn&#39;t do the math up front, or they were too
naive to know they would have to. And then they had to.&lt;/p&gt;
&lt;p&gt;Famously, Dropbox had a &quot;free forever&quot; plan that provided a certain
amount of free storage.  What they didn&#39;t count on was abandoned
accounts, accumulating every year, with stored stuff they could never
delete.  Even if a healthy fixed fraction of users upgraded to a paid plan
each year, all the ones that didn&#39;t kept piling up...  year after
year...  after year...  until they had to start &lt;a href=&quot;https://www.cnbc.com/2018/02/23/dropbox-shows-how-it-manages-costs-by-deleting-inactive-accounts.html&quot;&gt;deleting old free
accounts and the data in
them&lt;/a&gt;. 
A similar story &lt;a href=&quot;https://news.ycombinator.com/item?id=24143588&quot;&gt;happened with
Docker&lt;/a&gt;,
which used to host unlimited container downloads for free.  In hindsight
that was mathematically unsustainable.  Success guaranteed failure.&lt;/p&gt;
&lt;p&gt;Do the math up
front. If you&#39;re not sure, find someone who can.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Value pricing.&lt;/strong&gt; (ie. charging different prices to different people.)
It&#39;s okay to charge money. It&#39;s even okay to charge money to some kinds of
people (say, corporate users) and not others. It&#39;s also okay to charge money
for an almost-the-same-but-slightly-better product. It&#39;s okay to charge
money for support for your open source tool (though I stay away from that;
it incentivizes you to make the product worse).&lt;/p&gt;
&lt;p&gt;It&#39;s even okay to charge immense amounts of money for a commercial
product that&#39;s barely better than your open source one! Or for a part of
your product that costs you almost nothing.&lt;/p&gt;
&lt;p&gt;But, you have to
do the rest of the work. Make sure the reason your users don&#39;t
switch away is that you&#39;re the best, not that you have the best lock-in.
Yeah, I&#39;m talking to you, cloud egress fees.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Copying competitors.&lt;/strong&gt; It&#39;s okay to copy features from competitors.
It&#39;s okay to position yourself against competitors. It&#39;s okay to win
customers away from competitors. But it&#39;s not okay to lie.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Bugs.&lt;/strong&gt; It&#39;s okay to fix bugs. It&#39;s okay to decide not to fix bugs;
&lt;a href=&quot;https://apenwarr.ca/log/20171213&quot;&gt;you&#39;ll have to sometimes, anyway&lt;/a&gt;. It&#39;s
okay to take out &lt;a href=&quot;https://apenwarr.ca/log/20230605&quot;&gt;technical debt&lt;/a&gt;. It&#39;s
okay to pay off technical debt. It&#39;s okay to let technical debt languish
forever.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Backward incompatible changes.&lt;/strong&gt; It&#39;s &lt;a href=&quot;https://tailscale.com/blog/community-projects&quot;&gt;dumb to release a new version
that breaks backward
compatibility&lt;/a&gt; with your old
version. It&#39;s tempting. It annoys your users. But it&#39;s not enshittification
for the simple reason that it&#39;s phenomenally ineffective at maintaining
or exploiting a monopoly, which is what enshittification is supposed to be
about. You know who&#39;s good at monopolies? Intel and Microsoft. They don&#39;t
break old versions.&lt;/p&gt;
&lt;/li&gt;
&lt;/ol&gt;
&lt;p&gt;Enshittification is real, and tragic. But let&#39;s protect a
useful term and its definition! Those things aren&#39;t it.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Epilogue: a special note to founders&lt;/strong&gt;&lt;/p&gt;
&lt;p&gt;If you&#39;re a founder or a product owner, I hope all this helps. I&#39;m sad to
say, you have a lot of potential pitfalls in your future. But, remember that
they&#39;re only &lt;em&gt;potential&lt;/em&gt; pitfalls. Not everyone falls into them.&lt;/p&gt;
&lt;p&gt;Plan ahead. Remember where you came from. Keep your integrity. Do your best.&lt;/p&gt;
&lt;p&gt;I will too.&lt;/p&gt;
    </description>
  </item>
  
  
  <item>
    <title>
      NPS, the good parts
    </title>
    <pubDate>Tue, 05 Dec 2023 05:01:12 +0000</pubDate>
    <link>https://apenwarr.ca/log/20231204</link>
    
    <guid isPermaLink="true">https://apenwarr.ca/log/20231204</guid>
    
    <description>
    &lt;p&gt;The Net Promoter Score (NPS) is a statistically questionable way to turn a
set of 10-point ratings into a single number you can compare with other
NPSes. That&#39;s not the good part.&lt;/p&gt;
&lt;p&gt;&lt;b&gt;Humans&lt;/b&gt;&lt;/p&gt;
&lt;p&gt;To understand the good parts, first we have to start with humans. Humans
have emotions, and those emotions are what they mostly use when asked to
rate things on a 10-point scale.&lt;/p&gt;
&lt;p&gt;Almost exactly twenty years ago, I wrote about sitting on a plane next to a
&lt;a href=&quot;/log/20031227&quot;&gt;musician who told me about music album reviews&lt;/a&gt;. The worst
rating an artist can receive, he said, is a lukewarm one. If people think
your music is neutral, it means you didn&#39;t make them feel anything at all.
You failed. Someone might buy music that reviewers hate, or buy music that
people love, but they aren&#39;t really that interested in music that is just
kinda meh. They listen to music because they want to feel something.&lt;/p&gt;
&lt;p&gt;(At the time I contrasted that with tech reviews in computer magazines
(remember those?), and how negative ratings were the worst thing for a tech
product, so magazines never produced them, lest they get fewer free samples.
All these years later, journalism is dead but we&#39;re still debating the
ethics of game companies sponsoring Twitch streams. You can bet there&#39;s no
sponsored game that gets an actively negative review during 5+ hours of
gameplay and still gets more money from that sponsor. If artists just want
you to feel something, but no vendor will pay for a game review that says it
sucks, I wonder what that says about video game companies and art?)&lt;/p&gt;
&lt;p&gt;Anyway, when you ask regular humans, who are not being sponsored, to rate
things on a 10-point scale, they will rate based on their emotions. Most
of the ratings will be just kinda meh, because most products are, if we&#39;re
honest, just kinda meh. I go through most of my days using a variety of
products and services that do not, on any more than the rarest basis, elicit
any emotion at all. Mostly I don&#39;t notice those. I notice when I have
experiences that are surprisingly good, or (less surprisingly but still
notably) bad. Or, I notice when one of the services in any of those three
categories asks me to rate them on a 10-point scale.&lt;/p&gt;
&lt;p&gt;&lt;b&gt;The moment&lt;/b&gt;&lt;/p&gt;
&lt;p&gt;The moment when they ask me is important. Many products and services are
just kinda invisibly meh, most of the time, so perhaps I&#39;d give them a meh
rating. But if my bluetooth headphones are currently failing to connect, or
I just had to use an airline&#39;s online international check-in system and it
once again rejected my passport for no reason, then maybe my score will be
extra low. Or if Apple releases a new laptop that finally brings back a
non-sucky keyboard after making laptops with sucky keyboards for literally
years because of some obscure internal political battle, maybe I&#39;ll give a
high rating for a while.&lt;/p&gt;
&lt;p&gt;If you&#39;re a person who likes manipulating ratings, you&#39;ll figure out what
moments are best for asking for the rating you want. But let&#39;s assume you&#39;re
above that sort of thing, because that&#39;s not one of the good parts.&lt;/p&gt;
&lt;p&gt;&lt;b&gt;The calibration&lt;/b&gt;&lt;/p&gt;
&lt;p&gt;Just now I said that if I&#39;m using an invisible meh product or service, I
would rate it with a meh rating. But that&#39;s not true in real life, because
even though I was having no emotion about, say, Google Meet during a call,
perhaps when they ask me (after every...single...call) how it was, that
makes me feel an emotion after all. Maybe that emotion is &quot;leave me alone,
you ask me this way too often.&quot; Or maybe I&#39;ve learned that if I pick
anything other than five stars, I get a clicky multi-tab questionnaire that
I don&#39;t have time to answer, so I almost always pick five stars unless the
experience was &lt;em&gt;so&lt;/em&gt; bad that I feel it&#39;s worth an extra minute because I
simply need to tell the unresponsive and uncaring machine how I really feel.&lt;/p&gt;
&lt;p&gt;Google Meet never gets a meh rating. It&#39;s designed not to. In Google Meet,
meh gets five stars.&lt;/p&gt;
&lt;p&gt;Or maybe I bought something from Amazon and it came with a thank-you card
begging for a 5-star rating (this happens). Or a restaurant offers free
stuff if I leave a 5-star rating and prove it (this happens). Or I ride in
an Uber and there&#39;s a sign on the back seat talking about how they really
need a 5-star rating because this job is essential so they can support their
family and too many 4-star ratings get them disqualified (this happens,
though apparently not at UberEats). Okay. As one of my high school teachers,
Physics I think, once said, &quot;A&#39;s don&#39;t cost me anything. What grade do you
want?&quot; (He was that kind of teacher. I learned a lot.)&lt;/p&gt;
&lt;p&gt;I&#39;m not a professional reviewer. Almost nobody you ask is a professional
reviewer. Most people don&#39;t actually care; they have no basis for
comparison; just about anything will influence their score. They will not
feel bad about this. They&#39;re just trying to exit your stupid popup
interruption as quickly as possible, and half the time they would have
mashed the X button but you hid it, so they mashed this one instead.
People&#39;s answers will be... untrustworthy at best.&lt;/p&gt;
&lt;p&gt;That&#39;s not the good part.&lt;/p&gt;
&lt;p&gt;&lt;b&gt;And yet&lt;/b&gt;&lt;/p&gt;
&lt;p&gt;And yet. As in so many things, randomness tends to average out, &lt;a href=&quot;https://en.wikipedia.org/wiki/Central_limit_theorem&quot;&gt;probably
into a Gaussian distribution, says the Central Limit
Theorem&lt;/a&gt;.&lt;/p&gt;
&lt;p&gt;The Central Limit Theorem is the fun-destroying reason that you can&#39;t just
average 10-point ratings or star ratings and get something useful: most
scores are meh, a few are extra bad, a few are extra good, and the next
thing you know, every Uber driver is a 4.997. Or you can &lt;a href=&quot;https://xkcd.com/325/&quot;&gt;ship a bobcat one
in 30 times&lt;/a&gt; and still get 97% positive feedback.&lt;/p&gt;
&lt;p&gt;There&#39;s some deep truth hidden in NPS calculations: that meh ratings mean
nothing, that the frequency of strong emotions matters a lot, and that
deliriously happy moments don&#39;t average out disastrous ones.&lt;/p&gt;
&lt;p&gt;Deming might call this &lt;a href=&quot;/log/20161226&quot;&gt;the continuous region and the &quot;special
causes&quot;&lt;/a&gt; (outliers). NPS is all about counting outliers, and
averages don&#39;t work on outliers.&lt;/p&gt;
&lt;p&gt;&lt;b&gt;The degrees of meh&lt;/b&gt;&lt;/p&gt;
&lt;p&gt;Just kidding, there are no degrees of meh. If you&#39;re not feeling anything,
you&#39;re just not. You&#39;re not feeling more nothing, or less nothing.&lt;/p&gt;
&lt;p&gt;One of my friends used to say, on a scale of 6 to 9, how good is this? It
was a joke about how nobody ever gives a score less than 6 out of 10, and
nothing ever deserves a 10. It was one of those jokes that was never funny
because they always had to explain it. But they seemed to enjoy explaining
it, and after hearing the explanation the first several times, that part was
kinda funny. Anyway, if you took the 6-to-9 instructions seriously, you&#39;d
end up rating almost everything between 7 and 8, just to save room for
something unimaginably bad or unimaginably good, just like you did with
1-to-10, so it didn&#39;t help at all.&lt;/p&gt;
&lt;p&gt;And so, the NPS people say, rather than changing the scale, let&#39;s just
define meaningful regions in the existing scale. Only very angry people
use scores like 1-6. Only very happy people use scores like 9 or 10. And if
you&#39;re not one of those you&#39;re meh. It doesn&#39;t matter how meh. And in fact,
it doesn&#39;t matter much whether you&#39;re &quot;5 angry&quot; or &quot;1 angry&quot;; that says more
about your internal rating system than about the degree of what you
experienced. Similarly with 9 vs 10; it seems like you&#39;re quite happy. Let&#39;s
not split hairs.&lt;/p&gt;
&lt;p&gt;So with NPS we take a 10-point scale and turn it into a 3-point scale. The
exact opposite of my old friend: you know people misuse the 10-point scale,
but instead of giving them a new 3-point scale to misuse, you just
postprocess the 10-point scale to clean it up. And now we have a 3-point
scale with 3 meaningful points. That&#39;s a good part.&lt;/p&gt;
&lt;p&gt;&lt;b&gt;Evangelism&lt;/b&gt;&lt;/p&gt;
&lt;p&gt;So then what? Average out the measurements on the newly calibrated 1-2-3
scale, right?&lt;/p&gt;
&lt;p&gt;Still no. It turns out there are three kinds of people: the ones so mad they
will tell everyone how mad they are about your thing; the ones who don&#39;t
care and will never think about you again if they can avoid it; and the ones
who had such an over-the-top amazing experience that they will tell everyone
how happy they are about your thing.&lt;/p&gt;
&lt;p&gt;NPS says, you really care about the 1s and the 3s, but averaging them makes
no sense. And the 2s have no effect on anything, so you can just leave them
out.&lt;/p&gt;
&lt;p&gt;Cool, right?&lt;/p&gt;
&lt;p&gt;Pretty cool. Unfortunately, that&#39;s still two valuable numbers but we
promised you one single score. So NPS says, let&#39;s subtract them! Yay! Okay,
no. That&#39;s not the good part.&lt;/p&gt;
&lt;p&gt;&lt;b&gt;The threefold path&lt;/b&gt;&lt;/p&gt;
&lt;p&gt;I like to look at it this way instead. First of all, we have computers now,
we&#39;re not tracking ratings on one of those 1980s desktop bookkeeping
printer-calculators, you don&#39;t have to make every analysis into one single
all-encompassing number.&lt;/p&gt;
&lt;p&gt;Postprocessing a 10-point scale into a 3-point one, that seems pretty smart.
But you have to stop there. Maybe you now have three separate aggregate
numbers. That&#39;s tough, I&#39;m sorry. Here&#39;s a nickel, kid, go sell your
personal information in exchange for a spreadsheet app. (I don&#39;t know what
you&#39;ll do with the nickel. Anyway I don&#39;t need it. Here. Go.)&lt;/p&gt;
&lt;p&gt;Each of those three rating types gives you something different you can do in
response:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;The &lt;b&gt;ones&lt;/b&gt; had a very bad experience, which is hopefully an
  outlier, unless you&#39;re Comcast or the New York Times subscription
  department. Normally you want to get rid of every bad experience. The
  absence of awful isn&#39;t greatness, it&#39;s just meh, but meh is infinitely
  better than awful. Eliminating negative outliers is a whole job. It&#39;s a
  job filled with Deming&#39;s special causes. It&#39;s hard, and it requires
  creativity, but it really matters.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;The &lt;b&gt;twos&lt;/b&gt; had a meh experience. This is, most commonly, the
  majority. But perhaps they could have had a better experience. Perhaps
  even a great one? Deming would say you can and should work to improve the
  average experience and reduce the standard deviation. That&#39;s the dream;
  heck, what if the average experience could be an amazing one? That&#39;s
  rarely achieved, but a few products achieve it, especially luxury brands.
  And maybe that Broadway show, Hamilton? I don&#39;t know, I couldn&#39;t get tickets,
  because everyone said it was great so it was always sold out and I guess
  that&#39;s my point.&lt;/p&gt;
&lt;p&gt;If getting the average up to three is too hard or will
  take too long (and it will take a long time!), you could still try to at
  least randomly turn a few of them into threes. For example, they say
  users who have a great customer support experience often rate a product more
  highly than the ones who never needed to contact support at all, because
  the support interaction made the company feel more personal. Maybe you can&#39;t
  afford to interact with everyone, but if you have to interact anyway,
  perhaps you can use that chance to make it great instead of meh.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;The &lt;b&gt;threes&lt;/b&gt; already had an amazing experience. Nothing to do, right?
  No! These are the people who are, or who can become, your superfan
  evangelists. Sometimes that happens on its own, but often people don&#39;t
  know where to put that excess positive energy. You can help them. Pop
  stars and fashion brands know all about this; get some true believers
  really excited about your product, and the impact is huge. This is a
  completely different job than turning ones into twos, or twos into threes.&lt;/p&gt;
&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;&lt;b&gt;What not to do&lt;/b&gt;&lt;/p&gt;
&lt;p&gt;Those are all good parts. Let&#39;s ignore that unfortunately they
aren&#39;t part of NPS at all and we&#39;ve strayed way off topic.&lt;/p&gt;
&lt;p&gt;From here, there are several additional things you can do, but it turns out
you shouldn&#39;t.&lt;/p&gt;
&lt;p&gt;&lt;b&gt;Don&#39;t compare scores with other products.&lt;/b&gt; I guarantee you, your methodology
isn&#39;t the same as theirs. The slightest change in timing or presentation
will change the score in incomparable ways. You just can&#39;t. I&#39;m sorry.&lt;/p&gt;
&lt;p&gt;&lt;b&gt;Don&#39;t reward your team based on aggregate ratings.&lt;/b&gt; They will find a
way to change the ratings. Trust me, it&#39;s too easy.&lt;/p&gt;
&lt;p&gt;&lt;b&gt;Don&#39;t average or difference the bad with the great.&lt;/b&gt; The two groups have
nothing to do with each other, require completely different responses
(usually from different teams), and are often very small. They&#39;re outliers
after all. They&#39;re by definition not the mainstream. Outlier data is very
noisy and each terrible experience is different from the others; each
deliriously happy experience is special. As the famous writer said, &lt;a href=&quot;https://en.wikipedia.org/wiki/Anna_Karenina_principle&quot;&gt;all
meh families are
alike&lt;/a&gt;.&lt;/p&gt;
&lt;p&gt;&lt;b&gt;Don&#39;t fret about which &quot;standard&quot; rating ranges translate to
bad-meh-good.&lt;/b&gt; Your particular survey or product will have the bad
outliers, the big centre, and the great outliers. Run your survey enough and
you&#39;ll be able to find them.&lt;/p&gt;
&lt;p&gt;&lt;b&gt;Don&#39;t call it NPS.&lt;/b&gt; NPS nowadays has a bad reputation. Nobody can
really explain the bad reputation; I&#39;ve asked. But they&#39;ve all heard it&#39;s
bad and wrong and misguided and unscientific and &quot;not real statistics&quot; and
gives wrong answers and leads to bad incentives. You don&#39;t want that stigma
attached to your survey mechanic. But if you call it a &lt;em&gt;satisfaction
survey&lt;/em&gt; on a 10-point or 5-point scale, tada, clear skies and lush green fields ahead.&lt;/p&gt;
&lt;p&gt;&lt;b&gt;Bonus advice&lt;/b&gt;&lt;/p&gt;
&lt;p&gt;Perhaps the neatest thing about NPS is how much information you can get from
just one simple question that can be answered with the same effort it takes
to dismiss a popup.&lt;/p&gt;
&lt;p&gt;I joked about Google Meet earlier, but I wasn&#39;t
really kidding; after having a few meetings, if I had learned that I could
just rate from 1 to 5 stars and then &lt;em&gt;not&lt;/em&gt; get guilted for giving anything
other than 5, I would do it. It would be great science and pretty
unobtrusive. As it is, I lie instead. (I don&#39;t even skip, because it&#39;s
faster to get back to the menu by lying than by skipping.)&lt;/p&gt;
&lt;p&gt;While we&#39;re here, only the weirdest people want to answer a survey that says
it will take &quot;just 5 minutes&quot; or &quot;just 30 seconds.&quot; I don&#39;t have 30 seconds,
I&#39;m busy being mad/meh/excited about your product, I have other things to
do! But I can click just one single star rating, as long as I&#39;m 100%
confident that the survey will go the heck away after that. (And don&#39;t even
get me started about the extra layer in &quot;Can we ask you a few simple
questions about our website? Yes or no&quot;)&lt;/p&gt;
&lt;p&gt;Also, don&#39;t be the survey that promises one question and then asks &quot;just one
more question.&quot; Be the survey that gets a reputation for really truly asking
that one question. Then ask it, optionally, in more places and more often. A
good role model is those knowledgebases where every article offers just
thumbs up or thumbs down (or the default of no click, which means meh). That
way you can legitimately look at aggregates or even the same person&#39;s
answers over time, at different points in the app, after they have different
parts of the experience. And you can compare scores at the same point after
you update the experience.&lt;/p&gt;
&lt;p&gt;But for heaven&#39;s sake, not by just averaging them.&lt;/p&gt;
    </description>
  </item>
  
  
  <item>
    <title>
      Interesting
    </title>
    <pubDate>Fri, 06 Oct 2023 20:59:31 +0000</pubDate>
    <link>https://apenwarr.ca/log/20231006</link>
    
    <guid isPermaLink="true">https://apenwarr.ca/log/20231006</guid>
    
    <description>
    &lt;p&gt;A few conversations last week made me realize I use the word “interesting” in an unusual way.&lt;/p&gt;
&lt;p&gt;I rely heavily on mental models. Of course, everyone &lt;em&gt;relies&lt;/em&gt; on mental models. But I do it intentionally and I push it extra hard.&lt;/p&gt;
&lt;p&gt;What I mean by that is, when I’m making predictions about what will happen next, I mostly don’t look around me and make a judgement based on my immediate surroundings. Instead, I look at what I see, try to match it to something inside my mental model, and then let the mental model extrapolate what “should” happen from there.&lt;/p&gt;
&lt;p&gt;If this sounds predictably error prone: yes. It is.&lt;/p&gt;
&lt;p&gt;But it’s also powerful, when used the right way, which I try to do. Here’s my system.&lt;/p&gt;
&lt;p&gt;&lt;b&gt;Confirmation bias&lt;/b&gt;&lt;/p&gt;
&lt;p&gt;First of all, let’s acknowledge the problem with mental models: confirmation bias. Confirmation bias is the tendency of all people, including me and you, to consciously or subconsciously look for evidence to support what we already believe to be true, and try to ignore or reject evidence that disagrees with our beliefs.&lt;/p&gt;
&lt;p&gt;This is just something your brain does. If you believe you’re exempt from this, you’re wrong, and dangerously so. Confirmation bias gives you more certainty where certainty is not necessarily warranted, and we all act on that unwarranted certainty sometimes.&lt;/p&gt;
&lt;p&gt;On the one hand, we would all collapse from stress and probably die from bear attacks if we didn’t maintain some amount of certainty, even if it’s certainty about wrong things. But on the other hand, certainty about wrong things is pretty inefficient.&lt;/p&gt;
&lt;p&gt;There’s a word for the feeling of stress when your brain is working hard to ignore or reject evidence against your beliefs: cognitive dissonance. Certain Internet Dingbats have recently made entire careers talking about how to build and exploit cognitive dissonance, so I’ll try to change the subject quickly, but I’ll say this: cognitive dissonance is bad… if you don’t realize you’re having it.&lt;/p&gt;
&lt;p&gt;But your own cognitive dissonance is &lt;em&gt;amazingly useful&lt;/em&gt; if you notice the feeling and use it as a tool.&lt;/p&gt;
&lt;p&gt;&lt;b&gt;The search for dissonance&lt;/b&gt;&lt;/p&gt;
&lt;p&gt;Whether you like it or not, your brain is going to be working full time, on automatic pilot, in the background, looking for evidence to support your beliefs. But you know that; at least, you know it now because I just told you. You can be aware of this effect, but you can’t prevent it, which is annoying.&lt;/p&gt;
&lt;p&gt;But you can try to compensate for it. What that means is using the part of your brain you have control over — the supposedly rational part — to look for the opposite: things that don’t match what you believe.&lt;/p&gt;
&lt;p&gt;To take a slight detour, what’s the relationship between your beliefs and your mental model? For the purposes of this discussion, I’m going to say that mental models are a &lt;em&gt;system for generating beliefs.&lt;/em&gt; Beliefs are the output of mental models. And there’s a feedback loop: beliefs are also the things you generalize in order to produce your mental model. (Self-proclaimed “Bayesians” will know what I’m talking about here.)&lt;/p&gt;
&lt;p&gt;So let’s put it this way: your mental model, combined with current observations, produces your set of beliefs about the world and about what will happen next.&lt;/p&gt;
&lt;p&gt;Now, what happens if what you expected to happen next, doesn’t happen? Or something happens that was entirely unexpected? Or even, what if someone tells you you’re wrong and they expect something else to happen?&lt;/p&gt;
&lt;p&gt;Those situations are some of the most useful ones in the world. They’re what I mean by &lt;em&gt;interesting&lt;/em&gt;. &lt;/p&gt;
&lt;p&gt;&lt;b&gt;The “aha” moment&lt;/b&gt;&lt;/p&gt;
&lt;ul&gt;
&lt;i&gt;The most exciting phrase to hear in science, the one that heralds new discoveries, is not “Eureka!” (I found it!) but “That’s funny…”&lt;/i&gt;
&lt;br&gt;
&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;— &lt;a
href=&quot;https://quoteinvestigator.com/2015/03/02/eureka-funny/&quot;&gt;possibly&lt;/a&gt; Isaac Asimov
&lt;/ul&gt;

&lt;p&gt;When you encounter evidence that your mental model mismatches someone else’s model, that’s an exciting opportunity to compare and figure out which one of you is wrong (or both). Not everybody is super excited about doing that with you, so you have to be respectful. But the most important people to surround yourself with, at least for mental model purposes, are the ones who will talk it through with you.&lt;/p&gt;
&lt;p&gt;Or, if you get really lucky, your predictions turn out to be demonstrably concretely wrong. That’s an even bigger opportunity, because now you get to figure out what part of your mental model is mistaken, and you don’t have to negotiate with a possibly-unwilling partner in order to do it. It’s you against reality. It’s science: you had a hypothesis, you did an experiment, your hypothesis was proven wrong. Neat! Now we’re getting somewhere.&lt;/p&gt;
&lt;p&gt;What follows is then the often-tedious process of figuring out what actual thing was wrong with your model, updating the model, generating new outputs that presumably match your current observations, and then generating new hypotheses that you can try out to see if the new model works better more generally.&lt;/p&gt;
&lt;p&gt;For physicists, this whole process can sometimes take decades and require building multiple supercolliders. For most of us, it often takes less time than that, so we should count ourselves fortunate even if sometimes we get frustrated.&lt;/p&gt;
&lt;p&gt;The reason we update our model, of course, is that most of the time, the update changes a lot more predictions than just the one you’re working with right now. Turning observations back into generalizable mental models allows you to learn things you’ve never been taught; perhaps things nobody has ever learned before. That’s a superpower.&lt;/p&gt;
&lt;p&gt;&lt;b&gt;Proceeding under uncertainty&lt;/b&gt;&lt;/p&gt;
&lt;p&gt;But we still have a problem: that pesky slowness. Observing outcomes, updating models, generating new hypotheses, and repeating the loop, although productive, can be very time consuming. My guess is that’s why we didn’t evolve to do that loop most of the time. Analysis paralysis is no good when a tiger is chasing you and you’re worried your preconceived notion that it wants to eat you may or may not be correct.&lt;/p&gt;
&lt;p&gt;Let’s tie this back to business for a moment.&lt;/p&gt;
&lt;p&gt;You have evidence that your mental model about your business is not correct. For example, let’s say you have two teams of people, both very smart and well-informed, who believe conflicting things about what you should do next. That’s &lt;em&gt;interesting&lt;/em&gt;, because first of all, your mental model is that these two groups of people are very smart and make right decisions almost all the time, or you wouldn’t have hired them. How can two conflicting things be the right decision? They probably can’t. That means we have a few possibilities:&lt;/p&gt;
&lt;ol&gt;
&lt;li&gt;The first group is right&lt;/li&gt;
&lt;li&gt;The second group is right&lt;/li&gt;
&lt;li&gt;Both groups are wrong&lt;/li&gt;
&lt;li&gt;The appearance of conflict is actually not correct, because you missed something critical&lt;/li&gt;
&lt;/ol&gt;
&lt;p&gt;There is also often a fifth possibility:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Okay, it’s probably one of the first four but I don’t have time to figure that out right now&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;In that case, there’s various wisdom out there involving &lt;a href=&quot;https://www.inc.com/jeff-haden/amazon-founder-jeff-bezos-this-is-how-successful-people-make-such-smart-decisions.html&quot;&gt;one- vs two-way doors&lt;/a&gt;, and oxen pulling in different directions, and so on. But it comes down to this: almost always, it’s better to get everyone aligned to the same direction, even if it’s a somewhat wrong direction, than to have different people going in different directions.&lt;/p&gt;
&lt;p&gt;To be honest, I quite dislike it when that’s necessary. But sometimes it is, and you might as well accept it in the short term.&lt;/p&gt;
&lt;p&gt;The way I make myself feel better about it is to choose the path that will allow us to learn as much as possible, as quickly as possible, in order to update our mental models as quickly as possible (without doing &lt;em&gt;too&lt;/em&gt; much damage) so we have fewer of these situations in the future. In other words, yes, we “bias toward action” — but maybe more of a “bias toward learning.” And even after the action has started, we don’t stop trying to figure out the truth.&lt;/p&gt;
&lt;p&gt;&lt;b&gt;Being wrong&lt;/b&gt;&lt;/p&gt;
&lt;p&gt;Leaving aside many philosophers’ objections to the idea that “the truth” exists, I think we can all agree that being wrong is pretty uncomfortable. Partly that’s cognitive dissonance again, and partly it’s just being embarrassed in front of your peers. But for me, what matters more is the objective operational expense of the bad decisions we make by being wrong.&lt;/p&gt;
&lt;p&gt;You know what’s even worse (and more embarrassing, and more expensive) than being wrong? Being wrong for &lt;em&gt;even longer&lt;/em&gt; because we ignored the evidence in front of our eyes.&lt;/p&gt;
&lt;p&gt;You might have to talk yourself into this point of view. For many of us, admitting wrongness hurts more than continuing wrongness. But if you can pull off that change in perspective, you’ll be able to do things few other people can.&lt;/p&gt;
&lt;p&gt;&lt;b&gt;Bonus: Strong opinions held weakly&lt;/b&gt;&lt;/p&gt;
&lt;p&gt;Like many young naive nerds, when I first heard of the idea of “strong opinions held weakly,” I thought it was a pretty good idea. At least, clearly more productive than weak opinions held weakly (which are fine if you want to keep your job), or weak opinions held strongly (which usually keep you out of the spotlight).&lt;/p&gt;
&lt;p&gt;The real competitor to strong opinions held weakly is, of course, strong opinions held strongly. We’ve all met those people. They are supremely confident and inspiring, until they inspire everyone to jump off a cliff with them.&lt;/p&gt;
&lt;p&gt;Strong opinions held weakly, on the other hand, is really an invitation to debate. If you disagree with me, why not try to convince me otherwise? Let the best idea win.&lt;/p&gt;
&lt;p&gt;After some decades of experience with this approach, however, I eventually learned that the problem with this framing is the word “debate.” Everyone has a mental model, but not everyone wants to debate it. And if you’re really good at debating — the thing they teach you to be, in debate club or whatever — then you learn how to “win” debates without uncovering actual truth.&lt;/p&gt;
&lt;p&gt;Some days it feels like most of the Internet today is people “debating” their weakly-held strong beliefs and pulling out every rhetorical trick they can find, in order to “win” some kind of low-stakes war of opinion where there was no right answer in the first place.&lt;/p&gt;
&lt;p&gt;Anyway, I don’t recommend it, it’s kind of a waste of time. The people who want to hang out with you at the debate club are the people who already, secretly, have the same mental models as you in all the ways that matter.&lt;/p&gt;
&lt;p&gt;What’s really useful, and way harder, is to find the people who are not interested in debating you at all, and figure out why.&lt;/p&gt;
    </description>
  </item>
  
  
  <item>
    <title>
      Tech debt metaphor maximalism
    </title>
    <pubDate>Tue, 11 Jul 2023 03:12:47 +0000</pubDate>
    <link>https://apenwarr.ca/log/20230605</link>
    
    <guid isPermaLink="true">https://apenwarr.ca/log/20230605</guid>
    
    <description>
    &lt;p&gt;I really like the &quot;tech debt&quot; metaphor. A lot of people don&#39;t,
but I think that&#39;s because they either don&#39;t extend the metaphor far enough,
or because they don&#39;t properly understand financial debt.&lt;/p&gt;
&lt;p&gt;So let&#39;s talk about debt!&lt;/p&gt;
&lt;p&gt;&lt;b&gt;Consumer debt vs capital investment&lt;/b&gt;&lt;/p&gt;
&lt;p&gt;Back in school my professor, &lt;a href=&quot;http://lwsmith.ca/&quot;&gt;Canadian economics superhero Larry
Smith&lt;/a&gt;, explained debt this way (paraphrased): debt is
stupid if it&#39;s for instant gratification that you pay for later, with
interest. But debt is great if it means you can make more money than the
interest payments.&lt;/p&gt;
&lt;p&gt;A family that takes on high-interest credit card debt
for a visit to Disneyland is wasting money. If you think you can pay it off
in a year, you&#39;ll pay 20%-ish interest for that year for no reason. You can
instead save up for a year and get the same gratification next year without
the 20% surcharge.&lt;/p&gt;
&lt;p&gt;But if you want to buy a $500k machine that will earn your factory an additional
$1M/year in revenue, it would be foolish &lt;em&gt;not&lt;/em&gt; to buy it now, even with 20%
interest ($100k/year). That&#39;s a profit of $900k in just the first year!
(excluding depreciation)&lt;/p&gt;
&lt;p&gt;There&#39;s a reason profitable companies with CFOs take on debt, and often the
total debt increases rather than decreases over time. They&#39;re not idiots.
They&#39;re making a rational choice that&#39;s win-win for everyone. (The
company earns more money faster, the banks earn interest, the interest gets
paid out to consumers&#39; deposit accounts.)&lt;/p&gt;
&lt;p&gt;Debt is bad when you take out the wrong kind, or you mismanage it, or it has
weird strings attached (hello Venture Debt that requires you to put all your
savings in &lt;a href=&quot;https://www.washingtonpost.com/business/2023/03/15/svb-billions-uninsured-assets-companies/&quot;&gt;one underinsured
place&lt;/a&gt;).
But done right, debt is a way to move faster instead of slower.&lt;/p&gt;
&lt;p&gt;&lt;b&gt;High-interest vs low-interest debt&lt;/b&gt;&lt;/p&gt;
&lt;p&gt;For a consumer, the highest interest rates are for &quot;store&quot; credit cards, the
kinds issued by Best Buy or Macy&#39;s or whatever that only work in that one
store. They aren&#39;t as picky about risk (thus have more defaults) because
it&#39;s the ultimate loyalty programme: it gets people to spend more at their
store instead of other stores, in some cases because it&#39;s the only place
that would issue those people debt in the first place.&lt;/p&gt;
&lt;p&gt;The second-highest interest rate is on a general-purpose credit card like
Visa or Mastercard. They can get away with high interest rates because
they&#39;re also the payment system and so they&#39;re very convenient.&lt;/p&gt;
&lt;p&gt;(Incidentally, when I looked at the stats a decade or so ago, in Canada
credit cards make &lt;em&gt;most&lt;/em&gt; of their income on payment fees because Canadians
are annoyingly persistent about paying off their cards; in the US it&#39;s the
opposite. The rumours are true: Canadians really are more cautious about
spending.)&lt;/p&gt;
&lt;p&gt;If you have a good credit rating, you can get better interest rates on a
bank-issued &quot;line of credit&quot; (LOC) (lower interest rate, but less convenient
than a card). In Canada, one reason many people pay off their credit card
each month is simply that they transfer the balance to a lower-interest LOC.&lt;/p&gt;
&lt;p&gt;Even lower interest rates can be obtained if you&#39;re willing to provide
collateral: most obviously, the equity in your home. This greatly reduces
the risk for the lender because they can repossess and then resell your home
if you don&#39;t pay up. Which is pretty good for them even if you don&#39;t pay,
but what&#39;s better is it makes you much more likely to pay rather
than lose your home.&lt;/p&gt;
&lt;p&gt;Some people argue that you should almost never plan to pay off your
mortgage: typical mortgage interest rates are lower than the rates you&#39;d get
long-term from investing in the S&amp;amp;P. The advice that you should &quot;always buy
the biggest home you can afford&quot; is often perversely accurate, especially if
you believe property values will keep going up. It&#39;s also subject to your
risk tolerance and lock-in preferences.&lt;/p&gt;
&lt;p&gt;What&#39;s the pattern here? Just this: high-interest debt is quick and
convenient but you should pay it off quickly. Sometimes you pay it off just
by converting to longer-term lower-rate debt. Sometimes debt is
collateralized and sometimes it isn&#39;t.&lt;/p&gt;
&lt;p&gt;&lt;b&gt;High-interest and low-interest tech debt&lt;/b&gt;&lt;/p&gt;
&lt;p&gt;Bringing that back to tech debt: a simple kind of high-interest short-term
debt would be committing code without tests or documentation. Yay, it works,
ship it! And truthfully, maybe you should, because the revenue (and customer
feedback) you get from shipping fast can outweigh how much more bug-prone
you made the code in the short term.&lt;/p&gt;
&lt;p&gt;But like all high-interest debt, you should plan to pay it back fast. Tech
debt generally manifests as a slowdown in your development velocity (ie.
overhead on everything else you do), which means fewer features
launched in the medium-long term, which means less revenue and customer
feedback.&lt;/p&gt;
&lt;p&gt;Whoa, weird, right? This short-term high-interest debt both &lt;em&gt;increases&lt;/em&gt;
revenue and feedback rate, and &lt;em&gt;decreases&lt;/em&gt; it. Why?&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;If you take a single pull request (PR) that adds a new feature, and launch
  it without tests or documentation, you will definitely get the benefits of
  that PR sooner.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Every PR you try to write after that, before adding the tests and docs
  (ie. repaying the debt) will be slower because you risk creating
  undetected bugs or running into undocumented edge cases.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;If you take a long time to pay off the debt, the slowdown in future
  launches will outweigh the speedup from the first launch.&lt;/p&gt;
&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;This is exactly how CFOs manage corporate financial debt. Debt is a drain on
your revenues; the thing you did to incur the debt is a boost to your
revenues; if you take too long to pay back the debt, it&#39;s an overall loss.&lt;/p&gt;
&lt;p&gt;CFOs can calculate that. Engineers don&#39;t like to. (Partly because tech debt
is less quantifiable. And partly because engineers are the sort of people who
pay off their loans sooner than they mathematically should, as a matter of
principle.)&lt;/p&gt;
&lt;p&gt;&lt;b&gt;Debt ceilings&lt;/b&gt;&lt;/p&gt;
&lt;p&gt;The US government has imposed a &lt;a href=&quot;https://www.reuters.com/world/us/biden-signs-bill-lifting-us-debt-limit-2023-06-03/&quot;&gt;famously ill-advised debt
ceiling&lt;/a&gt;
on itself, which mainly serves to cause drama and create a great place to
push through unrelated riders that nobody will read, because the bill to
raise the debt ceiling will always pass.&lt;/p&gt;
&lt;p&gt;Real-life debt ceilings are defined by your creditworthiness: banks simply
will not lend you more money if you&#39;ve got so much outstanding debt that
they don&#39;t believe you can handle the interest payments. That&#39;s your credit
limit, or the largest mortgage they&#39;ll let you have.&lt;/p&gt;
&lt;p&gt;Banks take a systematic approach to calculating the debt ceiling for each
client. How much can we lend you so that you take out the biggest loan you
possibly can, thus paying as much interest as possible, without starving to
death or (even worse) missing more than two consecutive payments? Also,
morbidly but honestly, since debts are generally not passed down to your
descendants, they would like you to be able to just barely pay it all off
(perhaps by selling off all your assets) right before you kick the bucket.&lt;/p&gt;
&lt;p&gt;They can math this, they&#39;re good at it. Remember, they don&#39;t want you to pay
it off early. If you have leftover money you might use it to pay down your
debt. That&#39;s no good, because less debt means lower interest payments.
They&#39;d rather you incur even more debt, then use that leftover monthly
income for even bigger interest payments. That&#39;s when you&#39;re trapped.&lt;/p&gt;
&lt;p&gt;The equivalent in tech debt is when you are so far behind that you can
barely keep the system running with no improvements at all; the perfect
balance. If things get worse over time, you&#39;re underwater and will
eventually fail. But if you reach this zen state of perfect equilibrium, you
can keep going forever, running in place. That&#39;s your tech debt ceiling.&lt;/p&gt;
&lt;p&gt;Unlike in the banking world, I can&#39;t think of a way to anthropomorphize a
villain who wants you to go that far into debt. Maybe the CEO? Or maybe
someone who is trying to juice revenues for a well-timed acquisition.
Private Equity firms also specialize in maximizing both financial and
technical debt so they can extract the assets while your company slowly
dies.&lt;/p&gt;
&lt;p&gt;Anyway, both in finance and tech, you want to stay well away from your
credit limit.&lt;/p&gt;
&lt;p&gt;&lt;b&gt;Debt to income ratios&lt;/b&gt;&lt;/p&gt;
&lt;p&gt;There are many imperfect rules of thumb for how much debt is healthy.
(Remember, some debt is very often healthy, and only people who don&#39;t
understand debt rush to pay it all off as fast as they can.)&lt;/p&gt;
&lt;p&gt;One measure is the debt to income ratio (or for governments, the
debt to GDP ratio). The problem with debt-to-income is debt and income are two
different things. The first produces a mostly-predictable repayment cost
spread over an undefined period of time; the other is a
possibly-fast-changing benefit measured annually. One is an amount, the
other is a rate.&lt;/p&gt;
&lt;p&gt;It would be better to measure interest payments as a fraction of revenue. At
least that encompasses the distinction between high-interest and
low-interest loans. And it compares two cashflow rates rather
than the nonsense comparison of a balance sheet measure vs a cashflow
measure. Banks love interest-to-income ratios; that&#39;s why your income level
has such a big impact on your debt ceiling.&lt;/p&gt;
&lt;p&gt;In the tech world, the interest-to-income equivalent is how much time you
spend dealing with overhead compared to building new revenue-generating
features. Again, getting to zero overhead is probably not worth it. I like
this &lt;a href=&quot;https://xkcd.com/1205/&quot;&gt;xkcd explanation&lt;/a&gt; of what is and is not worth
the time:&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;https://imgs.xkcd.com/comics/is_it_worth_the_time.png&quot;&gt;&lt;/p&gt;
&lt;p&gt;Tech debt, in its simplest form, is the time you didn&#39;t spend making tasks
more efficient. When you think of it that way, it&#39;s obvious that zero tech
debt is a silly choice.&lt;/p&gt;
&lt;p&gt;(Note that the interest-to-income ratio in this formulation has nothing to
do with financial income. &quot;Tech income&quot; in our metaphor is feature
development time, where &quot;tech debt&quot; is what eats up your development time.)&lt;/p&gt;
&lt;p&gt;(Also note that by this definition, nowadays tech stacks are so big, complex,
and irritable that every project starts with a giant pile of someone else&#39;s
tech debt on day 1. Enjoy!)&lt;/p&gt;
&lt;p&gt;&lt;b&gt;Debt to equity ratios&lt;/b&gt;&lt;/p&gt;
&lt;p&gt;Interest-to-income ratios compare two items from your cashflow statement.
Debt-to-equity ratios compare two items from your balance sheet. Which means
they, too, are at least not nonsense.&lt;/p&gt;
&lt;p&gt;&quot;Equity&quot; is unfortunately a lot fuzzier than income. How much is your
company worth? Or your product? The potential value of a factory isn&#39;t just
the value of the machines inside it; it&#39;s the amortized income stream you
(or a buyer) could get from continuing to operate that factory. Which means
it includes the built-up human and business expertise needed to operate the
factory.&lt;/p&gt;
&lt;p&gt;And of course, software is even worse; as many of us know but few
businesspeople admit, the value of proprietary software without the people
is zero. This is why you hear about acqui-hires (humans create value even if
they might quit tomorrow) but never about acqui-codes (code without
humans is worthless).&lt;/p&gt;
&lt;p&gt;Anyway, for a software company the &quot;equity&quot; comes from a variety of factors.
In the startup world, Venture Capitalists are -- and I know this is
depressing -- the best we have for valuing company equity. They are, of
course, not very good at it, but they make it up in volume. As software
companies get more mature, valuation becomes more quantifiable and comes
back to expectations for the future cashflow statement.&lt;/p&gt;
&lt;p&gt;Venture Debt is typically weighted heavily on equity (expected future value)
and somewhat less on revenue (ability to pay the interest).&lt;/p&gt;
&lt;p&gt;As the company builds up assets and shows faster growth, the assumed
equity value gets bigger and bigger. In the financial world, that means
people are willing to issue more debt.&lt;/p&gt;
&lt;p&gt;(Over in the consumer world: your home is equity. That&#39;s why you can get a
huge mortgage on a house but your unsecured loan limit is much smaller. So
Venture Debt is like a mortgage.)&lt;/p&gt;
&lt;p&gt;Anyway, back to tech debt: the debt-to-equity ratio is how much tech debt
you&#39;ve taken on compared to the accumulated value, and future growth rate,
of your product quality. If your product is acquiring lots of customers
fast, you can afford to take on more tech debt so you can acquire more
customers even faster.&lt;/p&gt;
&lt;p&gt;What&#39;s weirder is that as the absolute value of product equity increases,
you can take on a larger and larger absolute value of tech debt.&lt;/p&gt;
&lt;p&gt;That feels unexpected. If we&#39;re doing so well, why would we want to take on
&lt;em&gt;more&lt;/em&gt; tech debt? But think of it this way: if your product (thus company)
are really growing that fast, you will have more people to pay down the tech
debt next year than you do now. In theory, you could even take on so much
tech debt this year that your current team can&#39;t even pay the interest...&lt;/p&gt;
&lt;p&gt;...which brings us to leverage. And risk.&lt;/p&gt;
&lt;p&gt;&lt;b&gt;Leverage risk&lt;/b&gt;&lt;/p&gt;
&lt;p&gt;Earlier in this article, I mentioned the popular (and surprisingly, often
correct!) idea that you should &quot;buy the biggest house you can afford.&quot; Why
would I want a bigger house? My house is fine. I have a big enough house.
How is this good advice?&lt;/p&gt;
&lt;p&gt;The answer is the amazing multiplying power of leverage.&lt;/p&gt;
&lt;p&gt;Let&#39;s say housing goes up at 5%/year. (I wish it didn&#39;t because this rate is
fabulously unsustainable. But bear with me.)
And let&#39;s say you have $100k in savings and $100k in annual
income.&lt;/p&gt;
&lt;p&gt;You could pay cash and buy a house for $100k. Woo hoo, no mortgage! And
it&#39;ll go up in value by about $5k/year, which is not bad I guess.&lt;/p&gt;
&lt;p&gt;Or, you could buy a $200k house: a $100k down payment and a $100k mortgage
at, say, 3% (fairly common back in 2021), which means $3k/year
in interest. But your $200k house goes up by 5% = $10k/year. Now you have an
annual gain of $10k - $3k = $7k, much more than the $5k you were making
before, with the same money. Sweet!&lt;/p&gt;
&lt;p&gt;But don&#39;t stop there. If the bank will let you get away with it, why not a
$1M house with a $100k down payment? That&#39;s $1M x 5% = +$50k/year in value,
and $900k x 3% = $27k in interest, so a solid $23k in annual (unrealized)
capital gain. From the same initial bank balance! Omg we&#39;re printing money.&lt;/p&gt;
&lt;p&gt;(Obviously we&#39;re omitting maintenance costs and property tax here. Forgive
me. On the other hand, presumably you&#39;re getting intangible value from
living in a much bigger and fancier house. $AAPL shares don&#39;t have skylights
and rumpus rooms and that weird statue in bedroom number seven.)&lt;/p&gt;
&lt;p&gt;What&#39;s the catch? Well, the catch is massively increasing risk.&lt;/p&gt;
&lt;p&gt;Let&#39;s say you lose your job and can&#39;t afford interest payments. If you
bought your $100k house with no mortgage, you&#39;re in luck: that house is
yours, free and clear. You might not have food but you have a place to live.&lt;/p&gt;
&lt;p&gt;If you bought the $1M house and have $900k worth of mortgage payments to
keep up, you&#39;re screwed. Get another job or get ready to move out and
disrupt your family and change everything about your standard of living, up
to and possibly including bankruptcy, which we&#39;ll get to in a bit.&lt;/p&gt;
&lt;p&gt;Similarly, let&#39;s imagine that your property value stops increasing, or (less
common in the US for stupid reasons, but common everywhere else) mortgage
rates go up. The leverage effect multiplies your potential losses just like
it multiplies your potential gains.&lt;/p&gt;
&lt;p&gt;Back to tech debt. What&#39;s the analogy?&lt;/p&gt;
&lt;p&gt;Remember that idea I had above, of incurring extra tech debt this year to
keep the revenue growth rolling, and then planning to pay it off next year
with the newer and bigger team? Yeah, that actually works... if you keep
growing. If you estimated your tech debt interest rate correctly. If that
future team materializes. (If you can even motivate that future team to work
on tech debt.) If you&#39;re rational, next year, about whether you borrow more
or not.&lt;/p&gt;
&lt;p&gt;That thing I said about the perfect equilibrium running-in-place state, where
you spend all your time just keeping the machine operating and you have no
time to make it better? How do so many companies get themselves into that
state? In a word, leverage. They guessed wrong. The growth rate fell off,
the new team members didn&#39;t materialize or didn&#39;t ramp up fast enough.&lt;/p&gt;
&lt;p&gt;And if you go past equilibrium, you get the worst case: your tech debt
interest is greater than your tech production (income). Things get worse and
worse and you enter the downward spiral. This is where desperation sets in.
The only remaining option is &lt;strike&gt;bankruptcy&lt;/strike&gt; Tech Debt
Refinancing.&lt;/p&gt;
&lt;p&gt;&lt;b&gt;Refinancing&lt;/b&gt;&lt;/p&gt;
&lt;p&gt;Most people who can&#39;t afford the interest on their loans don&#39;t declare
bankruptcy. The step before that is to make an arrangement with your
creditors to lower your interest payments. Why would they accept such an
agreement? Because if they don&#39;t, you&#39;ll declare bankruptcy, which is annoying
for you but hugely unprofitable for them.&lt;/p&gt;
&lt;p&gt;The tech metaphor for refinancing is &lt;em&gt;premature deprecation&lt;/em&gt;. Yes, people
love both service A and service B. Yes, we are even running both services at
financial breakeven. But they are slipping, slipping, getting a little worse
every month and digging into a hole that I can&#39;t escape. In order to pull
out of this, I have to stop my payments on A so I can pay back more of B; by
then A will be unrecoverably broken. But at least B will live on, to fight
another day.&lt;/p&gt;
&lt;p&gt;Companies do this all the time. Even at huge profitable companies, in some
corners you&#39;ll occasionally find an understaffed project sliding deeper and
deeper into tech debt. Users may still love it, and it may even be net
profitable, but not profitable enough to pay for the additional engineering
time to dig it out. Such a project is destined to die, and the only
question is when. The answer is &quot;whenever some executive finally notices.&quot;&lt;/p&gt;
&lt;p&gt;&lt;b&gt;Bankruptcy&lt;/b&gt;&lt;/p&gt;
&lt;p&gt;The tech bankruptcy metaphor is an easy one: if refinancing doesn&#39;t work and
your tech debt continues to spiral downward, sooner or later your finances
will follow. When you run out of money you declare bankruptcy; what&#39;s
interesting is your tech debt disappears at the same time your financial
debt does.&lt;/p&gt;
&lt;p&gt;This is a really important point. You can incur all the tech debt in the
world, and while your company is still operating, you at least have some
chance of someday paying it back. When your company finally dies, you will
find yourself off the hook; the tech debt never needs to be repaid.&lt;/p&gt;
&lt;p&gt;Okay, for those of us grinding away at code all day, perhaps that sounds
perversely refreshing. But it explains lots of corporate behaviour. The more
desperate a company gets, the less they care about tech debt. &lt;em&gt;Anything&lt;/em&gt; to
turn a profit. They&#39;re not wrong to do so, but you can see how the downward
spiral begins to spiral downward. The more tech debt you incur, the slower
your development goes, and the harder it is to do something productive that
might make you profitable. You might still pull it off! But your luck will
get progressively worse.&lt;/p&gt;
&lt;p&gt;The reverse is also true. When your company is doing well, you have time to
pay back tech debt, or at least to control precisely how much debt you take
on and when. To maintain your interest-to-income ratio or debt-to-equity
ratio at a reasonable level.&lt;/p&gt;
&lt;p&gt;When you see a company managing their tech debt carefully, you see a company
that is planning for the long term rather than a quick exit. Again, that
doesn&#39;t mean paying it all back. It means being careful.&lt;/p&gt;
&lt;p&gt;&lt;b&gt;Student loans that are non-dischargeable in bankruptcy&lt;/b&gt;&lt;/p&gt;
&lt;p&gt;Since we&#39;re here anyway talking about finance, let&#39;s talk about the idiotic
US government policy of guaranteeing student loans, but also not allowing
people to discharge those loans (ie. zero them out) in bankruptcy.&lt;/p&gt;
&lt;p&gt;What&#39;s the effect of this? Well, of course, banks are extremely eager to
give these loans out to anybody, at any scale, as fast as they can, because
they can&#39;t lose. They have all the equity of the US government to back them
up. The debt-to-equity ratio is effectively zero.&lt;/p&gt;
&lt;p&gt;And of course, people who don&#39;t understand finance (which they don&#39;t teach
you until university; catch-22!) take on lots of these loans in the hope of
making money in the future.&lt;/p&gt;
&lt;p&gt;Since anyone who wants to go to university can get a student loan,
American universities keep raising their tuition until they find the maximum amount
that lenders are willing to lend (unlimited!) or foolish borrowers are
willing to borrow in the name of the American Dream (so far we haven&#39;t found
the limit).&lt;/p&gt;
&lt;p&gt;Where was I? Oh right, tech metaphors.&lt;/p&gt;
&lt;p&gt;Well, there are two parts here. First, unlimited access to money. The tech
world has had plenty of that, prior to the 2022 crash anyway. The
result is they hired way too many engineers (students) who did a lot of dumb
stuff (going to school) and incurred a lot of tech debt (student loans) that
they promised to pay back later when their team got bigger (they earned
their Bachelor&#39;s degree and got a job), which unfortunately didn&#39;t
materialize. Oops. They are worse off than if they had skipped all that.&lt;/p&gt;
&lt;p&gt;Second, inability to discharge the debt in bankruptcy. Okay, you got me.
Maybe we&#39;ve come to the end of our analogy. Maybe US government policies
actually, and this is quite an achievement, manage to be even dumber than
tech company management. In this one way. Maybe.&lt;/p&gt;
&lt;p&gt;OR MAYBE YOU &lt;a href=&quot;/log/20091224&quot;&gt;OPEN SOURCED WVDIAL&lt;/a&gt; AND PEOPLE STILL EMAIL YOU
FOR HELP DECADES AFTER YOUR FIRST STARTUP IS LONG GONE.&lt;/p&gt;
&lt;p&gt;Um, sorry for that outburst. I have no idea where that came from.&lt;/p&gt;
&lt;p&gt;&lt;b&gt;Bonus note: bug bankruptcy&lt;/b&gt;&lt;/p&gt;
&lt;p&gt;While we&#39;re here exploring financial metaphors, I might as well say
something about bug bankruptcy. Although I &lt;a href=&quot;/log/20171213&quot;&gt;have been known to make fun of
bug bankruptcy&lt;/a&gt;, it too is an excellent metaphor, but only if
you take it far enough.&lt;/p&gt;
&lt;p&gt;For those who haven&#39;t heard of this concept, bug bankruptcy happens when
your bug tracking database is so full of bugs that you give up and delete
them all and start over (&quot;declare bankruptcy&quot;).&lt;/p&gt;
&lt;p&gt;Like financial bankruptcy, it is very tempting: I have this big pile of
bills. Gosh, it is a big pile. Downright daunting, if we&#39;re honest. Chances
are, if I opened all these bills, I would find out that I owe more money
than I have, and moreover, next month a bunch more bills will come and I
won&#39;t be able to pay them either and this is hopeless. That would be
stressful. My solution, therefore, is to throw all the bills in the
dumpster, call up my friendly neighbourhood bankruptcy trustee, and
conveniently discharge all my debt once and for all.&lt;/p&gt;
&lt;p&gt;Right?&lt;/p&gt;
&lt;p&gt;Well, not so fast, buddy. Bankruptcy has consequences. First of all, it&#39;s
kind of annoying to arrange legally. Secondly, it sits on your financial
records for like 7 years afterwards, during which time probably nobody will
be willing to issue you any loans, because you&#39;re empirically the kind of
person who does not pay back their loans.&lt;/p&gt;
&lt;p&gt;And that, my friends, is also how bug bankruptcy works. Although the process
for declaring it is easier -- no lawyers or trustees required! -- the
long-term destruction of trust is real. If you run a project in which a lot
of people spent a bunch of effort filing and investigating bugs (ie. lent
you their time in the hope that you&#39;ll pay it back by fixing the bugs
later), and you just close them all wholesale, you can expect that those
people will eventually stop filing bugs. Which, you know, admittedly feels
better, just like the hydro company not sending you bills anymore feels
better until winter comes and your heater doesn&#39;t work and you can&#39;t figure
out why and you eventually remember &quot;oh, I think someone said this might
happen but I forget the details.&quot;&lt;/p&gt;
&lt;p&gt;Anyway, yes, you can do it. But refinancing is better.&lt;/p&gt;
&lt;p&gt;&lt;b&gt;Email bankruptcy&lt;/b&gt;&lt;/p&gt;
&lt;p&gt;Email bankruptcy is similar to bug bankruptcy, with one important
distinction: nobody ever expected you to answer your email anyway. I&#39;m
honestly not sure why people keep sending them.&lt;/p&gt;
&lt;p&gt;ESPECIALLY EMAILS ABOUT WVDIAL where does that voice keep coming from&lt;/p&gt;
    </description>
  </item>
  
</channel>
</rss>
Raw headers
{
  "cf-cache-status": "DYNAMIC",
  "cf-ray": "9c5087537751724e-CMH",
  "connection": "keep-alive",
  "content-type": "text/xml; charset=UTF-8",
  "date": "Wed, 28 Jan 2026 12:35:15 GMT",
  "etag": "W/\"f0642961f87415adc57a2c79442af169f292429d\"",
  "server": "cloudflare",
  "strict-transport-security": "max-age=63072000",
  "transfer-encoding": "chunked",
  "vary": "accept-encoding",
  "x-content-type-options": "nosniff"
}
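Since the response includes an ETag but no Last-Modified header, a feed reader can still avoid re-downloading the full feed by doing a conditional GET with If-None-Match. A minimal sketch in Python, assuming the server honours If-None-Match for this weak ETag (not verified here):

import urllib.request, urllib.error

req = urllib.request.Request(
    "https://apenwarr.ca/log/rss.php",
    headers={"If-None-Match": 'W/"f0642961f87415adc57a2c79442af169f292429d"'},
)
try:
    with urllib.request.urlopen(req) as resp:
        body = resp.read()   # 200 OK: the feed changed, re-parse it
except urllib.error.HTTPError as e:
    if e.code == 304:
        pass                 # 304 Not Modified: keep the cached copy
    else:
        raise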
Parsed with @rowanmanning/feed-parser
{
  "meta": {
    "type": "rss",
    "version": "2.0"
  },
  "language": "en-ca",
  "title": "apenwarr",
  "description": "apenwarr - NITLog",
  "copyright": null,
  "url": "https://apenwarr.ca/log/",
  "self": null,
  "published": null,
  "updated": null,
  "generator": {
    "label": "PyNITLog",
    "version": null,
    "url": null
  },
  "image": null,
  "authors": [],
  "categories": [],
  "items": [
    {
      "id": "https://apenwarr.ca/log/20251120",
      "title": "Systems design 3: LLMs and the semantic revolution",
      "description": "<p>Long ago in the 1990s when I was in high school, my chemistry+physics\nteacher pulled me aside. \"Avery, you know how the Internet works, right? I\nhave a question.\"</p>\n<p>I now know the correct response to that was, \"Does anyone <em>really</em> know how\nthe Internet works?\" But as a naive young high schooler I did not have that\nlevel of self-awareness. (Decades later, as a CEO, that's my answer to\nalmost everything.)</p>\n<p>Anyway, he asked his question, and it was simple but deep. How do they make\nall the computers connect?</p>\n<p>We can't even get the world to agree on 60 Hz vs 50 Hz, 120V vs 240V, or\nwhich kind of physical power plug to use. Communications equipment uses way\nmore frequencies, way more voltages, way more plug types. Phone companies\nmanaged to federate with each other, eventually, barely, but the ring tones\nwere different everywhere, there was pulse dialing and tone dialing, and\nsome of them <em>still</em> charge $3/minute for international long distance, and\nconnections take a long time to establish and humans seem to be involved in\nsuspiciously many places when things get messy, and every country has a\ndifferent long-distance dialing standard and phone number format.</p>\n<p>So Avery, he said, now they're telling me every computer in the world can\nconnect to every other computer, in milliseconds, for free, between Canada\nand France and China and Russia. And they all use a single standardized\naddress format, and then you just log in and transfer files and stuff? How?\nHow did they make the whole world cooperate? And who?</p>\n<p>When he asked that question, it was a formative moment in my life that I'll\nnever forget, because as an early member of what would be the first Internet\ngeneration…  I Had Simply Never Thought of That.</p>\n<p>I mean, I had to stop and think for a second. Wait, is protocol\nstandardization even a hard problem? Of course it is. Humans can't agree on\nanything. We can't agree on a unit of length or the size of a pint, or which\nside of the road to drive on. Humans in two regions of Europe no farther\napart than Thunder Bay and Toronto can't understand each other's speech. But\nthis Internet thing just, kinda, worked.</p>\n<p>\"There's… a layer on top,\" I uttered, unsatisfyingly. Nobody had taught me\nyet that the OSI stack model existed, let alone that it was at best a weak\nexplanation of reality.</p>\n<p>\"When something doesn't talk to something else, someone makes an adapter.\nUh, and some of the adapters are just programs rather than physical things.\nIt's not like everyone in the world agrees. But as soon as one person makes\nan adapter, the two things come together.\"</p>\n<p>I don't think he was impressed with my answer. Why would he be? Surely\nnothing so comprehensively connected could be engineered with no central\narchitecture, by a loosely-knit cult of mostly-volunteers building an\nendless series of whimsical half-considered \"adapters\" in their basements\nand cramped university tech labs. Such a creation would be a monstrosity,\njust as likely to topple over as to barely function.</p>\n<p>I didn't try to convince him, because honestly, how could I know? But the\nquestion has dominated my life ever since.</p>\n<p>When things don't connect, why don't they connect? When they do, why? How?\n…and who?</p>\n<p><strong>Postel's Law</strong></p>\n<p>The closest clue I've found is this thing called Postel's Law, one of the\nfoundational principles of the Internet. 
It was best stated by one of the\nfounders of the Internet, Jon Postel. \"Be conservative in what you send, and\nliberal in what you accept.\"</p>\n<p>What it means to me is, if there's a standard, do your best to follow it,\nwhen you're sending. And when you're receiving, uh, assume the best\nintentions of your counterparty and do your best and if that doesn't work,\nguess.</p>\n<p>A rephrasing I use sometimes is, \"It takes two to miscommunicate.\"\nCommunication works best and most smoothly if you have a good listener and a\nclear speaker, sharing a language and context. But it can still bumble along\nsuccessfully if you have a poor speaker with a great listener, or even a\ngreat speaker with a mediocre listener. Sometimes you have to say the same\nthing five ways before it gets across (wifi packet retransmits), or ask way\ntoo many clarifying questions, but if one side or the other is diligent\nenough, you can almost always make it work.</p>\n<p>This asymmetry is key to all high-level communication. It makes network bugs\nmuch less severe. Without Postel's Law, triggering a bug in the sender would\nbreak the connection; so would triggering a bug in the receiver. With\nPostel's Law, we acknowledge from the start that there are always bugs and\nwe have twice as many chances to work around them. Only if you trigger both\nsets of bugs at once is the flaw fatal.</p>\n<p>…So okay, if you've used the Internet, you've probably observed that fatal\nconnection errors are nevertheless pretty common. But that misses how\n<em>incredibly much more common</em> they would be in a non-Postel world. That\nworld would be the one my physics teacher imagined, where nothing ever works\nand it all topples over.</p>\n<p>And we know that's true because we've tried it. Science! Let us digress.</p>\n<p><strong>XML</strong></p>\n<p>We had the Internet (\"OSI Layer 3\") mostly figured out by the time my era\nbegan in the late 1900s, but higher layers of the stack still had work to\ndo. It was the early days of the web. We had these newfangled hypertext\n(\"HTML\") browsers that would connect to a server, download some stuff, and\nthen try their best to render it.</p>\n<p>Web browsers are and have always been an epic instantiation of Postel's Law.\nFrom the very beginning, they assumed that the server (content author) had\nabsolutely no clue what they were doing and did their best to apply some\nkind of meaning on top, despite every indication that this was a lost cause.\nList items that never end? Sure. Tags you've never heard of? Whatever.\nForgot some semicolons in your javascript? I'll interpolate some. Partially\noverlapping italics and bold? Leave it to me. No indication what language or\nencoding the page is in? I'll just guess.</p>\n<p>The evolution of browsers gives us some insight into why Postel's Law is a\nlaw and not just, you know, Postel's Advice. The answer is: competition. It\nworks like this. If your browser interprets someone's mismash subjectively\nbetter than another browser, your browser wins.</p>\n<p>I think economists call this an iterated prisoner's dilemma. Over and over,\npeople write web pages (defect) and browsers try to render them (defect) and\nabsolutely nobody actually cares what the HTML standard says (stays loyal).\nBecause if there's a popular page that's wrong and you render it \"right\" and\nit doesn't work? 
Straight to jail.</p>\n<p>(By now almost all the evolutionary lines of browsers have been sent to\njail, one by one, and the HTML standard is effectively whatever Chromium and\nSafari say it is. Sorry.)</p>\n<p>This law offends engineers to the deepness of their soul. We went through a\nperiod where loyalists would run their pages through \"validators\" and\nproudly add a logo to the bottom of their page saying how valid their HTML\nwas. Browsers, of course, didn't care and continued to try their best.</p>\n<p>Another valiant effort was the definition of \"quirks mode\": a legacy\nrendering mode meant to document, normalize, and push aside all the legacy\nwonko interpretations of old web pages. It was paired with a new,\nstandards-compliant rendering mode that everyone was supposed to agree on,\nstarting from scratch with an actual written spec and tests this time, and\npublic shaming if you made a browser that did it wrong. Of course, outside\nof browser academia, nobody cares about the public shaming and everyone\ncares if your browser can render the popular web sites, so there are still\nplenty of quirks outside quirks mode. It's better and it was well worth the\neffort, but it's not all the way there. It never can be.</p>\n<p>We can be sure it's not all the way there because there was another exciting\ndevelopment, HTML Strict (and its fancier twin, XHTML), which was meant to\nbe the same thing, but with a special feature. Instead of sending browsers\nto jail for rendering wrong pages wrong, we'd send page authors to jail for\nwriting wrong pages!</p>\n<p>To mark your web page as HTML Strict was a vote against the iterated\nprisoner's dilemma and Postel's Law. No, your vote said. No more. We cannot\naccept this madness. We are going to be Correct. I certify this page is\ncorrect. If it is not correct, you must sacrifice me, not all of society. My\nhonour demands it.</p>\n<p>Anyway, many page authors were thus sacrificed and now nobody uses HTML\nStrict. Nobody wants to do tech support for a web page that asks browsers to\ncrash when parsing it, when you can just… not do that.</p>\n<p><strong>Excuse me, the above XML section didn't have any XML</strong></p>\n<p>Yes, I'm getting to that. (And you're soon going to appreciate that meta\njoke about schemas.)</p>\n<p>In parallel with that dead branch of HTML, a bunch of people had realized\nthat, more generally, HTML-like languages (technically SGML-like languages)\nhad turned out to be a surprisingly effective way to build interconnected\ndata systems.</p>\n<p>In retrospect we now know that the reason for HTML's resilience is Postel's\nLaw. It's simply easier to fudge your way through parsing incorrect\nhypertext, than to fudge your way through parsing a Microsoft Word or Excel\nfile's hairball of binary OLE streams, which famously even Microsoft at one\npoint lost the knowledge of how to parse. But, that Postel's Law connection\nwasn't really understood at the time.</p>\n<p>Instead we had a different hypothesis: \"separation of structure and\ncontent.\" Syntax and semantics. Writing software to deal with structure is\nrepetitive overhead, and content is where the money is. Let's automate away\nthe structure so you can spend your time on the content: semantics.</p>\n<p>We can standardize the syntax with a single Extensible Markup Language\n(XML). Write your content, then \"mark it up\" by adding structure right in\nthe doc, just like we did with plaintext human documents. Data, plus\nself-describing metadata, all in one place. 
Never write a parser again!</p>\n<p>Of course, with 20/20 hindsight (or now 2025 hindsight), this is laughable.\nYes, we now have XML parser libraries. If you've ever tried to use one, you\nwill find they indeed produce parse trees automatically… if you're lucky. If\nyou're not lucky, they produce a stream of \"tokens\" and leave it to you to\nfigure out how to arrange it in a tree, for reasons involving streaming,\nperformance, memory efficiency, and so on. Basically, if you use XML you now\nhave to <em>deeply</em> care about structure, perhaps more than ever, but you also\nhave to include some giant external parsing library that, left in its normal\nmode, <a href=\"https://cheatsheetseries.owasp.org/cheatsheets/XML_External_Entity_Prevention_Cheat_Sheet.html\">might spontaneously start making a lot of uncached HTTP requests that\ncan also exploit remote code execution vulnerabilities haha\noops</a>.</p>\n<p>If you've ever taken a parser class, or even if you've just barely tried to\nwrite a parser, you'll know the truth: the value added by outsourcing\n<em>parsing</em> (or in some cases only tokenization) is not a lot. This is because\nalmost all the trouble of document processing (or compiling) is the\n<em>semantic</em> layer, the part where you make sense of the parse tree. The part\nwhere you just read a stream of characters into a data structure is the\ntrivial, well-understood first step.</p>\n<p>Now, semantics is where it gets interesting. XML was all about separating\nsyntax from semantics. And they did some pretty neat stuff with that\nseparation, in a computer science sense. XML is neat because it's such a\nregular and strict language that you can completely <em>validate</em> the syntax\n(text and tags) without knowing what any of the tags <em>mean</em> or which tags\nare intended to be valid at all.</p>\n<p>…aha! Did someone say <em>validate?!</em> Like those old HTML validators we\ntalked about? Oh yes. Yes! And this time the validation will be completely\nstrict and baked into every implementation from day 1. And, the language\nsyntax itself will be so easy and consistent to validate (unlike SGML and\nHTML, which are, in all fairness, bananas) that nobody can possibly screw it\nup.</p>\n<p>A layer on top of this basic, highly validatable XML, was a thing called XML\nSchemas. These were documents (mysteriously not written in XML) that\ndescribed which tags were allowed in which places in a certain kind of\ndocument. Not only could you parse and validate the basic XML syntax, you\ncould also then validate its XML schema as a separate step, to be totally\nsure that every tag in the document was allowed where it was used, and\npresent if it was required. And if not? Well, straight to jail. We all\nagreed on this, everyone. Day one. No exceptions. Every document validates.\nStraight to jail.</p>\n<p>Anyway XML schema validation became an absolute farce. Just parsing or\nunderstanding, let alone writing, the awful schema file format is an\nunpleasant ordeal. To say nothing of complying with the schema, or (heaven\nforbid) obtaining a copy of someone's custom schema and loading it into the\nvalidator at the right time.</p>\n<p>The core XML syntax validation was easy enough to do while parsing.\nUnfortunately, in a second violation of Postel's Law, almost no software\nthat <em>outputs</em> XML runs it through a validator before sending. I mean, why\nwould they, the language is highly regular and easy to generate and thus the\noutput is already perfect. 
…Yeah, sure.</p>\n<p>Anyway we all use JSON now.</p>\n<p><strong>JSON</strong></p>\n<p>Whoa, wait! I wasn't done!</p>\n<p>This is the part where I note, for posterity's sake, that XML became a\ndecade-long fad in the early 2000s that justified billions of dollars of\nsoftware investment. None of XML's technical promises played out; it is a\nstain on the history of the computer industry. But, a lot of legacy software\ngot un-stuck because of those billions of dollars, and so we did make\nprogress.</p>\n<p>What was that progress? Interconnection.</p>\n<p>Before the Internet, we kinda didn't really need to interconnect software\ntogether. I mean, we sort of did, like cut-and-pasting between apps on\nWindows or macOS or X11, all of which were surprisingly difficult little\nmini-Postel's Law protocol adventures in their own right and remain quite\nuseful when they work (<a href=\"https://news.ycombinator.com/item?id=31356896\">except \"paste formatted text,\" wtf are you people\nthinking</a>). What makes\ncut-and-paste possible is top-down standards imposed by each operating\nsystem vendor.</p>\n<p>If you want the same kind of thing on the open Internet, ie. the ability to\n\"copy\" information out of one server and \"paste\" it into another, you need\n<em>some</em> kind of standard. XML was a valiant effort to create one. It didn't\nwork, but it was valiant.</p>\n<p>Whereas all that money investment <em>did</em> work. Companies spent billions of\ndollars to update their servers to publish APIs that could serve not just\nhuman-formatted HTML, but also something machine-readable. The great\ninnovation was not XML per se, it was serving data over HTTP that wasn't\nalways HTML. That was a big step, and didn't become obvious until afterward.</p>\n<p>The most common clients of HTTP were web browsers, and web browsers only\nknew how to parse two things: HTML and javascript. To a first approximation,\nvalid XML is \"valid\" (please don't ask the validator) HTML, so we could do\nthat at first, and there were some Microsoft extensions. Later, after a few\nbillions of dollars, true standardized XML parsing arrived in browsers.\nSimilarly, to a first approximation, valid JSON is valid javascript, which\nwoo hoo, that's a story in itself (you could parse it with eval(), tee hee)\nbut that's why we got here.</p>\n<p>JSON (minus the rest of javascript) is a vastly simpler language than XML.\nIt's easy to consistently parse (<a href=\"https://github.com/tailscale/hujson\">other than that pesky trailing\ncomma</a>); browsers already did. It\nrepresents only (a subset of) the data types normal programming languages\nalready have, unlike XML's weird mishmash of single attributes, multiply\noccurring attributes, text content, and CDATA. It's obviously a tree and\neveryone knows how that tree will map into their favourite programming\nlanguage. It inherently works with unicode and only unicode. You don't need\ncumbersome and duplicative \"closing tags\" that double the size of every\nnode. And best of all, no guilt about skipping that overcomplicated and\nimpossible-to-get-right schema validator, because, well, nobody liked\nschemas anyway so nobody added them to JSON\n(<a href=\"https://json-schema.org/\">almost</a>).</p>\n<p>Today, if you look at APIs you need to call, you can tell which ones were a\nresult of the $billions invested in the 2000s, because it's all XML. And you\ncan tell which came in the 2010s and later after learning some hard lessons,\nbecause it's all JSON. 
But either way, the big achievement is you can call\nthem all from javascript. That's pretty good.</p>\n<p>(Google is an interesting exception: they invented and used protobuf during\nthe same time period because they disliked XML's inefficiency, they did like\nschemas, and they had the automated infrastructure to make schemas actually\nwork (mostly, after more hard lessons). But it mostly didn't spread beyond\nGoogle… maybe because it's hard to do from javascript.)</p>\n<p><strong>Blockchain</strong></p>\n<p>The 2010s were another decade of massive multi-billion dollar tech\ninvestment. Once again it was triggered by an overwrought boondoggle\ntechnology, and once again we benefited from systems finally getting updated\nthat really needed to be updated.</p>\n<p>Let's leave aside cryptocurrencies (which although used primarily for crime,\nat least demonstrably have a functioning use case, ie. crime) and look at\nthe more general form of the technology.</p>\n<p>Blockchains in general make the promise of a \"distributed ledger\" which\nallows everyone the ability to make claims and then later validate other\npeople's claims. The claims that \"real\" companies invested in were meant to\nbe about manufacturing, shipping, assembly, purchases, invoices, receipts,\nownership, and so on. What's the pattern? That's the stuff of businesses\ndoing business with other businesses. In other words, data exchange. Data\nexchange is exactly what XML didn't really solve (although progress was made\nby virtue of the dollars invested) in the previous decade.</p>\n<p>Blockchain tech was a more spectacular boondoggle than XML for a few\nreasons. First, it didn't even have a purpose you could explain. Why do we\neven need a purely distributed system for this? Why can't we just trust a\nthird party auditor? Who even wants their entire supply chain (including\nnumber of widgets produced and where each one is right now) to be visible to\nthe whole world? What is the problem we're trying to solve with that?</p>\n<p>…and you know there really was no purpose, because after all the huge\n investment to rewrite all that stuff, which was itself valuable work, we\n simply dropped the useless blockchain part and then we were fine. I don't\n think even the people working on it felt like they needed a real\n distributed ledger. They just needed an <em>updated</em> ledger and a budget to\n create one. If you make the \"ledger\" module pluggable in your big fancy\n supply chain system, you can later drop out the useless \"distributed\"\n ledger and use a regular old ledger. The protocols, the partnerships, the\n databases, the supply chain, and all the rest can stay the same.</p>\n<p>In XML's defense, at least it was not worth the effort to rip out once the\nworld came to its senses.</p>\n<p>Another interesting similarity between XML and blockchains was the computer\nscience appeal. A particular kind of person gets very excited about\n<em>validation</em> and <em>verifiability.</em> Both times, the whole computer industry\nfollowed those people down into the pits of despair and when we finally\nemerged… still no validation, still no verifiability, still didn't matter.\nJust some computers communicating with each other a little better than they\ndid before.</p>\n<p><strong>LLMs</strong></p>\n<p>In the 2020s, our industry fad is LLMs. I'm going to draw some comparisons\nhere to the last two fads, but there are some big differences too.</p>\n<p>One similarity is the computer science appeal: so much math! 
Just the\nmatrix sizes alone are a technological marvel the likes of which we have\nnever seen. Beautiful. Colossal. Monumental. An inspiration to nerds\neverywhere.</p>\n<p>But a big difference is verification and validation. If there is one thing\nLLMs absolutely are not, it's <em>verifiable.</em> LLMs are the flakiest thing the\ncomputer industry has ever produced! So far. And remember, this is the\nindustry that brought you HTML rendering.</p>\n<p>LLMs are an almost cartoonishly amplified realization of Postel's Law. They\nwrite human grammar perfectly, or almost perfectly, or when they're not\nperfect it's a bug and we train them harder. And, they can receive just\nabout any kind of gibberish and turn it into a data structure. In other\nwords, they're conservative in what they send and liberal in what they\naccept.</p>\n<p>LLMs also solve the syntax problem, in the sense that they can figure out\nhow to transliterate (convert) basically any file syntax into any other.\nModulo flakiness. But if you need a CSV in the form of a limerick or a\nquarterly financial report formatted as a mysql dump, sure, no problem, make\nit so.</p>\n<p>In theory we already had syntax solved though. XML and JSON did that\nalready. We were even making progress interconnecting old school company\nsupply chain stuff the hard way, thanks to our nominally XML- and\nblockchain- investment decades. We had to do every interconnection by hand –\nby writing an adapter – but we could do it.</p>\n<p>What's really new is that LLMs address <em>semantics.</em> Semantics are the\nbiggest remaining challenge in connecting one system to another. If XML\nsolved syntax, that was the first 10%. Semantics are the last 90%. When I\nwant to copy from one database to another, how do I map the fields? When I\nwant to scrape a series of uncooperative web pages and turn it into a table\nof products and prices, how do I turn that HTML into something structured?\n(Predictably <a href=\"https://microformats.org/\">microformats</a>, aka schemas, did not\nwork out.) If I want to query a database (or join a few disparate\ndatabases!) using some language that isn't SQL, what options do I have?</p>\n<p>LLMs can do it all.</p>\n<p>Listen, we can argue forever about whether LLMs \"understand\" things, or will\nachieve anything we might call intelligence, or will take over the world and\neradicate all humans, or are useful assistants, or just produce lots of text\nsludge that will certainly clog up the web and social media, or will also be\nable to filter the sludge, or what it means for capitalism that we willingly\ninvented a machine we pay to produce sludge that we also pay to remove the\nsludge.</p>\n<p>But what we can't argue is that LLMs interconnect things. Anything. To\nanything. Whether you like it or not. Whether it's bug free or not (spoiler:\nit's not). Whether it gets the right answer or not (spoiler: erm…).</p>\n<p>This is the thing we have gone through at least two decades of hype cycles\ndesperately chasing. (Three, if you count java \"write once run anywhere\" in\nthe 1990s.) It's application-layer interconnection, the holy grail of the\nInternet.</p>\n<p>And this time, it actually works! (mostly)</p>\n<p><strong>The curse of success</strong></p>\n<p>LLMs aren't going away. Really we should coin a term for this use case, call\nit \"b2b AI\" or something. For this use case, LLMs work. And they're still\ngetting better and the precision will improve with practice. 
For example,\nimagine asking an LLM to write a data translator in some conventional\nprogramming language, instead of asking it to directly translate a dataset\non its own. We're still at the beginning.</p>\n<p>But, this use case, which I predict is the big one, isn't what we expected.\nWe expected LLMs to write poetry or give strategic advice or whatever. We\ndidn't expect them to call APIs and immediately turn around and use what it\nlearned to call other APIs.</p>\n<p>After 30 years of trying and failing to connect one system to another, we\nnow have a literal universal translator. Plug it into any two things and\nit'll just go, for better or worse, no matter how confused it becomes. And\neveryone is doing it, fast, often with a corporate mandate to do it even\nfaster.</p>\n<p>This kind of scale and speed of (successful!) rollout is unprecedented,\neven by the Internet itself, and especially in the glacially slow world of\nenterprise system interconnections, where progress grinds to a halt once a\ndecade only to be finally dislodged by the next misguided technology wave.\nNobody was prepared for it, so nobody was prepared for the consequences.</p>\n<p>One of the odd features of Postel's Law is it's irresistible. Big Central\nInfrastructure projects rise and fall with funding, but Postel's Law\nprojects are powered by love. A little here, a little there, over time. One\nmore person plugging one more thing into one more other thing. We did it\nonce with the Internet, overcoming all the incompatibilities at OSI layers 1\nand 2. It subsumed, it is still subsuming, everything.</p>\n<p>Now we're doing it again at the application layer, the information layer.\nAnd just like we found out when we connected all the computers together the\nfirst time, naively hyperconnected networks make it easy for bad actors to\nspread and disrupt at superhuman speeds. We had to invent firewalls, NATs,\nTLS, authentication systems, two-factor authentication systems,\nphishing-resistant two-factor authentication systems, methodical software\npatching, CVE tracking, sandboxing, antivirus systems, EDR systems, DLP\nsystems, everything. We'll have to do it all again, but faster and\ndifferent.</p>\n<p>Because this time, it's all software.</p>",
      "url": "https://apenwarr.ca/log/20251120",
      "published": "2025-11-20T14:19:14.000Z",
      "updated": "2025-11-20T14:19:14.000Z",
      "content": null,
      "image": null,
      "media": [],
      "authors": [],
      "categories": []
    },
    {
      "id": "https://apenwarr.ca/log/20250711",
      "title": "Billionaire math",
      "description": "<p>I have a friend who exited his startup a few years ago and is now rich. How\nrich is unclear. One day, we were discussing ways to expedite the delivery\nof his superyacht and I suggested paying extra. His response, as to so\nmany of my suggestions, was, “Avery, I’m not <em>that</em> rich.”</p>\n<p>Everyone has their limit.</p>\n<p>I, too, am not that rich. I have shares in a startup that has not exited,\nand they seem to be gracefully ticking up in value as the years pass. But I\nhave to come to work each day, and if I make a few wrong medium-quality\nchoices (not even bad ones!), it could all be vaporized in an instant.\nMeanwhile, I can’t spend it. So what I have is my accumulated savings from a\nlong career of writing software and modest tastes (I like hot dogs).</p>\n<p>Those accumulated savings and modest tastes are enough to retire\nindefinitely. Is that bragging? It was true even before I started my\nstartup. Back in 2018, I calculated my “personal runway” to see how long I\ncould last if I started a company and we didn’t get funded, before I had to\ngo back to work. My conclusion was I should move from New York City back to\nMontreal and then stop worrying about it forever.</p>\n<p>Of course, being in that position means I’m lucky and special. But I’m not\n<em>that</em> lucky and special. My numbers aren’t that different from the average\nCanadian or (especially) American software developer nowadays. We all talk a\nlot about how the “top 1%” are screwing up society, but software developers\nnowadays fall mostly in the top 1-2%[1] of income earners in the US or\nCanada. It doesn’t feel like we’re that rich, because we’re surrounded by\npeople who are about equally rich. And we occasionally bump into a few who\nare much more rich, who in turn surround themselves with people who are\nabout equally rich, so they don’t feel that rich either.</p>\n<p>But, we’re rich.</p>\n<p>Based on my readership demographics, if you’re reading this, you’re probably\na software developer. Do you feel rich?</p>\n<p><b>It’s all your fault</b></p>\n<p>So let’s trace this through. By the numbers, you’re probably a software\ndeveloper. So you’re probably in the top 1-2% of wage earners in your\ncountry, and even better globally. So you’re one of those 1%ers ruining\nsociety.</p>\n<p>I’m not the first person to notice this. When I read other posts about it,\nthey usually stop at this point and say, ha ha. Okay, obviously that’s not\nwhat we meant. Most 1%ers are nice people who pay their taxes. Actually it’s\nthe top 0.1% screwing up society!</p>\n<p>No.</p>\n<p>I’m not letting us off that easily. Okay, the 0.1%ers are probably worse\n(with apologies to my friend and his chronically delayed superyacht). But,\nthere aren’t that many of them[2] which means they aren’t as powerful as\nthey think. No one person has very much capacity to do bad things. They only\nhave the capacity to pay other people to do bad things.</p>\n<p>Some people have no choice but to take that money and do some bad things so\nthey can feed their families or whatever. But that’s not you. That’s not us.\nWe’re rich. If we do bad things, that’s entirely on us, no matter who’s\npaying our bills.</p>\n<p><b>What does the top 1% spend their money on?</b></p>\n<p>Mostly real estate, food, and junk. 
If they have kids, maybe they spend a\nfew hundred $k on overpriced university education (which in sensible\ncountries is free or cheap).</p>\n<p>What they <em>don’t</em> spend their money on is making the world a better place.\nBecause they are convinced they are <em>not that rich</em> and the world’s problems\nare caused by <em>somebody else</em>.</p>\n<p>When I worked at a megacorp, I spoke to highly paid software engineers who\nwere torn up about their declined promotion to L4 or L5 or L6, because they\nneeded to earn more money, because without more money they wouldn’t be able\nto afford the mortgage payments on an <a href=\"https://apenwarr.ca/log/20180918\">overpriced $1M+ run-down Bay Area\ntownhome</a> which is a prerequisite to\nstarting a family and thus living a meaningful life. This treadmill started\nthe day after graduation.[3]</p>\n<p>I tried to tell some of these L3 and L4 engineers that they were already in\nthe top 5%, probably top 2% of wage earners, and their earning potential was\nonly going up. They didn’t believe me until I showed them the arithmetic and\nthe economic stats. And even then, facts didn’t help, because it didn’t make\ntheir fears about money go away. They <em>needed</em> more money before they could\nfeel safe, and in the meantime, they had no disposable income. Sort of.\nWell, for the sort of definition of disposable income that rich people\nuse.[4]</p>\n<p>Anyway there are psychology studies about this phenomenon. “<a href=\"https://www.cbc.ca/news/business/why-no-one-feels-rich-1.5138657\">What people\nconsider rich is about three times what they currently\nmake</a>.” No\nmatter what they make. So, I’ll forgive you for falling into this trap. I’ll\neven forgive me for falling into this trap.</p>\n<p>But it’s time to fall out of it.</p>\n<p><b>The meaning of life</b></p>\n<p>My rich friend is a fountain of wisdom. Part of this wisdom came from the\nshock effect of going from normal-software-developer rich to\nfounder-successful-exit rich, all at once. He described his existential\ncrisis: “Maybe you do find something you want to spend your money on. But,\nI'd bet you never will. It’s a rare problem. M<strong>oney, which is the driver\nfor everyone, is no longer a thing in my life.</strong>”</p>\n<p>Growing up, I really liked the saying, “Money is just a way of keeping\nscore.” I think that metaphor goes deeper than most people give it credit\nfor. Remember <a href=\"https://www.reddit.com/r/Mario/comments/13v3hoc/what_even_is_the_point_of_the_score_counter/\">old Super Mario Brothers, which had a vestigial score\ncounter</a>?\nDo you know anybody who rated their Super Mario Brothers performance based\non the score? I don’t. I’m sure those people exist. They probably have\nTwitch channels and are probably competitive to the point of being annoying.\nMost normal people get some other enjoyment out of Mario that is not from\nthe score. Eventually, Nintendo stopped including a score system in Mario\ngames altogether. Most people have never noticed. The games are still fun.</p>\n<p>Back in the world of capitalism, we’re still keeping score, and we’re still\nweirdly competitive about it. We programmers, we 1%ers, are in the top\npercentile of capitalism high scores in the entire world - that’s the\nliteral definition - but we keep fighting with each other to get closer to\ntop place. Why?</p>\n<p>Because we forgot there’s anything else. 
Because someone convinced us that\nthe score even matters.</p>\n<p>The saying isn’t, “Money is <em>the way</em> of keeping score.” Money is <em>just one\nway</em> of keeping score.</p>\n<p>It’s mostly a pretty good way. Capitalism, for all its flaws, mostly aligns\nincentives so we’re motivated to work together and produce more stuff, and\nmore valuable stuff, than otherwise. Then it automatically gives more power\nto people who empirically[5] seem to be good at organizing others to make\nmoney. Rinse and repeat. Number goes up.</p>\n<p>But there are limits. And in the ever-accelerating feedback loop of modern\ncapitalism, more people reach those limits faster than ever. They might\nrealize, like my friend, that money is no longer a thing in their life. You\nmight realize that. We might.</p>\n<p><b>There’s nothing more dangerous than a powerful person with nothing to prove</b></p>\n<p>Billionaires run into this existential crisis, that they obviously have to\nhave something to live for, and money just isn’t it. Once you can buy\nanything you want, you quickly realize that what you want was not very\nexpensive all along. And then what?</p>\n<p>Some people, the less dangerous ones, retire to their superyacht (if it ever\nfinally gets delivered, come on already). The dangerous ones pick ever\nloftier goals (colonize Mars) and then bet everything on it. Everything.\nTheir time, their reputation, their relationships, their fortune, their\ncompanies, their morals, everything they’ve ever built. Because if there’s\nnothing on the line, there’s no reason to wake up in the morning. And they\nreally <em>need</em> to want to wake up in the morning. Even if the reason to wake\nup is to deal with today’s unnecessary emergency. As long as, you know, the\nemergency requires <em>them</em> to <em>do something</em>.</p>\n<p>Dear reader, statistically speaking, you are not a billionaire. But you have\nthis problem.</p>\n<p><b>So what then</b></p>\n<p>Good question. We live at a moment in history when society is richer and\nmore productive than it has ever been, with opportunities for even more of\nus to become even more rich and productive even more quickly than ever. And\nyet, we live in existential fear: the fear that nothing we do matters.[6][7]</p>\n<p>I have bad news for you. This blog post is not going to solve that.</p>\n<p>I have worse news. 98% of society gets to wake up each day and go to work\nbecause they have no choice, so at worst, for them this is a background\nphilosophical question, like the trolley problem.</p>\n<p>Not you.</p>\n<p>For you this unsolved philosophy problem is urgent <em>right now</em>. There are\npeople tied to the tracks. You’re driving the metaphorical trolley. Maybe\nnobody told you you’re driving the trolley. Maybe they lied to you and said\nsomeone else is driving. Maybe you have no idea there are people on the\ntracks. Maybe you do know, but you’ll get promoted to L6 if you pull the\nright lever. Maybe you’re blind. Maybe you’re asleep. Maybe there are no\npeople on the tracks after all and you’re just destined to go around and\naround in circles, forever.</p>\n<p>But whatever happens next: you chose it.</p>\n<p>We chose it.</p>\n<p style=\"padding-top: 2em;\"><b>Footnotes</b></p>\n\n<p>[1] Beware of estimates of the “average income of the top 1%.” That average\nincludes all the richest people in the world. 
You only need to earn the very\nbottom of the 1% bucket in order to be in the top 1%.</p>\n<p>[2] If the population of the US is 340 million, there are actually 340,000\npeople in the top 0.1%.</p>\n<p>[3] I’m Canadian so I’m disconnected from this phenomenon, but if TV and\nmovies are to be believed, in America the treadmill starts all the way back\nin high school where you stress over getting into an elite university so\nthat you can land the megacorp job after graduation so that you can stress\nabout getting promoted. If that’s so, I send my sympathies. That’s not how\nit was where I grew up.</p>\n<p>[4] Rich people like us methodically put money into savings accounts,\ninvestments, life insurance, home equity, and so on, and only what’s left\ncounts as “disposable income.” This is not the definition normal people use.</p>\n<p>[5] Such an interesting double entendre.</p>\n<p>[6] This is what AI doomerism is about. A few people have worked themselves\ninto a terror that if AI becomes too smart, it will realize that humans are\nnot actually that useful, and eliminate us in the name of efficiency. That’s\nnot a story about AI. It’s a story about what we already worry is true.</p>\n<p>[7] I’m in favour of Universal Basic Income (UBI), but it has a big\nproblem: it reduces your need to wake up in the morning. If the alternative\nis <a href=\"https://en.wikipedia.org/wiki/Bullshit_Jobs\">bullshit jobs</a> or suffering\nthen yeah, UBI is obviously better. And the people who think that if you\ndon’t work hard, you don’t deserve to live, are nuts. But it’s horribly\ndystopian to imagine a society where lots of people wake up and have nothing\nthat motivates them. The utopian version is to wake up and be able to spend\nall your time doing what gives your life meaning. Alas, so far science has\nproduced no evidence that anything gives your life meaning.</p>",
      "url": "https://apenwarr.ca/log/20250711",
      "published": "2025-07-11T16:18:52.000Z",
      "updated": "2025-07-11T16:18:52.000Z",
      "content": null,
      "image": null,
      "media": [],
      "authors": [],
      "categories": []
    },
    {
      "id": "https://apenwarr.ca/log/20250530",
      "title": "The evasive evitability of enshittification",
      "description": "<p>Our company recently announced a fundraise.  We were grateful for all\nthe community support, but the Internet also raised a few of its collective\neyebrows, wondering whether this meant the dreaded “enshittification” was\ncoming next.</p>\n<p>That word describes a very real pattern we’ve all seen before: products\nstart great, grow fast, and then slowly become worse as the people running\nthem trade user love for short-term revenue.</p>\n<p>It’s a topic I find genuinely fascinating, and I've seen the downward spiral\nfirsthand at companies I once admired. So I want to talk about why this\nhappens, and more importantly, why it won't happen to us. That's big talk, I\nknow. But it's a promise I'm happy for people to hold us to.</p>\n<p><strong>What is enshittification?</strong></p>\n<p>The term \"enshittification\" was first popularized in a <a href=\"https://pluralistic.net/2023/01/21/potemkin-ai/#hey-guys\">blog post by Corey\nDoctorow</a>, who put\na catchy name to an effect we've all experienced. Software starts off good,\nthen goes bad. How? Why?</p>\n<p>Enshittification proposes not just a name, but a mechanism. First, a product\nis well loved and gains in popularity, market share, and revenue. In fact,\nit gets so popular that it starts to defeat competitors. Eventually, it's\nthe primary product in the space: a monopoly, or as close as you can get.\nAnd then, suddenly, the owners, who are Capitalists, have their evil nature\nfinally revealed and they exploit that monopoly to raise prices and make the\nproduct worse, so the captive customers all have to pay more. Quality\ndoesn't matter anymore, only exploitation.</p>\n<p>I agree with most of that thesis. I think Doctorow has that mechanism\n<em>mostly</em> right. But, there's one thing that doesn't add up for me:</p>\n<p><strong>Enshittification is not a success mechanism.</strong></p>\n<p>I can't think of any examples of companies that, in real life, enshittified\nbecause they were <em>successful</em>. What I've seen is companies that made their\nproduct worse because they were... scared.</p>\n<p>A company that's growing fast can afford to be optimistic. They create a\npositive feedback loop: more user love, more word of mouth, more users, more\nmoney, more product improvements, more user love, and so on. Everyone in the\ncompany can align around that positive feedback loop. It's a beautiful\nthing. It's also fragile: miss a beat and it flattens out, and soon it's a\ndownward spiral instead of an upward one.</p>\n<p>So, if I were, hypothetically, running a company, I think I would be pretty\nhesitant to deliberately sacrifice any part of that positive feedback loop,\nthe loop I and the whole company spent so much time and energy building, to\nsee if I can grow faster. User love? Nah, I'm sure we'll be fine, look how\nmuch money and how many users we have! Time to switch strategies!</p>\n<p>Why would I do that? Switching strategies is always a tremendous risk. When\nyou switch strategies, it's triggered by passing a threshold, where something\nfundamental changes, and your old strategy becomes wrong.</p>\n<p><strong>Threshold moments and control</strong></p>\n<p>In <a href=\"https://en.wikipedia.org/wiki/Reversing_Falls\">Saint John, New Brunswick, there's a\nriver</a> that flows one\ndirection at high tide, and the other way at low tide. Four times a day,\ngravity equalizes, then crosses a threshold to gently start pulling the\nother way, then accelerates. 
What <em>doesn't</em> happen is a rapidly flowing\nriver in one direction \"suddenly\" shifts to rapidly flowing the other way.\nYes, there's an instant where the limit from the left is positive and the\nlimit from the right is negative. But you can see that threshold coming.\nIt's predictable.</p>\n<p>In my experience, for a company or a product, there are two kinds of\nthresholds like this, that build up slowly and then when crossed, create a\nsudden flow change.</p>\n<p>The first one is control: if the visionaries in charge lose control, chances\nare high that their replacements won't \"get it.\"</p>\n<p>The new people didn't build the underlying feedback loop, and so they don't\nrealize how fragile it is. There are lots of reasons for a change in\ncontrol: financial mismanagement, boards of directors, hostile takeovers.</p>\n<p>The worst one is temptation. Being a founder is, well, it actually sucks.\nIt's oddly like being repeatedly punched in the face. When I look back at my\ncareer, I guess I'm surprised by how few times per day it feels like I was\npunched in the face. But, the\nconstant face punching gets to you after a while. Once you've established a\ngreat product, and amazing customer love, and lots of money, and an upward\nspiral, isn't your creation strong enough yet? Can't you step back and let\nthe professionals just run it, confident that they won't kill the golden\ngoose?</p>\n<p>Empirically, mostly no, you can't. Actually the success rate of control\nchanges, for well loved products, is abysmal.</p>\n<p><strong>The saturation trap</strong></p>\n<p>The second trigger of a flow change is comes from outside: saturation. Every\nsuccessful product, at some point, reaches approximately all the users it's\never going to reach. Before that, you can watch its exponential growth rate\nslow down: the <a href=\"https://blog.apnic.net/2022/02/21/another-year-of-the-transition-to-ipv6/\">infamous\nS-curve</a>\nof product adoption.</p>\n<p>Saturation can lead us back to control change: the founders get frustrated\nand back out, or the board ousts them and puts in \"real business people\" who\nknow how to get growth going again. Generally that doesn't work. Modern VCs\nconsider founder replacement a truly desperate move. Maybe\na last-ditch effort to boost short term numbers in preparation for an\nacquisition, if you're lucky.</p>\n<p>But sometimes the leaders stay on despite saturation, and they try on their\nown to make things better. Sometimes that <em>does</em> work. Actually, it's kind\nof amazing how often it seems to work. Among successful companies,\nit's rare to find one that sustained hypergrowth, nonstop, without suffering\nthrough one of these dangerous periods.</p>\n<p>(That's called survivorship bias. All companies have dangerous periods.\nThe successful ones surivived them. But of those survivors, suspiciously few\nare ones that replaced their founders.)</p>\n<p>If you saturate and can't recover - either by growing more in a big-enough\ncurrent market, or by finding new markets to expand into - then the best you\ncan hope for is for your upward spiral to mature gently into decelerating\ngrowth. If so, and you're a buddhist, then you hire less, you optimize\nmargins a bit, you resign yourself to being About This Rich And I Guess\nThat's All But It's Not So Bad.</p>\n<p><strong>The devil's bargain</strong></p>\n<p>Alas, very few people reach that state of zen. Especially the kind of\nambitious people who were able to get that far in the first place. 
If you\ncan't accept saturation and you can't beat saturation, then you're down to\ntwo choices: step away and let the new owners enshittify it, hopefully\nslowly. Or take the devil's bargain: enshittify it yourself.</p>\n<p>I would not recommend the latter. If you're a founder and you find yourself\nin that position, honestly, you won't enjoy doing it and you probably aren't\neven good at it and it's getting enshittified either way. Let someone else\ndo the job.</p>\n<p><strong>Defenses against enshittification</strong></p>\n<p>Okay, maybe that section was not as uplifting as we might have hoped. I've\ngotta be honest with you here. Doctorow is, after all, mostly right. This\ndoes happen all the time.</p>\n<p>Most founders aren't perfect for every stage of growth. Most product owners\nstumble. Most markets saturate. Most VCs get board control pretty early on\nand want hypergrowth or bust. In tech, a lot of the time, if you're choosing\na product or company to join, that kind of company is all you can get.</p>\n<p>As a founder, maybe you're okay with growing slowly. Then some copycat shows\nup, steals your idea, grows super fast, squeezes you out along with your\nmoral high ground, and then runs headlong into all the same saturation\nproblems as everyone else. Tech incentives are awful.</p>\n<p>But, it's not a lost cause. There are companies (and open source projects)\nthat keep a good thing going, for decades or more. What do they have in\ncommon?</p>\n<ul>\n<li>\n<p><strong>An expansive vision that's not about money</strong>, and which opens you up to\nlots of users. A big addressable market means you don't have to\nworry about saturation for a long time, even at hypergrowth speeds. Google\ncertainly never had an incentive to make Google Search worse.</p>\n<p><i>(Update 2025-06-14: A few people disputed that last bit.  Okay. \nPerhaps Google has ccasionally responded to what they thought were\nincentives to make search worse -- I wasn't there, I don't know -- but it\nseems clear in retrospect that when search gets worse, Google does worse. \nSo I'll stick to my claim that their true incentives are to keep improving.)</i></p>\n</li>\n<li>\n<p><strong>Keep control.</strong> It's easy to lose control of a project or company at any\npoint. If you stumble, and you don't have a backup plan, and there's someone\nwaiting to jump on your mistake, then it's over. Too many companies \"bet it\nall\" on nonstop hypergrowth and <s><a href=\"https://www.reddit.com/r/movies/comments/yuekuu/can_someone_explain_me_this_dialogue_from_gattaca/\">don't have any way\nback</a></s>\nhave no room in the budget, if results slow down even temporarily.</p>\n<p>Stories abound of companies that scraped close to bankruptcy before\nfinally pulling through. But far more companies scraped close to\nbankruptcy and then went bankrupt. Those companies are forgotten. Avoid\nit.</p>\n</li>\n<li>\n<p><strong>Track your data.</strong> Part of control is predictability. If you know how\nbig your market is, and you monitor your growth carefully, you can detect\nincoming saturation years before it happens. Knowing the telltale shape of\neach part of that S-curve is a superpower. 
If you can see the future, you\ncan prevent your own future mistakes.</p>\n</li>\n<li>\n<p><strong>Believe in competition.</strong> Google used to have this saying they lived by:\n\"<a href=\"https://9to5google.com/2012/04/05/larry-page-posts-update-from-the-ceo-2012%E2%80%B3-memo-detailing-googles-aspirations/\">the competition is only a click\naway</a>.\" That was\nexcellent framing, because it was true, and it will remain true even if\nGoogle captures 99% of the search market. The key is to cultivate a healthy\nfear of competing products, not of your investors or the end of\nhypergrowth. Enshittification helps your competitors. That would be dumb.</p>\n<p>(And don't cheat by using lock-in to make competitors\nnot, anymore, \"only a click away.\" That's missing the whole point!)</p>\n</li>\n<li>\n<p><strong>Inoculate yourself.</strong> If you have to, create your own competition. Linus\n  Torvalds, the creator of the Linux kernel, <a href=\"https://git-scm.com/about\">famously also created\n  Git</a>, the greatest tool for forking (and maybe\n  merging) open source projects that has ever existed. And then he said,\n  this is my fork, the <a href=\"https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/\">Linus fork</a>; use it if you want; use someone else's if\n  you want; and now if I want to win, I have to make mine the best. Git was\n  created back in 2005, twenty years ago. To this day, Linus's fork is still\n  the central one.</p>\n</li>\n</ul>\n<p>If you combine these defenses, you can be safe from the decline that others\ntell you is inevitable. If you look around for examples, you'll find that\nthis does actually work. You won't be the first. You'll just be rare.</p>\n<p><strong>Side note: Things that aren't enshittification</strong></p>\n<p>I often see people worry about enshittification that isn't. They might be\ngood or bad, wise or unwise, but that's a different topic. Tools aren't\ninherently good or evil. They're just tools.</p>\n<ol>\n<li>\n<p><strong>\"Helpfulness.\"</strong> There's a fine line between \"telling users about this\ncool new feature we built\" in the spirit of helping them, and \"pestering\nusers about this cool new feature we built\" (typically a misguided AI\nimplementation) to improve some quarterly KPI. Sometimes it's hard to see\nwhere that line is. But when you've crossed it, you know.</p>\n<p>Are you trying to help a user do what <em>they</em> want to do, or are you trying\nto get them to do what <em>you</em> want them to do?</p>\n<p>Look into your heart. Avoid the second one. I know you know how. Or you\nknew how, once. Remember what that feels like.</p>\n</li>\n<li>\n<p><strong>Charging money for your product.</strong> Charging money is okay. Get serious.\n<a href=\"https://apenwarr.ca/log/20211229\">Companies have to stay in business</a>.</p>\n<p>That said, I personally really revile the \"we'll make it <a href=\"https://tailscale.com/blog/free-plan\">free for\nnow</a> and we'll start charging for the\nexact same thing later\" strategy. Keep your promises.</p>\n<p>I'm pretty sure nobody but drug dealers breaks those promises on purpose.\nBut, again, desperation is a powerful motivator. Growth slowing down?\nCosts way higher than expected? Time to capture some of that value we\nwere giving away for free!</p>\n<p>In retrospect, that's a bait-and-switch, but most founders never planned\nit that way. They just didn't do the math up front, or they were too\nnaive to know they would have to. 
And then they had to.</p>\n<p>Famously, Dropbox had a \"free forever\" plan that provided a certain\namount of free storage.  What they didn't count on was abandoned\naccounts, accumulating every year, with stored stuff they could never\ndelete.  Even if a very good fixed fraction of users each year upgraded\nto a paid plan, all the ones that didn't, kept piling up...  year after\nyear...  after year...  until they had to start <a href=\"https://www.cnbc.com/2018/02/23/dropbox-shows-how-it-manages-costs-by-deleting-inactive-accounts.html\">deleting old free\naccounts and the data in\nthem</a>. \nA similar story <a href=\"https://news.ycombinator.com/item?id=24143588\">happened with\nDocker</a>,\nwhich used to host unlimited container downloads for free.  In hindsight\nthat was mathematically unsustainable.  Success guaranteed failure.</p>\n<p>Do the math up\nfront. If you're not sure, find someone who can.</p>\n</li>\n<li>\n<p><strong>Value pricing.</strong> (ie. charging different prices to different people.)\nIt's okay to charge money. It's even okay to charge money to some kinds of\npeople (say, corporate users) and not others. It's also okay to charge money\nfor an almost-the-same-but-slightly-better product. It's okay to charge\nmoney for support for your open source tool (though I stay away from that;\nit incentivizes you to make the product worse).</p>\n<p>It's even okay to charge immense amounts of money for a commercial\nproduct that's barely better than your open source one! Or for a part of\nyour product that costs you almost nothing.</p>\n<p>But, you have to\ndo the rest of the work. Make sure the reason your users don't\nswitch away is that you're the best, not that you have the best lock-in.\nYeah, I'm talking to you, cloud egress fees.</p>\n</li>\n<li>\n<p><strong>Copying competitors.</strong> It's okay to copy features from competitors.\nIt's okay to position yourself against competitors. It's okay to win\ncustomers away from competitors. But it's not okay to lie.</p>\n</li>\n<li>\n<p><strong>Bugs.</strong> It's okay to fix bugs. It's okay to decide not to fix bugs;\n<a href=\"https://apenwarr.ca/log/20171213\">you'll have to sometimes, anyway</a>. It's\nokay to take out <a href=\"https://apenwarr.ca/log/20230605\">technical debt</a>. It's\nokay to pay off technical debt. It's okay to let technical debt languish\nforever.</p>\n</li>\n<li>\n<p><strong>Backward incompatible changes.</strong> It's <a href=\"https://tailscale.com/blog/community-projects\">dumb to release a new version\nthat breaks backward\ncompatibility</a> with your old\nversion. It's tempting. It annoys your users. But it's not enshittification\nfor the simple reason that it's phenomenally ineffective at maintaining\nor exploiting a monopoly, which is what enshittification is supposed to be\nabout. You know who's good at monopolies? Intel and Microsoft. They don't\nbreak old versions.</p>\n</li>\n</ol>\n<p>Enshittification is real, and tragic. But let's protect a\nuseful term and its definition! Those things aren't it.</p>\n<p><strong>Epilogue: a special note to founders</strong></p>\n<p>If you're a founder or a product owner, I hope all this helps. I'm sad to\nsay, you have a lot of potential pitfalls in your future. But, remember that\nthey're only <em>potential</em> pitfalls. Not everyone falls into them.</p>\n<p>Plan ahead. Remember where you came from. Keep your integrity. Do your best.</p>\n<p>I will too.</p>",
      "url": "https://apenwarr.ca/log/20250530",
      "published": "2025-06-15T02:52:58.000Z",
      "updated": "2025-06-15T02:52:58.000Z",
      "content": null,
      "image": null,
      "media": [],
      "authors": [],
      "categories": []
    },
    {
      "id": "https://apenwarr.ca/log/20231204",
      "title": "NPS, the good parts",
      "description": "<p>The Net Promoter Score (NPS) is a statistically questionable way to turn a\nset of 10-point ratings into a single number you can compare with other\nNPSes. That's not the good part.</p>\n<p><b>Humans</b></p>\n<p>To understand the good parts, first we have to start with humans. Humans\nhave emotions, and those emotions are what they mostly use when asked to\nrate things on a 10-point scale.</p>\n<p>Almost exactly twenty years ago, I wrote about sitting on a plane next to a\n<a href=\"/log/20031227\">musician who told me about music album reviews</a>. The worst\nrating an artist can receive, he said, is a lukewarm one. If people think\nyour music is neutral, it means you didn't make them feel anything at all.\nYou failed. Someone might buy music that reviewers hate, or buy music that\npeople love, but they aren't really that interested in music that is just\nkinda meh. They listen to music because they want to feel something.</p>\n<p>(At the time I contrasted that with tech reviews in computer magazines\n(remember those?), and how negative ratings were the worst thing for a tech\nproduct, so magazines never produced them, lest they get fewer free samples.\nAll these years later, journalism is dead but we're still debating the\nethics of game companies sponsoring Twitch streams. You can bet there's no\nsponsored game that gets an actively negative review during 5+ hours of\ngameplay and still gets more money from that sponsor. If artists just want\nyou to feel something, but no vendor will pay for a game review that says it\nsucks, I wonder what that says about video game companies and art?)</p>\n<p>Anyway, when you ask regular humans, who are not being sponsored, to rate\nthings on a 10-point scale, they will rate based on their emotions. Most\nof the ratings will be just kinda meh, because most products are, if we're\nhonest, just kinda meh. I go through most of my days using a variety of\nproducts and services that do not, on any more than the rarest basis, elicit\nany emotion at all. Mostly I don't notice those. I notice when I have\nexperiences that are surprisingly good, or (less surprisingly but still\nnotably) bad. Or, I notice when one of the services in any of those three\ncategories asks me to rate them on a 10-point scale.</p>\n<p><b>The moment</b></p>\n<p>The moment when they ask me is important. Many products and services are\njust kinda invisibly meh, most of the time, so perhaps I'd give them a meh\nrating. But if my bluetooth headphones are currently failing to connect, or\nI just had to use an airline's online international check-in system and it\nonce again rejected my passport for no reason, then maybe my score will be\nextra low. Or if Apple releases a new laptop that finally brings back a\nnon-sucky keyboard after making laptops with sucky keyboards for literally\nyears because of some obscure internal political battle, maybe I'll give a\nhigh rating for a while.</p>\n<p>If you're a person who likes manipulating ratings, you'll figure out what\nmoments are best for asking for the rating you want. But let's assume you're\nabove that sort of thing, because that's not one of the good parts.</p>\n<p><b>The calibration</b></p>\n<p>Just now I said that if I'm using an invisible meh product or service, I\nwould rate it with a meh rating. 
But that's not true in real life, because\neven though I was having no emotion about, say, Google Meet during a call,\nperhaps when they ask me (after every...single...call) how it was, that\nmakes me feel an emotion after all. Maybe that emotion is \"leave me alone,\nyou ask me this way too often.\" Or maybe I've learned that if I pick\nanything other than five stars, I get a clicky multi-tab questionnaire that\nI don't have time to answer, so I almost always pick five stars unless the\nexperience was <em>so</em> bad that I feel it's worth an extra minute because I\nsimply need to tell the unresponsive and uncaring machine how I really feel.</p>\n<p>Google Meet never gets a meh rating. It's designed not to. In Google Meet,\nmeh gets five stars.</p>\n<p>Or maybe I bought something from Amazon and it came with a thank-you card\nbegging for a 5-star rating (this happens). Or a restaurant offers free\nstuff if I leave a 5-star rating and prove it (this happens). Or I ride in\nan Uber and there's a sign on the back seat talking about how they really\nneed a 5-star rating because this job is essential so they can support their\nfamily and too many 4-star ratings get them disqualified (this happens,\nthough apparently not at UberEats). Okay. As one of my high school teachers,\nPhysics I think, once said, \"A's don't cost me anything. What grade do you\nwant?\" (He was that kind of teacher. I learned a lot.)</p>\n<p>I'm not a professional reviewer. Almost nobody you ask is a professional\nreviewer. Most people don't actually care; they have no basis for\ncomparison; just about anything will influence their score. They will not\nfeel badly about this. They're just trying to exit your stupid popup\ninterruption as quickly as possible, and half the time they would have\nmashed the X button instead but you hid it, so they mashed this one instead.\nPeople's answers will be... untrustworthy at best.</p>\n<p>That's not the good part.</p>\n<p><b>And yet</b></p>\n<p>And yet. As in so many things, randomness tends to average out, <a href=\"https://en.wikipedia.org/wiki/Central_limit_theorem\">probably\ninto a Gaussian distribution, says the Central Limit\nTheorem</a>.</p>\n<p>The Central Limit Theorem is the fun-destroying reason that you can't just\naverage 10-point ratings or star ratings and get something useful: most\nscores are meh, a few are extra bad, a few are extra good, and the next\nthing you know, every Uber driver is a 4.997. Or you can <a href=\"https://xkcd.com/325/\">ship a bobcat one\nin 30 times</a> and still get 97% positive feedback.</p>\n<p>There's some deep truth hidden in NPS calculations: that meh ratings mean\nnothing, that the frequency of strong emotions matters a lot, and that\ndeliriously happy moments don't average out disastrous ones.</p>\n<p>Deming might call this <a href=\"/log/20161226\">the continuous region and the \"special\ncauses\"</a> (outliers). NPS is all about counting outliers, and\naverages don't work on outliers.</p>\n<p><b>The degrees of meh</b></p>\n<p>Just kidding, there are no degrees of meh. If you're not feeling anything,\nyou're just not. You're not feeling more nothing, or less nothing.</p>\n<p>One of my friends used to say, on a scale of 6 to 9, how good is this? It\nwas a joke about how nobody ever gives a score less than 6 out of 10, and\nnothing ever deserves a 10. It was one of those jokes that was never funny\nbecause they always had to explain it. 
But they seemed to enjoy explaining\nit, and after hearing the explanation the first several times, that part was\nkinda funny. Anyway, if you took the 6-to-9 instructions seriously, you'd\nend up rating almost everything between 7 and 8, just to save room for\nsomething unimaginably bad or unimaginably good, just like you did with\n1-to-10, so it didn't help at all.</p>\n<p>And so, the NPS people say, rather than changing the scale, let's just\ndefine meaningful regions in the existing scale. Only very angry people\nuse scores like 1-6. Only very happy people use scores like 9 or 10. And if\nyou're not one of those you're meh. It doesn't matter how meh. And in fact,\nit doesn't matter much whether you're \"5 angry\" or \"1 angry\"; that says more\nabout your internal rating system than about the degree of what you\nexperienced. Similarly with 9 vs 10; it seems like you're quite happy. Let's\nnot split hairs.</p>\n<p>So with NPS we take a 10-point scale and turn it into a 3-point scale. The\nexact opposite of my old friend: you know people misuse the 10-point scale,\nbut instead of giving them a new 3-point scale to misuse, you just\npostprocess the 10-point scale to clean it up. And now we have a 3-point\nscale with 3 meaningful points. That's a good part.</p>\n<p><b>Evangelism</b></p>\n<p>So then what? Average out the measurements on the newly calibrated 1-2-3\nscale, right?</p>\n<p>Still no. It turns out there are three kinds of people: the ones so mad they\nwill tell everyone how mad they are about your thing; the ones who don't\ncare and will never think about you again if they can avoid it; and the ones\nwho had such an over-the-top amazing experience that they will tell everyone\nhow happy they are about your thing.</p>\n<p>NPS says, you really care about the 1s and the 3s, but averaging them makes\nno sense. And the 2s have no effect on anything, so you can just leave them\nout.</p>\n<p>Cool, right?</p>\n<p>Pretty cool. Unfortunately, that's still two valuable numbers but we\npromised you one single score. So NPS says, let's subtract them! Yay! Okay,\nno. That's not the good part.</p>\n<p><b>The threefold path</b></p>\n<p>I like to look at it this way instead. First of all, we have computers now,\nwe're not tracking ratings on one of those 1980s desktop bookkeeping\nprinter-calculators, you don't have to make every analysis into one single\nall-encompassing number.</p>\n<p>Postprocessing a 10-point scale into a 3-point one, that seems pretty smart.\nBut you have to stop there. Maybe you now have three separate aggregate\nnumbers. That's tough, I'm sorry. Here's a nickel, kid, go sell your\npersonal information in exchange for a spreadsheet app. (I don't know what\nyou'll do with the nickel. Anyway I don't need it. Here. Go.)</p>\n<p>Each of those three rating types gives you something different you can do in\nresponse:</p>\n<ul>\n<li>\n<p>The <b>ones</b> had a very bad experience, which is hopefully an\n  outlier, unless you're Comcast or the New York Times subscription\n  department. Normally you want to get rid of every bad experience. The\n  absence of awful isn't greatness, it's just meh, but meh is infinitely\n  better than awful. Eliminating negative outliers is a whole job. It's a\n  job filled with Deming's special causes. It's hard, and it requires\n  creativity, but it really matters.</p>\n</li>\n<li>\n<p>The <b>twos</b> had a meh experience. This is, most commonly, the\n  majority. But perhaps they could have had a better experience. Perhaps\n  even a great one? 
Deming would say you can and should work to improve the\n  average experience and reduce the standard deviation. That's the dream;\n  heck, what if the average experience could be an amazing one? That's\n  rarely achieved, but a few products achieve it, especially luxury brands.\n  And maybe that Broadway show, Hamilton? I don't know, I couldn't get tickets,\n  because everyone said it was great so it was always sold out and I guess\n  that's my point.</p>\n<p>If getting the average up to three is too hard or will\n  take too long (and it will take a long time!), you could still try to at\n  least randomly turn a few of them into threes. For example, they say\n  users who have a great customer support experience often rate a product more\n  highly than the ones who never needed to contact support at all, because\n  the support interaction made the company feel more personal. Maybe you can't\n  afford to interact with everyone, but if you have to interact anyway,\n  perhaps you can use that chance to make it great instead of meh.</p>\n</li>\n<li>\n<p>The <b>threes</b> already had an amazing experience. Nothing to do, right?\n  No! These are the people who are, or who can become, your superfan\n  evangelists. Sometimes that happens on its own, but often people don't\n  know where to put that excess positive energy. You can help them. Pop\n  stars and fashion brands know all about this; get some true believers\n  really excited about your product, and the impact is huge. This is a\n  completely different job than turning ones into twos, or twos into threes.</p>\n</li>\n</ul>\n<p><b>What not to do</b></p>\n<p>Those are all good parts. Let's ignore that unfortunately they\naren't part of NPS at all and we've strayed way off topic.</p>\n<p>From here, there are several additional things you can do, but it turns out\nyou shouldn't.</p>\n<p><b>Don't compare scores with other products.</b> I guarantee you, your methodology\nisn't the same as theirs. The slightest change in timing or presentation\nwill change the score in incomparable ways. You just can't. I'm sorry.</p>\n<p><b>Don't reward your team based on aggregate ratings.</b> They will find a\nway to change the ratings. Trust me, it's too easy.</p>\n<p><b>Don't average or difference the bad with the great.</b> The two groups have\nnothing to do with each other, require completely different responses\n(usually from different teams), and are often very small. They're outliers\nafter all. They're by definition not the mainstream. Outlier data is very\nnoisy and each terrible experience is different from the others; each\ndeliriously happy experience is special. As the famous writer said, <a href=\"https://en.wikipedia.org/wiki/Anna_Karenina_principle\">all\nmeh families are\nalike</a>.</p>\n<p><b>Don't fret about which \"standard\" rating ranges translate to\nbad-meh-good.</b> Your particular survey or product will have the bad\noutliers, the big centre, and the great outliers. Run your survey enough and\nyou'll be able to find them.</p>\n<p><b>Don't call it NPS.</b> NPS nowadays has a bad reputation. Nobody can\nreally explain the bad reputation; I've asked. But they've all heard it's\nbad and wrong and misguided and unscientific and \"not real statistics\" and\ngives wrong answers and leads to bad incentives. You don't want that stigma\nattached to your survey mechanic. 
But if you call it a <em>satisfaction\nsurvey</em> on a 10-point or 5-point scale, tada, clear skies and lush green fields ahead.</p>\n<p><b>Bonus advice</b></p>\n<p>Perhaps the neatest thing about NPS is how much information you can get from\njust one simple question that can be answered with the same effort it takes\nto dismiss a popup.</p>\n<p>I joked about Google Meet earlier, but I wasn't\nreally kidding; after having a few meetings, if I had learned that I could\njust rank from 1 to 5 stars and then <em>not</em> get guilted for giving anything\nother than 5, I would do it. It would be great science and pretty\nunobtrusive. As it is, I lie instead. (I don't even skip, because it's\nfaster to get back to the menu by lying than by skipping.)</p>\n<p>While we're here, only the weirdest people want to answer a survey that says\nit will take \"just 5 minutes\" or \"just 30 seconds.\" I don't have 30 seconds,\nI'm busy being mad/meh/excited about your product, I have other things to\ndo! But I can click just one single star rating, as long as I'm 100%\nconfident that the survey will go the heck away after that. (And don't even\nget me started about the extra layer in \"Can we ask you a few simple\nquestions about our website? Yes or no\")</p>\n<p>Also, don't be the survey that promises one question and then asks \"just one\nmore question.\" Be the survey that gets a reputation for really truly asking\nthat one question. Then ask it, optionally, in more places and more often. A\ngood role model is those knowledgebases where every article offers just\nthumbs up or thumbs down (or the default of no click, which means meh). That\nway you can legitimately look at aggregates or even the same person's\nanswers over time, at different points in the app, after they have different\nparts of the experience. And you can compare scores at the same point after\nyou update the experience.</p>\n<p>But for heaven's sake, not by just averaging them.</p>",
      "url": "https://apenwarr.ca/log/20231204",
      "published": "2023-12-05T05:01:12.000Z",
      "updated": "2023-12-05T05:01:12.000Z",
      "content": null,
      "image": null,
      "media": [],
      "authors": [],
      "categories": []
    },
    {
      "id": "https://apenwarr.ca/log/20231006",
      "title": "Interesting",
      "description": "<p>A few conversations last week made me realize I use the word “interesting” in an unusual way.</p>\n<p>I rely heavily on mental models. Of course, everyone <em>relies</em> on mental models. But I do it intentionally and I push it extra hard.</p>\n<p>What I mean by that is, when I’m making predictions about what will happen next, I mostly don’t look around me and make a judgement based on my immediate surroundings. Instead, I look at what I see, try to match it to something inside my mental model, and then let the mental model extrapolate what “should” happen from there.</p>\n<p>If this sounds predictably error prone: yes. It is.</p>\n<p>But it’s also powerful, when used the right way, which I try to do. Here’s my system.</p>\n<p><b>Confirmation bias</b></p>\n<p>First of all, let’s acknowledge the problem with mental models: confirmation bias. Confirmation bias is the tendency of all people, including me and you, to consciously or subconsciously look for evidence to support what we already believe to be true, and try to ignore or reject evidence that disagrees with our beliefs.</p>\n<p>This is just something your brain does. If you believe you’re exempt from this, you’re wrong, and dangerously so. Confirmation bias gives you more certainty where certainty is not necessarily warranted, and we all act on that unwarranted certainty sometimes.</p>\n<p>On the one hand, we would all collapse from stress and probably die from bear attacks if we didn’t maintain some amount of certainty, even if it’s certainty about wrong things. But on the other hand, certainty about wrong things is pretty inefficient.</p>\n<p>There’s a word for the feeling of stress when your brain is working hard to ignore or reject evidence against your beliefs: cognitive dissonance. Certain Internet Dingbats have recently made entire careers talking about how to build and exploit cognitive dissonance, so I’ll try to change the subject quickly, but I’ll say this: cognitive dissonance is bad… if you don’t realize you’re having it.</p>\n<p>But your own cognitive dissonance is <em>amazingly useful</em> if you notice the feeling and use it as a tool.</p>\n<p><b>The search for dissonance</b></p>\n<p>Whether you like it or not, your brain is going to be working full time, on automatic pilot, in the background, looking for evidence to support your beliefs. But you know that; at least, you know it now because I just told you. You can be aware of this effect, but you can’t prevent it, which is annoying.</p>\n<p>But you can try to compensate for it. What that means is using the part of your brain you have control over — the supposedly rational part — to look for the opposite: things that don’t match what you believe.</p>\n<p>To take a slight detour, what’s the relationship between your beliefs and your mental model? For the purposes of this discussion, I’m going to say that mental models are a <em>system for generating beliefs.</em> Beliefs are the output of mental models. And there’s a feedback loop: beliefs are also the things you generalize in order to produce your mental model. (Self-proclaimed ”Bayesians” will know what I’m talking about here.)</p>\n<p>So let’s put it this way: your mental model, combined with current observations, produce your set of beliefs about the world and about what will happen next.</p>\n<p>Now, what happens if what you expected to happen next, doesn’t happen? Or something happens that was entirely unexpected? 
Or even, what if someone tells you you’re wrong and they expect something else to happen?</p>\n<p>Those situations are some of the most useful ones in the world. They’re what I mean by <em>interesting</em>. </p>\n<p><b>The “aha” moment</b></p>\n<ul>\n<i>The most exciting phrase to hear in science, the one that heralds new discoveries, is not “Eureka!” (I found it!) but “That’s funny…”</i>\n<br>\n    — <a\nhref=\"https://quoteinvestigator.com/2015/03/02/eureka-funny/\">possibly</a> Isaac Asimov\n</ul>\n\n<p>When you encounter evidence that your mental model mismatches someone else’s model, that’s an exciting opportunity to compare and figure out which one of you is wrong (or both). Not everybody is super excited about doing that with you, so you have to be respectful. But the most important people to surround yourself with, at least for mental model purposes, are the ones who will talk it through with you.</p>\n<p>Or, if you get really lucky, your predictions turn out to be demonstrably concretely wrong. That’s an even bigger opportunity, because now you get to figure out what part of your mental model is mistaken, and you don’t have to negotiate with a possibly-unwilling partner in order to do it. It’s you against reality. It’s science: you had a hypothesis, you did an experiment, your hypothesis was proven wrong. Neat! Now we’re getting somewhere.</p>\n<p>What follows is then the often-tedious process of figuring out what actual thing was wrong with your model, updating the model, generating new outputs that presumably match your current observations, and then generating new hypotheses that you can try out to see if the new model works better more generally.</p>\n<p>For physicists, this whole process can sometimes take decades and require building multiple supercolliders. For most of us, it often takes less time than that, so we should count ourselves fortunate even if sometimes we get frustrated.</p>\n<p>The reason we update our model, of course, is that most of the time, the update changes a lot more predictions than just the one you’re working with right now. Turning observations back into generalizable mental models allows you to learn things you’ve never been taught; perhaps things nobody has ever learned before. That’s a superpower.</p>\n<p><b>Proceeding under uncertainty</b></p>\n<p>But we still have a problem: that pesky slowness. Observing outcomes, updating models, generating new hypotheses, and repeating the loop, although productive, can be very time consuming. My guess is that’s why we didn’t evolve to do that loop most of the time. Analysis paralysis is no good when a tiger is chasing you and you’re worried your preconceived notion that it wants to eat you may or may not be correct.</p>\n<p>Let’s tie this back to business for a moment.</p>\n<p>You have evidence that your mental model about your business is not correct. For example, let’s say you have two teams of people, both very smart and well-informed, who believe conflicting things about what you should do next. That’s <em>interesting</em>, because first of all, your mental model is that these two groups of people are very smart and make right decisions almost all the time, or you wouldn’t have hired them. How can two conflicting things be the right decision? They probably can’t. 
That means we have a few possibilities:</p>\n<ol>\n<li>The first group is right</li>\n<li>The second group is right</li>\n<li>Both groups are wrong</li>\n<li>The appearance of conflict is actually not correct, because you missed something critical</li>\n</ol>\n<p>There is also often a fifth possibility:</p>\n<ul>\n<li>Okay, it’s probably one of the first four but I don’t have time to figure that out right now</li>\n</ul>\n<p>In that case, there’s various wisdom out there involving <a href=\"https://www.inc.com/jeff-haden/amazon-founder-jeff-bezos-this-is-how-successful-people-make-such-smart-decisions.html\">one- vs two-way doors</a>, and oxen pulling in different directions, and so on. But it comes down to this: almost always, it’s better to get everyone aligned to the same direction, even if it’s a somewhat wrong direction, than to have different people going in different directions.</p>\n<p>To be honest, I quite dislike it when that’s necessary. But sometimes it is, and you might as well accept it in the short term.</p>\n<p>The way I make myself feel better about it is to choose the path that will allow us to learn as much as possible, as quickly as possible, in order to update our mental models as quickly as possible (without doing <em>too</em> much damage) so we have fewer of these situations in the future. In other words, yes, we “bias toward action” — but maybe more of a “bias toward learning.” And even after the action has started, we don’t stop trying to figure out the truth.</p>\n<p><b>Being wrong</b></p>\n<p>Leaving aside many philosophers’ objections to the idea that “the truth” exists, I think we can all agree that being wrong is pretty uncomfortable. Partly that’s cognitive dissonance again, and partly it’s just being embarrassed in front of your peers. But for me, what matters more is the objective operational expense of the bad decisions we make by being wrong.</p>\n<p>You know what’s even worse (and more embarrassing, and more expensive) than being wrong? Being wrong for <em>even longer</em> because we ignored the evidence in front of our eyes.</p>\n<p>You might have to talk yourself into this point of view. For many of us, admitting wrongness hurts more than continuing wrongness. But if you can pull off that change in perspective, you’ll be able to do things few other people can.</p>\n<p><b>Bonus: Strong opinions held weakly</b></p>\n<p>Like many young naive nerds, when I first heard of the idea of “strong opinions held weakly,” I thought it was a pretty good idea. At least, clearly more productive than weak opinions held weakly (which are fine if you want to keep your job), or weak opinions held strongly (which usually keep you out of the spotlight).</p>\n<p>The real competitor to strong opinions held weakly is, of course, strong opinions held strongly. We’ve all met those people. They are supremely confident and inspiring, until they inspire everyone to jump off a cliff with them.</p>\n<p>Strong opinions held weakly, on the other hand, is really an invitation to debate. If you disagree with me, why not try to convince me otherwise? Let the best idea win.</p>\n<p>After some decades of experience with this approach, however, I eventually learned that the problem with this framing is the word “debate.” Everyone has a mental model, but not everyone wants to debate it. 
And if you’re really good at debating — the thing they teach you to be, in debate club or whatever — then you learn how to “win” debates without uncovering actual truth.</p>\n<p>Some days it feels like most of the Internet today is people “debating” their weakly-held strong beliefs and pulling out every rhetorical trick they can find, in order to “win” some kind of low-stakes war of opinion where there was no right answer in the first place.</p>\n<p>Anyway, I don’t recommend it, it’s kind of a waste of time. The people who want to hang out with you at the debate club are the people who already, secretly, have the same mental models as you in all the ways that matter.</p>\n<p>What’s really useful, and way harder, is to find the people who are not interested in debating you at all, and figure out why.</p>",
      "url": "https://apenwarr.ca/log/20231006",
      "published": "2023-10-06T20:59:31.000Z",
      "updated": "2023-10-06T20:59:31.000Z",
      "content": null,
      "image": null,
      "media": [],
      "authors": [],
      "categories": []
    },
    {
      "id": "https://apenwarr.ca/log/20230605",
      "title": "Tech debt metaphor maximalism",
      "description": "<p>I really like the \"tech debt\" metaphor. A lot of people don't,\nbut I think that's because they either don't extend the metaphor far enough,\nor because they don't properly understand financial debt.</p>\n<p>So let's talk about debt!</p>\n<p><b>Consumer debt vs capital investment</b></p>\n<p>Back in school my professor, <a href=\"http://lwsmith.ca/\">Canadian economics superhero Larry\nSmith</a>, explained debt this way (paraphrased): debt is\nstupid if it's for instant gratification that you pay for later, with\ninterest. But debt is great if it means you can make more money than the\ninterest payments.</p>\n<p>A family that takes on high-interest credit card debt\nfor a visit to Disneyland is wasting money. If you think you can pay it off\nin a year, you'll pay 20%-ish interest for that year for no reason. You can\ninstead save up for a year and get the same gratification next year without\nthe 20% surcharge.</p>\n<p>But if you want to buy a $500k machine that will earn your factory an additional\n$1M/year in revenue, it would be foolish <em>not</em> to buy it now, even with 20%\ninterest ($100k/year). That's a profit of $900k in just the first year!\n(excluding depreciation)</p>\n<p>There's a reason profitable companies with CFOs take on debt, and often the\ntotal debt increases rather than decreases over time. They're not idiots.\nThey're making a rational choice that's win-win for everyone. (The\ncompany earns more money faster, the banks earn interest, the interest gets\npaid out to consumers' deposit accounts.)</p>\n<p>Debt is bad when you take out the wrong kind, or you mismanage it, or it has\nweird strings attached (hello Venture Debt that requires you to put all your\nsavings in <a href=\"https://www.washingtonpost.com/business/2023/03/15/svb-billions-uninsured-assets-companies/\">one underinsured\nplace</a>).\nBut done right, debt is a way to move faster instead of slower.</p>\n<p><b>High-interest vs low-interest debt</b></p>\n<p>For a consumer, the highest interest rates are for \"store\" credit cards, the\nkinds issued by Best Buy or Macy's or whatever that only work in that one\nstore. They aren't as picky about risk (thus have more defaults) because\nit's the ultimate loyalty programme: it gets people to spend more at their\nstore instead of other stores, in some cases because it's the only place\nthat would issue those people debt in the first place.</p>\n<p>The second-highest interest rate is on a general-purpose credit card like\nVisa or Mastercard. They can get away with high interest rates because\nthey're also the payment system and so they're very convenient.</p>\n<p>(Incidentally, when I looked at the stats a decade or so ago, in Canada\ncredit cards make <em>most</em> of their income on payment fees because Canadians\nare annoyingly persistent about paying off their cards; in the US it's the\nopposite. The rumours are true: Canadians really are more cautious about\nspending.)</p>\n<p>If you have a good credit rating, you can get better interest rates on a\nbank-issued \"line of credit\" (LOC) (lower interest rate, but less convenient\nthan a card). In Canada, one reason many people pay off their credit card\neach month is simply that they transfer the balance to a lower-interest LOC.</p>\n<p>Even lower interest rates can be obtained if you're willing to provide\ncollateral: most obviously, the equity in your home. This greatly reduces\nthe risk for the lender because they can repossess and then resell your home\nif you don't pay up. 
Which is pretty good for them even if you don't pay,\nbut what's better is it makes you much more likely to pay rather\nthan lose your home.</p>\n<p>Some people argue that you should almost never plan to pay off your\nmortgage: typical mortgage interest rates are lower than the rates you'd get\nlong-term from investing in the S&P. The advice that you should \"always buy\nthe biggest home you can afford\" is often perversely accurate, especially if\nyou believe property values will keep going up. And subject to your risk\ntolerance and lock-in preferences.</p>\n<p>What's the pattern here? Just this: high-interest debt is quick and\nconvenient but you should pay it off quickly. Sometimes you pay it off just\nby converting to longer-term lower-rate debt. Sometimes debt is\ncollateralized and sometimes it isn't.</p>\n<p><b>High-interest and low-interest tech debt</b></p>\n<p>Bringing that back to tech debt: a simple kind of high-interest short-term\ndebt would be committing code without tests or documentation. Yay, it works,\nship it! And truthfully, maybe you should, because the revenue (and customer\nfeedback) you get from shipping fast can outweigh how much more bug-prone\nyou made the code in the short term.</p>\n<p>But like all high-interest debt, you should plan to pay it back fast. Tech\ndebt generally manifests as a slowdown in your development velocity (ie.\noverhead on everything else you do), which means fewer features\nlaunched in the medium-long term, which means less revenue and customer\nfeedback.</p>\n<p>Whoa, weird, right? This short-term high-interest debt both <em>increases</em>\nrevenue and feedback rate, and <em>decreases</em> it. Why?</p>\n<ul>\n<li>\n<p>If you take a single pull request (PR) that adds a new feature, and launch\n  it without tests or documentation, you will definitely get the benefits of\n  that PR sooner.</p>\n</li>\n<li>\n<p>Every PR you try to write after that, before adding the tests and docs\n  (ie. repaying the debt) will be slower because you risk creating\n  undetected bugs or running into undocumented edge cases.</p>\n</li>\n<li>\n<p>If you take a long time to pay off the debt, the slowdown in future\n  launches will outweigh the speedup from the first launch.</p>\n</li>\n</ul>\n<p>This is exactly how CFOs manage corporate financial debt. Debt is a drain on\nyour revenues; the thing you did to incur the debt is a boost to your\nrevenues; if you take too long to pay back the debt, it's an overall loss.</p>\n<p>CFOs can calculate that. Engineers don't like to. (Partly because tech debt\nis less quantifiable. And partly because engineers are the sort of people who\npay off their loans sooner than they mathematically should, as a matter of\nprinciple.)</p>\n<p><b>Debt ceilings</b></p>\n<p>The US government has imposed a <a href=\"https://www.reuters.com/world/us/biden-signs-bill-lifting-us-debt-limit-2023-06-03/\">famously ill-advised debt\nceiling</a>\non itself, that mainly serves to cause drama and create a great place to\npush through unrelated riders that nobody will read, because the bill to\nraise the debt ceiling will always pass.</p>\n<p>Real-life debt ceilings are defined by your creditworthiness: banks simply\nwill not lend you more money if you've got so much outstanding debt that\nthey don't believe you can handle the interest payments. That's your credit\nlimit, or the largest mortgage they'll let you have.</p>\n<p>Banks take a systematic approach to calculating the debt ceiling for each\nclient. 
How much can we lend you so that you take out the biggest loan you\npossibly can, thus paying as much interest as possible, without starving to\ndeath or (even worse) missing more than two consecutive payments? Also,\nmorbidly but honestly, since debts are generally not passed down to your\ndescendants, they would like you to be able to just barely pay it all off\n(perhaps by selling off all your assets) right before you kick the bucket.</p>\n<p>They can math this, they're good at it. Remember, they don't want you to pay\nit off early. If you have leftover money you might use it to pay down your\ndebt. That's no good, because less debt means lower interest payments.\nThey'd rather you incur even more debt, then use that leftover monthly\nincome even for bigger interest payments. That's when you're trapped.</p>\n<p>The equivalent in tech debt is when you are so far behind that you can\nbarely keep the system running with no improvements at all; the perfect\nbalance. If things get worse over time, you're underwater and will\neventually fail. But if you reach this zen state of perfect equilibrium, you\ncan keep going forever, running in place. That's your tech debt ceiling.</p>\n<p>Unlike the banking world, I can't think of a way to anthropomorphize a\nvillain who wants you to go that far into debt. Maybe the CEO? I guess maybe\nsomeone who is trying to juice revenues for a well-timed acquisition.\nPrivate Equity firms also specialize in maximizing both financial and\ntechnical debt so they can extract the assets while your company slowly\ndies.</p>\n<p>Anyway, both in finance and tech, you want to stay well away from your\ncredit limit.</p>\n<p><b>Debt to income ratios</b></p>\n<p>There are many imperfect rules of thumb for how much debt is healthy.\n(Remember, some debt is very often healthy, and only people who don't\nunderstand debt rush to pay it all off as fast as they can.)</p>\n<p>One measure is the debt to income ratio (or for governments, the\ndebt to GDP ratio). The problem with debt-to-income is debt and income are two\ndifferent things. The first produces a mostly-predictable repayment cost\nspread over an undefined period of time; the other is a\npossibly-fast-changing benefit measured annually. One is an amount, the\nother is a rate.</p>\n<p>It would be better to measure interest payments as a fraction of revenue. At\nleast that encompasses the distinction between high-interest and\nlow-interest loans. And it compares two cashflow rates rather\nthan the nonsense comparison of a balance sheet measure vs a cashflow\nmeasure. Banks love interest-to-income ratios; that's why your income level\nhas such a big impact on your debt ceiling.</p>\n<p>In the tech world, the interest-to-income equivalent is how much time you\nspend dealing with overhead compared to building new revenue-generating\nfeatures. Again, getting to zero overhead is probably not worth it. I like\nthis <a href=\"https://xkcd.com/1205/\">xkcd explanation</a> of what is and is not worth\nthe time:</p>\n<p><img src=\"https://imgs.xkcd.com/comics/is_it_worth_the_time.png\"></p>\n<p>Tech debt, in its simplest form, is the time you didn't spend making tasks\nmore efficient. When you think of it that way, it's obvious that zero tech\ndebt is a silly choice.</p>\n<p>(Note that the interest-to-income ratio in this formulation has nothing to\ndo with financial income. 
\"Tech income\" in our metaphor is feature\ndevelopment time, where \"tech debt\" is what eats up your development time.)</p>\n<p>(Also note that by this definiton, nowadays tech stacks are so big, complex,\nand irritable that every project starts with a giant pile of someone else's\ntech debt on day 1. Enjoy!)</p>\n<p><b>Debt to equity ratios</b></p>\n<p>Interest-to-income ratios compare two items from your cashflow statement.\nDebt-to-equity ratios compare two items from your balance sheet. Which means\nthey, too, are at least not nonsense.</p>\n<p>\"Equity\" is unfortunately a lot fuzzier than income. How much is your\ncompany worth? Or your product? The potential value of a factory isn't just\nthe value of the machines inside it; it's the amortized income stream you\n(or a buyer) could get from continuing to operate that factory. Which means\nit includes the built-up human and business expertise needed to operate the\nfactory.</p>\n<p>And of course, software is even worse; as many of us know but few\nbusinesspeople admit, the value of proprietary software without the people\nis zero. This is why you hear about acqui-hires (humans create value even if\nthey might quit tomorrow) but never about acqui-codes (code without\nhumans is worthless).</p>\n<p>Anyway, for a software company the \"equity\" comes from a variety of factors.\nIn the startup world, Venture Capitalists are -- and I know this is\ndepressing -- the best we have for valuing company equity. They are, of\ncourse, not very good at it, but they make it up in volume. As software\ncompanies get more mature, valuation becomes more quantifiable and comes\nback to expectations for the future cashflow statement.</p>\n<p>Venture Debt is typically weighted heavily on equity (expected future value)\nand somewhat less on revenue (ability to pay the interest).</p>\n<p>As the company builds up assets and shows faster growth, the assumed\nequity value gets bigger and bigger. In the financial world, that means\npeople are willing to issue more debt.</p>\n<p>(Over in the consumer world: your home is equity. That's why you can get a\nhuge mortgage on a house but your unsecured loan limit is much smaller. So\nVenture Debt is like a mortgage.)</p>\n<p>Anyway, back to tech debt: the debt-to-equity ratio is how much tech debt\nyou've taken on compared to the accumulated value, and future growth rate,\nof your product quality. If your product is acquiring lots of customers\nfast, you can afford to take on more tech debt so you can acquire more\ncustomers even faster.</p>\n<p>What's weirder is that as the absolute value of product equity increases,\nyou can take on a larger and larger absolute value of tech debt.</p>\n<p>That feels unexpected. If we're doing so well, why would we want to take on\n<em>more</em> tech debt? But think of it this way: if your product (thus company)\nare really growing that fast, you will have more people to pay down the tech\ndebt next year than you do now. In theory, you could even take on so much\ntech debt this year that your current team can't even pay the interest...</p>\n<p>...which brings us to leverage. And risk.</p>\n<p><b>Leverage risk</b></p>\n<p>Earlier in this article, I mentioned the popular (and surprisingly, often\ncorrect!) idea that you should \"buy the biggest house you can afford.\" Why\nwould I want a bigger house? My house is fine. I have a big enough house.\nHow is this good advice?</p>\n<p>The answer is the amazing multiplying power of leverage.</p>\n<p>Let's say housing goes up at 5%/year. 
(I wish it didn't because this rate is\nfabulously unsustainable. But bear with me.)\nAnd let's say you have $100k in savings and $100k in annual\nincome.</p>\n<p>You could pay cash and buy a house for $100k. Woo hoo, no mortgage! And\nit'll go up in value by about $5k/year, which is not bad I guess.</p>\n<p>Or, you could buy a $200k house: a $100k down payment and a $100k mortgage\nat, say, 3% (fairly common back in 2021), which means $3k/year\nin interest. But your $200k house goes up by 5% = $10k/year. Now you have an\nannual gain of $10k - $3k = $7k, much more than the $5k you were making\nbefore, with the same money. Sweet!</p>\n<p>But don't stop there. If the bank will let you get away with it, why not a\n$1M house with a $100k down payment? That's $1M x 5% = +$50k/year in value,\nand $900k x 3% = $27k in interest, so a solid $23k in annual (unrealized)\ncapital gain. From the same initial bank balance! Omg we're printing money.</p>\n<p>(Obviously we're omitting maintenance costs and property tax here. Forgive\nme. On the other hand, presumably you're getting intangible value from\nliving in a much bigger and fancier house. $AAPL shares don't have skylights\nand rumpus rooms and that weird statue in bedroom number seven.)</p>\n<p>What's the catch? Well, the catch is massively increasing risk.</p>\n<p>Let's say you lose your job and can't afford interest payments. If you\nbought your $100k house with no mortgage, you're in luck: that house is\nyours, free and clear. You might not have food but you have a place to live.</p>\n<p>If you bought the $1M house and have $900k worth of mortgage payments to\nkeep up, you're screwed. Get another job or get ready to move out and\ndisrupt your family and change everything about your standard of living, up\nto and possibly including bankruptcy, which we'll get to in a bit.</p>\n<p>Similarly, let's imagine that your property value stops increasing, or (less\ncommon in the US for stupid reasons, but common everywhere else) mortgage\nrates go up. The leverage effect multiplies your potential losses just like\nit multiplies your potential gains.</p>\n<p>Back to tech debt. What's the analogy?</p>\n<p>Remember that idea I had above, of incurring extra tech debt this year to\nkeep the revenue growth rolling, and then planning to pay it off next year\nwith the newer and bigger team? Yeah, that actually works... if you keep\ngrowing. If you estimated your tech debt interest rate correctly. If that\nfuture team materializes. (If you can even motivate that future team to work\non tech debt.) If you're rational, next year, about whether you borrow more\nor not.</p>\n<p>That thing I said about the perfect equilibrium running-in-place state, when\nyou spend all your time just keeping the machine operating and you have no\ntime to make it better. How do so many companies get themselves into that\nstate? In a word, leverage. They guessed wrong. The growth rate fell off,\nthe new team members didn't materialize or didn't ramp up fast enough.</p>\n<p>And if you go past equilibrium, you get the worst case: your tech debt\ninterest is greater than your tech production (income). Things get worse and\nworse and you enter the downward spiral. This is where desperation sets in.\nThe only remaining option is <strike>bankruptcy</strike> Tech Debt\nRefinancing.</p>\n<p><b>Refinancing</b></p>\n<p>Most people who can't afford the interest on their loans don't declare\nbankruptcy. 
The step before that is to make an arrangement with your\ncreditors to lower your interest payments. Why would they accept such an\nagreement? Because if they don't, you'll declare bankruptcy, which is annoying\nfor you but hugely unprofitable for them.</p>\n<p>The tech metaphor for refinancing is <em>premature deprecation</em>. Yes, people\nlove both service A and service B. Yes, we are even running both services at\nfinancial breakeven. But they are slipping, slipping, getting a little worse\nevery month and digging into a hole that I can't escape. In order to pull\nout of this, I have to stop my payments on A so I can pay back more of B; by\nthen A will be unrecoverably broken. But at least B will live on, to fight\nanother day.</p>\n<p>Companies do this all the time. Even at huge profitable companies, in some\ncorners you'll occasionally find an understaffed project sliding deeper and\ndeeper into tech debt. Users may still love it, and it may even be net\nprofitable, but not profitable enough to pay for the additional engineering\ntime to dig it out. Such a project is destined to die, and the only\nquestion is when. The answer is \"whenever some executive finally notices.\"</p>\n<p><b>Bankruptcy</b></p>\n<p>The tech bankruptcy metaphor is an easy one: if refinancing doesn't work and\nyour tech debt continues to spiral downward, sooner or later your finances\nwill follow. When you run out of money you declare bankruptcy; what's\ninteresting is your tech debt disappears at the same time your financial\ndebt does.</p>\n<p>This is a really important point. You can incur all the tech debt in the\nworld, and while your company is still operating, you at least have some\nchance of someday paying it back. When your company finally dies, you will\nfind yourself off the hook; the tech debt never needs to be repaid.</p>\n<p>Okay, for those of us grinding away at code all day, perhaps that sounds\nperversely refreshing. But it explains lots of corporate behaviour. The more\ndesperate a company gets, the less they care about tech debt. <em>Anything</em> to\nturn a profit. They're not wrong to do so, but you can see how the downward\nspiral begins to spiral downward. The more tech debt you incur, the slower\nyour development goes, and the harder it is to do something productive that\nmight make you profitable. You might still pull it off! But your luck will\nget progressively worse.</p>\n<p>The reverse is also true. When your company is doing well, you have time to\npay back tech debt, or at least to control precisely how much debt you take\non and when. To maintain your interest-to-income ratio or debt-to-equity\nratio at a reasonable level.</p>\n<p>When you see a company managing their tech debt carefully, you see a company\nthat is planning for the long term rather than a quick exit. Again, that\ndoesn't mean paying it all back. It means being careful.</p>\n<p><b>Student loans that are non-dischargeable in bankruptcy</b></p>\n<p>Since we're here anyway talking about finance, let's talk about the idiotic\nUS government policy of guaranteeing student loans, but also not allowing\npeople to discharge those loans (ie. zero them out) in bankruptcy.</p>\n<p>What's the effect of this? Well, of course, banks are extremely eager to\ngive these loans out to anybody, at any scale, as fast as they can, because\nthey can't lose. They have all the equity of the US government to back them\nup. 
The debt-to-equity ratio is effectively zero.</p>\n<p>And of course, people who don't understand finance (which they don't teach\nyou until university; catch-22!) take on lots of these loans in the hope of\nmaking money in the future.</p>\n<p>Since anyone who wants to go to university can get a student loan,\nAmerican universities keep raising their rates until they find the maximum amount\nthat lenders are willing to lend (unlimited!) or foolish borrowers are\nwilling to borrow in the name of the American Dream (so far we haven't found\nthe limit).</p>\n<p>Where was I? Oh right, tech metaphors.</p>\n<p>Well, there are two parts here. First, unlimited access to money. Well, the\ntech world has had plenty of that, prior to the 2022 crash anyway. The\nresult is they hired way too many engineers (students) who did a lot of dumb\nstuff (going to school) and incurred a lot of tech debt (student loans) that\nthey promised to pay back later when their team got bigger (they earned\ntheir Bachelor's degree and got a job), which unfortunately didn't\nmaterialize. Oops. They are worse off than if they had skipped all that.</p>\n<p>Second, inability to discharge the debt in bankruptcy. Okay, you got me.\nMaybe we've come to the end of our analogy. Maybe US government policies\nactually, and this is quite an achievement, manage to be even dumber than\ntech company management. In this one way. Maybe.</p>\n<p>OR MAYBE YOU <a href=\"/log/20091224\">OPEN SOURCED WVDIAL</a> AND PEOPLE STILL EMAIL YOU\nFOR HELP DECADES AFTER YOUR FIRST STARTUP IS LONG GONE.</p>\n<p>Um, sorry for that outburst. I have no idea where that came from.</p>\n<p><b>Bonus note: bug bankruptcy</b></p>\n<p>While we're here exploring financial metaphors, I might as well say\nsomething about bug bankruptcy. Although I <a href=\"/log/20171213\">have been known to make fun of\nbug bankruptcy</a>, it too is an excellent metaphor, but only if\nyou take it far enough.</p>\n<p>For those who haven't heard of this concept, bug bankruptcy happens when\nyour bug tracking database is so full of bugs that you give up and delete\nthem all and start over (\"declare bankruptcy\").</p>\n<p>Like financial bankruptcy, it is very tempting: I have this big pile of\nbills. Gosh, it is a big pile. Downright daunting, if we're honest. Chances\nare, if I opened all these bills, I would find out that I owe more money\nthan I have, and moreover, next month a bunch more bills will come and I\nwon't be able to pay them either and this is hopeless. That would be\nstressful. My solution, therefore, is to throw all the bills in the\ndumpster, call up my friendly neighbourhood bankruptcy trustee, and\nconveniently discharge all my debt once and for all.</p>\n<p>Right?</p>\n<p>Well, not so fast, buddy. Bankruptcy has consequences. First of all, it's\nkind of annoying to arrange legally. Secondly, it sits on your financial\nrecords for like 7 years afterwards, during which time probably nobody will\nbe willing to issue you any loans, because you're empirically the kind of\nperson who does not pay back their loans.</p>\n<p>And that, my friends, is also how bug bankruptcy works. Although the process\nfor declaring it is easier -- no lawyers or trustees required! -- the\nlong-term destruction of trust is real. If you run a project in which a lot\nof people spent a bunch of effort filing and investigating bugs (ie. 
lent\nyou their time in the hope that you'll pay it back by fixing the bugs\nlater), and you just close them all wholesale, you can expect that those\npeople will eventually stop filing bugs. Which, you know, admittedly feels\nbetter, just like the hydro company not sending you bills anymore feels\nbetter until winter comes and your heater doesn't work and you can't figure\nout why and you eventually remember \"oh, I think someone said this might\nhappen but I forget the details.\"</p>\n<p>Anyway, yes, you can do it. But refinancing is better.</p>\n<p><b>Email bankruptcy</b></p>\n<p>Email bankruptcy is similar to bug bankruptcy, with one important\ndistinction: nobody ever expected you to answer your email anyway. I'm\nhonestly not sure why people keep sending them.</p>\n<p>ESPECIALLY EMAILS ABOUT WVDIAL where does that voice keep coming from</p>",
      "url": "https://apenwarr.ca/log/20230605",
      "published": "2023-07-11T03:12:47.000Z",
      "updated": "2023-07-11T03:12:47.000Z",
      "content": null,
      "image": null,
      "media": [],
      "authors": [],
      "categories": []
    }
  ]
}
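The "NPS, the good parts" item above describes collapsing a 10-point rating into three regions -- roughly 1-6, 7-8, and 9-10 -- and then reporting the three counts separately instead of averaging them or subtracting one from the other. A minimal Python sketch of that post-processing follows; the bucket names and the sample scores are invented for illustration, and the cut points are the conventional split the post mentions rather than anything measured from a real survey.

from collections import Counter

def bucket(score: int) -> str:
    # Collapse a 1-10 rating into the three regions the post describes.
    # Thresholds assume the conventional 1-6 / 7-8 / 9-10 split; tune them
    # to wherever your own survey's outliers actually sit.
    if score <= 6:
        return "angry"      # strong negative outlier: investigate each one
    if score >= 9:
        return "delighted"  # strong positive outlier: potential evangelists
    return "meh"            # no real emotion; there are no degrees of meh

def summarize(scores):
    # Report three separate counts -- no average, no promoter-minus-detractor score.
    counts = Counter(bucket(s) for s in scores)
    return {k: counts.get(k, 0) for k in ("angry", "meh", "delighted")}

print(summarize([7, 8, 7, 2, 9, 7, 10, 8, 8, 1, 7, 8]))
# -> {'angry': 2, 'meh': 8, 'delighted': 2}

Keeping the three numbers separate matches the post's advice: the angry, meh, and delighted groups each call for a different response, so a single blended score hides the information you actually need.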
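The "Tech debt metaphor maximalism" item works through its leverage example with concrete numbers: $100k of savings, houses appreciating at 5% per year, and a 3% mortgage rate on whatever is borrowed. Here is a small sketch reproducing that arithmetic under the same assumed rates; maintenance, property tax, and the tech-debt analogue are left out, as they are in the post's own calculation.

def annual_gain(house_price, down_payment, appreciation=0.05, mortgage_rate=0.03):
    # Unrealized yearly gain: appreciation on the whole house
    # minus interest on the borrowed portion.
    loan = house_price - down_payment
    return house_price * appreciation - loan * mortgage_rate

for price in (100_000, 200_000, 1_000_000):
    print(f"${price:,} house: ${annual_gain(price, 100_000):,.0f}/year")
# $100,000 house: $5,000/year    (no mortgage: just 5% of $100k)
# $200,000 house: $7,000/year    ($10k appreciation - $3k interest)
# $1,000,000 house: $23,000/year ($50k appreciation - $27k interest)

The same multiplier cuts the other way, as the post notes: if appreciation stops or the rate rises, the borrowed amount magnifies the loss exactly as it magnified the gain.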