RSS.Style: RSS/Atom Feed Analysis


Analysis of https://aphyr.com/posts.atom

Feed fetched in 85 ms.
Warning: Content type is application/atom+xml, not text/xml.
Feed is 90,219 characters long.
Warning: Feed is missing an ETag.
Warning: Feed is missing the Last-Modified HTTP header.
Warning: This feed does not have a stylesheet.
This appears to be an Atom feed.
Feed title: Aphyr: Posts
Feed self link matches feed URL.
Feed has 12 items.
First item published on 2025-05-29T17:36:16.000Z
Last item published on 2023-12-05T14:49:05.000Z
Home page URL: https://aphyr.com/
Home page has feed discovery link in <head>.
Error: Home page does not have a link to the feed in the <body>.
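The stylesheet warning above means that browsers opening the feed URL directly will show raw XML. A feed can opt into human-readable rendering by placing an xml-stylesheet processing instruction before the root element; a minimal sketch (the href /feed.xsl is a hypothetical path, not one this feed actually serves):

```xml
<?xml version="1.0" encoding="UTF-8"?>
<?xml-stylesheet type="text/xsl" href="/feed.xsl"?>
<!-- Browsers that open the feed directly render it through the XSLT
     stylesheet above; feed readers ignore the processing instruction. -->
<feed xmlns="http://www.w3.org/2005/Atom">
    <title>Aphyr: Posts</title>
    <!-- rest of the feed unchanged -->
</feed>
```

The ETag and Last-Modified warnings, by contrast, are about HTTP response headers, so they are fixed in the web server configuration rather than in the feed document itself; both enable conditional requests so that polling clients can receive 304 Not Modified instead of re-downloading the full feed.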

Formatted XML
<?xml version="1.0" encoding="UTF-8"?>
<feed xmlns="http://www.w3.org/2005/Atom">
    <id>https://aphyr.com/</id>
    <title>Aphyr: Posts</title>
    <updated>2025-05-29T18:05:42-05:00</updated>
    <link href="https://aphyr.com/"></link>
    <link rel="self" href="https://aphyr.com/posts.atom"></link>
    <entry>
        <id>https://aphyr.com/posts/388-the-future-of-comments-is-lies-i-guess</id>
        <title>The Future of Comments is Lies, I Guess</title>
        <published>2025-05-29T12:36:16-05:00</published>
        <updated>2025-05-29T12:36:16-05:00</updated>
        <link rel="alternate" href="https://aphyr.com/posts/388-the-future-of-comments-is-lies-i-guess"></link>
        <author>
            <name>Aphyr</name>
            <uri>https://aphyr.com/</uri>
        </author>
        <content type="html">&lt;p&gt;I’ve been involved in content moderation since roughly 2004. I’ve built spam prevention for corporate and personal e-mail, moderated open-source mailing lists and IRC channels, worked at a couple social media networks, and help moderate a Mastodon instance for a few hundred people. In the last few years I’ve wasted more and more time fighting blog comment spam, and I’m pretty sure Large Language Models (LLMs) are to blame.&lt;/p&gt;
&lt;p&gt;I think of spam as a space with multiple equilibria. Producing spam takes work. Spammers are willing to invest that work because each message has a small chance to make money, or achieve political or emotional goals. Some spam, like the endless identical Viagra scams in my email spam folder, or the PHPBB comment spam I filter out here on aphyr.com, is cheap to generate and easy to catch. I assume the spammers make it up in volume. Other spam, like &lt;a href="https://www.ibm.com/think/topics/spear-phishing"&gt;spear phishing attacks&lt;/a&gt;, is highly time-consuming: the spammer must identify a target, carefully craft a plausible message using, say, the identity of the target’s co-workers, or construct a facade of a bank’s log-in page, and so on. This kind of spam is more likely to make it through filters, but because it takes a lot of human work, is generally only worth it for high-value targets.&lt;/p&gt;
&lt;p&gt;LLMs seem to be changing these equilibria. Over the last year I’ve seen a new class of comment spam, using what I’m fairly sure is LLM-generated text. These comments make specific, plausible remarks about the contents of posts and images, and work in a link to some web site or mention a product. Take this one I caught a few months back:&lt;/p&gt;
&lt;blockquote&gt;
&lt;p&gt;&amp;quot;Walking down a sidewalk lined with vibrant flowers is one of life’s simple joys! It reminds me of playing the [link redacted] slope game, where you have to navigate through colorful landscapes while dodging obstacles.&lt;/p&gt;
&lt;/blockquote&gt;
&lt;p&gt;Before 2023, you’d likely have paid a human a few cents to write and post that. Now, thanks to LLMs, this sort of thing is trivially automated. The model will happily fabricate relatable personal experiences in service of a spam campaign:&lt;/p&gt;
&lt;blockquote&gt;
&lt;p&gt;That photo reminds me of the first time I tried macro photography in my backyard. I spent an hour trying to get a clear shot of a red flower, experimenting with angles and lighting. It was so much fun discovering the little details up close! If you ever need a break from photography, I recommend playing Snow Rider 3D for a bit of quick, light-hearted fun.&lt;/p&gt;
&lt;/blockquote&gt;
&lt;p&gt;Other spam seems to glue together LLM remarks with what I think is a hand-written snippet of ad copy. Note the abrupt shift in grammar, diction, and specificity.&lt;/p&gt;
&lt;blockquote&gt;
&lt;p&gt;This piece masterfully blends technical depth with mythological storytelling, transforming a standard Haskell programming interview into an epic narrative. It cleverly critiques the complexity and absurdity of some technical interviews by illustrating how type-level Haskell can be pushed to esoteric extremes beautiful, powerful, and largely impractical. A fascinating and relevant read for anyone interested in the intersection of programming, language design, and narrative. I’m James Maicle, working at Cryptoairhub where we run a clear and insightful crypto blog. I’ll be bookmarking your site and following the updates. Glad to see so much valuable information shared here looking forward to exploring more strategies together. Thanks for sharing. If you interest about Crypto please visit my website and read my article [link redacted] Crypto Blog.&lt;/p&gt;
&lt;/blockquote&gt;
&lt;p&gt;The same thing is happening on aggregators like Hacker News, where commenters post more-or-less-obviously LLM output for… I’m not sure, exactly. Karma? Weirder still are bots like &lt;a href="https://bsky.app/profile/hackerbriefs.bsky.social"&gt;Hacker Briefs&lt;/a&gt;, which I suspect use an LLM to summarize trending HN posts. Here’s &lt;a href="https://bsky.app/profile/hackerbriefs.bsky.social/post/3lnz4kirkpv25"&gt;its summary of a recent article I wrote&lt;/a&gt;:&lt;/p&gt;
&lt;blockquote&gt;
&lt;p&gt;“Jepsen: Amazon RDS for PostgreSQL 17.4”&lt;/p&gt;
&lt;p&gt;New multi-AZ clusters in Amazon RDS for PostgreSQL offer better failure recovery but may return outdated data when reading after writes. Caution is needed.&lt;/p&gt;
&lt;/blockquote&gt;
&lt;p&gt;This is a totally plausible summary of &lt;a href="https://jepsen.io/analyses/amazon-rds-for-postgresql-17.4"&gt;the article&lt;/a&gt;, and it is utterly, laughably wrong. Multi-AZ clusters are not new, and they do not return outdated data when reading after writes. As the abstract succinctly explains, they allow Long Fork, a different anomaly which does not involve real-time orders at all. The bot ignored the actual problem and invented a different one. This sort of spam isn’t obviously motivated by commercial interest, but it is nevertheless depressing: one more drop in the misinformation slurry.&lt;/p&gt;
&lt;p&gt;Of course this is not news. Product reviews are inundated with LLM slop, as are social media networks. LLMs allow for cheap, fast, and automated generation of unique spam which is difficult for machines and humans to identify. The cost falls on me and other moderators, who must sift through LLM bullshit trying to sieve “awkward but sincere human” from “automated attack”. Thanks to OpenAI et al I read more spam, and each message takes longer to check.&lt;/p&gt;
&lt;p&gt;This problem is only going to get worse as LLMs improve and spammers develop more sophisticated ways to use them. In recent weeks I’ve received vague voice messages from strangers with uncanny speech patterns just asking to catch up—a sentence which, had I uttered it prior to 2023, would have been reasonably interpreted as a sign of psychosis. I assume these too are LLM-generated scams, perhaps a &lt;a href="https://dfpi.ca.gov/wp-content/uploads/2025/03/Pig-Butchering-Scam-Playbook.pdf"&gt;pig butchering&lt;/a&gt; scheme. So far these are from strangers, but it’s not hard to imagine an attacker using text and voice synthesis to impersonate a friend, colleague, or lover in a detailed conversation. Or one’s doctor, or bank.&lt;/p&gt;
&lt;p&gt;As the cost of generating slop decreases, it’s easy to imagine LLMs generating personae, correspondence, even months-long relationships with real humans before being deployed for commercial or political purposes. &lt;a href="https://www.techtransparencyproject.org/articles/for-sale-on-facebook-accounts-that-can-run-u.s.-election-ads"&gt;Creating plausible accounts and selling them&lt;/a&gt; has been a successful business model in social media for some time; likewise, we have strong clues that &lt;a href="https://arxiv.org/pdf/2307.16336"&gt;LLMs are already used for social media bots&lt;/a&gt;. Social networks have responded to these attacks via out-of-band mechanisms: IP reputation analysis, javascript and mobile app fingerprinting, statistical correlation across multiple accounts, and so on. I’m not sure how to translate these defenses to less centralized and more privacy-oriented networks, like email or blog spam. I strongly suspect the only reason Mastodon hasn’t been eaten alive by LLM spambots is because we’re just not big enough to be lucrative. But those economics are shifting, and even obscure ecological niches can be worth filling.&lt;/p&gt;
&lt;p&gt;As a moderator, that keeps me up at night.&lt;/p&gt;</content>
    </entry>
    <entry>
        <id>https://aphyr.com/posts/387-the-future-of-customer-support-is-lies-i-guess</id>
        <title>The Future of Customer Support is Lies, I Guess</title>
        <published>2025-05-21T07:12:41-05:00</published>
        <updated>2025-05-21T07:12:41-05:00</updated>
        <link rel="alternate" href="https://aphyr.com/posts/387-the-future-of-customer-support-is-lies-i-guess"></link>
        <author>
            <name>Aphyr</name>
            <uri>https://aphyr.com/</uri>
        </author>
        <content type="html">&lt;p&gt;&lt;em&gt;Update, 2025-05-22: TrueNAS was kind enough to reach out and let me know that their support process does not normally incorporate LLMs. They’re talking about what happened internally, and intend to prevent it from happening again through improved documentation and reviewing the support process as a whole. I’m happy to hear it!&lt;/em&gt;&lt;/p&gt;
&lt;p&gt;&lt;a href="https://www.truenas.com/"&gt;TrueNAS&lt;/a&gt; makes file servers, also known as Network Attached Storage (NAS). I bought one of their smaller boxes to house backups, and it’s a great little NAS. For many years TrueNAS sold boxes with a BSD-based operating system, but in the last few years they’ve released a new, Linux-backed operating system called &lt;a href="https://www.truenas.com/truenas-scale/"&gt;TrueNAS SCALE&lt;/a&gt;, also called “Community Edition”. I was considering migrating to TrueNAS SCALE, but the docs &lt;a href="https://web.archive.org/web/20250408163343/https://www.truenas.com/docs/scale/gettingstarted/migrate/migrate-community/"&gt;started off with this warning&lt;/a&gt;:&lt;/p&gt;
&lt;blockquote&gt;
&lt;p&gt;TrueNAS Enterprise customers should consult with TrueNAS Enterprise Support before attempting migrate.&lt;/p&gt;
&lt;p&gt;The process requires an extended maintenance window, requires executing steps in the correct order to prevent issues with system configuration and operation, and additional system review post-migration to catch and correct any configuration issues.&lt;/p&gt;
&lt;/blockquote&gt;
&lt;p&gt;Since I’d actually purchased their hardware, I wrote to their support email. I asked whether the BSD-based OS was reaching end-of-life, and if there was anything special I should know about the upgrade process for my particular machine. The response came from a customer success coordinator, whose name I’ll omit for privacy. It began:&lt;/p&gt;
&lt;blockquote&gt;
&lt;p&gt;Hi Kyle,&lt;/p&gt;
&lt;p&gt;It sounds like you’re at an important crossroads with your TrueNAS Mini 3.0-X+ system as it transitions from the traditional BSD-based OS to the Linux-based SCALE platform, now known as TrueNAS Community Edition.&lt;/p&gt;
&lt;p&gt;Understanding the Transition&lt;br&gt;
End-of-Life for Certain Versions: The BSD-based version of TrueNAS you’re currently using (TrueNAS-13.0-U6.7) may be reaching its end-of-life phase, prompting a move to newer systems like TrueNAS SCALE, which is based on Linux and offers more modern features.&lt;/p&gt;
&lt;p&gt;Community Edition: The Community edition represents the transition towards a more flexible, open-source platform that leverages FreeBSD jails (similar in concept to containers) for virtualization, providing scalability and extensibility.&lt;/p&gt;
&lt;/blockquote&gt;
&lt;p&gt;The phrase “may be reaching its end-of-life phase” was immediately worrying: how does a TrueNAS support agent not know whether their own products are EOL? But what really shocked me was the blatant lie in the third paragraph: they told me that the new OS was based on Linux, then immediately turned around and said that the new system was based on FreeBSD. FreeBSD jails were TrueNAS’s old approach to virtualization; the new Community Edition &lt;a href="https://www.truenas.com/truenas-community-edition/"&gt;uses Docker containers&lt;/a&gt;. I could not believe that a human being had actually skimmed this response, let alone written it. I was, I think, talking to a Large Language Model (LLM) wearing the face of a person.&lt;/p&gt;
&lt;p&gt;The response went on, coining odd phrases and warning of unspecified risks. “Non-HA (Non-Redundant Array of Independent Disks) TrueNAS hardware can sometimes have specific requirements or limitations when migrating to TrueNAS Community Edition,” they said. RAID and HA are different things in TrueNAS. Moreover, the reason I wrote to support in the first place was to ask what those “specific requirements” were. I was hoping for a support article I’d somehow missed, or a canned response. Instead I got my own question, regurgitated as a warning.&lt;/p&gt;
&lt;p&gt;I wrote with followup questions, and got more nonsense in response. When I asked how TrueNAS Community Edition used FreeBSD jails, a different customer success coordinator replied:&lt;/p&gt;
&lt;blockquote&gt;
&lt;p&gt;FreeBSD jails are similar to containers (like Docker) but built into the FreeBSD system. TrueNAS uses them to run isolated apps and services. For example, you can run Plex, Nextcloud, or other apps safely and separately using jails — all within your TrueNAS system.&lt;/p&gt;
&lt;p&gt;They’re lightweight, secure, and don’t need a full virtual machine to run each service. It’s a great way to expand what your system can do without using much extra power or memory.&lt;/p&gt;
&lt;/blockquote&gt;
&lt;p&gt;This is all lies. TrueNAS CORE, the old FreeBSD OS, ran jails. TrueNAS SCALE / Community Edition runs Docker containers.&lt;/p&gt;
&lt;p&gt;Quoting their HA/RAID confusion, I asked whether, for TrueNAS hardware, the two were actually the same.&lt;/p&gt;
&lt;blockquote&gt;
&lt;p&gt;Good question — they’re different:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;RAID protects your data by spreading it across multiple disks, so if one disk fails, your data is still safe.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;HA (High Availability) means the system has two controllers. If one fails, the other takes over so the system keeps running. This is used in high-end enterprise systems (like our TrueNAS M-series), not in the MINI line.&lt;/p&gt;
&lt;/li&gt;
&lt;/ul&gt;
&lt;/blockquote&gt;
&lt;p&gt;This is, I believe, accurate, but betrays no sign that the original response conflated the two.&lt;/p&gt;
&lt;p&gt;I asked what specific limitations my NAS model had that would cause issues migrating to Community Edition. They replied:&lt;/p&gt;
&lt;blockquote&gt;
&lt;p&gt;The MINI-3.0-X+ is compatible with Community Edition (TrueNAS CORE), but there are a few things to know:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;No Enterprise-only features: Some features like enclosure management and support alerts are only available in the Enterprise version.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;No HA support: Not an issue for the MINI, since it’s a single-controller system anyway.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Support level: CORE is community-supported, so it doesn’t come with iXsystems support like the Enterprise version does.&lt;/p&gt;
&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;But overall — the MINI runs CORE just fine for most use cases!&lt;/p&gt;
&lt;/blockquote&gt;
&lt;p&gt;This sounds like a reasonable response, but it too is complete nonsense. TrueNAS CORE is a completely different operating system. I had asked whether my machine would run Community Edition, also known as SCALE, not CORE.&lt;/p&gt;
&lt;p&gt;I worked support for many years, helping people with desktops, systems and networking, and occasionally web services. I’m used to knowledge base articles and canned responses on both ends—and indeed, that’s exactly what I was hoping TrueNAS would provide. All I needed to know was why the documentation warned their customers to contact support before upgrading. Instead I got a pile of vague nonsense and lies.&lt;/p&gt;
&lt;p&gt;I get it. Support is often viewed as a cost center, and agents are often working against a brutal, endlessly increasing backlog of tickets. There is pressure at every level to clear those tickets in as little time as possible. Large Language Models create plausible support responses with incredible speed, but their output must still be reviewed by humans. Reviewing large volumes of plausible, syntactically valid text for factual errors is exhausting, time-consuming work, and every few minutes a new ticket arrives.&lt;/p&gt;
&lt;p&gt;Companies must do more with less; what was once a team of five support engineers becomes three. Pressure builds, and the time allocated to review the LLM’s output becomes shorter and shorter. Five minutes per ticket becomes three. The LLM gets it mostly right. Two minutes. Looks good. Sixty seconds. Click submit. There are one hundred eighty tickets still in queue, and behind every one is a disappointed customer, and behind that is the risk of losing one’s job. Thirty seconds. Submit. Submit. The metrics do not measure how many times the system has lied to customers.&lt;/p&gt;</content>
    </entry>
    <entry>
        <id>https://aphyr.com/posts/386-the-future-of-newspapers-is-lies-i-guess</id>
        <title>The Future of Newspapers is Lies, I Guess</title>
        <published>2025-05-21T06:51:40-05:00</published>
        <updated>2025-05-21T06:51:40-05:00</updated>
        <link rel="alternate" href="https://aphyr.com/posts/386-the-future-of-newspapers-is-lies-i-guess"></link>
        <author>
            <name>Aphyr</name>
            <uri>https://aphyr.com/</uri>
        </author>
        <content type="html">&lt;p&gt;&lt;em&gt;Update, 2025-05-23: Added a paragraph about Dr. Howard Whiteman’s non-existent quote.&lt;/em&gt;&lt;/p&gt;
&lt;p&gt;&lt;em&gt;I subscribe to the Chicago Sun-Times, a non-profit newspaper. This week they sent me a sixty-four page special insert, branded with the Chicago Sun-Times logo, full of LLM nonsense. Yesterday I wrote the following letter to the Chicago Sun-Times. That evening, they published &lt;a href="https://chicago.suntimes.com/news/2025/05/20/syndicated-content-sunday-print-sun-times-ai-misinformation"&gt;this followup&lt;/a&gt;.&lt;/em&gt;&lt;/p&gt;
&lt;p&gt;Dear Mr. Buscaglia and the Chicago Sun-Times,&lt;/p&gt;
&lt;p&gt;The May 18th “Heat Index” summer insert feels remarkably like “AI slop”: text generated by a large language model. The unusually enthusiastic use of adjectives coats the entire insert with an oily sheen, but I’m also concerned that there may be wholesale fabrications in the text.&lt;/p&gt;
&lt;p&gt;For instance, “Summer soundtracks” cites Dr. Daniel Levitin, in an interview with Scientific American, as saying “Music activates the brain’s reward centers in ways similar to other pleasurable experiences. When paired with meaningful activities, these songs become powerful memory anchors that can transport us back to those moments for decades”. These phrases, and substrings thereof, return zero matches on Kagi or Google. Scientific American’s archives include a &lt;a href="https://www.scientificamerican.com/article/exploring-the-musical-bra/"&gt;January 22, 2001 article&lt;/a&gt; with several quotes from Levitin, but nothing like the purported quotes.&lt;/p&gt;
&lt;p&gt;The “Nightfall adventures” article cites Dr. Howard Whiteman, professor at Murray State University. “Understanding nocturnal adaptations helps us appreciate the complexity of ecosystems. Even common backyard species display fascinating behaviors that most people never witness.” I checked with Dr. Whiteman, who confirmed that he had never said these words:&lt;/p&gt;
&lt;blockquote&gt;
&lt;p&gt;I have never spoken to the Chicago Sun-Times or any other Chicago paper, and I can’t recall ever saying any of those words to anyone, ever.&lt;/p&gt;
&lt;p&gt;I write for some regional papers, some of which is online, and I could see an AI putting all of my writing together somehow into those words, but I’ve never actually written about nocturnal behavior per se.&lt;/p&gt;
&lt;/blockquote&gt;
&lt;p&gt;The “Summer food trends” article cites an interview in Bon Appetit, claiming Padma Lakshmi said, “What distinguishes this summer’s global food exploration is how these international flavors are being adapted to local, seasonal ingredients.” Lakshmi &lt;em&gt;is&lt;/em&gt; published in &lt;a href="https://www.bonappetit.com/story/padma-lakshmi"&gt;an interview with BA&lt;/a&gt;, but I can’t find any use of the uncanny marketer-speak “this summer’s global food exploration”.&lt;/p&gt;
&lt;p&gt;The same article also cites the National Ice Cream Retailers Association as forecasting “unexpected savory notes” and “hyperlocal ingredient sourcing” as the “fastest-growing segments in premium frozen desserts”. I find it hard to believe these are segments at all—nor do these phrases appear anywhere on the &lt;a href="https://icecreamassociation.org/general/custom.asp?page=Aboutus"&gt;NICRA web site&lt;/a&gt;.&lt;/p&gt;
&lt;p&gt;The “Summer reading list for 2025” recommends books like “The Rainmakers”, by Percival Everett—a real author, but as far as I can tell, not a real text. Immediately thereafter it offers “Salt and Honey”, by “Delia Owens”—again, a real writer, not a real book. I started reaching out to some of the experts cited in the insert to ask whether their quotes were real, then realized &lt;a href="https://www.theverge.com/ai-artificial-intelligence/670510/chicago-sun-times-ai-generated-reading-list"&gt;the Verge beat me to it&lt;/a&gt;.&lt;/p&gt;
&lt;p&gt;These examples are just from a cursory skim; the insert fails my LLM “sniff test” on essentially every page. How did this happen? How do you print and distribute a full-color, sixty-four page special edition without anyone reading it first? Many of the articles have no byline, but those that do are purportedly by Marco Buscaglia. Was he responsible? Or is he too, like so many of the people cited in this nonsense publication, a victim of confabulatory applied statistics? This would be a great project for the Sun-Times newsroom, assuming OpenAI hasn’t eaten them yet.&lt;/p&gt;
&lt;p&gt;Doubtfully,&lt;/p&gt;
&lt;p&gt;—Kyle Kingsbury&lt;/p&gt;</content>
    </entry>
    <entry>
        <id>https://aphyr.com/posts/381-when-flat-rate-movers-wont-answer-your-calls</id>
        <title>When Flat Rate Movers Won't Answer Your Calls</title>
        <published>2025-05-03T09:23:16-05:00</published>
        <updated>2025-05-03T09:23:16-05:00</updated>
        <link rel="alternate" href="https://aphyr.com/posts/381-when-flat-rate-movers-wont-answer-your-calls"></link>
        <author>
            <name>Aphyr</name>
            <uri>https://aphyr.com/</uri>
        </author>
        <content type="html">&lt;p&gt;Back in 2023 I went through an inter-state move with &lt;a href="https://www.flatrate.com/moving/"&gt;Flat Rate Movers,
LTD.&lt;/a&gt;, a.k.a. Flat Rate Moving, US DOT 488466, MC 254356. They bungled the job, damaged my stuff
and my house, and then refused to talk to me or their own insurance company. I
placed dozens of calls, wrote letters, emails, nothing worked. I finally got
some money for the damage by working with their insurer directly. I know a lot
of folks struggle with moving companies, and figured I’d write up a bit of this
story in case it helps. You can &lt;a href="#finding-their-insurance"&gt;skip ahead&lt;/a&gt; if you
just want to know about the FMCSA insurer database.&lt;/p&gt;
&lt;p&gt;In November 2022 I scheduled the move with Flat Rate. We took a detailed video
inventory of everything to be moved. I paid over $14,000 for an “all inclusive
move”, which, per the move plan, included full packing services, blankets for furniture, boxes for electronics, lamps, and bicycles, a custom crate for a tabletop, and “extended valuation coverage”—their insurance policy.&lt;/p&gt;
&lt;p&gt;A few days before the move in March 2023, Flat Rate called to tell me a
five-person team would arrive. When the crew arrived, it wasn’t Flat Rate. It
was a team of two from &lt;a href="https://www.esquiremoving.com/"&gt;Esquire Moving Inc&lt;/a&gt; (US DOT 2820253, MC 940499). They had
no idea they were supposed to provide packing services. The crate I paid for was nowhere to be found. They didn’t have enough boxes or blankets. I had to have everything out that day, so
there wasn’t any choice—I worked alongside the team for thirteen hours to
complete the packing and move-out.&lt;/p&gt;
&lt;p&gt;I let the team handle furniture protection and loading the truck, which was a
mistake. As they unloaded a few days later, I realized much of the furniture had
been completely unprotected. My bicycle wasn’t boxed. Lamps were tossed in
loose. There was extensive damage to furniture, art, books, and my home: I
watched the crew rip a doorframe off the wall. The crew provided me with a
backdated bill of lading only on delivery.&lt;/p&gt;
&lt;p&gt;I called Flat Rate a couple days later to discuss my concerns. Their
representative apologized and acknowledged that a crew of two was insufficient. I
sent Flat Rate a detailed email explaining their failure to provide promised
services, accompanied by photographs of the move process and the resulting
damage. I asked for a partial refund, and for Flat Rate to pay for the damage
they caused. I received only an out-of-office auto-response.&lt;/p&gt;
&lt;p&gt;On March 21st, I received a generic form letter from Flat Rate’s claims
department. I filled out the claim forms and submitted them on March 25th, and
repeated that I needed more than an insurance claim: Flat Rate obviously failed
to provide the services I’d paid for.&lt;/p&gt;
&lt;p&gt;Then things got weird. I placed call after call to the claims department. They
didn’t answer. I left voicemail after voicemail. On April 28th I reached an
actual human, but their computers were apparently down, and no supervisor was
in the building—the representative couldn’t do anything but take down a note
to call me back. They never did. I left another voicemail on May 5th. Again on
May 23rd. Again on August 4th. August 8th. I began to realize that Flat Rate’s
strategy was simply to never answer the phone.&lt;/p&gt;
&lt;p&gt;On August 14th, a new hire in the claims department called me: she was taking over my case. She dispatched a local furniture company
to inspect the damages and help complete the claim. The inspectors agreed: the
movers had done extensive damage. I provided them with documentation and photos
as well.&lt;/p&gt;
&lt;p&gt;On September 14 I called the furniture company and confirmed that yes, they had
submitted their report to Flat Rate. However, the furniture company had a
confidentiality agreement with Flat Rate, and could not release their report to
me. I called their contact at Flat Rate, who didn’t answer, and left a
voicemail. I called the claims representative’s direct line: her number was no
longer in service. On September 19th I got a call from the claims
representative, who promised she’d have a decision by the end of the day. Flat Rate never talked to me again.&lt;/p&gt;
&lt;p&gt;I called again and again, but got voicemail every time. I tried a letter from
my lawyers, an arbitration request with the &lt;a href="https://trucking.org/arbitration-loss-and-damages"&gt;American Trucking
Association&lt;/a&gt;, complaints with the &lt;a href="https://www.usa.gov/agencies/federal-motor-carrier-safety-administration"&gt;Federal Motor Carrier Safety
Administration&lt;/a&gt;,
the &lt;a href="https://www.dot.ny.gov/index"&gt;New York Department of Transportation&lt;/a&gt;’s
Motor Carrier complaint department, and the &lt;a href="https://www.dfs.ny.gov/"&gt;New York State Department of
Financial Services&lt;/a&gt;; none of this went anywhere.&lt;/p&gt;
&lt;h2&gt;&lt;a href="#finding-their-insurance" id="finding-their-insurance"&gt;Finding Their Insurance&lt;/a&gt;&lt;/h2&gt;
&lt;p&gt;Here’s the thing: movers offer coverage, they have claims departments, and they do many insurance-like things in house. But for complicated reasons, they are not actually insurance companies, and are not regulated as such. Their coverage is backstopped by separate, real insurance companies. I eventually discovered I could bypass Flat Rate and talk to their backing insurance company directly. The Federal Motor Carrier Safety
Administration maintains &lt;a href="https://li-public.fmcsa.dot.gov/LIVIEW/pkg_carrquery.prc_getdetail"&gt;a public database of moving companies&lt;/a&gt;. Putting in their US DOT number
(488466) yields a page with some basic info. At the bottom are links with their
active and pending insurance, and the history of their insurers.&lt;/p&gt;
&lt;p&gt;&lt;img class="attachment pure-img" src="/data/posts/381/fmcsa.png" alt="A screenshot of the FMCSA page, with links at the bottom" title="A screenshot of the FMCSA page, with links at the bottom"&gt;&lt;/p&gt;
&lt;p&gt;&lt;img class="attachment pure-img" src="/data/posts/381/insurance-history.png" alt="A screenshot of their insurer history" title="A screenshot of their insurer history"&gt;&lt;/p&gt;
&lt;p&gt;The insurance history told me their cargo insurer at the time of my move was
Hanover Insurance Company, with policy RHX H706231. The database also had
Hanover’s phone number. I was able to call Hanover and ask about the
policy—it took a few tries, but I got through to a claims adjuster there who
was able to confirm that yes, I could file a claim directly with Hanover. I
sent over a package of documentation, including the photos and the story so
far.&lt;/p&gt;
&lt;p&gt;Hanover Insurance informed me they’d never heard anything about my claim, and
called Flat Rate to check. Over the next few days we discovered that Flat Rate
wasn’t just refusing to answer my calls—they wouldn’t answer calls from their insurance company either. Hanover wasn’t able to obtain the report from the furniture company, but they did agree, on the basis of the photos, to pay a reasonable amount for the damage to cargo. I had a check a few days later.&lt;/p&gt;
&lt;p&gt;I’m still unhappy with Flat Rate: they charged me a good deal of money for services
they failed to deliver, and never paid for damage to the house. I was gearing up for small claims court—but during the tail end of this fiasco I was entering a fresh hell of home renovation, and got distracted. If you’re reading this, Flat Rate, you still owe me.&lt;/p&gt;</content>
    </entry>
    <entry>
        <id>https://aphyr.com/posts/380-comments-on-executive-order-14168</id>
        <title>Comments on Executive Order 14168</title>
        <published>2025-02-21T18:04:55-05:00</published>
        <updated>2025-02-21T18:04:55-05:00</updated>
        <link rel="alternate" href="https://aphyr.com/posts/380-comments-on-executive-order-14168"></link>
        <author>
            <name>Aphyr</name>
            <uri>https://aphyr.com/</uri>
        </author>
        <content type="html">&lt;p&gt;&lt;em&gt;Submitted to the Department of State, which is &lt;a href="https://www.federalregister.gov/documents/2025/02/18/2025-02696/30-day-notice-of-proposed-information-collection-application-for-a-us-passport-for-eligible"&gt;requesting comments&lt;/a&gt; on a proposed change which would align US passport gender markers with &lt;a href="https://www.whitehouse.gov/presidential-actions/2025/01/defending-women-from-gender-ideology-extremism-and-restoring-biological-truth-to-the-federal-government/"&gt;executive order 14168&lt;/a&gt;.&lt;/em&gt;&lt;/p&gt;
&lt;p&gt;Executive order 14168 is biologically incoherent and socially cruel. All passport applicants should be allowed to select whatever gender markers they feel best fit, including M, F, or X.&lt;/p&gt;
&lt;p&gt;In humans, neither sex nor gender is binary at any level. There are several possible arrangements of sex chromosomes: X, XX, XY, XXY, XYY, XXX, tetrasomies, pentasomies, etc. A single person can contain a mosaic of cells with different genetics: some XX, some XYY. Chromosomes may not align with genitalia: people with XY chromosomes may have a vulva and internal testes. People with XY chromosomes and a small penis may be surgically and socially reassigned female at birth—and never told what happened. None of these biological dimensions necessarily align with one’s internal concept of gender, or one’s social presentation.&lt;/p&gt;
&lt;p&gt;The executive order has no idea how biology works. It defines “female” as “a person belonging, at conception, to the sex that produces the large reproductive cell”. Zygotes do not produce reproductive cells at all: under this order none of us have a sex. Oogenesis doesn’t start until over a month into embryo development. Even if people were karyotyping their zygotes immediately after conception so they could tell what “legal” sex they were going to be, they could be wrong: which gametes we produce depends on the formation of the genital ridge.&lt;/p&gt;
&lt;p&gt;All this is to say that if people fill out these forms using this definition of sex, they’re guessing at a question which is both impossible to answer and socially irrelevant. You might be one of the roughly two percent of humans born with an uncommon sexual development and not even know it. Moreover, the proposed change fundamentally asks the wrong question: gender markers on passports are used by border control agents, and are expected to align with how those agents read the passport holder’s gender. A mismatch will create needless intimidation and hardship for travelers.&lt;/p&gt;
&lt;p&gt;Of course most of us will not have our identities challenged under this order. That animus is reserved for trans people, for gender-non-conforming people, for anyone whose genetics, body, dress, voice, or mannerisms don’t quite fit the mold. Those are the people who will suffer under this order. That cruelty should be resisted.&lt;/p&gt;</content>
    </entry>
    <entry>
        <id>https://aphyr.com/posts/379-geoblocking-the-uk-with-debian-nginx</id>
        <title>Geoblocking the UK with Debian &amp; Nginx</title>
        <published>2025-02-20T14:45:55-05:00</published>
        <updated>2025-02-20T14:45:55-05:00</updated>
        <link rel="alternate" href="https://aphyr.com/posts/379-geoblocking-the-uk-with-debian-nginx"></link>
        <author>
            <name>Aphyr</name>
            <uri>https://aphyr.com/</uri>
        </author>
        <content type="html">&lt;p&gt;A few quick notes for other folks who are &lt;a href="https://geoblockthe.uk"&gt;geoblocking the UK&lt;/a&gt;. I just set up a basic geoblock with Nginx on Debian. This is all stuff you can piece together, but the Maxmind and Nginx docs are a little vague about the details, so I figure it’s worth an actual writeup. My Nginx expertise is ~15 years out of date, so this might not be The Best Way to do things. YMMV.&lt;/p&gt;
&lt;p&gt;First, register for a free &lt;a href="https://www.maxmind.com/en/geolite2/signup"&gt;MaxMind account&lt;/a&gt;; you’ll need this to subscribe to their GeoIP database. Then set up a daemon to maintain a copy of the lookup file locally, and Nginx’s GeoIP2 module:&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;&lt;span&gt;&lt;/span&gt;apt install geoipupdate libnginx-mod-http-geoip2
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;Create a license key on the MaxMind site, and download a copy of the config file you’ll need. Drop that in &lt;code&gt;/etc/GeoIP.conf&lt;/code&gt;. It’ll look like:&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;AccountID XXXX
LicenseKey XXXX
EditionIDs GeoLite2-Country
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;The package sets up a cron job automatically, but we should grab an initial copy of the file. This takes a couple minutes, and writes out &lt;code&gt;/var/lib/GeoIP/GeoLite2-Country.mmdb&lt;/code&gt;:&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;&lt;span&gt;&lt;/span&gt;geoipupdate
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;The GeoIP2 module should already be loaded via &lt;code&gt;/etc/nginx/modules-enabled/50-mod-http-geoip2.conf&lt;/code&gt;. Add a new config snippet like &lt;code&gt;/etc/nginx/conf.d/geoblock.conf&lt;/code&gt;. The first part tells Nginx where to find the GeoIP database file, and then extracts the two-letter ISO country code for each request as a variable. The &lt;code&gt;map&lt;/code&gt; part sets up an &lt;code&gt;$osa_geoblocked&lt;/code&gt; variable, which is set to &lt;code&gt;1&lt;/code&gt; for GB, otherwise &lt;code&gt;0&lt;/code&gt;.&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;geoip2 /var/lib/GeoIP/GeoLite2-Country.mmdb {
  $geoip2_data_country_iso_code country iso_code;
}

map $geoip2_data_country_iso_code $osa_geoblocked {
  GB      1;
  default 0;
}
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;Write an HTML file somewhere like &lt;code&gt;/var/www/custom_errors/osa.html&lt;/code&gt;, explaining the block. Then serve that page for HTTP 451 status codes: in &lt;code&gt;/etc/nginx/sites-enabled/whatever&lt;/code&gt;, add:&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;server {
  ...
  # UK OSA error page
  error_page 451 /osa.html;
  location /osa.html {
    internal;
    root /var/www/custom_errors/;
  }

  # When geoblocked, return 451
  location / {
    if ($osa_geoblocked = 1) {
      return 451;
    }
  }
}
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;Test your config with &lt;code&gt;nginx -t&lt;/code&gt;, and then &lt;code&gt;service nginx reload&lt;/code&gt;. You can test how things look from the UK using a VPN service, or something like &lt;a href="https://www.locabrowser.com/"&gt;locabrowser&lt;/a&gt;.&lt;/p&gt;
&lt;p&gt;This is, to be clear, a bad solution. MaxMind’s free database is not particularly precise, and in general IP lookup tables are chasing a moving target. I know for a fact that there are people in non-UK countries (like Ireland!) who have been inadvertently blocked by these lookup tables. Making those people use Tor or a VPN &lt;em&gt;sucks&lt;/em&gt;, but I don’t know what else to do in the current regulatory environment.&lt;/p&gt;</content>
    </entry>
    <entry>
        <id>https://aphyr.com/posts/378-seconds-since-the-epoch</id>
        <title>Seconds Since the Epoch</title>
        <published>2024-12-25T13:46:21-05:00</published>
        <updated>2024-12-25T13:46:21-05:00</updated>
        <link rel="alternate" href="https://aphyr.com/posts/378-seconds-since-the-epoch"></link>
        <author>
            <name>Aphyr</name>
            <uri>https://aphyr.com/</uri>
        </author>
        <content type="html">&lt;p&gt;This is not at all news, but it comes up often enough that I think there should be a concise explanation of the problem. People, myself included, like to say that POSIX time, also known as Unix time, is the &lt;a href="https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/Date"&gt;number&lt;/a&gt; &lt;a href="https://www.gnu.org/software/coreutils/manual/html_node/Seconds-since-the-Epoch.html"&gt;of&lt;/a&gt; &lt;a href="https://man7.org/linux/man-pages/man2/time.2.html"&gt;seconds&lt;/a&gt; &lt;a href="https://pkg.go.dev/time#Unix"&gt;since&lt;/a&gt; &lt;a href="https://dev.mysql.com/doc/refman/8.4/en/datetime.html"&gt;the&lt;/a&gt; &lt;a href="https://ruby-doc.org/core-3.0.0/Time.html"&gt;Unix&lt;/a&gt; &lt;a href="https://docs.datastax.com/en/cql-oss/3.x/cql/cql_reference/timestamp_type_r.html"&gt;epoch&lt;/a&gt;, which was 1970-01-01 at 00:00:00.&lt;/p&gt;
&lt;p&gt;This is not true. Or rather, it isn’t true in the sense most people think. For example, it is presently 2024-12-25 at 18:51:26 UTC. The POSIX time is 1735152686. It has been 1735152713 seconds since the POSIX epoch. The POSIX time number is twenty-seven seconds lower.&lt;/p&gt;
&lt;p&gt;This is because POSIX time is derived &lt;a href="https://nvlpubs.nist.gov/nistpubs/Legacy/FIPS/fipspub151-1.pdf"&gt;in IEEE 1003.1&lt;/a&gt; from &lt;a href="https://en.wikipedia.org/wiki/Coordinated_Universal_Time"&gt;Coordinated Universal Time&lt;/a&gt;. The standard assumes that every day is exactly 86,400 seconds long. Specifically:&lt;/p&gt;
&lt;blockquote&gt;
&lt;p&gt;The &lt;em&gt;time()&lt;/em&gt; function returns the value of time in &lt;b&gt;seconds since the Epoch&lt;/b&gt;.&lt;/p&gt;
&lt;/blockquote&gt;
&lt;p&gt;Which is defined as:&lt;/p&gt;
&lt;blockquote&gt;
&lt;p&gt;&lt;b&gt;seconds since the Epoch.&lt;/b&gt; A value to be interpreted as the number of seconds between a specified time and the Epoch. A Coordinated Universal Time name (specified in terms of seconds (&lt;em&gt;tm_sec&lt;/em&gt;), minutes (&lt;em&gt;tm_min&lt;/em&gt;), hours (&lt;em&gt;tm_hour&lt;/em&gt;), days since January 1 of the year (&lt;em&gt;tm_yday&lt;/em&gt;), and calendar year minus 1900
(&lt;em&gt;tm_year&lt;/em&gt;)) is related to a time represented as &lt;em&gt;seconds since the Epoch&lt;/em&gt; according to the expression below.&lt;/p&gt;
&lt;p&gt;If year &amp;lt; 1970 or the value is negative, the relationship is undefined. If year ≥ 1970 and the value is non-negative, the value is related to a Coordinated Universal Time name according to the expression:&lt;/p&gt;
&lt;p&gt;&lt;em&gt;tm_sec&lt;/em&gt; + &lt;em&gt;tm_min&lt;/em&gt; * 60 + &lt;em&gt;tm_hour&lt;/em&gt; * 3600 + &lt;em&gt;tm_yday&lt;/em&gt; * 86400 +
(&lt;em&gt;tm_year&lt;/em&gt;-70) * 31536000 + ((&lt;em&gt;tm_year&lt;/em&gt; - 69) / 4) * 86400&lt;/p&gt;
&lt;/blockquote&gt;
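&lt;p&gt;You can check the standard’s expression directly. Here’s a minimal Python sketch (mine, not part of the standard) that applies the formula above to the example timestamp, adjusting for the fact that C’s &lt;code&gt;struct tm&lt;/code&gt; counts years from 1900 and uses a zero-based day of year:&lt;/p&gt;

```python
from datetime import datetime, timezone

def posix_seconds(dt):
    """Evaluate the 'seconds since the Epoch' expression from IEEE 1003.1."""
    t = dt.timetuple()
    tm_year = t.tm_year - 1900  # struct tm counts years from 1900
    tm_yday = t.tm_yday - 1     # struct tm's tm_yday is zero-based
    return (t.tm_sec + t.tm_min * 60 + t.tm_hour * 3600
            + tm_yday * 86400 + (tm_year - 70) * 31536000
            + ((tm_year - 69) // 4) * 86400)

dt = datetime(2024, 12, 25, 18, 51, 26, tzinfo=timezone.utc)
print(posix_seconds(dt))    # 1735152686
print(int(dt.timestamp()))  # 1735152686: the system clock agrees
```

&lt;p&gt;Both values match the POSIX time from the example above, and neither is the true number of elapsed seconds: the leap seconds simply aren’t in the expression.&lt;/p&gt;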
&lt;p&gt;The length of the day is not 86,400 seconds, and in fact changes over time. To keep UTC days from drifting too far from solar days, astronomers periodically declare a &lt;a href="https://en.wikipedia.org/wiki/Leap_second"&gt;leap second&lt;/a&gt; in UTC. Consequently, every few years POSIX time jumps backwards, &lt;a href="https://marc.info/?l=linux-kernel&amp;amp;m=134113577921904"&gt;wreaking&lt;/a&gt; &lt;a href="https://www.zdnet.com/article/qantas-suffers-delays-due-to-linux-leap-second-bug/"&gt;utter&lt;/a&gt; &lt;a href="https://blog.cloudflare.com/how-and-why-the-leap-second-affected-cloudflare-dns/"&gt;havoc&lt;/a&gt;. Someday it might jump forward.&lt;/p&gt;
&lt;h2&gt;&lt;a href="#archaeology" id="archaeology"&gt;Archaeology&lt;/a&gt;&lt;/h2&gt;
&lt;p&gt;Appendix B of IEEE 1003 has a fascinating discussion of leap seconds:&lt;/p&gt;
&lt;blockquote&gt;
&lt;p&gt;The concept of leap seconds is added for precision; at the time this standard was published, 14 leap seconds had been added since January 1, 1970. These 14 seconds are ignored to provide an easy and compatible method of computing time differences.&lt;/p&gt;
&lt;/blockquote&gt;
&lt;p&gt;I, too, love to ignore things to make my life easy. The standard authors knew “seconds since the epoch” were not, in fact, seconds since the epoch. And they admit as much:&lt;/p&gt;
&lt;blockquote&gt;
&lt;p&gt;Most systems’ notion of “time” is that of a continuously-increasing value, so this value should increase even during leap seconds. However, not only do most systems not keep track of leap seconds, but most systems are probably not synchronized to any standard time reference. Therefore, it is inappropriate to require that a time represented as seconds since the Epoch precisely represent the number of seconds between the referenced time and the Epoch.&lt;/p&gt;
&lt;p&gt;It is sufficient to require that applications be allowed to treat this time as if it represented the number of seconds between the referenced time and the Epoch. It is the responsibility of the vendor of the system, and the administrator of the system, to ensure that this value represents the number of seconds between the referenced time and the Epoch as closely as necessary for the application being run on that system….&lt;/p&gt;
&lt;/blockquote&gt;
&lt;p&gt;I imagine there was some debate over this point. The appendix punts, saying that vendors and administrators must make time align “as closely as necessary”, and that “this value should increase even during leap seconds”. The latter is achievable, but the former is arguably impossible: the standard requires POSIX clocks be twenty-seven seconds off.&lt;/p&gt;
&lt;blockquote&gt;
&lt;p&gt;Consistent interpretation of seconds since the Epoch can be critical to certain types of distributed applications that rely on such timestamps to synchronize events. The accrual of leap seconds in a time standard is not predictable. The number of leap seconds since the Epoch will likely increase. The standard is
more concerned about the synchronization of time between applications of astronomically short duration and the Working Group expects these concerns to become more critical in the future.&lt;/p&gt;
&lt;/blockquote&gt;
&lt;p&gt;In a sense, the opposite happened. Time synchronization is &lt;em&gt;always&lt;/em&gt; off, so systems generally function (however incorrectly) when times drift a bit. But leap seconds are rare, and the linearity evoked by the phrase “seconds since the epoch” is so deeply baked into our intuition, that software can accrue serious, unnoticed bugs. Until, a few years later, one of those tiny little leap seconds takes down a big chunk of the internet.&lt;/p&gt;
&lt;h2&gt;&lt;a href="#what-to-do-instead" id="what-to-do-instead"&gt;What To Do Instead&lt;/a&gt;&lt;/h2&gt;
&lt;p&gt;If you just need to compute the duration between two events on one computer, use &lt;a href="https://docs.redhat.com/en/documentation/red_hat_enterprise_linux_for_real_time/7/html/reference_guide/sect-posix_clocks#sect-POSIX_clocks"&gt;&lt;code&gt;CLOCK_MONOTONIC&lt;/code&gt;&lt;/a&gt;, or better yet, &lt;code&gt;CLOCK_BOOTTIME&lt;/code&gt;. If you don’t need to exchange timestamps with other systems that assume POSIX time, use &lt;a href="https://www.ipses.com/eng/in-depth-analysis/standard-of-time-definition/"&gt;TAI, GPS, or maybe LORAN&lt;/a&gt;. If you do need rough alignment with other POSIX-timestamp systems, &lt;a href="https://developers.google.com/time/smear"&gt;smear leap seconds&lt;/a&gt; over a longer window of time. Libraries like &lt;a href="https://github.com/qntm/t-a-i"&gt;qntm’s t-a-i&lt;/a&gt; can convert back and forth between POSIX and TAI.&lt;/p&gt;
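&lt;p&gt;In Python, for example, the single-machine duration case looks like this (a sketch; &lt;code&gt;CLOCK_BOOTTIME&lt;/code&gt; is Linux-only):&lt;/p&gt;

```python
import time

# Wall-clock (POSIX) time can be stepped backwards by NTP or a leap
# second, so differences of wall-clock readings can even go negative.
# Monotonic clocks only move forward.
start = time.monotonic()
time.sleep(0.01)  # stand-in for the work being timed
elapsed = time.monotonic() - start
assert elapsed > 0  # guaranteed; not so for time.time() deltas

# On Linux, CLOCK_BOOTTIME also counts time spent suspended:
if hasattr(time, "CLOCK_BOOTTIME"):
    uptime = time.clock_gettime(time.CLOCK_BOOTTIME)
```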
&lt;p&gt;There’s an ongoing effort to &lt;a href="https://www.timeanddate.com/news/astronomy/end-of-leap-seconds-2022"&gt;end leap seconds&lt;/a&gt;, hopefully &lt;a href="https://www.bipm.org/documents/20126/64811223/Resolutions-2022.pdf/281f3160-fc56-3e63-dbf7-77b76500990f"&gt;by 2035&lt;/a&gt;. It’ll require additional work to build conversion tables into everything that relies on the “86,400 seconds per day” assumption, but it should also make it much simpler to ask questions like “how many seconds between these two times”. At least for times after 2035!&lt;/p&gt;</content>
    </entry>
    <entry>
        <id>https://aphyr.com/posts/371-threads-wont-take-you-south-of-market</id>
        <title>Threads Won't Take You South of Market</title>
        <published>2024-12-01T10:01:36-05:00</published>
        <updated>2024-12-01T10:01:36-05:00</updated>
        <link rel="alternate" href="https://aphyr.com/posts/371-threads-wont-take-you-south-of-market"></link>
        <author>
            <name>Aphyr</name>
            <uri>https://aphyr.com/</uri>
        </author>
        <content type="html">&lt;p&gt;In June 2023, when &lt;a href="https://threads.net"&gt;Threads&lt;/a&gt; announced their &lt;a href="https://techcrunch.com/2023/07/05/adam-mosseri-says-metas-threads-app-wont-have-activitypub-support-at-launch/"&gt;plans to federate&lt;/a&gt; with other &lt;a href="https://en.wikipedia.org/wiki/Fediverse"&gt;Fediverse instances&lt;/a&gt;, there was a good deal of &lt;a href="https://fedipact.online/"&gt;debate&lt;/a&gt; around whether smaller instances should allow federation or block it pre-emptively. As one of the admins of &lt;a href="https://woof.group"&gt;woof.group&lt;/a&gt;, I wrote about some of the &lt;a href="https://blog.woof.group/announcements/considering-large-instance-federation"&gt;potential risks and rewards&lt;/a&gt; of federating with Threads. We decided to &lt;a href="https://blog.woof.group/announcements/deferring-threads-federation"&gt;wait and see&lt;/a&gt;.&lt;/p&gt;
&lt;p&gt;In my queer and leather circles, Facebook and Instagram have been generally understood as hostile environments for over a decade. In 2014, their &lt;a href="https://www.eff.org/deeplinks/2014/09/facebooks-real-name-policy-can-cause-real-world-harm-lgbtq-community"&gt;“Real Name” policy&lt;/a&gt; made life particularly difficult for trans people, drag queens, sex workers, and people who, for various reasons, needed to keep their real name disconnected from their queer life. My friends have been repeatedly suspended from both platforms for showing too much skin, or using the peach emoji. Meta’s moderation has been aggressive, opaque, and wildly inconsistent: sometimes full nudity is fine; other times a kiss or swimsuit is beyond the line. In some circles, maintaining a series of backup accounts in advance of one’s ban became de rigueur.&lt;/p&gt;
&lt;p&gt;I’d hoped that federation between Threads and the broader Fediverse might allow a &lt;a href="https://blog.woof.group/mods/the-shape-of-social-space"&gt;more nuanced spectrum&lt;/a&gt; of moderation norms. Threads might opt for a more conservative environment locally, but through federation, allow their users to interact with friends on instances with more liberal norms. Conversely, most of my real-life friends are still on Meta services—I’d love to see their posts and chat with them again. Threads could communicate with Gay Fedi (using the term in the broadest sense), and de-rank or hide content they don’t like on a per-post or per-account basis.&lt;/p&gt;
&lt;p&gt;This world seems technically feasible. Meta reports &lt;a href="https://techcrunch.com/2024/11/03/threads-now-has-275m-monthly-active-users/"&gt;275 million Monthly Active Users (MAUs)&lt;/a&gt;, and over &lt;a href="https://www.statista.com/statistics/1092227/facebook-product-dau/"&gt;three billion&lt;/a&gt; across other Meta services. Fediverse has something like &lt;a href="https://fedidb.org/"&gt;one million MAUs across various instances&lt;/a&gt;. This is not a large jump in processing or storage; nor would it seem to require a large increase in moderation staff. Threads has already committed to doing the requisite engineering, user experience, and legal work to allow federation across a broad range of instances. Meta is swimming in cash.&lt;/p&gt;
&lt;p&gt;All this seems a moot point. A year and a half later, Threads &lt;a href="https://www.theverge.com/24107998/threads-fediverse-mastodon-how-to"&gt;is barely half federated&lt;/a&gt;. It publishes Threads posts to the world, but only if you dig into the settings and check the “Fediverse Sharing” box. Threads users can see replies to their posts, but can’t talk back. Threads users can’t mention others, see mentions from other people, or follow anyone outside Threads. This may work for syndication, but is essentially unusable for conversation.&lt;/p&gt;
&lt;p&gt;Despite the fact that Threads users can’t follow or see mentions from people on other instances, Threads has already &lt;a href="https://www.threads.net/moderated_servers"&gt;opted to block&lt;/a&gt; a slew of instances where gay &amp;amp; leather people congregate. Threads blocks &lt;a href="https://hypno.social"&gt;hypno.social&lt;/a&gt;, &lt;a href="https://rubber.social"&gt;rubber.social&lt;/a&gt;, &lt;a href="https://4bear.com"&gt;4bear.com&lt;/a&gt;, &lt;a href="https://nsfw.lgbt"&gt;nsfw.lgbt&lt;/a&gt;, &lt;a href="https://kinkyelephant.com"&gt;kinkyelephant.com&lt;/a&gt;, &lt;a href="https://kinktroet.social"&gt;kinktroet.social&lt;/a&gt;, &lt;a href="https://barkclub.xyz"&gt;barkclub.xyz&lt;/a&gt;, &lt;a href="https://mastobate.social"&gt;mastobate.social&lt;/a&gt;, and &lt;a href="https://kinky.business"&gt;kinky.business&lt;/a&gt;. They also block the (now-defunct) instances &lt;a href="https://bear.community"&gt;bear.community&lt;/a&gt;, &lt;a href="https://gaybdsm.group"&gt;gaybdsm.group&lt;/a&gt;, and &lt;a href="https://gearheads.social"&gt;gearheads.social&lt;/a&gt;. They block more general queer-friendly instances like &lt;a href="https://bark.lgbt"&gt;bark.lgbt&lt;/a&gt;, &lt;a href="https://super-gay.co"&gt;super-gay.co&lt;/a&gt;, &lt;a href="https://gay.camera"&gt;gay.camera&lt;/a&gt;, and &lt;a href="https://gaygeek.social"&gt;gaygeek.social&lt;/a&gt;. They block sex-positive instances like &lt;a href="https://nsfwphotography.social"&gt;nsfwphotography.social&lt;/a&gt;, &lt;a href="https://nsfw.social"&gt;nsfw.social&lt;/a&gt;, and &lt;a href="https://net4sw.com"&gt;net4sw.com&lt;/a&gt;. All these instances are blocked for having “violated our Community Standards or Terms of Use”.
Others like &lt;a href="https://fisting.social"&gt;fisting.social&lt;/a&gt;, &lt;a href="https://mastodon.hypnoguys.com"&gt;mastodon.hypnoguys.com&lt;/a&gt;, &lt;a href="https://abdl.link"&gt;abdl.link&lt;/a&gt;, &lt;a href="https://qaf.men"&gt;qaf.men&lt;/a&gt;, and &lt;a href="https://social.rubber.family"&gt;social.rubber.family&lt;/a&gt;, are blocked for having “no publicly accessible feed”. I don’t know what this means: mastodon.hypnoguys.com, for instance, has the usual Mastodon &lt;a href="https://mastodon.hypnoguys.com/public/local"&gt;publicly accessible local feed&lt;/a&gt;.&lt;/p&gt;
&lt;p&gt;It’s not like these instances are hotbeds of spam, hate speech, or harassment: woof.group federates heavily with most of the servers I mentioned above, and we rarely have problems with their moderation. Most have reasonable and enforced media policies requiring sensitive-media flags for genitals, heavy play, and so on. Those policies are generally speaking looser than Threads (woof.group, for instance, allows butts!) but there are plenty of accounts and posts on these instances which would be anodyne under Threads’ rules.&lt;/p&gt;
&lt;p&gt;I am shocked that woof.group is &lt;em&gt;not&lt;/em&gt; on Threads’ blocklist yet. We have similar users who post similar things. Our content policies are broadly similar—several of the instances Threads blocks actually adopted woof.group’s specific policy language. I doubt it’s our size: Threads blocks several instances with less than ten MAUs, and woof.group has over seven hundred.&lt;/p&gt;
&lt;p&gt;I’ve been out of the valley for nearly a decade, and I don’t have insight into Meta’s policies or decision-making. I’m sure Threads has their reasons. Whatever they are, Threads, like all of Meta’s services, feels distinctly uncomfortable with sex, and sexual expression is a vibrant aspect of gay culture.&lt;/p&gt;
&lt;p&gt;This is part of why I started woof.group: we deserve spaces moderated with our subculture in mind. But I also hoped that by designing a moderation policy which compromised with normative sensibilities, we might retain connections to a broader set of publics. This particular leather bar need not be an invite-only clubhouse; it can be a part of a walkable neighborhood. For nearly five years we’ve kept that balance, retaining open federation with most all the Fediverse. I get the sense that Threads intends to wall its users off from our world altogether—to make “bad gays” invisible. If Threads were a taxi service, it wouldn’t take you &lt;a href="https://sfleatherdistrict.org/wp-content/uploads/2021/04/Rubin-Valley-of-Kings.pdf"&gt;South of Market&lt;/a&gt;.&lt;/p&gt;</content>
    </entry>
    <entry>
        <id>https://aphyr.com/posts/370-ecobee-settings-for-heat-pumps-with-resistive-aux-heat</id>
        <title>Ecobee Settings for Heat Pumps with Resistive Aux Heat</title>
        <published>2024-02-28T23:41:38-05:00</published>
        <updated>2024-02-28T23:41:38-05:00</updated>
        <link rel="alternate" href="https://aphyr.com/posts/370-ecobee-settings-for-heat-pumps-with-resistive-aux-heat"></link>
        <author>
            <name>Aphyr</name>
            <uri>https://aphyr.com/</uri>
        </author>
        <content type="html">&lt;p&gt;I’m in the process of replacing a old radiator system with a centrally-ducted, air-source heat pump system with electric resistive backup heat. I’ve found that the default ecobee algorithm seems to behave surprisingly poorly for this system, and wanted to write up some of the settings that I’ve found yield better behavior.&lt;/p&gt;
&lt;p&gt;A disclaimer. I’m not an HVAC professional. I have two decades in software operations, a background in physics, and far too much experience inferring system dynamics from timeseries graphs. This advice may void your warranty, burn your house down, etc.; everything you do is at your own risk.&lt;/p&gt;
&lt;h2&gt;&lt;a href="#the-system" id="the-system"&gt;The System&lt;/a&gt;&lt;/h2&gt;
&lt;p&gt;First, a bit about the system in question. You can skip this section if you know about heat pumps, short cycling, staging, etc.&lt;/p&gt;
&lt;p&gt;There are two main subsystems: a heat pump and an air handler. The heat pump sits outside: it has a fan which moves outside air over a heat exchanger, and a compressor, which compresses a working fluid. The working fluid is connected in a loop to the air handler, where it runs through another heat exchanger to heat or cool the inside air. The air handler also has a blower fan which circulates air through the whole house. If the heat pump can’t keep up with demand, the air handler also has a pair of resistive electric heating coils, called &lt;em&gt;aux heat&lt;/em&gt;, which can supplement or take over from the heat pumps.&lt;/p&gt;
&lt;p&gt;A few important things to know about heat pumps. First, electric resistive heaters have a &lt;em&gt;Coefficient of Performance&lt;/em&gt; (CoP) of essentially 1: they take 1 joule of electricity and turn it into 1 joule of heat in the air. My heat pumps have a typical heating CoP of about 2-4, depending on temperature and load. They take 1 joule of electricity and suck 2 to 4 joules of heat from the outside air into the inside. This means they cost 2-4 times less (in electric opex, at least) than a standard resistive electric heating system.&lt;/p&gt;
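&lt;p&gt;To put rough numbers on that (illustrative figures of my own, not from any spec sheet):&lt;/p&gt;

```python
# Electricity needed to deliver 10 kWh of heat into the house.
heat_delivered_kwh = 10.0
resistive_kwh = heat_delivered_kwh / 1.0  # CoP 1: every joule comes from the wire
heat_pump_kwh = heat_delivered_kwh / 3.0  # assuming a mid-range CoP of 3
print(resistive_kwh / heat_pump_kwh)      # 3.0: resistive heat costs 3x as much here
```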
&lt;p&gt;Second, heat pumps, like A/C systems, shouldn’t start and stop too frequently. Starting up causes large transient electrical and mechanical stresses. Ideally they should run at a low speed for several hours. Running at full blast, shutting off, then turning on again ten minutes later is called “short cycling”.&lt;/p&gt;
&lt;p&gt;Third, the heat pump’s fan, heat pump’s compressor, and the air handler’s fan are all variable-speed: they can run very slow (quiet, efficient), very fast (loud, more powerful), or at any speed in between. This helps reduce short-cycling, as well as improving efficiency and reducing noise. However, directly setting compressor and fan speed requires a special “communicating” thermostat made by the same manufacturer, which speaks a proprietary wire protocol. My manufacturer’s communicating thermostats are very expensive and have a reputation for buggy hardware and software, so I opted to get an &lt;a href="https://www.ecobee.com/en-us/smart-thermostats/smart-wifi-thermostat/"&gt;ecobee 3 lite&lt;/a&gt;. Like essentially every other thermostat on the planet, the ecobee uses ~8 wires with simple binary signals, like “please give me heat” and “please turn on the fan”. It can’t ask for a specific &lt;em&gt;amount&lt;/em&gt; of heat.&lt;/p&gt;
&lt;p&gt;However, all is not lost. The standard thermostat protocol has a notion of a “two-stage” system—if the Y1 wire is hot, it’s asking for “some heat”, and if Y2 is also hot, it’s asking for “more heat”. My variable-speed heat pump emulates a two-stage system using a hysteresis mechanism. In stage 1, the heat pump offers some nominal low degree of heat. When the thermostat calls for stage 2, it kicks up the air handler blower a notch, and after 20 minutes, it slowly ramps up the heat pump compressor as well. I assume there’s a ramp-down for going back to stage 1. They say this provides “true variable-capacity operation”. You can imagine that the most efficient steady state is where the thermostat toggles rapidly between Y1 and Y2, causing the system to hang out at exactly the right variable speeds for current conditions—but I assume ecobee has some kind of frequency limiter to avoid damaging systems that actually have two separate stages with distinct startup/shutdown costs.&lt;/p&gt;
&lt;p&gt;The air handler’s aux heat is also staged: if the W1 wire is hot, I think (based on staring at the wiring diagram and air handler itself) it just energizes one of two coils. If W2 is also hot, it energizes both. I think this is good: we want to use as much of the heat pump heat as possible, and if we can get away with juuuust a little aux heat, instead of going full blast, that’ll save energy.&lt;/p&gt;
&lt;p&gt;In short: aux heat is 2-4x more expensive than heat pump heat; we want to use as little aux as possible. Short-cycling is bad: we want long cycle times. For maximum efficiency, we want both the heat pump and aux heat to be able to toggle between stage 1 and 2 depending on demand.&lt;/p&gt;
&lt;h2&gt;&lt;a href="#automatic-problems" id="automatic-problems"&gt;Automatic Problems&lt;/a&gt;&lt;/h2&gt;
&lt;p&gt;I initially left the ecobee at its automatic default settings for a few weeks; it’s supposed to learn the house dynamics and adapt. I noticed several problems. Presumably this behavior depends on weather, building thermal properties, HVAC dynamics, and however ecobee’s tuned their algorithm last week, so YMMV: check your system and see how it looks.&lt;/p&gt;
&lt;p&gt;It’s kind of buried, but ecobee offers a really nice time-series visualization of thermostat behavior on their web site. There’s also a Home Assistant integration that pulls in data from their API. It’s a pain in the ass to set up (ecobee, there’s no need for this to be so user-hostile), but it does work.&lt;/p&gt;
&lt;p&gt;Over the next few weeks I stared obsessively at time-series plots from both ecobee and Home Assistant, and mucked around with ecobee’s settings. Most of what I’ll describe below is configurable in the settings menu on the thermostat: look for “settings”, “installation settings”, “thresholds”.&lt;/p&gt;
&lt;h2&gt;&lt;a href="#reducing-aux-heat" id="reducing-aux-heat"&gt;Reducing Aux Heat&lt;/a&gt;&lt;/h2&gt;
&lt;p&gt;First, the automatics kicked on aux heat a &lt;em&gt;lot&lt;/em&gt;. Even in situations where the heat pump would have been perfectly capable of getting up to temp, ecobee would burn aux heat to reach the target temperature (&lt;em&gt;set point&lt;/em&gt;) faster.&lt;/p&gt;
&lt;p&gt;Part of the problem was that ecobee ships (I assume for safety reasons) with ludicrously high cut-off thresholds for heat pumps. Mine had “compressor min outdoor temperature” of something like 35 degrees, so the heat pump wouldn’t run for most of the winter. The actual minimum temperature of my model is -4; cold-climate heat pumps run down to -20. I lowered mine to -5; the manual says there’s a physical thermostat interlock on the heat pump itself, and I trust that more than the ecobee weather feed anyway.&lt;/p&gt;
&lt;p&gt;Second: ecobee seems to prioritize speed over progress: if it’s not getting to the set point fast enough, it’ll burn aux heat to get there sooner. I don’t want this: I’m perfectly happy putting on a jacket. After a bit I worked out that the heat pumps alone can cover the house load down to ~20 degrees or so, and raised “aux heat max outdoor temperature” to 25. If it’s any warmer than that, the system won’t use aux heat.&lt;/p&gt;
&lt;h2&gt;&lt;a href="#reverse-staging" id="reverse-staging"&gt;Reverse Staging&lt;/a&gt;&lt;/h2&gt;
&lt;p&gt;A second weird behavior: once the ecobee called for stage 2, either from the heat pump or aux, it would run in stage 2 until it hit the set point, then shut off the system entirely. Running aux stage 2 costs more energy. Running the heat pump in stage 2 shortens the cycle time: remember, the goal is a low, long running time.&lt;/p&gt;
&lt;p&gt;&lt;img class="attachment pure-img" src="/data/posts/370/no-reverse-staging.png" alt="A time-series plot showing that once stage 2 engages, it runs until shutting off, causing frequent cycling" title="A time-series plot showing that once stage 2 engages, it runs until shutting off, causing frequent cycling"&gt;&lt;/p&gt;
&lt;p&gt;The setting I used to fix this is called “reverse staging”. Ecobee’s &lt;a href="https://support.ecobee.com/s/articles/Threshold-settings-for-ecobee-thermostats"&gt;documentation&lt;/a&gt; says:&lt;/p&gt;
&lt;blockquote&gt;
&lt;p&gt;Compressor Reverse Staging: Enables the second stage of the compressor near the temperature setpoint.&lt;/p&gt;
&lt;/blockquote&gt;
&lt;p&gt;As far as I can tell this documentation is completely wrong. From watching the graphs, this setting seems to allow the staging state machine to move from stage 2 back to stage 1, rather than forcing it to run in stage 2 until shutting off entirely. It’ll go back up to stage 2 if it needs to, and back down again.&lt;/p&gt;
&lt;p&gt;&lt;img class="attachment pure-img" src="/data/posts/370/reverse-staging.png" alt="With reverse staging, it'll jump up to stage 2, then drop back down to stage 1." title="With reverse staging, it'll jump up to stage 2, then drop back down to stage 1."&gt;&lt;/p&gt;
&lt;h2&gt;&lt;a href="#manual-staging" id="manual-staging"&gt;Manual Staging&lt;/a&gt;&lt;/h2&gt;
&lt;p&gt;I couldn’t seem to get ecobee’s automatic staging to drop back to stage 1 heat reliably, or avoid kicking on aux heat when stage 2 heat pump heat would have done fine. I eventually gave up and turned off automatic staging altogether. I went with the delta temperature settings. If the temperature delta between the set point and indoor air is more than 1 degree, it turns on heat pump stage 1. More than 2 degrees, stage 2. More than 4 degrees, aux 1. More than 5 degrees, aux 2. The goal here is to use only as much aux heat as absolutely necessary to supplement the heat pump. I also have aux heat configured to run concurrently with the heat pump: there’s a regime where the heat pump provides useful heat, but not quite enough, and my intuition is that &lt;em&gt;some&lt;/em&gt; heat pump heat is cheaper than all aux.&lt;/p&gt;
&lt;p&gt;I initially tried the default 0.5 degree delta before engaging the heat pump’s first stage. It turns out that for some temperature regimes this creates rapid cycling: that first-phase heat is enough to heat the house rapidly to the set point, and then there’s nothing to do but shut the system off. The house cools, and the system kicks on again, several times per hour. I raised the delta to 1 degree, which significantly extended the cycle time.&lt;/p&gt;
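&lt;p&gt;In code, the staging rule I ended up with looks something like this sketch (Python; the names and the function itself are my own invention for illustration–this is just the threshold logic as I understand it, not anything ecobee actually runs):&lt;/p&gt;

```python
# Hypothetical reconstruction of the delta-based staging described above.
# Thresholds are the ones from my settings; nothing here is ecobee's code.

def stages_for_delta(delta_f, aux_with_compressor=True):
    """Given set point minus indoor temperature (degrees F), return the
    set of stages that should be running."""
    stages = set()
    if delta_f > 1:
        stages.add("heat_pump_1")
    if delta_f > 2:
        stages.add("heat_pump_2")
    if delta_f > 4:
        stages.add("aux_1")
    if delta_f > 5:
        stages.add("aux_2")
    # Aux runs concurrently with the heat pump: some heat-pump heat is
    # cheaper than all aux. Disabling that drops the compressor stages
    # whenever aux is active.
    if not aux_with_compressor and ("aux_1" in stages or "aux_2" in stages):
        stages -= {"heat_pump_1", "heat_pump_2"}
    return stages
```

&lt;p&gt;Note the 1-degree floor before stage 1 engages at all, for the cycle-time reasons above.&lt;/p&gt;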
&lt;h2&gt;&lt;a href="#large-setback-with-preheating" id="large-setback-with-preheating"&gt;Large Setback with Preheating&lt;/a&gt;&lt;/h2&gt;
&lt;p&gt;A &lt;em&gt;setback&lt;/em&gt; is when you lower your thermostat, e.g. while away from home or sleeping. There’s some folk wisdom that heat pumps should run at a constant temperature all the time, rather than have a large setback. As far as I can tell, this is because a properly-sized heat pump system (unlike a gas furnace) doesn’t deliver a ton of excess heat, so it can’t catch up quickly when asked to return to a higher temperature. To compensate, the system might dip into aux heat, and that’s super expensive.&lt;/p&gt;
&lt;p&gt;I’m in the US Midwest, where winter temperatures are usually around 15-40 F. I drop from 68 to 60 overnight, and the house can generally coast all night without having to run any HVAC at all. In theory the ecobee should be able to figure out the time required to come back to 68 and start the heat pump early in the morning, but in practice I found it would wait too long, and then the large difference between actual and set temp would trigger aux heat. To avoid this, I added a custom activity in Ecobee’s web interface (I call mine “preheat”), with a temperature of 64. I have my schedule set up with an hour of preheat in the morning, before going to the normal 68. This means there’s less of a delta-T, and the system can heat up entirely using the heat pump.&lt;/p&gt;
&lt;p&gt;&lt;img class="attachment pure-img" src="/data/posts/370/setback.png" alt="A time series graph showing temperature falling smoothly overnight as the HVAC is disabled, and then rising during the preheat phase in the morning." title="A time series graph showing temperature falling smoothly overnight as the HVAC is disabled, and then rising during the preheat phase in the morning."&gt;&lt;/p&gt;</content>
    </entry>
    <entry>
        <id>https://aphyr.com/posts/369-classnotfoundexception-java-util-sequencedcollection</id>
        <title>ClassNotFoundException: java.util.SequencedCollection</title>
        <published>2024-02-20T18:03:42-05:00</published>
        <updated>2024-02-20T18:03:42-05:00</updated>
        <link rel="alternate" href="https://aphyr.com/posts/369-classnotfoundexception-java-util-sequencedcollection"></link>
        <author>
            <name>Aphyr</name>
            <uri>https://aphyr.com/</uri>
        </author>
        <content type="html">&lt;p&gt;Recently I’ve had users of my libraries start reporting mysterious errors due to a missing reference to &lt;code&gt;SequencedCollection&lt;/code&gt;, a Java interface added in JDK 21:&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;Execution error (ClassNotFoundException) at
jdk.internal.loader.BuiltinClassLoader/loadClass (BuiltinClassLoader.java:641).
java.util.SequencedCollection
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;Specifically, projects using &lt;a href="https://github.com/jepsen-io/jepsen/issues/585"&gt;Jepsen 0.3.5&lt;/a&gt; started throwing this error due to Clojure’s built-in &lt;code&gt;rrb_vector.clj&lt;/code&gt;, which is particularly vexing given that the class doesn’t reference &lt;code&gt;SequencedCollection&lt;/code&gt; at all.&lt;/p&gt;
&lt;p&gt;It turns out that the Clojure compiler, when run on JDK 21 or later, will automatically insert references to this class when compiling certain expressions–likely because it now appears in the supertypes of other classes. Jepsen had &lt;code&gt;:javac-options [&amp;quot;-source&amp;quot; &amp;quot;11&amp;quot; &amp;quot;-target&amp;quot; &amp;quot;11&amp;quot;]&lt;/code&gt; in Jepsen’s &lt;code&gt;project.clj&lt;/code&gt; already, but it still emitted references to &lt;code&gt;SequencedCollection&lt;/code&gt; because the reference is inserted by the Clojure compiler, not &lt;code&gt;javac&lt;/code&gt;. Similarly, adding &lt;code&gt;[&amp;quot;--release&amp;quot; &amp;quot;11&amp;quot;]&lt;/code&gt; didn’t work.&lt;/p&gt;
&lt;p&gt;Long story short: as far as I can tell the only workaround is to downgrade to Java 17 (or anything prior to 21) when building Jepsen as a library. That’s not super hard with &lt;code&gt;update-alternatives&lt;/code&gt;, but I still imagine I’ll be messing this up until Clojure’s compiler can get a patch.&lt;/p&gt;</content>
    </entry>
    <entry>
        <id>https://aphyr.com/posts/368-how-to-replace-your-cpap-in-only-666-days</id>
        <title>How to Replace Your CPAP In Only 666 Days</title>
        <published>2024-02-03T19:38:53-05:00</published>
        <updated>2024-02-03T19:38:53-05:00</updated>
        <link rel="alternate" href="https://aphyr.com/posts/368-how-to-replace-your-cpap-in-only-666-days"></link>
        <author>
            <name>Aphyr</name>
            <uri>https://aphyr.com/</uri>
        </author>
        <content type="html">&lt;p&gt;&lt;em&gt;This story is not practical advice. For me, it’s closing the book on an almost two-year saga. For you, I hope it’s an enjoyable bit of bureaucratic schadenfreude. For Anthem, I hope it’s the subject of a series of painful but transformative meetings. This is not an isolated event. I’ve had dozens of struggles with Anthem customer support, and they all go like this.&lt;/em&gt;&lt;/p&gt;
&lt;p&gt;&lt;em&gt;If you’re looking for practical advice: it’s this. Be polite. Document everything. Keep a log. Follow the claims process. Check the laws regarding insurance claims in your state. If you pass the legally-mandated deadline for your claim, call customer service. Do not allow them to waste a year of your life, or force you to resubmit your claim from scratch. Initiate a complaint with your state regulators, and escalate directly to &lt;a href="mailto:[email protected]"&gt;Gail Boudreaux’s team&lt;/a&gt;–or whoever Anthem’s current CEO is.&lt;/em&gt;&lt;/p&gt;
&lt;p&gt;To start, experience an equipment failure.&lt;/p&gt;
&lt;p&gt;Use your CPAP daily for six years. Wake up on day zero with it making a terrible sound. Discover that the pump assembly is failing. Inquire with Anthem Ohio, your health insurer, about how to have it repaired. Allow them to refer you to a list of local durable medical equipment providers. Start calling down the list. Discover half the list are companies like hair salons. Eventually reach a company in your metro which services CPAPs. Discover they will not repair broken equipment unless a doctor tells them to.&lt;/p&gt;
&lt;p&gt;Leave a message with your primary care physician. Call the original sleep center that provided your CPAP. Discover they can’t help, since you’re no longer in the same state. Return to your primary, who can’t help either, because he had nothing to do with your prescription. Put the sleep center and your primary in touch, and ask them to talk.&lt;/p&gt;
&lt;p&gt;On day six, call your primary to check in. He’s received a copy of your sleep records, and has forwarded them to a local sleep center you haven’t heard of. They, in turn, will talk to Anthem for you.&lt;/p&gt;
&lt;p&gt;On day 34, receive an approval letter labeled “confirmation of medical necessity” from Anthem, directed towards the durable medical equipment company. Call that company and confirm you’re waitlisted for a new CPAP. They are not repairable. Begin using your partner’s old CPAP, which is not the right class of device, but at least it helps.&lt;/p&gt;
&lt;p&gt;Over the next 233 days, call that medical equipment company regularly. Every time, inquire whether there’s been any progress, and hear “we’re still out of stock”. Ask them what the manufacturer backlog might be, how many people are ahead of you in line, how many CPAPs they &lt;em&gt;do&lt;/em&gt; receive per month, or whether anyone has ever received an actual device from them. They won’t answer any questions. Realize they are never going to help you.&lt;/p&gt;
&lt;p&gt;On day 267, realize there is no manufacturer delay. The exact machine you need is in stock on CPAP.com. Check to make sure there’s a claims process for getting reimbursed by Anthem. Pay over three thousand dollars for it. When it arrives, enjoy being able to breathe again.&lt;/p&gt;
&lt;p&gt;On day 282, follow CPAP.com’s documentation to file a claim with Anthem online. Include your prescription, receipt, shipping information, and the confirmation of medical necessity Anthem sent you.&lt;/p&gt;
&lt;p&gt;On day 309, open the mail to discover a mysterious letter from Anthem. They’ve received your appeal. You do not recall appealing anything. There is no information about what might have been appealed, but something will happen within 30-60 days. There is nothing about your claim.&lt;/p&gt;
&lt;p&gt;On day 418, emerge from a haze of lead, asbestos, leaks, and a host of other home-related nightmares; remember Anthem still hasn’t said anything about your claim. Discover your claim no longer appears on Anthem’s web site. Call Anthem customer service. They have no record of your claim either. Ask about the appeal letter you received. Listen, gobsmacked, as they explain that they decided your claim was in fact an appeal, and transferred it immediately to the appeals department. The appeals department examined the appeal and looked for the claim it was appealing. Finding none, they decided the appeal was moot, and rejected it. At no point did anyone inform you of this. Explain to Anthem’s agent that you filed a claim online, not an appeal. At their instruction, resign yourself to filing the entire claim again, this time using a form via physical mail. Include a detailed letter explaining the above.&lt;/p&gt;
&lt;p&gt;On day 499, retreat from the battle against home entropy to call Anthem again. Experience a sense of growing dread as the customer service agent is completely unable to locate either of your claims. After a prolonged conversation, she finds it using a different tool. There is no record of the claim from day 418. There was a claim submitted on day 282. Because the claim does not appear in her system, there is no claim. Experience the cognitive equivalent of the Poltergeist hallway shot as the agent tells you “Our members are not eligible for charges for claim submission”.&lt;/p&gt;
&lt;p&gt;Hear the sentence “There is a claim”. Hear the sentence “There is no claim”. Write these down in the detailed log you’ve been keeping of this unfurling Kafkaesque debacle. Ask again if there is anyone else who can help. There is no manager you can speak to. There is no tier II support. “I’m the only one you can talk to,” she says. Write that down.&lt;/p&gt;
&lt;p&gt;Call CPAP.com, which has a help line staffed by caring humans. Explain that contrary to their documentation, Anthem now says members cannot file claims for equipment directly. Ask if they are the provider. Discover the provider for the claim is probably your primary care physician, who has no idea this is happening. Leave a message with him anyway. Leave a plaintive message with your original sleep center for good measure.&lt;/p&gt;
&lt;p&gt;On day 502, call your sleep center again. They don’t submit claims to insurance, but they confirm that some people &lt;em&gt;do&lt;/em&gt; successfully submit claims to Anthem using the process you’ve been trying. They confirm that Anthem is, in fact, hot garbage. Call your primary, send them everything you have, and ask if they can file a claim for you.&lt;/p&gt;
&lt;p&gt;On day 541, receive a letter from Anthem, responding to your inquiry. You weren’t aware you filed one.&lt;/p&gt;
&lt;blockquote&gt;
&lt;p&gt;Please be informed that we have received your concern. Upon review we have noticed that there is no claim billed for the date of service mentioned in the submitted documents, Please provide us with a valid claim. If not submitted,provide us with a valid claim iamge to process your claim further.&lt;/p&gt;
&lt;/blockquote&gt;
&lt;p&gt;Stare at the letter, typos and all. Contemplate your insignificance in the face of the vast and uncaring universe that is Anthem.&lt;/p&gt;
&lt;p&gt;On day 559, steel your resolve and call Anthem again. Wait as this representative, too, digs for evidence of a claim. Listen with delight as she finds your documents from day 282. Confirm that yes, a claim definitely exists. Have her repeat that so you can write it down. Confirm that the previous agent was lying: members can submit claims. At her instruction, fill out the claim form a third time. Write a detailed letter, this time with a Document Control Number (DCN). Submit the entire package via registered mail. Wait for USPS to confirm delivery eight days later.&lt;/p&gt;
&lt;p&gt;On day 588, having received no response, call Anthem again. Explain yourself. You’re getting good at this. Let the agent find a reference number for an appeal, but not the claim. Incant the magic DCN, which unlocks your original claim.  “I was able to confirm that this was a claim submitted form for a member,” he says. He sees your claim form, your receipts, your confirmation of medical necessity. However: “We still don’t have the claim”.&lt;/p&gt;
&lt;p&gt;Wait for him to try system after system. Eventually he confirms what you heard on day 418: the claims department transferred your claims to appeals. “Actually this is not an appeal, but it was denied as an appeal.” Agree as he decides to submit your claim manually again, with the help of his supervisor. Write down the call ref number: he promises you’ll receive an email confirmation, and an Explanation of Benefits in 30-40 business days.&lt;/p&gt;
&lt;p&gt;“I can assure you this is the last time you are going to call us regarding this.”&lt;/p&gt;
&lt;p&gt;While waiting for this process, recall insurance is a regulated industry. Check the Ohio Revised Code. Realize that section 3901.381 establishes deadlines for health insurers to respond to claims. They should have paid or denied each of your claims within 30 days–45 if supporting documentation was required. Leave a message with the Ohio Department of Insurance’s Market Conduct Division. File an insurance complaint with ODI as well.&lt;/p&gt;
&lt;p&gt;Grimly wait as no confirmation email arrives.&lt;/p&gt;
&lt;p&gt;On day 602, open an email from Anthem. They are “able to put the claim in the system and currenty on processed [sic] to be applied”. They’re asking for more time. Realize that Anthem is well past the 30-day deadline under the Ohio Revised Code for all three iterations of your claim.&lt;/p&gt;
&lt;p&gt;On day 607, call Anthem again. The representative explains that the claim will be received and processed as of your benefits. She asks you to allow 30-45 days from today. Quote section 3901.381 to her. She promises to expedite the request; it should be addressed within 72 business hours. Like previous agents, she promises to call you back. Nod, knowing she won’t.&lt;/p&gt;
&lt;p&gt;On day 610, email the Ohio Department of Insurance to explain that Anthem has found entirely new ways to avoid paying their claims on time. It’s been 72 hours without a callback; call Anthem again. She says “You submitted a claim and it was received” on day 282. She says the claim was expedited. Ask about the status of that expedited resolution. “Because on your plan we still haven’t received any claims,” she explains. Wonder if you’re having a stroke.&lt;/p&gt;
&lt;p&gt;Explain that it has been 328 days since you submitted your claim, and ask what is going on. She says that since the first page of your mailed claim was a letter, that might have caused it to be processed as an appeal. Remind yourself Anthem told you to enclose that letter. Wait as she attempts to refer you to the subrogation department, until eventually she gives up: the subrogation department doesn’t want to help.&lt;/p&gt;
&lt;p&gt;Call the subrogation department yourself. Allow Anthem’s representative to induce in you a period of brief aphasia. She wants to call a billing provider. Try to explain there is none: you purchased the machine yourself. She wants to refer you to collections. Wonder why on earth Anthem would want money from &lt;em&gt;you&lt;/em&gt;. Write down “I literally can’t understand what she thinks is going on” in your log. Someone named Adrian will call you by tomorrow.&lt;/p&gt;
&lt;p&gt;Contemplate alternative maneuvers. Go on a deep Google dive, searching for increasingly obscure phrases gleaned from Anthem’s bureaucracy. Trawl through internal training PDFs for Anthem’s ethics and compliance procedures. Call their compliance hotline: maybe someone cares about the law. It’s a third-party call center for Elevance Health. Fail to realize this is another name for Anthem. Begin drawing a map of Anthem’s corporate structure.&lt;/p&gt;
&lt;p&gt;From a combination of publicly-available internal slide decks, LinkedIn, and obscure HR databases, discover the name, email, and phone number of Anthem’s Chief Compliance Officer. Call her, but get derailed by an internal directory that requires a 10-digit extension. Try the usual tricks with automated phone systems. No dice.&lt;/p&gt;
&lt;p&gt;Receive a call from an Anthem agent. Ask her what happened to “72 hours”. She says there’s been no response from the adjustments team. She doesn’t know when a response will come. There’s no one available to talk to. Agree to speak to another representative tomorrow. It doesn’t matter: they’ll never call you.&lt;/p&gt;
&lt;p&gt;Do more digging. Guess the CEO’s email from what you can glean of Anthem’s account naming scheme. Write her an email with a short executive summary and a detailed account of the endlessly-unfolding Boschian hellscape in which her company has entrapped you. A few hours later, receive an acknowledgement from an executive concierge at Elevance (Anthem). It’s polite, formal, and syntactically coherent. She promises to look into things. Smile. Maybe this will work.&lt;/p&gt;
&lt;p&gt;On day 617, receive a call from the executive concierge. 355 days after submission, she’s identified a problem with your claim. CPAP.com provided you with an invoice with a single line item (the CPAP) and two associated billing codes (a CPAP and humidifier). Explain that they are integrated components of a single machine. She understands, but insists you need a receipt with multiple line items for them anyway. Anthem has called CPAP.com, but they can’t discuss an invoice unless you call them. Explain you’ll call them right now.&lt;/p&gt;
&lt;p&gt;Call CPAP.com. Their customer support continues to be excellent. Confirm that it is literally impossible to separate the CPAP and humidifier, or to produce an invoice with two line items for a single item. Nod as they ask what the hell Anthem is doing. Recall that this is the exact same machine Anthem covered for you eight years ago. Start a joint call with the CPAP.com representative and Anthem’s concierge. Explain the situation to her voicemail.&lt;/p&gt;
&lt;p&gt;On day 623, receive a letter from ODI. Anthem has told ODI this was a problem with the billing codes, and ODI does not intervene in billing code issues. They have, however, initiated a secretive second investigation. There is no way to contact the second investigator.&lt;/p&gt;
&lt;p&gt;Write a detailed email to the concierge and ODI explaining that it took over three hundred days for Anthem to inform you of this purported billing code issue. Explain again that it is a single device. Emphasize that Anthem has been handling claims for this device for roughly a decade.&lt;/p&gt;
&lt;p&gt;Wait. On day 636, receive a letter from Anthem’s appeals department. They’ve received your request for an appeal. You never filed one. They want your doctor or facility to provide additional information to Carelon Medical Benefits Management. You have never heard of Carelon. There is no explanation of how to reach Carelon, or what information they might require. The letter concludes: “There is currently no authorization on file for the services rendered.” You need to seek authorization from a department called “Utilization Management”.&lt;/p&gt;
&lt;p&gt;Call the executive concierge again. Leave a voicemail asking what on earth is going on.&lt;/p&gt;
&lt;p&gt;On day 637, receive an email: she’s looking into it.&lt;/p&gt;
&lt;p&gt;On day 644, Anthem calls you. It’s a new agent who is immensely polite. Someone you’ve never heard of was asked to work on another project, so she’s taking over your case. She has no updates yet, but promises to keep in touch.&lt;/p&gt;
&lt;p&gt;She does so. On day 653, she informs you Anthem will pay your claim in full. On day 659, she provides a check number. On day 666, the check arrives.&lt;/p&gt;
&lt;p&gt;Deposit the check. Write a thank you email to the ODI and Anthem’s concierge. Write this, too, down in your log.&lt;/p&gt;</content>
    </entry>
    <entry>
        <id>https://aphyr.com/posts/367-why-is-jepsen-written-in-clojure</id>
        <title>Why is Jepsen Written in Clojure?</title>
        <published>2023-12-05T09:49:05-05:00</published>
        <updated>2023-12-05T09:49:05-05:00</updated>
        <link rel="alternate" href="https://aphyr.com/posts/367-why-is-jepsen-written-in-clojure"></link>
        <author>
            <name>Aphyr</name>
            <uri>https://aphyr.com/</uri>
        </author>
        <content type="html">&lt;p&gt;People keep asking why &lt;a href="https://jepsen.io"&gt;Jepsen&lt;/a&gt; is written in &lt;a href="https://clojure.org/"&gt;Clojure&lt;/a&gt;, so I figure it’s worth having a referencable answer. I’ve programmed in something like twenty languages. Why choose a Weird Lisp?&lt;/p&gt;
&lt;p&gt;Jepsen is built for testing concurrent systems–mostly databases. Because it tests concurrent systems, the language itself needs good support for concurrency. Clojure’s immutable, persistent data structures make it easier to write correct concurrent programs, and the language and runtime have excellent concurrency support: real threads, promises, futures, atoms, locks, queues, cyclic barriers, all of java.util.concurrent, etc. I also considered languages (like Haskell) with more rigorous control over side effects, but decided that Clojure’s less-dogmatic approach was preferable.&lt;/p&gt;
&lt;p&gt;Because Jepsen tests databases, it needs broad client support. Almost every database has a JVM client, typically written in Java, and Clojure has decent Java interop.&lt;/p&gt;
&lt;p&gt;Because testing is experimental work, I needed a language which was concise, adaptable, and well-suited to prototyping. Clojure is terse, and its syntactic flexibility–in particular, its macro system–works well for that. The threading macros make chained transformations readable, and macros enable re-usable error handling and easy control of resource scopes. The Clojure REPL is really handy for exploring the data a test run produces.&lt;/p&gt;
&lt;p&gt;Tests involve representing, transforming, and inspecting complex, nested data structures. Clojure’s data structures and standard library functions are possibly the best I’ve ever seen. I also print a lot of structures to the console and files: Clojure’s data syntax (EDN) is fantastic for this.&lt;/p&gt;
&lt;p&gt;Because tests involve manipulating a decent, but not huge, chunk of data, I needed a language with “good enough” performance. Clojure’s certainly not the fastest language out there, but idiomatic Clojure is usually within an order of magnitude or two of Java, and I can shave off the difference where critical. The JVM has excellent profiling tools, and these work well with Clojure.&lt;/p&gt;
&lt;p&gt;Jepsen’s (gosh) about a decade old now: I wanted a language with a mature core and emphasis on stability. Clojure is remarkably stable, both in terms of JVM target and the language itself. Libraries don’t “rot” anywhere near as quickly as in Scala or Ruby.&lt;/p&gt;
&lt;p&gt;Clojure does have significant drawbacks. It has a small engineering community and no (broadly-accepted, successful) static typing system. Both of these would constrain a large team, but Jepsen’s maintained and used by only 1-3 people at a time. Working with JVM primitives can be frustrating without dropping to Java; I do this on occasion. Some aspects of the polymorphism system are lacking, but these can be worked around with libraries. The error messages are terrible. I have no apologetics for this. ;-)&lt;/p&gt;
&lt;p&gt;I prototyped Jepsen in a few different languages before settling on Clojure. A decade in, I think it was a pretty good tradeoff.&lt;/p&gt;</content>
    </entry>
</feed>
&lt;p&gt;As the cost of generating slop decreases, it’s easy to imagine LLMs generating personae, correspondence, even months-long relationships with real humans before being deployed for commercial or political purposes. &lt;a href="https://www.techtransparencyproject.org/articles/for-sale-on-facebook-accounts-that-can-run-u.s.-election-ads"&gt;Creating plausible accounts and selling them&lt;/a&gt; has been a successful business model in social media for some time; likewise, we have strong clues that &lt;a href="https://arxiv.org/pdf/2307.16336"&gt;LLMs are already used for social media bots&lt;/a&gt;. Social networks have responded to these attacks via out-of-band mechanisms: IP reputation analysis, javascript and mobile app fingerprinting, statistical correlation across multiple accounts, and so on. I’m not sure how to translate these defenses to less centralized and more privacy-oriented networks, like email or blog spam. I strongly suspect the only reason Mastodon hasn’t been eaten alive by LLM spambots is because we’re just not big enough to be lucrative. But those economics are shifting, and even obscure ecological niches can be worth filling.&lt;/p&gt;
&lt;p&gt;As a moderator, that keeps me up at night.&lt;/p&gt;
</content></entry><entry><id>https://aphyr.com/posts/387-the-future-of-customer-support-is-lies-i-guess</id><title>The Future of Customer Support is Lies, I Guess</title><published>2025-05-21T07:12:41-05:00</published><updated>2025-05-21T07:12:41-05:00</updated><link rel="alternate" href="https://aphyr.com/posts/387-the-future-of-customer-support-is-lies-i-guess"></link><author><name>Aphyr</name><uri>https://aphyr.com/</uri></author><content type="html">&lt;p&gt;&lt;em&gt;Update, 2025-05-22: TrueNAS was kind enough to reach out and let me know that their support process does not normally incorporate LLMs. They’re talking about what happened internally, and intend to prevent it from happening again through improved documentation and reviewing the support process as a whole. I’m happy to hear it!&lt;/em&gt;&lt;/p&gt;
&lt;p&gt;&lt;a href="https://www.truenas.com/"&gt;TrueNAS&lt;/a&gt; makes file servers, also known as Network Attached Storage (NAS). I bought one of their smaller boxes to house backups, and it’s a great little NAS. For many years TrueNAS sold boxes with a BSD-based operating system, but in the last few years they’ve released a new, Linux-backed operating system called &lt;a href="https://www.truenas.com/truenas-scale/"&gt;TrueNAS SCALE&lt;/a&gt;, also called “Community Edition”. I was considering migrating to TrueNAS SCALE, but the docs &lt;a href="https://web.archive.org/web/20250408163343/https://www.truenas.com/docs/scale/gettingstarted/migrate/migrate-community/"&gt;started off with this warning&lt;/a&gt;:&lt;/p&gt;
&lt;blockquote&gt;
&lt;p&gt;TrueNAS Enterprise customers should consult with TrueNAS Enterprise Support before attempting migrate.&lt;/p&gt;
&lt;p&gt;The process requires an extended maintenance window, requires executing steps in the correct order to prevent issues with system configuration and operation, and additional system review post-migration to catch and correct any configuration issues.&lt;/p&gt;
&lt;/blockquote&gt;
&lt;p&gt;Since I’d actually purchased their hardware, I wrote to their support email. I asked whether the BSD-based OS was reaching end-of-life, and if there was anything special I should know about the upgrade process for my particular machine. The response came from a customer success coordinator, whose name I’ll omit for privacy. It began:&lt;/p&gt;
&lt;blockquote&gt;
&lt;p&gt;Hi Kyle,&lt;/p&gt;
&lt;p&gt;It sounds like you’re at an important crossroads with your TrueNAS Mini 3.0-X+ system as it transitions from the traditional BSD-based OS to the Linux-based SCALE platform, now known as TrueNAS Community Edition.&lt;/p&gt;
&lt;p&gt;Understanding the Transition&lt;br&gt;
End-of-Life for Certain Versions: The BSD-based version of TrueNAS you’re currently using (TrueNAS-13.0-U6.7) may be reaching its end-of-life phase, prompting a move to newer systems like TrueNAS SCALE, which is based on Linux and offers more modern features.&lt;/p&gt;
&lt;p&gt;Community Edition: The Community edition represents the transition towards a more flexible, open-source platform that leverages FreeBSD jails (similar in concept to containers) for virtualization, providing scalability and extensibility.&lt;/p&gt;
&lt;/blockquote&gt;
&lt;p&gt;The phrase “may be reaching its end-of-life phase” was immediately worrying: how does a TrueNAS support agent not know whether their own products are EOL? But what really shocked me was the blatant lie in the third paragraph: they told me that the new OS was based on Linux, then immediately turned around and said that the new system was based on FreeBSD. FreeBSD jails were TrueNAS’s old approach to virtualization; the new Community Edition &lt;a href="https://www.truenas.com/truenas-community-edition/"&gt;uses Docker containers&lt;/a&gt;. I could not believe that a human being had actually skimmed this response, let alone written it. I was, I think, talking to a Large Language Model (LLM) wearing the face of a person.&lt;/p&gt;
&lt;p&gt;The response went on, coining odd phrases and warning of unspecified risks. “Non-HA (Non-Redundant Array of Independent Disks) TrueNAS hardware can sometimes have specific requirements or limitations when migrating to TrueNAS Community Edition,” they said. RAID and HA are different things in TrueNAS. Moreover, the reason I wrote to support in the first place was to ask what those “specific requirements” were. I was hoping for a support article I’d somehow missed, or a canned response. Instead I got my own question, regurgitated as a warning.&lt;/p&gt;
&lt;p&gt;I wrote with followup questions, and got more nonsense in response. When I asked how TrueNAS Community Edition used FreeBSD jails, a different customer success coordinator replied:&lt;/p&gt;
&lt;blockquote&gt;
&lt;p&gt;FreeBSD jails are similar to containers (like Docker) but built into the FreeBSD system. TrueNAS uses them to run isolated apps and services. For example, you can run Plex, Nextcloud, or other apps safely and separately using jails — all within your TrueNAS system.&lt;/p&gt;
&lt;p&gt;They’re lightweight, secure, and don’t need a full virtual machine to run each service. It’s a great way to expand what your system can do without using much extra power or memory.&lt;/p&gt;
&lt;/blockquote&gt;
&lt;p&gt;This is all lies. TrueNAS CORE, the old FreeBSD OS, ran jails. TrueNAS SCALE / Community Edition runs Docker containers.&lt;/p&gt;
&lt;p&gt;Quoting their HA/RAID confusion, I asked whether, for TrueNAS hardware, the two were actually the same.&lt;/p&gt;
&lt;blockquote&gt;
&lt;p&gt;Good question — they’re different:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;RAID protects your data by spreading it across multiple disks, so if one disk fails, your data is still safe.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;HA (High Availability) means the system has two controllers. If one fails, the other takes over so the system keeps running. This is used in high-end enterprise systems (like our TrueNAS M-series), not in the MINI line.&lt;/p&gt;
&lt;/li&gt;
&lt;/ul&gt;
&lt;/blockquote&gt;
&lt;p&gt;This is, I believe, accurate, but betrays no sign that the original response conflated the two.&lt;/p&gt;
&lt;p&gt;I asked what specific limitations my NAS model had that would cause issues migrating to Community Edition. They replied:&lt;/p&gt;
&lt;blockquote&gt;
&lt;p&gt;The MINI-3.0-X+ is compatible with Community Edition (TrueNAS CORE), but there are a few things to know:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;No Enterprise-only features: Some features like enclosure management and support alerts are only available in the Enterprise version.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;No HA support: Not an issue for the MINI, since it’s a single-controller system anyway.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Support level: CORE is community-supported, so it doesn’t come with iXsystems support like the Enterprise version does.&lt;/p&gt;
&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;But overall — the MINI runs CORE just fine for most use cases!&lt;/p&gt;
&lt;/blockquote&gt;
&lt;p&gt;This sounds like a reasonable response, but it too is complete nonsense. TrueNAS CORE is a completely different operating system. I asked whether it would run Community Edition, also known as SCALE.&lt;/p&gt;
&lt;p&gt;I worked support for many years, helping people with desktops, systems and networking, and occasionally web services. I’m used to knowledge base articles and canned responses on both ends—and indeed, that’s exactly what I was hoping TrueNAS would provide. All I needed to know was why the documentation warned their customers to contact support before upgrading. Instead I got a pile of vague nonsense and lies.&lt;/p&gt;
&lt;p&gt;I get it. Support is often viewed as a cost center, and agents are often working against a brutal, endlessly increasing backlog of tickets. There is pressure at every level to clear those tickets in as little time as possible. Large Language Models create plausible support responses with incredible speed, but their output must still be reviewed by humans. Reviewing large volumes of plausible, syntactically valid text for factual errors is exhausting, time-consuming work, and every few minutes a new ticket arrives.&lt;/p&gt;
&lt;p&gt;Companies must do more with less; what was once a team of five support engineers becomes three. Pressure builds, and the time allocated to review the LLM’s output becomes shorter and shorter. Five minutes per ticket becomes three. The LLM gets it mostly right. Two minutes. Looks good. Sixty seconds. Click submit. There are one hundred eighty tickets still in queue, and behind every one is a disappointed customer, and behind that is the risk of losing one’s job. Thirty seconds. Submit. Submit. The metrics do not measure how many times the system has lied to customers.&lt;/p&gt;
</content></entry><entry><id>https://aphyr.com/posts/386-the-future-of-newspapers-is-lies-i-guess</id><title>The Future of Newspapers is Lies, I Guess</title><published>2025-05-21T06:51:40-05:00</published><updated>2025-05-21T06:51:40-05:00</updated><link rel="alternate" href="https://aphyr.com/posts/386-the-future-of-newspapers-is-lies-i-guess"></link><author><name>Aphyr</name><uri>https://aphyr.com/</uri></author><content type="html">&lt;p&gt;&lt;em&gt;Update, 2025-05-23: Added a paragraph about Dr. Howard Whiteman’s non-existent quote.&lt;/em&gt;&lt;/p&gt;
&lt;p&gt;&lt;em&gt;I subscribe to the Chicago Sun-Times, a non-profit newspaper. This week they sent me a sixty-four page special insert, branded with the Chicago Sun-Times logo, full of LLM nonsense. Yesterday I wrote the following letter to the Chicago Sun-Times. That evening, they published &lt;a href="https://chicago.suntimes.com/news/2025/05/20/syndicated-content-sunday-print-sun-times-ai-misinformation"&gt;this followup&lt;/a&gt;.&lt;/em&gt;&lt;/p&gt;
&lt;p&gt;Dear Mr. Buscaglia and the Chicago Sun-Times,&lt;/p&gt;
&lt;p&gt;The May 18th “Heat Index” summer insert feels remarkably like “AI slop”: text generated by a large language model. The unusually enthusiastic use of adjectives coats the entire insert with an oily sheen, but I’m also concerned that there may be wholesale fabrications in the text.&lt;/p&gt;
&lt;p&gt;For instance, “Summer soundtracks” cites Dr. Daniel Levitin, in an interview with Scientific American, as saying “Music activates the brain’s reward centers in ways similar to other pleasurable experiences. When paired with meaningful activities, these songs become powerful memory anchors that can transport us back to those moments for decades”. These phrases, and substrings thereof, return zero matches on Kagi or Google. Scientific American’s archives include a &lt;a href="https://www.scientificamerican.com/article/exploring-the-musical-bra/"&gt;January 22, 2001 article&lt;/a&gt; with several quotes from Levitin, but nothing like the purported quotes.&lt;/p&gt;
&lt;p&gt;The “Nightfall adventures” article cites Dr. Howard Whiteman, professor at Murray State University. “Understanding nocturnal adaptations helps us appreciate the complexity of ecosystems. Even common backyard species display fascinating behaviors that most people never witness.” I checked with Dr. Whiteman, who confirmed that he had never said these words:&lt;/p&gt;
&lt;blockquote&gt;
&lt;p&gt;I have never spoken to the Chicago Sun-Times or any other Chicago paper, and I can’t recall ever saying any of those words to anyone, ever.&lt;/p&gt;
&lt;p&gt;I write for some regional papers, some of which is online, and I could see an AI putting all of my writing together somehow into those words, but I’ve never actually written about nocturnal behavior per se.&lt;/p&gt;
&lt;/blockquote&gt;
&lt;p&gt;The “Summer food trends” article cites an interview in Bon Appetit, claiming Padma Lakshmi said, “What distinguishes this summer’s global food exploration is how these international flavors are being adapted to local, seasonal ingredients.” Lakshmi &lt;em&gt;is&lt;/em&gt; published in &lt;a href="https://www.bonappetit.com/story/padma-lakshmi"&gt;an interview with BA&lt;/a&gt;, but I can’t find any use of the uncanny marketer-speak “this summer’s global food exploration”.&lt;/p&gt;
&lt;p&gt;The same article also cites the National Ice Cream Retailers Association as forecasting “unexpected savory notes” and “hyperlocal ingredient sourcing” as the “fastest-growing segments in premium frozen desserts”. I find it hard to believe these are segments at all—nor do these phrases appear anywhere on the &lt;a href="https://icecreamassociation.org/general/custom.asp?page=Aboutus"&gt;NICRA web site&lt;/a&gt;.&lt;/p&gt;
&lt;p&gt;The “Summer reading list for 2025” recommends books like “The Rainmakers”, by Percival Everett—a real author, but as far as I can tell, not a real text. Immediately thereafter it offers “Salt and Honey”, by “Delia Owens”—again, a real writer, not a real book. I started reaching out to some of the experts cited in the insert to ask whether their quotes were real, then realized &lt;a href="https://www.theverge.com/ai-artificial-intelligence/670510/chicago-sun-times-ai-generated-reading-list"&gt;the Verge beat me to it&lt;/a&gt;.&lt;/p&gt;
&lt;p&gt;These examples are just from a cursory skim; the insert fails my LLM “sniff test” on essentially every page. How did this happen? How do you print and distribute a full-color, sixty-four page special edition without anyone reading it first? Many of the articles have no byline, but those that do are purportedly by Marco Buscaglia. Was he responsible? Or is he too, like so many of the people cited in this nonsense publication, a victim of confabulatory applied statistics? This would be a great project for the Sun-Times newsroom, assuming OpenAI hasn’t eaten them yet.&lt;/p&gt;
&lt;p&gt;Doubtfully,&lt;/p&gt;
&lt;p&gt;—Kyle Kingsbury&lt;/p&gt;
</content></entry><entry><id>https://aphyr.com/posts/381-when-flat-rate-movers-wont-answer-your-calls</id><title>When Flat Rate Movers Won't Answer Your Calls</title><published>2025-05-03T09:23:16-05:00</published><updated>2025-05-03T09:23:16-05:00</updated><link rel="alternate" href="https://aphyr.com/posts/381-when-flat-rate-movers-wont-answer-your-calls"></link><author><name>Aphyr</name><uri>https://aphyr.com/</uri></author><content type="html">&lt;p&gt;Back in 2023 I went through an inter-state move with &lt;a href="https://www.flatrate.com/moving/"&gt;Flat Rate Movers,
LTD.&lt;/a&gt;, a.k.a. Flat Rate Moving, US DOT 488466, MC 254356. They bungled the job, damaged my stuff
and my house, and then refused to talk to me or their own insurance company. I
placed dozens of calls, wrote letters, emails, nothing worked. I finally got
some money for the damage by working with their insurer directly. I know a lot
of folks struggle with moving companies, and figured I’d write up a bit of this
story in case it helps. You can &lt;a href="#finding-their-insurance"&gt;skip ahead&lt;/a&gt; if you
just want to know about the FMCSA insurer database.&lt;/p&gt;
&lt;p&gt;In November 2022 I scheduled the move with Flat Rate. We took a detailed video
inventory of everything to be moved. I paid over $14,000 for an “all inclusive
move”, which, per the move plan, included full packing services, blankets for furniture, boxes for electronics, lamps, and bicycles, a custom crate for a tabletop, and “extended valuation coverage”—their insurance policy.&lt;/p&gt;
&lt;p&gt;A few days before the move in March 2023, Flat Rate called to tell me a
five-person team would arrive. When the crew arrived, it wasn’t Flat Rate. It
was a team of two from &lt;a href="https://www.esquiremoving.com/"&gt;Esquire Moving Inc&lt;/a&gt; (US DOT 2820253, MC 940499). They had
no idea they were supposed to provide packing services. The crate I paid for was nowhere to be found. They didn’t have enough boxes or blankets. I had to have everything out that day, so
there wasn’t any choice—I worked alongside the team for thirteen hours to
complete the packing and move-out.&lt;/p&gt;
&lt;p&gt;I let the team handle furniture protection and loading the truck, which was a
mistake. As they unloaded a few days later, I realized much of the furniture had
been completely unprotected. My bicycle wasn’t boxed. Lamps were tossed in
loose. There was extensive damage to furniture, art, books, and my home: I
watched the crew rip a doorframe off the wall. The crew provided me with a
backdated bill of lading only on delivery.&lt;/p&gt;
&lt;p&gt;I called Flat Rate a couple days later to discuss my concerns. Their
representative apologized and acknowledged that a crew of two was insufficient. I
sent Flat Rate a detailed email explaining their failure to provide promised
services, accompanied by photographs of the move process and the resulting
damage. I asked for a partial refund, and for Flat Rate to pay for the damage
they caused. I received only an out-of-office auto-response.&lt;/p&gt;
&lt;p&gt;On March 21st, I received a generic form letter from Flat Rate’s claims
department. I filled out the claim forms and submitted them on March 25th, and
repeated that I needed more than an insurance claim: Flat Rate obviously failed
to provide the services I’d paid for.&lt;/p&gt;
&lt;p&gt;Then things got weird. I placed call after call to the claims department. They
didn’t answer. I left voicemail after voicemail. On April 28th I reached an
actual human, but their computers were apparently down, and no supervisor was
in the building—the representative couldn’t do anything but take down a note
to call me back. They never did. I left another voicemail on May 5th. Again on
May 23rd. Again on August 4th. August 8th. I began to realize that Flat Rate’s
strategy was simply to never answer the phone.&lt;/p&gt;
&lt;p&gt;On August 14th, a new hire in the claims department called me: she was taking over my case. She dispatched a local furniture company
to inspect the damages and help complete the claim. The inspectors agreed: the
movers had done extensive damage. I provided them with documentation and photos
as well.&lt;/p&gt;
&lt;p&gt;On September 14 I called the furniture company and confirmed that yes, they had
submitted their report to Flat Rate. However, the furniture company had a
confidentiality agreement with Flat Rate, and could not release their report to
me. I called their contact at Flat Rate, who didn’t answer, and left a
voicemail. I called the claims representative’s direct line: her number was no
longer in service. On September 19th I got a call from the claims
representative, who promised she’d have a decision by the end of the day. Flat Rate never talked to me again.&lt;/p&gt;
&lt;p&gt;I called again and again, but got voicemail every time. I tried a letter from
my lawyers, an arbitration request with the &lt;a href="https://trucking.org/arbitration-loss-and-damages"&gt;American Trucking
Association&lt;/a&gt;, complaints with the &lt;a href="https://www.usa.gov/agencies/federal-motor-carrier-safety-administration"&gt;Federal Motor Carrier Safety
Administration&lt;/a&gt;,
the &lt;a href="https://www.dot.ny.gov/index"&gt;New York Department of Transportation&lt;/a&gt;’s
Motor Carrier complaint department, and the &lt;a href="https://www.dfs.ny.gov/"&gt;New York State Department of
Financial Services&lt;/a&gt;; none of this went anywhere.&lt;/p&gt;
&lt;h2&gt;&lt;a href="#finding-their-insurance" id="finding-their-insurance"&gt;Finding Their Insurance&lt;/a&gt;&lt;/h2&gt;
&lt;p&gt;Here’s the thing: movers offer coverage, they have claims departments, and they do many insurance-like things in house. But for complicated reasons, they are not actually insurance companies, and are not regulated as such. Their coverage is backstopped by separate, real insurance companies. I eventually discovered I could bypass Flat Rate and talk to their backing insurance company directly. The Federal Motor Carrier Safety
Administration maintains &lt;a href="https://li-public.fmcsa.dot.gov/LIVIEW/pkg_carrquery.prc_getdetail"&gt;a public database of moving companies&lt;/a&gt;. Putting in their US DOT number
(488466) yields a page with some basic info. At the bottom are links with their
active and pending insurance, and the history of their insurers.&lt;/p&gt;
&lt;p&gt;&lt;img class="attachment pure-img" src="/data/posts/381/fmcsa.png" alt="A screenshot of the FMCSA page, with links at the bottom" title="A screenshot of the FMCSA page, with links at the bottom"&gt;&lt;/p&gt;
&lt;p&gt;&lt;img class="attachment pure-img" src="/data/posts/381/insurance-history.png" alt="A screenshot of their insurer history" title="A screenshot of their insurer history"&gt;&lt;/p&gt;
&lt;p&gt;The insurance history told me their cargo insurer at the time of my move was
Hanover Insurance Company, with policy RHX H706231. The database also had
Hanover’s phone number. I was able to call Hanover and ask about the
policy—it took a few tries, but I got through to a claims adjuster there who
was able to confirm that yes, I could file a claim directly with Hanover. I
sent over a package of documentation, including the photos and the story so
far.&lt;/p&gt;
&lt;p&gt;Hanover Insurance informed me they’d never heard anything about my claim, and
called Flat Rate to check. Over the next few days we discovered that Flat Rate
wasn’t just refusing to answer my calls—they wouldn’t answer calls from their insurance company either. Hanover wasn’t able to obtain the report from the furniture company, but they did agree, on the basis of the photos, to pay a reasonable amount for the damage to cargo. I had a check a few days later.&lt;/p&gt;
&lt;p&gt;I’m still unhappy with Flat Rate: they charged me a good deal of money for services
they failed to deliver, and never paid for damage to the house. I was gearing up for small claims court—but during the tail end of this fiasco I was entering a fresh hell of home renovation, and got distracted. If you’re reading this, Flat Rate, you still owe me.&lt;/p&gt;
</content></entry><entry><id>https://aphyr.com/posts/380-comments-on-executive-order-14168</id><title>Comments on Executive Order 14168</title><published>2025-02-21T18:04:55-05:00</published><updated>2025-02-21T18:04:55-05:00</updated><link rel="alternate" href="https://aphyr.com/posts/380-comments-on-executive-order-14168"></link><author><name>Aphyr</name><uri>https://aphyr.com/</uri></author><content type="html">&lt;p&gt;&lt;em&gt;Submitted to the Department of State, which is &lt;a href="https://www.federalregister.gov/documents/2025/02/18/2025-02696/30-day-notice-of-proposed-information-collection-application-for-a-us-passport-for-eligible"&gt;requesting comments&lt;/a&gt; on a proposed change which would align US passport gender markers with &lt;a href="https://www.whitehouse.gov/presidential-actions/2025/01/defending-women-from-gender-ideology-extremism-and-restoring-biological-truth-to-the-federal-government/"&gt;executive order 14168&lt;/a&gt;.&lt;/em&gt;&lt;/p&gt;
&lt;p&gt;Executive order 14168 is biologically incoherent and socially cruel. All passport applicants should be allowed to select whatever gender markers they feel best fit, including M, F, or X.&lt;/p&gt;
&lt;p&gt;In humans, neither sex nor gender is binary at any level. There are several possible arrangements of sex chromosomes: X, XX, XY, XXY, XYY, XXX, tetrasomies, pentasomies, etc. A single person can contain a mosaic of cells with different genetics: some XX, some XYY. Chromosomes may not align with genitalia: people with XY chromosomes may have a vulva and internal testes. People with XY chromosomes and a small penis may be surgically and socially reassigned female at birth—and never told what happened. None of these biological dimensions necessarily align with one’s internal concept of gender, or one’s social presentation.&lt;/p&gt;
&lt;p&gt;The executive order has no idea how biology works. It defines “female” as “a person belonging, at conception, to the sex that produces the large reproductive cell”. Zygotes do not produce reproductive cells at all: under this order none  of us have a sex. Oogenesis doesn’t start until over a month into embryo development. Even if people were karyotyping their zygotes immediately after conception so they could tell what “legal” sex they were going to be, they could be wrong: which gametes we produce depends on the formation of the genital ridge.&lt;/p&gt;
&lt;p&gt;All this is to say that if people fill out these forms using this definition of sex, they’re guessing at a question which is both impossible to answer and socially irrelevant. You might be one of the roughly two percent of humans born with an uncommon sexual development and not even know it. Moreover, the proposed change fundamentally asks the wrong question: gender markers on passports are used by border control agents, and are expected to align with how those agents read the passport holder’s gender. A mismatch will create needless intimidation and hardship for travelers.&lt;/p&gt;
&lt;p&gt;Of course most of us will not have our identities challenged under this order. That animus is reserved for trans people, for gender-non-conforming people, for anyone whose genetics, body, dress, voice, or mannerisms don’t quite fit the mold. Those are the people who will suffer under this order. That cruelty should be resisted.&lt;/p&gt;
</content></entry><entry><id>https://aphyr.com/posts/379-geoblocking-the-uk-with-debian-nginx</id><title>Geoblocking the UK with Debian &amp; Nginx</title><published>2025-02-20T14:45:55-05:00</published><updated>2025-02-20T14:45:55-05:00</updated><link rel="alternate" href="https://aphyr.com/posts/379-geoblocking-the-uk-with-debian-nginx"></link><author><name>Aphyr</name><uri>https://aphyr.com/</uri></author><content type="html">&lt;p&gt;A few quick notes for other folks who are &lt;a href="https://geoblockthe.uk"&gt;geoblocking the UK&lt;/a&gt;. I just set up a basic geoblock with Nginx on Debian. This is all stuff you can piece together, but the Maxmind and Nginx docs are a little vague about the details, so I figure it’s worth an actual writeup. My Nginx expertise is ~15 years out of date, so this might not be The Best Way to do things. YMMV.&lt;/p&gt;
&lt;p&gt;First, register for a free &lt;a href="https://www.maxmind.com/en/geolite2/signup"&gt;MaxMind account&lt;/a&gt;; you’ll need this to subscribe to their GeoIP database. Then set up a daemon to maintain a copy of the lookup file locally, and Nginx’s GeoIP2 module:&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;&lt;span&gt;&lt;/span&gt;apt install geoipupdate libnginx-mod-http-geoip2
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;Create a license key on the MaxMind site, and download a copy of the config file you’ll need. Drop that in &lt;code&gt;/etc/GeoIP.conf&lt;/code&gt;. It’ll look like:&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;AccountID XXXX
LicenseKey XXXX
EditionIDs GeoLite2-Country
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;The package sets up a cron job automatically, but we should grab an initial copy of the file. This takes a couple minutes, and writes out &lt;code&gt;/var/lib/GeoIP/GeoLite2-Country.mmdb&lt;/code&gt;:&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;&lt;span&gt;&lt;/span&gt;geoipupdate
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;The GeoIP2 module should already be loaded via &lt;code&gt;/etc/nginx/modules-enabled/50-mod-http-geoip2.conf&lt;/code&gt;. Add a new config snippet like &lt;code&gt;/etc/nginx/conf.d/geoblock.conf&lt;/code&gt;. The first part tells Nginx where to find the GeoIP database file, and then extracts the two-letter ISO country code for each request as a variable. The &lt;code&gt;map&lt;/code&gt; part sets up an &lt;code&gt;$osa_geoblocked&lt;/code&gt; variable, which is set to &lt;code&gt;1&lt;/code&gt; for GB, otherwise &lt;code&gt;0&lt;/code&gt;.&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;geoip2 /var/lib/GeoIP/GeoLite2-Country.mmdb {
  $geoip2_data_country_iso_code country iso_code;
}

map $geoip2_data_country_iso_code $osa_geoblocked {
  GB      1;
  default 0;
}
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;Write an HTML file somewhere like &lt;code&gt;/var/www/custom_errors/osa.html&lt;/code&gt;, explaining the block. Then serve that page for HTTP 451 status codes: in &lt;code&gt;/etc/nginx/sites-enabled/whatever&lt;/code&gt;, add:&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;server {
  ...
  # UK OSA error page
  error_page 451 /osa.html;
  location /osa.html {
    internal;
    root /var/www/custom_errors/;
  }

  # When geoblocked, return 451
  location / {
    if ($osa_geoblocked = 1) {
      return 451;
    }
  }
}
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;Test your config with &lt;code&gt;nginx -t&lt;/code&gt;, and then &lt;code&gt;service nginx reload&lt;/code&gt;. You can test how things look from the UK using a VPN service, or something like &lt;a href="https://www.locabrowser.com/"&gt;locabrowser&lt;/a&gt;.&lt;/p&gt;
&lt;p&gt;This is, to be clear, a bad solution. MaxMind’s free database is not particularly precise, and in general IP lookup tables are chasing a moving target. I know for a fact that there are people in non-UK countries (like Ireland!) who have been inadvertently blocked by these lookup tables. Making those people use Tor or a VPN &lt;em&gt;sucks&lt;/em&gt;, but I don’t know what else to do in the current regulatory environment.&lt;/p&gt;
</content></entry><entry><id>https://aphyr.com/posts/378-seconds-since-the-epoch</id><title>Seconds Since the Epoch</title><published>2024-12-25T13:46:21-05:00</published><updated>2024-12-25T13:46:21-05:00</updated><link rel="alternate" href="https://aphyr.com/posts/378-seconds-since-the-epoch"></link><author><name>Aphyr</name><uri>https://aphyr.com/</uri></author><content type="html">&lt;p&gt;This is not at all news, but it comes up often enough that I think there should be a concise explanation of the problem. People, myself included, like to say that POSIX time, also known as Unix time, is the &lt;a href="https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/Date"&gt;number&lt;/a&gt; &lt;a href="https://www.gnu.org/software/coreutils/manual/html_node/Seconds-since-the-Epoch.html"&gt;of&lt;/a&gt; &lt;a href="https://man7.org/linux/man-pages/man2/time.2.html"&gt;seconds&lt;/a&gt; &lt;a href="https://pkg.go.dev/time#Unix"&gt;since&lt;/a&gt; &lt;a href="https://dev.mysql.com/doc/refman/8.4/en/datetime.html"&gt;the&lt;/a&gt; &lt;a href="https://ruby-doc.org/core-3.0.0/Time.html"&gt;Unix&lt;/a&gt; &lt;a href="https://docs.datastax.com/en/cql-oss/3.x/cql/cql_reference/timestamp_type_r.html"&gt;epoch&lt;/a&gt;, which was 1970-01-01 at 00:00:00.&lt;/p&gt;
&lt;p&gt;This is not true. Or rather, it isn’t true in the sense most people think. For example, it is presently 2024-12-25 at 18:51:26 UTC. The POSIX time is 1735152686. It has been 1735152713 seconds since the POSIX epoch. The POSIX time number is twenty-seven seconds lower.&lt;/p&gt;
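&lt;p&gt;You can watch this assumption at work with nothing but Python’s standard library: both &lt;code&gt;calendar.timegm&lt;/code&gt; and naive &lt;code&gt;datetime&lt;/code&gt; subtraction assume every day is exactly 86,400 seconds, so they agree with each other and with the POSIX number above, and the twenty-seven elapsed leap seconds are nowhere to be found.&lt;/p&gt;

```python
import calendar
import datetime

# POSIX timestamp for 2024-12-25 18:51:26 UTC, computed two ways.
t = calendar.timegm((2024, 12, 25, 18, 51, 26))

# Naive arithmetic: pretend every day is exactly 86,400 seconds long.
delta = datetime.datetime(2024, 12, 25, 18, 51, 26) - datetime.datetime(1970, 1, 1)
naive = delta.days * 86400 + delta.seconds

assert t == naive == 1735152686  # no leap seconds anywhere
```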
&lt;p&gt;This is because POSIX time is derived &lt;a href="https://nvlpubs.nist.gov/nistpubs/Legacy/FIPS/fipspub151-1.pdf"&gt;in IEEE 1003.1&lt;/a&gt; from &lt;a href="https://en.wikipedia.org/wiki/Coordinated_Universal_Time"&gt;Coordinated Universal Time&lt;/a&gt;. The standard assumes that every day is exactly 86,400 seconds long. Specifically:&lt;/p&gt;
&lt;blockquote&gt;
&lt;p&gt;The &lt;em&gt;time()&lt;/em&gt; function returns the value of time in &lt;b&gt;seconds since the Epoch&lt;/b&gt;.&lt;/p&gt;
&lt;/blockquote&gt;
&lt;p&gt;Which is defined as:&lt;/p&gt;
&lt;blockquote&gt;
&lt;p&gt;&lt;b&gt;seconds since the Epoch.&lt;/b&gt; A value to be interpreted as the number of seconds between a specified time and the Epoch. A Coordinated Universal Time name (specified in terms of seconds (&lt;em&gt;tm_sec&lt;/em&gt;), minutes (&lt;em&gt;tm_min&lt;/em&gt;), hours (&lt;em&gt;tm_hour&lt;/em&gt;), days since January 1 of the year (&lt;em&gt;tm_yday&lt;/em&gt;), and calendar year minus 1900
(&lt;em&gt;tm_year&lt;/em&gt;)) is related to a time represented as &lt;em&gt;seconds since the Epoch&lt;/em&gt; according to the expression below.&lt;/p&gt;
&lt;p&gt;If year &amp;lt; 1970 or the value is negative, the relationship is undefined. If year ≥ 1970 and the value is non-negative, the value is related to a Coordinated Universal Time name according to the expression:&lt;/p&gt;
&lt;p&gt;&lt;em&gt;tm_sec&lt;/em&gt; + &lt;em&gt;tm_min&lt;/em&gt; * 60 + &lt;em&gt;tm_hour&lt;/em&gt; * 3600 + &lt;em&gt;tm_yday&lt;/em&gt; * 86400 +
(&lt;em&gt;tm_year&lt;/em&gt;-70) * 31536000 + ((&lt;em&gt;tm_year&lt;/em&gt; - 69) / 4) * 86400&lt;/p&gt;
&lt;/blockquote&gt;
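&lt;p&gt;That expression translates directly into code. A sketch in Python, with C-style truncating division; note that this 1988-era formula also omits the 100- and 400-year Gregorian leap rules, so it only holds through 2099:&lt;/p&gt;

```python
def seconds_since_epoch(tm_sec, tm_min, tm_hour, tm_yday, tm_year):
    # The exact expression quoted above: tm_year is years since 1900,
    # tm_yday is days since January 1 (so January 1 is day 0), and
    # division truncates, as in C.
    return (tm_sec + tm_min * 60 + tm_hour * 3600 + tm_yday * 86400
            + (tm_year - 70) * 31536000 + ((tm_year - 69) // 4) * 86400)

# 2024-12-25 is day 359 of 2024 (0-based); 2024 is year 124 since 1900.
assert seconds_since_epoch(26, 51, 18, 359, 124) == 1735152686
```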
&lt;p&gt;The length of the day is not 86,400 seconds, and in fact changes over time. To keep UTC days from drifting too far from solar days, astronomers periodically declare a &lt;a href="https://en.wikipedia.org/wiki/Leap_second"&gt;leap second&lt;/a&gt; in UTC. Consequently, every few years POSIX time jumps backwards, &lt;a href="https://marc.info/?l=linux-kernel&amp;amp;m=134113577921904"&gt;wreaking&lt;/a&gt; &lt;a href="https://www.zdnet.com/article/qantas-suffers-delays-due-to-linux-leap-second-bug/"&gt;utter&lt;/a&gt; &lt;a href="https://blog.cloudflare.com/how-and-why-the-leap-second-affected-cloudflare-dns/"&gt;havoc&lt;/a&gt;. Someday it might jump forward.&lt;/p&gt;
&lt;h2&gt;&lt;a href="#archaeology" id="archaeology"&gt;Archaeology&lt;/a&gt;&lt;/h2&gt;
&lt;p&gt;Appendix B of IEEE 1003 has a fascinating discussion of leap seconds:&lt;/p&gt;
&lt;blockquote&gt;
&lt;p&gt;The concept of leap seconds is added for precision; at the time this standard was published, 14 leap seconds had been added since January 1, 1970. These 14 seconds are ignored to provide an easy and compatible method of computing time differences.&lt;/p&gt;
&lt;/blockquote&gt;
&lt;p&gt;I, too, love to ignore things to make my life easy. The standard authors knew “seconds since the epoch” were not, in fact, seconds since the epoch. And they admit as much:&lt;/p&gt;
&lt;blockquote&gt;
&lt;p&gt;Most systems’ notion of “time” is that of a continuously-increasing value, so this value should increase even during leap seconds. However, not only do most systems not keep track of leap seconds, but most systems are probably not synchronized to any standard time reference. Therefore, it is inappropriate to require that a time represented as seconds since the Epoch precisely represent the number of seconds between the referenced time and the Epoch.&lt;/p&gt;
&lt;p&gt;It is sufficient to require that applications be allowed to treat this time as if it represented the number of seconds between the referenced time and the Epoch. It is the responsibility of the vendor of the system, and the administrator of the system, to ensure that this value represents the number of seconds between the referenced time and the Epoch as closely as necessary for the application being run on that system….&lt;/p&gt;
&lt;/blockquote&gt;
&lt;p&gt;I imagine there was some debate over this point. The appendix punts, saying that vendors and administrators must make time align “as closely as necessary”, and that “this value should increase even during leap seconds”. The latter is achievable, but the former is arguably impossible: the standard requires POSIX clocks be twenty-seven seconds off.&lt;/p&gt;
&lt;blockquote&gt;
&lt;p&gt;Consistent interpretation of seconds since the Epoch can be critical to certain types of distributed applications that rely on such timestamps to synchronize events. The accrual of leap seconds in a time standard is not predictable. The number of leap seconds since the Epoch will likely increase. The standard is
more concerned about the synchronization of time between applications of astronomically short duration and the Working Group expects these concerns to become more critical in the future.&lt;/p&gt;
&lt;/blockquote&gt;
&lt;p&gt;In a sense, the opposite happened. Time synchronization is &lt;em&gt;always&lt;/em&gt; off, so systems generally function (however incorrectly) when times drift a bit. But leap seconds are rare, and the linearity evoked by the phrase “seconds since the epoch” is so deeply baked into our intuition that software can accrue serious, unnoticed bugs. Until, a few years later, one of those tiny little leap seconds takes down a big chunk of the internet.&lt;/p&gt;
&lt;h2&gt;&lt;a href="#what-to-do-instead" id="what-to-do-instead"&gt;What To Do Instead&lt;/a&gt;&lt;/h2&gt;
&lt;p&gt;If you just need to compute the duration between two events on one computer, use &lt;a href="https://docs.redhat.com/en/documentation/red_hat_enterprise_linux_for_real_time/7/html/reference_guide/sect-posix_clocks#sect-POSIX_clocks"&gt;&lt;code&gt;CLOCK_MONOTONIC&lt;/code&gt;&lt;/a&gt;, or better yet, &lt;code&gt;CLOCK_BOOTTIME&lt;/code&gt;. If you don’t need to exchange timestamps with other systems that assume POSIX time, use &lt;a href="https://www.ipses.com/eng/in-depth-analysis/standard-of-time-definition/"&gt;TAI, GPS, or maybe LORAN&lt;/a&gt;. If you do need rough alignment with other POSIX-timestamp systems, &lt;a href="https://developers.google.com/time/smear"&gt;smear leap seconds&lt;/a&gt; over a longer window of time. Libraries like &lt;a href="https://github.com/qntm/t-a-i"&gt;qntm’s t-a-i&lt;/a&gt; can convert back and forth between POSIX and TAI.&lt;/p&gt;
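&lt;p&gt;For the single-machine case, a minimal sketch in Python (&lt;code&gt;CLOCK_BOOTTIME&lt;/code&gt; is Linux-specific, hence the guard):&lt;/p&gt;

```python
import time

# time.time() can jump backwards on leap seconds or NTP steps; a
# monotonic clock cannot, so it is safe for measuring durations.
start = time.monotonic()
sum(range(1000))          # some stand-in work to time
elapsed = time.monotonic() - start
assert elapsed >= 0       # monotonic clocks never run backwards

# On Linux, CLOCK_BOOTTIME also keeps advancing while the machine sleeps:
if hasattr(time, "CLOCK_BOOTTIME"):
    uptime = time.clock_gettime(time.CLOCK_BOOTTIME)
```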
&lt;p&gt;There’s an ongoing effort to &lt;a href="https://www.timeanddate.com/news/astronomy/end-of-leap-seconds-2022"&gt;end leap seconds&lt;/a&gt;, hopefully &lt;a href="https://www.bipm.org/documents/20126/64811223/Resolutions-2022.pdf/281f3160-fc56-3e63-dbf7-77b76500990f"&gt;by 2035&lt;/a&gt;. It’ll require additional work to build conversion tables into everything that relies on the “86,400 seconds per day” assumption, but it should also make it much simpler to ask questions like “how many seconds between these two times”. At least for times after 2035!&lt;/p&gt;
</content></entry><entry><id>https://aphyr.com/posts/371-threads-wont-take-you-south-of-market</id><title>Threads Won't Take You South of Market</title><published>2024-12-01T10:01:36-05:00</published><updated>2024-12-01T10:01:36-05:00</updated><link rel="alternate" href="https://aphyr.com/posts/371-threads-wont-take-you-south-of-market"></link><author><name>Aphyr</name><uri>https://aphyr.com/</uri></author><content type="html">&lt;p&gt;In June 2023, when &lt;a href="https://threads.net"&gt;Threads&lt;/a&gt; announced their &lt;a href="https://techcrunch.com/2023/07/05/adam-mosseri-says-metas-threads-app-wont-have-activitypub-support-at-launch/"&gt;plans to federate&lt;/a&gt; with other &lt;a href="https://en.wikipedia.org/wiki/Fediverse"&gt;Fediverse instances&lt;/a&gt;, there was a good deal of &lt;a href="https://fedipact.online/"&gt;debate&lt;/a&gt; around whether smaller instances should allow federation or block it pre-emptively. As one of the admins of &lt;a href="https://woof.group"&gt;woof.group&lt;/a&gt;, I wrote about some of the &lt;a href="https://blog.woof.group/announcements/considering-large-instance-federation"&gt;potential risks and rewards&lt;/a&gt; of federating with Threads. We decided to &lt;a href="https://blog.woof.group/announcements/deferring-threads-federation"&gt;wait and see&lt;/a&gt;.&lt;/p&gt;
&lt;p&gt;In my queer and leather circles, Facebook and Instagram have been generally understood as hostile environments for over a decade. In 2014, their &lt;a href="https://www.eff.org/deeplinks/2014/09/facebooks-real-name-policy-can-cause-real-world-harm-lgbtq-community"&gt;“Real Name” policy&lt;/a&gt; made life particularly difficult for trans people, drag queens, sex workers, and people who, for various reasons, needed to keep their real name disconnected from their queer life. My friends have been repeatedly suspended from both platforms for showing too much skin, or using the peach emoji. Meta’s moderation has been aggressive, opaque, and wildly inconsistent: sometimes full nudity is fine; other times a kiss or swimsuit is beyond the line. In some circles, maintaining a series of backup accounts in advance of one’s ban became de rigueur.&lt;/p&gt;
&lt;p&gt;I’d hoped that federation between Threads and the broader Fediverse might allow a &lt;a href="https://blog.woof.group/mods/the-shape-of-social-space"&gt;more nuanced spectrum&lt;/a&gt; of moderation norms. Threads might opt for a more conservative environment locally, but through federation, allow their users to interact with friends on instances with more liberal norms. Conversely, most of my real-life friends are still on Meta services—I’d love to see their posts and chat with them again. Threads could communicate with Gay Fedi (using the term in the broadest sense), and de-rank or hide content they don’t like on a per-post or per-account basis.&lt;/p&gt;
&lt;p&gt;This world seems technically feasible. Meta reports &lt;a href="https://techcrunch.com/2024/11/03/threads-now-has-275m-monthly-active-users/"&gt;275 million Monthly Active Users (MAUs)&lt;/a&gt;, and over &lt;a href="https://www.statista.com/statistics/1092227/facebook-product-dau/"&gt;three billion&lt;/a&gt; across other Meta services. The Fediverse has something like &lt;a href="https://fedidb.org/"&gt;one million MAUs across various instances&lt;/a&gt;. This is not a large jump in processing or storage; nor would it seem to require a large increase in moderation staff. Threads has already committed to doing the requisite engineering, user experience, and legal work to allow federation across a broad range of instances. Meta is swimming in cash.&lt;/p&gt;
&lt;p&gt;All this seems a moot point. A year and a half later, Threads &lt;a href="https://www.theverge.com/24107998/threads-fediverse-mastodon-how-to"&gt;is barely half federated&lt;/a&gt;. It publishes Threads posts to the world, but only if you dig in to the settings and check the “Fediverse Sharing” box. Threads users can see replies to their posts, but can’t talk back. Threads users can’t mention others, see mentions from other people, or follow anyone outside Threads. This may work for syndication, but is essentially unusable for conversation.&lt;/p&gt;
&lt;p&gt;Despite the fact that Threads users can’t follow or see mentions from people on other instances, Threads has already &lt;a href="https://www.threads.net/moderated_servers"&gt;opted to block&lt;/a&gt; a slew of instances where gay &amp;amp; leather people congregate. Threads blocks &lt;a href="https://hypno.social"&gt;hypno.social&lt;/a&gt;, &lt;a href="https://rubber.social"&gt;rubber.social&lt;/a&gt;, &lt;a href="https://4bear.com"&gt;4bear.com&lt;/a&gt;, &lt;a href="https://nsfw.lgbt"&gt;nsfw.lgbt&lt;/a&gt;, &lt;a href="https://kinkyelephant.com"&gt;kinkyelephant.com&lt;/a&gt;, &lt;a href="https://kinktroet.social"&gt;kinktroet.social&lt;/a&gt;, &lt;a href="https://barkclub.xyz"&gt;barkclub.xyz&lt;/a&gt;, &lt;a href="https://mastobate.social"&gt;mastobate.social&lt;/a&gt;, and &lt;a href="https://kinky.business"&gt;kinky.business&lt;/a&gt;. They also block the (now-defunct) instances &lt;a href="https://bear.community"&gt;bear.community&lt;/a&gt;, &lt;a href="https://gaybdsm.group"&gt;gaybdsm.group&lt;/a&gt;, and &lt;a href="https://gearheads.social"&gt;gearheads.social&lt;/a&gt;. They block more general queer-friendly instances like &lt;a href="https://bark.lgbt"&gt;bark.lgbt&lt;/a&gt;, &lt;a href="https://super-gay.co"&gt;super-gay.co&lt;/a&gt;, &lt;a href="https://gay.camera"&gt;gay.camera&lt;/a&gt;, and &lt;a href="https://gaygeek.social"&gt;gaygeek.social&lt;/a&gt;. They block sex-positive instances like &lt;a href="https://nsfwphotography.social"&gt;nsfwphotography.social&lt;/a&gt;, &lt;a href="https://nsfw.social"&gt;nsfw.social&lt;/a&gt;, and &lt;a href="https://net4sw.com"&gt;net4sw.com&lt;/a&gt;. All these instances are blocked for having “violated our Community Standards or Terms of Use”. 
Others like &lt;a href="https://fisting.social"&gt;fisting.social&lt;/a&gt;, &lt;a href="https://mastodon.hypnoguys.com"&gt;mastodon.hypnoguys.com&lt;/a&gt;, &lt;a href="https://abdl.link"&gt;abdl.link&lt;/a&gt;, &lt;a href="https://qaf.men"&gt;qaf.men&lt;/a&gt;, and &lt;a href="https://social.rubber.family"&gt;social.rubber.family&lt;/a&gt; are blocked for having “no publicly accessible feed”. I don’t know what this means: mastodon.hypnoguys.com, for instance, has the usual Mastodon &lt;a href="https://mastodon.hypnoguys.com/public/local"&gt;publicly accessible local feed&lt;/a&gt;.&lt;/p&gt;
&lt;p&gt;It’s not like these instances are hotbeds of spam, hate speech, or harassment: woof.group federates heavily with most of the servers I mentioned above, and we rarely have problems with their moderation. Most have reasonable and enforced media policies requiring sensitive-media flags for genitals, heavy play, and so on. Those policies are generally speaking looser than Threads (woof.group, for instance, allows butts!) but there are plenty of accounts and posts on these instances which would be anodyne under Threads’ rules.&lt;/p&gt;
&lt;p&gt;I am shocked that woof.group is &lt;em&gt;not&lt;/em&gt; on Threads’ blocklist yet. We have similar users who post similar things. Our content policies are broadly similar—several of the instances Threads blocks actually adopted woof.group’s specific policy language. I doubt it’s our size: Threads blocks several instances with less than ten MAUs, and woof.group has over seven hundred.&lt;/p&gt;
&lt;p&gt;I’ve been out of the valley for nearly a decade, and I don’t have insight into Meta’s policies or decision-making. I’m sure Threads has their reasons. Whatever they are, Threads, like all of Meta’s services, feels distinctly uncomfortable with sex, and sexual expression is a vibrant aspect of gay culture.&lt;/p&gt;
&lt;p&gt;This is part of why I started woof.group: we deserve spaces moderated with our subculture in mind. But I also hoped that by designing a moderation policy which compromised with normative sensibilities, we might retain connections to a broader set of publics. This particular leather bar need not be an invite-only clubhouse; it can be a part of a walkable neighborhood. For nearly five years we’ve kept that balance, retaining open federation with most all the Fediverse. I get the sense that Threads intends to wall its users off from our world altogether—to make “bad gays” invisible. If Threads were a taxi service, it wouldn’t take you &lt;a href="https://sfleatherdistrict.org/wp-content/uploads/2021/04/Rubin-Valley-of-Kings.pdf"&gt;South of Market&lt;/a&gt;.&lt;/p&gt;
</content></entry><entry><id>https://aphyr.com/posts/370-ecobee-settings-for-heat-pumps-with-resistive-aux-heat</id><title>Ecobee Settings for Heat Pumps with Resistive Aux Heat</title><published>2024-02-28T23:41:38-05:00</published><updated>2024-02-28T23:41:38-05:00</updated><link rel="alternate" href="https://aphyr.com/posts/370-ecobee-settings-for-heat-pumps-with-resistive-aux-heat"></link><author><name>Aphyr</name><uri>https://aphyr.com/</uri></author><content type="html">&lt;p&gt;I’m in the process of replacing a old radiator system with a centrally-ducted, air-source heat pump system with electric resistive backup heat. I’ve found that the default ecobee algorithm seems to behave surprisingly poorly for this system, and wanted to write up some of the settings that I’ve found yield better behavior.&lt;/p&gt;
&lt;p&gt;A disclaimer. I’m not an HVAC professional. I have two decades in software operations, a background in physics, and far too much experience inferring system dynamics from timeseries graphs. This advice may void your warranty, burn your house down, etc.; everything you do is at your own risk.&lt;/p&gt;
&lt;h2&gt;&lt;a href="#the-system" id="the-system"&gt;The System&lt;/a&gt;&lt;/h2&gt;
&lt;p&gt;First, a bit about the system in question. You can skip this section if you know about heat pumps, short cycling, staging, etc.&lt;/p&gt;
&lt;p&gt;There are two main subsystems: a heat pump and an air handler. The heat pump sits outside: it has a fan which moves outside air over a heat exchanger, and a compressor, which compresses a working fluid. The working fluid is connected in a loop to the air handler, where it runs through another heat exchanger to heat or cool the inside air. The air handler also has a blower fan which circulates air through the whole house. If the heat pump can’t keep up with demand, the air handler also has a pair of resistive electric heating coils, called &lt;em&gt;aux heat&lt;/em&gt;, which can supplement or take over from the heat pumps.&lt;/p&gt;
&lt;p&gt;A few important things to know about heat pumps. First, electric resistive heaters have a &lt;em&gt;Coefficient of Performance&lt;/em&gt; (CoP) of essentially 1: they take 1 joule of electricity and turn it into 1 joule of heat in the air. My heat pumps have a typical heating CoP of about 2-4, depending on temperature and load. They take 1 joule of electricity and suck 2 to 4 joules of heat from the outside air into the inside. This means they cost 2-4 times less (in electric opex, at least) than a standard resistive electric heating system.&lt;/p&gt;
&lt;p&gt;Second, heat pumps, like A/C systems, shouldn’t start and stop too frequently. Starting up causes large transient electrical and mechanical stresses. Ideally they should run at a low speed for several hours, rather than running at full blast, shutting off, then turning on again ten minutes later; that stop-start pattern is called “short cycling”.&lt;/p&gt;
&lt;p&gt;Third, the heat pump’s fan, heat pump’s compressor, and the air handler’s fan are all variable-speed: they can run very slow (quiet, efficient), very fast (loud, more powerful), or at any speed in between. This helps reduce short-cycling, as well as improving efficiency and reducing noise. However, directly setting compressor and fan speed requires a special “communicating” thermostat made by the same manufacturer, which speaks a proprietary wire protocol. My manufacturer’s communicating thermostats are very expensive and have a reputation for buggy hardware and software, so I opted to get an &lt;a href="https://www.ecobee.com/en-us/smart-thermostats/smart-wifi-thermostat/"&gt;ecobee 3 lite&lt;/a&gt;. Like essentially every other thermostat on the planet, the ecobee uses ~8 wires with simple binary signals, like “please give me heat” and “please turn on the fan”. It can’t ask for a specific &lt;em&gt;amount&lt;/em&gt; of heat.&lt;/p&gt;
&lt;p&gt;However, all is not lost. The standard thermostat protocol has a notion of a “two-stage” system—if the Y1 wire is hot, it’s asking for “some heat”, and if Y2 is also hot, it’s asking for “more heat”. My variable-speed heat pump emulates a two-stage system using a hysteresis mechanism. In stage 1, the heat pump offers some nominal low degree of heat. When the thermostat calls for stage 2, it kicks up the air handler blower a notch, and after 20 minutes, it slowly ramps up the heat pump compressor as well. I assume there’s a ramp-down for going back to stage 1. They say this provides “true variable-capacity operation”. You can imagine that the most efficient steady state is where the thermostat toggles rapidly between Y1 and Y2, causing the system to hang out at exactly the right variable speeds for current conditions—but I assume ecobee has some kind of frequency limiter to avoid damaging systems that actually have two separate stages with distinct startup/shutdown costs.&lt;/p&gt;
&lt;p&gt;The air handler’s aux heat is also staged: if the W1 wire is hot, I think (based on staring at the wiring diagram and air handler itself) it just energizes one of two coils. If W2 is also hot, it energizes both. I think this is good: we want to use as much of the heat pump heat as possible, and if we can get away with juuuust a little aux heat, instead of going full blast, that’ll save energy.&lt;/p&gt;
&lt;p&gt;In short: aux heat is 2-4x more expensive than heat pump heat; we want to use as little aux as possible. Short-cycling is bad: we want long cycle times. For maximum efficiency, we want both the heat pump and aux heat to be able to toggle between stage 1 and 2 depending on demand.&lt;/p&gt;
&lt;h2&gt;&lt;a href="#automatic-problems" id="automatic-problems"&gt;Automatic Problems&lt;/a&gt;&lt;/h2&gt;
&lt;p&gt;I initially left the ecobee at its automatic default settings for a few weeks; it’s supposed to learn the house dynamics and adapt. I noticed several problems. Presumably this behavior depends on weather, building thermal properties, HVAC dynamics, and however ecobee’s tuned their algorithm last week, so YMMV: check your system and see how it looks.&lt;/p&gt;
&lt;p&gt;It’s kind of buried, but ecobee offers a really nice time-series visualization of thermostat behavior on their web site. There’s also a Home Assistant integration that pulls in data from their API. It’s a pain in the ass to set up (ecobee, there’s no need for this to be so user-hostile), but it does work.&lt;/p&gt;
&lt;p&gt;Over the next few weeks I stared obsessively at time-series plots from both ecobee and Home Assistant, and mucked around with ecobee’s settings. Most of what I’ll describe below is configurable in the settings menu on the thermostat: look for “settings”, “installation settings”, “thresholds”.&lt;/p&gt;
&lt;h2&gt;&lt;a href="#reducing-aux-heat" id="reducing-aux-heat"&gt;Reducing Aux Heat&lt;/a&gt;&lt;/h2&gt;
&lt;p&gt;First, the automatics kicked on aux heat a &lt;em&gt;lot&lt;/em&gt;. Even in situations where the heat pump would have been perfectly capable of getting up to temp, ecobee would burn aux heat to reach the target temperature (&lt;em&gt;set point&lt;/em&gt;) faster.&lt;/p&gt;
&lt;p&gt;Part of the problem was that ecobee ships (I assume for safety reasons) with ludicrously high cut-off thresholds for heat pumps. Mine had a “compressor min outdoor temperature” of something like 35 degrees, so the heat pump wouldn’t run for most of the winter. The actual minimum temperature of my model is -4; cold-climate heat pumps run down to -20. I lowered mine to -5; the manual says there’s a physical thermostat interlock on the heat pump itself, and I trust that more than the ecobee weather feed anyway.&lt;/p&gt;
&lt;p&gt;Second: ecobee seems to prioritize speed over progress: if it’s not getting to the set point fast enough, it’ll burn aux heat to get there sooner. I don’t want this: I’m perfectly happy putting on a jacket. After a bit I worked out that the heat pumps alone can cover the house load down to ~20 degrees or so, and raised “aux heat max outdoor temperature” to 25. If it’s any warmer than that, the system won’t use aux heat.&lt;/p&gt;
&lt;h2&gt;&lt;a href="#reverse-staging" id="reverse-staging"&gt;Reverse Staging&lt;/a&gt;&lt;/h2&gt;
&lt;p&gt;A second weird behavior: once the ecobee called for stage 2, either from the heat pump or aux, it would run in stage 2 until it hit the set point, then shut off the system entirely. Running aux stage 2 costs more energy. Running the heat pump in stage 2 shortens the cycle time: remember, the goal is a low, long running time.&lt;/p&gt;
&lt;p&gt;&lt;img class="attachment pure-img" src="/data/posts/370/no-reverse-staging.png" alt="A time-series plot showing that once stage 2 engages, it runs until shutting off, causing frequent cycling" title="A time-series plot showing that once stage 2 engages, it runs until shutting off, causing frequent cycling"&gt;&lt;/p&gt;
&lt;p&gt;The setting I used to fix this is called “reverse staging”. Ecobee’s &lt;a href="https://support.ecobee.com/s/articles/Threshold-settings-for-ecobee-thermostats"&gt;documentation&lt;/a&gt; says:&lt;/p&gt;
&lt;blockquote&gt;
&lt;p&gt;Compressor Reverse Staging: Enables the second stage of the compressor near the temperature setpoint.&lt;/p&gt;
&lt;/blockquote&gt;
&lt;p&gt;As far as I can tell this documentation is completely wrong. From watching the graphs, this setting seems to allow the staging state machine to move from stage 2 back to stage 1, rather than forcing it to run in stage 2 until shutting off entirely. It’ll go back up to stage 2 if it needs to, and back down again.&lt;/p&gt;
&lt;p&gt;&lt;img class="attachment pure-img" src="/data/posts/370/reverse-staging.png" alt="With reverse staging, it'll jump up to stage 2, then drop back down to stage 1." title="With reverse staging, it'll jump up to stage 2, then drop back down to stage 1."&gt;&lt;/p&gt;
&lt;h2&gt;&lt;a href="#manual-staging" id="manual-staging"&gt;Manual Staging&lt;/a&gt;&lt;/h2&gt;
&lt;p&gt;I couldn’t seem to get ecobee’s automatic staging to drop back to stage 1 heat reliably, or avoid kicking on aux heat when stage 2 heat pump heat would have done fine. I eventually gave up and turned off automatic staging altogether. I went with the delta temperature settings. If the temperature delta between the set point and indoor air is more than 1 degree, it turns on heat pump stage 1. More than two degrees, stage 2. More than four degrees, aux 1. More than five degrees, aux 2. The goal here is to use only as much aux heat as absolutely necessary to supplement the heat pump. I also have aux heat configured to run concurrently with the heat pump: there’s a regime where the heat pump provides useful heat, but not quite enough, and my intuition is that &lt;em&gt;some&lt;/em&gt; heat pump heat is cheaper than all aux.&lt;/p&gt;
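&lt;p&gt;As a toy model, here’s roughly how I think of those delta thresholds (the wire names are from the earlier description; a real thermostat adds hysteresis and minimum cycle timers on top of this):&lt;/p&gt;

```python
def heat_calls(setpoint_f, indoor_f):
    """Which heat calls fire for a given setpoint, per the manual
    delta-temperature staging described above (degrees Fahrenheit)."""
    delta = setpoint_f - indoor_f
    on = []
    if delta > 1:
        on.append("Y1")  # heat pump stage 1
    if delta > 2:
        on.append("Y2")  # heat pump stage 2
    if delta > 4:
        on.append("W1")  # aux stage 1, running concurrently with the heat pump
    if delta > 5:
        on.append("W2")  # aux stage 2
    return on

# 4.5 degrees behind: both heat pump stages plus one aux coil.
assert heat_calls(68, 63.5) == ["Y1", "Y2", "W1"]
```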
&lt;p&gt;I initially tried the default 0.5 degree delta before engaging the heat pump’s first stage. It turns out that for some temperature regimes this creates rapid cycling: that first-stage heat is enough to heat the house rapidly to the set point, and then there’s nothing to do but shut the system off. The house cools, and the system kicks on again, several times per hour. I raised the delta to 1 degree, which significantly extended the cycle time.&lt;/p&gt;
&lt;h2&gt;&lt;a href="#large-setback-with-preheating" id="large-setback-with-preheating"&gt;Large Setback with Preheating&lt;/a&gt;&lt;/h2&gt;
&lt;p&gt;A &lt;em&gt;setback&lt;/em&gt; is when you lower your thermostat, e.g. while away from home or sleeping. There’s some folk wisdom that heat pumps should run at a constant temperature all the time, rather than have a large setback. As far as I can tell, this is because a properly-sized heat pump system (unlike a gas furnace) doesn’t deliver a ton of excess heat, so it can’t catch up quickly when asked to return to a higher temperature. To compensate, the system might dip into aux heat, and that’s super expensive.&lt;/p&gt;
&lt;p&gt;I’m in the US Midwest, where winter temperatures are usually around 15-40 F. I drop from 68 to 60 overnight, and the house can generally coast all night without having to run any HVAC at all. In theory the ecobee should be able to figure out the time required to come back to 68 and start the heat pump early in the morning, but in practice I found it would wait too long, and then the large difference between actual and set temp would trigger aux heat. To avoid this, I added a custom activity in Ecobee’s web interface (I call mine “preheat”), with a temperature of 64. I have my schedule set up with an hour of preheat in the morning, before going to the normal 68. This means there’s less of a delta-T, and the system can heat up entirely using the heat pump.&lt;/p&gt;
&lt;p&gt;&lt;img class="attachment pure-img" src="/data/posts/370/setback.png" alt="A time series graph showing temperature falling smoothly overnight as the HVAC is disabled, and then rising during the preheat phase in the morning." title="A time series graph showing temperature falling smoothly overnight as the HVAC is disabled, and then rising during the preheat phase in the morning."&gt;&lt;/p&gt;
</content></entry><entry><id>https://aphyr.com/posts/369-classnotfoundexception-java-util-sequencedcollection</id><title>ClassNotFoundException: java.util.SequencedCollection</title><published>2024-02-20T18:03:42-05:00</published><updated>2024-02-20T18:03:42-05:00</updated><link rel="alternate" href="https://aphyr.com/posts/369-classnotfoundexception-java-util-sequencedcollection"></link><author><name>Aphyr</name><uri>https://aphyr.com/</uri></author><content type="html">&lt;p&gt;Recently I’ve had users of my libraries start reporting mysterious errors due to a missing reference to &lt;code&gt;SequencedCollection&lt;/code&gt;, a Java interface added in JDK 21:&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;Execution error (ClassNotFoundException) at
jdk.internal.loader.BuiltinClassLoader/loadClass (BuiltinClassLoader.java:641).
java.util.SequencedCollection
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;Specifically, projects using &lt;a href="https://github.com/jepsen-io/jepsen/issues/585"&gt;Jepsen 0.3.5&lt;/a&gt; started throwing this error due to Clojure’s built-in &lt;code&gt;rrb_vector.clj&lt;/code&gt;, which is particularly vexing given that the class doesn’t reference &lt;code&gt;SequencedCollection&lt;/code&gt; at all.&lt;/p&gt;
&lt;p&gt;It turns out that the Clojure compiler, when run on JDK 21 or later, will automatically insert references to this class when compiling certain expressions–likely because it now appears in the supertypes of other classes. Jepsen already had &lt;code&gt;:javac-options [&amp;quot;-source&amp;quot; &amp;quot;11&amp;quot; &amp;quot;-target&amp;quot; &amp;quot;11&amp;quot;]&lt;/code&gt; in its &lt;code&gt;project.clj&lt;/code&gt;, but builds still emitted references to &lt;code&gt;SequencedCollection&lt;/code&gt;, because the reference is inserted by the Clojure compiler, not &lt;code&gt;javac&lt;/code&gt;. Similarly, adding &lt;code&gt;[&amp;quot;--release&amp;quot; &amp;quot;11&amp;quot;]&lt;/code&gt; didn’t work.&lt;/p&gt;
&lt;p&gt;Long story short: as far as I can tell the only workaround is to downgrade to Java 17 (or anything prior to 21) when building Jepsen as a library. That’s not super hard with &lt;code&gt;update-alternatives&lt;/code&gt;, but I still imagine I’ll be messing this up until Clojure’s compiler can get a patch.&lt;/p&gt;
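As an illustrative sketch (the options and versions are taken from the post above; the comments and layout are mine), the pinning that does not help looks roughly like this in a Leiningen project.clj:

```clojure
;; Sketch of the (insufficient) pins discussed above. :javac-options only
;; constrains javac, but the SequencedCollection reference is emitted by the
;; Clojure compiler, which targets whatever JDK launched the build. The
;; effective workaround is running the build itself under JDK 17 or earlier,
;; e.g. selected via update-alternatives on Debian.
(defproject jepsen "0.3.5"
  :javac-options ["-source" "11" "-target" "11"]  ; tried; did not help
  ;; :javac-options ["--release" "11"]            ; also tried; did not help
  )
```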
</content></entry><entry><id>https://aphyr.com/posts/368-how-to-replace-your-cpap-in-only-666-days</id><title>How to Replace Your CPAP In Only 666 Days</title><published>2024-02-03T19:38:53-05:00</published><updated>2024-02-03T19:38:53-05:00</updated><link rel="alternate" href="https://aphyr.com/posts/368-how-to-replace-your-cpap-in-only-666-days"></link><author><name>Aphyr</name><uri>https://aphyr.com/</uri></author><content type="html">&lt;p&gt;&lt;em&gt;This story is not practical advice. For me, it’s closing the book on an almost two-year saga. For you, I hope it’s an enjoyable bit of bureaucratic schadenfreude. For Anthem, I hope it’s the subject of a series of painful but transformative meetings. This is not an isolated event. I’ve had dozens of struggles with Anthem customer support, and they all go like this.&lt;/em&gt;&lt;/p&gt;
&lt;p&gt;&lt;em&gt;If you’re looking for practical advice: it’s this. Be polite. Document everything. Keep a log. Follow the claims process. Check the laws regarding insurance claims in your state. If you pass the legally-mandated deadline for your claim, call customer service. Do not allow them to waste a year of your life, or force you to resubmit your claim from scratch. Initiate a complaint with your state regulators, and escalate directly to &lt;a href="mailto:[email protected]"&gt;Gail Boudreaux’s team&lt;/a&gt;–or whoever Anthem’s current CEO is.&lt;/em&gt;&lt;/p&gt;
&lt;p&gt;To start, experience an equipment failure.&lt;/p&gt;
&lt;p&gt;Use your CPAP daily for six years. Wake up on day zero with it making a terrible sound. Discover that the pump assembly is failing. Inquire with Anthem Ohio, your health insurer, about how to have it repaired. Allow them to refer you to a list of local durable medical equipment providers. Start calling down the list. Discover half the list are companies like hair salons. Eventually reach a company in your metro which services CPAPs. Discover they will not repair broken equipment unless a doctor tells them to.&lt;/p&gt;
&lt;p&gt;Leave a message with your primary care physician. Call the original sleep center that provided your CPAP. Discover they can’t help, since you’re no longer in the same state. Return to your primary, who can’t help either, because he had nothing to do with your prescription. Put the sleep center and your primary in touch, and ask them to talk.&lt;/p&gt;
&lt;p&gt;On day six, call your primary to check in. He’s received a copy of your sleep records, and has forwarded them to a local sleep center you haven’t heard of. They, in turn, will talk to Anthem for you.&lt;/p&gt;
&lt;p&gt;On day 34, receive an approval letter labeled “confirmation of medical necessity” from Anthem, directed towards the durable medical equipment company. Call that company and confirm you’re waitlisted for a new CPAP. They are not repairable. Begin using your partner’s old CPAP, which is not the right class of device, but at least it helps.&lt;/p&gt;
&lt;p&gt;Over the next 233 days, call that medical equipment company regularly. Every time, inquire whether there’s been any progress, and hear “we’re still out of stock”. Ask them what the manufacturer backlog might be, how many people are ahead of you in line, how many CPAPs they &lt;em&gt;do&lt;/em&gt; receive per month, or whether anyone has ever received an actual device from them. They won’t answer any questions. Realize they are never going to help you.&lt;/p&gt;
&lt;p&gt;On day 267, realize there is no manufacturer delay. The exact machine you need is in stock on CPAP.com. Check to make sure there’s a claims process for getting reimbursed by Anthem. Pay over three thousand dollars for it. When it arrives, enjoy being able to breathe again.&lt;/p&gt;
&lt;p&gt;On day 282, follow CPAP.com’s documentation to file a claim with Anthem online. Include your prescription, receipt, shipping information, and the confirmation of medical necessity Anthem sent you.&lt;/p&gt;
&lt;p&gt;On day 309, open the mail to discover a mysterious letter from Anthem. They’ve received your appeal. You do not recall appealing anything. There is no information about what might have been appealed, but something will happen within 30-60 days. There is nothing about your claim.&lt;/p&gt;
&lt;p&gt;On day 418, emerge from a haze of lead, asbestos, leaks, and a host of other home-related nightmares; remember Anthem still hasn’t said anything about your claim. Discover your claim no longer appears on Anthem’s web site. Call Anthem customer service. They have no record of your claim either. Ask about the appeal letter you received. Listen, gobsmacked, as they explain that they decided your claim was in fact an appeal, and transferred it immediately to the appeals department. The appeals department examined the appeal and looked for the claim it was appealing. Finding none, they decided the appeal was moot, and rejected it. At no point did anyone inform you of this. Explain to Anthem’s agent that you filed a claim online, not an appeal. At their instruction, resign yourself to filing the entire claim again, this time using a form via physical mail. Include a detailed letter explaining the above.&lt;/p&gt;
&lt;p&gt;On day 499, retreat from the battle against home entropy to call Anthem again. Experience a sense of growing dread as the customer service agent is completely unable to locate either of your claims. After a prolonged conversation, she finds it using a different tool. There is no record of the claim from day 418. There was a claim submitted on day 282. Because the claim does not appear in her system, there is no claim. Experience the cognitive equivalent of the Poltergeist hallway shot as the agent tells you “Our members are not eligible for charges for claim submission”.&lt;/p&gt;
&lt;p&gt;Hear the sentence “There is a claim”. Hear the sentence “There is no claim”. Write these down in the detailed log you’ve been keeping of this unfurling Kafkaesque debacle. Ask again if there is anyone else who can help. There is no manager you can speak to. There is no tier II support. “I’m the only one you can talk to,” she says. Write that down.&lt;/p&gt;
&lt;p&gt;Call CPAP.com, which has a help line staffed by caring humans. Explain that contrary to their documentation, Anthem now says members cannot file claims for equipment directly. Ask if they are the provider. Discover the provider for the claim is probably your primary care physician, who has no idea this is happening. Leave a message with him anyway. Leave a plaintive message with your original sleep center for good measure.&lt;/p&gt;
&lt;p&gt;On day 502, call your sleep center again. They don’t submit claims to insurance, but they confirm that some people &lt;em&gt;do&lt;/em&gt; successfully submit claims to Anthem using the process you’ve been trying. They confirm that Anthem is, in fact, hot garbage. Call your primary, send them everything you have, and ask if they can file a claim for you.&lt;/p&gt;
&lt;p&gt;On day 541, receive a letter from Anthem, responding to your inquiry. You weren’t aware you filed one.&lt;/p&gt;
&lt;blockquote&gt;
&lt;p&gt;Please be informed that we have received your concern. Upon review we have noticed that there is no claim billed for the date of service mentioned in the submitted documents, Please provide us with a valid claim. If not submitted,provide us with a valid claim iamge to process your claim further.&lt;/p&gt;
&lt;/blockquote&gt;
&lt;p&gt;Stare at the letter, typos and all. Contemplate your insignificance in the face of the vast and uncaring universe that is Anthem.&lt;/p&gt;
&lt;p&gt;On day 559, steel your resolve and call Anthem again. Wait as this representative, too, digs for evidence of a claim. Listen with delight as she finds your documents from day 282. Confirm that yes, a claim definitely exists. Have her repeat that so you can write it down. Confirm that the previous agent was lying: members can submit claims. At her instruction, fill out the claim form a third time. Write a detailed letter, this time with a Document Control Number (DCN). Submit the entire package via registered mail. Wait for USPS to confirm delivery eight days later.&lt;/p&gt;
&lt;p&gt;On day 588, having received no response, call Anthem again. Explain yourself. You’re getting good at this. Let the agent find a reference number for an appeal, but not the claim. Incant the magic DCN, which unlocks your original claim.  “I was able to confirm that this was a claim submitted form for a member,” he says. He sees your claim form, your receipts, your confirmation of medical necessity. However: “We still don’t have the claim”.&lt;/p&gt;
&lt;p&gt;Wait for him to try system after system. Eventually he confirms what you heard on day 418: the claims department transferred your claims to appeals. “Actually this is not an appeal, but it was denied as an appeal.” Agree as he decides to submit your claim manually again, with the help of his supervisor. Write down the call ref number: he promises you’ll receive an email confirmation, and an Explanation of Benefits in 30-40 business days.&lt;/p&gt;
&lt;p&gt;“I can assure you this is the last time you are going to call us regarding this.”&lt;/p&gt;
&lt;p&gt;While waiting for this process, recall insurance is a regulated industry. Check the Ohio Revised Code. Realize that section 3901.381 establishes deadlines for health insurers to respond to claims. They should have paid or denied each of your claims within 30 days–45 if supporting documentation was required. Leave a message with the Ohio Department of Insurance’s Market Conduct Division. File an insurance complaint with ODI as well.&lt;/p&gt;
&lt;p&gt;Grimly wait as no confirmation email arrives.&lt;/p&gt;
&lt;p&gt;On day 602, open an email from Anthem. They are “able to put the claim in the system and currenty on processed [sic] to be applied”. They’re asking for more time. Realize that Anthem is well past the 30-day deadline under the Ohio Revised Code for all three iterations of your claim.&lt;/p&gt;
&lt;p&gt;On day 607, call Anthem again. The representative explains that the claim will be received and processed as of your benefits. She asks you to allow 30-45 days from today. Quote section 3901.381 to her. She promises to expedite the request; it should be addressed within 72 business hours. Like previous agents, she promises to call you back. Nod, knowing she won’t.&lt;/p&gt;
&lt;p&gt;On day 610, email the Ohio Department of Insurance to explain that Anthem has found entirely new ways to avoid paying their claims on time. It’s been 72 hours without a callback; call Anthem again. She says “You submitted a claim and it was received” on day 282. She says the claim was expedited. Ask about the status of that expedited resolution. “Because on your plan we still haven’t received any claims,” she explains. Wonder if you’re having a stroke.&lt;/p&gt;
&lt;p&gt;Explain that it has been 328 days since you submitted your claim, and ask what is going on. She says that since the first page of your mailed claim was a letter, that might have caused it to be processed as an appeal. Remind yourself Anthem told you to enclose that letter. Wait as she attempts to refer you to the subrogation department, until eventually she gives up: the subrogation department doesn’t want to help.&lt;/p&gt;
&lt;p&gt;Call the subrogation department yourself. Allow Anthem’s representative to induce in you a period of brief aphasia. She wants to call a billing provider. Try to explain there is none: you purchased the machine yourself. She wants to refer you to collections. Wonder why on earth Anthem would want money from &lt;em&gt;you&lt;/em&gt;. Write down “I literally can’t understand what she thinks is going on” in your log. Someone named Adrian will call you by tomorrow.&lt;/p&gt;
&lt;p&gt;Contemplate alternative maneuvers. Go on a deep Google dive, searching for increasingly obscure phrases gleaned from Anthem’s bureaucracy. Trawl through internal training PDFs for Anthem’s ethics and compliance procedures. Call their compliance hotline: maybe someone cares about the law. It’s a third-party call center for Elevance Health. Fail to realize this is another name for Anthem. Begin drawing a map of Anthem’s corporate structure.&lt;/p&gt;
&lt;p&gt;From a combination of publicly-available internal slide decks, LinkedIn, and obscure HR databases, discover the name, email, and phone number of Anthem’s Chief Compliance Officer. Call her, but get derailed by an internal directory that requires a 10-digit extension. Try the usual tricks with automated phone systems. No dice.&lt;/p&gt;
&lt;p&gt;Receive a call from an Anthem agent. Ask her what happened to “72 hours”. She says there’s been no response from the adjustments team. She doesn’t know when a response will come. There’s no one available to talk to. Agree to speak to another representative tomorrow. It doesn’t matter: they’ll never call you.&lt;/p&gt;
&lt;p&gt;Do more digging. Guess the CEO’s email from what you can glean of Anthem’s account naming scheme. Write her an email with a short executive summary and a detailed account of the endlessly-unfolding Boschian hellscape in which her company has entrapped you. A few hours later, receive an acknowledgement from an executive concierge at Elevance (Anthem). It’s polite, formal, and syntactically coherent. She promises to look into things. Smile. Maybe this will work.&lt;/p&gt;
&lt;p&gt;On day 617, receive a call from the executive concierge. 355 days after submission, she’s identified a problem with your claim. CPAP.com provided you with an invoice with a single line item (the CPAP) and two associated billing codes (a CPAP and humidifier). Explain that they are integrated components of a single machine. She understands, but insists you need a receipt with multiple line items for them anyway. Anthem has called CPAP.com, but they can’t discuss an invoice unless you call them. Explain you’ll call them right now.&lt;/p&gt;
&lt;p&gt;Call CPAP.com. Their customer support continues to be excellent. Confirm that it is literally impossible to separate the CPAP and humidifier, or to produce an invoice with two line items for a single item. Nod as they ask what the hell Anthem is doing. Recall that this is the exact same machine Anthem covered for you eight years ago. Start a joint call with the CPAP.com representative and Anthem’s concierge. Explain the situation to her voicemail.&lt;/p&gt;
&lt;p&gt;On day 623, receive a letter from ODI. Anthem has told ODI this was a problem with the billing codes, and ODI does not intervene in billing code issues. They have, however, initiated a secretive second investigation. There is no way to contact the second investigator.&lt;/p&gt;
&lt;p&gt;Write a detailed email to the concierge and ODI explaining that it took over three hundred days for Anthem to inform you of this purported billing code issue. Explain again that it is a single device. Emphasize that Anthem has been handling claims for this device for roughly a decade.&lt;/p&gt;
&lt;p&gt;Wait. On day 636, receive a letter from Anthem’s appeals department. They’ve received your request for an appeal. You never filed one. They want your doctor or facility to provide additional information to Carelon Medical Benefits Management. You have never heard of Carelon. There is no explanation of how to reach Carelon, or what information they might require. The letter concludes: “There is currently no authorization on file for the services rendered.” You need to seek authorization from a department called “Utilization Management”.&lt;/p&gt;
&lt;p&gt;Call the executive concierge again. Leave a voicemail asking what on earth is going on.&lt;/p&gt;
&lt;p&gt;On day 637, receive an email: she’s looking into it.&lt;/p&gt;
&lt;p&gt;On day 644, Anthem calls you. It’s a new agent who is immensely polite. Someone you’ve never heard of was asked to work on another project, so she’s taking over your case. She has no updates yet, but promises to keep in touch.&lt;/p&gt;
&lt;p&gt;She does so. On day 653, she informs you Anthem will pay your claim in full. On day 659, she provides a check number. On day 666, the check arrives.&lt;/p&gt;
&lt;p&gt;Deposit the check. Write a thank you email to the ODI and Anthem’s concierge. Write this, too, down in your log.&lt;/p&gt;
</content></entry><entry><id>https://aphyr.com/posts/367-why-is-jepsen-written-in-clojure</id><title>Why is Jepsen Written in Clojure?</title><published>2023-12-05T09:49:05-05:00</published><updated>2023-12-05T09:49:05-05:00</updated><link rel="alternate" href="https://aphyr.com/posts/367-why-is-jepsen-written-in-clojure"></link><author><name>Aphyr</name><uri>https://aphyr.com/</uri></author><content type="html">&lt;p&gt;People keep asking why &lt;a href="https://jepsen.io"&gt;Jepsen&lt;/a&gt; is written in &lt;a href="https://clojure.org/"&gt;Clojure&lt;/a&gt;, so I figure it’s worth having a referencable answer. I’ve programmed in something like twenty languages. Why choose a Weird Lisp?&lt;/p&gt;
&lt;p&gt;Jepsen is built for testing concurrent systems–mostly databases. Because it tests concurrent systems, the language itself needs good support for concurrency. Clojure’s immutable, persistent data structures make it easier to write correct concurrent programs, and the language and runtime have excellent concurrency support: real threads, promises, futures, atoms, locks, queues, cyclic barriers, all of java.util.concurrent, etc. I also considered languages (like Haskell) with more rigorous control over side effects, but decided that Clojure’s less-dogmatic approach was preferable.&lt;/p&gt;
&lt;p&gt;Because Jepsen tests databases, it needs broad client support. Almost every database has a JVM client, typically written in Java, and Clojure has decent Java interop.&lt;/p&gt;
&lt;p&gt;Because testing is experimental work, I needed a language which was concise, adaptable, and well-suited to prototyping. Clojure is terse, and its syntactic flexibility–in particular, its macro system–works well for that. The threading macros make chained transformations readable, and macros enable re-usable error handling and easy control of resource scopes. The Clojure REPL is really handy for exploring the data a test run produces.&lt;/p&gt;
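As a small illustration (the data is invented, not taken from Jepsen itself), a ->> pipeline reads as a top-to-bottom chain of transformations:

```clojure
;; Hypothetical history of operations, in the general shape Jepsen-style
;; tests use. ->> threads the collection through each step in order.
(->> [{:type :ok, :value 1} {:type :fail, :value 2} {:type :ok, :value 3}]
     (filter (comp #{:ok} :type))  ; keep only :ok operations
     (map :value)                  ; extract their values
     (reduce +))                   ; sum them
;; => 4
```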
&lt;p&gt;Tests involve representing, transforming, and inspecting complex, nested data structures. Clojure’s data structures and standard library functions are possibly the best I’ve ever seen. I also print a lot of structures to the console and files: Clojure’s data syntax (EDN) is fantastic for this.&lt;/p&gt;
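For instance (a trivial sketch, not Jepsen code), Clojure values print as EDN and read back unchanged, which is what makes dumping structures to files and the console so convenient:

```clojure
;; pr-str prints a value as EDN; read-string parses it back.
(def op {:process 0, :type :invoke, :f :read, :value nil})
(= op (read-string (pr-str op)))
;; => true
```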
&lt;p&gt;Because tests involve manipulating a decent, but not huge, chunk of data, I needed a language with “good enough” performance. Clojure’s certainly not the fastest language out there, but idiomatic Clojure is usually within an order of magnitude or two of Java, and I can shave off the difference where critical. The JVM has excellent profiling tools, and these work well with Clojure.&lt;/p&gt;
&lt;p&gt;Jepsen’s (gosh) about a decade old now: I wanted a language with a mature core and emphasis on stability. Clojure is remarkably stable, both in terms of JVM target and the language itself. Libraries don’t “rot” anywhere near as quickly as in Scala or Ruby.&lt;/p&gt;
&lt;p&gt;Clojure does have significant drawbacks. It has a small engineering community and no (broadly-accepted, successful) static typing system. Both of these would constrain a large team, but Jepsen’s maintained and used by only 1-3 people at a time. Working with JVM primitives can be frustrating without dropping to Java; I do this on occasion. Some aspects of the polymorphism system are lacking, but these can be worked around with libraries. The error messages are terrible. I have no apologetics for this. ;-)&lt;/p&gt;
&lt;p&gt;I prototyped Jepsen in a few different languages before settling on Clojure. A decade in, I think it was a pretty good tradeoff.&lt;/p&gt;
</content></entry></feed>
Raw headers
{
  "cache-control": "private,max-age=60",
  "cf-cache-status": "DYNAMIC",
  "cf-ray": "9508f8fc9135e1e3-ORD",
  "connection": "keep-alive",
  "content-type": "application/atom+xml",
  "date": "Mon, 16 Jun 2025 08:35:14 GMT",
  "server": "cloudflare",
  "set-cookie": "JSESSIONID=WwxUjCM-TXuFONyFO0dRb8O_XfGleGIwbv-2a2zJ; path=/; secure; HttpOnly; Max-Age=2592000; Expires=Wed, 16-Jul-2025 08:35:14 GMT",
  "strict-transport-security": "max-age=31536000; includeSubdomains",
  "transfer-encoding": "chunked",
  "vary": "accept-encoding",
  "x-content-type-options": "nosniff",
  "x-frame-options": "SAMEORIGIN, DENY",
  "x-geoblock": "US",
  "x-xss-protection": "1; mode=block"
}
Parsed with @rowanmanning/feed-parser
{
  "meta": {
    "type": "atom",
    "version": "1.0"
  },
  "language": null,
  "title": "Aphyr: Posts",
  "description": null,
  "copyright": null,
  "url": "https://aphyr.com/",
  "self": "https://aphyr.com/posts.atom",
  "published": null,
  "updated": "2025-05-29T23:05:42.000Z",
  "generator": null,
  "image": null,
  "authors": [],
  "categories": [],
  "items": [
    {
      "id": "https://aphyr.com/posts/388-the-future-of-comments-is-lies-i-guess",
      "title": "The Future of Comments is Lies, I Guess",
      "description": null,
      "url": "https://aphyr.com/posts/388-the-future-of-comments-is-lies-i-guess",
      "published": "2025-05-29T17:36:16.000Z",
      "updated": "2025-05-29T17:36:16.000Z",
      "content": "<p>I’ve been involved in content moderation since roughly 2004. I’ve built spam prevention for corporate and personal e-mail, moderated open-source mailing lists and IRC channels, worked at a couple social media networks, and help moderate a Mastodon instance for a few hundred people. In the last few years I’ve wasted more time fighting blog comment spam, and I’m pretty sure Large Language Models (LLMs) are to blame.</p>\n<p>I think of spam as a space with multiple equilibria. Producing spam takes work. Spammers are willing to invest that work because each message has a small chance to make money, or achieve political or emotional goals. Some spam, like the endless identical Viagra scams in my email spam folder, or the PHPBB comment spam I filter out here on aphyr.com, is cheap to generate and easy to catch. I assume the spammers make it up in volume. Other spam, like <a href=\"https://www.ibm.com/think/topics/spear-phishing\">spear phishing attacks</a>, is highly time-consuming: the spammer must identify a target, carefully craft a plausible message using, say, the identity of the target’s co-workers, or construct a facade of a bank’s log-in page, and so on. This kind of spam is more likely to make it through filters, but because it takes a lot of human work, is generally only worth it for high-value targets.</p>\n<p>LLMs seem to be changing these equilibria. Over the last year I’ve seen a new class of comment spam, using what I’m fairly sure is LLM-generated text. These comments make specific, plausible remarks about the contents of posts and images, and work in a link to some web site or mention a product. Take this one I caught a few months back:</p>\n<blockquote>\n<p>\"Walking down a sidewalk lined with vibrant flowers is one of life’s simple joys! 
It reminds me of playing the [link redacted] slope game, where you have to navigate through colorful landscapes while dodging obstacles.</p>\n</blockquote>\n<p>Before 2023, you’d likely have paid a human a few cents to write and post that. Now, thanks to LLMs, this sort of thing is trivially automated. The model will happily fabricate relatable personal experiences in service of a spam campaign:</p>\n<blockquote>\n<p>That photo reminds me of the first time I tried macro photography in my backyard. I spent an hour trying to get a clear shot of a red flower, experimenting with angles and lighting. It was so much fun discovering the little details up close! If you ever need a break from photography, I recommend playing Snow Rider 3D for a bit of quick, light-hearted fun.</p>\n</blockquote>\n<p>Other spam seems to glue together LLM remarks with what I think is a hand-written snippet of ad copy. Note the abrupt shift in grammar, diction, and specificity.</p>\n<blockquote>\n<p>This piece masterfully blends technical depth with mythological storytelling, transforming a standard Haskell programming interview into an epic narrative. It cleverly critiques the complexity and absurdity of some technical interviews by illustrating how type-level Haskell can be pushed to esoteric extremes beautiful, powerful, and largely impractical. A fascinating and relevant read for anyone interested in the intersection of programming, language design, and narrative. I’m James Maicle, working at Cryptoairhub where we run a clear and insightful crypto blog. I’ll be bookmarking your site and following the updates. Glad to see so much valuable information shared here looking forward to exploring more strategies together. Thanks for sharing. 
If you interest about Crypto please visit my website and read my article [link redacted] Crypto Blog.</p>\n</blockquote>\n<p>The same thing is happening on aggregators like Hacker News, where commenters post more-or-less-obviously LLM output for… I’m not sure, exactly. Karma? Weirder still are bots like <a href=\"https://bsky.app/profile/hackerbriefs.bsky.social\">Hacker Briefs</a>, which I suspect use an LLM to summarize trending HN posts. Here’s <a href=\"https://bsky.app/profile/hackerbriefs.bsky.social/post/3lnz4kirkpv25\">its summary of a recent article I wrote</a>:</p>\n<blockquote>\n<p>“Jepsen: Amazon RDS for PostgreSQL 17.4”</p>\n<p>New multi-AZ clusters in Amazon RDS for PostgreSQL offer better failure recovery but may return outdated data when reading after writes. Caution is needed.</p>\n</blockquote>\n<p>This is a totally plausible summary of <a href=\"https://jepsen.io/analyses/amazon-rds-for-postgresql-17.4\">the article</a>, and it is utterly, laughably wrong. Multi-AZ clusters are not new, and they do not return outdated data when reading after writes. As the abstract succinctly explains, they allow Long Fork, a different anomaly which does not involve real-time orders at all. The bot ignored the actual problem and invented a different one. This sort of spam isn’t obviously motivated by commercial interest, but it is nevertheless depressing: one more drop in the misinformation slurry.</p>\n<p>Of course this is not news. Product reviews are inundated with LLM slop, as are social media networks. LLMs allow for cheap, fast, and automated generation of unique spam which is difficult for machines and humans to identify. The cost falls on me and other moderators, who must sift through LLM bullshit trying to sieve “awkward but sincere human” from “automated attack”. 
Thanks to OpenAI et al I read more spam, and each message takes longer to check.</p>\n<p>This problem is only going to get worse as LLMs improve and spammers develop more sophisticated ways to use them. In recent weeks I’ve received vague voice messages from strangers with uncanny speech patterns just asking to catch up—a sentence which, had I uttered it prior to 2023, would have been reasonably interpreted as a sign of psychosis. I assume these too are LLM-generated scams, perhaps a <a href=\"https://dfpi.ca.gov/wp-content/uploads/2025/03/Pig-Butchering-Scam-Playbook.pdf\">pig butchering</a> scheme. So far these are from strangers, but it’s not hard to imagine an attacker using text and voice synthesis to impersonate a friend, colleague, or lover in a detailed conversation. Or one’s doctor, or bank.</p>\n<p>As the cost of generating slop decreases, it’s easy to imagine LLMs generating personae, correspondence, even months-long relationships with real humans before being deployed for commercial or political purposes. <a href=\"https://www.techtransparencyproject.org/articles/for-sale-on-facebook-accounts-that-can-run-u.s.-election-ads\">Creating plausible accounts and selling them</a> has been a successful business model in social media for some time; likewise, we have strong clues that <a href=\"https://arxiv.org/pdf/2307.16336\">LLMs are already used for social media bots</a>. Social networks have responded to these attacks via out-of-band mechanisms: IP reputation analysis, javascript and mobile app fingerprinting, statistical correlation across multiple accounts, and so on. I’m not sure how to translate these defenses to less centralized and more privacy-oriented networks, like email or blog spam. I strongly suspect the only reason Mastodon hasn’t been eaten alive by LLM spambots is because we’re just not big enough to be lucrative. 
But those economics are shifting, and even obscure ecological niches can be worth filling.</p>\n<p>As a moderator, that keeps me up at night.</p>",
      "image": null,
      "media": [],
      "authors": [
        {
          "name": "Aphyr",
          "email": null,
          "url": "https://aphyr.com/"
        }
      ],
      "categories": []
    },
    {
      "id": "https://aphyr.com/posts/387-the-future-of-customer-support-is-lies-i-guess",
      "title": "The Future of Customer Support is Lies, I Guess",
      "description": null,
      "url": "https://aphyr.com/posts/387-the-future-of-customer-support-is-lies-i-guess",
      "published": "2025-05-21T12:12:41.000Z",
      "updated": "2025-05-21T12:12:41.000Z",
      "content": "<p><em>Update, 2025-05-22: TrueNAS was kind enough to reach out and let me know that their support process does not normally incorporate LLMs. They’re talking about what happened internally, and intend to prevent it from happening again through improved documentation and reviewing the support process as a whole. I’m happy to hear it!</em></p>\n<p><a href=\"https://www.truenas.com/\">TrueNAS</a> makes file servers, also known as Network Attached Storage (NAS). I bought one of their smaller boxes to house backups, and it’s a great little NAS. For many years TrueNAS sold boxes with a BSD-based operating system, but in the last few years they’ve released a new, Linux-backed operating system called <a href=\"https://www.truenas.com/truenas-scale/\">TrueNAS SCALE</a>, also called “Community Edition”. I was considering migrating to TrueNAS SCALE, but the docs <a href=\"https://web.archive.org/web/20250408163343/https://www.truenas.com/docs/scale/gettingstarted/migrate/migrate-community/\">started off with this warning</a>:</p>\n<blockquote>\n<p>TrueNAS Enterprise customers should consult with TrueNAS Enterprise Support before attempting migrate.</p>\n<p>The process requires an extended maintenance window, requires executing steps in the correct order to prevent issues with system configuration and operation, and additional system review post-migration to catch and correct any configuration issues.</p>\n</blockquote>\n<p>Since I’d actually purchased their hardware, I wrote to their support email. I asked whether the BSD-based OS was reaching end-of-life, and if there was anything special I should know about the upgrade process for my particular machine. The response came from a customer success coordinator, whose name I’ll omit for privacy. 
It began:</p>\n<blockquote>\n<p>Hi Kyle,</p>\n<p>It sounds like you’re at an important crossroads with your TrueNAS Mini 3.0-X+ system as it transitions from the traditional BSD-based OS to the Linux-based SCALE platform, now known as TrueNAS Community Edition.</p>\n<p>Understanding the Transition<br>\nEnd-of-Life for Certain Versions: The BSD-based version of TrueNAS you’re currently using (TrueNAS-13.0-U6.7) may be reaching its end-of-life phase, prompting a move to newer systems like TrueNAS SCALE, which is based on Linux and offers more modern features.</p>\n<p>Community Edition: The Community edition represents the transition towards a more flexible, open-source platform that leverages FreeBSD jails (similar in concept to containers) for virtualization, providing scalability and extensibility.</p>\n</blockquote>\n<p>The phrase “may be reaching its end-of-life phase” was immediately worrying: how does a TrueNAS support agent not know whether their own products are EOL? But what really shocked me was the blatant lie in the third paragraph: they told me that the new OS was based on Linux, then immediately turned around and said that the new system was based on FreeBSD. FreeBSD jails were TrueNAS’s old approach to virtualization; the new Community Edition <a href=\"https://www.truenas.com/truenas-community-edition/\">uses Docker containers</a>. I could not believe that a human being had actually skimmed this response, let alone written it. I was, I think, talking to a Large Language Model (LLM) wearing the face of a person.</p>\n<p>The response went on, coining odd phrases and warning of unspecified risks. “Non-HA (Non-Redundant Array of Independent Disks) TrueNAS hardware can sometimes have specific requirements or limitations when migrating to TrueNAS Community Edition,” they said. RAID and HA are different things in TrueNAS. Moreover, the reason I wrote to support in the first place was to ask what those “specific requirements” were. 
I was hoping for a support article I’d somehow missed, or a canned response. Instead I got my own question, regurgitated as a warning.</p>\n<p>I wrote with followup questions, and got more nonsense in response. When I asked how TrueNAS Community Edition used FreeBSD jails, a different customer success coordinator replied:</p>\n<blockquote>\n<p>FreeBSD jails are similar to containers (like Docker) but built into the FreeBSD system. TrueNAS uses them to run isolated apps and services. For example, you can run Plex, Nextcloud, or other apps safely and separately using jails — all within your TrueNAS system.</p>\n<p>They’re lightweight, secure, and don’t need a full virtual machine to run each service. It’s a great way to expand what your system can do without using much extra power or memory.</p>\n</blockquote>\n<p>This is all lies. TrueNAS CORE, the old FreeBSD OS, ran jails. TrueNAS SCALE / Community Edition runs Docker containers.</p>\n<p>Quoting their HA/RAID confusion, I asked whether for TrueNAS hardware, the two were actually the same.</p>\n<blockquote>\n<p>Good question — they’re different:</p>\n<ul>\n<li>\n<p>RAID protects your data by spreading it across multiple disks, so if one disk fails, your data is still safe.</p>\n</li>\n<li>\n<p>HA (High Availability) means the system has two controllers. If one fails, the other takes over so the system keeps running. This is used in high-end enterprise systems (like our TrueNAS M-series), not in the MINI line.</p>\n</li>\n</ul>\n</blockquote>\n<p>This is, I believe, accurate, but betrays no sign that the original response conflated the two.</p>\n<p>I asked what specific limitations my NAS model had that would cause issues migrating to Community Edition. 
They replied:</p>\n<blockquote>\n<p>The MINI-3.0-X+ is compatible with Community Edition (TrueNAS CORE), but there are a few things to know:</p>\n<ul>\n<li>\n<p>No Enterprise-only features: Some features like enclosure management and support alerts are only available in the Enterprise version.</p>\n</li>\n<li>\n<p>No HA support: Not an issue for the MINI, since it’s a single-controller system anyway.</p>\n</li>\n<li>\n<p>Support level: CORE is community-supported, so it doesn’t come with iXsystems support like the Enterprise version does.</p>\n</li>\n</ul>\n<p>But overall — the MINI runs CORE just fine for most use cases!</p>\n</blockquote>\n<p>This sounds like a reasonable response, but it too is complete nonsense. TrueNAS CORE is a completely different operating system. I asked whether it would run Community Edition, also known as SCALE.</p>\n<p>I worked support for many years, helping people with desktops, systems and networking, and occasionally for web services. I’m used to knowledge base articles and canned responses on both ends—and indeed, that’s exactly what I was hoping TrueNAS would provide. All I needed to know was why the documentation warned their customers to contact support before upgrading. Instead I got a pile of vague nonsense and lies.</p>\n<p>I get it. Support is often viewed as a cost center, and agents are often working against a brutal, endlessly increasing backlog of tickets. There is pressure at every level to clear those tickets in as little time as possible. Large Language Models create plausible support responses with incredible speed, but their output must still be reviewed by humans. Reviewing large volumes of plausible, syntactically valid text for factual errors is exhausting, time-consuming work, and every few minutes a new ticket arrives.</p>\n<p>Companies must do more with less; what was once a team of five support engineers becomes three. 
Pressure builds, and the time allocated to review the LLM’s output becomes shorter and shorter. Five minutes per ticket becomes three. The LLM gets it mostly right. Two minutes. Looks good. Sixty seconds. Click submit. There are one hundred eighty tickets still in queue, and behind every one is a disappointed customer, and behind that is the risk of losing one’s job. Thirty seconds. Submit. Submit. The metrics do not measure how many times the system has lied to customers.</p>",
      "image": null,
      "media": [],
      "authors": [
        {
          "name": "Aphyr",
          "email": null,
          "url": "https://aphyr.com/"
        }
      ],
      "categories": []
    },
    {
      "id": "https://aphyr.com/posts/386-the-future-of-newspapers-is-lies-i-guess",
      "title": "The Future of Newspapers is Lies, I Guess",
      "description": null,
      "url": "https://aphyr.com/posts/386-the-future-of-newspapers-is-lies-i-guess",
      "published": "2025-05-21T11:51:40.000Z",
      "updated": "2025-05-21T11:51:40.000Z",
      "content": "<p><em>Update, 2023-05-23: Added a paragraph about Dr. Howard Whiteman’s non-existent quote.</em></p>\n<p><em>I subscribe to the Chicago Sun-Times, a non-profit newspaper. This week they sent me a sixty-four page special insert, branded with the Chicago Sun-Times logo, full of LLM nonsense. Yesterday I wrote the following letter to the Chicago Sun-Times. That evening, they published <a href=\"https://chicago.suntimes.com/news/2025/05/20/syndicated-content-sunday-print-sun-times-ai-misinformation\">this followup</a>.</em></p>\n<p>Dear Mr. Buscaglia and the Chicago Sun-Times,</p>\n<p>The May 18th “Heat Index” summer insert feels remarkably like “AI slop”: text generated by a large language model. The unusually enthusiastic use of adjectives coats the entire insert with an oily sheen, but I’m also concerned that there may be wholesale fabrications in the text.</p>\n<p>For instance, “Summer soundtracks” cites Dr. Daniel Levitin, in an interview with Scientific American, as saying “Music activates the brain’s reward centers in ways similar to other pleasurable experiences. When paired with meaningful activities, these songs become powerful memory anchors that can transport us back to those moments for decades”. These phrases, and substrings thereof, return zero matches on Kagi or Google. Scientific American’s archives include a <a href=\"https://www.scientificamerican.com/article/exploring-the-musical-bra/\">January 22, 2001 article</a> with several quotes from Levitin, but nothing like the purported quotes.</p>\n<p>The “Nightfall adventures” article cites Dr. Howard Whiteman, professor at Murray State University. “Understanding nocturnal adaptations helps us appreciate the complexity of ecosystems. Even common backyard species display fascinating behaviors that most people never witness.” I checked with Dr. 
Whiteman, who confirmed that he had never said these words:</p>\n<blockquote>\n<p>I have never spoken to the Chicago Sun-Times or any other Chicago paper, and I can’t recall ever saying any of those words to anyone, ever.</p>\n<p>I write for some regional papers, some of which is online, and I could see an AI putting all of my writing together somehow into those words, but I’ve never actually written about nocturnal behavior per se.</p>\n</blockquote>\n<p>The “Summer food trends” article cites an interview in Bon Appetit, claiming Padma Lakshmi said, “What distinguishes this summer’s global food exploration is how these international flavors are being adapted to local, seasonal ingredients.” Lakshmi <em>is</em> published in <a href=\"https://www.bonappetit.com/story/padma-lakshmi\">an interview with BA</a>, but I can’t find any use of the uncanny marketer-speak “this summer’s global food exploration”.</p>\n<p>The same article also cites the National Ice Cream Retailers Association as forecasting “unexpected savory notes” and “hyperlocal ingredient sourcing” as the “fastest-growing segments in premium frozen desserts”. I find it hard to believe these are segments at all—nor do these phrases appear anywhere on the <a href=\"https://icecreamassociation.org/general/custom.asp?page=Aboutus\">NICRA web site</a>.</p>\n<p>The “Summer reading list for 2025” recommends books like “The Rainmakers”, by Percival Everett—a real author, but as far as I can tell, not a real text. Immediately thereafter it offers “Salt and Honey”, by “Delia Owens”—again, a real writer, not a real book. I started reaching out to some of the experts cited in the insert to ask whether their quotes were real, then realized <a href=\"https://www.theverge.com/ai-artificial-intelligence/670510/chicago-sun-times-ai-generated-reading-list\">the Verge beat me to it</a>.</p>\n<p>These examples are just from a cursory skim; the insert fails my LLM “sniff test” on essentially every page. How did this happen? 
How do you print and distribute a full-color, sixty-four page special edition without anyone reading it first? Many of the articles have no byline, but those that do are purportedly by Marco Buscaglia. Was he responsible? Or is he too, like so many of the people cited in this nonsense publication, a victim of confabulatory applied statistics? This would be a great project for the Sun-Times newsroom, assuming OpenAI hasn’t eaten them yet.</p>\n<p>Doubtfully,</p>\n<p>—Kyle Kingsbury</p>",
      "image": null,
      "media": [],
      "authors": [
        {
          "name": "Aphyr",
          "email": null,
          "url": "https://aphyr.com/"
        }
      ],
      "categories": []
    },
    {
      "id": "https://aphyr.com/posts/381-when-flat-rate-movers-wont-answer-your-calls",
      "title": "When Flat Rate Movers Won't Answer Your Calls",
      "description": null,
      "url": "https://aphyr.com/posts/381-when-flat-rate-movers-wont-answer-your-calls",
      "published": "2025-05-03T14:23:16.000Z",
      "updated": "2025-05-03T14:23:16.000Z",
      "content": "<p>Back in 2023 I went through an inter-state move with <a href=\"https://www.flatrate.com/moving/\">Flat Rate Movers,\nLTD.</a>, a.k.a. Flat Rate Moving, US DOT 488466, MC 254356. They bungled the job, damaged my stuff\nand my house, and then refused to talk to me or their own insurance company. I\nplaced dozens of calls, wrote letters, emails, nothing worked. I finally got\nsome money for the damage by working with their insurer directly. I know a lot\nof folks struggle with moving companies, and figured I’d write up a bit of this\nstory in case it helps. You can <a href=\"#finding-their-insurance\">skip ahead</a> if you\njust want to know about the FMCSA insurer database.</p>\n<p>In November 2022 I scheduled the move with Flat Rate. We took a detailed video\ninventory of everything to be moved. I paid over $14,000 for an “all inclusive\nmove”, which, per the move plan, included full packing services, blankets for furniture, boxes for electronics, lamps, and bicycles, a custom crate for a tabletop, and “extended valuation coverage”—their insurance policy.</p>\n<p>A few days before the move in March 2023, Flat Rate called to tell me a\nfive-person team would arrive. When the crew arrived, it wasn’t Flat Rate. It\nwas a team of two from <a href=\"https://www.esquiremoving.com/\">Esquire Moving Inc</a> (US DOT 2820253, MC 940499). They had\nno idea they were supposed to provide packing services. The crate I paid for was nowhere to be found. They didn’t have enough boxes or blankets. I had to have everything out that day, so\nthere wasn’t any choice—I worked alongside the team for thirteen hours to\ncomplete the packing and move-out.</p>\n<p>I let the team handle furniture protection and loading the truck, which was a\nmistake. As they unloaded a few days later, I realized much of the furniture had\nbeen completely unprotected. My bicycle wasn’t boxed. Lamps were tossed in\nloose. 
There was extensive damage to furniture, art, books, and my home: I\nwatched the crew rip a doorframe off the wall. The crew provided me with a\nbackdated bill of lading only on delivery.</p>\n<p>I called Flat Rate a couple days later to discuss my concerns. Their\nrepresentative apologized and acknowledged that a crew of two was insufficient. I\nsent Flat Rate a detailed email explaining their failure to provide promised\nservices, accompanied by photographs of the move process and the resulting\ndamage. I asked for a partial refund, and for Flat Rate to pay for the damage\nthey caused. I received only an out-of-office auto-response.</p>\n<p>On March 21st, I received a generic form letter from Flat Rate’s claims\ndepartment. I filled out the claim forms and submitted them on March 25th, and\nrepeated that I needed more than an insurance claim: Flat Rate obviously failed\nto provide the services I’d paid for.</p>\n<p>Then things got weird. I placed call after call to the claims department. They\ndidn’t answer. I left voicemail after voicemail. On April 28th I reached an\nactual human, but their computers were apparently down, and no supervisor was\nin the building—the representative couldn’t do anything but take down a note\nto call me back. They never did. I left another voicemail on May 5th. Again on\nMay 23rd. Again on August 4th. August 8th. I began to realize that Flat Rate’s\nstrategy was simply to never answer the phone.</p>\n<p>On August 14th, a new hire in the claims department called me: she was taking over my case. She dispatched a local furniture company\nto inspect the damages and help complete the claim. The inspectors agreed: the\nmovers had done extensive damage. I provided them with documentation and photos\nas well.</p>\n<p>On September 14 I called the furniture company and confirmed that yes, they had\nsubmitted their report to Flat Rate. 
However, the furniture company had a\nconfidentiality agreement with Flat Rate, and could not release their report to\nme. I called their contact at Flat Rate, who didn’t answer, and left a\nvoicemail. I called the claims representative’s direct line: her number was no\nlonger in service. On September 19th I got a call from the claims\nrepresentative, who promised she’d have a decision by the end of the day. Flat Rate never talked to me again.</p>\n<p>I called again and again, but got voicemail every time. I tried a letter from\nmy lawyers, an arbitration request with the <a href=\"https://trucking.org/arbitration-loss-and-damages\">American Trucking\nAssociation</a>, complaints with the <a href=\"https://www.usa.gov/agencies/federal-motor-carrier-safety-administration\">Federal Motor Carrier Safety\nAdministration</a>,\nthe <a href=\"https://www.dot.ny.gov/index\">New York Department of Transportation</a>’s\nMotor Carrier complaint department, and the <a href=\"https://www.dfs.ny.gov/\">New York State Department of\nFinancial Services</a>; none of this went anywhere.</p>\n<h2><a href=\"#finding-their-insurance\" id=\"finding-their-insurance\">Finding Their Insurance</a></h2>\n<p>Here’s the thing: movers offer coverage, they have claims departments, and they do many insurance-like things in house. But for complicated reasons, they are not actually insurance companies, and are not regulated as such. Their coverage is backstopped by separate, real insurance companies. I eventually discovered I could bypass Flat Rate and talk to their backing insurance company directly. The Federal Motor Carrier Safety\nAdministration maintains <a href=\"https://li-public.fmcsa.dot.gov/LIVIEW/pkg_carrquery.prc_getdetail\">a public database of moving companies</a>. Putting in their US DOT number\n(488466) yields a page with some basic info. 
At the bottom are links with their\nactive and pending insurance, and the history of their insurers.</p>\n<p><img class=\"attachment pure-img\" src=\"/data/posts/381/fmcsa.png\" alt=\"A screenshot of the FMCSA page, with links at the bottom\" title=\"A screenshot of the FMCSA page, with links at the bottom\"></p>\n<p><img class=\"attachment pure-img\" src=\"/data/posts/381/insurance-history.png\" alt=\"A screenshot of their insurer history\" title=\"A screenshot of their insurer history\"></p>\n<p>The insurance history told me their cargo insurer at the time of my move was\nHanover Insurance Company, with policy RHX H706231. The database also had\nHanover’s phone number. I was able to call Hanover and ask about the\npolicy—it took a few tries, but I got through to a claims adjuster there who\nwas able to confirm that yes, I could file a claim directly with Hanover. I\nsent over a package of documentation, including the photos and the story so\nfar.</p>\n<p>Hanover Insurance informed me they’d never heard anything about my claim, and\ncalled Flat Rate to check. Over the next few days we discovered that Flat Rate\nwasn’t just refusing to answer my calls—they wouldn’t answer calls from their insurance company either. Hanover wasn’t able to obtain the report from the furniture company, but they did agree, on the basis of the photos, to pay a reasonable amount for the damage to cargo. I had a check a few days later.</p>\n<p>I’m still unhappy with Flat Rate: they charged me a good deal of money for services\nthey failed to deliver, and never paid for damage to the house. I was gearing up for small claims court—but during the tail end of this fiasco I was entering a fresh hell of home renovation, and got distracted. If you’re reading this, Flat Rate, you still owe me.</p>",
      "image": null,
      "media": [],
      "authors": [
        {
          "name": "Aphyr",
          "email": null,
          "url": "https://aphyr.com/"
        }
      ],
      "categories": []
    },
    {
      "id": "https://aphyr.com/posts/380-comments-on-executive-order-14168",
      "title": "Comments on Executive Order 14168",
      "description": null,
      "url": "https://aphyr.com/posts/380-comments-on-executive-order-14168",
      "published": "2025-02-21T23:04:55.000Z",
      "updated": "2025-02-21T23:04:55.000Z",
      "content": "<p><em>Submitted to the Department of State, which is <a href=\"https://www.federalregister.gov/documents/2025/02/18/2025-02696/30-day-notice-of-proposed-information-collection-application-for-a-us-passport-for-eligible\">requesting comments</a> on a proposed change which would align US passport gender markers with <a href=\"https://www.whitehouse.gov/presidential-actions/2025/01/defending-women-from-gender-ideology-extremism-and-restoring-biological-truth-to-the-federal-government/\">executive order 14168</a>.</em></p>\n<p>Executive order 14168 is biologically incoherent and socially cruel. All passport applicants should be allowed to select whatever gender markers they feel best fit, including M, F, or X.</p>\n<p>In humans, neither sex nor gender is binary at any level. There are several possible arrangements of sex chromosomes: X, XX, XY, XXY, XYY, XXX, tetrasomies, pentasomies, etc. A single person can contain a mosaic of cells with different genetics: some XX, some XYY. Chromosomes may not align with genitalia: people with XY chromosomes may have a vulva and internal testes. People with XY chromosomes and a small penis may be surgically and socially reassigned female at birth—and never told what happened. None of these biological dimensions necessarily align with one’s internal concept of gender, or one’s social presentation.</p>\n<p>The executive order has no idea how biology works. It defines “female” as “a person belonging, at conception, to the sex that produces the large reproductive cell”. Zygotes do not produce reproductive cells at all: under this order none  of us have a sex. Oogenesis doesn’t start until over a month into embryo development. 
Even if people were karyotyping their zygotes immediately after conception so they could tell what “legal” sex they were going to be, they could be wrong: which gametes we produce depends on the formation of the genital ridge.</p>\n<p>All this is to say that if people fill out these forms using this definition of sex, they’re guessing at a question which is both impossible to answer and socially irrelevant. You might be one of the roughly two percent of humans born with an uncommon sexual development and not even know it. Moreover, the proposed change fundamentally asks the wrong question: gender markers on passports are used by border control agents, and are expected to align with how those agents read the passport holder’s gender. A mismatch will create needless intimidation and hardship for travelers.</p>\n<p>Of course most of us will not have our identities challenged under this order. That animus is reserved for trans people, for gender-non-conforming people, for anyone whose genetics, body, dress, voice, or mannerisms don’t quite fit the mold. Those are the people who will suffer under this order. That cruelty should be resisted.</p>",
      "image": null,
      "media": [],
      "authors": [
        {
          "name": "Aphyr",
          "email": null,
          "url": "https://aphyr.com/"
        }
      ],
      "categories": []
    },
    {
      "id": "https://aphyr.com/posts/379-geoblocking-the-uk-with-debian-nginx",
      "title": "Geoblocking the UK with Debian & Nginx",
      "description": null,
      "url": "https://aphyr.com/posts/379-geoblocking-the-uk-with-debian-nginx",
      "published": "2025-02-20T19:45:55.000Z",
      "updated": "2025-02-20T19:45:55.000Z",
      "content": "<p>A few quick notes for other folks who are <a href=\"https://geoblockthe.uk\">geoblocking the UK</a>. I just set up a basic geoblock with Nginx on Debian. This is all stuff you can piece together, but the Maxmind and Nginx docs are a little vague about the details, so I figure it’s worth an actual writeup. My Nginx expertise is ~15 years out of date, so this might not be The Best Way to do things. YMMV.</p>\n<p>First, register for a free <a href=\"https://www.maxmind.com/en/geolite2/signup\">MaxMind account</a>; you’ll need this to subscribe to their GeoIP database. Then set up a daemon to maintain a copy of the lookup file locally, and Nginx’s GeoIP2 module:</p>\n<pre><code><span></span>apt install geoipupdate libnginx-mod-http-geoip2\n</code></pre>\n<p>Create a license key on the MaxMind site, and download a copy of the config file you’ll need. Drop that in <code>/etc/GeoIP.conf</code>. It’ll look like:</p>\n<pre><code>AccountID XXXX\nLicenseKey XXXX\nEditionIDs GeoLite2-Country\n</code></pre>\n<p>The package sets up a cron job automatically, but we should grab an initial copy of the file. This takes a couple minutes, and writes out <code>/var/lib/GeoIP/GeoLite2-Country-mmdb</code>:</p>\n<pre><code><span></span>geoipupdate\n</code></pre>\n<p>The GeoIP2 module should already be loaded via <code>/etc/nginx/modules-enabled/50-mod-http-geoip2.conf</code>. Add a new config snippet like <code>/etc/nginx/conf.d/geoblock.conf</code>. The first part tells Nginx where to find the GeoIP database file, and then extracts the two-letter ISO country code for each request as a variable. 
The <code>map</code> part sets up an <code>$osa_geoblocked</code> variable, which is set to <code>1</code> for GB, otherwise <code>0</code>.</p>\n<pre><code>geoip2 /var/lib/GeoIP/GeoLite2-Country.mmdb {\n  $geoip2_data_country_iso_code country iso_code;\n}\n\nmap $geoip2_data_country_iso_code $osa_geoblocked {\n  GB      1;\n  default 0;\n}\n</code></pre>\n<p>Write an HTML file somewhere like <code>/var/www/custom_errors/osa.html</code>, explaining the block. Then serve that page for HTTP 451 status codes: in <code>/etc/nginx/sites-enabled/whatever</code>, add:</p>\n<pre><code>server {\n  ...\n  # UK OSA error page\n  error_page 451 /osa.html;\n  location /osa.html {\n    internal;\n    root /var/www/custom_errors/;\n  }\n\n  # When geoblocked, return 451\n  location / {\n    if ($osa_geoblocked = 1) {\n      return 451;\n    }\n  }\n}\n</code></pre>\n<p>Test your config with <code>nginx -t</code>, and then <code>service nginx reload</code>. You can test how things look from the UK using a VPN service, or something like <a href=\"https://www.locabrowser.com/\">locabrowser</a>.</p>\n<p>This is, to be clear, a bad solution. MaxMind’s free database is not particularly precise, and in general IP lookup tables are chasing a moving target. I know for a fact that there are people in non-UK countries (like Ireland!) who have been inadvertently blocked by these lookup tables. Making those people use Tor or a VPN <em>sucks</em>, but I don’t know what else to do in the current regulatory environment.</p>",
      "image": null,
      "media": [],
      "authors": [
        {
          "name": "Aphyr",
          "email": null,
          "url": "https://aphyr.com/"
        }
      ],
      "categories": []
    },
    {
      "id": "https://aphyr.com/posts/378-seconds-since-the-epoch",
      "title": "Seconds Since the Epoch",
      "description": null,
      "url": "https://aphyr.com/posts/378-seconds-since-the-epoch",
      "published": "2024-12-25T18:46:21.000Z",
      "updated": "2024-12-25T18:46:21.000Z",
      "content": "<p>This is not at all news, but it comes up often enough that I think there should be a concise explanation of the problem. People, myself included, like to say that POSIX time, also known as Unix time, is the <a href=\"https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/Date\">number</a> <a href=\"https://www.gnu.org/software/coreutils/manual/html_node/Seconds-since-the-Epoch.html\">of</a> <a href=\"https://man7.org/linux/man-pages/man2/time.2.html\">seconds</a> <a href=\"https://pkg.go.dev/time#Unix\">since</a> <a href=\"https://dev.mysql.com/doc/refman/8.4/en/datetime.html\">the</a> <a href=\"https://ruby-doc.org/core-3.0.0/Time.html\">Unix</a> <a href=\"https://docs.datastax.com/en/cql-oss/3.x/cql/cql_reference/timestamp_type_r.html\">epoch</a>, which was 1970-01-01 at 00:00:00.</p>\n<p>This is not true. Or rather, it isn’t true in the sense most people think. For example, it is presently 2024-12-25 at 18:51:26 UTC. The POSIX time is 1735152686. It has been 1735152713 seconds since the POSIX epoch. The POSIX time number is twenty-seven seconds lower.</p>\n<p>This is because POSIX time is derived <a href=\"https://nvlpubs.nist.gov/nistpubs/Legacy/FIPS/fipspub151-1.pdf\">in IEEE 1003.1</a> from <a href=\"https://en.wikipedia.org/wiki/Coordinated_Universal_Time\">Coordinated Universal Time</a>. The standard assumes that every day is exactly 86,400 seconds long. Specifically:</p>\n<blockquote>\n<p>The <em>time()</em> function returns the value of time in <b>seconds since the Epoch</b>.</p>\n</blockquote>\n<p>Which is defined as:</p>\n<blockquote>\n<p><b>seconds since the Epoch.</b> A value to be interpreted as the number of seconds between a specified time and the Epoch. 
A Coordinated Universal Time name (specified in terms of seconds (<em>tm_sec</em>), minutes (<em>tm_min</em>), hours (<em>tm_hour</em>), days since January 1 of the year (<em>tm_yday</em>), and calendar year minus 1900\n(<em>tm_year</em>)) is related to a time represented as <em>seconds since the Epoch</em> according to the expression below.</p>\n<p>If year < 1970 or the value is negative, the relationship is undefined. If year ≥ 1970 and the value is non-negative, the value is related to a Coordinated Universal Time name according to the expression:</p>\n<p><em>tm_sec</em> + <em>tm_min</em> * 60 + <em>tm_hour</em> * 3600 + <em>tm_yday</em> * 86400 +\n(<em>tm_year</em>-70) * 31536000 + ((<em>tm_year</em> - 69) / 4) * 86400</p>\n</blockquote>\n<p>The length of the day is not 86,400 seconds, and in fact changes over time. To keep UTC days from drifting too far from solar days, astronomers periodically declare a <a href=\"https://en.wikipedia.org/wiki/Leap_second\">leap second</a> in UTC. Consequently, every few years POSIX time jumps backwards, <a href=\"https://marc.info/?l=linux-kernel&m=134113577921904\">wreaking</a> <a href=\"https://www.zdnet.com/article/qantas-suffers-delays-due-to-linux-leap-second-bug/\">utter</a> <a href=\"https://blog.cloudflare.com/how-and-why-the-leap-second-affected-cloudflare-dns/\">havoc</a>. Someday it might jump forward.</p>\n<h2><a href=\"#archaeology\" id=\"archaeology\">Archaeology</a></h2>\n<p>Appendix B of IEEE 1003 has a fascinating discussion of leap seconds:</p>\n<blockquote>\n<p>The concept of leap seconds is added for precision; at the time this standard was published, 14 leap seconds had been added since January 1, 1970. These 14 seconds are ignored to provide an easy and compatible method of computing time differences.</p>\n</blockquote>\n<p>I, too, love to ignore things to make my life easy. The standard authors knew “seconds since the epoch” were not, in fact, seconds since the epoch. 
And they admit as much:</p>\n<blockquote>\n<p>Most systems’ notion of “time” is that of a continuously-increasing value, so this value should increase even during leap seconds. However, not only do most systems not keep track of leap seconds, but most systems are probably not synchronized to any standard time reference. Therefore, it is inappropriate to require that a time represented as seconds since the Epoch precisely represent the number of seconds between the referenced time and the Epoch.</p>\n<p>It is sufficient to require that applications be allowed to treat this time as if it represented the number of seconds between the referenced time and the Epoch. It is the responsibility of the vendor of the system, and the administrator of the system, to ensure that this value represents the number of seconds between the referenced time and the Epoch as closely as necessary for the application being run on that system….</p>\n</blockquote>\n<p>I imagine there was some debate over this point. The appendix punts, saying that vendors and administrators must make time align “as closely as necessary”, and that “this value should increase even during leap seconds”. The latter is achievable, but the former is arguably impossible: the standard requires POSIX clocks be twenty-seven seconds off.</p>\n<blockquote>\n<p>Consistent interpretation of seconds since the Epoch can be critical to certain types of distributed applications that rely on such timestamps to synchronize events. The accrual of leap seconds in a time standard is not predictable. The number of leap seconds since the Epoch will likely increase. The standard is\nmore concerned about the synchronization of time between applications of astronomically short duration and the Working Group expects these concerns to become more critical in the future.</p>\n</blockquote>\n<p>In a sense, the opposite happened. 
Time synchronization is <em>always</em> off, so systems generally function (however incorrectly) when times drift a bit. But leap seconds are rare, and the linearity evoked by the phrase “seconds since the epoch” is so deeply baked in to our intuition, that software can accrue serious, unnoticed bugs. Until a few years later, one of those tiny little leap seconds takes down a big chunk of the internet.</p>\n<h2><a href=\"#what-to-do-instead\" id=\"what-to-do-instead\">What To Do Instead</a></h2>\n<p>If you just need to compute the duration between two events on one computer, use <a href=\"https://docs.redhat.com/en/documentation/red_hat_enterprise_linux_for_real_time/7/html/reference_guide/sect-posix_clocks#sect-POSIX_clocks\"><code>CLOCK_MONOTONIC</code></a>, or better yet, <code>CLOCK_BOOTTIME</code>. If you don’t need to exchange timestamps with other systems that assume POSIX time, use <a href=\"https://www.ipses.com/eng/in-depth-analysis/standard-of-time-definition/\">TAI, GPS, or maybe LORAN</a>. If you do need rough alignment with other POSIX-timestamp systems, <a href=\"https://developers.google.com/time/smear\">smear leap seconds</a> over a longer window of time. Libraries like <a href=\"https://github.com/qntm/t-a-i\">qntm’s t-a-i</a> can convert back and forth between POSIX and TAI.</p>\n<p>There’s an ongoing effort to <a href=\"https://www.timeanddate.com/news/astronomy/end-of-leap-seconds-2022\">end leap seconds</a>, hopefully <a href=\"https://www.bipm.org/documents/20126/64811223/Resolutions-2022.pdf/281f3160-fc56-3e63-dbf7-77b76500990f\">by 2035</a>. It’ll require additional work to build conversion tables into everything that relies on the “86,400 seconds per day” assumption, but it should also make it much simpler to ask questions like “how many seconds between these two times”. At least for times after 2035!</p>",
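The expression quoted from the standard above can be transcribed directly. A minimal sketch in Python, assuming C-style truncating integer division (this is the 1990-era formula; later POSIX editions add 100- and 400-year corrections, but none of them count leap seconds):

```python
def posix_seconds(tm_sec, tm_min, tm_hour, tm_yday, tm_year):
    """Seconds since the Epoch per the POSIX expression quoted above.
    tm_yday is days since January 1 (0-based); tm_year is year - 1900.
    Leap seconds are deliberately ignored -- which is the whole problem."""
    return (tm_sec + tm_min * 60 + tm_hour * 3600 + tm_yday * 86400
            + (tm_year - 70) * 31536000 + ((tm_year - 69) // 4) * 86400)

# 1970-01-01T00:00:00Z is the Epoch itself:
print(posix_seconds(0, 0, 0, 0, 70))   # 0
# 2001-01-01T00:00:00Z:
print(posix_seconds(0, 0, 0, 0, 101))  # 978307200
```

Running this against any real UTC timestamp after June 1972 gives a value that differs from the true count of elapsed SI seconds by exactly the number of accumulated leap seconds.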
      "image": null,
      "media": [],
      "authors": [
        {
          "name": "Aphyr",
          "email": null,
          "url": "https://aphyr.com/"
        }
      ],
      "categories": []
    },
    {
      "id": "https://aphyr.com/posts/371-threads-wont-take-you-south-of-market",
      "title": "Threads Won't Take You South of Market",
      "description": null,
      "url": "https://aphyr.com/posts/371-threads-wont-take-you-south-of-market",
      "published": "2024-12-01T15:01:36.000Z",
      "updated": "2024-12-01T15:01:36.000Z",
      "content": "<p>In June 2023, when <a href=\"https://threads.net\">Threads</a> announced their <a href=\"https://techcrunch.com/2023/07/05/adam-mosseri-says-metas-threads-app-wont-have-activitypub-support-at-launch/\">plans to federate</a> with other <a href=\"https://en.wikipedia.org/wiki/Fediverse\">Fediverse instances</a>, there was a good deal of <a href=\"https://fedipact.online/\">debate</a> around whether smaller instances should allow federation or block it pre-emptively. As one of the admins of <a href=\"https://woof.group\">woof.group</a>, I wrote about some of the <a href=\"https://blog.woof.group/announcements/considering-large-instance-federation\">potential risks and rewards</a> of federating with Threads. We decided to <a href=\"https://blog.woof.group/announcements/deferring-threads-federation\">wait and see</a>.</p>\n<p>In my queer and leather circles, Facebook and Instagram have been generally understood as hostile environments for over a decade. In 2014, their <a href=\"https://www.eff.org/deeplinks/2014/09/facebooks-real-name-policy-can-cause-real-world-harm-lgbtq-community\">“Real Name” policy</a> made life particularly difficult for trans people, drag queens, sex workers, and people who, for various reasons, needed to keep their real name disconnected from their queer life. My friends have been repeatedly suspended from both platforms for showing too much skin, or using the peach emoji. Meta’s moderation has been aggressive, opaque, and wildly inconsistent: sometimes full nudity is fine; other times a kiss or swimsuit is beyond the line. In some circles, maintaining a series of backup accounts in advance of one’s ban became de rigueur.</p>\n<p>I’d hoped that federation between Threads and the broader Fediverse might allow a <a href=\"https://blog.woof.group/mods/the-shape-of-social-space\">more nuanced spectrum</a> of moderation norms. 
Threads might opt for a more conservative environment locally, but through federation, allow their users to interact with friends on instances with more liberal norms. Conversely, most of my real-life friends are still on Meta services—I’d love to see their posts and chat with them again. Threads could communicate with Gay Fedi (using the term in the broadest sense), and de-rank or hide content they don’t like on a per-post or per-account basis.</p>\n<p>This world seems technically feasible. Meta reports <a href=\"https://techcrunch.com/2024/11/03/threads-now-has-275m-monthly-active-users/\">275 million Monthly Active Users (MAUs)</a>, and over <a href=\"https://www.statista.com/statistics/1092227/facebook-product-dau/\">three billion</a> across other Meta services. Fediverse has something like <a href=\"https://fedidb.org/\">one million MAUs across various instances</a>. This is not a large jump in processing or storage; nor would it seem to require a large increase in moderation staff. Threads has already committed to doing the requisite engineering, user experience, and legal work to allow federation across a broad range of instances. Meta is swimming in cash.</p>\n<p>All this seems a moot point. A year and a half later, Threads <a href=\"https://www.theverge.com/24107998/threads-fediverse-mastodon-how-to\">is barely half federated</a>. It publishes Threads posts to the world, but only if you dig into the settings and check the “Fediverse Sharing” box. Threads users can see replies to their posts, but can’t talk back. Threads users can’t mention others, see mentions from other people, or follow anyone outside Threads. This may work for syndication, but is essentially unusable for conversation.</p>\n<p>Despite the fact that Threads users can’t follow or see mentions from people on other instances, Threads has already <a href=\"https://www.threads.net/moderated_servers\">opted to block</a> a slew of instances where gay & leather people congregate.
Threads blocks <a href=\"https://hypno.social\">hypno.social</a>, <a href=\"rubber.social\">rubber.social</a>, <a href=\"https://4bear.com\">4bear.com</a>, <a href=\"https://nsfw.lgbt\">nsfw.lgbt</a>, <a href=\"https://kinkyelephant.com\">kinkyelephant.com</a>, <a href=\"https://kinktroet.social\">kinktroet.social</a>, <a href=\"https://barkclub.xyz\">barkclub.xyz</a>, <a href=\"https://mastobate.social\">mastobate.social</a>, and <a href=\"https://kinky.business\">kinky.business</a>. They also block the (now-defunct) instances <a href=\"https://bear.community\">bear.community</a>, <a href=\"https://gaybdsm.group\">gaybdsm.group</a>, and <a href=\"https://gearheads.social\">gearheads.social</a>. They block more general queer-friendly instances like <a href=\"https://bark.lgbt\">bark.lgbt</a>, <a href=\"https://super-gay.co\">super-gay.co</a>, <a href=\"https://gay.camera\">gay.camera</a>, and <a href=\"https://gaygeek.social\">gaygeek.social</a>. They block sex-positive instances like <a href=\"https://nsfwphotography.social\">nsfwphotography.social</a>, <a href=\"https://nsfw.social\">nsfw.social</a>, and <a href=\"https://net4sw.com\">net4sw.com</a>. All these instances are blocked for having “violated our Community Standards or Terms of Use”. Others like <a href=\"https://fisting.social\">fisting.social</a>, <a href=\"https://mastodon.hypnoguys.com\">mastodon.hypnoguys.com</a>, <a href=\"https://abdl.link\">abdl.link</a>, <a href=\"https://qaf.men\">qaf.men</a>, and <a href=\"https://social.rubber.family\">social.rubber.family</a>, are blocked for having “no publicly accessible feed”. 
I don’t know what this means: hypnoguys.social, for instance, has the usual Mastodon <a href=\"https://mastodon.hypnoguys.com/public/local\">publicly accessible local feed</a>.</p>\n<p>It’s not like these instances are hotbeds of spam, hate speech, or harassment: woof.group federates heavily with most of the servers I mentioned above, and we rarely have problems with their moderation. Most have reasonable and enforced media policies requiring sensitive-media flags for genitals, heavy play, and so on. Those policies are generally speaking looser than Threads (woof.group, for instance, allows butts!) but there are plenty of accounts and posts on these instances which would be anodyne under Threads’ rules.</p>\n<p>I am shocked that woof.group is <em>not</em> on Threads’ blocklist yet. We have similar users who post similar things. Our content policies are broadly similar—several of the instances Threads blocks actually adopted woof.group’s specific policy language. I doubt it’s our size: Threads blocks several instances with fewer than ten MAUs, and woof.group has over seven hundred.</p>\n<p>I’ve been out of the valley for nearly a decade, and I don’t have insight into Meta’s policies or decision-making. I’m sure Threads has their reasons. Whatever they are, Threads, like all of Meta’s services, feels distinctly uncomfortable with sex, and sexual expression is a vibrant aspect of gay culture.</p>\n<p>This is part of why I started woof.group: we deserve spaces moderated with our subculture in mind. But I also hoped that by designing a moderation policy which compromised with normative sensibilities, we might retain connections to a broader set of publics. This particular leather bar need not be an invite-only clubhouse; it can be a part of a walkable neighborhood. For nearly five years we’ve kept that balance, retaining open federation with most all the Fediverse.
I get the sense that Threads intends to wall its users off from our world altogether—to make “bad gays” invisible. If Threads were a taxi service, it wouldn’t take you <a href=\"https://sfleatherdistrict.org/wp-content/uploads/2021/04/Rubin-Valley-of-Kings.pdf\">South of Market</a>.</p>",
      "image": null,
      "media": [],
      "authors": [
        {
          "name": "Aphyr",
          "email": null,
          "url": "https://aphyr.com/"
        }
      ],
      "categories": []
    },
    {
      "id": "https://aphyr.com/posts/370-ecobee-settings-for-heat-pumps-with-resistive-aux-heat",
      "title": "Ecobee Settings for Heat Pumps with Resistive Aux Heat",
      "description": null,
      "url": "https://aphyr.com/posts/370-ecobee-settings-for-heat-pumps-with-resistive-aux-heat",
      "published": "2024-02-29T04:41:38.000Z",
      "updated": "2024-02-29T04:41:38.000Z",
      "content": "<p>I’m in the process of replacing a old radiator system with a centrally-ducted, air-source heat pump system with electric resistive backup heat. I’ve found that the default ecobee algorithm seems to behave surprisingly poorly for this system, and wanted to write up some of the settings that I’ve found yield better behavior.</p>\n<p>A disclaimer. I’m not an HVAC professional. I have two decades in software operations, a background in physics, and far too much experience inferring system dynamics from timeseries graphs. This advice may void your warranty, burn your house down, etc.; everything you do is at your own risk.</p>\n<h2><a href=\"#the-system\" id=\"the-system\">The System</a></h2>\n<p>First, a bit about the system in question. You can skip this section if you know about heat pumps, short cycling, staging, etc.</p>\n<p>There are two main subsystems: a heat pump and an air handler. The heat pump sits outside: it has a fan which moves outside air over a heat exchanger, and a compressor, which compresses a working fluid. The working fluid is connected in a loop to the air handler, where it runs through another heat exchanger to heat or cool the inside air. The air handler also has a blower fan which circulates air through the whole house. If the heat pump can’t keep up with demand, the air handler also has a pair of resistive electric heating coils, called <em>aux heat</em>, which can supplement or take over from the heat pumps.</p>\n<p>A few important things to know about heat pumps. First, electric resistive heaters have a <em>Coefficient of Performance</em> (CoP) of essentially 1: they take 1 joule of electricity and turn it into 1 joule of heat in the air. My heat pumps have a typical heating CoP of about 2-4, depending on temperature and load. They take 1 joule of electricity and suck 2 to 4 joules of heat from the outside air into the inside. 
This means they cost 2-4 times less (in electric opex, at least) than a standard resistive electric heating system.</p>\n<p>Second, heat pumps, like A/C systems, shouldn’t start and stop too frequently. Starting up causes large transient electrical and mechanical stresses. Ideally they should run at a low speed for several hours, rather than running at full blast, shutting off, then turning on again ten minutes later. This is called “short cycling”.</p>\n<p>Third, the heat pump’s fan, heat pump’s compressor, and the air handler’s fan are all variable-speed: they can run very slow (quiet, efficient), very fast (loud, more powerful), or at any speed in between. This helps reduce short-cycling, as well as improving efficiency and reducing noise. However, directly setting compressor and fan speed requires a special “communicating” thermostat made by the same manufacturer, which speaks a proprietary wire protocol. My manufacturer’s communicating thermostats are very expensive and have a reputation for buggy hardware and software, so I opted to get an <a href=\"https://www.ecobee.com/en-us/smart-thermostats/smart-wifi-thermostat/\">ecobee 3 lite</a>. Like essentially every other thermostat on the planet, the ecobee uses ~8 wires with simple binary signals, like “please give me heat” and “please turn on the fan”. It can’t ask for a specific <em>amount</em> of heat.</p>\n<p>However, all is not lost. The standard thermostat protocol has a notion of a “two-stage” system—if the Y1 wire is hot, it’s asking for “some heat”, and if Y2 is also hot, it’s asking for “more heat”. My variable-speed heat pump emulates a two-stage system using a hysteresis mechanism. In stage 1, the heat pump offers some nominal low degree of heat. When the thermostat calls for stage 2, it kicks up the air handler blower a notch, and after 20 minutes, it slowly ramps up the heat pump compressor as well. I assume there’s a ramp-down for going back to stage 1. 
They say this provides “true variable-capacity operation”. You can imagine that the most efficient steady state is where the thermostat toggles rapidly between Y1 and Y2, causing the system to hang out at exactly the right variable speeds for current conditions—but I assume ecobee has some kind of frequency limiter to avoid damaging systems that actually have two separate stages with distinct startup/shutdown costs.</p>\n<p>The air handler’s aux heat is also staged: if the W1 wire is hot, I think (based on staring at the wiring diagram and air handler itself) it just energizes one of two coils. If W2 is also hot, it energizes both. I think this is good: we want to use as much of the heat pump heat as possible, and if we can get away with juuuust a little aux heat, instead of going full blast, that’ll save energy.</p>\n<p>In short: aux heat is 2-4x more expensive than heat pump heat; we want to use as little aux as possible. Short-cycling is bad: we want long cycle times. For maximum efficiency, we want both the heat pump and aux heat to be able to toggle between stage 1 and 2 depending on demand.</p>\n<h2><a href=\"#automatic-problems\" id=\"automatic-problems\">Automatic Problems</a></h2>\n<p>I initially left the ecobee at its automatic default settings for a few weeks; it’s supposed to learn the house dynamics and adapt. I noticed several problems. Presumably this behavior depends on weather, building thermal properties, HVAC dynamics, and however ecobee’s tuned their algorithm last week, so YMMV: check your system and see how it looks.</p>\n<p>It’s kind of buried, but ecobee offers a really nice time-series visualization of thermostat behavior on their web site. There’s also a Home Assistant integration that pulls in data from their API. 
It’s a pain in the ass to set up (ecobee, there’s no need for this to be so user-hostile), but it does work.</p>\n<p>Over the next few weeks I stared obsessively at time-series plots from both ecobee and Home Assistant, and mucked around with ecobee’s settings. Most of what I’ll describe below is configurable in the settings menu on the thermostat: look for “settings”, “installation settings”, “thresholds”.</p>\n<h2><a href=\"#reducing-aux-heat\" id=\"reducing-aux-heat\">Reducing Aux Heat</a></h2>\n<p>First, the automatics kicked on aux heat a <em>lot</em>. Even in situations where the heat pump would have been perfectly capable of getting up to temp, ecobee would burn aux heat to reach the target temperature (<em>set point</em>) faster.</p>\n<p>Part of the problem was that ecobee ships (I assume for safety reasons) with ludicrously high cut-off thresholds for heat pumps. Mine had “compressor min outdoor temperature” of something like 35 degrees, so the heat pump wouldn’t run for most of the winter. The actual minimum temperature of my model is -4, cold-climate heat pumps run down to -20. I lowered mine to -5; the manual says there’s a physical thermostat interlock on the heat pump itself, and I trust that more than the ecobee weather feed anyway.</p>\n<p>Second: ecobee seems to prioritize speed over progress: if it’s not getting to the set point fast enough, it’ll burn aux heat to get there sooner. I don’t want this: I’m perfectly happy putting on a jacket. After a bit I worked out that the heat pumps alone can cover the house load down to ~20 degrees or so, and raised “aux heat max outdoor temperature” to 25. If it’s any warmer than that, the system won’t use aux heat.</p>\n<h2><a href=\"#reverse-staging\" id=\"reverse-staging\">Reverse Staging</a></h2>\n<p>A second weird behavior: once the ecobee called for stage 2, either from the heat pump or aux, it would run in stage 2 until it hit the set point, then shut off the system entirely. 
Running aux stage 2 costs more energy. Running the heat pump in stage 2 shortens the cycle time: remember, the goal is a low, long running time.</p>\n<p><img class=\"attachment pure-img\" src=\"/data/posts/370/no-reverse-staging.png\" alt=\"A time-series plot showing that once stage 2 engages, it runs until shutting off, causing frequent cycling\" title=\"A time-series plot showing that once stage 2 engages, it runs until shutting off, causing frequent cycling\"></p>\n<p>The setting I used to fix this is called “reverse staging”. Ecobee’s <a href=\"https://support.ecobee.com/s/articles/Threshold-settings-for-ecobee-thermostats\">documentation</a> says:</p>\n<blockquote>\n<p>Compressor Reverse Staging: Enables the second stage of the compressor near the temperature setpoint.</p>\n</blockquote>\n<p>As far as I can tell this documentation is completely wrong. From watching the graphs, this setting seems to allow the staging state machine to move from stage 2 back to stage 1, rather than forcing it to run in stage 2 until shutting off entirely. It’ll go back up to stage 2 if it needs to, and back down again.</p>\n<p><img class=\"attachment pure-img\" src=\"/data/posts/370/reverse-staging.png\" alt=\"With reverse staging, it'll jump up to stage 2, then drop back down to stage 1.\" title=\"With reverse staging, it'll jump up to stage 2, then drop back down to stage 1.\"></p>\n<h2><a href=\"#manual-staging\" id=\"manual-staging\">Manual Staging</a></h2>\n<p>I couldn’t seem to get ecobee’s automatic staging to drop back to stage 1 heat reliably, or avoid kicking on aux heat when stage 2 heat pump heat would have done fine. I eventually gave up and turned off automatic staging altogether. I went with the delta temperature settings. If the temperature delta between the set point and indoor air is more than 1 degree, it turns on heat pump stage 1. More than two degrees, stage 2. More than four degrees, aux 1. More than five degrees, aux 2. 
The goal here is to use only as much aux heat as absolutely necessary to supplement the heat pump. I also have aux heat configured to run concurrently with the heat pump: there’s a regime where the heat pump provides useful heat, but not quite enough, and my intuition is that <em>some</em> heat pump heat is cheaper than all aux.</p>\n<p>I initially tried the default 0.5 degree delta before engaging the heat pump’s first stage. It turns out that for some temperature regimes this creates rapid cycling: that first-phase heat is enough to heat the house rapidly to the set point, and then there’s nothing to do but shut the system off. The house cools, and the system kicks on again, several times per hour. I raised the delta to 1 degree, which significantly extended the cycle time.</p>\n<h2><a href=\"#large-setback-with-preheating\" id=\"large-setback-with-preheating\">Large Setback with Preheating</a></h2>\n<p>A <em>setback</em> is when you lower your thermostat, e.g. while away from home or sleeping. There’s some folk wisdom that heat pumps should run at a constant temperature all the time, rather than have a large setback. As far as I can tell, this is because a properly-sized heat pump system (unlike a gas furnace) doesn’t deliver a ton of excess heat, so it can’t catch up quickly when asked to return to a higher temperature. To compensate, the system might dip into aux heat, and that’s super expensive.</p>\n<p>I’m in the US Midwest, where winter temperatures are usually around 15-40 F. I drop from 68 to 60 overnight, and the house can generally coast all night without having to run any HVAC at all. In theory the ecobee should be able to figure out the time required to come back to 68 and start the heat pump early in the morning, but in practice I found it would wait too long, and then the large difference between actual and set temp would trigger aux heat. 
To avoid this, I added a custom activity in Ecobee’s web interface (I call mine “preheat”), with a temperature of 64. I have my schedule set up with an hour of preheat in the morning, before going to the normal 68. This means there’s less of a delta-T, and the system can heat up entirely using the heat pump.</p>\n<p><img class=\"attachment pure-img\" src=\"/data/posts/370/setback.png\" alt=\"A time series graph showing temperature falling smoothly overnight as the HVAC is disabled, and then rising during the preheat phase in the morning.\" title=\"A time series graph showing temperature falling smoothly overnight as the HVAC is disabled, and then rising during the preheat phase in the morning.\"></p>",
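The manual delta-temperature staging described above amounts to a simple threshold ladder. A sketch of that logic (the function and its names are illustrative, and the 1/2/4/5-degree thresholds are the ones settled on in the post, not ecobee defaults or ecobee's actual algorithm):

```python
def call_for_heat(setpoint_f, indoor_f):
    """Map the delta between set point and indoor temperature (degrees F)
    to the four thermostat wires described in the post."""
    delta = setpoint_f - indoor_f
    return {
        "Y1": delta > 1,  # heat pump stage 1
        "Y2": delta > 2,  # heat pump stage 2
        "W1": delta > 4,  # aux stage 1 (one resistive coil)
        "W2": delta > 5,  # aux stage 2 (both coils)
    }

# 2.5 degrees below set point: both heat pump stages, no aux.
print(call_for_heat(68, 65.5))
```

Note how aux only engages once the heat pump is already four degrees behind, and how raising the first threshold from 0.5 to 1 degree widens the hysteresis band and lengthens cycles.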
      "image": null,
      "media": [],
      "authors": [
        {
          "name": "Aphyr",
          "email": null,
          "url": "https://aphyr.com/"
        }
      ],
      "categories": []
    },
    {
      "id": "https://aphyr.com/posts/369-classnotfoundexception-java-util-sequencedcollection",
      "title": "ClassNotFoundException: java.util.SequencedCollection",
      "description": null,
      "url": "https://aphyr.com/posts/369-classnotfoundexception-java-util-sequencedcollection",
      "published": "2024-02-20T23:03:42.000Z",
      "updated": "2024-02-20T23:03:42.000Z",
      "content": "<p>Recently I’ve had users of my libraries start reporting mysterious errors due to a missing reference to <code>SequencedCollection</code>, a Java interface added in JDK 21:</p>\n<pre><code>Execution error (ClassNotFoundException) at\njdk.internal.loader.BuiltinClassLoader/loadClass (BuiltinClassLoader.java:641).\njava.util.SequencedCollection\n</code></pre>\n<p>Specifically, projects using <a href=\"https://github.com/jepsen-io/jepsen/issues/585\">Jepsen 0.3.5</a> started throwing this error due to Clojure’s built-in <code>rrb_vector.clj</code>, which is particularly vexing given that the class doesn’t reference <code>SequencedCollection</code> at all.</p>\n<p>It turns out that the Clojure compiler, when run on JDK 21 or later, will automatically insert references to this class when compiling certain expressions–likely because it now appears in the supertypes of other classes. Jepsen had <code>:javac-options [\"-source\" \"11\" \"-target\" \"11\"]</code> in Jepsen’s <code>project.clj</code> already, but it still emitted references to <code>SequencedCollection</code> because the reference is inserted by the Clojure compiler, not <code>javac</code>. Similarly, adding <code>[\"--release\" \"11\"]</code> didn’t work.</p>\n<p>Long story short: as far as I can tell the only workaround is to downgrade to Java 17 (or anything prior to 21) when building Jepsen as a library. That’s not super hard with <code>update-alternatives</code>, but I still imagine I’ll be messing this up until Clojure’s compiler can get a patch.</p>",
      "image": null,
      "media": [],
      "authors": [
        {
          "name": "Aphyr",
          "email": null,
          "url": "https://aphyr.com/"
        }
      ],
      "categories": []
    },
    {
      "id": "https://aphyr.com/posts/368-how-to-replace-your-cpap-in-only-666-days",
      "title": "How to Replace Your CPAP In Only 666 Days",
      "description": null,
      "url": "https://aphyr.com/posts/368-how-to-replace-your-cpap-in-only-666-days",
      "published": "2024-02-04T00:38:53.000Z",
      "updated": "2024-02-04T00:38:53.000Z",
      "content": "<p><em>This story is not practical advice. For me, it’s closing the book on an almost two-year saga. For you, I hope it’s an enjoyable bit of bureaucratic schadenfreude. For Anthem, I hope it’s the subject of a series of painful but transformative meetings. This is not an isolated event. I’ve had dozens of struggles with Anthem customer support, and they all go like this.</em></p>\n<p><em>If you’re looking for practical advice: it’s this. Be polite. Document everything. Keep a log. Follow the claims process. Check the laws regarding insurance claims in your state. If you pass the legally-mandated deadline for your claim, call customer service. Do not allow them to waste a year of your life, or force you to resubmit your claim from scratch. Initiate a complaint with your state regulators, and escalate directly to <a href=\"mailto:[email protected]\">Gail Boudreaux’s team</a>–or whoever Anthem’s current CEO is.</em></p>\n<p>To start, experience an equipment failure.</p>\n<p>Use your CPAP daily for six years. Wake up on day zero with it making a terrible sound. Discover that the pump assembly is failing. Inquire with Anthem Ohio, your health insurer, about how to have it repaired. Allow them to refer you to a list of local durable medical equipment providers. Start calling down the list. Discover half the list are companies like hair salons. Eventually reach a company in your metro which services CPAPs. Discover they will not repair broken equipment unless a doctor tells them to.</p>\n<p>Leave a message with your primary care physician. Call the original sleep center that provided your CPAP. Discover they can’t help, since you’re no longer in the same state. Return to your primary, who can’t help either, because he had nothing to do with your prescription. Put the sleep center and your primary in touch, and ask them to talk.</p>\n<p>On day six, call your primary to check in. 
He’s received a copy of your sleep records, and has forwarded them to a local sleep center you haven’t heard of. They, in turn, will talk to Anthem for you.</p>\n<p>On day 34, receive an approval letter labeled “confirmation of medical necessity” from Anthem, directed towards the durable medical equipment company. Call that company and confirm you’re waitlisted for a new CPAP. They are not repairable. Begin using your partner’s old CPAP, which is not the right class of device, but at least it helps.</p>\n<p>Over the next 233 days, call that medical equipment company regularly. Every time, inquire whether there’s been any progress, and hear “we’re still out of stock”. Ask them what the manufacturer backlog might be, how many people are ahead of you in line, how many CPAPs they <em>do</em> receive per month, or whether anyone has ever received an actual device from them. They won’t answer any questions. Realize they are never going to help you.</p>\n<p>On day 267, realize there is no manufacturer delay. The exact machine you need is in stock on CPAP.com. Check to make sure there’s a claims process for getting reimbursed by Anthem. Pay over three thousand dollars for it. When it arrives, enjoy being able to breathe again.</p>\n<p>On day 282, follow CPAP.com’s documentation to file a claim with Anthem online. Include your prescription, receipt, shipping information, and the confirmation of medical necessity Anthem sent you.</p>\n<p>On day 309, open the mail to discover a mysterious letter from Anthem. They’ve received your appeal. You do not recall appealing anything. There is no information about what might have been appealed, but something will happen within 30-60 days. There is nothing about your claim.</p>\n<p>On day 418, emerge from a haze of lead, asbestos, leaks, and a host of other home-related nightmares; remember Anthem still hasn’t said anything about your claim. Discover your claim no longer appears on Anthem’s web site. Call Anthem customer service. 
They have no record of your claim either. Ask about the appeal letter you received. Listen, gobsmacked, as they explain that they decided your claim was in fact an appeal, and transferred it immediately to the appeals department. The appeals department examined the appeal and looked for the claim it was appealing. Finding none, they decided the appeal was moot, and rejected it. At no point did anyone inform you of this. Explain to Anthem’s agent that you filed a claim online, not an appeal. At their instruction, resign yourself to filing the entire claim again, this time using a form via physical mail. Include a detailed letter explaining the above.</p>\n<p>On day 499, retreat from the battle against home entropy to call Anthem again. Experience a sense of growing dread as the customer service agent is completely unable to locate either of your claims. After a prolonged conversation, she finds it using a different tool. There is no record of the claim from day 418. There was a claim submitted on day 282. Because the claim does not appear in her system, there is no claim. Experience the cognitive equivalent of the Poltergeist hallway shot as the agent tells you “Our members are not eligible for charges for claim submission”.</p>\n<p>Hear the sentence “There is a claim”. Hear the sentence “There is no claim”. Write these down in the detailed log you’ve been keeping of this unfurling Kafkaesque debacle. Ask again if there is anyone else who can help. There is no manager you can speak to. There is no tier II support. “I’m the only one you can talk to,” she says. Write that down.</p>\n<p>Call CPAP.com, which has a help line staffed by caring humans. Explain that contrary to their documentation, Anthem now says members cannot file claims for equipment directly. Ask if they are the provider. Discover the provider for the claim is probably your primary care physician, who has no idea this is happening. Leave a message with him anyway. 
Leave a plaintive message with your original sleep center for good measure.</p>\n<p>On day 502, call your sleep center again. They don’t submit claims to insurance, but they confirm that some people <em>do</em> successfully submit claims to Anthem using the process you’ve been trying. They confirm that Anthem is, in fact, hot garbage. Call your primary, send them everything you have, and ask if they can file a claim for you.</p>\n<p>On day 541, receive a letter from Anthem, responding to your inquiry. You weren’t aware you filed one.</p>\n<blockquote>\n<p>Please be informed that we have received your concern. Upon review we have noticed that there is no claim billed for the date of service mentioned in the submitted documents, Please provide us with a valid claim. If not submitted,provide us with a valid claim iamge to process your claim further.</p>\n</blockquote>\n<p>Stare at the letter, typos and all. Contemplate your insignificance in the face of the vast and uncaring universe that is Anthem.</p>\n<p>On day 559, steel your resolve and call Anthem again. Wait as this representative, too, digs for evidence of a claim. Listen with delight as she finds your documents from day 282. Confirm that yes, a claim definitely exists. Have her repeat that so you can write it down. Confirm that the previous agent was lying: members can submit claims. At her instruction, fill out the claim form a third time. Write a detailed letter, this time with a Document Control Number (DCN). Submit the entire package via registered mail. Wait for USPS to confirm delivery eight days later.</p>\n<p>On day 588, having received no response, call Anthem again. Explain yourself. You’re getting good at this. Let the agent find a reference number for an appeal, but not the claim. Incant the magic DCN, which unlocks your original claim.  “I was able to confirm that this was a claim submitted form for a member,” he says. 
He sees your claim form, your receipts, your confirmation of medical necessity. However: “We still don’t have the claim”.</p>\n<p>Wait for him to try system after system. Eventually he confirms what you heard on day 418: the claims department transferred your claims to appeals. “Actually this is not an appeal, but it was denied as an appeal.” Agree as he decides to submit your claim manually again, with the help of his supervisor. Write down the call ref number: he promises you’ll receive an email confirmation, and an Explanation of Benefits in 30-40 business days.</p>\n<p>“I can assure you this is the last time you are going to call us regarding this.”</p>\n<p>While waiting for this process, recall insurance is a regulated industry. Check the Ohio Revised Code. Realize that section 3901.381 establishes deadlines for health insurers to respond to claims. They should have paid or denied each of your claims within 30 days–45 if supporting documentation was required. Leave a message with the Ohio Department of Insurance’s Market Conduct Division. File an insurance complaint with ODI as well.</p>\n<p>Grimly wait as no confirmation email arrives.</p>\n<p>On day 602, open an email from Anthem. They are “able to put the claim in the system and currenty on processed [sic] to be applied”. They’re asking for more time. Realize that Anthem is well past the 30-day deadline under the Ohio Revised Code for all three iterations of your claim.</p>\n<p>On day 607, call Anthem again. The representative explains that the claim will be received and processed as of your benefits. She asks you to allow 30-45 days from today. Quote section 3901.381 to her. She promises to expedite the request; it should be addressed within 72 business hours. Like previous agents, she promises to call you back. Nod, knowing she won’t.</p>\n<p>On day 610, email the Ohio Department of Insurance to explain that Anthem has found entirely new ways to avoid paying their claims on time. 
It’s been 72 hours without a callback; call Anthem again. She says “You submitted a claim and it was received” on day 282. She says the claim was expedited. Ask about the status of that expedited resolution. “Because on your plan we still haven’t received any claims,” she explains. Wonder if you’re having a stroke.</p>\n<p>Explain that it has been 328 days since you submitted your claim, and ask what is going on. She says that since the first page of your mailed claim was a letter, that might have caused it to be processed as an appeal. Remind yourself Anthem told you to enclose that letter. Wait as she attempts to refer you to the subrogation department, until eventually she gives up: the subrogation department doesn’t want to help.</p>\n<p>Call the subrogation department yourself. Allow Anthem’s representative to induce in you a period of brief aphasia. She wants to call a billing provider. Try to explain there is none: you purchased the machine yourself. She wants to refer you to collections. Wonder why on earth Anthem would want money from <em>you</em>. Write down “I literally can’t understand what she thinks is going on” in your log. Someone named Adrian will call you by tomorrow.</p>\n<p>Contemplate alternative maneuvers. Go on a deep Google dive, searching for increasingly obscure phrases gleaned from Anthem’s bureaucracy. Trawl through internal training PDFs for Anthem’s ethics and compliance procedures. Call their compliance hotline: maybe someone cares about the law. It’s a third-party call center for Elevance Health. Fail to realize this is another name for Anthem. Begin drawing a map of Anthem’s corporate structure.</p>\n<p>From a combination of publicly-available internal slide decks, LinkedIn, and obscure HR databases, discover the name, email, and phone number of Anthem’s Chief Compliance Officer. Call her, but get derailed by an internal directory that requires a 10-digit extension. Try the usual tricks with automated phone systems. 
No dice.</p>\n<p>Receive a call from an Anthem agent. Ask her what happened to “72 hours”. She says there’s been no response from the adjustments team. She doesn’t know when a response will come. There’s no one available to talk to. Agree to speak to another representative tomorrow. It doesn’t matter: they’ll never call you.</p>\n<p>Do more digging. Guess the CEO’s email from what you can glean of Anthem’s account naming scheme. Write her an email with a short executive summary and a detailed account of the endlessly-unfolding Boschian hellscape in which her company has entrapped you. A few hours later, receive an acknowledgement from an executive concierge at Elevance (Anthem). It’s polite, formal, and syntactically coherent. She promises to look into things. Smile. Maybe this will work.</p>\n<p>On day 617, receive a call from the executive concierge. 355 days after submission, she’s identified a problem with your claim. CPAP.com provided you with an invoice with a single line item (the CPAP) and two associated billing codes (a CPAP and humidifier). Explain that they are integrated components of a single machine. She understands, but insists you need a receipt with multiple line items for them anyway. Anthem has called CPAP.com, but they can’t discuss an invoice unless you call them. Explain you’ll call them right now.</p>\n<p>Call CPAP.com. Their customer support continues to be excellent. Confirm that it is literally impossible to separate the CPAP and humidifier, or to produce an invoice with two line items for a single item. Nod as they ask what the hell Anthem is doing. Recall that this is the exact same machine Anthem covered for you eight years ago. Start a joint call with the CPAP.com representative and Anthem’s concierge. Explain the situation to her voicemail.</p>\n<p>On day 623, receive a letter from ODI. Anthem has told ODI this was a problem with the billing codes, and ODI does not intervene in billing code issues. 
They have, however, initiated a secretive second investigation. There is no way to contact the second investigator.</p>\n<p>Write a detailed email to the concierge and ODI explaining that it took over three hundred days for Anthem to inform you of this purported billing code issue. Explain again that it is a single device. Emphasize that Anthem has been handling claims for this device for roughly a decade.</p>\n<p>Wait. On day 636, receive a letter from Anthem’s appeals department. They’ve received your request for an appeal. You never filed one. They want your doctor or facility to provide additional information to Carelon Medical Benefits Management. You have never heard of Carelon. There is no explanation of how to reach Carelon, or what information they might require. The letter concludes: “There is currently no authorization on file for the services rendered.” You need to seek authorization from a department called “Utilization Management”.</p>\n<p>Call the executive concierge again. Leave a voicemail asking what on earth is going on.</p>\n<p>On day 637, receive an email: she’s looking into it.</p>\n<p>On day 644, Anthem calls you. It’s a new agent who is immensely polite. Someone you’ve never heard of was asked to work on another project, so she’s taking over your case. She has no updates yet, but promises to keep in touch.</p>\n<p>She does so. On day 653, she informs you Anthem will pay your claim in full. On day 659, she provides a check number. On day 666, the check arrives.</p>\n<p>Deposit the check. Write a thank you email to the ODI and Anthem’s concierge. Write this, too, down in your log.</p>",
      "image": null,
      "media": [],
      "authors": [
        {
          "name": "Aphyr",
          "email": null,
          "url": "https://aphyr.com/"
        }
      ],
      "categories": []
    },
    {
      "id": "https://aphyr.com/posts/367-why-is-jepsen-written-in-clojure",
      "title": "Why is Jepsen Written in Clojure?",
      "description": null,
      "url": "https://aphyr.com/posts/367-why-is-jepsen-written-in-clojure",
      "published": "2023-12-05T14:49:05.000Z",
      "updated": "2023-12-05T14:49:05.000Z",
      "content": "<p>People keep asking why <a href=\"https://jepsen.io\">Jepsen</a> is written in <a href=\"https://clojure.org/\">Clojure</a>, so I figure it’s worth having a referencable answer. I’ve programmed in something like twenty languages. Why choose a Weird Lisp?</p>\n<p>Jepsen is built for testing concurrent systems–mostly databases. Because it tests concurrent systems, the language itself needs good support for concurrency. Clojure’s immutable, persistent data structures make it easier to write correct concurrent programs, and the language and runtime have excellent concurrency support: real threads, promises, futures, atoms, locks, queues, cyclic barriers, all of java.util.concurrent, etc. I also considered languages (like Haskell) with more rigorous control over side effects, but decided that Clojure’s less-dogmatic approach was preferable.</p>\n<p>Because Jepsen tests databases, it needs broad client support. Almost every database has a JVM client, typically written in Java, and Clojure has decent Java interop.</p>\n<p>Because testing is experimental work, I needed a language which was concise, adaptable, and well-suited to prototyping. Clojure is terse, and its syntactic flexibility–in particular, its macro system–work well for that. In particular the threading macros make chained transformations readable, and macros enable re-usable error handling and easy control of resource scopes. The Clojure REPL is really handy for exploring the data a test run produces.</p>\n<p>Tests involve representing, transforming, and inspecting complex, nested data structures. Clojure’s data structures and standard library functions are possibly the best I’ve ever seen. I also print a lot of structures to the console and files: Clojure’s data syntax (EDN) is fantastic for this.</p>\n<p>Because tests involve manipulating a decent, but not huge, chunk of data, I needed a language with “good enough” performance. 
Clojure’s certainly not the fastest language out there, but idiomatic Clojure is usually within an order of magnitude or two of Java, and I can shave off the difference where critical. The JVM has excellent profiling tools, and these work well with Clojure.</p>\n<p>Jepsen’s (gosh) about a decade old now: I wanted a language with a mature core and emphasis on stability. Clojure is remarkably stable, both in terms of JVM target and the language itself. Libraries don’t “rot” anywhere near as quickly as in Scala or Ruby.</p>\n<p>Clojure does have significant drawbacks. It has a small engineering community and no (broadly-accepted, successful) static typing system. Both of these would constrain a large team, but Jepsen’s maintained and used by only 1-3 people at a time. Working with JVM primitives can be frustrating without dropping to Java; I do this on occasion. Some aspects of the polymorphism system are lacking, but these can be worked around with libraries. The error messages are terrible. I have no apologetics for this. ;-)</p>\n<p>I prototyped Jepsen in a few different languages before settling on Clojure. A decade in, I think it was a pretty good tradeoff.</p>",
      "image": null,
      "media": [],
      "authors": [
        {
          "name": "Aphyr",
          "email": null,
          "url": "https://aphyr.com/"
        }
      ],
      "categories": []
    }
  ]
}