This is scrawlspace. I scrawl in this space. Do not expect coherence or permanence here.
There’s all sorts of interest in Geminispace in lower-weight protocols. One of them is JSON Feed:
It’s functionally similar to Atom (although each format’s individual entry isn’t a perfect match for the other format’s), but instead of being in XML which is overly complicated for this purpose (probably), it’s in JSON.
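For flavor, here's roughly what a minimal JSON Feed looks like (hand-typed and abbreviated; the title, URLs, and item are made up for illustration — see the JSON Feed spec for the full item shape):

```json
{
  "version": "https://jsonfeed.org/version/1.1",
  "title": "scrawlspace",
  "items": [
    {
      "id": "2024-04-29-feeds",
      "url": "gemini://gemini.circumlunar.space/users/adiabatic/2024/",
      "date_published": "2024-04-29T00:00:00Z",
      "content_text": "This is scrawlspace. I scrawl in this space."
    }
  ]
}
```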
Now, suppose you consume feeds, and want to support JSON Feed instead of (just) Atom feeds. For all this work, how much can you expect to save?
I don’t use full-text feeds, and I haven’t yet deleted anything out of my feeds, so this is what you can expect:
> eza -l --no-user --no-time --no-permissions --no-git --sort size --reverse --bytes atom.* feed.*
> 80,366 atom.xml
> 78,915 feed.json
> 74,876 atom.minified.xml
> 68,445 feed.minified.json
> 67,492 feed.yaml
I’m kind of surprised the YAML source takes less space than the minified JSON. I guess those extra quote marks and commas and braces add up.
Still, the savings aren’t earth-shattering. Here’s how long it would take to download the whole thing on a 33.6 kilobit/sec modem, which in practice topped out at around 2 KB/s back in the late 90s:
> numbat
> Numbat 1.11.0    https://numbat.dev/
> >>> 67 kilobytes / (2 kilobytes/sec)
>   67 kilobyte / (2 kilobyte / second)
>     = 33.5 s    [Time]
> >>> 80 kilobytes / (2 kilobytes/sec)
>   80 kilobyte / (2 kilobyte / second)
>     = 40 s    [Time]
This is…not what I’d call an interactive speed. It’s something you’d want to run in the background periodically, and switching to a lighter-weight format isn’t materially helping. I’d check to see what these would be under both gzip and brotli compression, but it’s not as if Gemini supports transfer encodings.
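(If you want to run that check yourself on the HTTP side of your life: the standard CompressionStream API, available in Deno and modern browsers, handles gzip — though not brotli, so this is a gzip-only sketch.)

```typescript
// Measure the gzipped size of a string with the standard
// CompressionStream API (Deno / modern browsers). Brotli isn't in
// the standard API, so gzip is as far as this sketch goes.
async function gzippedSize(text: string): Promise<number> {
  const compressed = new Blob([text])
    .stream()
    .pipeThrough(new CompressionStream("gzip"));
  let total = 0;
  for await (const chunk of compressed) {
    total += chunk.length;
  }
  return total;
}
```

Feed the feed file's contents through it and compare against the on-disk byte counts above.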
Of course, “should I minify my Atom feed before publishing it?” and “Should I update my feed parser to support both Atom and JSON Feed?” are questions for pretty much entirely different audiences. Only a tiny handful of people maintain feed aggregators like Antenna and CAPCOM (although I think all of them also provide Atom feeds for their capsules).
Me, I’m a fan of view-source sensibility, so I’m not about to start minifying my feeds as part of the feed-build step. YMMV.
Background:
I didn’t have a website of my own back in the day, but I went to the Internet Button Archive…
…and saw lots of buttons.
This page would have made my machine absolutely CRAWL back in the late 90s. There’s no way it’d have been able to maintain a buttery-smooth 60 frames per second.
I was going to learn enough Zig in one low-energy afternoon to have a quick curl-style Gemini client, but Zig isn’t as batteries-included as Deno is.
My main hangup, as far as I can tell, is in `std.crypto.tls.Client.init()`:
I don’t know how to choose a stream to pass as the first argument
I’m not sure what to pass in for the CA bundle (doubly so since I’m not sure I should be using CAs anyway, since Geminispace is TOFU land)
I’m not sure what kind of thing to pass in as the host, since, well, don’t I need to pass in a port, too?
If you’re looking for some quick whuffie on the Web, you could do worse than to write a quick Gemini client in 100ish lines of Zig and put it where your favorite/least favorite/third favorite search engine can see it.
A guy made TypeScript better:
If you’re the sort of person to read TypeScript release notes, you’ll probably be interested in most of the post.
This bit jumped out at me:
> By the time my PR was merged, my Notion doc ran to 70+ pages of notes.
Let’s back up a bit to the whole paragraph:
> While learning my way around the codebase, I found it incredibly helpful to take notes. Which function did what? What questions did I have? What was I struggling with? What did I have left to do? This helped to keep me oriented and also gave me a sense of progress. In particular, it was satisfying to read questions I'd had weeks earlier that I now knew the answer to. Clearly I was learning! By the time my PR was merged, my Notion doc ran to 70+ pages of notes.
So if you’re trying to get your bearings in a large codebase and wondering “Am I taking too many notes?”, the answer is “probably not”.
Michael Nordmeyer has a post up:
He likes full-text feeds for all the usual reasons, but he worries a bit about bloat for people who download the feed over and over again, as 130 KB over the wire seems a bit much for him.
One thing he does to keep the bloat down is…delete old blog posts. Not always in publication order, probably: his oldest one that’s still up on the site is dated 1/6/2008.
Two thoughts that came to mind:
“The madman. The absolute madman.”
“Based.”
Still, I was wondering if he was overstating the number of bytes transferred. I had a look at his site with HTTPie, and—
> http https://michaelnordmeyer.com/feed.xml
> HTTP/1.1 403 Forbidden
> Alt-Svc: h3=":443"; ma=86400
> Connection: keep-alive
> Content-Encoding: gzip
> Content-Type: text/html; charset=utf-8
> Date: Mon, 29 Apr 2024 03:35:56 GMT
> Keep-Alive: timeout=5
> Server: nginx
> Strict-Transport-Security: max-age=63072000; includeSubdomains
> Transfer-Encoding: chunked
> Vary: Accept-Encoding
>
> <html>
> <head><title>403 Forbidden</title></head>
> <body>
> <center><h1>403 Forbidden</h1></center>
> <hr><center>nginx</center>
> </body>
> </html>
Well, hmm. He seems to think that HTTPie is naughty, or something.
Let’s pretend to be me on my usual browser of choice:
> http --headers https://michaelnordmeyer.com/feed.xml "User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_7) AppleWebKit/605.1.15 (KHTML, like Gecko) Version/17.4.1 Safari/605.1.15"
> HTTP/1.1 200 OK
> Accept-Ranges: bytes
> Alt-Svc: h3=":443"; ma=86400
> Connection: keep-alive
> Content-Encoding: gzip
> Content-Length: 136473
> Content-Type: application/xml; charset=utf-8
> Date: Mon, 29 Apr 2024 03:39:06 GMT
> ETag: "662eb0da-21519"
> Keep-Alive: timeout=5
> Last-Modified: Sun, 28 Apr 2024 20:26:02 GMT
> Link: </feed.xml>; rel="canonical"
> Permissions-Policy: interest-cohort=(), browsing-topics=()
> Referrer-Policy: strict-origin-when-cross-origin
> Server: nginx
> Strict-Transport-Security: max-age=63072000; includeSubdomains
> Vary: Accept-Encoding
> X-Content-Type-Options: nosniff
> X-Robots-Tag: noindex
> X-XSS-Protection: 1; mode=block
It’s got an ETag. That’s good:
A reasonably clueful feed reader (like NetNewsWire) or feed aggregator (like Feedbin) can send a conditional GET to /feed.xml with that ETag in an If-None-Match header; if nothing has changed (probable, if it’s checking once a day or more often than that), the server answers 304 Not Modified with no body at all.
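Either way, the ETag is what makes cheap re-polling work. A sketch of the client-side bookkeeping (the helper names are mine, not from any feed reader’s codebase):

```typescript
// Sketch: conditional-request bookkeeping for a feed poller.
// Helper names are mine, not from any real feed-reader codebase.
function conditionalHeaders(etag?: string): Record<string, string> {
  // Send back the ETag from the previous successful fetch, if we have one.
  return etag ? { "If-None-Match": etag } : {};
}

function feedChanged(status: number): boolean {
  // 304 Not Modified: the cached copy is still good; no body was sent.
  return status !== 304;
}

// Wiring it into fetch (untested against his server, of course):
async function pollFeed(url: string, etag?: string) {
  const resp = await fetch(url, { headers: conditionalHeaders(etag) });
  if (!feedChanged(resp.status)) return { changed: false, etag };
  return {
    changed: true,
    etag: resp.headers.get("ETag") ?? undefined,
    body: await resp.text(),
  };
}
```

Call `pollFeed` once a day with whatever ETag you saved last time, and the full feed only crosses the wire when it actually changed.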
Of course, reasonably clueful feed aggregators will check your feed only once on behalf of all its subscribers, and tell you how many people have subscribed to your feed in the headers somehow (probably the User-Agent string; I forget).
His nginx installation is also sending a gzip-compressed feed rather than uncompressed XML. The uncompressed XML is about half a megabyte; gzipped, it’s about 136 KB over the wire (hence that Content-Length).
He could shave some more bytes off the transfer size if he were to minify the HTML and XML he’s sending out, but most lines in his feed are left-aligned and this won’t save a whole lot of data.
I like how he has an XSL stylesheet to explain feeds to people who aren’t yet into feeds. I like Substack as much as the next guy, but I’d rather read things in Unread than Mail.
I saw a tweet recently. Rather than subject you to a depression-inducing screenshot of a post on X that’s been subject to five too many lossy-to-lossy conversions, I’ll reproduce it for you here:
> [extraordinarily blurry picture of Kermit the Frog in a collared shirt, red tie, gray sweater, gray jacket, and gray broad-brimmed hat]
> expatanon
> @expatanon
>
> Last two decades of software “innovation”
> - pay forever and own nothing
> - put it on our hard drive instead of yours
> - redo legacy businesses with temporary VC $$$ subsidies
> - databases but shittier
> - ape pics???
> - retarded hallucinating chatbots
> - 1000 twitters
>
> [6:38 PM · 7/6/23 · 42.2K Views]
The hallmark of a good satirist is when he can get you to chuckle or even outright laugh even as he’s skewering things you like. Jon Stewart is in this category, if you ask me. Much of the above list is defensible, but that’s a post — maybe — for another time.
Instead, I’d like to focus on some unrelated innovations in software…development, at least:
Language Server Protocol
In Ye Olden Days, each text editor shipped with its own solution for understanding the files a user would edit with it. For example, here’s BBEdit’s longstanding solution to this problem — both “codeless” (specified in an XML plist) and code-based (you have to write Objective-C or something to get it to work):
Of course, nothing else uses this. On the other hand, a handful of text editors and similar are able to consume Sublime’s syntax files:
Microsoft, however, is built different. Since at least the 90s, it seems like they’ve cared — by multiple orders of magnitude — more than anyone else about making things nice for developers: people whose IQs are in the 100–120 range, whose heads are already full of the problem domain, and who are likely to forget basic programming things, like what methods an instance of a string class has.
So, one of the things they developed for Visual Studio Code was the Language Server Protocol.
Eventually, they figured out that to get a good in-editor experience, you want nothing less than the language’s own compiler telling you which identifier you’re halfway done typing out. So the language provides a language server, your text editor speaks the protocol, and then only one language server needs to be written per language instead of each editor shipping its own custom bespoke 70% solution.
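For the curious: under the hood, LSP is JSON-RPC between editor and server. A completion request looks roughly like this (abbreviated; the file path and cursor position are made up):

```json
{
  "jsonrpc": "2.0",
  "id": 1,
  "method": "textDocument/completion",
  "params": {
    "textDocument": { "uri": "file:///home/me/project/main.go" },
    "position": { "line": 12, "character": 8 }
  }
}
```

The server answers with a list of completion items, and the editor never has to understand Go (or Rust, or whatever) itself.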
Of course, things aren’t all wine and roses. I remember reading in BBEdit’s release notes that the LSP spec differs from how VS Code does things, and everyone targets the 800-pound gorilla in the room rather than whatever the spec says should happen. So wine may help, assuming you’re not into beer, cider, or hard liquor.
Still, this is way better than the old way.
Automatic formatters
gofmt probably wasn’t the first thing to do this, but it seems to have made the whole thing popular.
People used to — and still do — argue about:
whether whitespace should be tabs or spaces
where to put opening/closing braces
Some of these have obvious, but difficult-to-implement answers, like “tabs for indentation, but spaces for alignment”.
gofmt doesn’t produce the nicest code, but you can type whatever crap you want and gofmt will make it look pretty good, even if it expands
if err != nil { return nil, err }
onto three lines instead of the mere one it deserves.
Prettier handles a lot of Web-adjacent languages that you might want to be formatted:
and if you like — or at least use — Python, there’s black:
If you really trust your formatter, like I do `go fmt`, you might want to enable format-on-save in your editor. If you don’t quite trust it in all cases, like I do for Prettier, you might want to have a mere format-this-file keyboard shortcut in your editor of choice.
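In VS Code terms, that split is a couple of settings (a sketch; the Go stanza assumes you’ve made gofmt, via the official Go extension or similar, your Go formatter):

```json
{
  // Trust gofmt completely: format Go files on every save.
  "[go]": {
    "editor.formatOnSave": true
  },
  // Everywhere else, stick to an explicit Format Document command.
  "editor.formatOnSave": false
}
```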
⁂
Notably absent from the LSP/formatter revolution are lisps, probably because they’ve had all that in Emacs for 30 or 40 years. If you’d like to get people to write in languages where “))))))))))” is not an occasion for protracted screaming, you could do a lot worse than to write — or finish up — a pleasantly standalone formatter for Hy or Janet. Sure, all the cool kids use paredit, but “install, try, use, and like Emacs” is a bridge too far for many, including yours truly.
I made a basic Gemini client in TypeScript-on-Deno:
// main.ts
import { readAll } from "jsr:@std/io/read-all";

async function gimme(url: URL) {
  let port = Number.parseInt(url.port, 10);
  if (Number.isNaN(port)) {
    port = 1965; // the default Gemini port
  }

  const conn = await Deno.connectTls({
    port,
    hostname: url.hostname,
  });

  // A Gemini request is just the URL followed by CRLF.
  const reqString = `${url}\r\n`;
  const req = new TextEncoder().encode(reqString);
  await conn.write(req); // the write is async, so await it

  // The server sends its whole response, then closes the connection.
  const resp = await readAll(conn);
  const s = new TextDecoder().decode(resp);
  return s;
}

if (import.meta.main) {
  for (const url of Deno.args) {
    console.log(await gimme(new URL(url)));
  }
}
Paired with it is a deno.json:
{
  "tasks": {
    "dev": "deno run --allow-net --unsafely-ignore-certificate-errors --watch main.ts gemini://gemini.circumlunar.space/users/adiabatic/",
    "atom": "deno run --allow-net --unsafely-ignore-certificate-errors --watch main.ts gemini://gemini.circumlunar.space/users/adiabatic/atom.xml",
    "json": "deno run --allow-net --unsafely-ignore-certificate-errors --watch main.ts gemini://gemini.circumlunar.space/users/adiabatic/feed.json",
    "news": "deno run --allow-net --unsafely-ignore-certificate-errors --watch main.ts gemini://geminiprotocol.net/news/atom.xml"
  }
}
It’s as dumb as a box of rocks, but you can run commands like
deno task dev
and see real life Gemini output, complete with the initial header line. As a complete surprise, it helped me figure out why Lagrange wasn’t figuring out that my Atom feed was an Atom feed.
The folder it’s in is called “gurl”. There’s probably a halfway-decent “curlz for the girlz” joke in there somewhere, but polishing that into something chuckle-worthy is beyond my abilities today.
In case anyone else wants to play with it and gets twitchy about licenses even for code put out there by pseudonymous Internet randos, I’ve released the thing as CC0, which is a fancy public-domain dedication…although really, the WTFPL is likely a better fit, thematically.
I keep coming back to a thing:
> Computers aren't fundamentally evil by any means, we are not out to vilify them, but the view taken by our technohippy predecessors (whose "Whole Earth" moniker has obviously inspired us), that they are fundamentally empowering and liberatory devices, "personal freedom machines", is a hard pill to swallow in 202x.
I’m not much of a smol.earth guy myself, although we do have points of agreement. This post is something of an exploration of how the smol.earth crowd is at least onto something, even though (it seems) I’m diving deep into one aspect of a ten- or twenty-point desiderata list and ignoring all the rest.
Anyhow.
“Fundamentally liberatory”. Hmm.
Let’s think about personal computing first, and then, maybe in a later post, think about mainframes/cloud computing run by and for other people. Polya says to solve a simpler problem first, right?
Previously, I’ve discussed Steve Jobs’ “bicycle for the mind” meme. (Use your client’s find-in-page functionality for “bicycle” in the pages for this year (2024), last year (2023), and the year before (2022)). But what else could computers be?
A conduit for the Khala, I suppose.
You do a clean install of Windows 11 onto a bare computer. After signing in for the first time, in the lower left corner of the Taskbar, infinitely tall and infinitely wide, displacing the all-important Start menu, there’s a Microsoft button that will feed the Khala at you.
You get a new smartphone. You download apps. The apps ask the operating system to ask you if they can send you notifications. Most of these notifications are just push updates from the Khala.
You sit down at your desk after having unpacked a fresh new MacBook Air. After setting up your user account, you see what applications are in the Dock. You see an icon that looks like the Dota 2 icon. It’s not; it’s Apple trying to get you to pay money to get hooked up to the Khala.
You open a web browser for the first time. It doesn’t really matter which, anymore, these days. It has “Suggested Sites”, possibly by another name. It is pre-populated with “Khala (from Microsoft)”, “Khala (from Meta)”, “Khala (from Yahoo)”, “Khala (from Alphabet)”, and “Khala (from the Wikimedia Foundation)”. This is provided to you on the default browser supplied with Debian Stable, which, at least, has weaker incentives than most operating-system vendors to hook you up to the Khala.
One simple way to keep your personal computer use liberatory is to use your computer and computing devices in ways that haven’t changed much since the mid-80s, or, at the latest, the mid-90s.
Sure, you might prefer using voice recognition (which will happily gobble up as much computing power as you can throw at it to improve accuracy), but sticking to using a computer as a largely-solitary thinking-and-writing-and-organizing-and-mathing device preserves, I think, personal computing as something liberatory, provided you’re not letting yourself get sucked into playing Solitaire for hours on end.
A while back, I listened to a thing:
In short, Skrbina thinks that Uncle Ted is really onto something. I recommend listening to the whole thing.
I forget if Hsu asks, near the end, whether any group following Uncle Ted’s values (sticking to a sociotechnological level found in Renaissance Italy) is necessarily going to need to become a protectorate of some other much higher-tech power with modern weapons, while Skrbina stammers out a non-answer response. It’s been a while.
Anyway. Today, I read a thing:
There’s a lot in there, and many interesting parts are entangled with other interesting parts. I have thoughts on them, and I’ll probably end up writing stuff about them. I like Earth, and think it’s a fine base of operations to start interplanetary and interstellar exploration and colonization from. See my “‘We need to get bagels on Mars’” quip from back in 2022.
But, back to the smol.earth update:
> If you don't feel the points that follow in your bones yet, the smol.earth is probably not yet for you:
> […]
> We believe computing is unsustainable in the long term and that it needs to ultimately disappear from the world, but we are condemned to live *our* lifetimes surrounded by computers which we feel compelled to use.
Funny, I know of a guy who’s kind of doing that.
Here’s his website:
And here’s his X account:
His pinned post shills his book:
The Amazon page also links to the second book in the series. I’m not sure how many are planned.
Of course, there are entire groups of people, at least in North America, who have something resembling the smol.earth attitude toward technology:
This appears to be the logical endpoint of the smol.earth philosophy, and if you’re serious about it, you might want to try becoming one of the few converts to, uh, Amishness and save yourself the awkwardness of being only halfway in the modern world.
As discussed below on 4/14/2024 in “Beating the rush”, I’ve set up a Debian machine.
One thing I use a lot on macOS is option-key shortcuts for characters that aren’t on my keyboard, like curly quotes and em dashes.
X11 has…the Compose key. It is supplemented by ibus’s ⌃⇧u dead key.
Compose (⎄) is not unpleasant given what it tries to do. You tap a key you’ve bound to Compose, then you type a sequence of keys to get what you want. For example, to get a proper curly apostrophe, type ⎄>', or ⎄'> (oftentimes it’s not picky about order).
Oddly enough, you cannot type the Compose glyph with the Compose key. You can, however, type it by pressing ⌃⇧u, releasing all that, and then typing 2384, and finally a space to let ibus know you’re all done with the key.
You will be unsurprised to learn that ⎄ is U+2384.
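If you’d rather double-check that sort of thing from a scripting language instead of a Unicode chart, it’s a couple of lines:

```typescript
// Round-tripping the Compose glyph through its code point.
const compose = String.fromCodePoint(0x2384);
console.log(compose); // "⎄"
console.log(compose.codePointAt(0)!.toString(16)); // "2384"
```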
I was able to type this entire entry on Debian, but I don’t have a good way to generate the entry for this update in the capsule feed source. I guess I’ll have to commit this edit, then push it up, then pull it down to a Mac, add the entry, and then push and publish and announce.
Phooey.
I listened to a couple of episodes of Twenty Thousand Hertz (hereafter 20KHz). They’re kind of a pair:
Most podcasts don’t really need all that great audio to be able to listen to them well. 20KHz is an exception: I try to be in a position to listen to it, at least, with my AirPods Pro with noise cancellation on. This means “not while driving”, if nothing else.
I’ll wait here for a bit while you queue them up in your favorite podcast player and get around to listening to them. Both should take less than an hour combined (make sure to disable your podcast client’s speed acceleration if you use a thing like that).
[smooth-jazz elevator music plays]
While the first episode doesn’t go into the complete history of Windows audio, it does play TADA.WAV for you, which was what you would have heard as a startup sound on Windows 3.0.
Oddly enough, startup sounds seemed to be mostly for the cool factor starting in Windows 95. The host of the podcast says that startup sounds are an indication that your computer is ready to go after maybe a minute or two of a boot-up process, but I generally have memories of Windows only being partially ready to go by the time the logon sound plays — generally, a bunch of other backgroundy things that live in your notification area (“system tray”) still need time to get started before the system really has all its startup tasks out of the way.
Now, we here in Geminispace tend to like older technology for all sorts of reasons. I’m here to tell you that the Windows NT 4.0 startup sound was VERY cool back in the mid-90s.
If you want to ease into mid-90s cool, you would be well served to listen to this episode beforehand, which goes into the peak of early-80s cool:
Of course, podcasts aren’t great for video. You can see the intro sequence here:
I have what I’ve been calling a “Windows machine”. It’s from 2012 or somesuch and has been serving me reliably for years.
Of course, it’s old enough to not support Windows 11, and October 14, 2025, is…well, years away.
I spent some time thinking, and eventually decided to rip the band-aid off early and switch to an operating system that would be supported for longer than a year and a half.
I chose Debian Stable, because I’m boring and barely use the machine anyway and don’t need much that’s particularly new.
I had a look at Mint, but there’s something surprisingly off-putting about trying to copy Windows XP while (rightly) ditching the Fisher-Price color scheme that we all hated in the mid-2000s. Modern GNOME at least kind of tries to do its own thing, to mixed success.
Of course, Stable doesn’t come with everything I want. Sure, Visual Studio Code can get installed and get on the autoupdate train, but I have a bunch of other things that I like to use that are either older in Stable or just plain not there.
Many of these programs I managed to install into ~/.local/bin, like eza and helix. Others I just did without.
Eventually I was told by the NPM website to either install NPM via a tarball or to use Homebrew.
Which I already use on my Macs.
“But I already have a package manager”, I thought.
“But I don’t like having to manually poll websites for updated versions of software”, I also thought.
…having two package managers still feels weird, but it’s nice to have my usual toys available and updating in the usual manner. I’m keeping my fingers crossed that the two collections of things don’t bonk heads somehow.
A new Visual Studio Code update came out today. They added custom labels for open editors:
If your capsule is laid out like mine, you have a LOT of index.gmi files. If you have more than one open, Visual Studio Code will also include (in smaller, dimmer text) the parent directory of the file to clue you in to which is which, but if you have only one open, you’re stuck guessing or trying to lean on your memory.
Enter custom editor labels.
My .vscode/settings.json now looks like this:
{
  "[gemini]": {
    "editor.quickSuggestions": {
      "other": "off"
    }
  },
  "workbench.editor.customLabels.patterns": {
    "**/index.gmi": "${dirname(0)}/"
  }
}
This makes it so scrawlspace/2024/index.gmi (the file I’m editing now) shows up as “2024/” in the tabs at the top of the editor. If I have multiple index.gmi files in folders with the same name, I still get the smaller, dimmer text as a disambiguator.
In case you’re wondering, the other option in there stops VS Code from popping up text suggestions in normal prose. I don’t need autocomplete for words like “and”.
There is a description of orthogonal persistence out there. You should read the whole thing. However, we are more interested in the coda at the end:
You may recognize its author, François-René Rideau, as “the Houyhnhnm Computing guy”. You may also recognize him as “the guy who takes Urbit seriously, but takes issue with its persistence model”. You may also recognize his X handle of @ngnghm.
Back to the coda. He writes:
> In today’s world (2024), all your data persists… on your enemies’ servers. The big corporations and bureaucracies that try to manipulate you know everything about you, and run AIs to analyze your behavior to manipulate you even more into buying their stuff and obeying their orders. They use Manual Persistence, but they can afford thousands of database experts and system administrators to make it work at scale, so as to spy on hundreds of millions of human cattle.
I like the cut of his jib, but he hasn’t sold me on it yet. At least now, I can predict what will get persisted to disk depending on what I do. While text editors that do not preemptively save anything to The Cloud™ these days are rare, one can open up vi in a window and type to his heart’s content knowing that nothing will be committed anywhere until he types :w and then a filename.
Meanwhile, Rideau somehow does not see a system where every interaction with it is permanent and indelible as a liability. Being unable to write so much as “fuck fuck fuck fuck fuck fuck” without it persisting forever on disk makes me want to get a large notepad and a cross-cutting shredder — and notepads are much less effective bicycles for the mind than computers are.
Maybe the solution is as simple as having all document-based applications (text editors, spreadsheets, calculators) have Private Mode like browsers do now, but I’m not sold yet.
Background reading:
I’m not about to write a program in a month, but I have collected a bunch of programs that, by and large, work offline:
I think there’s at least three math things in there already, and I haven’t gotten to the bottom of the page yet.
My go-to for unit-aware math is Soulver, though:
If you have programming chops, you may want to consider improving an existing program before making one of your own.
I stumbled over a thing recently:
Some of these things look like fun things, or at least interesting things. On the other hand, many of them seem like nothing but chores:
> * Add an RSS feed so people can subscribe to your blog.
> * Add a print stylesheet.
> * Style code snippets in posts on your blog with a syntax highlighter (i.e. Prism.js).
An RSS feed is actually useful to some fraction of your audience, but writing print-specific styles seems like a thankless chore.
Yes, I’m tired. I used to have the energy and interest to do stuff like this, but not anymore.
I will, however, add one item to the list:
Make a custom 410 Gone page.
Nothing quite communicates “this thing used to be here, and now it’s not” like a custom 410 Gone page. A 404 Not Found page doesn’t convey intentionality like 410 Gone does.
(The Gemini equivalent for 410 Gone is 52, in case you were wondering.)
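Since a Gemini response header is just two digits, a space, and a meta string, checking for a 52 takes only a few lines (helper name is mine):

```typescript
// Parse the first line of a Gemini response: two digits, a space,
// then the meta string, terminated by CRLF. (Helper name is mine.)
function parseGeminiHeader(line: string): { status: number; meta: string } {
  const m = line.replace(/\r?\n$/, "").match(/^(\d\d) (.*)$/);
  if (m === null) throw new Error("malformed Gemini response header");
  return { status: Number(m[1]), meta: m[2] };
}

// 52 is Gemini's "gone" status, the rough analogue of HTTP 410.
console.log(parseGeminiHeader("52 Gone\r\n")); // { status: 52, meta: "Gone" }
```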
Prior reading:
Money graf:
> At this point, Apple's refusal to allow another browser engine on it's platforms might be the only thing keeping Chrome from being able to fully dictate the direction of the web.
I’d certainly say this — but then, I prefer Apple things to Google things. I’m not a neutral third party.
John Gruber has a look into what changes under the Digital Markets Act over in the EU:
The relevant bit:
> One point of confusion is that some aspects of Apple’s proposed DMA compliance apply to the App Store across all platforms (iPhone, iPad, Mac, TV, Watch, and soon, Vision), but other aspects are specific to the iOS platform — which is to say, only the iPhone.
And then there’s Apple’s relevant page:
> iOS 17.4 introduces new capabilities that let iOS apps use alternative browser engines — browser engines other than WebKit — for dedicated browser apps and apps providing in-app browsing experiences in the EU.
Two things of note:
> iOS
(as opposed to iOS and iPadOS)
> in the EU
So unless you’re ignoring iPhone users outside the EU, you, as a website developer, can’t just tell your iPhone-using visitors to download Chrome-with-Blink-in-it and come back. Even if you’ll happily do the work, people up the chain of command who are more business-minded won’t have a net financial incentive to say “let the Apple people download Chrome and then they can visit our site”. You’ll have to put in the time to make the site work right in Safari.
This state of affairs largely preserves Apple’s ability to defend its ecosystem and users from Google’s snooping. After all, if you have to use Google’s browser to do almost anything on the web other than browsing a handful of indie sites, that’s a clear-cut monopoly and makes real consumer choice all but impossible. Anti-consumer-choice monopolies, of course, are the kind of thing governments say they’re against, at least when they’re in the private sector.
I first encountered the phrase “let him cook” on a Twitch stream where the streamer speedruns The Legend of Zelda: Breath of the Wild. Generally, when one cooks food in this game during a speedrun, it’s to make up large batches of food, and you can’t un-make an omelet, so there’ll be a chorus of LETHIMCOOK in chat to get other people to, at least temporarily, not try to get the streamer’s attention for a bit.
My second encounter with the phrase, or something like it, was close, but a bit less literal.
If you avoided getting the paraglider, then there are more than a few places where your options to continue on are basically one of these two:
get creative
fatally pancake into the ground, get revived by a fairy, and then continue on
The second of these options is way less entertaining, so a guy whose day job is “entertainer” who does so by playing games naturally tries for the first option.
It’s in this context that he says “let me cook” — but here, he’s not asking Twitch chat to not try and get his attention. He’s asking them to hold their horses while he tries to work out a solution to falling down a 2,000-meter hole without dying from the sudden stop at the end.
⁂
…and then I saw “let X cook” on X, coming from the HTMX account:
> i don't like the idea of stored procedures driving UI mainly due to the mechanics of updating them (version control, etc) but i'm willing to let them cook because eliminating the app server/db hop is one of the last big, obvious perf wins in most web apps...
(This is in the context of a hypothetical “React Database Components”. If you don’t want to click through, imagine a stored procedure in your database that returns a snippet of JSX, and inside that is a list of todo items all wrapped in li elements, and the bundle is wrapped in a ul element.)
Still, there’s the phrase
> let them cook
If you find yourself looking for a way to reserve judgement on an idea until implementations of the idea get better fleshed out and/or better-spaded so the upsides and downsides are better understood, you could do worse than to haul out this turn of phrase.
Background information:
I played Diablo 3 for a bit.
One of the things that I noticed was that once I got to endgame content, I could mostly shut my brain off while I was killing demons. However, I had to pause podcasts and give my full attention to what I was doing when I was selling all the loot that I had accumulated, because “do I keep this or do I sell this” was something that took all of my decisionmaking faculties and wasn’t something I could just outsource to my brain stem.
I thought about this for a bit when pondering the process of cooking in The Legend of Zelda: Breath of the Wild and Tears of the Kingdom. If you want a particular effect, or a particular level of an effect, you can’t just shut your brain off — oftentimes you have to look up specific ingredients and their potencies and maybe use an online calculator to find out if you can make something that will give you a level-3 buff for as long as you think you’ll need it for.
Niklaus Wirth passed away recently, and so his “A Plea for Lean Software” has been making the rounds:
I actually read it in full. It’s not long. A bunch of people have posted excerpts they agree with. He ends with a list of lessons learned from Oberon. These are mostly sensible, although #5 is a bit suspect. My takes:
Strongly-typed languages seem to be the default in all the desktop environments that matter. Everyone seems to be moving away from Petzold-style C (functions take 5–9 arguments, half of which are 0 or NULL) and towards Rust and modern C++, which isn’t at all like how C++ was written back when Windows 95 was getting the final bugfixes to it. (Or, from runtime-everything’d Objective-C to compile-time-most-things Swift.)
Object-orientation is taken as a given in most new languages, although some of the takes on OO can be wild, like Go’s. However, class hierarchies like the ones Wirth describes can be found in windowing systems, the different HTML elements of the DOM, and Java.
“If one person can’t understand the whole thing, it shouldn’t be built” sounds like wishful thinking in 2024. Certainly modern software composition helps manage complexity, but when it comes to people who can fit entire systems in their heads, we’re undersupplied, even when systems aren’t needlessly overcomplicated (think of websites that use React when really all they need is htmx, at most, for interactivity). There just aren’t that many people smart enough to hold entire programs in their heads.
⁂
However…
Wirth is writing this at the beginning of 1995. Windows 95 was to come out that summer, Windows 3.1 is already out there for normal people, and Windows NT 3.5 has been out for a few months. Oberon, his pride and joy, was written between 1986 and 1989, back when Riker was clean-shaven and Windows hadn’t yet hit 3.0, the version that finally made it popular.
Back to Wirth. The speed of development of Oberon is impressive:
> Designed and implemented—from scratch—by two people within three years, Oberon has since been ported to several commercially available workstations and has found many enthusiastic users, particularly since it is freely available.
Oberon, to its (minor) credit, appears to have both color and graphics, although it’s not obvious from the screenshot that any kind of graphical paint program is possible in it. Presumably the giant squiggly can be generated with text, like SVG or POV-Ray. This will be relevant shortly.
Where Wirth seems to go off the rails is near the beginning of his article. There, he lays out his idea of what are — in 1995 — mere nice-to-haves:
> Uncontrolled software growth has also been accepted because customers have trouble distinguishing between essential features and those that are just “nice to have”. Examples of the latter class: those arbitrarily overlapping windows suggested by the uncritically but widely adopted desktop metaphor; and fancy icons decorating the screen display, such as antique mailboxes and garbage cans that are further enhanced by the visible movement of selected items toward their ultimate destination. These details are cute but not essential, and they have hidden cost.
In modern terms:
Anything other than a Suckless-style tiling window manager is “bloat”.
Showing users the metaphors they are operating with, like “move an item into the trash”, is “bloat”.
Later, he continues:
> increased complexity results in large part from our recent penchant for friendly user interaction. I’ve already mentioned windows and icons; color, gray-scales, shadows, pop-ups, pictures, and all kinds of gadgets can easily be added.
Modernizing:
Shadows are “bloat”. (This actually mostly makes sense, if layered window management is off the table, as are modal dialog boxes.)
Dialog boxes are “bloat”.
Graphics are “bloat”.
Grayscale is “bloat”. Presumably Wirth is OK with the black-and-white Macintosh and Mac SE, but this rules out both the Nintendo Game Boy and Newton MessagePads.
⁂
Personally, I’d like to have seen a debate between Niklaus Wirth and, say, Jakob Nielsen of the Nielsen Norman Group. Both men have an anti-frippery bent, but the usability proponent is going to have a much broader idea of what work needs to be done to make systems usable for normal people who aren’t computer experts and also people who have one or more computing-relevant body parts that don’t work right, like eyes or arms.
While text-to-speech systems seem to be mostly a solved problem on even wrist-worn consumer hardware, speech-to-text seems to be a problem that will happily consume whatever computing resources you can throw at it — up to and including machine-learning models that will take up like half your RAM on a 32-GB machine with an M3 Apple Silicon processor in it.
References:
If you want to read older entries, here’s the page for the previous year:
If you want to stay abreast of updates, have a look at this capsule’s colophon. It links to the capsule’s JSON Feed and Atom feed.
Additionally, the following URL will always redirect to the current year, assuming I haven’t forgotten to update the redirect after making the first post of the year:
⏚