
The Jevons Paradox And Software


Wherein improved efficiency leads to a worse outcome.


Thanks to thrig for pointing to the Jevons Paradox:


Jevons Protocol


Their post is about how the paradox applies to protocols; for me, its application to software development is at the forefront of my mind, although I don’t think I knew it had a name.
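

As a toy illustration of the paradox (every number here is invented): suppose an efficiency gain halves the cost of a unit of work, but cheaper work invites more than twice as much of it.

```
# Toy illustration of the Jevons Paradox; all numbers are invented.
cost_per_unit_before = 1.00   # arbitrary cost units
cost_per_unit_after  = 0.50   # work is now twice as efficient

units_before = 100
units_after  = 250            # cheaper work invites far more of it

print(cost_per_unit_before * units_before)  # 100.0 spent before
print(cost_per_unit_after  * units_after)   # 125.0 spent after
# Efficiency doubled, yet total consumption rose instead of falling.
```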


Better Tools


I’ve spent a lot of my life working on making software development more efficient, usually for smallish groups of people.


I also try to make what we do more closely fit the word “engineering”, although we always fall short, and that’s why I call it “development”.


And so what happens? Well, we build more stuff. Never once have we said, “Great! Let’s do less.”


I suppose many Geminauts have already read this, but I only recently got to Solderpunk’s article about exactly that: what could happen if we flip into “let’s do less” mode:


One Billion One Continent


I like the idea.


Protocols vs Software


So here we are in Gemini, a protocol designed to do less.


And it’s good. It’s surprising—“you could do that with HTML” is a common response on first reading about it—but it works. The boundary of what is possible sits exactly where it should, capturing a specific type of value while excluding countless bad patterns. What a capsule can be technically is very strongly limited, but this does not at all impede the infinite variety possible within.


Books, I suppose, are a similar kind of protocol. Now that I think about it, books and Gemini are in some senses closer than Gemini and The Web. When you open a book you have pretty solid expectations about the kind of interaction that is about to happen—just as when you visit a capsule.


Could there be some equivalent in software terms?


What would a “Gemini of software development” look like?


Cost of Too Much Software


The first thing to ask is, what is the cost of too much software?


As we were told at uni: it’s not writing software that’s expensive, it’s maintaining it.


The more software you have, the more software maintenance you have; and that means costs around bugs, security issues, discoverability, compatibility, and so on.


A smaller corpus of software should concentrate this effort, leading to higher quality. It should be easier to find the code you want; and what you find will have been in existence for longer and be more mature.


Another cost that I think is important but rarely mentioned is the opportunity cost of having millions of smart people working every day on churning out new software, most of which is not needed. They could be applying that energy to something else instead.


Great! Let’s Do Less


What if a software platform—a language, a framework, an ecosystem, however you like to look at it—was built around the idea of doing less, of reducing duplicate work?


Iteration and reinvention are a natural part of software; a tool is built, its weaknesses are identified, new tools are created, and if one is good enough, it might—might!—supersede the old. More likely, some mix of libraries will exist in parallel.


So a mechanism for reducing work is great, but it needs to leave enough room for growth—the boundary needs to be drawn carefully.


Here I should pause and clarify what kind of control mechanism I have in mind. Of course, a programming language can’t suggest that you stop writing code. (At least, I hope not.) Rather, I’m talking about a setup with a managed repository of public libraries, which makes it possible to guide users towards particular libraries, or away from them.


Nobody has their freedom impeded; but there are strong, community-established, living guidelines aimed at doing less.


Work Reduction Protocol


First, there would need to be a mechanism for identifying competing libraries.


This could start with self-identification via tags and then use some kind of community classification on top.
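

A minimal sketch of what that might look like, assuming a hypothetical package index (all library names invented):

```
# Sketch: competing libraries found via self-declared tags, with a
# community grouping layered on top. All names are hypothetical.
from collections import defaultdict

library_tags = {
    "fastjson": {"json", "parsing"},
    "tinyjson": {"json", "parsing"},
    "megaxml":  {"xml", "parsing"},
}

def competing_groups(tags_by_library):
    groups = defaultdict(set)
    for library, tags in tags_by_library.items():
        for tag in tags:
            groups[tag].add(library)
    # Only tags claimed by more than one library indicate competition;
    # the community classification would then refine these raw groups.
    return {tag: libs for tag, libs in groups.items() if len(libs) > 1}

print(competing_groups(library_tags))
# e.g. {"json": {"fastjson", "tinyjson"}, "parsing": {...}}
```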


Once competing libraries are identified, we need two things: a way to push users of libraries towards the “current best” option, and a way to allow a “new best” to arise.


Pushing users towards the “current best” could work through tiered publishing. A library is “first tier” only if it uses only first tier packages; the recommended software must always use only recommended libraries.


Then, there can be two more tiers. The third tier is a “free-for-all”: publish anything, use anything. This is where any new library is placed when first published.


The second tier is for libraries that want to be promoted to the first tier.
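

A sketch of the first tier rule with all three tiers in place, using an invented data model in which each package records its tier and its dependencies:

```
# Sketch of the tier rule: a library qualifies for first tier only
# if every one of its dependencies is itself first tier.
packages = {
    "solidlib": {"tier": 1, "deps": []},
    "goodlib":  {"tier": 1, "deps": ["solidlib"]},
    "wildlib":  {"tier": 3, "deps": []},
    "newlib":   {"tier": 3, "deps": ["solidlib", "wildlib"]},
}

def eligible_for_first_tier(name):
    return all(packages[dep]["tier"] == 1 for dep in packages[name]["deps"])

print(eligible_for_first_tier("goodlib"))  # True: only first tier deps
print(eligible_for_first_tier("newlib"))   # False: wildlib is third tier
```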


To be promoted, a second tier library must do three things:


First, it must fully replace its first tier equivalent, providing all the same functionality.


Second, it must win the support of the community—it must be convincingly better in some way.


And third, it must come with a fully automated way to migrate software from the current first tier library to the new proposed solution.


This last requirement is hard to meet, but the need for it is clear: fully automated migration is what makes promotion possible. If the library satisfies the community and is promoted to first tier, every first tier library that depends on the old one is immediately re-issued against the replacement, via the automated migration.
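

A sketch of that promotion step, reusing the invented data model from above; the migrate function stands in for the fully automated migration a candidate must ship:

```
# Sketch: promote a second tier candidate to first tier and re-issue
# first tier dependents via its bundled migration. Names invented.
packages = {
    "oldjson": {"tier": 1, "deps": [], "source": ""},
    "newjson": {"tier": 2, "deps": [], "source": ""},
    "app":     {"tier": 1, "deps": ["oldjson"], "source": "import oldjson"},
}

def promote(candidate, incumbent, migrate):
    packages[candidate]["tier"] = 1
    packages[incumbent]["tier"] = 3   # the old best leaves the first tier
    for pkg in packages.values():
        if pkg["tier"] == 1 and incumbent in pkg["deps"]:
            # Here the migration just rewrites an import; a real one
            # would have to transform the dependent's whole codebase.
            pkg["source"] = migrate(pkg["source"])
            pkg["deps"] = [candidate if d == incumbent else d
                           for d in pkg["deps"]]

promote("newjson", "oldjson",
        migrate=lambda src: src.replace("oldjson", "newjson"))
print(packages["app"])  # now depends on newjson, source rewritten
```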


Details And Apps


Perhaps one first tier library of each type is not enough; there is always, definitely, value in having more than one team explore the solution space. The imbalance between the current first tier library and the many second and third tier libraries might be too much to allow this to be effective. So maybe there should be two first tier libraries of each type, or three; or maybe there should be some mechanism for deciding it case by case.


I’ve talked about libraries, but what about applications? On the one hand the same mechanism can apply: there can be one first tier CAD application, one first tier desktop publishing application, and so on. But there is no equivalent of a fully automated migration; instead, the question is one of changing which application users actually use.


Still, some things can be done. There can be a requirement that a replacement for a first tier app provides equivalent functionality, that it can import all data and settings, and that there is a guide for users moving from one to the other. The question of how many apps of the same type should be first tier is likely even harder than for libraries.
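

As a sketch, those app-side requirements could be recorded as a simple checklist (fields invented):

```
# Sketch: promotion checklist for a first tier application
# replacement, mirroring the requirements above. Fields invented.
app_checklist = {
    "equivalent_functionality":  True,  # covers what the incumbent does
    "imports_data_and_settings": True,
    "migration_guide_for_users": True,
}

def promotable(checklist):
    return all(checklist.values())

print(promotable(app_checklist))  # promotable only if every box is ticked
```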


It seems like this starts to become a meaningful idea! Only...


Reality Check


Is this a grand new pattern for software development? No, because we already have it.


This is already pretty much how open source software works.


There is a direct incentive to avoid duplicate work, which is that you can do more with less. You have free rein to check through all the prior open source work, and so people do: if the right open source library or app exists, it will get reused; not always, but often.


Open standards and data formats go a long way to solving the problem of switching from the old best library or app to the new one.


These are well-understood, well-loved and popular principles for building software.


And so increasing the efficiency of work on open source software should not be such a bad thing; there are already mechanisms in place, based on that ever-present motivator, enlightened self-interest, to concentrate effort and reuse towards a smallish number of high quality solutions.


Closed Source


Then, it’s closed source software where it all falls down; that’s where the duplication happens, that’s where increased efficiency will lead to more, more and still more, with duplication and waste.


But, there is a silver lining to this cloud.


The base of open source software can only increase, gradually reducing the viable scope of closed source code.


There are multiple open source web browsers, for example; it may be that there will never again be an important closed source web browser.


Endgame


In this optimistic projection, more and more problems get convincingly solved in open source, reducing the scope for closed source work; until eventually we run out of code to write.


This year? Next? I don’t think so. It’s hard to wrap my head around just what “running out of code to write” would look like. I suppose it won’t happen soon.


Could we reach a point where all the meaningful work has been done, and all that’s left is to churn out variations—reskins? This should apply to other creative fields, too; have we reached peak TV? Are we running out of novels to write?


It may be that we’ve passed all these points without noticing. Are there enough books that you could read for a lifetime and not get bored? Yes. Enough TV shows? I suspect so. Does the existing software base already cover all the useful things for day to day life? Well—maybe.


Perhaps the only big next step left for software is “Great! Let’s do less.”


Afterword: AI Enters The Fray


The biggest possible downside I see with AI writing software is that it will likely make it cheaper to create the wrong type of code: single purpose, duplicate effort, hard to maintain. On the flip side, AIs might also turn out to be good at exactly the opposite: automated code maintenance, deduplication.


The guardrail of open source, providing a baseline of work that does not need to be repeated, and preserving any new value forever, will remain.


