When I first got involved in software development, assembly programming was pretty much the norm, and always having less memory than you wanted absolutely was.


Not so many years later, when I graduated, I found myself developing embedded systems on controllers with 256 bytes of RAM - so not much had changed - but even by then it was clear the world was moving on. Within a couple of years the world of the managed runtime was upon us, and the skills of an embedded systems engineer were becoming increasingly niche. And so I embarked on the move to Java and web services and instantiating objects like you just don't care, and - notwithstanding the XML-packed, enterprisey horror of the early 2000s - never really regretted it, even if I missed the challenge of embedded engineering.


But during an idle thought earlier today, I realised that maybe - as is so often the case in this industry - things are turning full circle.


At my day-job, we have a small team developing Rust solutions, originally for embedded systems, but increasingly also for the server-side components that those systems talk to. Why use Rust when a managed-runtime language like Java or .NET would be much more common? Well, why indeed... Partly it's because these are mostly research projects at the moment, and so "why not?", but also because a language like Rust actually gives us some real advantages in the modern server world - the world of orchestration and containers, rather than fixed server builds and virtual machines.


Rust allows us to write software where we have clear visibility and understanding of memory allocation, and of how the software will perform under memory pressure - just like the world of embedded systems. Ask a .NET or Java engineer how much memory their application needs, and there is a chance, but only a chance, that they can give you a good answer. Ask them how much memory you need to give the runtime so they can guarantee that Garbage Collection won't slaughter performance at exactly the critical moment when you can't afford it to... And they have no idea. What's the practical result of this? We tend to massively overprovision the memory for these managed-runtime applications, "just in case" memory pressure causes the runtime to murder performance Just In (the wrong) Time.
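
As a sketch of what that visibility looks like in practice - the names and sizes here are hypothetical illustrations, not taken from any real project - the embedded habit is to reserve every significant buffer once, up front, with a fixed capacity, so the worst-case footprint can be read straight off the source and there is no collector to pause the world at an awkward moment:

```rust
// Hypothetical connection-handling state for a small server; the names
// and numbers are illustrative, not taken from any real system.
const MAX_CONNECTIONS: usize = 1_024;
const READ_BUF_BYTES: usize = 16 * 1024;

struct Connection {
    // One fixed-size 16 KiB read buffer per connection.
    read_buf: Box<[u8; READ_BUF_BYTES]>,
}

struct Server {
    // Capacity reserved once at startup; this Vec never reallocates
    // as long as we respect MAX_CONNECTIONS.
    connections: Vec<Connection>,
}

impl Server {
    fn new() -> Self {
        Server {
            connections: Vec::with_capacity(MAX_CONNECTIONS),
        }
    }

    fn accept(&mut self) -> Option<&mut Connection> {
        if self.connections.len() == MAX_CONNECTIONS {
            // Shed load explicitly instead of letting memory use drift upwards.
            return None;
        }
        self.connections.push(Connection {
            read_buf: Box::new([0u8; READ_BUF_BYTES]),
        });
        self.connections.last_mut()
    }
}

fn main() {
    let mut server = Server::new();
    let _ = server.accept();
    // Worst-case heap use is roughly MAX_CONNECTIONS * READ_BUF_BYTES
    // (about 16 MiB here) plus small fixed overheads - a number you can
    // state before the program ever runs.
    println!(
        "worst case ~{} MiB of read buffers",
        MAX_CONNECTIONS * READ_BUF_BYTES / (1024 * 1024)
    );
}
```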


When we're deploying those applications on dedicated servers, or even just dedicated VMs, the worst this causes is usually hopelessly inefficient use of resources. But ultimately that's just a cost thing, and so what, as long as you can keep your 100% uptime? When you start moving those applications to containerised environments, though, this leads to a much worse outcome: *unpredictable reliability*. When every container running on your Kubernetes node has had its memory allocated on a "wild stab in the dark" basis, and you don't have a hypervisor to make sure the outcome of that is at least predictable, estimating the overall capacity of your system becomes impossible. The result is *even more* terrible resource utilisation, as we take provisioning estimates that were already generous and add an even bigger safety factor on top, along with unpredictable outcomes when the orchestrator decides to move workloads around nodes.
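
One small way to turn that stab in the dark back into a measurement - again a sketch of mine, not anything from a particular project - is to wrap the system allocator so the process can report its own heap high-water mark; run it under realistic load, read the number, and set the container's memory request from data rather than a guess:

```rust
use std::alloc::{GlobalAlloc, Layout, System};
use std::sync::atomic::{AtomicUsize, Ordering};

/// Wraps the system allocator and records the heap high-water mark.
struct TrackingAlloc;

static CURRENT: AtomicUsize = AtomicUsize::new(0);
static PEAK: AtomicUsize = AtomicUsize::new(0);

unsafe impl GlobalAlloc for TrackingAlloc {
    unsafe fn alloc(&self, layout: Layout) -> *mut u8 {
        let ptr = System.alloc(layout);
        if !ptr.is_null() {
            // fetch_add returns the old value, so add the size back on
            // to get the new running total.
            let now = CURRENT.fetch_add(layout.size(), Ordering::Relaxed) + layout.size();
            PEAK.fetch_max(now, Ordering::Relaxed);
        }
        ptr
    }

    unsafe fn dealloc(&self, ptr: *mut u8, layout: Layout) {
        System.dealloc(ptr, layout);
        CURRENT.fetch_sub(layout.size(), Ordering::Relaxed);
    }
}

#[global_allocator]
static ALLOC: TrackingAlloc = TrackingAlloc;

fn main() {
    // Do some representative work...
    let data: Vec<u64> = (0..1_000_000).collect();
    drop(data);

    // ...then report the high-water mark, e.g. at shutdown or on a
    // metrics endpoint, and size the container limit from that.
    println!("peak heap use: ~{} KiB", PEAK.load(Ordering::Relaxed) / 1024);
}
```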


It seems to me the solution is not just, or maybe not even, getting rid of managed runtimes. It seems to me the solution might just be learning to program like embedded engineers again; it was never just about how *few* resources you could get away with using, it was about how *predictably* you could use those resources.
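
To put a concrete shape on *predictably*, one pattern straight out of the embedded playbook is the bounded queue. Here is a minimal sketch using the standard library's sync_channel (the capacity and workload are made up for illustration): a fixed capacity turns memory growth under load into explicit backpressure, instead of an open-ended heap.

```rust
use std::sync::mpsc::{sync_channel, TrySendError};
use std::thread;

fn main() {
    // A bounded queue: at most 64 jobs in flight, so the memory cost of
    // queued work is capped by construction, not by hope.
    let (tx, rx) = sync_channel::<String>(64);

    let worker = thread::spawn(move || {
        for job in rx {
            // Pretend to do some work.
            let _ = job.len();
        }
    });

    for i in 0..10_000 {
        match tx.try_send(format!("job {i}")) {
            Ok(()) => {}
            Err(TrySendError::Full(_job)) => {
                // The queue is full: shed load here (or block with send())
                // instead of letting the backlog grow without bound.
            }
            Err(TrySendError::Disconnected(_)) => break,
        }
    }

    drop(tx);
    worker.join().unwrap();
}
```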


One might even say, maybe it's time we just learned how to program, again...


Cover Photo[1] by Brett Sayles[2] from Pexels


1: https://www.pexels.com/photo/cables-connected-on-server-2881229/

2: https://www.pexels.com/@brett-sayles
