
Unit Tests


> ... that we as programmers are not trusted to write code without tests ...


gemini://gemini.conman.org/boston/2022/12/21.1


Programmers are not trusted because it is not difficult to find articles such as "Linux's strcmp() For the m68k Has Always Been Broken" and far too many other such "Picard, with the facepalm" moments.


https://www.phoronix.com/news/Linux-m68k-strcmp-Always-Broken


Whoops.


Granted, programmers may be under some amount of time pressure; tests and documentation are likely the first to the axe. And rare may be the developer who actually likes writing and maintaining tests and documentation. Certain programming languages and operating systems do have more of a culture for tests and docs than others, but I have seen claims like "80% of programmers hate writing documentation" thrown around on Hacker News. And there is rather a lot of undocumented, untested code out there.


Anyways, tests result in some amount of test code (or formal verification, but I have no experience with that), which raises the obvious question of who tests the test code. The answer ranges from nobody at all to code in the test framework itself. Test2::Suite, for example, tests itself to ensure that its code is limited in the "Picard, with the facepalm" department:


https://metacpan.org/pod/Test2::Suite

https://metacpan.org/release/EXODIST/Test2-Suite-0.000145/source/t


One can then build on Test2::Suite with little need to test the test framework or other libraries that have their own tests:


https://metacpan.org/pod/Music::RhythmSet

https://metacpan.org/release/JMATES/Music-RhythmSet-0.04/source/t
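

As a rough sketch of what building on Test2::Suite looks like (the module and function named here are invented for illustration), a test file under t/ might contain:

```
# t/basic.t - a minimal Test2::Suite style test; My::Module and its
# frobnicate() are hypothetical stand-ins for code under lib/
use Test2::V0;
use My::Module;

# the happy path
is( My::Module::frobnicate(42), 43, 'frobnicate increments its input' );

# and at least one failure mode
like(
    dies { My::Module::frobnicate('cat') },
    qr/not a number/,
    'frobnicate rejects non-numeric input'
);

done_testing;
```

Running "prove -l t/" then exercises the suite against lib/, with no need to also test Test2::Suite itself.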


100% test coverage is a pretty good start for a Perl module, but there are diminishing returns, and where they set in depends on the code. Certain conditions may be difficult to produce, or may result in unportable test code because, for example, the particulars of closing a socket at a certain point differ between OpenBSD and Linux. These areas will be obvious in a code coverage report, if there is one.
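

A coverage report for Perl usually comes from Devel::Cover and its "cover -test" command. As for the unportable bits, one way to keep them from breaking the suite on other systems is to fence them off by operating system; a sketch, where the OS list and the missing socket check are both placeholders:

```
# t/socket-teardown.t - fence off an unportable check by OS; which
# systems get the check, and the check itself, are placeholders
use Test2::V0;

skip_all 'socket teardown details are too OS-specific here'
  unless $^O eq 'openbsd' or $^O eq 'linux';

# ... the fiddly close-the-socket-at-just-the-wrong-moment test
# would go here ...
pass 'placeholder for the unportable bits';

done_testing;
```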


Tests have caught a lot of bugs in my code.


Programmers have claimed their code is perfect; the pager at 2AM has said otherwise. Of course there was no documentation--why would there need to be, when the code is perfect? And maybe the code was perfect, who knows. But when the JVM corrupts itself, or the Linux kernel is silly, or the hardware turns out to have mighty sandy foundations, and your margin for error is one bit...


https://www.bleepingcomputer.com/news/microsoft/meltdown-patch-opened-bigger-security-hole-on-windows-7/


Whoops.


Now I don't particularly care what tests are called (unit, system, integration, rozgu, whatever) provided that they exist (often rare) and are good (also a problem). Experience helps. I may not write tests for a user interface when I will be looking at that UI a lot; frequent bugs might change my mind. Aim for the low-hanging fruit. A minimal suite will help with refactoring: is the tested behavior the same before and after your cowboy|rockstar edits? If not, why? It's a safety net; better that your code falls there than sets production on fire. Again. Are you testing someone's already tested code? If so, why?
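

A minimal safety net can be as simple as pinning the current behavior before touching anything; the parser module and its outputs below are invented for the sake of the sketch:

```
# t/pin-behavior.t - freeze what the code does today, warts and all,
# then refactor; My::Parser and parse() are hypothetical
use Test2::V0;
use My::Parser;

my %pinned = (
    'a=1;b=2' => { a => 1, b => 2 },
    ''        => {},
);

for my $input ( sort keys %pinned ) {
    is( My::Parser::parse($input), $pinned{$input},
        "parse of '$input' unchanged" );
}

done_testing;
```

If this still passes after the cowboy|rockstar edits, the tested behavior at least did not change.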


Is the code even testable? trek(6), for example--unix /usr/games code from the 1970s--has, as I recall, pretty untestable code. It could maybe do with a rewrite: fewer globals, less entwining of UI and logic. Bad code with neither tests nor documentation will change estimates of how long a project will take, and may prompt a search for a new job. Management may not like the bad news. You could write an expect program to play trek, which isn't nothing, but such high-level testing may be difficult to maneuver into a particular state to test a particular function: are your damage rolls returning sane numbers?


https://crawl.develz.org/wordpress/crawl-0-16-1-bugfix-release

logicbug.jpg


Whoops.
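

To make something like trek testable, a damage roll would need to be pulled out of the UI into a plain function, with the randomness passed in so a test can pin it down. A sketch, not actual trek code, with a made-up formula:

```
# t/damage.t - testing a damage roll once it is untangled from the UI;
# Damage::roll and its formula are hypothetical, not actual trek code
use Test2::V0;

package Damage;

# take the random source as an argument so tests are deterministic
sub roll {
    my ( $base, $rand ) = @_;
    $rand //= sub { rand };
    return int( $base * ( 1 + $rand->() ) );
}

package main;

# with a fixed "random" value the result is exact
is( Damage::roll( 10, sub { 0.5 } ), 15, 'damage scales with the roll' );

# and with the real RNG the numbers at least stay within sane bounds
for ( 1 .. 50 ) {
    my $d = Damage::roll(10);
    ok( $d >= 10 && $d < 20, 'damage within range' ) or last;
}

done_testing;
```

No expect script or terminal wrangling is needed to check that one function.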


Meanwhile, I've forgotten how to play trek, again, and the documentation remains terrible. Maybe that is because the games were frowned upon by some, and because back then there was someone around who could teach you, maybe after hours, how to play. That's probably material for another posting.


Another point: it is much easier to write tests alongside the code as you write it. This changes the code, usually to make it simpler: "I'm going to have to write tests for how many branches?! What if I simplify the interface, and only have to check this one thing?" Code from months or years ago: who knows how it works, or what all those branches and magic numbers are for. Not impossible to deal with, but nowhere near as nice as writing the tests with the code. Tests may also show how to use the software, which helps if the documentation is somewhat lacking. Or writing a test might show where the interface is weak.
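

As a hypothetical example of that nudge: instead of a function that accepts several optional positional arguments, each combination being another branch to test, collapse it to one required hashref so there is only one shape to check. The report() function and its fields are invented for illustration:

```
# sketch of the "simplify so there is less to test" nudge; report()
# and its fields are made up
use strict;
use warnings;

# before: report($title), report($title, $date), report($title, $date, $footer)
# -- every combination is another branch, another test

# after: one hashref, one shape, one small set of tests
sub report {
    my ($args) = @_;
    die "report: need a title\n" unless defined $args->{title};
    my $date   = $args->{date}   // 'sometime';
    my $footer = $args->{footer} // '';
    return "$args->{title} ($date)\n$footer";
}

print report( { title => 'status' } );
```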


Libraries, especially ones you will use a lot, probably need lots of tests. Less so the one-liners that glue all that (hopefully well tested) library code together. Tests probably do not suit quick-to-market code, or cases where the customers do not much care when you fail on them. Perhaps regulations, or a conformance test suite to pass, would help for standard protocols? A test suite to pass might help keep workarounds for buggy clients out of server code, or the other way around.


tags #testing #perl #security
