
GEMINILOGGBOOKOBERDADAISTICUS



Recommendation algorithms: all fail


Is there a best practice for automated recommendation systems? Clearly there are lots of bad recommendation algorithms out there. Either they don't work as intended, or they do but have undesirable side effects. I've touched on music recommendation in a previous post (rewilding music); here I will focus on video.


Youtube's algorithms have been criticised for promoting inaccurate information, serving scary content to toddlers, and much more. Since the algorithms are tightly kept secrets, the criticism must be based on their observable effects, which partly depend on the interaction with millions or billions of users. And with individual customisation, each viewer will see different recommendations.


The panic about misinformation has hit hard those who are critical of the current order of things, regardless of whether they present accurate information or deranged misconceptions. Project Owl in 2017 had the ostensible goal of diminishing the spread of false information; whether or not it succeeded at that, it also had a deleterious effect on high-quality independent news sites, which may have presented opinion pieces that were not well aligned with establishment views.


I grew tired of Youtube several years ago. One way its recommendation algorithm has failed, in my view, is that it keeps reminding me of what I watched last week, as if I should be given a chance to revisit what I was exploring then instead of digging into today's topic. Another failure is how it inevitably pushes videos with more views if you find something on the fringe, say an obscure metal band with thirty views. Conversely, videos with a million views rarely carry recommendations to videos with only a hundred views, as if such unsuccessful content couldn't exist. Language barriers and geographic regions also seem to restrict recommendations. If you look for raag performances by Indian classical musicians there is a lot available, but you'll miss much of it unless you specifically set India as the content country.


Even though I have had enough of binge-watching videos, there is of course still some interesting content. Lately I have started using Invidious, sometimes also following recommended videos. Although I have no idea how it works, it seems to start from a clean slate, or perhaps from someone else's viewing history, as the basis for personalised recommendations; after a while, though, it seems to catch up on what you have watched before.


It is easy to get the impression that Youtube tries to push science, math, technology, and optimism in general as a response to the fear of content threatening to induce a mass psychosis, spewing hatred, or inciting violence. Science is certainly preferable to kooks and cranks, but it can be a distraction. Learning about the abstruse wonders of quantum field theory or cosmology does little to solve the pressing problems of today. Popular science can become a form of escapism, just like any other entertainment.



Tournesol


So, everyone agrees that recommendation algorithms are difficult to get right, and that the consequences of design choices are enormous. Two videos about the same topic that differ in their appeal, their "click-baitiness," may end up with perhaps a tenfold difference in views, everything else being equal. But the inequality introduced by recommendation algorithms makes some videos reach millions of views while other, equivalent videos may only reach the hundreds, according to Lê Nguyên Hoang (in French, and on peertube):


Les algorithmes menacent-ils l'humanité ?


> ... recommendation algorithms, ... the existential threat they represent, which seems to be very gravely underestimated by both researchers and the general public ...


We get a little bit of fear mongering, just enough to motivate the need for a solution.


> The future of democracies, and of the world in general, depends quite a lot on what these algorithms amplify at very large scale.


It is no coincidence that a Jordan Peterson recommendation comes up sooner or later, pretty much whatever you have just watched. The Intellectual Dark Web is no secret, although perhaps the IDW and its founder Eric Weinstein are not as well known as they should be. Behind a recommendation algorithm there are always people with interests.


To fix this problem, an engineering solution is transparency and open-source algorithms: engage viewers to compare and rate videos, and try to build a better metric from their judgements. This is what project Tournesol is about. Tournesol might prefer to be associated with the sunflower turning towards the light, rather than with its namesake, the absent-minded professor in Tintin's entourage who comes up with brilliant solutions while bumping into trees that have gotten in his path. However, like the professor, project Tournesol may come up with clever solutions within a narrowly defined scope, while ignoring some important stumbling blocks that aren't in its field of vision.
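

As an illustration of what comparing videos to build a metric can look like, here is a minimal sketch that fits Bradley-Terry-style scores to pairwise judgements. This is not Tournesol's actual algorithm (their methods are described in the papers they link to); the video names and comparison data below are made up.

```python
# A toy aggregation of pairwise "which video is better?" judgements into
# a single score per video. This is a generic Bradley-Terry fit, not
# Tournesol's published method; all data here is invented.

from collections import defaultdict

# Hypothetical judgements: (preferred_video, other_video).
comparisons = [
    ("calm_documentary", "clickbait_rant"),
    ("calm_documentary", "clickbait_rant"),
    ("clickbait_rant", "calm_documentary"),
    ("obscure_metal_band", "clickbait_rant"),
    ("calm_documentary", "obscure_metal_band"),
]

videos = sorted({v for pair in comparisons for v in pair})
wins = defaultdict(lambda: defaultdict(int))
for winner, loser in comparisons:
    wins[winner][loser] += 1

# Iterative fit of the Bradley-Terry model, in which the probability that
# video i is preferred over video j is s_i / (s_i + s_j).
scores = {v: 1.0 for v in videos}
for _ in range(200):
    updated = {}
    for i in videos:
        total_wins = sum(wins[i][j] for j in videos if j != i)
        denominator = sum(
            (wins[i][j] + wins[j][i]) / (scores[i] + scores[j])
            for j in videos
            if j != i
        )
        updated[i] = total_wins / denominator if denominator > 0 else scores[i]
    norm = sum(updated.values())
    scores = {v: s * len(videos) / norm for v, s in updated.items()}

for video, score in sorted(scores.items(), key=lambda kv: -kv[1]):
    print(f"{video}: {score:.2f}")
```

Judging from their articles, the real system is considerably more elaborate, with several quality criteria and weighting of contributors, but the basic move of turning many local comparisons into a global ranking is similar in spirit.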


Tournesol is the kind of project that makes people watch even more videos. Or, in the best case, the number of videos watched may remain constant while the quality of the selection, however measured, improves. Meanwhile, in other parts of society, people who try to be productive, making art or music, find that they are wasting their time watching stupid videos. Even if a video is about science, one may be left feeling no wiser. Learning by reading books is more efficient. Granted, learning styles differ; for some people reading books is out of the question, while a video may be just perfect.


Tournesol's site allows anyone with the right kind of email account to join (a measure against fake anonymous accounts). They also link to several published articles that describe their methods and aims.


https://tournesol.app/


A smolnet solution would never care for big data, and there would be no need for algorithms to decide what to watch, read, or listen to next. The randomness of a webring creates a flat structure which is entirely sufficient. On the big web things are as they are, so I don't deny that Tournesol could become useful if they succeed.
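

For contrast, the webring alternative needs no metric at all. A minimal sketch, assuming nothing more than a hand-maintained list of member sites (the addresses below are invented):

```python
# The smolnet non-algorithm: pick the next capsule at random from a webring.
# No view counts, no scores, no personalisation.

import random

# Hypothetical webring members.
webring = [
    "gemini://example-zine.example/",
    "gemini://another-glog.example/",
    "gemini://obscure-music.example/",
    "gemini://hand-drawn-maps.example/",
]

def next_stop(current=""):
    """Return a random member of the ring, excluding the current page."""
    candidates = [url for url in webring if url != current]
    return random.choice(candidates)

print(next_stop("gemini://example-zine.example/"))
```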



glog index

Main page



