
Universal


Finally, a trending Gemini conversation I can get interested in!


Now that I'm regularly checking Gemini again, I've gone back to browsing CAPCOM, the aggregator designed to surface new-to-you Gemini content. Last week there were a couple of posts about search engine indexing and ranking, specifically in relation to Gemini search engines. Last I checked there was only GUS, so I was a little surprised and intrigued to find that there are now four:


AuraGem

GUS

TLGS

Kennedy


In typical Gemini style, Kennedy is currently not loading for me.


The two articles in question were by Krixano, the AuraGem developer, and a reply of sorts from Martin, the developer of TLGS.


Search Engine Ranking Systems Are Being Left Unquestioned

Search Engine Dilemma: Bias VS Accuracy


I found Martin's reply particularly interesting in terms of its assumptions about what search engines are for, what they should be expected to do, and, in an implied way, the nature of knowledge itself. Martin seems particularly exercised by the idea of a pure machine, unsullied by humanity: Martin's perfect search engine would be exactly accurate and completely free of "bias", which is the point of the post. Martin is rightly concerned about big corporations, and large, powerful entities generally, controlling the flow of information and the results that search engines provide. Yet the underlying assumption appears to be that a single, universal search engine should be able to serve all humans and all use cases equally. This seems to me a fundamental error. Large corporations and governments can indeed be bad, and generally it is their bigness that lies at the root of the problem. Why should a search tool be any different?
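To make the point concrete (this is my illustration, not something from either post): even the most "neutral" ranking formula is a stack of human choices. The sketch below is a toy BM25-style scorer; every part of it, the decision to reward term rarity, the saturation constant k1, the length-normalisation weight b, is a judgement call someone made, and changing any of them reorders results.

```python
import math

# Toy BM25-style relevance scorer. Nothing here is objective: the choice to
# reward rare terms (idf), the saturation constant k1, and the document-length
# penalty b are all tunable human decisions that change the ranking.
def bm25_score(tf, df, doc_len, avg_len, n_docs, k1=1.5, b=0.75):
    # Rarer terms across the collection score higher (a design choice).
    idf = math.log((n_docs - df + 0.5) / (df + 0.5) + 1)
    # Repeated terms help, but with diminishing returns governed by k1;
    # longer-than-average documents are penalised in proportion to b.
    norm = tf * (k1 + 1) / (tf + k1 * (1 - b + b * doc_len / avg_len))
    return idf * norm
```

Whoever picks k1 and b is deciding, for everyone, how much verbosity should cost and how quickly repetition stops mattering. That is bias by construction, just quiet bias.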


I've read a lot over the years about the problems with search and discovery tools. It is, after all, a large part of my job to understand how these things work. I also came of age in the early 1990s, in a world before Google. As both Krixano and Martin rightly point out, there are models other than those currently in use. But neither seems interested in building anything other than a "universal" search engine.


There is no universal truth. There is no uncontested set of facts. All knowledge is socially constructed. Yes, all knowledge. Yes, even that one. Why then even bother attempting to build a tool to surface information and knowledge without human intervention or bias? It's a nonsense. Even Wikimedia realises this: there is no single Wikipedia but rather many, separated by language. That is a crude way to divide up humanity, but even so, it highlights that the tool considered to be the one, universal, singular record of all human knowledge doesn't actually try to be that.


Universal search engines, singular encyclopedias, libraries of all the world's knowledge: humans are drawn to such ideas, but they are fantasies. Increasingly I am interested in how to revitalise and relegitimise the idea of, well, "the right tool for the job". An Encyclopedia of X, where "X" is something reasonably contained, like a city, a field of study, or a concept. A Dictionary of Y, where "Y" is anything from food combinations to finance jargon. It probably sounds very "old man yells at cloud", but despite all the obvious gains of being able to "just Google it", there have definitely been losses from using one tool to find out about ...everything.


I'd love to read more about what these developers are thinking about how to make search work, and maybe even have a dialogue with them about what they're trying to do, why, and what assumptions drive their work.


-- Page fetched on Fri May 17 11:04:04 2024