
Natural Stupidity


The language used in AI papers always captures my attention. In all those papers, the models and architectures are always *state of the art*. There is never a primitive model. They are always the best, the brightest, the most creative, even if they just recycle old ideas and combine basic, simple thoughts.


This air of self-importance has resulted in a zoo of complicated acronyms that even their models can't comprehend. IMO, this zoo of acronyms exists to confuse the reader and make him think the concept is more important than it really is.


It takes an encyclopedic mind just to remember all those useless acronyms and figure out the differences between them. Is R-CNN different from RPN? Does the H in ViT-H mean hybrid or hierarchical? Is MViT a multi-scale or a multi-axis vision transformer? Etc.


Late in 2017, a paper titled *Attention Is All You Need* proposed a powerful architecture called the Transformer that proved very successful in Natural Language Processing tasks. A search on arXiv now shows hundreds of papers using a similar title. It seems there are more papers titled *X is all you need* than there are researchers in the field, demonstrating the originality and creativity of the people in the domain.


Reading these cryptic papers makes me wonder if they put so much effort into enhancing artificial intelligence models only to cover up their own natural stupidity!


👻 naf

2023-09-01 · 9 months ago · 👍 eph, coderwx


1 Comment


☯️ eph · 2023-09-02 at 15:34:

Some of them might be faked too, spat out of paper mills or poorly Google-translated just to get someone's name "published".
