

● 08.26.23



● Gemini Links 26/08/2023: Politics and Benchmarking


Posted in News Roundup at 8:40 am by Dr. Roy Schestowitz


Gemini* and Gopher


Personal/Opinions


↺ 🔤SpellBinding: UEGMRSB Wordo: PYGM


↺ Work Slopped into the Water


We extracted cases and cases of jars from the *dispensa* (pantry) and from the two storage units on the other side of the *finca* (farm estate). Some had been placed there nearly two decades ago. They were cherries and figs and myriad other comestibles preserved for an unknown future in this realm by a person who no longer lingers in said realm. She was a product of another time, of a generation and a mentality that never accustomed itself to an abundance now taken for granted.


We forced each jar open with the tines of forks and the now-cracked blades of cheap dinnerware. Contents were poured into buckets. One by one, I lugged these buckets to the stream that flows beside the house, that flows to the river Tirón and is finally lost forever to the Cantabrian Sea. I tipped the buckets, and hours and days and weeks and months of work slopped into the water.


↺ Re: Dresses


Wokeness has reared its ugly head on Gemini yet again.


I seem to remember someone, in reply to an earlier post of mine, not grasping my definition of wokeness.


Politics and World Events


↺ ELP, The Only Way


lol funny and sad not gonna bother about good writing apostrophes wtv


there is no god neither me nor you nor anyone else has ever seen heard tasted felt smelled experienced this superhero


Technology and Free Software


↺ Benchmarking RK3588 NPU matrix multiplication performance


My goal with my RK3588 dev board is to eventually run a large language model on it. For now, the RWKV model seems to be the easiest to get running; at least I don’t need to somehow get MultiHeadAttention to work on the NPU. However, while experimenting with rwkv.cpp, which runs inference purely on the CPU on the RK3588, I noticed that quantization makes a huge difference in inference speed: INT4 is about 4x faster than FP16. That is not a good sign. It indicates to me that memory bandwidth is the bottleneck, and if so, the NPU may not make much of a difference.
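That reading is consistent with a simple back-of-envelope estimate. The sketch below is an illustration, not a measurement: it assumes a purely bandwidth-bound decode step, an assumed effective DRAM bandwidth of 20 GB/s (not a benchmarked RK3588 figure) and a hypothetical 3-billion-parameter model, then computes the tokens-per-second ceiling for FP16 versus INT4 weights.

```python
# Back-of-envelope check of the memory-bandwidth hypothesis.
# Assumptions (illustrative only, not measured): ~20 GB/s effective DRAM
# bandwidth and a 3-billion-parameter model. In a purely bandwidth-bound
# decode step, every weight is streamed from memory once per token, so
# time per token ~ weight_bytes / bandwidth.

BANDWIDTH_BYTES_PER_S = 20e9   # assumed effective memory bandwidth
N_PARAMS = 3e9                 # assumed parameter count

def tokens_per_second(bytes_per_param: float) -> float:
    """Upper bound on tokens/s when weight streaming dominates."""
    weight_bytes = N_PARAMS * bytes_per_param
    return BANDWIDTH_BYTES_PER_S / weight_bytes

fp16 = tokens_per_second(2.0)   # FP16: 2 bytes per weight
int4 = tokens_per_second(0.5)   # INT4: 0.5 bytes per weight
print(f"FP16 ceiling ~{fp16:.1f} tok/s, INT4 ceiling ~{int4:.1f} tok/s, "
      f"speedup ~{int4 / fp16:.1f}x")
```

Under these assumptions the predicted speedup is exactly the 4x ratio between weight sizes, matching the roughly 4x observed with rwkv.cpp. A compute-bound workload would not speed up merely because the weights got smaller, which is why the 4x figure points at memory bandwidth rather than CPU or NPU compute.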



----------

Techrights

➮ Sharing is caring. Content is available under CC-BY-SA.
