-- Leo's gemini proxy
-- Connecting to gemini.tuxmachines.org:1965...
Tux Machines
Posted by Roy Schestowitz on Dec 05, 2022
> It’s been almost a year since I announced the 15-Minute Bug Initiative for Plasma. In a nutshell, this initiative proposed to identify and prioritize fixing bugs you can find “within the first 15 minutes of using the system” that make Plasma look bad and feel fundamentally unstable and broken.
> Josh and Kurt talk about a new tool that can do Stylometry analysis of Hacker News authors. The availability of such tools makes anonymity much harder on the Internet, but it’s also not unexpected. The amount of power and tooling available now is incredible. We also discuss some of the future challenges we will see from all this technology.
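The episode doesn't describe the tool's internals, but a common baseline for this kind of author fingerprinting is comparing character n-gram frequency profiles. A minimal sketch, assuming nothing about the actual tool (all texts and function names below are illustrative; real stylometry uses far richer feature sets):

```python
# Toy stylometric fingerprinting: character n-gram profiles + cosine similarity.
# Illustrative only -- not the tool discussed in the episode.
from collections import Counter
import math

def ngram_profile(text, n=3):
    """Relative frequency of character n-grams in a text."""
    text = text.lower()
    grams = [text[i:i + n] for i in range(len(text) - n + 1)]
    total = len(grams)
    return {g: c / total for g, c in Counter(grams).items()}

def cosine_similarity(p, q):
    """Cosine similarity between two sparse frequency profiles."""
    dot = sum(v * q.get(k, 0.0) for k, v in p.items())
    norm_p = math.sqrt(sum(v * v for v in p.values()))
    norm_q = math.sqrt(sum(v * v for v in q.values()))
    return dot / (norm_p * norm_q)

# Two comments in a similar voice should score closer to each other
# than to a stylistically different one.
a = "Honestly, I think the kernel scheduler changes are overblown."
b = "Honestly, I think the new init system drama is overblown too."
c = "URGENT!!! buy now, limited offer, click here immediately!!!"

print(cosine_similarity(ngram_profile(a), ngram_profile(b)))
print(cosine_similarity(ngram_profile(a), ngram_profile(c)))
```

Even this crude measure separates the voices; the point of the discussion is that far stronger versions of the same idea are now cheap to run at scale.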
> Following that we have a new limited edition board, the Constellation MultiStar Ornament. This is the same board found in our Qwiic Constellation Kit (you can learn more about that below) but we've turned this one into a festive holiday ornament. Don't worry, it still works!
> In addition to these deals, we are also now offering a Constellation MultiStar Ornament! We've heard multiple requests from you to offer the MultiStar on its own, separate from the SparkFun Constellation MicroMod Kit, and we wanted to oblige you during the holiday season!
> Data science and data engineering are incredibly cognitively demanding professions. As data professionals, we are required to leverage both our analytical/engineering skills and our interpersonal skills to be effective contributors within our organisations. Based on my personal experience, the field seems to concentrate humans who are detail-oriented, curious, impact-driven and tenacious to a fault. This Type A personality profile, while magical when applied to technical work, could reasonably also count as an occupational hazard.
> If you squint, LLMs resemble something like a vector search database. Items are stored as embeddings, and queries return deterministic yet fuzzy results. What you lose in data loading time (i.e., model training), you make up for in compression (model size) and query time (inference). In the best case, models denoise and clean data automatically. The schema is learned rather than declared.
-- Page fetched on Thu Jun 13 20:48:29 2024