
Tux Machines


Programming Leftovers


Posted by Roy Schestowitz on Aug 09, 2022


KDE: Bugfixes and GSoC Reports

Open Hardware and Linux Foundation Openwashing



1.63.0 pre-release testing | Inside Rust Blog


↺ 1.63.0 pre-release testing | Inside Rust Blog


> The 1.63.0 pre-release is ready for testing. The release is scheduled for August 11. Release notes can be found here.



Slackware: OpenJDK11 has been added to my repository


↺ Slackware: OpenJDK11 has been added to my repository


> For ages, I have had Java 7 and Java 8 packages in my repository. I compile these versions of Java from the OpenJDK sources, using the icedtea framework.


> People have been asking about more recent versions of Java; in particular, Java 11 and Java 17 are required more and more by software projects. So far, I have been hesitant, since icedtea still only supports Java 7 and 8. Writing a new build script from scratch is a lot of work, and Java gives little reward.


> Eventually, I decided to build Java 11 packages regardless, the main reason being that LibreOffice seems to need it to enable functionality in Base. Therefore, expect the next update of my LibreOffice packages to be compiled against OpenJDK11.


> Note that I will not be creating separate JRE (Java Runtime Environment) packages. The JDK (Java Development Kit) is what you’ll get from me. It contains everything you need to compile and run Java programs. Don’t forget to log out and log in again after installing openjdk11, since it installs a profile script which is sourced during login.
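
Since the JDK package ships both the compiler and the runtime, a trivial program makes a quick post-install sanity check (the file and class name below are just an illustration, not part of the package):

```java
// HelloJava.java -- checks that the new JDK can both compile and run code.
// Compile and run with:
//   javac HelloJava.java
//   java HelloJava
public class HelloJava {
    public static void main(String[] args) {
        // java.version reports the runtime in use, e.g. "11.0.x" for OpenJDK11.
        System.out.println("Running on Java " + System.getProperty("java.version"));
    }
}
```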



Top 10 Best Java Frameworks For Web Development in 2022


↺ Top 10 Best Java Frameworks For Web Development in 2022


> Java is one of the most widely used object-oriented languages thanks to its versatility and ease of implementation. Many corporate IT sectors rely heavily on Java, and Java developers are in high demand. So you can only imagine how popular Java frameworks are, as they make working with Java faster and easier in real-world scenarios.


> That said, you might not even notice that Java plays a significant role in the software you regularly use, such as Spotify, Twitter, Opera Mini, and many more. Hence, if you intend to build a career in Java-related web development, learning the proper usage of popular Java web frameworks and staying up to date with the most exciting ones is a must.



A guide to JVM interpretation and compilation | Opensource.com


↺ A guide to JVM interpretation and compilation | Opensource.com


> Java is a platform-independent language. Programs are converted to bytecode after compilation. This bytecode gets converted to machine code at runtime. An interpreter emulates the execution of bytecode instructions for the abstract machine on a specific physical machine. Just-in-time (JIT) compilation happens at some point during execution, and ahead-of-time (AOT) compilation happens during build time.


> This article explains when an interpreter comes into play and when JIT and AOT will occur. I also discuss the trade-offs between JIT and AOT.
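
To see where the interpreter ends and the JIT begins in practice, here is a minimal sketch (the HotLoop class and its workload are my own illustration, not from the article). Standard HotSpot flags let you compare the two modes: java -Xint HotLoop forces pure interpretation, while java -XX:+PrintCompilation HotLoop logs methods as the JIT compiles them.

```java
// HotLoop.java -- a tiny workload whose hot loop is a typical JIT candidate.
public class HotLoop {
    static long sum(int n) {
        long s = 0;
        for (int i = 0; i < n; i++) {
            s += i; // hot path: after enough iterations the JIT compiles sum()
        }
        return s;
    }

    public static void main(String[] args) {
        long start = System.nanoTime();
        long result = sum(100_000_000);
        long elapsedMs = (System.nanoTime() - start) / 1_000_000;
        System.out.println("sum = " + result + " in " + elapsedMs + " ms");
    }
}
```

Running the same class both ways makes the cost of pure interpretation, and the speedup from JIT compilation, directly visible.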



NNAISENSE announces release of EvoTorch, a rare open-source evolutionary algorithm library | VentureBeat


↺ NNAISENSE announces release of EvoTorch, a rare open-source evolutionary algorithm library | VentureBeat


> EvoTorch is built on top of the open-source PyTorch machine learning library.


> Timothy Atkinson, research scientist at NNAISENSE, explained that EvoTorch has several components, including a collection of evolutionary algorithms and logging capabilities so a data scientist can track machine learning experiments in real time.


> “The main idea is that you can take anything that you have built in PyTorch and immediately optimize it with EvoTorch,” Atkinson said.


> NNAISENSE has also integrated EvoTorch with the open-source Ray framework that is used for scaling Python and AI applications. Atkinson said that if a data scientist builds a problem as a PyTorch function to optimize on EvoTorch, it’s possible to scale to thousands of CPUs and hundreds of GPUs.


> “We’ve built EvoTorch in a very sensible way on top of the Ray library, which means that it can scale as much as you can afford,” Atkinson said.



Descriptors are hard


↺ Descriptors are hard


> Over the weekend, I asked on Twitter if people would be interested in a rant about descriptor sets. As of the writing of this post, it has 46 likes, so I’ll count that as a yes.


> I kind-of hate descriptor sets…


> Well, not descriptor sets per se. More descriptor set layouts. The fundamental problem, I think, was that we too closely tied memory layout to the shader interface. The Vulkan model works ok if your objective is to implement GL on top of Vulkan: you want 32 textures, 16 images, 24 UBOs, etc., and everything in your engine fits into those limits. As long as they’re always separate bindings in the shader, it works fine. It also works fine if you attempt to implement HLSL SM6.6 bindless on top of it: have one giant descriptor set with all resources ever in giant arrays, and pass indices into the shader somehow as part of the material.


> The moment you want to use different binding interfaces in different shaders (pretty common if artists author shaders), things start to get painful. If you want to avoid excess descriptor set switching, you need multiple pipelines with different interfaces to use the same set. This makes the already painful situation with pipelines worse: now you need to know the binding interfaces of all pipelines that are going to be used together, so you can build the combined descriptor set layout, and you need to know that before you can compile ANY pipelines. We tried to solve this a bit with multiple descriptor sets and pipeline layout compatibility, which is supposed to let you mix and match a bit. It’s probably good enough for VS/FS mixing but not for mixing whole materials.
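
To make that coupling concrete, here is a minimal sketch of building a descriptor set layout through the LWJGL Java bindings (the bindings, the two-binding layout, and the LayoutSketch class are my own illustration; the post discusses Vulkan at the C API level). Binding numbers, descriptor types, and shader stages are all fixed here, and the resulting layout handle is needed again at pipeline-layout creation, before any pipeline can be compiled:

```java
import java.nio.LongBuffer;

import org.lwjgl.system.MemoryStack;
import org.lwjgl.vulkan.VkDescriptorSetLayoutBinding;
import org.lwjgl.vulkan.VkDescriptorSetLayoutCreateInfo;
import org.lwjgl.vulkan.VkDevice;

import static org.lwjgl.vulkan.VK10.*;

public final class LayoutSketch {
    // Builds a layout with one UBO and one combined image sampler.
    // Every pipeline that shares descriptor sets of this layout must agree
    // on exactly these bindings, which is the coupling the post complains about.
    static long createLayout(VkDevice device) {
        try (MemoryStack stack = MemoryStack.stackPush()) {
            VkDescriptorSetLayoutBinding.Buffer bindings =
                    VkDescriptorSetLayoutBinding.calloc(2, stack);
            bindings.get(0)
                    .binding(0) // binding = 0 in the shader interface
                    .descriptorType(VK_DESCRIPTOR_TYPE_UNIFORM_BUFFER)
                    .descriptorCount(1)
                    .stageFlags(VK_SHADER_STAGE_VERTEX_BIT | VK_SHADER_STAGE_FRAGMENT_BIT);
            bindings.get(1)
                    .binding(1) // binding = 1 in the shader interface
                    .descriptorType(VK_DESCRIPTOR_TYPE_COMBINED_IMAGE_SAMPLER)
                    .descriptorCount(1)
                    .stageFlags(VK_SHADER_STAGE_FRAGMENT_BIT);

            VkDescriptorSetLayoutCreateInfo info = VkDescriptorSetLayoutCreateInfo.calloc(stack)
                    .sType(VK_STRUCTURE_TYPE_DESCRIPTOR_SET_LAYOUT_CREATE_INFO)
                    .pBindings(bindings);

            LongBuffer pLayout = stack.mallocLong(1);
            // Error handling elided; a real caller should check the VkResult.
            vkCreateDescriptorSetLayout(device, info, null, pLayout);
            return pLayout.get(0);
        }
    }
}
```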



Picscale package removed


↺ Picscale package removed


> I experimented with an older version of bacon in OE; picscale compiled OK, but there is a segmentation fault when trying to use it.



Regular Expressions Cheatsheet - Make Tech Easier


↺ Regular Expressions Cheatsheet - Make Tech Easier


> If you work with text, you’ll appreciate how useful regular expressions are. These are short sequences of characters which allow you to create elaborate rules describing what a word looks like. These rules can either be as simple as matching a single letter in a document or as complex as looking for every word that begins with “a” or “c” but ends in “ism.”
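
Here is a minimal sketch of that last rule in Java (the language and the RegexDemo class are my choice for illustration; the cheatsheet itself is language-neutral):

```java
import java.util.regex.Matcher;
import java.util.regex.Pattern;

public class RegexDemo {
    public static void main(String[] args) {
        String text = "Altruism and criticism differ from tourism and schism.";
        // \b[ac]\w*ism\b : a word boundary, an initial 'a' or 'c',
        // any run of word characters, the literal "ism", a word boundary.
        Pattern p = Pattern.compile("\\b[ac]\\w*ism\\b", Pattern.CASE_INSENSITIVE);
        Matcher m = p.matcher(text);
        while (m.find()) {
            System.out.println(m.group()); // prints "Altruism" then "criticism"
        }
    }
}
```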



CXL Borgs IBM’s OpenCAPI, Weaves Memory Fabrics With 3.0 Spec [Ed: IBM-sponsored puff pieces without disclosure from the author or the publisher]


↺ CXL Borgs IBM’s OpenCAPI, Weaves Memory Fabrics With 3.0 Spec


> When the CXL protocol is running in I/O mode – what is called CXL.io – it is essentially the same as the PCI-Express peripheral protocol for I/O devices. The CXL.cache and CXL.memory protocols add caching and memory addressing atop the PCI-Express transport, and run at about half the latency of the PCI-Express protocol. To put some numbers on this, as we did back in September 2021 when talking to Intel, the CXL protocol specification requires that a snoop response to a snoop command, when a cache line is missed, come back in under 50 nanoseconds, pin to pin, and that for memory reads, pin-to-pin latency be under 80 nanoseconds. By contrast, a local DDR4 memory access on a CPU socket is around 80 nanoseconds, and a NUMA access to far memory in an adjacent CPU socket is around 135 nanoseconds in a typical X86 server.


> With the CXL 3.0 protocol running atop the PCI-Express 6.0 transport, the bandwidth is being doubled on all three protocol types without any increase in latency. That bandwidth increase, to 256 GB/sec across x16 lanes (including both directions), is thanks to the 256-byte flow control unit, or flit, fixed packet size (which is larger than the 64-byte packet used in the PCI-Express 5.0 transport) and the PAM-4 pulse amplitude modulation encoding that doubles the bits per signal on the PCI-Express transport. The PCI-Express 6.0 protocol uses a combination of cyclic redundancy check (CRC) and three-way forward error correction (FEC) algorithms to protect the data being transported across the wire, which is a better method than was employed with prior PCI-Express generations, and is why PCI-Express 6.0, and therefore CXL 3.0, will have much better performance for memory devices.
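
As a quick sanity check on the 256 GB/sec figure (using the standard PCI-Express 6.0 signaling rate): at 64 GT/s per lane, PAM-4 encoding delivers 64 Gbit/sec of raw bandwidth per lane in each direction, so a x16 link carries 16 × 64 = 1,024 Gbit/sec, or 128 GB/sec, each way; counting both directions gives the quoted 256 GB/sec.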



