-- Connecting to quasivoid.net:1965...
It has been more than two weeks since I last wrote an entry here. It has actually been more than two weeks since I've done much of anything at all. I had stopped doing most of my core activities, and lately I've been trying to build up steam to do them again, starting with writing this entry.
I stepped outside last night and saw that the full moon was bright and the air was warm (well, not skin warm, but warm enough that I'd be burning as soon as I started riding). At around 1 AM I rushed to put on my cycling kit, stuff my gullet with food, find a headlamp, and start riding.
I didn't do a super long distance, just a five mile lap around the neighborhood. As I've previously established, going much further requires getting on a single 55 mph highway, and that time of the morning is one of the worst for the number of drunks on the road.
A coyote darted out of the bushes. Instead of completely crossing the road, it ran along it away from me. I probably spent twenty seconds or so chasing it. I knew they were small but it was still smaller than I expected.
I haven't been riding much during this downtime, other than a couple of five mile rides. There were three weeks where I didn't ride at all, and my natural motivation for long distances has not come back on its own. I'm aiming to start doing twenty-plus mile rides again soon.
On the night ride I climbed the half-mile, 13% hill one gear harder than I've ever managed before. That felt pretty cool, especially in the middle of the night. Since my home comes immediately after the climb, I pretended I was a racer and squirted all the water out of my downtube bottle at the base of it.
Nine months after Lagtrain, inabakumori published Haru no Sekibaku. I love it, especially the dynamics of the vocals. This is the first song they have produced without using VOCALOID, opting for CeVIO AI instead, an instrument that's part of some new wave of "AI" vocal synths.
I first heard about this strain of software when I randomly clicked on a cover of Säkkijärven polkka using Tohoku Kiritan, except here she's called "AI Kiritan". I was confused by the vocals at first, since I could immediately recognize that they were not human, but they were nothing like what VOCALOID could produce. Turns out, that cover was made using something called NEUTRINO, which is similar in concept to CeVIO, utilizing machine learning to produce vocals.
I love the sound of these new synthesizers, but I desperately hope that the goal of future development doesn't become "be indistinguishable from a human", since their artificial nature is what makes them good. I doubt the older instruments will ever fall into disuse, since many artists will still want vocals that only something like VOCALOID can achieve, but I can see a cultural shift away from them happening: many listeners who never cared much for software like VOCALOID in the first place seem to love the move towards realism.
Cavalcade is out now. It goes. I'm not sure how I would rank it against Schlagenheim, or if that's even possible since it's fairly different from its predecessor anyway, but it's solid. It hasn't had the same wow factor for me that Schlagenheim did; either it will grow on me with subsequent listens, or my expectations have risen dramatically since black midi debuted.
The moment it was released, black midi did a premiere on YouTube, with an interview with a fake black midi panel as the intro and an announcement from the real black midi as the outro. The interview pokes fun at music journalism, and I'm pretty sure the "journalists" are the real black midi; at least one of those voices, if not all of them, is Geordie Greep's.
I've bought tickets to one of black midi's shows in October, although it's probably 50/50 whether that falls through, since the disease forecast keeps changing. Either way, the first time I saw them, on their Schlagenheim tour, was one of the most memorable shows I've ever been to, and memories of the pit there flooded my mind when I purchased tickets for this tour.
I've been slowly rolling into this. I haven't been able to do immersion every day like I wanted to, but I have most days, and I've gotten sentence mining down to a science. I rewatched FLCL with Japanese audio and subtitles, and found that I understood a significant portion of it; almost half of the sentences were N+1 (containing exactly one unknown element).
I'll share some resources I've found that make immersing with anime, and producing flashcards from it, much easier. The first is Kitsunekko, a directory of Japanese subtitle files for various anime. The files are almost all mistimed, so you'll want to learn how to apply a timing offset in your video player of choice when using them. When watching FLCL with subs borrowed from Kitsunekko, I had to apply a 600 ms offset.
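If you'd rather not set the offset on every playback (mpv's `--sub-delay` option, in seconds, handles it at watch time), you can bake the shift into the file once. Here's a minimal sketch in Python that shifts every timestamp in an SRT file by a fixed number of milliseconds; it assumes .srt files, and Kitsunekko also hosts .ass files, which use a different timestamp format this doesn't handle.

```python
# Shift every "HH:MM:SS,mmm" timestamp in SRT subtitle text by a fixed offset.
import re
from datetime import timedelta

TS = re.compile(r"(\d{2}):(\d{2}):(\d{2}),(\d{3})")

def shift_timestamp(match, offset_ms):
    h, m, s, ms = (int(g) for g in match.groups())
    t = timedelta(hours=h, minutes=m, seconds=s, milliseconds=ms) \
        + timedelta(milliseconds=offset_ms)
    # Recover exact milliseconds from the timedelta; clamp at zero so a
    # negative offset can't produce a negative timestamp.
    total_ms = max(0, t.days * 86_400_000 + t.seconds * 1000 + t.microseconds // 1000)
    h, rem = divmod(total_ms, 3_600_000)
    m, rem = divmod(rem, 60_000)
    s, ms = divmod(rem, 1000)
    return f"{h:02}:{m:02}:{s:02},{ms:03}"

def shift_srt(text, offset_ms):
    """Return the SRT text with all timestamps shifted by offset_ms."""
    return TS.sub(lambda m: shift_timestamp(m, offset_ms), text)
```

For the FLCL case above you'd read the file, run it through `shift_srt(text, 600)` (or `-600`, depending on which direction the subs drift), and write it back out.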
An absolutely awesome plugin for mpv called mpvacious makes creating Anki flashcards while you watch nearly frictionless. With properly timed subtitles, you can produce a sentence card with the sentence, a screenshot, and the matching audio with a single keystroke, and only a couple more keystrokes for sentences spanning multiple subtitle lines. Then, when you're done watching, you can go into Anki to filter which cards to keep and enter definitions.
I haven't started building the habit of passive immersion yet, so I still haven't made proper use of this next resource, but I know I will very soon. Refold suggests that when inserting shows you've already watched into a passive immersion loop, you remove all sections of audio with no dialogue, so that you can listen to more in the same period of time. This takes some manual processing, and there are tools that use subtitle files for timing to help, but there also exist multiple repositories of television shows already condensed. The one I have bookmarked is a mega.nz archive. Now that I've finished watching FLCL, I'll be downloading condensed FLCL audio from there.
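The timing half of that processing is simple enough to sketch. Assuming you've extracted the subtitle (start, end) times in seconds, condensing amounts to padding each line slightly, then fusing lines separated by less than some gap into contiguous keep-intervals. The function name and thresholds below are my own illustration, not taken from any particular condenser tool:

```python
# Compute the dialogue intervals to keep when condensing a show's audio,
# by padding and merging subtitle (start, end) times given in seconds.

def condense_intervals(subs, pad=0.2, gap=1.0):
    """Return merged keep-intervals covering all padded subtitle lines."""
    if not subs:
        return []
    # Pad each line a little so speech isn't clipped, then sort by start time.
    padded = sorted((max(0.0, s - pad), e + pad) for s, e in subs)
    merged = [list(padded[0])]
    for s, e in padded[1:]:
        if s - merged[-1][1] <= gap:
            # Close enough to the previous interval: fuse them into one.
            merged[-1][1] = max(merged[-1][1], e)
        else:
            merged.append([s, e])
    return [tuple(iv) for iv in merged]
```

The resulting intervals could then be cut and concatenated with something like ffmpeg's `atrim` and `concat` filters to produce the condensed audio file.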
While watching FLCL I shocked myself with how much I understood. I finished each episode with around fifty cards to process, although I cut each down to fifteen cards after filtering out sentences that weren't truly N+1. I'm also working to make reading Japanese news and imageboards part of my habits once again, something I stopped doing regularly after graduating high school.
While setting up Anki for sentence mining, I was astounded by how much more difficult the build process has become since I last used it. When I first used Anki to learn the Jouyou kanji set, it Just Worked on my then-Slackware laptop. Later, the binary distribution of Anki developed a dependency on systemd for who knows what reason, but it would still work fine if you built it yourself. Many distributions have been frozen on Anki 2.1.15 for over a year, since it began depending on Rust and using rustup as part of its build process.
Most recently, Anki has gained a dependency on a fresh new build system called bazel, written in Java, which, despite reinventing the wheel to magically make build systems painless, has an awful, hideous, non-portable bootstrap process. Being unable to build Anki myself, as bazel is not currently packaged for Void (and I couldn't build bazel myself either), and unable to use the binary distribution of Anki due to its systemd dependency, I was forced to distro hop for the first time in years.
I decided to install Arch, since I figured it's the only distribution with a community large and unstoppable enough to put up with anyone's shit. And that's true; it's been helpful in this regard in ways beyond just having Anki working, but holy shit do I hate this distribution. I don't want to get into everything I hate about Arch, I just want to complain. But really, Anki devs, what are you doing? And more broadly: please never force software down people's throats if you are not going to take responsibility for portability. This means bazel, systemd, Rust, etc...
Anyway, here's an issue from the void-packages issue tracker that talks about the suck of bazel.
There's a short entry about .flow in the wiki here, but until recently I had never actually completed the game. Over the course of four days, I did four two-hour livestreams, starting at midnight, of me playing the game and talking with the couple of friends who showed up in chat.
It is much comfier than I remembered, and I was able to find 23 of the effects without cheating, which I was pretty proud of, since I couldn't stop going in circles through the same areas the first time I played it. I did use a guide for the last "chapter" of the game, when Sabitsuki becomes Rust, though.
.flow is arguably the least "meaningful" of the well known Yume Nikki fangames, but it certainly has my favourite aesthetic of them all. It has a dark, serene, industrial-decay feel that permeates the entire game rather than just a few areas. The heavy use of machine, natural, and gory imagery gives it a nice feel. Also, I love Oreko and hope her machine turns out very cool.
-- Page fetched on Fri Jul 23 17:01:46 2021