Free Will


Some systems consist of multiple layers.


Computers


On one level, a computer is just a system of “on” or “off” switches and a deterministic system to toggle them. If A is on and B is off then C will turn on. That kinda thing. The most obedient and predictable system of semaphores ever created.
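

If you want that spelled out, here’s a toy sketch in Python (just an illustration of the idea, not how any real chip is wired): deterministic little gates where the same inputs give the same output, every single time.

```python
def and_gate(a: bool, b: bool) -> bool:
    # Deterministic: the same a and b always give the same result.
    return a and b

def not_gate(a: bool) -> bool:
    return not a

def c_switch(a: bool, b: bool) -> bool:
    # "If A is on and B is off then C will turn on."
    return and_gate(a, not_gate(b))

assert c_switch(True, False) is True    # A on, B off: C turns on
assert c_switch(True, True) is False    # any other combination: C stays off
assert c_switch(False, False) is False
```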


On another layer, many steps up, you can play Super Mario Bros or read fanfic or calculate equations or use cubic interpolation on images.


Network protocols


On one layer, systems can connect to each other using IP addresses. One layer up, they can send information losslessly through TCP. Then they can connect to a particular port and send and receive information using HTTPS. And on top of that you can see images or video or read the news.
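

Roughly, in Python (example.com is just a stand-in host for illustration): on a lower layer you open a TCP connection yourself and shovel raw bytes around; a couple of layers up you just ask for the page and let IP, TCP, TLS and HTTP framing happen somewhere underneath you.

```python
import socket
from urllib.request import urlopen

# Lower layer: a raw TCP connection where the bytes are all your problem.
with socket.create_connection(("example.com", 80)) as sock:
    sock.sendall(b"GET / HTTP/1.1\r\nHost: example.com\r\nConnection: close\r\n\r\n")
    raw = sock.recv(4096)  # status line, headers and body, as one undigested lump

# Higher layer: ask for the page; the stack below handles the plumbing.
with urlopen("https://example.com/") as response:
    page = response.read()
```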


Vision


In a lit room full of interesting objects, light waves bounce off everything, and the bouncing can be different for different wavelengths. In our eyes, there are cells that can detect different wavelengths (and different amounts); with the help of the neurocircuitry in the brain, we translate that into color and use it to create a visual representation of what’s in the room. Pigments of paint, or the varieties of LED lights in a computer screen, can also reflect or even emit light of similar wavelengths and frequencies, making it possible to look at a photograph of a particular room and recognize it.


On the one hand, something is “just” a piece of paper with ink pigments that make the wild waves of light bounce back at specific wavelengths and amounts. On the other, we can recognize that paper as a comic book page and see the characters and read their words.


Wax Sticks


Building blocks


With all those systems, we’ve made building blocks out of building blocks out of building blocks. We’ve defined a range of wavelengths as the color “red”, created multiple ways of producing those wavelengths, and can use it to tell Mario from Luigi on the screen or Superman from Mon-El on the printed page.


Sometimes what’s going on in the intermediate layers is pure scaffolding for layers above or below. The Mario game works by sprites, tiles, palettes and pixels.


Thought and Emotion


On a material layer, humans have some signaling systems, both electrical (neurons firing or not firing, depending on how many signals they receive) and chemical, from hormones and other signaling substances.


On another layer, some steps up, we can have the thought “I don’t feel like omelet for lunch today” or “The first thing we’ll do when we get there is visit the horses”; complete with internal imagery, sense memory of omelet or of the grass field between the bus parking lot and the horse paddock and the scent of November.


The Separation of Layers


What I am trying to instill here is some sense of independence between the layers, at least in the layers as defined by our meaningful interaction with them. We don’t use a hand magnet to flip bits on a digital camera’s memory to create a megapixel photo. Instead we aim the viewfinder and press a button.


This isn’t literally magic, but... It’s an exhilarating thing. Making an app by combining libraries without a thought of the original bootstrapping compiler of the language of the operating system of the editor where those libraries were written, across seas and streams.


Free Will, Then


It’s been argued that the lowest layers of these systems are deterministic. That there is no true randomness. That given a specific set of inputs and circumstances there is only one result.


It has also been argued that logically, this means that upper layers are just as deterministic. The button presses on your Mario game are just a byte of ons and offs. A falling die, by this reasoning, can only land in one way, based on the winds and momentum and velocities and angles of the room, hand, and table.


Do decisions exist? Choices at any point on the road, not just at forks. I can keep writing this text or I can stop. Do ideas, invention, creative thought exist? Can there be “eureka moments” of coming up with a new connection or insight? Do moral deliberation and agony exist, such as if my friend tells me a terrible secret, what do I do?


On the layer where those terms are used, they do. Choices, options, dilemmas, decisions do exist as meaningful constructs on that level. Just like the word “word” is a meaningful construct when editing an English text. Ultimately, do words “exist”, or are they just ink stains on a typewriter’s paper?


Sometimes, when faced with a problem, it’s helpful to go to one of the layers below or above. (Try it, it’s a great tool!) Other times, the most straightforward solution is with the tools on the layer you are on.


Hey, am I dodging the question? I’m not really getting around the notion of the upper layers being just as deterministic as the lower ones. Pointlessly laying out the cards for a game of solitaire when the answer was already carved in granite by God when the deck was “shuffled” (i.e. meaninglessly fiddled with, in the reductionist mindset) and the decisions on which cards to move were already preset by my brain’s current state and ability (influenced by sleep, food, microbiome, light, noise).


On the layer where concepts such as “card” (as opposed to “clump of atoms”) or “game rules” (as opposed to “state of neurons”) are wielded, a game of solitaire can be interesting.


The Threat from Holism


If trying to approach problems from lower levels is reductionism, holism can be just as problematic for free will.


Setting aside all the atoms and quarks, if some people in a house discuss a dilemma and come up with a plan of action and then an unforeseen avalanche mauls them before they can actually do it, did they have free will?


Conclusion


Be careful to not misapply stuff from other layers to your current layer. You have a responsibility to make decisions and act on those decisions. If that only applies to the layer where the semantics of the word “decision” exist, then that “only relevant on this layer”-ness goes both ways.


The determinism of lower levels and the fatalism of upper levels can’t touch “decisions”, “choices” — because they can’t meaningfully use those semes without imbuing them!


“I might as well rob a bank because I’m just a preprogrammed heap of atoms and the universe’s heat death is unavoidable”—no! This is not a legit conclusion since there is no “might” or “may” or “decide to” on the deterministic lower layers. In the land of “maybe”, “maybe” does work.


Then sure, it might be argued that the layer as a whole, when viewed from outside, is a deterministic black box: it could only go one way. Those inputs could only ever create those outputs, but inside the layer, the process is described and interacted with as if there were decisions. Make those decisions responsibly.


Teapot Post Script for Logical Nitpickers


Skip this post script if you are not a philosophy nerd.


Just now, I wrote:


> The determinism of lower levels and the fatalism of upper levels can’t touch “decisions”, “choices” — because they can’t meaningfully use those semes without imbuing them!


To clarify, it’s not generally true that if a signifier can be used, the referent is ontologically meaningful. Sorry, fans of Russell’s Teapot and of the ontological argument.


Just saying that if the word “decision” can be uttered and understood, decisions can be made.


I am not saying that this inherently logically follows; in other words, I’m not saying “Because A, then causally B.” I am saying, though, “If no B, then there can also be no A!” (Since they operate on the same layer and by the same principles—cognitive “understanding” operates on the same layer [for a broadly enough drawn layer] as conscious “decision-making”.)


Yes, that does leave it open for deterministic reductionism to argue for the case that it follows that the word “decision” cannot be uttered and cannot be understood—that indeed no word can be understood. They do have a logically tight case for that. But if so, they may just as well STFU instead of making statements that, by their own jure, inherently can never be heard and understood, not even by themselves.


Russell’s Teapot

Ontological argument


Even more post script


Funny how the same peeps who argue that “since the bottom layer is deterministic, the layers above also need to be deterministic” have no problems grokking that even though weather is chaotic, climate can have patterns.


From a conversation about free will


When you’re writing a macro, you only have the lists and symbols. You can’t know what those symbols represent. But when you’re writing a function, you can. It’s meaningful when you’re working on that layer.
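

Python doesn’t have Lisp-style macros, but as a loose analogy (a sketch of mine using the ast module): at the syntax layer all you have are names and call nodes, with no idea what they stand for; at the function layer the symbols are bound to actual values and mean something.

```python
import ast

# Syntax layer: only the "lists and symbols" -- node types and names.
tree = ast.parse("roll(dice) + bonus", mode="eval")
print(ast.dump(tree))  # Name and Call nodes; nothing here knows what roll or dice are

# Function layer: the names are bound to values, and working with what they
# represent is meaningful.
def roll(dice):
    return sum(dice)

dice = [3, 4]
bonus = 2
print(roll(dice) + bonus)  # 9
```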


Maybe it’s “predetermined” that Alice called me and wanted to rob a bank and I considered it carefully for two hours until I “decided” not to. That’s just how the atoms are gonna fall, how the electrons and air vibrations are gonna transmit from her mouth into my ear and then the neurons firing and pencil-on-paper as I mindmap the decision.


Me “deciding” that way is still gonna be the way I operate on the human level. If I go “lol my decision is already made for me so might as well rob that bank like God intended”, that’s messed up. We are condemned to choose. That’s our responsibility as humans.


A neuron doesn’t have a choice on whether it’ll fire. It fires if it gets enough signals in.
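

As a toy sketch (the threshold idea only, not real neurophysiology):

```python
def fires(incoming_signals: list[float], threshold: float = 1.0) -> bool:
    # No deliberation: it fires if and only if enough signal comes in.
    return sum(incoming_signals) >= threshold

print(fires([0.3, 0.4]))         # False -- not enough input
print(fires([0.3, 0.4, 0.5]))    # True  -- threshold reached, it fires
```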


But our “interface”, our API, to these neurons is thoughts, memory, feelings, instincts, intuition, and decision. That’s the “layer” of head-neurons and brain-atoms we have access to, and that layer feels like free will, feels enough like free will that fighting the idea of free will is “misusing the API”.


You know about duck typing, right? In programming. The way we experience and interact with our will has all the responsibilities and consequences and interfaces of free will. All of it.
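

In case duck typing is new to you: it’s when the calling code only cares that a thing responds to the right interface, not what it “really” is underneath. A quick sketch:

```python
class Duck:
    def quack(self):
        return "quack"

class Person:
    def quack(self):
        return "I'm doing my best duck impression"

def make_it_quack(thing):
    # No isinstance check: if it has a .quack(), that's all we care about.
    return thing.quack()

print(make_it_quack(Duck()))
print(make_it_quack(Person()))
```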


So as far as “what do we do about this ‘will’ of ours”, it becomes a question of semantics of whatever the heck “free will” even means.


My own mental model for how the brain works uses the metaphor of strands. I have a bunch of strands: thought processes, sensations, emotions, dreams, ideas, metacognition; and I can direct my conscious attention (kinda like a spotlight) on one or a few of them at a time.


I can let myself feel how the bed I’m on feels against me, and how the glass feels under my thumbs as I’m typing this. I can think about an inner modelled calendar as I worry about the next laundry day.


I can think back to board games I played yesterday and how the heck I managed to finally win that difficult Star Wars game after such a long losing streak.


Sometimes I roughly sort these “strands” into “primary sensation”, “simulated experience”, and “reflexive awareness”. That’s just a rough “duck typed” sort; the map isn’t the territory, and the sort doesn’t imply the same etiology of these strands or other shared qualities beyond what qualifies them for these labels. But it’s useful enough.


In this model, “will”, then, becomes kinda ambiguous. It’s my “reflexive awareness” that lets me notice these “strands”. How I direct this awareness is sometimes related to what’s been historically called “will”. “I choose to focus on my breath right now.”


But mostly “will” is related to the simulated experiences of planning, thinking about pros and cons (not just with cognition but also with intuition, instinct, memory, emotion, gut feel, not-thinking-about-it-just-doing-it), about GTD stuff, and about making a directed effort towards a goal.


The more familiar I become with these “strands”, the more the word “will” becomes kind of a hodgepodge of somewhat-related things. Decisions, dispatch. If my arm reflexively pulls back my hand from a hot stove, and I later sense the words “ouch that was hot” run through my head, did I “will” that?


Language and “word-form” strands aren’t the only strands in there, after all.


Catatonia has a song lyric: “For all your Telecaster dreams, a Telecaster’s all you need.” (Kind of not accounting for the externalities in the transaction but there you go.) If you want to play bass, get a bass and start playing it. If you want to practice bass, start practicing.


There is only the present. K’s Choice has another song lyric: “Take my future, past, it’s fine, but now is mine.” If you wanna get good at playing a particular song and you practice every day for four weeks, and then on the fifth Monday your arms fall off in an accident, then you can’t play it (at least not the way you practiced it).


So in that sense, there is no “try”. We can label some actions (including cognitive actions such as thinking about a difficult programming problem) as directed, intentional effort.


“Will” is a convenient label for all kinds of different strands as long as they are related to decisions, to intentions, to directed effort, to hope (and “hope” is a label on a similarly tangled layer as “will” is), to doing actions (physical and cognitive) in the present that we hope will lead to a particular outcome, to a future. Ultimately there is only “now”.


And seen that way, we are condemned to choose. We have the opportunity (and with it, the responsibility) of directing our efforts via intentions and directions. If you wanna call that “free will”, I don’t mind that label. If you don’t, that’s fine by me too.


The Block Universe


Einstein said that all the past and all the future exist at the same time, immutably, and only our perception is moving through this four-dimensional “block”. That’s fine. The way our perception moves through it is by making what we perceive as choices. Don’t make dumb choices and blame it on “not having free will”.


Composition Considered Harmful

Means vs Ends

The block universe and you
