What’s the BFD w/ The Quantum Measurement Problem and Consciousness?
If you’ve followed trends in Machine Learning, Artificial Intelligence, or Quantum Computing over the past few decades — even just casually, through some sci-fi story plot — you’ll have encountered this much ado about consciousness. It’s quite a rabbit hole, right?
This caught my attention while watching two quantum physicists argue about the “measurement problem” (the fact that we never directly observe the collapse of the wave function) and the related observer effect. In public science communication, different interpretations of this problem (by different theoretical physicists) seem particularly slippery about the specifics of what an “observation” is, what “consciousness” is, and how these two concepts relate.
It surprised me that there was still so palpably little consensus on defining consciousness — not among New Age power-crystal gurus vibing on Ayahuasca, but among world-renowned, prize-winning scientists — even after centuries of progress in math, physics, and neuroscience. Progress is being made, but this inconsistency in the modern discourse revealed less of it than I’d assumed. Interpretations range from “human consciousness causes reality to materialize by observing it” to “electrons are conscious — human consciousness isn’t required”.
(Curiously, in Quantum Computing algorithm development, this topic is largely dismissed as trivial background detail for practical reasons — just as a Classical Computing developer doesn’t worry about semiconductor chemistry and the electromagnetic field-effect of a transistor with every line of Python code written.)
The contention arises when a panel of scientists is asked “Is a conscious observer required for any collapse of the wave function to happen?”
I want to understand this in more depth — or at least at a level to know why physicists aren’t all on the same page about how to answer this.
How can I recognize consciousness by qualifying it?
So, before any in-depth research, I try various thought experiments to make sense of this through my own lens of day-to-day information technology systems design. How would I, personally, try to define the process of “conscious observation” based on my basic understanding of physical reality?
- From sensors, take input information radiated by other interactions.
- Store and correlate this new information with previously stored and correlated information (prior experience).
- Generate options for responding to the new input information by making predictions about possible future outcomes of each option — ranked by priorities and filtered by constraints (such as resource requirements).
- Select an option.
- Execute a response.*
This ability to make a decision is how I was trying to understand the difference, if any, between an observing system that is conscious and one that is not. I might have chosen this particular human method of information processing to fit some simple notion of consciousness that feels familiar to me. *Also, after further consideration (think Complete Locked-In Syndrome), I realized that processing only through step 4 meets the requirement to qualify this phenomenon as conscious observation. Additionally, I wondered how time scales outside typical human experience affect this qualification. In any case, by this rough description, I’ve reduced conscious observation to the conversion of energy and information inputs into a decision tree for achieving a prioritized future state. Whether a decision is the “best” one — or how it is expressed or executed — seems like a separate matter outside this scope.
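The five steps above can be sketched as a toy loop in code. Everything here — the function name, the stand-in “correlation”, the candidate-generation rule — is a hypothetical illustration of the pattern, not a model of consciousness:

```python
# A toy sketch of the five-step "conscious observation" loop.
# All names and heuristics are hypothetical illustrations.

def observe_and_decide(sensor_input, memory, constraint):
    # Steps 1-2: take in new input and correlate it with prior experience.
    memory.append(sensor_input)
    context = sum(memory) / len(memory)  # crude stand-in for "correlation"

    # Step 3: generate candidate responses, filtered by a constraint
    # (e.g., a resource requirement).
    options = [sensor_input + delta for delta in (-1, 0, 1)]
    feasible = [o for o in options if constraint(o)]

    # Step 4: select the option whose predicted outcome best matches
    # the remembered context.
    choice = min(feasible, key=lambda o: abs(o - context))

    # Step 5: execute a response (here, simply returning the decision;
    # per the note above, stopping at step 4 may already qualify).
    return choice

memory = [2, 4]
decision = observe_and_decide(3, memory, constraint=lambda o: o >= 0)
```

Even this trivial version shows why step 5 is separable: the selection happens before anything is acted on, which is the Locked-In-Syndrome point above.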
After arriving at this point, I pause to make sure I’m tracking with a typical interpretation: Werner Heisenberg (a co-founder of the most widely accepted “Copenhagen Interpretation” of quantum physics) indeed said a “decision registration” in the physical world by a conscious observer is an absolute requirement for interpreting quantum theories. He elaborates that this can come from a measuring device or machine — not necessarily a human. Still a lot of questions, but okay: got it.
As a weak attempt to further “de-anthropocentricize” this view, I tried to map this information processing pattern down through a grade-school biology idea of a phylogenetic tree and chemical interactions — from other animals and plants down to single-celled organisms and even unorganized organic compounds. This is where one notices that nervous systems and neurons are an especially recognizable part of terrestrial biology’s information processing systems, optimized through evolution for a certain type of efficiency. Good job, neuron critters — but no reason to conclude that our definition of consciousness at this point is exclusive to you.
OK, now I’m caught up to thinking from the early 1900s or so, right?
How can I recognize consciousness by quantifying it?
Now I wonder: is there a definable specification of this information processing configuration at which consciousness emerges?
So, my next thought was to further define consciousness by trying to quantify components of the information processing system:
- number of neurons
- average distance between neurons
- level of inter-connectivity across the neural network
(I later learned a bit about one specific approach to this called Integrated Information Theory — which defines a metric, Phi, for measuring “degree” of consciousness.)
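The network quantities in the list above can be folded into a single toy score. To be clear, this is NOT the actual IIT Φ computation (which is far more involved); it’s just a hypothetical “degree of integration” number combining node count and interconnectivity, to make the quantification idea concrete:

```python
# A toy "degree of integration" score over a small neuron graph,
# loosely inspired by the quantities listed above. NOT real IIT Phi --
# just node count scaled by connection density, as an illustration.

def integration_score(edges, n_nodes):
    # Fraction of possible undirected connections that actually exist.
    possible = n_nodes * (n_nodes - 1) / 2
    density = len(edges) / possible if possible else 0.0
    # Scale by network size so larger, denser networks score higher.
    return n_nodes * density

# A 4-node network with 4 of its 6 possible links present.
edges = [(0, 1), (1, 2), (2, 3), (0, 2)]
score = integration_score(edges, 4)  # 4 * (4/6) = 2.666...
```

Real Φ instead measures how much a system’s cause-effect structure exceeds that of its parts, but even this crude stand-in captures the intuition that the metric should grow with both size and interconnectivity.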
It also seemed obvious that the accuracy of input information was only as good as:
- sensor input quality
- information storage and retrieval quality (especially over time)
These quality factors seem like they would affect prediction accuracy and efficiency. While they might further characterize a form of consciousness, they don’t seem to determine whether a system is conscious at all. As for terrestrial biological systems, it also makes sense that input information quality would only need to be “good enough” to achieve basic Darwinian-driven outcomes — that is, to survive long enough to propagate the species. So, I’m sort of crossing quantifying sensory input quality off the list of things to care too much about.
Are there different scopes of consciousness?
Are we to ascribe this special characteristic of consciousness only to ourselves? What about the loving family dog; tool-making corvids; the fluffy rabbit down in this hole we’re in; or networks of tree roots and mycelium hyphae that appear to be able to collectively manage their future state in their environment over centuries?
Maybe our biological information processing systems are just another layer above chemical interactions, which are just another layer above fundamental particle interactions. Is consciousness a fundamental phenomenon that looks different at each of those levels? If so, are there higher levels of interaction that this information processing pattern scales with? Is there an IIT value of Φ (Phi) for an AI system at which it might become a moral dilemma to power it down?
Strangely, the harder I try to qualify and define it, the more elusive and meaningless this word “consciousness” becomes.
Is human consciousness — this feeling of awareness of the correlation of sensory input from one moment to the next — ultimately just a cruel artifact of entropy in nature? This is usually where my casual stroll through this thought experiment arrives.
Did I not get the “Don’t talk about the Conscious Observer” memo?
To me, it seems reasonable to at least conclude that a “conscious observer” doesn’t have to mean anything like the type of consciousness we ascribe to ourselves. I don’t have the answers, but this seems like very approachable subject matter — so I’d expect less disagreement than there is by now. What’s the deal?
Just when I think I’m progressing with my own sense of this, along come the various types of Delayed Choice Quantum Eraser experiments — and I’m left aghast and a bit unsettled. How could simply knowing something literally affect reality? Furthermore, why does this effect appear to be instantaneous — even across billions of light-years — between the observer and the observation?
Time to go down the entanglement rabbit hole next.
I think some amount of confusion in science communication (in English, at least) relates to the words themselves. An old term gets recycled within scientific circles with an expanded, nuanced meaning shaped by the ideas in circulation at the time, while the vernacular lacks the word inventory to map onto a specific nuanced phenomenon or an unprecedented new concept. This leaves students in underfunded classrooms — or the otherwise-curious general public — ill-equipped and less willing to engage these topics further, because their initial exposure to terminology (like “spin”, “imaginary numbers”, or “orbital”) builds an inaccurate, hard-to-shake mental model based on the previously learned everyday use of these words.