Consciousness: The Mind Messing With the Mind
By George Johnson | July 4, 2016 | The New York Times
A paper in The British Medical Journal in December reported that cognitive behavioral therapy — a means of coaxing people into changing the way they think — is as effective as Prozac or Zoloft in treating major depression.

In ways no one understands, talk therapy reaches down into the biological plumbing and affects the flow of neurotransmitters in the brain.

Other studies have found similar results for “mindfulness” — Buddhist-inspired meditation in which one’s thoughts are allowed to drift gently through the head like clouds reflected in still mountain water.
Findings like these have become so commonplace that it’s easy to forget their strange implications.

Depression can be treated in two radically different ways: by altering the brain with chemicals, or by altering the mind by talking to a therapist. But we still can’t explain how mind arises from matter or how, in turn, mind acts on the brain.
This longstanding conundrum — the mind-body problem — was succinctly described by the philosopher David Chalmers at a recent symposium at The New York Academy of Sciences. “The scientific and philosophical consensus is that there is no nonphysical soul or ego, or at least no evidence for that,” he said.

Descartes’s notion of dualism — mind and body as separate things — has long receded from science. The challenge now is to explain how the inner world of consciousness arises from the flesh of the brain.
Michael Graziano, a neuroscientist at Princeton University, suggested to the audience that consciousness is a kind of con game the brain plays with itself. The brain is a computer that evolved to simulate the outside world. Among its internal models is a simulation of itself — a crude approximation of its own neurological processes.

The result is an illusion. Instead of neurons and synapses, we sense a ghostly presence — a self — inside the head. But it’s all just data processing.

“The machine mistakenly thinks it has magic inside it,” Dr. Graziano said. And it calls the magic consciousness.

It’s not the existence of this inner voice he finds mysterious. “The phenomenon to explain,” he said, “is why the brain, as a machine, insists it has this property that is nonphysical.”
The discussion, broadcast online, reminded me of Tom Stoppard’s newest play, “The Hard Problem,” in which a troubled young psychology researcher named Hilary suffers a severe case of the very affliction Dr. Graziano described. Surely there is more to the brain than biology, she insists to her boyfriend, a hard-core materialist named Spike. There must be “mind stuff that doesn’t show up in a scan.”

Mr. Stoppard borrowed his title from a paper by Dr. Chalmers. The “easy problem” is explaining, at least in principle, how thinking, memory, attention and so forth are just neurological computing. But for the hard problem — why all of these processes feel like something — “there is nothing like a consensus theory or even a consensus guess,” Dr. Chalmers said at the symposium.

Or, as Hilary puts it in the play, “Every theory proposed for the problem of consciousness has the same degree of demonstrability as divine intervention.” There is a gap in the explanation where suddenly a miracle seems to occur.
She rejects the idea of emergence: that if you hook together enough insensate components (neurons, microchips), consciousness will appear. “When you come right down to it,” she says, “the body is made of things, and things don’t have thoughts.”

Proponents of emergence, who have become predominant among scientists studying the mind, try to make their case with metaphors. The qualities of water — wetness, clarity, its shimmering reflectivity — emerge from the interaction of hydrogen and oxygen atoms. Life, in a similar way, arises from molecules.

We no longer believe in a numinous life force, an élan vital. So what’s the big deal about consciousness?
For lack of a precise mechanism describing how minds are generated by brains, some philosophers and scientists have been driven back to the centuries-old doctrine of panpsychism — the idea that consciousness is universal, existing as some kind of mind stuff inside molecules and atoms.

Consciousness doesn’t have to emerge. It’s built into matter, perhaps as some kind of quantum mechanical effect. One of the surprising developments in the last decade is how this idea has moved beyond the fringe. There were three sessions on panpsychism at the Science of Consciousness conference earlier this year in Tucson.

This wouldn’t be the first time science has found itself backed into a cul-de-sac where the only way out was proposing some new fundamental ingredient. Dark matter and dark energy were both conjured forth to solve what seemed like intractable problems.
Max Tegmark, a physicist at the Massachusetts Institute of Technology (he also spoke at the New York event), has proposed that there is a state of matter — like solid, liquid and gas — that he calls perceptronium: atoms arranged so they can process information and give rise to subjectivity.

Perceptronium does not have to be biological. Dr. Tegmark’s hypothesis was inspired in part by the neuroscientist Giulio Tononi, whose integrated information theory has become a major force in the science of consciousness.
It predicts, with dense mathematics, that devices as simple as a thermostat or a photoelectric diode might have glimmers of consciousness, a subjective self.

Not everything is conscious in this view, just stuff like perceptronium that can process information in certain complex ways. Dr. Tononi has even invented a unit, called phi, that is supposed to measure how conscious an entity is.

The theory has its critics. Using the phi yardstick, Scott Aaronson, a computer scientist known for razorlike skepticism, has calculated that a relatively simple grid of electronic logic gates — something like the error-correcting circuitry in a DVD player — can be many times more conscious than a human brain.

Dr. Tononi doesn’t dismiss that possibility. What would it be like to be this device? We just don’t know. Understanding consciousness may require an upheaval in how science parses reality.
Or maybe not. As computers become ever more complex, one might surprise us someday with intelligent, spontaneous conversation, like the artificial neural net in Richard Powers’s novel “Galatea 2.2.”

We might not understand how this is happening any more than we understand our inner voices. Philosophers will argue over whether the computer is really conscious or just simulating consciousness — and whether there is any difference.

If the computer gets depressed, what is the computational equivalent of Prozac? Or how would a therapist, human or artificial, initiate a talking cure?

Maybe the machine could compile the counselor’s advice into instructions for reprogramming itself or for recruiting tiny robots to repair its electronic circuitry.

Maybe it would find itself flummoxed by its own mind-body problem. We humans may not be of much help.