July 16, 2001
To get to the future we must first pass through science. This column
is about that passage. Every other week I will present to you some
piece of current science along with my personal speculations on
why it is relevant to our futures. Though the science behind this first installment is almost two years old, it is still largely unreported, and is for obvious reasons perfect for my inaugural column on Mindjack.
"The matrix has its roots in primitive arcade games," said the voice-over, "in early graphics programs and military experimentation with cranial jacks." On the Sony, a two-dimensional space war faded behind a forest of mathematically generated ferns, demonstrating the spacial possibilities of logarithmic spirals; cold blue military footage burned through, lab animals wired into test systems, helmets feeding into fire control circuits of tanks and war planes. "Cyberspace. A consensual hallucination experienced daily by billions of legitimate operators, in every nation, by children being taught mathematical concepts... A graphic representation of data abstracted from the banks of every computer in the human system. Unthinkable complexity. Lines of light ranged in the nonspace of the mind, clusters and constellations of data. Like city lights, receding..."
William Gibson, Neuromancer - 1984
It was still very much a 300 baud universe when I jacked into Gibson’s
future for the first time. In 1984 there were very few systems I
could connect to with the surplus acoustic modem I had access to,
and almost all of them were a forbidden long distance telephone
call away. My borrowed deck suffered from sensory deprivation and, just like a person, it hallucinated. It hallucinated games. The
games I made were even more primitive than the ones in the arcade
that Gibson places at the foundation of the matrix. They ran at
1.77 MHz on a screen with a resolution of a mere 128 by 48 pixels, on a deck that had no idea what color was. The lack of speed, resolution and color wasn't important to me. What was important was that I
was in full control of an entirely different reality that was embedded
within our own. I spent nearly all of my free time hacking pixels
into low-res TRS-80 approximations of the hi-res characters and vehicles
that populated the books, movies and arcades of my youth. I was
a pixel God, able to control human perception in a fundamental,
yet disconnected way. It was a powerful feeling to have at a time
when few adults knew what a pixel was, but I longed for a direct
connection. I wanted to draw pixels not on a screen, but directly in the mind; I wanted to be a Neuromancer. So, I think, did Garrett B. Stanley.
Dr. Stanley is Assistant Professor of Biomedical Engineering in
the Division of Engineering and Applied Sciences at Harvard University.
He is the ultimate voyeur. He jacks into brains and extracts video.
In 1999, using cats selected for their sharp vision, Garrett Stanley and his team recorded signals from a total of 177 cells in the lateral geniculate nucleus - a part of the brain's thalamus that processes visual signals from the eye - while playing 16-second digitized movies (64 by 64 pixels) of indoor and outdoor scenes. Using simple mathematical filters, the researchers decoded the signals to generate movies of what the cats actually saw. Though the reconstructed movies lacked color and resolution and couldn’t be produced in real time (the experimenters could only record from 10 neurons at a time and thus had to make several recording runs showing the same video), they turned out to be amazingly faithful to the original.
Caption:
Comparison between the actual and the reconstructed images in an
area of 6.4 degrees by 6.4 degrees. Each panel shows four consecutive
frames (interframe interval: 31.1 msec) of the actual (upper) and
the reconstructed (lower) movies. Top panel: scenes in
the woods, with two trunks of trees as the most prominent objects.
Middle panel: scenes in the woods, with smaller tree branches.
Bottom panel: a face at slightly different displacements
on the screen.
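For the technically curious, the decoding behind those reconstructions amounts, in essence, to fitting filters that map the recorded neural responses back onto pixel intensities, then applying them to responses alone. The short Python sketch below illustrates that general idea on simulated data; the array shapes, the fake "recordings", and the per-frame (rather than spatiotemporal) decoder are my own simplifying assumptions, not the authors' actual method or code.

    import numpy as np

    # Toy illustration of linear decoding: learn a filter that maps neural
    # responses back to pixel values, then reconstruct frames from responses
    # alone. Everything here is simulated and deliberately simplified.
    rng = np.random.default_rng(0)

    n_frames = 500       # hypothetical number of movie frames used for fitting
    n_pixels = 64 * 64   # 64 by 64 pixel frames, as in the study
    n_neurons = 177      # total cells recorded (pooled across runs)

    # Made-up stimulus movie and a made-up noisy linear "encoding" standing in
    # for real recordings from the lateral geniculate nucleus.
    stimulus = rng.random((n_frames, n_pixels))
    encoding = rng.normal(size=(n_pixels, n_neurons)) / n_pixels
    responses = stimulus @ encoding + 0.1 * rng.normal(size=(n_frames, n_neurons))

    # Fit the decoder by least squares: responses -> pixels.
    decoder, *_ = np.linalg.lstsq(responses, stimulus, rcond=None)

    # Reconstruct the movie from the responses alone and pull out one frame.
    reconstruction = responses @ decoder
    frame0 = reconstruction[0].reshape(64, 64)

In the real experiment the filters were spatiotemporal and were fitted to recorded spike trains rather than simulated ones, but the spirit is the same: a simple linear readout of the ensemble response is enough to make the scene recognizable.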
This study, published in the September 15, 1999 issue of the Journal of Neuroscience, was the first demonstration that spatiotemporal natural scenes can be reconstructed from the ensemble responses of visual neurons. It put us firmly in Gibson’s future.
Now we know what raw experience looks like inside the brain of another being, and entire philosophies of mind that were premised on internal experience forever being private have been rendered obsolete. I have no doubt that it won’t be long before these interfaces are made in human subjects with the frequency and expense of a complex tattoo. Those interfaces will also be bi-directional - giving us the ability to augment reality, replace it, or simply record our nightly dreams to share with others. It won’t be long before our preferred interface with cyberspace is through "mindjacks".
© 2001, Chris McKinstry
Links: Garrett B. Stanley http://www.deas.harvard.edu/~gstanley/
Chris McKinstry is a Canadian living in Chile, where he operates the world's largest
optical telescope for the European
Southern Observatory. He is also the creator of the Mindpixel
Digital Mind Modeling Project, the world's largest AI effort.