Thursday, December 4, 2008

Simstim


Definitely the coolest science cover I've seen ;-) That roll of "brain film" is actual pictures, reproduced from visual-cortex activity, of what people were seeing or reading. This is the first time my name's appeared in a scientific journal: a brain-reading study from Yuki Kamitani's lab in Kyoto called "Visual image reconstruction from human brain activity." I'm the only non-Japanese name on the paper! I'm credited in the acknowledgments section on the last page for "manuscript editing" (fixing English prepositions and overuse of the word "the", mostly; very scientific). I can send anybody a copy if you're interested in reading it.

The paper gets pretty technical in parts, but at least take a look at the first two figures. They're two of the clearest illustrations of brain-pattern classification I've seen, like cartoons of how this science is done. And the pictures of the word "NEURON" reconstructed in real time from visual-cortex brain activity are eerie. I love how the figures show the power of the technique both scientifically-- in terms of "mean square errors"-- and viscerally, in the form of the actual images being decoded directly from the brain. I'm excited to have been attached to this paper in any way, even if my only contribution was English prepositions and articles (if you read a particularly well-used "the", "of", or "in", that may have been me ;-). I really think this is a paradigm shift in brain imaging... Gone are the days of flaky region-of-interest studies pointing to a hunk of cortex "representing language" or "emotion". If you like this one, definitely check out the Mitchell, 2007 paper from Carnegie Mellon. Those people are decoding words-- novel nouns that the classifier's never seen before!-- from brain patterns, by defining "meaning" in terms of a noun's frequency of co-occurrence with certain sensory-motor verbs. Language theory meets biology meets computer science in a really thrilling way.
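For the curious (and at the risk of getting nerdier than this blog usually does), here's a little toy sketch of the general idea-- my own made-up example in Python, not the paper's actual pipeline, which as I understand it combines many local image decoders at multiple scales. It fakes "voxel" responses as a noisy linear readout of a small binary image, fits a linear decoder from the fake voxels back to the pixels, and scores the reconstruction with the same mean square error the figures report. Every number in it (image size, voxel count, noise level) is arbitrary.

```python
import numpy as np

rng = np.random.default_rng(0)

n_pixels = 10 * 10           # flattened 10x10 binary contrast pattern
n_voxels = 300               # made-up number of fMRI voxels
n_train, n_test = 400, 20

# Pretend ground truth: each voxel pools a random weighted set of pixels.
mixing = rng.normal(size=(n_pixels, n_voxels))

def simulate_voxels(images):
    """Noisy linear 'brain responses' to a batch of flattened images."""
    return images @ mixing + 0.5 * rng.normal(size=(images.shape[0], n_voxels))

train_imgs = rng.integers(0, 2, size=(n_train, n_pixels)).astype(float)
test_imgs = rng.integers(0, 2, size=(n_test, n_pixels)).astype(float)
train_vox = simulate_voxels(train_imgs)
test_vox = simulate_voxels(test_imgs)

# Fit one least-squares decoder per pixel (all at once via lstsq):
# 'weights' maps a voxel pattern back to estimated pixel intensities.
weights, *_ = np.linalg.lstsq(train_vox, train_imgs, rcond=None)
reconstruction = np.clip(test_vox @ weights, 0.0, 1.0)

mse = np.mean((reconstruction - test_imgs) ** 2)
print(f"mean square error of reconstruction: {mse:.3f}")
```

Obviously the real thing is much harder-- actual voxel responses are slow, noisy, and not cleanly linear-- but the figures make a lot more sense once you picture the decoder as something like this.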

Speaking of speech: I heard today on the November "Neuropod" podcast covering this year's Society for Neuroscience conference (SfN; the conference I went to in Atlanta my senior year; this year it was in DC) that a neuroscientist at BU, Frank Guenther, and a brain surgeon, Dr. Kennedy, have designed a brain-machine interface that can decode speech sounds from neural activity. Speech from thought! Our work is connected to this; it's the ultimate clinical goal for all brain-pattern-classification work, like my Spidey experiment in Ken's lab: to help brain-damaged people regain the use of speech and motion and their senses.

This speech-decoder is an implanted electrode in the brain of a guy in Georgia with "locked-in syndrome", who's paralyzed except for his eyes. The coolest part about the brain chip is that it's a "neurotrophic device", meaning that it's filled with nerve food-- neurotrophic factors, nutrients for neurons. So the axons of the nerves actually grow inside the chip, stitching it in place so it doesn't move relative to the brain. A genuine neural cyborg! The hope is that within five years, this locked-in man, who hasn't been able to communicate beyond yes/no eye-blinks for years, might be able to talk at speech-speed through a computer's synthesized voice. Amazing stuff, huh?

If anyone's interested in hearing the podcast, it's the November edition here. Neuropod is the journal Nature Neuroscience's monthly podcast, and I love it. It's pitched at enough of a layman's level that anyone pretty interested in brain science would get something out of it, and it clues you in to the latest cutting-edge research if you happen not to be in academia... and so I can get the cool new articles sent to me by Greg, my grad-student buddy back at Princeton. Brain still hungry for brain, and stomach for dinner.

best from the paddies and brains of japan,
T
