Wednesday, December 17, 2008

Brain-imaging in Japanese


When I got to Mitani this morning, dressed as "Spidey Santa", the school nurse Kuboyama-sensei had newspaper clippings for me.

Last week, when Kamitani's "Neuron" paper came out on Thursday, it made the front page of the Asahi Shinbun (目で見た文字や図形、脳活動からコンピューターが再現, "Letters and shapes seen with the eye, recreated from brain activity by a computer"). The Asahi is kind of like Japan's NYTimes, one of the two most-read national newspapers alongside the Yomiuri. It was also written up on page 3 of the more local Sanyou Shinbun. Both had some great manga, like the ones I've doodled in my notebooks, illustrating how brain-imaging and pattern classification work: a person imagining a snowman, having his brain decoded, and the snowman reproduced on a computer screen. The headlines mention mental images, too. "We might even be able to see dreams and fantasies" was the headline of one: 夢や空想、見えるかも, yume ya kuusou, mieru kamo. It's exciting to see the experiment start popping up in the real-world press.

I haven't spotted anything in the mainstream U.S. media yet, but it popped up on Pink Tentacle via digg.com, if anybody wants to read something in English that's not hardcore technical jargon. My friend Adam mentioned the experiment on a train on Saturday, after reading about it on Digg, without ever having heard of it from me ;-)

I spent all afternoon learning how to say things like "brain-imaging", "changes in cerebral blood flow", and "functional magnetic resonance imaging" in Japanese. It's exciting to be learning the vocabulary, and even kanji, I'd need to talk about my thesis in Japan. Let me know if you guys spot the study anywhere else.

p.s. No newspaper has yet mentioned the foreigner who provided mild grammatical help and was credited in small print on page 33. Most papers, true to Japanese style, don't even mention the paper's first author. Just Kamitani, since he's the 研究室長, the head of the lab. No credit to the little guys...

Thursday, December 4, 2008

Simstim


Definitely the coolest science cover I've seen ;-) That roll of "brain film" shows actual pictures reproduced from visual-cortex activity: what people were seeing or reading. This is the first time my name's appeared in a scientific journal, in a brain-reading study from Yuki Kamitani's lab in Kyoto called "Visual image reconstruction from human brain activity." I'm the only non-Japanese name on the paper! I'm credited in the acknowledgments section on the last page for "manuscript editing" (fixing English prepositions and overuse of the word "the", mostly; very scientific). I can send anybody a copy if you're interested in reading it.

The paper gets pretty technical in parts, but at least take a look at the first two figures. These are two of the clearest images of brain-pattern classification I've seen, like cartoons of how this science is done. And the pictures of the word "NEURON" reconstructed in real-time from visual-cortex brain activity are eerie. I love how the figures illustrate the power of the technique both scientifically-- in terms of "mean square errors"-- and viscerally, in the form of the actual images being decoded directly from the brain. I'm excited to have been attached to this paper in any way, even if my only contribution was English prepositions and articles (if you read a particularly well-used "the", "of", or "in", that may have been me ;-)). I really think this is a paradigm shift in brain-imaging... Gone are the days of flaky region-of-interest studies pointing to a hunk of cortex "representing language" or "emotion".

If you like this one, definitely check out the Mitchell et al. (2008) paper from Carnegie Mellon. Those people are decoding words-- novel nouns the classifier's never seen before!-- from brain patterns, by defining a noun's "meaning" in terms of its frequency of co-occurrence with certain sensory-motor verbs. Language theory meets biology meets computer science in a really thrilling way.
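For the curious, here's roughly what this kind of decoding looks like in code. This is just a toy sketch of my own, in Python on made-up data-- it's not the paper's actual method (the real thing combines lots of local, multi-scale image decoders)-- but it shows the basic move: learn a mapping from voxel patterns to image pixels, then score held-out reconstructions by mean squared error.

    # Toy sketch of image reconstruction from brain activity.
    # Not the paper's actual method; all data and sizes are made up.
    import numpy as np
    from sklearn.linear_model import Ridge
    from sklearn.metrics import mean_squared_error

    rng = np.random.default_rng(0)
    n_trials, n_voxels, n_pixels = 400, 800, 10 * 10

    # Pretend stimuli: random 10x10 black-and-white patches.
    stimuli = rng.integers(0, 2, size=(n_trials, n_pixels)).astype(float)

    # Pretend fMRI data: each voxel responds linearly to the image, plus noise.
    weights = rng.normal(size=(n_pixels, n_voxels))
    voxels = stimuli @ weights + rng.normal(scale=5.0, size=(n_trials, n_voxels))

    # Train a decoder on most trials, then reconstruct the held-out ones.
    train, test = slice(0, 300), slice(300, None)
    decoder = Ridge(alpha=10.0).fit(voxels[train], stimuli[train])
    reconstructed = decoder.predict(voxels[test])

    print("mean squared error:", mean_squared_error(stimuli[test], reconstructed))

Of course, in the real experiment the "pretend fMRI data" line is the hard part ;-)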

Speaking of speech: I heard today on the November "Neuropod" podcast, covering this year's Society for Neuroscience conference (SfN; the conference I went to in Atlanta senior year; this year it was in DC), that a neuroscientist at BU, Frank Guenther, and a brain surgeon, Dr. Kennedy, have designed a brain-machine interface that can decode speech sounds from neural activity. Speech from thought! Our work connects to this: the ultimate clinical goal of all brain-pattern-classification work, like my Spidey experiment in Ken's lab, is to help brain-damaged people regain speech, motion, and their senses.

The speech-decoder is an electrode implanted in the brain of a guy in Georgia with "locked-in syndrome", who's paralyzed except for his eyes. The coolest part about the brain chip is that it's a "neurotrophic device", meaning it's filled with nerve food-- neurotrophic factors, nutrients for neurons. So the axons of nearby nerves actually grow into the chip, stitching it in place so it doesn't move relative to the brain. A genuine neural cyborg! The hope is that within five years, this locked-in man, who hasn't been able to communicate beyond yes/no eye-blinks for years, might be able to talk at natural speaking speed through a computer's synthesized voice. Amazing stuff, huh?

If anyone's interested to hear the podcast, it's the November edition here. Neuropod is the journal Nature Neuroscience's monthly podcast, and I love it. It's pitched at enough of a layman's level that anyone pretty interested in brain science would get something out of it, and it clues you in to the latest cutting-edge research if you happen not to be in academia... so I can still get cool new articles sent to me by Greg, my grad-student buddy back at Princeton. Brain still hungry for brain, and stomach for dinner.

best from the paddies and brains of Japan,
T