Episode Transcript
Available transcripts are automatically generated. Complete accuracy is not guaranteed.
Speaker 1 (00:11):
A new technology is translating patterns of brain activity into
written descriptions of what you're seeing or imagining. I'm Darren Marler,
and this is Weird Dark News. Scientists in Japan have
built something that sounds like pure science fiction, except it's
real and working right now. They've created a system that
(00:33):
reads your brain activity and turns it into complete sentences
describing what you are watching or remembering. Think of it
as closed captions for your thoughts. The system doesn't just
spot objects or simple ideas. It writes out full descriptions
that explain what's happening, who's doing what to whom, and
(00:54):
how different things relate to each other. A researcher named
Tomoyasu Horikawa, who works at a communications company's science lab
in Japan, figured out how to make this work. He
combined brain scanning machines with AI programs similar to ChatGPT.
Think of it like teaching a translator to convert one
language into another, except instead of French to English, it's
(01:17):
converting brain patterns into readable sentences. Six people volunteered for
the study, four men and two women, all Japanese speakers
between ages twenty two and thirty seven. Each person spent
nearly seventeen hours lying inside an MRI scanner, one of those big
tube-shaped medical imaging machines, while watching short video clips.
(01:37):
They watched two thousand, one hundred and eighty different silent videos,
each lasting just a few seconds. The videos showed all
kinds of things: animals playing, people hugging, someone jumping off
a waterfall, everyday activities. For every single video, twenty different
people wrote descriptions of what they saw happening. The researchers
(01:58):
checked all these descriptions carefully, and even used ChatGPT
to fix typos and make sure the sentences made sense.
The researchers trained their computer system to recognize patterns. They
taught it to notice which parts of the brain lit
up when someone watched a dog running versus a person talking.
The AI learned to connect specific brain activity patterns with
(02:21):
specific types of visual information. The system built sentences one
word at a time, kind of like filling in a
crossword puzzle. It starts with nothing and makes one hundred
attempts to get the sentence right, checking each time whether
the words it's choosing match what the brain is showing.
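To picture that loop in code terms, here is a rough toy sketch in Python. It is not the researchers' actual system: the random word vectors below stand in for both the brain decoder and the language model, and the function names are made up for this illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy vocabulary and word vectors. The real system uses features from a deep
# language model; random vectors are just stand-ins for this illustration.
VOCAB = ["a", "dog", "is", "running", "person", "jumping", "waterfall", "kitchen"]
WORD_VECS = {w: rng.normal(size=16) for w in VOCAB}

def embed(words):
    """Toy 'meaning vector' for a sentence: the average of its word vectors."""
    return np.mean([WORD_VECS[w] for w in words], axis=0)

def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-8))

def caption_from_brain(target_features, max_words=5, n_attempts=100):
    """Build a sentence word by word, keeping whichever candidate word moves
    the sentence's meaning vector closest to the one decoded from the brain."""
    sentence = []
    for _ in range(max_words):
        # A language model would propose likely next words here; this toy
        # version just tries the whole vocabulary, up to n_attempts candidates.
        candidates = VOCAB[:n_attempts]
        best = max(candidates, key=lambda w: cosine(embed(sentence + [w]), target_features))
        sentence.append(best)
    return " ".join(sentence)

# Pretend this vector came out of an fMRI decoder while someone watched
# a video of a dog running.
target = embed(["a", "dog", "is", "running"])
print(caption_from_brain(target))
```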
Another AI program suggests words that might fit, looking at
(02:42):
what words usually go together. If you're describing someone jumping,
words like waterfall or cliff make more sense than kitchen
or book. When the researchers tested their system on brand
new videos the volunteers had never seen before, it worked
surprisingly well. They showed the computer one hundred different videos
and asked it to figure out which one a person
(03:04):
was watching based only on their brain activity. The system
picked the right video about half the time. That might
not sound impressive until you realize that random guessing would
only work one percent of the time. The descriptions the
system created weren't just lists of objects. They captured the
whole scene. The computer generated sentences like people are speaking
(03:27):
while others hugged, or someone is jumping over a waterfall
on a mountain. These aren't just naming things, they're explaining
relationships and actions in full sentences. Then researchers tried something
even weirder. They had people close their eyes and remember
videos that they had watched earlier. Just by measuring brain activity
while people were imagining these scenes from memory, the system
(03:51):
could still generate descriptions of what they were thinking about.
The volunteers weren't watching anything, they were remembering, and the
machine could still read it. To make sure the system
was really understanding relationships and not just spotting random words,
researchers scrambled the word order in the generated sentences. If
(04:11):
the system wrote a dog is biting a man, they'd
shuffle it to something like man a is biting dog a.
Even though all the same words were there, the scrambled
versions performed much worse at matching the correct videos. This
proved the system was capturing real meaning, not just throwing
out words it detected. Scientists made a surprising discovery that
(04:34):
changes how we understand thinking itself. They tried an experiment
where they completely ignored the parts of the brain that
handle language, the areas you use when you talk or
read or write. They expected the system to fail, but
it didn't. The computer could still create structured, meaningful sentences
with nearly the same accuracy, correctly identifying the right video
(04:55):
about half the time out of one hundred options. The
brain stores detailed, organized information about what you're seeing in
areas that have nothing to do with words. The parts
of your brain that process vision and understand actions maintain rich,
complex representations of scenes. Your brain knows who does what
to whom without necessarily turning that knowledge into language. It's
(05:19):
like having a detailed mental movie playing that exists completely
separate from the narration. Alex Huth, who studies brains and
computers at the University of California, Berkeley, said the system
predicts what someone is looking at with remarkable detail. He
noted this is genuinely difficult to accomplish and surprising that
you can extract so much specific information from brain patterns.
(05:42):
This discovery matters for understanding conditions where people lose their
ability to speak. Aphasia, for example, happens when the
language parts of the brain get damaged, often from a stroke.
People with aphasia might understand everything perfectly and have complete,
detailed thoughts, but they can't get the words out. This
research shows their brains are still maintaining all that structured information,
(06:06):
it's just stuck without a way to become speech. The
technology might offer help to people who've lost the ability
to communicate normally. Since the system can read brain activity
without needing the language centers, it could potentially work for
people whose language areas are damaged but whose thinking remains intact.
Several conditions could potentially benefit. Aphasia leaves people unable to
(06:29):
express themselves with words, even though they understand everything and
think clearly. ALS, also called Lou Gehrig's disease, is a
devastating illness that gradually destroys the nerves controlling muscles, including
the ones needed for speech. People with severe autism sometimes
struggle with verbal communication. For all of these people, this
(06:50):
technology might eventually provide an alternative way to communicate. Scott
Barry Kaufman, a psychologist who teaches at Barnard College in
New York and wasn't involved in the research, said the
study opens doors for helping people who have trouble communicating,
specifically mentioning nonverbal autistic individuals. Getting the system to work
requires significant setup. Researchers need to collect massive amounts of
(07:14):
information about how each person's brain works, watching their responses
to thousands of videos. Once that training is complete, though,
the system can sometimes generate understandable descriptions from a single
memory or viewing. That means there's hope this could eventually
become practical outside research laboratories. The six volunteers in this
(07:35):
study were all native Japanese speakers. Some spoke English well,
others didn't, but the computer generated all its descriptions in
English anyway. That's because the AI was trained on English
descriptions and English language patterns. The system translates brain activity,
which doesn't have a language, into whatever language the AI
(07:56):
was taught to use. Your brain processes visual information the
same way, regardless of what language you speak. The researcher
Horikawa himself admits the system is not ready for real
world use yet. Training it for each person requires many
hours of brain scanning and thousands of test videos. The
computer needs to learn how your specific brain works. The
(08:18):
scans have to be done in huge, expensive MRI machines
that you have to lie perfectly still inside. You can't
exactly carry that around in your pocket. The videos they
used showed common typical scenes, a dog biting a man,
someone jumping into water, people hugging. They didn't test unusual
or unexpected situations like a man biting a dog, so
(08:40):
researchers don't know yet whether the system can handle surprising
or rare scenarios. The computer might rely too heavily on
what usually happens, filling in descriptions based on common patterns
rather than what's actually in your head. The scanning technology
also has limitations in how fast it can read your brain.
(09:00):
MRI machines capture what happens over several seconds, not instant
by instant, so the descriptions represent a chunk of time,
not specific moments. If you're watching a fast action sequence,
the system might blend it all together rather than capturing
each distinct movement. The closer scientists get to reading thoughts,
the more uncomfortable the privacy implications become. If machines can
(09:24):
turn your brain activity into words, who gets access to
that information? Could employers use it? Police? Advertisers trying to
figure out what you really want to buy? Could governments
use it for surveillance? Both Horikawa and Huth stressed that
their current techniques require people to volunteer and give consent.
(09:45):
The technology cannot read private thoughts, at least not yet.
Huth's exact words were, nobody has shown you can do
that yet. That word yet is doing a lot of
work in that sentence. Horikawa points out that the current
version of this technology requires hours of personalized data collection
from each participant, room-sized scanning equipment, and carefully controlled conditions.
(10:08):
Someone can't just point a device at your head on
the street and read your thoughts. The technology needs your
cooperation and a massive amount of preparation. But Marcello Ienca,
a professor in Germany who studies the ethics of AI
and brain science, described this work as another step forward
toward what can legitimately be called mind reading. Even if
(10:29):
we're not there yet, we are moving in that direction.
The researchers themselves warn about mental privacy risks in their paper.
They're calling for regulations to protect people's mental privacy and autonomy.
As the technology improves and requires less data to work,
it might become more accessible. Right now, you need willing
volunteers and extensive data collection. Future versions might not need
(10:51):
as much, which makes the privacy concerns more urgent. The
computer learns to predict what scientists call semantic features from brain activity.
Think of semantic features as the meaning of words translated
into numbers that computers can understand. When you say dog,
your brain has a certain pattern of activity. The AI
(11:12):
learns to recognize that pattern and connect it to the
numerical code for dog, along with all the related concepts: animal, pet,
four legs, barks, wagging tail. The computer stores information not
just about individual words, but about how words relate to
each other and what order they go in. Dog bites
man means something completely different from man bites dog, even
(11:36):
though it's the exact same words. The AI has to
preserve that structure when generating a description. The system essentially
tries out different possible sentences and checks each one against
the brain activity it's trying to describe. It's searching through
countless combinations to find the sentence that best matches what your
brain is representing. This approach lets it create original descriptions
(12:00):
rather than just picking from a menu of pre written captions.
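To make that search-and-check idea concrete, here is a minimal, hypothetical sketch in Python. Random numbers stand in for real fMRI data and real language-model features, and the simple regression decoder shown is a common choice in this kind of work rather than the study's exact method.

```python
import numpy as np
from sklearn.linear_model import Ridge

rng = np.random.default_rng(1)
n_train, n_voxels, n_features = 200, 500, 64

# Pretend training data: brain patterns paired with the semantic features of
# the captions written for each training video (all simulated here).
caption_features = rng.normal(size=(n_train, n_features))
mixing = rng.normal(size=(n_features, n_voxels)) * 0.1
brain_patterns = caption_features @ mixing + rng.normal(size=(n_train, n_voxels)) * 0.05

# Step 1: learn to map brain activity to semantic features. Regularized linear
# regression is a standard tool for fMRI decoding; the paper's details may differ.
decoder = Ridge(alpha=10.0).fit(brain_patterns, caption_features)

# Step 2: for a "new" scan, decode its features and check candidate sentences
# against them, keeping the candidate whose features match best.
decoded = decoder.predict(brain_patterns[:1])[0]

def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

candidate_features = caption_features[:100]  # features of 100 candidate captions
scores = [cosine(f, decoded) for f in candidate_features]
print("best-matching candidate:", int(np.argmax(scores)))  # expected: 0
```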
Earlier attempts at brain reading could only identify individual things.
The computer might detect dog, man, park, running, but it
couldn't put together that the dog was chasing the man
through the park. Other AI approaches could create sentences on
(12:20):
their own, but researchers couldn't tell whether those sentences reflected
what was actually in someone's brain or whether the AI
was just making up plausible-sounding descriptions. This method goes further.
Scientists have been working on brain decoding for decades, successfully
identifying faces, objects, and places from brain activity. Some recent
(12:42):
work even decoded speech related information from brain activity when
people were talking or thinking about talking. This new system
captures complete visual scenes with relationships and actions, turning complex
brain representations into coherent sentences. The research focused just on
visual information, what you see or imagine seeing, but the
(13:04):
same basic approach might work for other types of mental content.
Scientists might eventually decode sounds, abstract concepts that aren't visual
at all, or even dreams. Better AI models that work
more like human brains could make the whole system more accurate.
The research was funded by Japanese science grants, JST PRESTO
(13:25):
and JSPS KAKENHI, if you want to look
them up. The study was published in the journal Science Advances
on November fifth, twenty twenty five. This technology sits
at the intersection of three major fields, brain science, artificial intelligence,
and language processing. The researchers have created a bridge between
patterns of brain activity and human language. They've taken another
(13:47):
step toward understanding what's actually happening inside our minds and
potentially helping people who've lost the ability to communicate the
traditional way. Questions about privacy, consciousness, and what it means
to read someone's mind all follow from this work.
The technology also offers hope for people trapped by disabilities
that have stolen their voices. Whether that trade-off is
(14:09):
worth it, and how we regulate and control such powerful technology.
Those are decisions that we're going to need to make
pretty soon. If you'd like to read this story for
yourself or share the article with a friend, you could
read it on the Weird Darkness website. I've placed a
link to it in the episode description, and you can
find more stories of the paranormal, true crime, the strange, and more,
including numerous stories that never make it to the podcast,
(14:32):
in my Weird Dark News blog at WeirdDarkness dot com slash
news