Episode Transcript
Available transcripts are automatically generated. Complete accuracy is not guaranteed.
Stephen Wilson (00:05):
Welcome to Episode 26 of the Language Neuroscience Podcast. I'm Stephen Wilson and I'm a neuroscientist at the University of Queensland in Brisbane, Australia. My guest today is Laura Gwilliams. Laura is currently a postdoctoral researcher at the University of California, San Francisco. But in a few months, she'll be starting a new position at Stanford University, as an
(00:26):
Assistant Professor of Psychology, Neuroscience and Data Science. Laura is an outstanding up-and-coming researcher, who uses magnetoencephalography and electrocorticography to study how the brain derives representations from auditory input, and to investigate the computational processes involved. Today we're going to focus on her recent paper, Neural dynamics of phoneme
(00:46):
sequences reveal position-invariant code for content and order, with co-authors Jean-Rémi King, Alec Marantz and David Poeppel, that just came out in Nature Communications. Okay, let's get to it. Hi, Laura. How are you?
Laura Gwilliams (00:58):
Hi, I'm good.
How you doing?
Stephen Wilson (01:00):
I'm good, too. So, it's very early in the morning in Brisbane and I think it's like, you know, the sun's just coming up. How about you? Where are you at and what time is it for you?
Laura Gwilliams (01:11):
Yeah, it's just past lunchtime. So, the sun is very high in the sky and I'm well fed, ready to go.
(Laughter)
Stephen Wilson (01:19):
Alright. I've just got my coffee that I'm working on here. So, I don't think I've met you, but I certainly have seen your talk at some conferences, and I know that you're friends with some of my students, right?
Laura Gwilliams (01:31):
Yeah. Yeah. Some of my favorite humans are your students, actually. Yeah. Deb Levy and Anna Kasdan. So, I feel like I know you very well, because I know them so well.
Stephen Wilson (01:44):
Yeah. I mean, they've probably told you some things about me that may or may not be true. (Laughter) Yeah, so that's like a really neat connection. And, where are you working at the moment?
Laura Gwilliams (01:58):
So, I'm at UCSF right now. I'm coming up on my three-year anniversary of being a postdoc here with Eddie Chang. But, some exciting news, come September 1st, I'm actually starting my own lab at Stanford. So yeah, really…
Stephen Wilson (02:18):
That’s so great!
Laura Gwilliams (02:18):
Really excited
for that. Yeah.
Stephen Wilson (02:20):
Yeah. You mentioned that that was in the works, and I'm really glad that it's all panned out and that you'll be starting there. I mean, what a great place to land for your first faculty job.
Laura Gwilliams (02:33):
Yeah, yeah. And it's gonna be really ideal, I think, for the type of multidisciplinary work I try to do, because I'm going to be officially and jointly appointed between Psychology, Neuroscience and Data Science, with a link to the linguistics department as well. So I think it's going to be a pretty nice combo.
Stephen Wilson (02:56):
Oh, it'll be perfect. And they have such a great linguistics department there, you know, like, when, you know, because I, because I got my first academic training in Australia, working on Australian Aboriginal languages. And, a lot of the theory behind all that work came out of Stanford people like Joan Bresnan, Lexical Functional Grammar, because
(03:16):
like, kind of mainstream generative grammar didn't really do very well with the free word order languages of Australia. And, you know, Bresnan and, you know, the work that she'd done there at Stanford, was definitely, like, sort of the guiding theory behind, like, the Australian linguistics when I was an undergrad. So, I've always really appreciated that,
(03:39):
that department.
Laura Gwilliams (03:42):
Yeah, yeah. I think it's going to be a great mix of minds, of people with different expertise. So yeah, I think it's going to be great.
Stephen Wilson (03:50):
And do you even have to, are you going to keep living, are you living in San Francisco right now, in the city?
Laura Gwilliams (03:54):
Yeah, that's
right. And I'm going to continue
living in San Francisco.
Stephen Wilson (03:58):
And just commute
down?
Laura Gwilliams (04:00):
Yeah. See, I'll see how the commute is. Luckily, I'm an early bird. So I don't mind kind of traveling before the majority of people are traveling down. So I'm hoping that that will work in my favor and won't make it too bad.
Stephen Wilson (04:15):
It's not that
far.
Laura Gwilliams (04:16):
Yeah. I think on a good day, it's about 40 minutes door to door, which isn't terrible.
Stephen Wilson (04:23):
Well, you just have to find some things to do in the car. Like listen to my podcast. (Laughter)
Laura Gwilliams (04:27):
I can listen
to your podcast. Yeah, exactly.
Stephen Wilson (04:29):
Or the train. Are you going to get the train or are you going to drive? Have you figured that out yet?
Laura Gwilliams (04:33):
Yeah, I'll figure that out. I might do a, do a switch-up, depending on how I'm feeling.
Stephen Wilson (04:38):
Yeah. So can you tell me about how you came to be in this world? You sound like you're from Britain. Where were you born and where did you grow up?
Laura Gwilliams (04:49):
Yeah, so I grew up in a pretty small town called Shrewsbury, which is right on the border between England and Wales in the Midlands, and would probably take about three hours to drive up from London. So it's a pretty small town, like 60,000 people, in the countryside. Most people
(05:12):
haven't heard of it. But it does have one claim to fame, which is that Charles Darwin was born there. They don't mention the part that he left, really, probably when, when he could. But yeah, that's my one claim to fame from where I grew up. And I
(05:34):
then studied my undergrad at Cardiff, in linguistics.
Stephen Wilson (05:41):
What did you, well, before that, like, were you, so you kind of started, came into the field from the language side of things. So, were you interested in languages as a kid? Like, how did you get to ending up in a linguistics program?
Laura Gwilliams (05:54):
Yeah, I guess, I guess I can say this, now that I have a faculty position. Honestly, I wasn't really interested in language or anything school related at all. I was much more interested in underage drinking with my friends in fields. (Laughter) But one thing that I did know
(06:15):
about myself is that I really liked to write. And, actually, I nearly didn't go to undergraduate university at all, it was a very last-minute decision that, I actually didn't, I didn't have good grades or anything. I was like, okay, well, if I'm going to do this, I should probably have reasonable
(06:36):
grades to go to, like, to have a bit more choice. So I just kind of scrambled in my last year of high school to redo all of my tests to get, like, a, like, reasonable grades, and then decided to go. And I chose linguistics, because I was like, okay, I don't know what it is I want to do really, but I know
(06:57):
that I like writing and so it sounds like linguistics is probably a topic that would allow me to do something writing related in the future. So that's, that's why I picked it. So yeah, I definitely don't feel like language studies were really kind of written in the stars, from, from my childhood or anything, it was definitely
(07:23):
just making, making a choice based on the few things that I knew about myself at that time.
Stephen Wilson (07:33):
That's really funny. And I like the specific detail that you give, that the underage drinking happened in fields. (Laughter) It definitely reminds me of when I went to England, with my parents and my, my brother, when I was eighteen, for a family holiday. They let, my parents, like, generously let me go out on the town by myself in
(07:56):
Oxford. And I don't know why they, I mean, that would be, now that I have kids, I'm like, that would be terrifying. But maybe you get more, like, you loosen up as they get older, I guess. Anyway, so I wandered around Oxford and, like, somehow I met some hippies and they took me out to a field where we drank, like, and there, there was a tent there. I don't really understand the setup, or if they were living there or what, but
(08:18):
like, definitely, like, I was taken, that was like my one experience of, like, genuine, like, English drinking with English people, it happened in a field. (Laughter)
Laura Gwilliams (08:28):
Yes, interesting thing, and there are certain fields kind of earmarked as the drinking location. So, all of the youths know that this is where you need to meet at, you know, 11am on a Saturday, though. Because this was, making myself sound old, but there was, this was pre-phone era, like not everyone had phones, but everyone just knew
(08:52):
you just need to go to this certain field at a certain time and you see people there. (Laughter)
Stephen Wilson (08:59):
That's wonderful. So, when you studied linguistics in college, did you get into it at that point? Like, did you start to get into the subject matter? Or were you, how did that go for you?
Laura Gwilliams (09:11):
Yeah, this was kind of a transformative few years for me. I never really, as I said, put much kind of passion into, into schoolwork, and then everything just changed and I found myself just fascinated by what it was that I was doing, and fell in love with the process of learning for, like, the first
(09:35):
time, I think. And yeah, also I was really fortunate that I had an undergraduate advisor, Lise Fontaine. We often think of her as the, the angel in my life. And she really believed in me and really made me see for myself that I could do these things and
(09:59):
that I had some kind of academic talent. And yeah, I guess this kind of spurred me on to pursue the topics further. I
(10:19):
think the thing that I really enjoyed about linguistics is that it felt very systematic, and I liked that I could sit down and create, say, a novel sentence and then use a set of tools in order to understand how it is that that sentence came to be, and why it's structured in a certain way, and not in another certain way. And I'd never thought about language in
(10:42):
this systematic sense before, it was always just something that came out of my mouth without really considering it. So…
Stephen Wilson (10:50):
So, syntax is what kind of appealed to you intellectually?
Laura Gwilliams (10:55):
Yeah, I also, I studied a particular, but I didn't study Chomskyan linguistics. I studied functional grammar, which is a slightly different flavor. Actually, Halliday, I think he is Australian.
Stephen Wilson (11:11):
Yeah, and functional grammar, it's, yeah, like, you're talking to one of the few people in our, that's had some exposure to that, probably, in our field, because Sydney University, where I did my undergrad, had a big functional grammar kind of wing. And there was this kind of, like, unspoken tension between the, you know, kind of generative people who were, like I said,
(11:33):
like, kind of, in the Stanford school of generative linguistics, and they're all focused on, you know, Australian Aboriginal languages, mostly, and then there were the functional people. And yeah, so I took courses in that and I really enjoyed it, actually. It's like, it's just a different, you know, it's just a different kind of grammar, right? It's, it's more semantic, it's definitely more semantic from the get-go and
(11:54):
it's more tied to pragmatics and information structure.
Laura Gwilliams (11:59):
Yeah, yeah, exactly.
Stephen Wilson (12:00):
So yeah. I appreciate it, like, having that as part of my training. So that was big where you were?
Laura Gwilliams (12:07):
Yeah, and we didn't have a generative wing. It was all functional grammar. So, I wasn't exposed to, to generative linguistics until much, much later. Yeah, yeah. Which I think is not so common.
Stephen Wilson (12:25):
Yeah. One of my favorite, like, assignments that I remember from uni, was I did a functional grammar analysis of the Leonard Cohen song, Story of Isaac. I don't know if you… Do you know that by any chance?
Laura Gwilliams (12:38):
No, I don’t.
Stephen Wilson (12:39):
It's like, it's okay. It's kind of obscure. It's like, it's like the telling of the, the Abraham and Isaac story, but from the point of view of Isaac. Instead of doing the killing, you're, like, getting killed. And then I compared it to, like, the biblical version, with a functional grammar analysis. And I put so much work into that, like, you know, I studied, like, the structure of every single sentence in, like, the biblical telling of the story, and then the song, and just kind of showed how, like, Leonard Cohen had, like, you know, just changed the narrative, and, like, how he, you know, all the devices he'd used to, like, flip the perspective. It was cool.
Laura Gwilliams (12:59):
Oh, cool. It's really cool. Yeah. Nice.
Stephen Wilson (13:18):
Okay, so, I know that you went to grad school at NYU. Did that kind of just follow? How did, how did that come to be?
Laura Gwilliams (13:26):
Yeah. So now, so then, after my undergrad, this undergrad advisor of mine, Lise, she asked me if I wanted to do a PhD with her in, like, theoretical linguistics. And I thought that would be a good idea. Honestly, I didn't have
(13:47):
kind of a better plan. I really enjoyed what it was that I was doing with her. So, I thought that was a good idea. But, I didn't, I was too late in the process to actually apply, and the UK system requires you to have a master's before you can apply to a PhD program directly, and I was also a bit too late in
(14:11):
that process. So, I was working in a falafel shop at the time, and my plan was to continue working in this falafel shop for a year to save up enough money, which, we can just pause there for a second. I'm not sure in what world I was living in that I was going to save all of this money working in my falafel
(14:33):
shop, to save up enough money to be able to afford to do a master's, such that I could then eventually do the PhD with Lise. But this, this was all my plan. And I signed a lease in Cardiff and I was all geared up to stay there for another year. And then, kind of out of the blue, Lise messaged me and was like, I saw that there's this master's
(14:57):
program, and it's just one year, and they cover the tuition fees. So maybe you could do this and then you would be good in just a year's time to do the PhD. I was like, this sounds great. And she's like, there's just a couple of things. One, it's in the Basque Country, in northern Spain. (Laughter) And
(15:21):
two, it's slightly different from what it is that you've been working on so far, in as much as it's about the cognitive neuroscience of language. At this point, I was like, okay, Lise, I love you, but I think that you've lost your mind. I don't know anything about the brain other than, like, roughly
(15:42):
where it's located in my body. (Laughter) But, I was like, okay, whatever Lise says, I would go. So, I was like, okay, I'll apply to this thing. There is no way that anything is going to come out of this. There's just, there's just
(16:02):
no way. But, I put my application together, and it was also partly in Spanish, which I was like, okay, what? The master's itself was going to be in English, but the application was partly in Spanish.
Stephen Wilson (16:16):
At least it
wasn't in Basque. You should be
grateful.
Laura Gwilliams (16:18):
Yeah, actually, I did have the option, and I didn't, I didn't even approach that side. Yeah, luckily, one of the people living in my house knew some Spanish. So she helped me with this. Like, yeah, I put my application together, and, my, everyone was saying, what are you doing? I'm like, I don't know. But this will make Lise happy, so I'm going to do it. And I put my application in and honestly just forgot about it. I
(16:40):
was like, okay, well, that's a week of my life I'm never gonna get back, but it's fine. I just continued working in my falafel shop. And then, sorry, I'm giving you the long version of this. But, we can maybe fast forward it.
Stephen Wilson (16:59):
I want the long
version, yeah, it's good.
Laura Gwilliams (17:02):
So then, yeah, fast forward. Three months later, I'm going on a long bike ride with a friend of mine. And we're going down one of these beautiful Welsh hills, and she gets a puncture in a tire. But luckily, at the bottom of this hill, there was a pub. So we stopped at this pub to fix her
(17:26):
puncture, and while the tire is getting fixed, she goes inside to, like, wash the oil off her hands, whatever. And what do you do in these idle moments but pull out your phone and check your emails. So I did that. And I see that, okay, new email, it had, like, congratulations. I'm like, wait, what? And I open it up, and it's
(17:51):
from the BCBL, the Basque Center on Cognition, Brain and Language. I'm like, oh, I totally forgot that I applied. And then my friend comes out, and my face must have been a picture, because she was like, Laura, you okay, what's going on? I'm like, I just got accepted into a master's program. You applied
(18:14):
to a master's program? I'm like, yeah. She's like, that's great. Where is it? Is it in London? Is it here in Cardiff? I'm like, no, no, it's in the Basque Country. What's the Basque Country? I'm like, yeah, exactly. So yeah, and I, I wasn't gonna go. I was like, I, I'm not gonna just rock up to,
(18:37):
like, a country where I don't speak either of the two languages and study a thing that I have no, like, I know nothing about. But then, I Google-imaged San Sebastián, where the research center is, and I mean, to this day, it's the most
(19:00):
beautiful place I've ever seen in my life. So yeah, I decided to pack my bags and endeavor into this adventure with the idea that, okay, if I failed miserably, which, at that point, I was like, this is much more likely than not, that this is just going to be a catastrophe. But I'll be in the
(19:21):
same position that I would be in if I didn't try, so I decided to do it. And at first it was tough. It was like, I've never, never done statistics before. I didn't know anything about the brain before, like, it was all brand new. And I feel like I had a constant, like, brain pain for the first few months of
(19:43):
just, like, growing all of these neurons, because clearly that's how it works. But I just fell in love with it, completely.
Stephen Wilson (19:51):
The subject
matter?
Laura Gwilliams (19:53):
The subject, yeah. And yeah, I told Lise, I won't be coming back to do a PhD with you in theoretical linguistics, I think I need to pursue the cognitive neuroscience of language instead. And yeah, I just, I
(20:13):
just loved it.
Stephen Wilson (20:15):
And how did, how did she feel about that? Was she as supportive as you would have hoped a mentor to be?
Laura Gwilliams (20:23):
Yeah, no, she's, she's amazing. I mean, she was disappointed, because I think she was excited about having me back to work with her, but, I had also corresponded with her as I was going through, and they were very clear. Yeah, exactly. So I think she was just really happy that I found something
(20:43):
that I was really passionate about.
Stephen Wilson (20:47):
Yeah, I think that's really all you want for your students, right? At the end of the day, you want them to find something where they're going to really thrive.
Laura Gwilliams (20:54):
Yeah.
Stephen Wilson (20:55):
Even if it means letting go, which is hard to do sometimes. Okay, so did you do research there, or was it all coursework?
Laura Gwilliams (21:02):
I did a couple of behavioral studies. It was a lot of coursework, it was mainly coursework, but we had projects that were going on alongside. And yeah, I investigated morphological processing with, like, an auditory lexical decision task, which was a really nice introduction to doing
(21:27):
experiments, and I worked with Arthur Samuel and Phil Monaghan there. Both of whom, I feel like I also owe a lot to them. They were really, I learned a lot from them about how to do science and how to think about problems and how to write papers. So, that was, yeah, that was a really great experience
(21:52):
to…um…
Stephen Wilson (21:53):
And did you go
to NYU after that?
Laura Gwilliams (21:55):
Right, okay, then (chuckle), then I knew at that point that I wanted to do a PhD in, like, neurolinguistic topics. And I decided that I wanted to do it in the States, because I felt like you maybe have a bit more autonomy over the studies that you conduct, and you have a little bit more time to do them. But I, I didn't feel quite ready to just go straight into a PhD program. So, I decided to apply to, like, lab manager, research assistant positions instead.
(22:32):
So, yeah, so I looked at different open positions, and NYU had an opening at the Abu Dhabi campus.
Stephen Wilson (22:06):
You've got a bit of a history of going to odd places. (Laughter)
Laura Gwilliams (22:55):
Yeah, I can't say that it was always really planned. But I mean, yeah, definitely wouldn't change a thing. And so, yeah, so, fast version, I applied to this position, and Alec Marantz and Liina Pylkkänen were the directors of the MEG language lab in, in Abu Dhabi. And, yeah,
(23:21):
so then I packed my bags and went over to Abu Dhabi for, for a couple of years and worked as a lab manager there. And also had the great opportunity to even do my own experiments as a research assistant. So, that's where I learned how to do MEG and kind
(23:44):
of taught myself how to code and…
Stephen Wilson (23:48):
Yeah, because I know you have publications dating back to about then, so, I didn't realize that that was as an RA rather than a PhD student.
Laura Gwilliams (23:56):
Yeah. Yeah. Yeah, exactly. Yeah. That was, yeah, that was really great, and Alec and Liina were really supportive of that, of, like, me kind of coming up with my own experiments and testing different things. Yeah, and then from, from there, I applied to
(24:16):
different PhD programs in the US and decided that NYU was, was the best fit, and then I continued working with Alec and then convinced David Poeppel to join my team as well. So, yeah.
Stephen Wilson (24:34):
I am sure he
didn’t take much convincing.
Laura Gwilliams (24:39):
Yeah, he was very enthusiastic about having me on board. So, yes, that was great. I had, like, two lab families, really, one in linguistics with Alec and his group, and then one in psychology with David's group, and I just, literally just split my day in half, like I would spend the
(25:01):
morning in one building and the afternoon in the other, and it worked, worked really well.
Stephen Wilson (25:08):
Isn't that the story of our field? Right? I mean, it's just like, inherently, you have to bridge these, these worlds, right? Like, if you want to do language neuroscience research, you just need to, it's always on the border of two things.
Laura Gwilliams (25:24):
Yeah, yeah, I think so. Yeah, I think that's part of what makes it challenging, but also exciting. Like, I think in order to, to really do language neuroscience properly, you do need to pull in expertise and ideas from, from
(25:45):
all of these different disciplines. So, yeah, I think it's not, also, I mean, that kind of gets reflected in the, like, backgrounds of the different people who work in this field. Like, someone could have a background in linguistics, or computer science, or neuroscience, or biology, and
(26:08):
everyone is equally as relevant and welcomed into the community, I think, and everyone has their piece to contribute.
Stephen Wilson (26:18):
Yeah, it's so true. And it's funny, I often don't know which field people came from when I first meet them, like, and, you know, on the podcast, I always like asking people where they came from, and, like, sometimes I'm just surprised to learn that they came from, like, CS or, you know, philosophy or whatever, like, people just come from all these different backgrounds. Okay, so shall we get to talking about the paper that we decided
(26:42):
to talk about today?
Laura Gwilliams (26:46):
Yeah. That
would be great.
Stephen Wilson (26:46):
Is this dissertation related? Is this from your dissertation time, this paper?
Laura Gwilliams (26:51):
Yeah, this, yeah, exactly. Yeah, I worked on it towards the end of my PhD, and it's one of the papers that, that made it into my dissertation.
Stephen Wilson (27:02):
Okay, so it's called Neural dynamics of phoneme sequences reveal position-invariant code for content and order. Just came out in Nature Communications. I looked at the received date and the accepted date. (Laughter) This is a two and a half year review process, which maybe
(27:23):
we'll talk a bit about as we go through. That's maybe one of the longest I've seen. I'm sure you have some, some more stories from that.
Laura Gwilliams (27:32):
Yeah, yeah, it was quite a journey. Which, yeah, a number of people have commented on, on that duration of time. But it was also, I mean, eighth of May 2020. Yeah, no one was in a good frame of mind in that month, either. So, I think there were a lot of historical things going on during that time.
Stephen Wilson (27:52):
Yeah. Right. Yeah, this is the pandemic paper too, or at least the review. You must have finished the work before then. Okay, so yeah, I mean, well, it's been out as a preprint for a few years, and I know that it's a well-cited preprint, and it's now out in a top journal. So congratulations. It's a new paper and it's about something which people don't really study
(28:12):
enough, which is sequencing. So can you tell me why, why do you think sequence, like, why do you think the study of sequential processing is so important for studying speech comprehension?
Laura Gwilliams (28:29):
Yeah. So, I mean, I think that sequences obviously play a crucial role in many different parts of language processing. Here, I'm looking at phoneme sequences, but it's also crucial for sequencing words together, sequencing phrases together. But looking at sequences at the phoneme level,
(28:50):
I think is also nice, because the closer you are to the sensory input, the more readily able you are to get clean signals non-invasively. So yeah, so from that standpoint, it was a really nice kind of level to look at. I also feel
(29:13):
like the phoneme sequences in particular, I find it really fascinating, because it's kind of at the moment that you're going from processing something sensory, to then connecting to something symbolic. So, yeah, you're going from something kind of in the outside world only, to something that only exists in
(29:36):
your own mind. And I think that that's one of the most exciting things about this paper, is it's starting, we're starting to understand how the sensory pieces are actually used to then connect to stored representations in, in our mind, in this case, say, lexical representations.
Stephen Wilson (30:02):
Yeah, crossing that, that bridge between an acoustic input and an abstract linguistic representation.
Laura Gwilliams (30:07):
Right, exactly.
Stephen Wilson (30:08):
And so how did you decide to work on this question? Did you, was, did you kind of develop that interest in sequencing? Or did somebody say, hey, why don't we work on sequencing?
Laura Gwilliams (30:20):
Yeah. So, the, I mean, the, the lead-in to this was actually pretty straightforward. It was something I'd been thinking about for a while, based on a previous study that I had done, where I basically found that the brain encodes the properties of a speech sound for a super long
(30:41):
period of time, for around about a second after that sound has completely disappeared from the actual sensory signal. Which, to me, just led to a conundrum. Like, how can you keep this information around for such a long period of time, while you're still hearing other
(31:01):
sounds come in at the same time as well? So, it suggests that there's a very high degree of parallel processing going on. But, up until now, I hadn't come across a study that really explained how it's possible that all of these different sounds
(31:23):
get processed at the same time, without them interfering with one another, and actually, also keeping track of the order with which all of those sounds actually entered into the ear, which are two very important things that you want your system to do correctly in order to be able to correctly figure out what people are saying.
Stephen Wilson (31:45):
Right. Yeah, I'm just kind of thinking back to, like, what would have been, you know, before you did this work, what was the paper that showed us the most, and it was, like, Mesgarani et al., 2014, from the lab that you're now in, right? About how speech sounds are processed in the brain. I'm sure you were living in, I mean, I'm sure you knew the paper well.
(32:08):
But it kind of, and I talked to Eddie about it on Episode Three, two years ago. But, you know, in that paper, you kind of see the brain respond to each phoneme as it comes in, and there's, like, a very distinctive signature, you know, that, that allows them to reconstruct, you know, to reconstruct what phoneme is being listened to at that moment in time. But it's all very, like,
(32:29):
moment by moment in time, right? It's almost like, if you hear the word cat, there's a ‘k’ happening in the, in the neural signal, and then there's an ‘a’ and then there's a ‘t’, and there's never, and that's not in that paper, like, a, you know, any mechanism by which you would integrate those sequential phonemes into something larger. So you're saying that you noticed that, that, yeah, those representations
(32:49):
don't actually just disappear the moment the phoneme finishes, they linger for maybe a second, and you're going to do something about it?
Laura Gwilliams (32:58):
Right, exactly. And with the Mesgarani paper, there you're looking at electrocorticography electrodes. So, you're recording from, say, tens of thousands of neurons, as opposed to in my work, where I primarily use magnetoencephalography. There, you're pooling over, like,
(33:19):
hundreds of thousands of neurons together. And so I think that that's why that discrepancy kind of comes out. And that was part of the question of this paper as well, like, is the reason that, if you're looking at just a reduced set of neurons, is the reason this looks like a transient encoding because the
(33:41):
actual information is moving across space? And when you're looking at the global whole-brain view that you get with MEG, you would be able to actually detect the responses as they, they are actually moving across a much larger neural
(34:04):
population. So I think that all of this is kind of consistent with the different sensitivities that different recording techniques have as well.
Stephen Wilson (34:14):
Okay, that's interesting. So can you tell me about the study that this paper is based around, like, the participants and the stimulus that they listened to?
Laura Gwilliams (34:26):
So, um, sorry, you mean, what was it that, what task people were performing in this experiment or in the previous experiment?
Stephen Wilson (34:35):
Yeah. Oh, no, this, the current 2022 paper that just came out.
Laura Gwilliams (34:39):
Okay. Yeah, so, here the task is quite nice, because the participants just need to listen to stories for a couple of hours while their head's in the MEG, and this naturalistic listening approach is actually something that I really, I feel like it's a good way forward and something that I plan to continue doing in the
(35:02):
future. So, yeah, so people listen to these stories, and I annotated the stories for precisely what phoneme occurred at what time in the story, and what phonetic features belong to each of those sounds. And then also, importantly, where that
(35:24):
sound actually occurs relative to word boundaries. So, is it the first phoneme of the word or the second phoneme of the word or the third? Such that we can then investigate the sequencing of the sounds within the word.
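To picture the shape of that annotation, here is a minimal, hypothetical sketch of what one such phoneme-level table could look like. The field names and values are invented for illustration and are not the paper's actual annotation scheme.

```python
# Illustrative sketch of a phoneme-level annotation for naturalistic story
# listening. Field names and example values are hypothetical.
import pandas as pd

annotations = pd.DataFrame([
    # one row per phoneme event in the story
    {"phoneme": "k", "onset_s": 12.480, "duration_s": 0.080,
     "voiced": 0, "fricative": 0, "nasal": 0,            # a few of the 14 phonetic features
     "pos_in_word": 1, "pos_in_syllable": 1,             # order-related features
     "word": "cat", "word_onset": True,
     "surprisal": 4.2, "entropy": 3.1, "word_freq": 5.6},  # information-theoretic covariates
    {"phoneme": "ae", "onset_s": 12.560, "duration_s": 0.085,
     "voiced": 1, "fricative": 0, "nasal": 0,
     "pos_in_word": 2, "pos_in_syllable": 2,
     "word": "cat", "word_onset": False,
     "surprisal": 1.7, "entropy": 2.4, "word_freq": 5.6},
])

# Each row can then be aligned to the MEG recording via its onset time,
# e.g. to epoch the data around every phoneme onset.
print(annotations.head())
```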
Stephen Wilson (35:42):
Okay, that's, that sequencing piece, that's so novel here. So, you end up having 31 what you call linguistic features, and they're kind of, like you just said, they're kind of in a few different categories, right? So 14 of them are acoustic-phonetic, kind of capturing properties like place, manner and voicing. Then you've got, like, order-related features that are
(36:06):
encoding where the, where the phoneme is in the context of the word and syllable, and morpheme. That's, that, you know, that, there's your, like, original, you know, you did your first studies ever on morphology and so now you're like…
Laura Gwilliams (36:19):
Yeah, it's
like that. Right, exactly.
Stephen Wilson (36:24):
And then you also model, you've also got coding for boundaries between words, and then you've got these information-theoretic measures like surprisal, and entropy, and frequency. Can you, can you talk about why those are in the model and how you computed those?
Laura Gwilliams (36:42):
Yeah, so some of the properties here are included essentially as covariates. So, representationally, I was primarily interested in those 14 phonetic features. But language has a very annoying habit,
(37:03):
although I actually think it's intentional, of correlating with itself, in terms of its, its features are correlated, and so certain sounds tend to occur at certain positions in the word with a given likelihood. So, to make sure that I was truly
(37:24):
investigating phonetic processing, and not other aspects of language that I didn't want to focus in on here, that's why I included some of these other properties, like, yeah, the, where the sound is in the word and some of the statistical properties. That being said, some of, some of the
(37:48):
properties in here, too, I actually, in later analyses, wanted to look at phonetic processing as a function of, for example, surprisal, to see whether phonetics gets processed the same in a highly predictable environment or in a
(38:10):
low-predictability environment. So, to make the model kind of complete, I put everything in the model to begin with, and then split things up by those different factors later on. And in terms of how I…
Stephen Wilson (38:26):
Yeah, it ends up
being quite…
Laura Gwilliams (38:28):
I was just
going to say, sorry…
Stephen Wilson (38:29):
I didn't mean to talk over you. Yeah, well, yeah, why don't you finish? Yep.
Laura Gwilliams (38:36):
Okay. Yeah, sorry, just in terms of how I computed these, I used a very large language corpus, basically, a spoken language corpus, to determine how likely a sound is given all of the other possible sounds that could occur, given the sequence of sounds in the word at that point.
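As a concrete illustration of those quantities, here is a minimal sketch of how phoneme surprisal and cohort entropy can be computed from corpus counts. The toy counts and the prefix structure are invented for the example; the actual corpus and estimation details used in the paper are not specified here.

```python
# Hedged sketch: phoneme surprisal and entropy from corpus counts.
# 'continuations' maps a word-initial phoneme prefix to counts of the next
# phoneme observed in some hypothetical spoken-language corpus.
import math

continuations = {
    ("k",): {"ae": 900, "ih": 400, "ow": 300, "r": 250},   # after word-initial /k/
    ("k", "ae"): {"t": 500, "n": 220, "p": 120, "b": 60},  # after /k ae/
}

def surprisal_and_entropy(prefix, next_phoneme):
    counts = continuations[prefix]
    total = sum(counts.values())
    probs = {ph: c / total for ph, c in counts.items()}
    # Surprisal of the phoneme that actually occurred: -log2 p(next | prefix)
    surprisal = -math.log2(probs[next_phoneme])
    # Entropy over all possible next phonemes, given the prefix heard so far
    entropy = -sum(p * math.log2(p) for p in probs.values())
    return surprisal, entropy

s, h = surprisal_and_entropy(("k", "ae"), "t")
print(f"surprisal = {s:.2f} bits, entropy = {h:.2f} bits")
```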
Stephen Wilson (39:02):
Okay. So, originally, some of these are intended as covariates, but they ended up being quite important to the analysis. I mean, the, I mean, they kind of end up being, you know, but what about the order, the positional stuff, was that always intended to be a central part of this study, or was that, did that kind of just come later?
Laura Gwilliams (39:22):
Yeah, no, this was, that was always intended to be a key part of the, of the analysis. And yeah, in the later analyses I break up the phonetics based on the location that it occurs in the word, and things like that. But one of the, yeah, I actually quantify location in kind of three
(39:50):
different ways. So, where the phoneme is in the syllable, where the phoneme is in the word, and where the syllable is in the word. And it didn't make it into the final version of these analyses, but I think it's also an interesting question of what the kind of end goal of the sequencing is. Like, is the goal
(40:14):
here to make words out of phonemes? Or to make syllables out of phonemes? Or maybe, maybe the phoneme isn't actually the right kind of basis unit to be looking at in the first place. So, yeah, part of the goal of including these different distance metrics was to try to adjudicate between them.
Stephen Wilson (40:36):
Okay, so that didn't make it in. And that's why, I was wondering why you kept on saying (sub)lexical, with ‘sub’ in parentheses, throughout the paper, like, and so, what are you talking about? And now, now I see, like, you're thinking that it might be phonemes to syllables, and not to words, right?
Laura Gwilliams (40:52):
Yeah, right. And here, I didn't really have the ability to discriminate whether phonemes were being made into morphemes or lexical items. So, the parenthesis ‘sub’ is to kind of indicate that maybe what actually is happening here is sequencing into morphological
(41:12):
constituents, which, as you mentioned, morphemes are something very close to my heart. So, I feel, this is something that I would like to investigate in the future, precisely: what is the unit being connected to, later, downstream, or maybe higher downstream?
Stephen Wilson (41:37):
Yeah. Cool. So you code all these things in the dataset, like, you have 21 people, they listen to two hours of stories, you go through phoneme by phoneme, coded by hand, I guess. And then you, then you try to predict them, using neural data from the, from the MEG, and also using the
(41:58):
spectrogram of the stimulus, kind of like as a control condition, I guess. So, the prediction model is really complicated. It's called back-to-back regression, and it was developed by your co-author Jean-Rémi King and some of his colleagues in a previous paper in NeuroImage, that I definitely had to, like, read, in order to, well, try to read, to
(42:20):
understand your paper. And, as far as I understand it, like, what it's trying to do is respect the fact that, like, the neural data and the featural data are both multidimensional, right? So the neural data, you've got, like, 208 channels of MEG recordings, and then, then you've got these 31 features, and it's, like, multidimensional on both sides. So, it's not just
(42:41):
a typical regression model. So, can you explain why you chose this back-to-back regression approach, and maybe try and give us the gist of how it works? Because it's really quite hard to understand.
Laura Gwilliams (42:55):
Yeah, so the main motivation for the back-to-back is to try to overcome the challenge that I mentioned: that if we take all of these different features of language, those features are very likely to be correlated with one another. And so, when you're trying to decode feature A, you
(43:16):
want to be certain that you're decoding feature A, and not that which is correlated with feature A, and the back-to-back regression essentially allows you to, to separate out the correlation between these different features. So you can be confident that you are actually just decoding the
(43:39):
feature of interest. And I guess, briefly, the way that this works, and the reason it's called back-to-back regression, is because the first thing you do is fit your run-of-the-mill decoding regression model, in order to get a prediction for,
(44:02):
let's say, all of those 31 features that you're interested in. But then, the way that you evaluate how good that decoding was, rather than just taking, say, the truth of feature A and correlating that with the predicted feature A, you're going to take the truth of all of your features
(44:26):
and compare that to the predicted values of just one feature. And so you're just going to look at the variance accounted for above and beyond the variance that all of those correlated features account for as well. This ends up being a pretty harsh analysis, essentially, kind of in
(44:51):
proportion to how correlated your features are. Fortunately, in this case, the features aren't too detrimentally correlated. So it ends up being possible. But yeah, the power of your analysis just ends up reducing with the strength of the correlation. Obviously, if your features are identical,
(45:13):
a correlation of one, you won't be able to separate them.
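To make that two-step logic concrete, here is a minimal sketch of back-to-back regression in scikit-learn, following the verbal description above. The ridge estimator, the single half-split, and the random placeholder data are illustrative assumptions; the paper's actual implementation (cross-validation scheme, regularization, scoring) may differ.

```python
# Minimal sketch of the back-to-back regression logic described above.
# Shapes and estimator choices are illustrative, not the paper's exact pipeline.
import numpy as np
from sklearn.linear_model import RidgeCV
from sklearn.model_selection import train_test_split

n_trials, n_channels, n_features = 5000, 208, 31
X = np.random.randn(n_trials, n_channels)      # MEG at one time point (208 sensors)
Y = np.random.randn(n_trials, n_features)      # stimulus features (phonetic, position, surprisal...)

# Split the data: one half fits the decoder, the other half evaluates it.
X1, X2, Y1, Y2 = train_test_split(X, Y, test_size=0.5, random_state=0)

# Step 1 (decode): map brain activity onto all features at once.
decoder = RidgeCV(alphas=np.logspace(-3, 3, 7)).fit(X1, Y1)
Y2_hat = decoder.predict(X2)                   # predicted features on held-out trials

# Step 2 (back-to-back): regress the predicted features onto ALL true features.
# The diagonal coefficient says how much feature i is decodable over and above
# whatever its correlated neighbours already explain.
b2b = RidgeCV(alphas=np.logspace(-3, 3, 7)).fit(Y2, Y2_hat)
unique_scores = np.diag(b2b.coef_)             # one "unique decodability" value per feature
print(unique_scores.shape)                     # (31,)
```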
Stephen Wilson (45:17):
Yeah. Okay, that, that helps me understand it a bit. And there was another thing that I was a little unclear on. So when you're predicting, like, either from the neural data or from the spectrogram, are you using just a single slice of time? Or is it like a window of time over which the signal is evolving? Like, is there a moving-window kind of approach?
Laura Gwilliams (45:38):
Yeah, I just
use one single time stamp at a
time.
Stephen Wilson (45:45):
It's like literally a 208-channel vector of MEG signal from one moment in time, predicting the features, either at that moment or at a different moment.
Laura Gwilliams (45:55):
Right.
Stephen Wilson (45:56):
Okay, cool. So, in the, you know, when the paper starts out, you kind of have this, I guess it's like a proof of principle in the first figure, where you just show that you're able to predict most, if not all, of the features from the MEG data. Is that a fair characterization of that first figure?
Laura Gwilliams (46:14):
Yeah, and also kind of showing that it is indeed the case that these features are decodable for a long period of time, they're not just instantaneous and then disappear, but they're around for, like, half a second or so. Yeah.
Stephen Wilson (46:30):
Exactly. Okay. So yeah, what the figure, the key part of the figure, shows is time on the x-axis, and then the explainability of various features like nasal, vowel, voicing, approximant, fricative, as well as those other features like entropy and surprisal, and location features. So all of them are kind of predictable for about half a second, relative to the onset of the phoneme. And so
(46:53):
that's what you told us before, like, about how, like, these things are not just represented for a moment, they're represented over time. And you know, how long is a phoneme, right? So like, just so, I mean, this, I was calculating this from your methods section, you've got two hours of data, and you've got 50,000 phonemes, and so that's basically seven phonemes per second. So, therefore, if a
(47:17):
phoneme is being instantiated in the signal for half a second, that means that you have three phonemes at a time, right?
Laura Gwilliams (47:28):
Yeah, yeah, exactly. Yeah, each, yeah, in these stories, each phoneme is about 80 milliseconds or so.
Stephen Wilson (47:37):
Yeah, so
something of the order of three
to a little more than three.
Laura Gwilliams (47:42):
Yeah. Right.
Exactly.
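For anyone checking that back-of-the-envelope estimate, the arithmetic is just rate times persistence; the numbers below simply restate the figures quoted in the conversation.

```python
# Rough arithmetic behind the "about three phonemes at a time" estimate.
n_phonemes = 50_000          # phonemes annotated in the stories
duration_s = 2 * 60 * 60     # roughly two hours of listening
persistence_s = 0.5          # how long each phoneme stays decodable

rate = n_phonemes / duration_s          # ~6.9 phoneme onsets per second
overlap = rate * persistence_s          # ~3.5 representations active at once
print(round(rate, 1), round(overlap, 1))
```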
Stephen Wilson (47:43):
So yeah, that first, so what you're showing first is you can decode this information from MEG, or M-E-G, Liina told me that, that you guys don't call it ‘meg’.
Laura Gwilliams (47:54):
You can do
whatever you want, Stephen. It’s
your podcast.
Stephen Wilson (47:57):
I guess so. Yeah. But you know, I need to get the terms right. And then you also, okay, so the actual, like, you're talking about this in the paper, like, the actual predicting, predictability, or the prediction performance, is not high, right, it's only slightly above chance. Don't take this as any kind of
(48:18):
critique, because I'm, like, you know, an fMRI guy, and we, and we deal with signals that are, like, you know, a fraction of 1% and get really excited about it. So, I know that, like, it's not about the size of the signal, it's about the, you know…
Laura Gwilliams (48:32):
What you can
do with it.
Stephen Wilson (48:33):
What you can do with it, exactly. (Laughter) So, you know, you're actually only predicting, like, your chance level of prediction for your binary features would be 50%, and you're mostly, like, hovering around 51, 52, but with super low p-values, because you've got two hours of data and 20 subjects. So, that's not important, right? Can you
(48:55):
explain, like, you know, it's not, it's not really about, like, how predictive it is, just the fact that it is predictive?
Laura Gwilliams (49:01):
Yeah. And, yeah, so I think it's expected that, especially with the continuous story listening, I mean, we've already kind of indirectly demonstrated here that a lot of the signal is being washed over from previous sounds and subsequent sounds, and so it's not surprising that
(49:26):
the, the amount of variance you can explain is tiny, because there are so many things going on when you're doing something like listening to a story. I think you would expect the effect sizes to increase much more if you were just blasting someone with one syllable and then taking it away. But yeah, so the point here is the robustness of
(49:47):
the effects, given how many repetitions we have of these different sounds and how many subjects we were able to record from.
Stephen Wilson (49:59):
All right. And then you have your control condition of predicting from the spectrogram. And it looks like, from the spectrogram, you can probably predict better overall, but you don't really talk about that in the paper, it's all kind of relative. But it looks like you can predict the acoustic-phonetic features, but you can't really predict the other features, like surprisal and order, right? Is that a fair
(50:21):
characterization of the findings there?
Laura Gwilliams (50:22):
Yeah, yeah, exactly. And, indeed, the same sounds that we can decode better from the spectrogram, we also decode better from the neural responses. But that doesn't hold true for the more higher-order properties, like the statistics, and like the, the order of those sounds. So it seems that those
(50:44):
properties are not present in the spectrogram, and the fact that we can decode them from the brain responses reflects that it's something that the brain is applying to the acoustic signal, it's not something present in the acoustic signal.
Stephen Wilson (51:03):
Right. Yes, that's clear. Okay. So then you kind of, so that's the, that's just the sort of setup. Then you ask the question, like, how does the brain, you know, given that you're representing these phonemes for, like, half a second each, you ask, like, well, how does the brain then do that without mixing up the phonetic features of the multiple sounds that it
(51:24):
must be coding at once? And the first hypothesis that you test is what you call position-specific encoding. Can you explain what that is, and how you tested it, and why you decided that's not the answer?
Laura Gwilliams (51:37):
Yeah. So, one solution that you might think that the brain employs, to be able to process multiple sounds at the same time without getting confused, is, well, it's easy: you have one set of neurons that likes, let's say, P sounds when they occur at the beginning of a word, and a totally different set of neurons that likes P sounds when they're in the
(51:59):
second position of the word, etc., etc. And so you'll have a different set of neurons for each sound, but also a different set depending on where the sound is in the word, and that would allow you to code that up, that would give you a reasonable solution to this kind of conundrum that we see, that these different sounds are processed in parallel. So, the way that I
(52:23):
tested whether or not that seems to be the solution the brain implements, is by taking my decoding algorithm, and I train it on just the responses to the first sounds of the word. So let's say I'm trying to distinguish a fricative from a
(52:44):
non-fricative sound, at first phoneme position. If the, if the way that the brain solves this is by having a totally different set of neural populations, then if that same fricative was in second phoneme position, my decoder should be completely
(53:04):
useless at reading out that information at second phoneme position. And essentially, that doesn't seem to be true. I can take the decoder which I trained on phoneme one, and that generalizes very well to phoneme two, and I can do the same, train on phoneme two, and that generalizes very well to phoneme three. And you can do any kind of mix of positions you
(53:25):
want, and you're still able to read out the information, which suggests that there is a set of shared neural populations which encode phonetic information, regardless of where that sound actually occurs in the word.
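A minimal sketch of that train-on-one-position, test-on-another logic, using a generic linear classifier from scikit-learn. The fricative example comes from the description above; the data shapes, the classifier choice, and the random placeholder data are assumptions for illustration, not the paper's pipeline.

```python
# Hedged sketch of the cross-position generalization test described above:
# train a decoder on word-initial phonemes, test it on second-position phonemes.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

n_epochs, n_channels = 4000, 208
X = np.random.randn(n_epochs, n_channels)        # MEG at one latency, one epoch per phoneme
is_fricative = np.random.randint(0, 2, n_epochs) # feature to decode (fricative vs not)
position = np.random.randint(1, 4, n_epochs)     # position of the phoneme in its word

train = position == 1                            # fit only on first-position phonemes
test = position == 2                             # evaluate only on second-position phonemes

clf = LogisticRegression(max_iter=1000).fit(X[train], is_fricative[train])
auc = roc_auc_score(is_fricative[test], clf.decision_function(X[test]))
print(f"cross-position AUC: {auc:.2f}")          # ~0.5 here because the data are random
```

If phonetic encoding were strictly position-specific, this cross-position score would drop to chance; the generalization described above corresponds to it staying well above chance.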
Stephen Wilson (53:41):
And I'm guessing you weren't surprised to rule out that theory, right? Because we kind of knew that the neural response was, at least in considerable part, related to the spectrotemporal receptive fields of the neurons, and, like, that, you know, the different phonemes would differ a lot on those dimensions, and that that was
(54:03):
going to be reflected in the neural signal. So you were going to be able to do it across position. Right?
Laura Gwilliams (54:09):
Right. And position is, of itself, I don't know, like, I could make up a word which is 321 phonemes long, and you wouldn't expect to be able to arbitrarily have a neural population which can encode any position.
Stephen Wilson (54:27):
Only if you're a
native Hawaiian. (Laughter)
Laura Gwilliams (54:30):
Right. Right. What is kind of cool, though, is that, even just from a spectral standpoint, sounds which occur at word onset do tend to have very different spectral properties than those occurring, like, say, at the end of the word, or, like, before or after a vowel, and these kinds of things. So, I get into this a little bit
(54:52):
deeper later in the paper, but this is one of the first indications that what we're looking at here is something slightly abstracted away from the actual acoustic signal, that it is common across the different acoustic realizations of the same phonetic feature.
Stephen Wilson (55:12):
Right. So, if that's not the answer, let's talk about this other coding mechanism that you then investigate. And here, I'm going to read a quote from your paper, because I think you said it more clearly than I could: ‘The idea is that each speech sound travels along a processing trajectory, whereby the neural population that encodes a speech
(55:35):
sound evolves as a function of elapsed processing time.’ So can you tell me, like, what would that look like? And how would that solve the sequential representation problem, if it works like that?
Laura Gwilliams (55:47):
Yeah, so one, one idea is, okay, you have a sound, it enters the ear, it goes to cortex, and it's processed in that same set of neural populations for 500 milliseconds. However, this would lead to the problem that
(56:10):
you then have the same neural population essentially trying to process multiple sounds at the same time. So, an alternative that I wanted to test here was that it's not just one neural population that processes a sound, but that you'll have population A process it and then pass it to
(56:30):
population B, which processes it, passes it to population C. And so, as a function of time, the information gets passed between different neural populations. And one of the important things here is that it always gets passed along that same spatial trajectory. It always, it always
(56:53):
traces the same path as a function of time. And why would this solve the problem? It solves the problem because, by the time you're hearing the next sound of the word, those neural populations have already kind of passed the hot potato over to the next set of neural populations. And so their work
(57:16):
is kind of freed up, ready to receive the next pack, packets of information. And then they, in turn, pass it over to the next set of workers. And so, this means that parallel processing can occur without, without ever having to burden the same neural
(57:36):
populations with processing more than one sound at the same time.
Stephen Wilson (57:43):
So, how do you
test this in your paper?
Laura Gwilliams (57:47):
So, as you nicely set me up, as a function of time, I'm fitting a different decoder at each millisecond, and then figuring out the accuracy of each of those decoders as a function of time. So here, simply, what I'm doing is asking,
(58:12):
okay, the decoder that I trained at, let's say, 100 milliseconds after the sound began, is that decoder still informative for reading out the information at a later point in time? So, is the way that that neural pattern exists
(58:33):
at 100 milliseconds the same at 200 milliseconds? That would suggest that the information is in that same set of neural populations for that amount of time. Or has the neural pattern actually changed, such that I wouldn't be able to use the decoder from earlier in time to read out the information
(58:53):
later in time?
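A minimal sketch of that temporal generalization scheme, written as an explicit double loop for clarity. The classifier choice, data shapes, and placeholder data are assumptions made for the example; MNE-Python also provides a GeneralizingEstimator that implements the same idea more efficiently.

```python
# Hedged sketch of temporal generalization: fit a decoder at each training time,
# score it at every testing time. The diagonal of the resulting matrix is the
# "same time" accuracy; off-diagonal cells test whether the neural code at one
# latency still works at another.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

n_epochs, n_channels, n_times = 2000, 208, 60    # e.g. 60 samples spanning ~600 ms
X = np.random.randn(n_epochs, n_channels, n_times)
y = np.random.randint(0, 2, n_epochs)            # binary feature (e.g. voiced vs voiceless)

train_idx, test_idx = train_test_split(np.arange(n_epochs), test_size=0.5, random_state=0)
scores = np.zeros((n_times, n_times))

for t_train in range(n_times):
    clf = LogisticRegression(max_iter=1000).fit(X[train_idx, :, t_train], y[train_idx])
    for t_test in range(n_times):
        scores[t_train, t_test] = clf.score(X[test_idx, :, t_test], y[test_idx])

# A sustained, stable code shows up as a broad square of high scores;
# the evolving, trajectory-like code described above shows up as a narrow diagonal.
print(scores.shape)
```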
Stephen Wilson (59:01):
And this approach is called temporal generalization, and it's laid out in a paper by, again, Jean-Rémi King, with Stanislas Dehaene, from 2014. It's a really interesting paper, and the way you implement it here is really nice. So, it's basically expressed in Figure 3, which is, like, really the core of your paper, right? Is that, is that a
(59:24):
fair thing to say? Like, if you were to, you know, kind of make a postage stamp about this paper, it would just basically have Figure 3 on it. So yeah, it's hard to describe Figure 3 to the audience, because it's, it's very complicated. I pored over it for a long time. It looks like a rainbow of fingers at a 45
(59:44):
degree angle, and I'm not really going to try and explain it, because I think you just explained what it really shows. But the idea of the 45 degree angle is, it's kind of showing that, like, you can only decode a certain moment in time past a phoneme by using that same moment of time to predict it. So in other
(01:00:04):
words, what's going on in the brain, like, 100 milliseconds after you hear a ‘Peh’, is not the same as what's going on in the brain 300 milliseconds after a ‘Peh’. So you can't make that prediction well. If you want to predict what's going on in the brain 300 milliseconds after hearing a ‘Peh’, you have to look at other instances of 300 milliseconds after hearing a
(01:00:28):
‘Peh’. So in other words, these phonemes are consistent in the neural response, but instead of just being, like, a drawn-out, like, 500 milliseconds of the same thing, it's like an evolving response that's predictable, but it's evolving. So I'm not saying anything other than what you just said, I'm just trying to say it more than once, because I think that's
(01:00:52):
necessary. It's really complicated. I definitely encourage people to take a look at that figure. It's, it's very nice. So, you know, by and large, like, these, the fingers don't really overlap in the figure, which is how you're showing that you're processing the phonemes in parallel, and simultaneously. But there's this one exception in the figure that
(01:01:14):
I found really intriguing, and I think you know what I'm talking about. You call it in the paper, you call it a left-sided appendage, which I think is funny. But you don't really explain it in the paper, you just mention it. Okay, so what it is, and I'm sorry if this is, like, getting really deep in the
(01:01:37):
weeds, but I'm very curious. It seems to be that the first phoneme of the word has a different pattern to all the others. Like, with the first phoneme only, you can predict early responses with data from later. It's almost like there's some kind of echo. Does that…
Laura Gwilliams (01:01:55):
Yeah.
Stephen Wilson (01:01:55):
What's up with
that? Do you have any idea
what's going on with that?
Laura Gwilliams (01:01:58):
Yeah, I have some hypotheses. This is another thing, I mean, I guess this always happens when, when you work on a project, it raises all of these really interesting things to look at in the future, and this is certainly one of them. So yeah, here you can basically decode the properties of the first sound when you're at the offset of the previous
(01:02:25):
word, and one of my hypotheses here is that this is a predictive effect. And, maybe this only happens for the first phoneme of the word, because when you're, you're listening to continuous speech, and you're trying to anticipate what it is that these people are saying, or you just kind of naturally anticipate, let's say.
(01:02:45):
Once you know what the first sound of the word is, knowing that reduces your uncertainty about the identity of the word more than any other sound in that sequence. And so I wonder if the way by which we kind of predict what word it is that
(01:03:06):
someone is going to say is partly by just anchoring ourselves onto what that first sound is, and then allowing the rest of the process to unfold. So I think this, this might be a predictive mechanism, which
(01:03:26):
facilitates lexical recognition. But yeah, this is something that I really want to look at further.
Stephen Wilson (01:03:35):
Yeah, that's
really interesting. I, I, I
think I agree with you. Well, Imean, that it's something
special about like,predictability of the word
boundary. But I wonder if itmight be my, my sort of first
guesses, it's almost theopposite of what you said, like,
like that prediction of the nextphoneme is actually much greater
(01:03:56):
within the word than across theword boundary, like the crossing
the word boundary as a point oflow predictability of the next
phoneme, relative to being in aword. And I would interpret it
as like, showing not so muchthat you can predict, I mean, I
think it looks like you canpredict early from later, not
later from earlier, but I don'tknow it's very complicated and I
(01:04:17):
don't think, I think that it's something like what you said, like there's a difference in the predictability. The first phoneme is very special. And it's also like some kind of index into the lexicon that's privileged.
Laura Gwilliams (01:04:34):
Yeah, I think
so. And I think that first
phonemes are, in general, special in terms of how they narrow down the lexical items. So, yeah, it'd be really great if we could use such a simple analysis like this to be able to
(01:04:56):
investigate lexical prediction and processing, that'd be great.
Stephen Wilson (01:05:00):
Yeah, I mean, I
think the future follow-ups are pretty clear, what you'd want to do there. Okay, so now I want to ask you something from the peer review process. So this is interesting. Um, so you know, this journal publishes the whole peer review document.
Laura Gwilliams (01:05:21):
Oh, wow! You
read through that!
Stephen Wilson (01:05:23):
I didn't read it
word for word. It's like 66
pages or something. I found itreally interesting. It just gave
me a, I mean, so, kind of what Iwant to ask you something that a
reviewer asked. But also want totalk about the mental process.
So I feel like this paper, it'sfunny, because you're reading
this document, right? And it'sreferring, it's like a back and
(01:05:44):
forth conversation. There are five reviewers here, by the way.
Laura Gwilliams (01:05:46):
Yeah.
Stephen Wilson (01:05:47):
Which I am sure
you know, I'm just saying that
for the listener, there are five reviewers. And, you know, it goes through three versions, I think, at least. And it's very interesting thinking about, okay, is this valuable, seeing this peer review history? And it definitely is, from my perspective as the reader, but it's also kind of brutal, because you're reading about
(01:06:08):
this paper that's morphed over a two and a half year period and it's changed quite a bit. Like, there's the trivial things, you know, figure numbers are different and page numbers are different. And like, you know, they're talking about a figure that doesn't exist anymore, that's got a different number now. So there's a lot of detective work. But it's also, at the end of the day, rewarding, because you do get to see, at a deeper
(01:06:28):
level, what people thought about different aspects of the paper. What do you think? Do you feel like the paper got better in peer review?
Laura Gwilliams (01:06:39):
Um, yeah, I
mean, I think that it definitely helped to clarify a lot, and I think that some things were highlighted that a lot of readers might also have questioned or had uncertainties or confusion about. So I definitely think that it helped
(01:07:02):
with that. There also was, I didn't have any MEG simulations in the paper beforehand, and I think that also helped a lot to really convince myself of what it was that I was seeing. And I don't think that I had any of the
spatial analysis on the…
Stephen Wilson (01:07:24):
No, you didn’t.
Laura Gwilliams (01:07:27):
And I think
that that also really helped. So, yeah, reviewers, if you're listening, it was painful, but I think that the outcome was a positive
one.
Stephen Wilson (01:07:40):
That was my
impression. I mean, I didn't
actually read the preprint separately, but just from seeing the final product and seeing the review, I could see what had changed. And so your reviewer two here, I mean that in the pejorative sense, not in the actual numerical sense, was reviewer four. (Laughter) So reviewer four was your reviewer two,
(01:08:02):
right? This is the trickiest reviewer, and they said their biggest critique was that it's no big deal that you're seeing this processing trajectory, because it's just going to reflect responses propagating through the auditory system. You end up rebutting that later. Like, reviewer four, they are like, I am done here. They don't come back, and the editor brings in reviewer
(01:08:24):
five to deal with the reviewer four response, and then, it's just a long story. Listeners, you should go read this. (Laughter) But anyway, you end up rebutting it, not even immediately, but later on down the line, to reviewer five. So can you tell us, how do you address that critique?
Laura Gwilliams (01:08:44):
Yeah, so this
was, I mean, this is where the spatial analysis comes in. So one idea, which, I don't know, maybe you can consider it trivial, I don't think it would have been trivial even if it had been true. But I think it's interesting that it's not. Which is, okay, sure, you just see that
(01:09:04):
the information is moving because you're going from, like, primary auditory cortex to frontal cortex as a function of time. And so, what I did to see whether or not that was the case was to look at the weights of the decoder,
(01:09:26):
basically, just to see at these different time points, as you go through the trajectory, where the informative information is on those MEG sensors. And what I find is that all of the phonetic features actually end up kind of looking like a spaghetti mess. (Laughter) But the information, which, I don’t
(01:09:51):
know, is an interesting finding, the information seems to stay local, within auditory cortex. I don't have the ability to make super strong spatial claims, but if I was to guess where this would be, I would say it’s hanging out in the superior temporal gyrus. It seems to be the case that the
(01:10:14):
information isn't just kind of moving, say, anteriorly as a function of time, but actually, the information is just kind of being reconfigured in the same brain area as a function of time. And that actually is a really important finding for
(01:10:36):
understanding how the system is actually using this information and why it gets configured this way. So, yeah, I think that's a really nice outcome of this work, to say that the information still remains local in one brain area, it's just the specific
(01:11:00):
neurons within that brain area that are kind of passing this information around.
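For readers who want a sense of how an analysis like this looks in practice, here is a minimal sketch, not the authors' code, of fitting a time-resolved decoder of a phonetic feature and pulling out its sensor-space patterns at each time point. It assumes MNE-Python (SlidingEstimator, LinearModel, get_coef) and scikit-learn, and uses random placeholder data in place of real MEG epochs.

    # Sketch: decode a phonetic feature at each time point, then inspect the
    # decoder's spatial patterns to ask whether the informative sensors move
    # around or stay local. Placeholder random data stands in for real epochs.
    import numpy as np
    from mne.decoding import SlidingEstimator, LinearModel, get_coef
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import StandardScaler
    from sklearn.linear_model import LogisticRegression

    n_epochs, n_channels, n_times = 200, 64, 100
    rng = np.random.default_rng(0)
    X = rng.standard_normal((n_epochs, n_channels, n_times))  # epochs x sensors x time
    y = rng.integers(0, 2, n_epochs)                          # e.g. voiced vs. voiceless

    # One decoder per time point; LinearModel keeps the interpretable patterns_.
    clf = make_pipeline(StandardScaler(),
                        LinearModel(LogisticRegression(solver="liblinear")))
    time_decod = SlidingEstimator(clf, scoring="roc_auc", n_jobs=1)
    time_decod.fit(X, y)

    # Spatial pattern of the decoder at every time point: (n_channels, n_times).
    patterns = get_coef(time_decod, "patterns_", inverse_transform=True)
    print(patterns.shape)
    # Comparing the topography of patterns[:, t] across time points is one way
    # to ask whether the informative sensors shift location or stay put.

With real data one would project these patterns onto the sensor layout or source space before drawing any anatomical conclusions; the point here is only the shape of the analysis.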
Stephen Wilson (01:11:06):
Right. So yeah,
that's just kind of telling us
something about the nature of this temporally extended trajectory, in terms of its anatomy, cool. So yeah, I mean, I don't know, I feel like the reviewer was harsh, but they ended up, you know, getting you to put some stuff in the paper that wasn't there before, and that made the paper better. So I suppose that's what's supposed to happen. Okay, so, you know, I
(01:11:29):
know, we talked before that, you know, we have a time cutoff coming up soon. I was like, oh, don't worry at all, we won't take that long to talk about this, and of course, we're coming up close to our time. But um, I want to ask you just a couple more things. So, the last major empirical aspect of the paper concerns the way that the timing of these
(01:11:51):
representations depends on high-level linguistic factors like surprisal and lexical entropy. Can you tell us about those findings?
Laura Gwilliams (01:11:59):
Yeah, yeah.
This, I think, is really a very, very cool aspect of this work.
If I'm allowed to say that.
Stephen Wilson (01:12:07):
Of course you
are. Hey, you just got a job at
Stanford, you can say anything.
(Laughter)
Laura Gwilliams (01:12:11):
Right. But
what I saw, I mean, every so often, your data just, I don't know, just surprises you. So yeah, the way this happened was I was looking at the rainbow fingers, as you describe them. And I noticed a couple of things. One, that the information about the
(01:12:36):
first phoneme of the word seems to be maintained for a much longer period of time than, for example, the last phoneme of the word. And similarly, I also noticed that when I tried to decode the properties of that first sound, I can't do it until later in time, as opposed to if it's the last sound of the word,
(01:12:59):
I can decode it much earlier. And I wasn't sure if this was really a word onset/offset response, or rather something to do with how well you can predict what it is you're about to hear, and ultimately, what word this person is actually saying. So I basically repeated the finger analysis, but I broke it up as a
(01:13:26):
function of those two things I just said. So low and high surprisal would be the cases where you can really well predict the next sound versus you can't well predict the next sound. And I quite clearly see that when you can predict what
(01:13:46):
the next sound is going to be, I can decode it from the neural responses around about 100 milliseconds earlier.
Stephen Wilson (01:13:56):
That’s a lot.
Laura Gwilliams (01:13:58):
Yeah, than in the cases where you can't well predict it. And so my interpretation of this, I should say, is it's not that I can decode it before it happens. And I think that is an important distinction. But once the sound has happened, you can reconstruct it faster from the
(01:14:20):
neural responses if that thing was already anticipated, let's say. And so I think this maybe suggests that as the brain is processing all of these very rapidly incoming sounds, it kind of sets itself up in a certain state of processing, and maybe kind
(01:14:41):
of pre-activates the synaptic weights of certain sounds that it's expecting to hear. And so if the sensory input then indeed matches those weights, the process can essentially just unfold much faster, as opposed to if they don't match. And it's not the case that the whole
(01:15:03):
process breaks down entirely, it just takes a longer period of time to get to that same level of processing. And then, on the flip side, I again split those fingers up, but this time as a function of how certain the listener can be
(01:15:25):
about what the identity of the word being said actually is. So, you can be very certain that, yeah, I'm saying word X, or very uncertain that I'm saying word X. And this, I also think, is really, really
(01:15:47):
cool. So in the cases where it is not very clear what word this word is going to end up being, the actual phonetic information itself hangs around for a longer period of time. And conversely, if actually, okay, the word hasn't finished, but it
(01:16:10):
doesn't really matter, because I already know what word you're saying, then the phonetic information actually gets discarded faster. And so, I think that both of these things, but especially the kind of lexical entropy stuff, really highlight how flexible this processing
(01:16:30):
system is. It's not just, like, a blind process that always unfolds with the same kind of temporal latency and duration, but the system can kind of choose to keep information around for a longer period of time, in precisely the circumstances where it might need that information in order to disambiguate the lexical identity, which, yeah, maybe you
(01:16:55):
can consider to be kind of part of the point of the phonetic processes, to be able to link to the higher order structures.
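A schematic version of the trial-splitting logic described here might look like the following. It is not the authors' pipeline: the per-trial surprisal values and MEG data are random placeholders, the names are invented for the example, and the latency measure at the end is just one simple way to compare how early each split becomes decodable.

    # Sketch: split phoneme trials into low- vs. high-surprisal halves and ask
    # whether decodable information appears earlier for the predictable trials.
    import numpy as np
    from mne.decoding import SlidingEstimator, cross_val_multiscore
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import StandardScaler
    from sklearn.linear_model import LogisticRegression

    n_trials, n_sensors, n_times = 400, 64, 120
    rng = np.random.default_rng(1)
    X = rng.standard_normal((n_trials, n_sensors, n_times))  # MEG epochs (placeholder)
    y = rng.integers(0, 2, n_trials)                         # phonetic feature label
    surprisal = rng.random(n_trials)                         # per-trial surprisal (placeholder)

    clf = make_pipeline(StandardScaler(), LogisticRegression(solver="liblinear"))

    def timecourse(mask):
        """Mean cross-validated decoding score over time for one trial subset."""
        est = SlidingEstimator(clf, scoring="roc_auc", n_jobs=1)
        scores = cross_val_multiscore(est, X[mask], y[mask], cv=5)  # (folds, times)
        return scores.mean(axis=0)

    low = surprisal < np.median(surprisal)   # more predictable phonemes
    auc_low, auc_high = timecourse(low), timecourse(~low)

    # One crude latency measure: first sample where decoding exceeds a margin
    # above chance. The effect described in the conversation corresponds to this
    # being earlier for low-surprisal trials. With placeholder data the numbers
    # are meaningless; real analyses would also need proper statistics.
    threshold = 0.55
    print("low surprisal, first sample above threshold:", np.argmax(auc_low > threshold))
    print("high surprisal, first sample above threshold:", np.argmax(auc_high > threshold))

The lexical entropy split would follow the same pattern, just with trials grouped by how uncertain the word identity still is at each phoneme rather than by phoneme surprisal.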
Stephen Wilson (01:17:03):
Right. So do you
think, at the end of the day, is that the most important contribution of the paper even? Because I think the paper makes several distinct contributions. One thing is kind of just showing again that, like, phonemes are processed for much longer than their actual temporal duration. Secondly,
(01:17:24):
it's showing how that takes place, i.e., it's not just kind of an extended representation that's static over time, but it's an evolving representation that allows you to decode order as well as identity. And then third, there's this modulation by predictability and by processing needs. I mean, those are three distinct contributions here. Which one is the most
(01:17:48):
exciting to you at this point?
Laura Gwilliams (01:17:51):
I mean, I
think honestly, all of them are
really exciting. I think that they all are kind of part of the kind of emerging understanding of how all this works. Like, there's just a constant interaction between these different levels of representation and, like, the
(01:18:12):
information available to the system at any given time. So I'm really excited about the stuff I just talked about, the lexical entropy, because that, I think, is getting us closest to what we talked about in the beginning, linking the sensory signal, making contact with that, like, special moment where you
(01:18:35):
go to something kind of stored and symbolic and kind of step away from the sensory stuff. But I think that we need to look at it from kind of the whole perspective of how it is that you even get to that point. And I think that that's through the
processing of these sequences.
Stephen Wilson (01:18:54):
Yeah.
Laura Gwilliams (01:18:55):
And another, I
guess, final thing is that I've
started to look at these fingers for not just the phonetic properties of speech sounds, but also, say, lexical properties of words, and they seem to follow this similar kind of
(01:19:17):
trajectory-type process, which, I think, is also really exciting because it suggests that maybe this is kind of a processing motif that just gets kind of, like, recycled at these different levels. So I think that's another reason why I think the whole package is really important for, yeah,
(01:19:39):
understanding the system at kind of, like, a broader level.
Stephen Wilson (01:19:43):
Yeah. It's a
lovely paper and I hope
everybody takes the time to, you know, take a look at the figures and understand it more deeply than we could get to in conversation. So I was gonna, of course, ask you about, like, what you've been doing in Eddie’s lab and what you're gonna do in your new lab at Stanford, but we ran out of
(01:20:03):
time. So I'll just have to invite you back some other time when you've done more stuff, and we can talk about it. I think that there's so much more, and I'm looking forward to seeing what you're going to do next with your work.
Laura Gwilliams (01:20:13):
Great. Thank
you. Yeah, I'm ready to be
invited back anytime. So it's been a lot of fun.
Stephen Wilson (01:20:18):
Yeah. Great.
Well, thanks so much for taking the time and I'll let you go, and we will hopefully catch up at, will you be at SNL?
Laura Gwilliams (01:20:28):
Yeah, yeah, in
Marseille.
Stephen Wilson (01:20:30):
I’ll see you in
Marseille. Right. Take care.
Laura Gwilliams (01:20:34):
Thanks, you
too. Bye.
Stephen Wilson (01:20:35):
Bye. Okay, well,
that's it for Episode 26. Thanks
very much, Laura, for coming on the podcast. I've linked to the paper we discussed in the show notes and on the podcast website at langneurosci.org/podcast. I'd like to thank Marcia Petyt for transcribing this episode, and the journal Neurobiology of Language for generously supporting some of the cost of transcription. A brief plug for the journal, on whose editorial board I do serve.
(01:20:58):
I hope that you'll all consider submitting your work to the journal. There are a lot of reasons to do so. It's open access, it has very reasonable article processing charges, only $850 if the first and last authors are members of the society. That's probably a third or a quarter of the cost of what most journals are charging. It has really expert and fair editors and reviewers,
(01:21:21):
in my experience. I've submitted four papers there, two are published, and two are in the review process, and I've been really happy with how it's gone. And it's a new journal, so it doesn't have an impact factor yet, but it will have an impact factor in the near future. So, I hope you'll consider sending your work there. All right, well, thanks for listening, and I will see you next time. Bye
(01:21:41):
for now.