August 26, 2025 27 mins

What do the names of colours, kinship terms and legal jargon tell us about the human mind? Dr Frank Mollica explores language as a cognitive tool – shaped by culture, adapted for purpose, and far from universal.

We dive into how children learn language, how it evolves and why legal language is so confusing. Along the way, we challenge common assumptions about how we think, communicate and learn.



Episode Transcript

Available transcripts are automatically generated. Complete accuracy is not guaranteed.
Cassie Hayward (00:00):
This podcast was made on the lands of the Wurundjeri
Woi Wurrung and the Bunurong peoples. We'd like to pay
respects to their elders past, present and emerging.
From the Melbourne School of Psychological Sciences at the University
of Melbourne, this is PsychTalks.

Nick Haslam (00:22):
Hello and welcome to another round of PsychTalks. I'm Nick Haslam,

Cassie Hayward (00:25):
And I'm Cassie Hayward. We're your hosts, and we're just
itching to deep dive into more fascinating research in psychology
and neuroscience. This season of PsychTalks is already halfway, so
if you haven't already, make sure you subscribe so you
don't miss any episodes.

Nick Haslam (00:39):
Today we're talking with Dr Frank Mollica. Frank studies the
wonders of human language, what it tells us about how
we think and how we interact with the world around us.
His research crisscrosses numerous cultural and linguistic settings, and it
also has a lot to say about the horror of
legal language. Let's get into it.

(01:01):
Welcome, Frank.

Frank Mollica (01:02):
Hi, thanks for having me.

Nick Haslam (01:04):
So, Frank, you describe yourself as a computational cognitive scientist,
and now that might be unfamiliar to some of our listeners.
So what does it mean? And can you explain what
cognitive science is and what's computational about your approach to it?

Frank Mollica (01:16):
Yeah. So, cognitive science is an interdisciplinary field. It mixes anthropology,
computer science, linguistics, neuroscience, philosophy, and of course, psychology. Uh,
we study the mind like a computer.
Right, uh, what are the mental representations and processes, the
computer programme basically, that explains human reasoning?
So I'm a computational cognitive scientist, I spend most of

(01:38):
my time translating between these two fields and basically taking
the theories and the insights from these fields and putting
them into math, and then, you know, formalising the math,
making nice testable predictions that, you know, I then go
out and find collaborators or go into the field and
and test with large data normally.

Cassie Hayward (01:54):
Um, as you've said, the focus of your research is language,
but you talk about it as a cognitive technology. What
do you mean by that?

Frank Mollica (02:03):
So, a cognitive technology, I guess at the simplest level,
it's a tool. For example, uh, language and number are
cognitive technologies, right, we invented that, that's our fault, right,
like we have littered the world with linguistic structure and
numerical structure, and then we learn from it and we
use it to achieve these goals.
Um, and importantly with these cognitive technologies, we learn them, right?
We learn them as kids, we use them to, you know,

(02:26):
achieve different goals, whether it's, you know, math or whether
it's communication, right, and that leaves extra structure in the
world that other people learn from. And additionally, we teach
these things, right, we explicitly teach people number, we explicitly
teach people, right, like language.
Right, and through this repeated pattern of uh learning and teaching,
uh we culturally evolve optimal solutions to our problems, right,

(02:49):
that help us to achieve our goals, and this actually
allows us to flexibly adapt to the environment in ways
that biological evolution wouldn't allow us to.

Nick Haslam (02:56):
This is so interesting because the idea that language is
shaped by culture is sort of intuitive to most of us, um,
the idea that, you know, different,

(03:04):
languages split up the world in different kinds of ways.
But I remember back in the dark ages when I
was studying language, uh, the emphasis was, was much more
on universals, on how all languages are in, if you like,
built from the same building blocks. So can you give
us an example, um, say, colour words, how do different
cultures or languages, if you like, um, break up the

(03:24):
colour space?

Frank Mollica (03:24):
Yeah, so colour is this fun case, right, we all
can perceive the same colours if we have normal vision, right,
we can distinguish, you know, the eggshell white from the white, um,
but different languages of the world, they don't have a
vocabulary that's that rich, right? Um, the basic
colour terms, the ones that we use every day, right,
not the eggshell, off-white, mauve, turquoise, right? There's only a

(03:48):
finite inventory of these basic colour terms. Uh, English has
about 11, right, um, blue, yellow, pink, purple, um, but
other languages have fewer and some languages have more. So
for example, Agarabi, uh, a language they speak in Papua
New Guinea, uh, that only has about 5 colour terms that,
you know, everyone would use, right, they use like blue, green, yellow, red,

(04:08):
and dark or black.
Um, whereas other languages like uh Mexican Spanish, for example,
has two blues, uh, they have a celeste kind of
sky blue, and then the rest of what an English
person will call blue.
Uh, similarly, Russian also has two blues, except they carve
it up differently. They have like the blue that you know,
generally English people would, you know, agree is blue, but

(04:30):
then they carve off a bit that's like a navy blue,
it's darker.

Nick Haslam (04:33):
So interesting, so I guess part of what a computational
cognitive scientist might do is see whether there are principles
underlying these differences, uh, right? So, uh, one of the
ones you refer to, I think when you talk about
this stuff is,
a principle of efficiency. So what do you mean by
efficiency and how do you show that languages are efficient
or not?

Frank Mollica (04:52):
Yeah. So this is the day job, right? Figuring out
what are these underlying principles that can explain like all
of this variation. We've optimally solved these goals, right? We,
we've come up with these efficient structures like language or number, right?
But how do we get there? Uh, and so one
thing I want to clarify is that goals don't always
go in the same direction.
Right, so let's take for example, a listener, a listener's

(05:13):
goal might be to hear a word and then be
able to identify any object in the world, right? So
if I give you a word that can point to
exactly the object that's intended, an optimal language from a
listener's perspective is a language where every possible state of
the world has its own unique word, right? So that
means we have a unique word for all 621 Marvel

(05:34):
universes and every single item inside them.
Right, uh, impossible, we can't do that, especially because if
you think about what a speaker's goal is, a speaker
wants to be able to choose the word as quickly
and accurately as possible, that's going to get their listener
to understand, right? They don't want to search memory through
an infinite, you know, amount of words in order to
figure out what's the right word that would get my

(05:54):
listener to understand what I'm talking about. They have only
a finite memory, they want to be quick. A
speaker's optimal language would essentially be one word, right, 'ba',
and when I say 'ba', it means whatever I, the speaker,
intended it to mean.
So all of my intentions, just one word, ba, I
don't have to think about it, right? Now, of course,
neither of these languages actually work, right, we call them
degenerate languages, but real languages have to figure out how

(06:17):
to trade off these two goals and how they choose
to budget these goals defines a sort of efficiency trade-off.
Right, it defines a trade-off between these two pressures, one
for uh simplicity, we don't want a language that blows
up in like a vocabulary, right, but also informativity, we
want all of the new words that we add to
be useful, right, to help us identify the things that

(06:38):
we care about. Uh and so languages of the world
tend to efficiently optimise this trade-off, right? Each language gets
to choose how it wants to budget and spend on,
you know, uh this much complexity for this much informativity.
And what's interesting is we can build these computational models
off of these two principles that define all of the
different ways that you could budget between informativity and uh

(07:00):
and simplicity. And when you do that, you get a
whole variety of sort of a trade-off Pareto frontier of
possible ways that languages could be, right? They could solve
this problem.
And all of the existing languages for something like colour
tend to fall exactly near this boundary, suggesting that every
language is indeed efficiently solving this problem. They're balancing their goals,

(07:24):
but they all pick how they want to budget, you know, uh, which,
which goal do I prefer more or prioritise more differently.
And that actually characterises the diversity that we see in
the world.
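The trade-off Frank describes can be made concrete with a toy model. To be clear, this is only a sketch, not the model from the published colour-naming work (the real analyses operate over perceptual colour spaces with communicative-need distributions): here the "colour space" is just six chips on a line, a language is a partition of the chips into contiguous named categories, complexity is vocabulary size, and informativity is how little uncertainty a word leaves about which chip was meant.

```python
import math
from itertools import combinations

N = 6  # a toy "colour space" of six chips on a line

def contiguous_partitions(n):
    """Every way to split chips 0..n-1 into contiguous named categories."""
    for k in range(n):
        for cuts in combinations(range(1, n), k):
            edges = [0, *cuts, n]
            yield [list(range(a, b)) for a, b in zip(edges, edges[1:])]

def complexity(lang):
    # one unit of cost per colour term in the vocabulary
    return len(lang)

def informativity(lang):
    # negative residual uncertainty (in bits) about which chip was meant,
    # assuming every chip is equally likely to need naming
    return -sum(len(cat) / N * math.log2(len(cat)) for cat in lang)

def pareto_front(langs):
    """Keep the (complexity, informativity) points no other point dominates."""
    pts = [(complexity(l), informativity(l)) for l in langs]
    return [p for p in pts
            if not any(q[0] <= p[0] and q[1] >= p[1] and q != p for q in pts)]

front = pareto_front(list(contiguous_partitions(N)))
print(sorted(set(front)))
```

Each surviving point is (number of terms, negative residual uncertainty in bits): for every vocabulary size there is one best way to carve the space, and the degenerate extremes Frank mentions sit at the two ends of the frontier. The claim in the published work is that attested colour vocabularies cluster near this kind of boundary.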

Cassie Hayward (07:34):
Is it just about efficiency, or does it shape how
we see the world? I had to chuckle when you
talked about the Marvel Universe because I feel like I've
learned a whole new language over the past few years
as my kids have become obsessed with soccer. So I've learned
a whole lot of new words, new everything.
Am I changing the way I see the world with that?

Frank Mollica (07:53):
So I would argue that language definitely changes how you
see the world, but it's not in like the Arrival,
the movie kind of way where if you learn the
alien language, right, suddenly you can time travel, right? And
it's also not in the perceptual kind of way, where like,
you know, my language has two colour
words for blue, now I can, you know, see blues differently,
because we can all see, you know, the shades of

(08:14):
blue the same, right, we can all distinguish between them.
Um, but what language does do is it points attention
on all of the structure in the world and it
highlights what's important, right, what's going to be useful distinctions
for later things in learning, right? Language is a primary
tool for social transmission, right, this core thing, uh, where
if we're going to culturally evolve good optimal solutions for,

(08:36):
you know, our different goals, then we need to be
able to transmit it. Uh, so one example to
give you, to make this concrete, is if
you think about, like, kinship terms.
Right, kinship terms, you know, everybody has a family tree, right,
uh kinship terms are things like mother and brother and whatnot.
So everyone has a family tree, but it's invisible, you
don't see it, right, like it's latent. If you're a

(08:58):
kid and you're trying to figure out what your kinship
terms are, you have to figure out basically just from
the words as highlighting different people, what the underlying relationships are,
and these underlying relationships can have different goals that they're
made to achieve that you don't actually pay attention to
or even see when you're a kid. For example, uh,
we can think about the Yanomami tribe, uh, these are

(09:20):
people that live in the Amazon rainforest, right?
Whereas English doesn't separate, uh, cousins: your mother's brother's kids,
your father's brother's kids, your mother's sister's kids, your father's sister's kids,
they all get the same term, they're all your cousin, right?
And Yanomami, they actually care about uh whether you're a
parallel cousin or a cross cousin. So a parallel cousin

(09:43):
is your mother's sister's kid.
Your father's brother's kids, right? A cross cousin is your
mother's brother's kids or your father's sister's kids, right? Um,
and so kids at a very young age have to learn,
you know, the difference between these two, different groupings, right? Um,
and this for a kid, you know, it can seem
very arbitrary, it's a nice genealogical relationship, it exists, you

(10:03):
can look at it on the tree, right? But like, why?
Um, and it turns out that these kind of relationships
actually have more to do, uh, with, in this case,
mating, for example, where, uh, it's tough to find, uh,
you know, a potential mate in these kind of tribal places, right,
and so this community solves that by having preferential marriages
to cross cousins when they're mate limited. So your cross

(10:24):
cousins are potentially future mates, um, but this doesn't matter
to a kid, right? Like a kid who's learning this
language is not going, who am I mating, they're a kid.
Right? Um, so without language, you wouldn't be able to
be aware of these kind of distinctions like early on,
and these kind of distinctions can highlight or build uh
structures that, you know, are useful for other goals later
on in life.

Nick Haslam (10:45):
That's so interesting and so you've given us an explanation
for why the Yanomami might have a different, way of
dividing up kinship than we do here. What's the reason why,
just going back to colour, why Russians would divide blue
in a different way from Mexicans and why we English
folk don't?

Frank Mollica (11:06):
Uh, so this is a, a nice question. I don't
know if I'm gonna have a satisfying answer, um.
The answer that I would uh suggest or I'd go
with a hypothesis at the moment is that if we
want to communicate about things informatively, right, that means that
we need to know how often we need to talk
about something, or how often this particular tool, in this case a word,

(11:26):
is going to be useful in achieving our goal. Um,
and so if you look at the, the different parts
of the colour space, what we want to do is
we want to normally identify things by colour.
Right, and so if there are things that are important
to identify by colour, right, that's how we're going to
carve up our colour space so that we can be
more precise and have harder edges around those specific cases.

Nick Haslam (11:47):
So sort of adapted to the local cultural or ecological
or something environmental context?

Frank Mollica (11:53):
Ideally it would be adapted, uh, exactly, it would be
adapted to the local context.
So for example, if you look at uh places that
are tropical, have lots of, uh, you know, rainforests, green vibes,
lots of bright, vibrant colours, uh, they tend to split
the colour space so that they, they make really clear
these bright, you know, uh, colourful, poisonous amphibians that you
should not touch, uh, versus, you know, things that are

(12:15):
OK to touch and interact with.

Nick Haslam (12:18):
So another aspect of language you've studied, and you brought
this up in relation to the uh kinship terms a
little bit, is how language is acquired.
And you were involved in some fascinating work again in
South America on the development of mathematical, uh, knowledge and
number concepts in particular indigenous people, uh, in that region.
Can you tell us a bit about that work and

(12:38):
what it showed?

Frank Mollica (12:39):
Yeah. So I was lucky enough to be able to
collaborate with some people who are working, uh, in Bolivia
with the Tsimane people. Uh, the Tsimane are hunter-gatherers,
it's a hunter-gatherer society, uh, they live not far from
La Paz along a river.
So what we're interested in is primarily numerical developments and
what's fun is that the Tsimane language has a base
10 counting system, right, so 1-2-3-4-5-6-7-8-9-10, then it sort of resets, right?

(13:05):
A lot of other industrialised societies also have, you know,
a base 10 counting system, right, any English two-year-old can
sing that song for you, they know how to count
like 1-2-3-4-5-6-7-8-9-10.
The interesting thing though is that while a 2-year-old
can count that, 2-year-olds don't actually know what those
words mean. So if you put a pile of cookies
in front of the same 2-year-old who just
counted 10 for you and ask them, can I have

(13:27):
3 cookies, right, you're just as likely to get the
entire pile of cookies as you are to get however
many fit in their hand, and certainly not 3.
Um, right, kids learn number words in stages. Uh, first
they learn what the meaning of one is, they can
hand you one cookie exactly, then they learn what the
meaning of two is, right, so they can hand you
1 or 2 cookies, but you ask them for 3,

(13:49):
you still get that handful, then they become 3-knowers and
they figure out what 3 means. Sometimes you see kids
that are 4-knowers, so they know what 4 means, but
what's really cool is that we don't have 5-knowers.
Right, by the time that a kid would be a 5-knower, right,
they just figured it out, they figured out the algorithm,
they can now count. So however many numbers that they

(14:09):
can count to, they can now accurately, well, as accurately
as they can count, they can hand you that many cookies, right?
What's really cool is this happens in all industrialised societies
that have a base 10 counting system. It takes a
couple of years, right, we see this in Japan, we
see this in Hebrew, Arabic, um.
Every language that we have data for, right, kids go
through the stage of development, you know, uh 1-knower, 2-knower, 3-knower, 4-knower.

(14:33):
We want to know, hunter-gatherer society, Tsimane kids, do they
show the same pattern, right? They also have a base
10 system, but you know, very different from these industrialised
societies that we have data for. So, uh, you know, uh,
my collaborators basically went and looked at the Tsimane children
and we found out that yes, they do go through
this exact same pattern. Uh, they go 1-knower, 2-knower, 3-knower, 4-knower,

(14:53):
and then eventually they figure out the algorithm. What's really
cool is we recently did a meta-analysis, and it turns
out that the cardinal principle knowers, the kids that figured
out the algorithm,
they all have some formal schooling, um, and formal schooling
among the Tsimane is actually really recent, um, and it's
not formal schooling like in an industrialised society, it's much more like, um,

(15:14):
you know, when the government can provide aid, they have
a teacher who comes and they teach the Tsimane kids
in Spanish, uh, and so the instruction is in Spanish
and it's very recent, um, but only the kids with
formal schooling, uh, have, at least in our data set, uh,
have acquired that full counting ability.
But what's really cool is that there are people, there

(15:34):
are Tsimane that are adults that, you know, did not
have any of this formal schooling, uh, and they can count, right?
And so we wanted to figure out, you know, well,
how did they figure it out, right? They didn't have
formal schooling, so is formal schooling actually required, right? Um,
and so the anthropologist on our team, David O'Shaughnessy,
actually, uh, went to go find adults that, you know,
might not have had the experience of formal schooling, but

(15:56):
they might have had mathematical uh experience elsewhere, they might
have done trade.
Right, so the Tsimane do trade with uh other people, right,
and the trade language is sort of Spanish because that's a,
you know, the the lingua franca of Bolivia, right? Um,
and what they found, Dave, when, when he went there,
he found out that basically
these people

(16:18):
maybe they, they can't exactly count, but they can do trade,
and so he found this really cool uh group of
people who could do mathematical computations that were necessary for trade.
So for example, uh, in the Tsimane society, basically trade
works in fives, you have these jatata leaves uh that
they trade or sometimes they'll trade like a bunch of
uh bananas or plantains if they're in the right sort

(16:40):
of amount, like for a cluster. They basically group them
in fives and they do multiples of 5, that's like
the trade.
So, uh, if you ask these people, you know, 5
times 2, 5 times 3, 10 times 4, fine, 10
plus 5, fine, 3 plus 4, no idea, wrong, right,
2 plus 5, nope.
Um, right, so they figured out enough math, they figured

(17:01):
out math that works specifically for them for the goals
that they're trying to achieve, right? And I think that's
why I guess numerical cognition is such a great idea,
such a great example of these cognitive technologies, right, these
technologies that are custom-adapted to our goals, right, if
the only thing that I really need to do is
trade in multiples of 5, I'm going to learn math

(17:21):
in multiples of 5, right? Like why learn that 1
+ 1 stuff?
Um, and I mean, this isn't the first time an
anthropologist has found a result. Um, if you look back, uh,
Geoffrey Saxe has worked with Brazilian candy sellers during periods
of inflation, where, uh, if you're trying to sell candy
in massive periods of inflation, these like 6 to 10
year old boys would basically be trying to sell candy

(17:43):
at the public transportation, uh, stops, right, but you wouldn't know,
like how much am I actually going to need to
cost these things because the, you know, the, the price
changes rapidly over the course of a day.
Right, so you have to sort of really quickly offhand
make some heuristics that in this case, use a magnitude
comparisons to figure out how you should price your candy
so that you have enough to buy tomorrow's box of

(18:04):
candy and that you still make a profit.
Right, and when you compare these, you know, like non-schooled
Brazilian candy sellers to, you know, people of equivalent socioeconomic
status with schooling or even the kids that have,
you know, like a good socioeconomic status and, you know, schooling,
they're better at these abstract magnitude comparisons than their peers, right,

(18:25):
they can't do the formal counting thing that they would
teach you in school, but it's like they can teach
you what they needed, right, and they're better at it, um, so.
These kind of things really are adapted, cognitive technologies are,
are really adapted to what we need to do.

Cassie Hayward (18:39):
I think they're such fascinating examples of how efficient language
can be, but also so adapted to the goals that
you need in that, in that scenario. But you've also
done research on the type of language that's spectacularly inefficient.
Can you tell us about legalese?

Frank Mollica (18:54):
Yes, um.
Legalese is notorious, right? What do I really have to say? Um,
legalese is difficult in so many different ways, um, rather
than just saying a clean sentence, legalese likes to take
a sentence and throw it in the middle of the
other sentence, right? For example, the contractor who knowingly and
of sound mind and good conscience entered this employment agreement

(19:16):
upon termination of their employment at either the employer's or
their own behest, will be entitled to a one-time payout of
$100 million.
It's a golden parachute clause that I just made up, um, right.
So when you have that clause that that's a big
long sentence, but it's actually two sentences where they just
threw one right in the middle, right? The outer sentence
is that the contractor is entitled to a one-time payout

(19:39):
of $100 million, right, upon termination of their employment at
either their employer's or their own behest, right? But then
we embedded this other sentence, this "the contractor was of
sound mind and good conscience when they entered this agreement".
Right? Why did that need to be in the middle
of this other sentence? It's memory taxing, legalese isn't fun, uh,
it's a tax on memory, but also legalese uses these

(20:01):
rare words that we never see anywhere else, um, mens rea,
mens sana, um, or even simple things if you look
at your, uh, real estate agreement in a lot of
different languages, right? Why do we call it lessee instead
of renter?
Renter is much more friendly when everyone knows what a
renter is. What is a lessee?

Cassie Hayward (20:20):
Is all the complexity required because legal concepts
are themselves extremely intricate, or are they just trying to
confuse us?

Frank Mollica (20:28):
Ah, great question. So, we've actually looked at this, uh,
and worked with my grad student Eric Martinez and Ted
Gibson at MIT.
Uh, we basically asked a whole bunch of lawyers, right, uh,
we gave them contracts that were written in sort of
plain English, and we gave them contract excerpts that are
written in legalese, and we asked them, are they both
equally enforceable, right? Do they have the same legal content,

(20:49):
do they have the same legal standing, right? Uh, and
overwhelmingly they do, right? Plain English equivalents do exist, it's possible.
So there's nothing about the concepts that make these things
complex because I can easily undo them, uh, just by
taking sentences outside of other sentences, using slightly more frequent words.

Nick Haslam (21:06):
So if language tends to become more efficient over time,
surely over time legalese has also become more efficient?

Frank Mollica (21:14):
Uh, you would think that. Um, no, we've actually also
looked at this. So, uh, in a study, I guess
last year now, uh, we looked at the entire US
legal code, uh, and I should say so far we've
only ever looked at US laws, right? It might be
different than other laws and we're looking at that now,
but so far we've only just looked at
US legal code. And if you look at the entirety

(21:34):
of the US legal code up to about 2022.
Right, and you can find uh other texts that are comparable.
We can look at like fiction from the exact same
time periods, right, or even academic texts with all of
their weird jargon and whatnot, um, from the exact same
time period, and we can actually look at, you know,
how prevalent are these really hard-to-process linguistic structures, this kind

(21:55):
of centre embedding where you take sentences and throw them
into other sentences or the frequency of words. Uh, and
if we look across all of this time, legalese is
always containing
more of these difficult-to-process structures than any of
the other control texts. This is despite calls for, you know,
reform in like the 1970s and even, uh,
the Plain Writing Act of 2010.
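The kind of corpus comparison Frank describes can be caricatured in a few lines. Everything here is invented for illustration: the two snippets stand in for the US legal code and the matched control texts, and the hand-listed "everyday word" set stands in for a real frequency list derived from a large reference corpus.

```python
# Toy version of the rare-word comparison; the snippets and the everyday-word
# list are made-up stand-ins for the real corpora and frequency data.
legal = ("the lessee shall hereinafter indemnify the lessor "
         "notwithstanding any prior covenant")
plain = ("the renter will protect the owner even if "
         "there was an earlier promise")

everyday = set("the a an will even if was there owner renter "
               "protect promise earlier any prior".split())

def rare_word_rate(text):
    """Share of tokens that fall outside the everyday-word list."""
    words = text.lower().split()
    return sum(w not in everyday for w in words) / len(words)

print(rare_word_rate(legal), rare_word_rate(plain))
```

The real study tracks this sort of measure, along with the frequency of centre-embedded clauses, across the legal code and time-matched fiction and academic texts; the toy just shows the shape of the comparison.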

Nick Haslam (22:18):
Uh, I do hope, Frank, that we don't get sued
for talking about all of this, and, and, uh, probably
we should be evenhanded and start talking about that jargon
in psychology you mentioned, but if legal language isn't getting
more plain and more straightforward over time, what is the obstacle?
Uh, and what does it tell us about how we
should be trying to increase the use of plain language? Yeah.

Frank Mollica (22:39):
So, whenever you're trying to change something that exists and
has this kind of structural momentum, it's really hard to
just overcome momentum. It's so much easier to just copy
this template and use it on the next document, right? Similarly,
there's uh very few structural changes that uh really incentivise
uh using simpler language, right? There's no reason to think using simpler

(23:00):
legal language is going to change people's finances or goals, basically.
It's harder to change something that already has momentum when
there's no clear incentive to do so.
Uh, and so we have to change the incentive structure
of institutions if we want to actually see results.

Cassie Hayward (23:13):
It reminds me how kids have their own language, every
generation has their own words for what's cool
and what's not. Is it just so they have their
own little kind of way of speaking that other people
aren't allowed in to understand?

Frank Mollica (23:27):
Uh, so when we did that study with the lawyers,
we actually asked them this exact same question. We were,
you know, interested to see if maybe it's just an
in-group bias, so it's like you're going to signal that
you're a good lawyer and so you're going to use
your jargon to do that. Um, and it turns out
it's not, the lawyers would equally hire, equally work with
uh people who use the plain language alternatives versus uh
legalese alternatives. What we actually think keeps legalese going this way, um,

(23:52):
is actually something called performativity.
Uh, it's this idea that language in legalese isn't just
describing something. Normally when we use language, we're trying to communicate,
we're just trying to describe the state of the world.
When we use legalese, we're actually changing the state of
the world, right? I am now placing with this legal
language an obligation over you or some kind of right
or something upon you, right?

(24:12):
The same way where you, you know, like crack a
wine bottle over a ship, and you say like, I
now christen you the whatever and you've now named it,
you've changed the state of the world, or when a
priest says I now pronounce you man and wife, right,
and you're now married, the world has changed and this
kind of performativity, right, um, is the same kind of
thing that happens in legalese.
Uh, and so we see this kind of performativity also

(24:32):
somewhere else, uh, we see this in magic spells, right, uh,
magic spells are also supposed to change the world by just,
you know, words themselves, and magic spells also use language
to signal that they're doing it, right? So your magic
spell is supposed to use some archaic language or it's
supposed to rhyme, right? That's how you know that it's
a magic spell. Well, our current hypothesis is that with
legal language, it's basically the same thing, except the structure

(24:54):
isn't rhyming or
archaic language or maybe it is some archaic words, um,
but it's throwing sentences in other sentences and being complicated, uh,
and that sets it apart, that gives it sort of
legal weight in people's minds, not in any sort of, uh,
evaluative body of the law.

Nick Haslam (25:09):
Fascinating stuff. So look, Frank, on a final note, I've
got a bone to pick with you about an old
paper of yours, uh, where you said that English speakers
have learned only about 1.5 megabytes of linguistic information,
which, as listeners of my age will remember, is about what a 5.25
inch floppy disc from the eighties, uh, would hold. I mean,
surely clever people like Cassie and I know more than that.

Frank Mollica (25:32):
I mean, it's true, like, clearly people know more than just,
you know, the save icon worth of information about language. Um,
but when we look at language, we talk about the
linguistic forms of words, right, and how things combine.

(25:44):
That's actually really small. One save icon, one floppy disc
contains all the information that you actually need about uh
language and how it combines, the actual structures and forms.
The part that really, you know, takes up most of
the space even on the floppy disc is the semantics,
what words mean, right? Uh, it's the part that makes
the large language models large, they're supposed to be world

(26:05):
knowledge or something like that. Um, but that's, you know,
separate from language, it's not much knowledge about language, um,
and even in our estimate,
the vast majority of the information that you
need to learn about a language is semantics, it's word meanings.
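Frank's floppy-disc point can be seen with back-of-the-envelope arithmetic. The three ingredient numbers below are assumptions chosen for roundness, not the paper's actual accounting; the only point is the shape of the result, that the total lands near 1.5 MB and semantics dominates it.

```python
# Illustrative back-of-the-envelope version of the ~1.5 MB claim.
# All three ingredient numbers are assumptions, not the paper's figures.
vocab = 40_000                     # rough adult vocabulary size
bits_per_meaning = 250             # assumed bits to pin down one word meaning
bits_per_form = 5 * 8              # ~5 characters per word form at 8 bits each

semantics_bits = vocab * bits_per_meaning   # word meanings
forms_bits = vocab * bits_per_form          # the word forms themselves
syntax_bits = 1_000                         # assumed: combinatorial rules are tiny

total_bits = semantics_bits + forms_bits + syntax_bits
total_mb = total_bits / 8 / 1_000_000
semantics_share = semantics_bits / total_bits

print(f"{total_mb:.2f} MB total, {semantics_share:.0%} of it semantics")
```

With these made-up but plausible inputs it prints 1.45 MB total, 86% of it semantics: the forms and the combinatorial rules barely register next to the word meanings.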

Cassie Hayward (26:19):
I feel like I need a whole floppy disc size
memory of the offside rule in soccer, which I will
never understand. But, Frank, the work you do probably
isn't the first thing people think of when they
think about psychology, but I think it really shines a
light on how we think and learn and speak and interact. Um,
and I just want to thank you so much for
sharing with us today.

Frank Mollica (26:39):
Thank you so much for having me. It was a pleasure.

Cassie Hayward (26:44):
And that wraps up this episode of PsychTalks. A big
thank you to our guest Dr Frank Mollica for sharing
his insights. This episode was produced by Carly Godden with
support from Mairead Murray and Gemma Papprill. Sound engineering by
Jack Palmer. Thanks for tuning in. See you next time.