Episode Transcript
Available transcripts are automatically generated. Complete accuracy is not guaranteed.
(00:00):
All right, so are you ready for this?
(00:00):
This deep dive, we're going to be looking at how AI is totally
changing the game in science.
And I think knowing you and your interest in AI,
you're going to have some serious mind-blown moments.
We're talking like plastic-eating enzymes,
the possibility of AI exceeding human intelligence,
like the whole nine yards.
I'm excited.
Yeah.
So our source material today is an episode
of the AI Uncorked podcast.
(00:22):
They recorded a live episode at the AI for Science Forum.
And it features Demis Hassabis.
He's the CEO of Google DeepMind.
Oh, wow.
Yeah.
And they're talking with Professor Hannah Frye.
But here's the kicker.
This episode also includes a panel discussion with four,
yes, four Nobel Prize winners.
Wow.
Yeah.
So we've got the minds behind AlphaFold and CRISPR
(00:43):
all in one room.
So yeah, we're dealing with some serious brain power here.
That's incredible.
So let's start with Demis Hassabis.
This guy has done it all, chess prodigy, video game designer,
neuroscientist, and now a pioneer in AI.
Seriously, is there anything this guy can't do?
I have to suspect that he's got the blueprint
for artificial general intelligence just
(01:05):
tucked away somewhere.
Yeah.
Like just waiting to unleash it on the world.
That wouldn't surprise me.
No.
But he also seems like a pretty down-to-earth guy.
He shared this hilarious story about how
he found out he won the Nobel Prize.
It was actually his wife's computer
that started buzzing with a Skype call.
And it turned out to be the call from Sweden.
And they were scrambling to find his phone number.
Oh, wow.
Imagine that, just like waiting, anticipation.
(01:28):
And then to celebrate, he had a poker night.
Ah.
With a bunch of chess grandmasters,
including Magnus Carlsen.
Wow.
So talk about a high stakes game.
It makes you wonder who walked away with the winnings.
Yeah, it does.
What's fascinating about Hassabis, though,
is that his work extends far beyond just winning games.
He's truly dedicated to using AI to solve
(01:48):
these fundamental scientific problems.
Oh, totally.
Let's get into that.
So his creation, AlphaFold, has completely
revolutionized the field of protein structure prediction.
We're talking over 28,000 citations.
Yeah.
But it's not just about the numbers.
What matters is what those citations represent.
Exactly.
Those 28,000 citations, they translate
(02:09):
to real world applications.
AlphaFold has helped researchers
determine the structure of incredibly complex proteins,
like the nuclear pore complex, which acts as a gatekeeper
for a cell's nucleus.
Right.
It's like figuring out this intricate lock mechanism
for one of the most important doors in the cell,
and it doesn't stop there.
(02:29):
AlphaFold is also being used to develop things
like a molecular syringe, like for delivering drugs
with incredible precision.
And here's where it gets really exciting.
Researchers are using AlphaFold to design enzymes
that can literally break down plastic.
Imagine the potential impact on the environment.
This isn't just theoretical science.
It's about developing solutions to real world problems.
That's incredible.
(02:50):
OK, so they've solved the static picture
of proteins, as Hassabis calls it.
But where do they go from there?
What's next on the AI for Science agenda?
Well, one of the exciting avenues they're exploring
is the GNoME project, which uses AI to design entirely
new materials.
And this ties in directly with your interest
in sustainability.
OK, I'm all ears.
How so?
Well, imagine a world with vastly improved batteries,
(03:12):
capable of storing much more energy, or even room
temperature superconductors, which could revolutionize
energy transmission, making it incredibly efficient.
That's the kind of potential we're talking about with AI-designed materials.
Wow, that would be a game changer.
So we've got plastic-eating enzymes,
super efficient batteries.
What other scientific dreams are they cooking up
(03:32):
over at DeepMind?
One of Hassabis's long-term goals is to create a virtual cell,
a complete computer model of a living cell.
They're starting with yeast.
But the ultimate goal is to simulate a human cell.
Hold on, a virtual human cell.
That sounds like something straight out of a sci-fi movie.
What would that even look like?
Think of it as the ultimate simulation.
By modeling all the complex interactions within a cell,
(03:54):
we could dramatically accelerate biological research.
We could test new drugs virtually,
gain a deeper understanding of diseases,
and potentially even personalize medicine
based on your unique genetic makeup.
The possibilities are truly mind-boggling.
Wow.
Yeah.
So a virtual cell, that's wild.
(04:14):
It just makes you wonder, what can't we do with AI these days?
Speaking of pushing boundaries, I'm
curious about their thoughts on quantum computing.
You'd think that quantum computing would
be a natural fit for a company like DeepMind, right?
Yeah, for sure.
But Hassabis actually has kind of a surprising perspective on it.
Really?
Yeah.
Tell me more.
So he believes that traditional computers,
like the kind we use every day, still have untapped potential.
(04:37):
He thinks that we can push them further, maybe even
to the point of modeling quantum systems themselves.
He's been in discussions with some of the leading minds
in quantum science.
And it seems like we might be on the verge of a major shift
in our understanding of computing.
So maybe we don't need to rush into quantum computing just
yet.
Maybe we can just squeeze a bit more out
of our current technology.
Exactly.
It's a reminder that we should never underestimate
(04:59):
the power of human ingenuity, even
when it comes to designing and utilizing computers.
And speaking of ingenuity, Hassabis
has also founded Isomorphic Labs, a company specifically
focused on using AI for drug discovery.
Oh, yeah, I've heard of Isomorphic Labs.
They've partnered with some major pharmaceutical companies,
right?
They have, yeah.
What's their approach?
(05:19):
They're using AI to streamline the drug discovery process
with the goal of dramatically reducing the time and cost
it takes to develop new medicines.
That's huge.
We all know how long and expensive
it can be to bring a new drug to market.
If they can really speed things up,
it could have a massive impact on the lives
of millions of people.
Absolutely.
And it's not just about speed and cost.
(05:42):
AI could also lead to the development
of entirely new types of drugs, targeting diseases
that were previously untreatable.
It's a really exciting time for medical research.
It sounds like Demis Hassabis has his fingers in a lot of pies.
Leading the charge in AI research,
exploring new frontiers in computing,
and revolutionizing drug discovery.
(06:03):
Does he ever sleep?
He's certainly a busy guy.
But what's remarkable is that he manages
to balance all of this with a deep commitment
to ethical considerations.
He's not just interested in building cool technology.
He wants to ensure that it's used for the benefit of humanity.
That's refreshing to hear.
It's easy to get caught up in all the hype of AI.
But it's important to remember that these technologies have
(06:25):
the potential to impact our lives in profound ways.
We need people like Hassabis who are thinking carefully
about the ethical implications.
Absolutely.
And that brings us to the panel discussion
with the Nobel laureates, which was
a highlight of the podcast.
OK, let's hear it.
What were some of the key takeaways from that discussion?
One of the central themes was the importance
of scientific rigor in the age of AI.
(06:47):
Hassabis and the other panelists emphasized
that we can't just blindly trust the results of AI algorithms.
We need to apply the same level of skepticism
and critical thinking that we would
to any scientific finding.
That makes sense.
Yeah.
Just because an AI comes up with a result
doesn't mean it's automatically correct.
We still need to test it, verify it,
understand the reasoning behind it.
(07:07):
Exactly.
It's about ensuring that AI is a tool that enhances
our scientific understanding.
Not a shortcut that undermines it.
And this ties into another important point that came up.
The need for AI systems that can explain themselves.
Oh, that's an interesting one.
So it's not enough for an AI to just spit out an answer.
We need to know how it got there, right?
(07:27):
Right.
If we can understand how an AI system arrives
at its conclusions, we can gain deeper insights
into the problem we're trying to solve.
It's like having a conversation with the AI
where it can teach us as well as answer our questions.
That would be incredible.
But is that even possible?
Can AI really explain its thought process in a way
that humans can understand?
(07:47):
It's definitely a challenge, but researchers
are making progress in this area.
They're developing techniques to make AI systems more
transparent and interpretable.
And Hassabis believes that we'll eventually
have AI systems that can communicate their reasoning,
perhaps even in the language of mathematics.
Imagine the possibilities.
Wow.
That would be a game changer.
It'd be like having an AI tutor that can not only solve
(08:10):
problems, but also explain the concepts behind them.
Exactly.
It would be a powerful tool for education
and scientific discovery.
And speaking of pushing the boundaries of what's possible,
someone in the audience raised a fascinating question.
Can AI be applied to the social sciences?
Can we use algorithms to understand human behavior?
Ooh.
Well, that is a tricky one.
(08:31):
Human behavior is so complex, influenced by so many factors.
Can we really capture all of that in an algorithm?
It's definitely a challenge.
Human behavior is not as predictable as the physical world.
But that doesn't mean we should dismiss the potential of AI
in the social sciences.
So how could AI be used in this context?
What are some potential applications?
(08:51):
One possibility is using AI to analyze large data
sets of human behavior, like social media posts
or online interactions.
This could help us identify trends, patterns,
and even predict future behavior.
That sounds a bit Big Brother-ish.
Are there any ethical concerns we
should be thinking about here?
Absolutely.
Data privacy is paramount.
(09:12):
We need to ensure that any data used for social science
research is collected and analyzed
in an ethical and responsible way.
And we need to be mindful of the potential for bias
in these algorithms.
Right, because if the data we feed into the AI is biased,
then the results will be biased too.
And that could have serious consequences
for how we understand and interact with each other.
Exactly.
We need to be very careful about how we use
(09:34):
AI to study human behavior.
It's a powerful tool, but it needs to be used responsibly.
So it's not just about the technical challenges of AI,
but also about the ethical and societal implications.
Precisely.
And that brings us to another important theme that
emerged from the panel discussion, the importance
of public trust in AI.
OK, that's a big one.
If people don't trust AI, they're
(09:55):
less likely to accept the scientific breakthroughs
it enables, right?
Exactly.
We need to ensure that AI is developed and used
in a way that aligns with our values.
And that benefits society as a whole.
If people see AI as a force for good,
they'll be more likely to embrace its potential.
So how do we build that trust?
What needs to happen?
(10:16):
It's about transparency, communication, and education.
We need to demystify AI to explain how it works
and to highlight its potential benefits in a way that resonates
with people's everyday lives.
So it's not just about publishing research papers
in scientific journals.
It's about engaging with the public, telling stories,
and building relationships.
Exactly.
And it's about involving the public in the conversation
(10:38):
about AI, seeking their input and addressing their concerns.
I like that.
It's about making AI more human-centered,
more focused on the needs and values of people.
Precisely.
And that brings us to the big question that's
on everyone's mind, artificial general intelligence, or AGI.
Ah, yes, the holy grail of AI research,
the idea of an AI that can think and reason like a human being.
(11:01):
It's both exciting and a little bit terrifying, isn't it?
It certainly is.
And it's a topic that sparked a lot of debate
among the panelists.
Some believe that AGI is just around the corner,
while others are more skeptical.
So is it even possible?
Will we ever achieve true AGI?
It's hard to say for sure.
But one thing's for certain.
The pursuit of AGI is driving incredible innovation
(11:23):
in the field of AI.
Even if we don't achieve true AGI,
we're still developing amazing tools and technologies
along the way.
Exactly.
And those tools are already having a profound impact
on science, from drug discovery to material
science to climate modeling.
So even if AGI remains elusive, the journey itself
is incredibly valuable.
OK, so AGI is a big unknown.
But there are some more concrete challenges
(11:44):
we need to address right now, like that question
about public trust in AI.
How do we ensure that people embrace these breakthroughs
instead of rejecting them out of fear?
OK, so we're back for the final part of our deep dive
into AI for science.
And I have to say, throughout this whole thing,
I keep coming back to this question of,
if AI can do so much, like analyze data, generate
(12:05):
hypotheses, design experiments, what
does that mean for the future of scientists?
Are we all going to be replaced by robots?
You know, that's a question a lot of people
are asking these days.
But I think it's important to remember that AI, at its core,
it's a tool.
And like any tool, its effectiveness
depends on the skill of the person using it.
Right, so like a hammer can build a house
or it can cause destruction, depending on who's holding it.
(12:26):
Exactly.
AI can be an incredible force for scientific progress.
But it's up to us, the humans, to guide its development
and its application.
We need to ask the right questions,
design the right experiments, and interpret the results
in a meaningful way.
So it's not about scientists becoming obsolete.
It's about scientists evolving and learning
to work alongside AI as a partner.
Precisely.
(12:47):
AI can handle the heavy lifting, analyzing massive data
sets and identifying patterns that would take humans years
to uncover. This frees up scientists to focus on the big picture,
asking the truly creative questions,
developing innovative research strategies,
and making those crucial leaps of insight
that drive scientific breakthroughs.
So instead of replacing human ingenuity,
(13:09):
AI actually amplifies it.
Exactly.
It's like having a super-powered research
assistant capable of processing information at lightning speed
and presenting you with a wealth of possibilities.
But it's still the scientists with their curiosity,
intuition, and deep domain knowledge
who ultimately guide the direction of the research.
And thinking back to that Nobel Laureate panel discussion,
it seemed like they shared this view.
(13:30):
They really emphasized the importance of scientific rigor,
even in the age of AI.
Absolutely.
They stressed that we can't just blindly accept
the output of an AI algorithm.
We need to apply the same level of critical thinking,
skepticism, and rigorous analysis
that we would to any scientific finding.
AI is a powerful tool, but it's not
a substitute for good science.
(13:51):
Right, and I remember one panelist even raised the question
of whether AI could hinder our understanding of how things
work.
Yeah.
If we become too reliant on AI to provide answers,
do we risk losing the ability to ask the right questions
ourselves?
That's a really valid concern.
It's like relying too heavily on a GPS navigation system.
You might get to your destination,
but you won't necessarily understand how you got there
(14:13):
or what other routes might exist.
So it's crucial to maintain that balance between leveraging
the power of AI and nurturing our own scientific skills
and intuition.
Exactly.
We need to use AI as a tool to enhance our understanding,
not replace it.
And that's where education comes in.
We need to train the next generation of scientists
to be both AI savvy and deeply grounded
in scientific principles.
It's about creating a new breed of scientists,
(14:37):
one who can seamlessly navigate both the digital
and the physical worlds of research.
Precisely.
And this brings us back to that final thought-provoking
question.
If AI can help us solve the mysteries of biology, materials,
and even the universe itself, what role
will human curiosity and creativity
play in shaping the future of scientific discovery?
Yeah, what a question.
(14:57):
It's almost like, what will it mean
to be a scientist in a world where AI can do so much?
I don't think anyone has a definitive answer yet.
But I believe that human curiosity, our innate desire
to understand the world around us,
will always be the driving force behind scientific progress.
AI may be able to provide us with answers,
but it's the questions, the why and the how,
that truly drive us forward.
(15:17):
And those questions often come from the most unexpected places,
from a spark of inspiration, a sudden connection
between seemingly unrelated ideas, a moment of pure,
unadulterated awe at the complexity
and beauty of the universe.
Exactly.
And those moments, those flashes of insight and inspiration,
are at the heart of what it means to be human.
They're what drive us to explore, to experiment,
(15:39):
to push the boundaries of knowledge,
and to create new things.
AI can help us along the way, but it
can't replace that fundamental human spark.
So in a sense, the more powerful AI becomes,
the more important human ingenuity becomes.
And I think that's a brilliant way to put it.
AI is not the end of science.
It's a new beginning.
It's an opportunity to redefine what's
(15:59):
possible, to expand our horizons,
and to tackle scientific challenges that were previously
beyond our reach.
But it's up to us to ensure that AI is used wisely, ethically,
and in a way that benefits all of humanity.
This deep dive has been incredible.
We've explored the cutting edge of AI for science,
from the intricate workings of protein folding,
to the grand vision of a virtual cell.
(16:19):
We've grappled with some of the biggest questions facing
science in the 21st century.
The role of human ingenuity in an AI-driven world,
the ethical considerations of powerful new technologies,
and the importance of public trust
in shaping the future of scientific progress.
It's been a privilege to share this exploration with you.
And I hope it's left you feeling inspired, curious, and perhaps
(16:40):
even a little bit awestruck by the incredible possibilities
that lie ahead.
I know I am.
And for our listener out there, this deep dive
has given us a glimpse into a future
where science and technology are transforming our world
at an unprecedented pace.
It's a future filled with challenges,
but also with incredible opportunities
for discovery, innovation, and positive change.
(17:02):
And as we move forward into this exciting new era,
it's important to remember that the future is not
predetermined.
It's something we create collectively
through our choices, our actions,
and our unwavering commitment to using knowledge
for the betterment of humankind.
Beautifully said. So to our listener out there, we leave you
with this final thought as AI continues
to reshape the landscape of science.
(17:22):
What role will you play in shaping
the future you want to see?