
April 8, 2025 • 10 mins

The conversation around AGI and ASI is louder than ever—but the definitions are often abstract, technical, and disconnected from what actually matters. In this episode, I break down a human-centered way of thinking about these terms, why they’re important, and a system that could help us get there.

I talk about: 

• A Better Definition of AGI and ASI
Instead of technical abstractions, AGI is defined as the ability to perform most cognitive tasks as well as a 2022 U.S.-based knowledge worker. ASI is intelligence that surpasses that level. Framing it this way helps us immediately understand why it matters—and what it threatens.

• Invention as the Core Output of Intelligence
The real value of AGI and ASI is their ability to generate novel solutions. Drawing inspiration from the Enlightenment, we explore how humans innovate—and how we can replicate that process using AI, automation, and structured experimentation.

• Scaling the Scientific Method with AI
By building systems that automate idea generation, recombination, and real-world testing, we can massively scale the rate of innovation. This framework—automated scientific iteration—could be the bridge from human intelligence to AGI and beyond.

Subscribe to the newsletter at:
https://danielmiessler.com/subscribe

Join the UL community at:
https://danielmiessler.com/upgrade

Follow on X:
https://x.com/danielmiessler

Follow on LinkedIn:
https://www.linkedin.com/in/danielmiessler

Chapters:

00:00 - Why AGI and ASI Definitions Should Be Human-Centric
01:55 - Defining AGI as a 2022-Era US Knowledge Worker
03:04 - Defining ASI and Why It’s Harder to Conceptualize
04:04 - The Real Reason to Care: AGI and ASI Enable Invention
05:04 - How Human Innovation Happens: Idea Collisions and Enlightenment Lessons
06:56 - Building a System That Mimics Human Idea Generation at Scale
09:00 - The Challenge of Testing: From A/B Tests to Biotech Labs
10:52 - Creating an Automated, Scalable Scientific Method With AI
12:50 - A Timeline to AGI and ASI: Predictions for 2027–2030


See omnystudio.com/listener for privacy information.


Episode Transcript

Available transcripts are automatically generated. Complete accuracy is not guaranteed.
S1 (00:00):
Unsupervised Learning is a podcast about trends and ideas in cybersecurity,
national security, AI, technology and society, and how best to
upgrade ourselves to be ready for what's coming. There's a
ton of discussion everywhere about AGI and ASI and whether
or not they're possible to achieve. I think they are.

(00:24):
And I want to talk about one way we could
possibly pursue that. So I'm going to step through definitions
of AGI and ASI, why we should care about them,
and a system for pursuing them. First, on the definitions themselves,
I think a big problem with AGI and ASI definitions

(00:44):
or really with AI definitions in general, is that they're too technical, and therefore not really useful in conversation. I think the best definition
for these things needs to be something that's very human centric.
It should be obvious, and I think we should use
this as a benchmark. Why should I care? We should

(01:05):
be able to look at these definitions and know why
we should care, or at least have a hint towards
why we should care. And I think if we can't
get that from the definition, then it's probably not a
very good one. So my definition for AGI is an
AI system that's able to perform most or all cognitive tasks,
as well as an average US based knowledge worker from 2022.

(01:29):
And I say a US based knowledge worker, because most
people probably won't doubt that such a person has some kind of baseline competence at doing lots of different tasks, which is
the general in AGI, right? AGI is artificial general intelligence.
So it's general tasks that you do in knowledge work.

(01:50):
And I think if someone's making, you know, a decent
salary as a US-based knowledge worker, there aren't too many people who are going to say that this person
doesn't have general intelligence. So we're using humans as the
baseline for having true general intelligence. And I say before 2023,

(02:11):
because that's when modern AI kicked off. And we don't
want to have the definition keep shifting because humans get
more and more augmented with AI, so the bar keeps moving, right? So we want to lock that in place.
ASI is a bit harder and a bit easier to
define at the same time. It's a little more intuitive
because it should be super or above human, but it's

(02:35):
also harder to think about because unlike human level generality,
we've never actually seen anything that's smarter than us. So
you have to actively imagine that. And I think both
of these definitions here are simple enough, and it's obvious
by looking at them why you should care. For AGI,
it could replace knowledge workers, which is going to affect

(02:57):
the economy massively. And ASI could do a whole lot more than that. So the next thing is, why do we care about AGI and ASI? What are they actually going to produce as output? I think the most important output, or at least the most tangible one, is invention: coming up with net new things, ideas, concepts, products, services,

(03:23):
whatever in the same way that humans do. And whenever
I think of that, I have one main question. Well,
how do humans do it? Like what is that actual methodology?
And I saw a recent episode of Lex Fridman's podcast.
He had an evolutionary biologist on who was talking about how, during the Enlightenment, people were meeting and sharing ideas in different shops and salons and, whatever, wine bars.

(03:48):
I'm not sure where they actually went, but they would
take their ideas, they would share their ideas, and they
would try to copy each other's ideas. But sometimes they
would make mistakes and those mistakes would make even better ideas.
But this idea exchange is the natural way that we got tons of innovation during the Enlightenment. And this

(04:09):
tracks for me because I've always seen innovation as like
bombarding your brain like a particle accelerator with ideas from
multiple sources, right? You talk with your smart friends,
you talk about cool ideas, you read a whole bunch
of books, you watch a whole bunch of videos. Whatever
you do, and all these ideas like go into your
brain getting bombarded by other ideas that may be different

(04:32):
or the same or whatever, and they just kind of
percolate in there and kind of reproduce in there. And
then as you sleep and you dream and you think
about other things and work on other things, all of
a sudden you'll be like, wait a minute and you'll
have like these moments where actual innovation happens. So the
idea here is really simple. Let's copy how humans do

(04:53):
this right. How do humans do this at an individual scale?
And let's use automation and AI to orchestrate and scale
that process, which looks, I think, something like this. So
you have your own ideas. Ideas from books, ideas from
other people, ideas from wherever. And you basically put that
into an idea repository. And you could look at this

(05:15):
project right here called Substrate, which I put together a couple of years ago. It's basically crowdsourced ideas, crowdsourced problems, and crowdsourced solutions: a way for us to pull them all together into one place where we can see them and work on them. And most importantly, we can now hand this

(05:38):
to AI to start thinking about them all together. Then
you have this idea of an idea combination system, and
this is where you combine ideas. You vary them slightly,
change them in a subtle way, add randomness, whatever, and
then fold those back into the idea store. And so the list of ideas just keeps growing. And then you

(06:00):
have the testing piece. This testing piece is absolutely critical, and actually the most difficult part: it's where you test the ideas against the problems, and you need to have a way to experiment, right? And this is why so many
startups are actually spinning up labs like material science labs
or bio labs, where you can actually build molecules and

(06:22):
test them against living tissue, right? And you have to be able to do this, because otherwise you can't know whether or not the idea worked. In some digital cases you could do something like A/B testing, and you could say, yes, this is good enough to say this actually worked. But in a lot of cases it's
hard science, it's hard reality. You actually have to have

(06:44):
a lab to do that. But what you do once
you have all these components, the ideas, the problems, the
idea combination engine, and then the experimentation engine, you now just run through this. You iterate through it. So we
have taken the human system of trying these different things,
and we've sort of broken it into its components of

(07:06):
the scientific method. And we are scaling it with AI,
with crowdsourcing and automation, you know, using pure tech to
scale the crap out of an already awesome human process.
And keep in mind, this is not just for like
a new type of keyboard or a better car battery

(07:28):
or something like that. The list of problems could be
anything from like marketing campaigns to figuring out better ways
to connect with kids who need to learn math or whatever.
We could put all of humanity's problems into these problem buckets, right?
And as we get better and better ways to test them,
we accelerate, right? We accelerate this entire process of automating

(07:53):
the scientific method. So this ends up being an algorithm
for solving general problems and testing them. And instead of
doing it at the scale of like the few universities
that we have and the few researchers that we have,
we now can do this at AI scale. And with

(08:15):
the bottleneck really only being, you know, how much testing
we actually need to do in the real world. And I'm just really excited about this because, I mean, we're talking about, I don't know, 5x, 10x, 100x, 1000x, a million x, whatever, our current iterations,

(08:35):
our current, you know, attempts on goal for doing the
scientific method, but just scaling that to an insane level.
So I don't think this system is actually needed for
AGI or ASI, to be clear. But this chart here
I think, shows how it is actually just a continuum

(08:56):
going from bottom to top. So you go from the
bottom subhuman level of general intelligence or cognitive capability. You
move up through AGI and then into ASI at the top.
But I do think a system like this that we've
talked about is a way to actually make the transition
from where we are into AGI and then beyond into ASI. Now,

(09:19):
my current guess, as I've sort of captured here in
this chart for AGI is 2027. And I think that's
going to instantiate as a true knowledge worker replacement agent
that actually you just hire as a company. It comes in,
it basically logs in and starts doing onboarding. It reads the Slack messages, it reads Confluence and Google Docs, and

(09:43):
basically onboards like a regular employee. And our first instance of AGI will be a commercial product like that. And again, I think that's going to be around 2027.
My original range that I gave in 2023 was 2025 to 2028. So I'm, you know, well within those bounds.

(10:05):
And then for ASI, I have a lot less strong
of an intuition, but I'm guessing 2028 to 2030 for ASI.
And hopefully this has been a helpful, cool way to sort of think about this scientific method algorithm. And we'll
see you next time. Unsupervised Learning is produced on Hindenburg

(10:30):
Pro using an SM7B microphone. A video version of the podcast is available on the Unsupervised Learning YouTube channel, and the text version with full links and notes is available at danielmiessler.com/newsletter. We'll see you next time.