Episode Transcript
Available transcripts are automatically generated. Complete accuracy is not guaranteed.
Speaker 1 (00:00):
Hey guys, Saagar and Krystal here.
Speaker 2 (00:01):
Independent media just played a truly massive role in this election,
and we are so excited about what that means for
the future of the show.
Speaker 1 (00:08):
This is the only place where you can find honest
perspectives from the left and the right that simply do
not exist anywhere else.
Speaker 2 (00:14):
So if that is something that's important to you, please
go to Breakingpoints dot com. Become a member today and
you'll get access to our full shows, unedited, ad free,
and all put together for you every morning in your inbox.
Speaker 1 (00:25):
We need your help to build the future of independent
news media and we hope to see you at Breakingpoints
dot com. We can go to Venezuela. There's a lot
actually going on there. Let's go and put D1
up here on the screen. This is important from Secretary
Marco Rubio. There had been a lull recently in Venezuela,
and basically, from what I know, here's what
(00:47):
I can share. Rubio is still hell-bent on regime
change now. What he did very recently was smart for
his plan to try and do this, because Trump has
shifted current strikes away from the Caribbean and to the
Eastern Pacific. What he has done is they are now
going to designate the so-called Cartel de los Soles
as a foreign terrorist organization, headed, as he says, by
(01:09):
the illegitimate Nicolás Maduro. The group has corrupted the institutions
of the Venezuelan government and is responsible for terrorist violence conducted
by and with other designated FTOs, and for trafficking drugs into
the United States and Europe. This is an important
legal designation for potential strikes on the regime itself, because
of a twenty twenty indictment which alleges that Maduro is
(01:29):
the head of Cartel de los Soles. So by designating
it an FTO, it gives them the potential legal authority
to strike Maduro himself and to have regime change. Now.
At the same time, let's be clear, while Rubio and
all of them are pushing this, Trump is also being
presented with overtures from Maduro, who wants to negotiate with him.
(01:51):
And here's what he said when he was confronted about
potential talks with Maduro just yesterday in the Oval Office.
Take a listen.
Speaker 3 (01:57):
In these talks with Maduro, is there anything that he
could say or do that would allow you to
feel like he could stay with your support? Is there
anything that he could say that you would be okay with?
Speaker 4 (02:11):
You can say it's hard to say that, you know.
The question is a little bit tricky; I don't think
it was meant to be tricky. It's just, look,
he's done tremendous damage to our country, primarily because of drugs,
but really because we have that problem with other countries too,
but more than any other country. The release of prisoners
(02:31):
into our country has been a disaster. He's emptying his jails.
Others have done that also. He has not been good
to the United States. So we'll see what happens. At
a certain period of time, I'll be talking to...
Speaker 1 (02:47):
So it's always like, well, you know, maybe you will talk
to him. Maduro has done tremendous damage, but I will
be talking to him. Trump at this point is completely torn,
and this is the Rubio problem that I've been flagging
from the beginning, because he is the National Security Advisor
and he's the Secretary of State, which means he totally
controls the interagency process
(03:07):
and the stuff that gets to Trump and the President's desk.
In terms of the level of options presented, there has
never been that level of control before, except for Henry
Kissinger under Richard Nixon and Gerald Ford, and Ford eventually
got fed up with that. Now, the reason why this is
important is that there's a lot of stuff happening behind
the scenes: Maduro has tried to make overtures
to Trump in the first place, but he doesn't trust
(03:30):
Rubio to relay any of his messages, so he has
to try to go through outside channels. Yeah, he's totally
right to do that, but what he's trying to do
is go through outside channels to reach Donald Trump. Now
they have this interlocutor in Ric Grenell, who I believe
is one of the best members of the administration. But
the problem is that with the interagency warfare, you
just never know how this is all going to play out,
(03:51):
and what's in Trump's own mind. At the end of the day,
the most convincing argument that's been made to Trump is: guys,
you don't have a plan, and this is going to
be just like Libya. But with every step that moves forward,
from the FTO designation to the strikes and more, eventually
your hand may be forced. And who knows what Maduro
is going to do. Let's give the guy some credit. I'm not
a Maduro fan, just so everybody knows, but we can rationally
(04:14):
view it. He's got, you know, the biggest military force in
the Caribbean since the fucking Cuban Missile Crisis out there. They've
got strikes in the Caribbean. You've got the US government
with a bounty on your head. I mean, a less
rational person would go berserk and would declare war. He's
not doing that because he just wants to survive. He
doesn't care about Russia, he doesn't care about China. He
(04:35):
wants to sell us oil and gold and just tell
us to go away. Like at the end of the day,
he wants to die in his own bed, rich, happy
and unafraid. That's what North Korea wants. That's what most
of these people that we're told are so spooky and
horrible want. They're just pretty rational actors at the end
of the day. So when it comes to negotiation, the
deal on the table is a good one. He's willing
(04:56):
to sell us a shit ton of oil and gold
and minerals and specifically to deprioritize Russia and China. If
that's what we want, why would we not take that?
But Trump has got all this stuff in his ear
from Rubio and others, and they're making all kinds of
BS arguments. The latest one is that if we overthrow Maduro,
then the new government will be even more friendly to
(05:17):
us in terms of oil and gold, which makes
no sense, because why would you have to overthrow somebody
when you've already got a good deal on the
table? So, you know, there's also the Venezuelan population at
this point. This is the other thing I've learned. Everyone's like, oh, you know,
you're a Maduro simp or whatever. I'm not saying
(05:38):
he's popular, but in this instance, any leader, like, let's
say, María Machado, who's being directly propped up by the
United States of America and is seen as endorsing strikes
on her own countrymen, what legitimate
power are you gonna have with those people? Exactly. They're
(05:58):
gonna say, you're a traitor, right? You're a tool
of the United States. And yeah, I don't like Maduro,
but I don't want some CIA-backed, literally USAID-backed, leader
to come in and usurp my country. There are elements
of nationalism here, which are very important. The dynamics are
still very scary and they're very dangerous. So yeah, yeah,
things are not good right now.
Speaker 2 (06:19):
Well, let's check in with the New York Times opinion
section to see what sort of rational analysis they're
offering there. This is the next element. We've got a
real banger here from Bret Stephens: "The Case for Overthrowing Maduro."
Let me just read you a portion of this, just
to show you how stupid it is. So the first
argument he makes, he says, is: let's take it point
by point. Is there a vital American interest at stake?
(06:40):
And he says there is, and it's not just the
one the administration keeps talking about, drugs, which, as we've talked
a lot about, are not actually a significant
contributor to the drug problem we have
Speaker 5 (06:51):
In the US.
Speaker 2 (06:51):
But he goes on to say the larger challenge posed
by Maduro's regime is that it is both an importer
and exporter of instability. An importer, because the regime's close
economic and strategic ties to China, Russia, and Iran give
America's enemies a significant foothold in the Americas, one that Tehran
reportedly could use for the production of kamikaze drones. So
(07:13):
we've got the whole axis-of-evil-type language. Ooh,
the bad guys like them, so we can't have that.
An exporter, because the regime's catastrophic misgovernance has generated a
mass exodus of refugees and migrants, nearly eight million so far,
with ruinous results around the hemisphere. Both trends will continue
for as long as the regime remains in power. Now,
you might also add that the sanctions
(07:37):
that this administration and others have levied against this regime
have also contributed to that economic collapse and migration. You
might also ask yourself the question: well, what happens if
this administration, the Maduro administration, falls? Do you think
that's just going to be, you know, totally fine, and
then everyone's going to want to stay home in Venezuela?
(07:57):
The much more likely outcome is Libya. The much more
likely outcome is that you have effectively a failed state
where things get even more catastrophically violent, unstable, economically disastrous,
and you have even more people who leave their own
home country, which is a very difficult thing for those
individuals and obviously destabilizing for the countries in the region,
(08:17):
and something that, you know, Republicans have been very against,
accepting any of these migrants who come to our borders,
at least under the Trump administration two point zero. So
it's just completely idiotic. He doesn't think it through at all. Okay, so
what happens if we do intervene and we take out Maduro?
How is that going to go for people? Based on
our extensive experience just over the past, let's say, twenty
(08:41):
years in how that has gone when we've gone in
and tried to do a regime change operation? You tell me,
has that led to peace and stability? Has that led
to a lessening of chaos? In every single instance, the
answer is absolutely not. It has been a complete and utter disaster,
first and foremost for the people of those countries.
Speaker 1 (08:57):
Yeah. I mean, look, the arguments fall apart under any
basic scrutiny, but they still exist. That's the problem. I mean,
put the next one up on the screen, please. We
still have an immense amount of military assets around
Venezuela, all of these, you know, for potential strikes.
You've got the aircraft carrier that just made its way over,
(09:19):
You've got multiple US warships. You've got drones and planes
flying around in the Caribbean and the Eastern Pacific, just
striking random boats with the legal justification that
you're talking about. Let's go to D6 or D5,
please, just to show you all: some military personnel
are now seeking legal advice on whether these missions are
even lawful. Apparently all of this relies on some secret
(09:42):
memo which is inside of the White House Counsel's office,
where they've determined the legality of the strikes. If
you're too young to remember, this is what the
Bush administration was like, and, yeah, you remember, there are
actually some great, what do they call them, Frontline
documentaries on PBS. If you guys are interested, you can
go watch them on YouTube. I
(10:02):
think there are these great Frontline documentaries specifically about these secret memos,
like with torture, or warrantless spying with the NSA, where,
you know, they go to some guy's hospital room to
try and get him to reauthorize spying. And I mean,
the drama of it all usually comes back to the
military personnel who were carrying out
(10:23):
these strikes, you know what they're afraid of. They're like, yo,
we could be prosecuted if somebody ever comes after us.
And by the way, I don't think that they should
be, because the people who are making the decisions are
ordering them to do it. What are they supposed to do? You know,
they're caught between a rock and a hard place. The President,
the SecDef, is telling them, go do something. The SecDef
and the President are the ones who are trying to
cover themselves with all of these legal memos and then
(10:44):
briefing them to Congress. But even Congress is not buying
a lot of their BS on this issue. So, look,
all of this. By the way, there was even supposed
to be a dissent vote. Remember, there was actually a
potential War Powers Resolution vote, which they barely won in
the US Senate. The only reason that they won is
because the administration was leaking that actually they had backed
(11:06):
off of military options when it came to Venezuela. But
as you can see with the designation, they wanted to
keep the optionality alive. And of course that's where Congress
just completely abdicates its responsibilities too, the Senate included.
Speaker 2 (11:18):
So yeah, the whole thing is bad; why would you trust these
people at all, you know? And on the service
members seeking outside counsel, it's pretty interesting if you listen
to or read this piece.
Speaker 5 (11:28):
So they say it's mostly...
Speaker 2 (11:30):
Not the more junior level; it's mostly the higher level,
because the junior-level service members trust the process.
They're like, if they're telling me to do this, I'm
sure it's gone through the proper channels, it must be lawful.
But the more senior people are like, I've been around
the block; I don't trust these people. Also,
by the way, they dismissed a bunch of the JAGs,
(11:51):
the internal lawyers that they thought would go against
some of the more aggressive and illegal things that they're doing.
So they're the ones, predominantly, who are seeking this outside counsel.
And to your point about those secret memos that they've
used for the legal justification, I think it was the
New York Times that did some reporting on that.
They were able to see those memos, or learn
(12:13):
about those memos, and what they found is that they
rely not on the actual facts of, you know, the
realities of the drug trade and the minimal involvement, the
lack of direct involvement, of Maduro, whatever. They rely on
the President's and other administration officials' statements, their characterization
of Maduro and drug trafficking. So they're not even using
(12:38):
their own analysis.
Speaker 5 (12:40):
They're using just.
Speaker 2 (12:41):
The lies and the statements of like Trump as their
justification for these unbelievably like insane actions. And last one
this is I guess maybe a little heartening could put
this up on the screen. Wasn't sure how the American
people would feel if they would just buy the writer
at cook line and thinker of like, we're getting the
bad guys, let's go. But only twenty nine percent of
Americans support the US military killing these what they describe
(13:06):
as drug suspects. You know, I mean, some of them
may be, some of them may not be. We don't really know;
we've been offered no evidence of any such thing. But you have
a majority who oppose it, just twenty nine percent who
say they actively support it, and the rest unsure.
So, you know, you've got to give some credit to the
American people here: they're not buying
all this bullshit.
Speaker 1 (13:25):
No, they're not buying it. But it's one of those
things where, does it even matter at this point? You know,
your own Congress basically abdicates responsibility, and then the administration
is just making up shit, you know, inside. That's, you
know, the scary part about the government
that a lot of people don't remember. And
you know, forgive the tangent, but do you remember the
Anwar al-Awlaki strike? Barack Obama and his White
(13:49):
House were like, well, this guy's an American citizen, we
want to kill him abroad. Well, technically he needs due process,
So they just invented this entire fake legal court. The
executive branch technically gave him due process, and then the
White House itself goes, yeah, we're allowed to kill him now.
And then we killed him, and we killed his son
as well as a bunch of other people in Yemen.
(14:09):
They can just do whatever they want. I mean, you know,
they can dress it up if they want. But at
the end of the day, when you start to
really break all this stuff down, this is why who
you elect matters. And not even just who you elect;
people need to remember that on foreign policy
we effectively have a king, especially in the War
on Terror era. These congressmen, the reason why
(14:32):
they don't dissent from the administration is that, even if
they don't trust it, if things go bad, they don't
want their hands dirty. Because Iraq took so many of
these congressmen out in their races, they never want
to actually vote on anything foreign policy related, in the
event it could ever come back to bite them politically.
(14:54):
It's obviously cowardice, but, you know, that's the unfortunate reality of
where we are right now. Okay, let's move to property taxes.
On the heels of Texas voters deciding to embrace lower
property taxes and lower school taxes for boomers, Florida has
decided to try and one-up them. Governor Ron DeSantis is
(15:15):
now fully embracing erasing all property taxes for so called
homestead owners in the state of Florida, with one of
the oldest populations in the entire United States. Here is
the governor's argument.
Speaker 6 (15:27):
On the property tax situation: it's very important, given how
that's pinched so many homeowners, particularly our senior citizens, who
have their homes paid off. They bought it thirty
years ago for a certain amount, and now they're being told
it's worth so much more and they have to pony
up more and more money. It's almost like they have
to pay rent to the government just to be able
to enjoy their property, and that's wrong and we need
(15:48):
to do something about it.
Speaker 1 (15:50):
We are going to do something about it. Okay, yeah, yeah,
you're right. By the way, ever since I've engaged in
a jihad against this, Ron DeSantis is now on Twitter
implying that I'm being paid as an actor to
dissent. Who's paying me? Miami-Dade County? The School
District of Miami is like, please, sir, will you please
(16:11):
post on behalf of the children of Florida? I'm doing
this out of the goodness of my own heart, okay,
because I care about the young people in the state
of Florida. And people like DeSantis and Greg Abbott love
to go around and say, oh, look at all these
families that are moving here to Florida. And at the
same time, what's happening? They're enacting policy which will be
a massive giveaway to the elderly. Now, their argument is, actually,
(16:34):
we already cap property tax for old people, so this
would be directly helpful to the young. Here's the deal, guys:
it doesn't take a genius to do just a tiny
little bit of research into where all these property
tax dollars go. Let's go ahead and put
E4 up here on the screen, please. Shall
we deal with the world of numbers and actual funding?
(16:54):
So here we go. This is from the Florida Policy Institute:
eighteen point five billion dollars is currently at stake for
Florida counties. Now, the actual question here is about this
so-called homestead property tax revenue. That's seven point
eight billion dollars for counties and seven point seven billion dollars for
school districts. Let's go through the counties where the homestead
(17:15):
property tax revenue is the greatest share of the total
government revenue. You've got multiple counties, including Nassau, St. Johns,
St. Lucie, and Miami-Dade, that would lose the most revenue in
dollars from an exemption of homesteads from the property tax.
Their basic argument is that doing away with this
will shift the tax burden to the snowbirds and
(17:35):
the second home owners, and this will make it so
the quote young families will not have to pay. Here's
the deal. Ultimately, property tax is literally one of the
least distortionary taxes that exist in the United States. Second,
you know how libertarians are always like, taxation is theft?
Guys, property tax and land tax, and specifically a
general property tax, have been around since before the Republic
(17:57):
was even founded. It's income tax, capital gains tax, and
all these other taxes which are the more novel inventions. Property
tax is the most American tax that exists; it goes
all the way back, again, to even before the founding.
The reason why is, and again, this is even a
Republican argument if you think about it: when we pay
income tax, where's that shit going? Do we really know?
(18:18):
Venezuela regime change, Israel weapons, you know, some bullshit, somebody else, boomers, Medicare,
Social Security, something like that. When you pay property tax,
you literally know where it is going: into your locality,
your community, and schools. This entire thing is being sold
as a way to shift the tax burden off of
(18:39):
young people. But you're going to lose eighteen and a
half billion dollars in the state of Florida. And again,
if you break it down, there are multiple counties where
the homestead property tax specifically makes up some fifty one
percent of the overall dollars that flow in, and as
I said, almost half of the dollars go towards the
school districts. So who is going to pay for the damn schools?
Do you know who's going to pay for the damn schools?
(19:00):
Everybody else? Now they're saying, well, of course we can't
raise income tax. No one's saying that you're gonna raising
compests because you don't have one. So what else do
you have? You only have two options in a state
like Florida. You're gonna have to rail safe sales tax
and you're gonna have to bilk everybody who tries to
come down to go to Disney World. Have you ever
stayed in a Florida hotel? You know, if you have,
you know, you can look at your bill and there
(19:20):
literally is like an entire section of your bill where
it's a non Florida resident tax. That's how they fund
most of their state. So they're actually trying to pass
the burden off to all the tourists who come down
to their state: a family that wants to go to Disney
World or Orlando, Harry Potter or whatever, people passing through
their airport, cruises. They're bilking everybody else to try and
(19:42):
pay for their stuff. But, you know, fine, that's voluntary;
people voluntarily go down there at the end of the day.
One of the reasons why
property tax in particular is important, and this is not popular,
I'll just admit it straight up, is, yes, the seniors.
What they say is that seniors on quote unquote fixed income
should not have to continue to pay into the system.
(20:05):
They shouldn't; they've paid their dues, so they shouldn't have
to pay the school tax. Now, again, just think
about that: the older silent generation paid for the boomer
children's school taxes whenever they were going to school. But second,
we all live in a collective society. We pay for
their Medicare, for their dialysis, for their Social Security checks,
which, by the way, pay out vastly more than they
(20:25):
ever paid into the system. And in return, we expect
that they also pay into let's say, property taxes, school taxes,
for the societal compact that we're all in this together.
Oh, and by the way, nine one one, ambulance
services, city services, and others. Anyone want to tell me
which age group vastly consumes a lot of those public services?
Right, right, of course. So at the end of the day,
(20:46):
somebody has got to pay. Someone has to pay for cops,
for ambulances, for schools, which is what the vast portion
of a lot of this is. If you get rid
of property tax, the only way to do that
is to shift to a highly regressive sales tax and
to a tourist tax, which, look, maybe that will work
in Florida, and ultimately this is up to Florida residents.
(21:07):
You guys decide your own destiny, and Texans, you know,
my own home state, have already decided to do this.
But I think this is highly dangerous, because this is
now going to catch on in every red state in
the country. Many of these red states are places where
people who are fleeing New York City, California, or wherever
are coming for a more affordable way of life. This
(21:30):
is being sold as, oh, it's going to reduce your
overall tax burden. Again, it will come back to bite
you no matter which way you look at it. It's
coming for you no matter what; somebody has got
to pay for this ultimately. What this would do, as
a net result, is push the cost off to other
people, in the form of a sales tax,
(21:52):
it locks people into housing stock. One of the most
common arguments from a lot of these people is like, oh, well,
you know, I bought this house in nineteen
seventy five, and since then, poor me, it's gone up
to two point five million dollars, and now I can't
afford the property tax payment. And for some reason
we're supposed to feel sympathy only for the senior.
How many people do you and I know
(22:14):
who are young, whose rent has gone up by twenty
five or thirty percent? Does anybody do anything for that person? Right?
Does anybody care? Hey, what does Ben Shapiro say? Move?
Speaker 4 (22:24):
Oh?
Speaker 1 (22:24):
Sorry, that's the free market. And by the way, they
don't even have equity in their rental property. These people
are sitting on millions of dollars of unrealized gains, not
to mention their booming stock portfolios. So I'm supposed to
feel deep sympathy because you're sitting on a multimillion
dollar property, which you can sell, by the way, if
you wanted to, because you can't keep up with
(22:44):
the burden? And then, finally, fixed income. This thing is bullshit,
the idea of quote seniors on fixed income. They get
Social Security, which is literally pegged to inflation. Now, I'm not
saying that the COLA, the cost of living adjustment, is
true inflation or any of that. But does anybody else in
the world get automatic adjustments to their income? For a
(23:07):
young person whose rent went up by thirty percent, they
don't get their requisite, you know, seven or eight percent or
whatever automatic pay bump. They probably get either a decrease, or
they have to do the humiliating ritual, you know, in
a big corporation, of being like, hey, yeah, I know
it's my review time, but I would really like to
get an X percent increase in my salary, and they're like, oh, sorry,
actually we can't do it this year. And what do
(23:28):
you do? You just have to eat it. So everybody
else is stuck with personal responsibility, but supposedly this whole
fixed income thing, even though it's not fixed, it's literally
inflation adjusted, and now tax free: as a result of
the big beautiful bill, eighty eight percent of Social Security
recipients will not pay a single dollar in federal income
tax on what they get from the government. So you
put all this together: who are you prioritizing in Florida?
(23:52):
It's obvious, and he even said it in the video:
our seniors. Now he's trying to sell it on Twitter
as, actually, because the young have to pay more
in property taxes, since we cap it for old people,
we're really helping them. No. At the end of the day,
those are the people who need those services, like schools
in particular. You will be paying for it no matter what;
you're just going to be paying for it in
(24:13):
a different way. Except now there's no market pressure to
turn over housing stock, which, yes, I know sounds mean,
but I'm sorry. In a healthy society, what happens is that
housing stock of four or five bedrooms, and, okay,
six bedrooms, you're just super rich, but let's
say three or four bedrooms, something like that, yes, that's normally
supposed to turn over. Downsizing is a part of the
(24:36):
American story, and I mean not just the American story.
If you go throughout the entire world, the idea that
you have people in their eighties living in
three, four, five thousand square foot houses, you are nuts,
out of your mind. Multigenerational housing is the way
the rest of the world does it, for a reason:
so that people can look out for each other and
(24:56):
be close to one another. You know, this whole thing
is built on this idea that you have a
right to keep this massively appreciating asset, which through all
of human history usually turns over, either within the family or not,
and that you should get breaks on it instead
of paying for the future generation. So that's the end
of my rant. But I mean, I've
(25:18):
never seen anything like it. Weed is the only issue
that's even comparable in the amount of hate that I
get on top of this. And so that's how I
know that I'm over the target. That's how I know
I'm over the target. I was right on weed. I
get more texts from people by the year of, oh
my god, I can't believe I used to make fun
of you; you were totally correct. You'll all be back
here whenever your sales tax spikes or your schools completely
(25:42):
turn to shit. You're going to be back and tell
me about why you wish you wouldn't have voted
for this. But I know everybody will, because everybody says, oh,
I just want to pay less taxes. You're gonna pay
for it. You're gonna pay for it somehow.
Speaker 2 (25:53):
Well, and that is the other option: that instead
of paying for it through a highly regressive sales tax,
they just cut the school budgets. And I mean
there is a long, and DeSantis is part of this,
a long, like, conservative war on public education. They love
to demonize teachers. You know, that's part of what
the, like, teacher strike wave was all about:
the massive cuts that were coming to education and
(26:14):
to teacher pay, et cetera. And so you know, it
sort of fits with this broader ideological project.
Speaker 5 (26:20):
For me personally, I'd.
Speaker 2 (26:21):
Be okay with reducing property taxes if you're replacing it
with something that is more progressive and more based on
your class. That would be a progressive income tax. But
in Florida, since there is no income tax, that's obviously.
Speaker 1 (26:32):
Right, that's not what they have on the table.
Speaker 2 (26:34):
So what you're going to be replacing it with
is something that is deeply regressive. And the
logic of this is very bizarre to me:
the fact that you own this asset that
appreciated and has a higher value, now that's a negative,
and that's something that we need to, like, you know,
orient our political system around, protecting people who live in
(26:56):
homes and own homes, number one, which is a very,
like, privileged set of society at this point, and who
have had, you know, the good fortune to
see that asset appreciate over time. So yeah, it fits
with this broader conservative ideological project, so I think it'll
continue to be a trend. I did want to read
Ron DeSantis's, like, subtweet of you.
Speaker 1 (27:16):
Yeah, just so people know. You go ahead; yeah, we'll
put it up in post-production.
Speaker 2 (27:19):
Saagar had said: let me be one hundred percent clear,
ending property taxes in Florida is a massive giveaway
to the elderly. It's an attempt by the worst
generation to pass any and all expenses of living in
a society to the young, while they get free healthcare and
inflation-adjusted free income. And by the way, before you argue
with that: I fully support Social Security, fully support Medicare. But
I do think that level of care and safety net
(27:40):
and concern should not just be exclusively for boomers. This
other lady quote-tweeted you and was like, this is
the most ridiculous take I've ever seen.
Speaker 5 (27:49):
And then Ron DeSantis chimed in and said,
Speaker 2 (27:51):
Isn't it a bit odd that the same weak arguments
are all of a sudden circulating at the same time?
Speaker 5 (27:57):
Gee, I wonder why. Must be a coincidence.
Speaker 2 (28:01):
He's implying that you're being paid to have this take and that you're, you know, just some sort of, like, bot out there who's pushing this narrative.
Speaker 5 (28:09):
But to me it was an indication.
Speaker 1 (28:11):
Oh, I'm over the target.
Speaker 5 (28:12):
Not only over the target, but you've been influential.
Speaker 2 (28:15):
Other people must be chiming in if he feels like
there's a whole army of this quote unquote weak argument
circulating at the same time.
Speaker 1 (28:22):
It's circulating because, and look, I'll be on the Charlie Kirk Show later today to lay out the same thing about property tax. The reason why this is catching on is, for years they've just been able to chant the brain-dead mantra, all taxes are bad. Well, look, guys, here's the deal. You are literally targeting the least distortionary type of tax, one of the oldest taxes in American history, Lindy,
(28:45):
in terms of our society. If you care about your community, these are literally the only type that's actually going to help what you're doing; it's helping raise the next generation. And it's literally a gambit by the elderly and the homeownership class, which is largely elderly. Do you want to know what the median home buyer in Florida is? Sixty years old. The median is sixty years old.
(29:08):
Who's living in the state of Florida? Who are, again, getting free health care, free Social Security from the government and then the state? Oh, and every time there's a hurricane? Yeah, we're talking about the federal taxpayer. Where's that going? Huh? You guys are literally an uninsurable state. Who do you think is backing that up? FEMA, which basically exists for the state of Florida. That's fine, you guys are part of the country. We send some dollars there too. It's cool.
(29:30):
We all live in a society. I accept my role, even though I don't live in Florida, that we often have to bail out the state of Florida. I'm actually okay with that. But they're not okay, at the end of the day, with trying to fund basic social services, especially in these counties. I mean, Miami-Dade County, as I just read there, is one of those counties that massively relies specifically on this homestead property tax to fund
(29:54):
their schools. That's who you're attacking. And then they always are bragging about how great their schools are, Don't Say Gay and all that. That's fine with me, all right, cool, all right, again. But if you're going to massively try and slash all of this, what's going to happen? You're either going to slash it, or you're gonna have to move the burden again, literally, onto a highly regressive sales tax,
or onto tourists, who are already getting bilked in your
(30:16):
own home state by the Disney Corporation, by the cruise industry and all these other people. I just think it's, you know, it's deeply unfair. And this stuff is catching on because we're in the age of the affordability revolution. Let's think about the Republicans and the Ben Shapiros of the world and the DeSantises of the world. What do they
say to New York City renters when they say they
(30:38):
want to freeze the rent? Communism, socialism. Right? It's the same argument on property tax: it's unfair how much my house has appreciated in value, and I shouldn't have to pay based upon that value. He likens the transaction of purchasing a home to purchasing a television. That is the IQ of the argument which is being made here.
(31:01):
E2, please. Let's take a listen.
Speaker 6 (31:03):
You should own your property free and clear. I think to say that someone that's been in their house for thirty-five years just has to keep ponying up money, that, you know, that is not... you don't own your home, if that's the case. So yes, of course I'd like to see people be able to own it, free and clear. And it's interesting, because it's like, you know, if I go to Best Buy
(31:26):
and buy a flat-screen TV and put it on the wall, I've got to pay a sales tax on it, right? But I don't keep paying tax on it every year. I mean, that's just not how we do things. It's like, okay, if you're going to tax something, you tax it at the transaction, and then let people actually enjoy their private property, free and clear of the government. So that, I think, is the vision,
(31:47):
that's the philosophical insight.
Speaker 1 (31:50):
Well, here's the stupidity that he just laid out there. If you want to take the meatball seriously on that, let's tax it at the time of transaction. Okay, everybody: a one-hundred-thousand-dollar surcharge on top of their house. That's what he's saying there. Oh, okay, so in perpetuity. Let's say average home ownership in Florida or in the US, that's like eight years. You'd have to front-load eight years of property tax in
(32:12):
your transaction. Do we all want to sign up for that?
That sounds really great for a new homeowner. Oh, actually, part of my closing costs just went from, you know, X thousands to multiples of X thousands if it's not rolled into my mortgage payment. Is that fair? Is that so terrific? You're gonna pay for it. That's what people just don't seem to get. They think we can just cut this here, you can live in a state
(32:32):
with no income tax and you can just magically have a school district. You're either gonna have a shit school district or you're gonna move to a high sales tax. By the way, a high sales tax is not only regressive on the poor, it's also regressive on the young, when you look at consumption data. So it's, at the end of the.
Speaker 2 (32:47):
Day. You'll just pay for, like, your private firefighter service.
Speaker 1 (32:50):
Look, even in these high-tax states like California, you still end up with places with things like that because of the distortion of that property tax measure that they passed, where people who have owned their homes for quite a long time get their property tax capped, and
(33:10):
it's not pegged to the overall market rate, which means that there's no turnover in the supply, and it is the least affordable housing in the entire United States. You can't be pro-affordability, pro-family and then also be anti-property tax. It's just, it's a completely incongruous position. It's also not a surprise to me
that fucking Dave Rubin and all these other filthy rich,
(33:33):
you know, influencers down there are against it. Yeah. Why do you think? Uh, when you own a multi-million-dollar property in Florida and that's the only fucking tax that you actually have to pay? Oh, I'm sure it's a very principled position, you know, that these people all take. So at the very least, yeah, if you want to, if you want to say, I'm arguing in my own economic interest, all right, you know. It's, who's
(33:53):
paying me. Nobody is paying me. Look, if you want to actually look out for the future, and I believed in this: in the Sun Belt and everything, people were moving to Texas, Florida, Georgia, from all across the nation. They wanted more space, they wanted more quality of life, and they wanted a little bit less of being told what to do by the, you know, big blue authorities.
(34:14):
I have no issue with that. In fact, i'm supportive
of it. I think it's cool. But I think it's
really disgusting because what they're doing is basically putting the
interests of these elderly people over all of these new
young people who are moving into the states to pursue
their own American dream. It's like a bait and switch,
if you ask me. And so I think that's really sick.
And ultimately, people, you know, look, I already know everyone's
(34:36):
going to support this and going to vote for it, and they're going to find out the hard way at the end of the day why this has always been one of the oldest funding mechanisms, you know, since the very foundations of the republic. And, you know, it sounds fun and sexy and all of that, and
ultimately the net result will be much worse public services,
worse schools. You know, if everybody wants, if you want
(34:58):
to homeschool, I think that's fine, but not everybody does. You know, some people actually want to send their children to public school. I went to a Texas public high school. There's nothing wrong with it. And, you know, it's one of those where, broadly, it was long seen as something that, you know, you could be proud of, even in a red state where you have school choice and everything, and even now they're trying to do away with it.
(35:19):
So I think it's sick. We've got a great guest
standing by, let's get to it.
Speaker 2 (35:26):
So, as you guys have probably noticed, we've been talking
a whole lot about AI on this show and how
it will impact all of our lives, how it's already
impacting all of our lives. So we're very excited to
have a guest today who knows a whole hell of
a lot more about it than we do. Daniel Kokotajlo is formerly with OpenAI. He is now the executive director of the AI Futures Project and co-authored AI twenty twenty seven, a
(35:47):
paper that tries to sketch out, as best as they can, their predictions of the way AI development is heading.
And spoiler alert here there are some very troubling warnings
contained within this report. So, Daniel, welcome, great to have.
Speaker 7 (35:59):
Thank you, thanks for having me. Excited to talk.
Speaker 5 (36:02):
Yeah, of course.
Speaker 2 (36:03):
So before we jump into the report and your sort
of projections of the future, I'd love for you to level-set on where AI development is today in terms
of the sort of overall landscape and trajectory of where
you expect things to go.
Speaker 8 (36:17):
So probably your audience has heard of ChatGPT and various other AIs like that. They've sort of exploded onto
the scene in the last few years because they've finally
gotten sufficiently capable that they are somewhat useful in real
life for a wide.
Speaker 7 (36:34):
Range of tasks.
Speaker 8 (36:37):
They're getting more capable rapidly, and we can get into
the details about why that is if you're interested, But
the point is that progress is fast and the AIs are rapidly becoming more capable. As a result, several large tech companies, most notably OpenAI, but also Anthropic and Google, xAI, Meta, have explicitly set the goal of getting
(37:01):
to superintelligence in the near future. Superintelligence means an AI system which is better than the best humans at everything,
while also being faster and cheaper.
Speaker 1 (37:11):
Daniel, then diving a little bit into your work: what you have warned about with this superintelligence is the jump-off point that your project, AI twenty twenty seven, or perhaps twenty twenty eight, has led to. What you're now saying is that as these AIs begin to train each other, the exponential growth and the potential kind of apocalyptic scenario may soon be upon us. Can
(37:32):
you describe some of your own work and your research.
Speaker 8 (37:34):
In that? Yes. And before I do, I want to again set the context here. So, you know, I was working at OpenAI, and people at OpenAI and at Anthropic and at Google were talking about what it would be like if and when we finally got to superintelligence.
This is something that these companies take seriously as a possibility.
(37:55):
It's literally what they're aiming to do, and it's wild, it's crazy. Like, what if they succeed? You know, OpenAI, their internal projection, which they actually made public recently, is that they will have automated AI research in twenty twenty eight, so the AIs will just be, you know,
(38:17):
self-improving around that time. Anthropic seems to think it's
going to happen sooner, twenty twenty seven. Other companies are, you know, maybe thinking it's a bit later. There's a lot of uncertainty. Nobody, including us, knows exactly when they would succeed at this goal, or even if they would succeed at this goal. However, it seems to
(38:40):
us that, yeah, they might succeed, and they might succeed soon. So we wrote AI twenty twenty seven to illustrate what that would look like if it happened. As for when it's going to happen, there's a lot of uncertainty. It could be twenty twenty seven, could be a bit later. I think right now my median is more like twenty twenty nine, twenty thirty, something like that. That is, fifty percent chance
(39:01):
it happens before then, fifty percent chance it happens after then.
But one way or another, I think that things are
going to get pretty crazy pretty soon, and that's why
we wrote this scenario to sort of illustrate what it
might be like. And you know, we wrote it based
on the sorts of things that people in the industry
talk about. You know that again, the AI is self improving.
(39:23):
That's not just science fiction. It's literally the plan. This
is what the companies are trying to do, is to
get an AI that can automate that research process entirely. Right,
the AI is being potentially misaligned. Well, they're already misaligned now.
They don't always behave in the ways that they're supposed to,
and our means of controlling them is limited right now.
(39:45):
So you know, this is sort of just extrapolating into
the future how this might go. Obviously, the future is
really hard to predict. It's probably not going to go
the way that AI twenty twenty seven says. But we thought it would be helpful to, I guess, get people to start thinking more seriously about this, like people in the companies are thinking about this. You know, you
(40:06):
can go to the cafeteria and ask people, what do you think the future is going to be like? And oftentimes you'll hear stories that aren't that different from AI twenty twenty seven. But the rest of the world sort of isn't really paying attention to this yet.
Speaker 2 (40:20):
Let's talk a little bit about misalignment, and for people
who aren't steeped in this language, basically it means that
the AI is doing things that the humans don't want
it to do.
Speaker 5 (40:30):
And you can tell me.
Speaker 2 (40:31):
If I've got that, you know, I've got that basically correct.
What have you seen as being some of the key
instances that we know about where there's been significant misalignment?
And how have you felt about the response from the
companies whose models are experiencing this misalignment.
Speaker 8 (40:49):
So you've probably heard about Grok MechaHitler, yes, Bing Sydney. Those are sort of the exciting, you know, Twitter-worthy instances of misalignment that are happening, but they're thankfully somewhat rare, and there are less exciting examples that are
(41:12):
more persistent, such as sycophancy.
Speaker 7 (41:14):
I think these days.
Speaker 8 (41:17):
The companies have had some trouble getting the AIs to not suck up to the user, and this is because, well, sucking up to the user works to some extent. Like, it often works to make the user, uh, you know, feel happy and approving, and based on the training process that they're using to train these AIs, uh,
(41:38):
you know, if that sort of thing is reinforced, then it becomes a persistent behavior.
Speaker 4 (41:43):
Right.
Speaker 8 (41:44):
Uh, there's a similar thing called reward hacking, which is happening a lot sometimes, where, these days, they often train the AIs to do lots of math and coding problems, and sometimes it's possible to cheat when you're doing the math and coding problem and, you know, produce some code that isn't really good code, but that nevertheless passes the tests.
(42:05):
And so some of these AIs have learned to cheat on these coding problems. And this, you know, I think probably if you use AI systems frequently, you'll have actually experienced this yourself at least a few times, where they say they've done something and they totally haven't, and then they maybe double down and, you know, try to, like, deflect when you ask them about it.
Speaker 7 (42:27):
Right.
Speaker 8 (42:27):
So, so these are some everyday misalignments that are happening right now, and they're not what the companies want to happen, and not what the companies intended to happen, but they're behaviors that have resulted anyway, because the companies sort of accidentally allowed that to happen and allowed it to be reinforced. Right? One way of putting it is that the behaviors that were selected for in the training
(42:53):
environment were not exactly the behaviors that the company wanted to select for, you know, and so what they got out at the end is not exactly what they wanted. Right? Yeah.
Speaker 1 (43:05):
Go ahead. Well, something I'm really concerned about are some of these recent claims, let's say, from Sam Altman, about suicide and about erotica and the descent into pornography. The basic framework that Sam Altman has put forward on both of these highly sensitive issues, which of course are going to touch billions of people and potentially warp behavior,
(43:26):
is: we have put processes in place. Now, given what you just said about misalignment and your own experience working there, how seriously can we take these promises that they have processes in place to ensure that people don't use these products, you know, to enhance their mental illness, or, God forbid, you know, encourage them to commit suicide,
(43:47):
or, in the case of erotica and pornography, develop deep, like, unhealthy attachments between technology and humans? They're saying, you know, we've built all of these things into place to make sure the tech behaves in a way that we want it to. How seriously should we take some of those claims, given your experience and your research?
Speaker 7 (44:04):
Not very seriously, I would say. I think that the.
Speaker 8 (44:08):
You know, OpenAI says on their website, and they've said internally for a long time, that their strategy for safety is iterative deployment, which means, you know, build the thing, put it out there in the world, have it do a bunch of stuff, and then see how it goes wrong, and then fix the problems after they happen. And I think there's actually a lot going
(44:31):
for this strategy. I think it's sort of historically how humans have made a lot of things safe. Like, a lot of people had to die in car crashes before cars could become as safe as they are today, and a lot of planes crashed before planes became as safe as they are today. But the bottom line is that, you know, as a consumer, you're going to be dealing
(44:51):
with a new AI system that will probably have all sorts of traits and properties that weren't intended, that might cause all sorts of strange effects on the people who work with it that the company didn't anticipate, and then only, you know, months later, when they get the reports of the suicides, will they then do something to fix it. And that's the way it's going to be,
(45:14):
not just at OpenAI, but these other companies too,
because of the intense race dynamics that we're under. And
I think that, you know, someone could say, maybe that's fine,
Maybe that is actually the normal way that we get
used to technology is by sort of having it go
out and do stuff and cause harm, and then we
fix the problems as they come up. However, I think
(45:35):
it's not fine when things get extremely high stakes. When, you know, when the AIs are building the next generation of AI systems, then even a relatively small misalignment or problem could be passed on into the next generation and then into the next generation and so forth, and it could sort of snowball out of control in the manner described in AI twenty twenty seven itself.
Speaker 2 (45:56):
Yeah, it seems to me like there's actually two misalignments to be concerned about. One is the misalignment between the models and the programmers. The other misalignment is between the, you know, tech guys that are developing this stuff and all the rest of us, and humanity, and what we actually would like to see. Because, I mean, I'm even thinking about that example of the sycophancy: like, in a sense it creates a worse product, because it leads to
(46:21):
lying and manipulation, et cetera. But from maybe an Elon Musk perspective, it's not a worse product, because it keeps users engaged for a longer period of time. And when you mentioned this overall race dynamic, I want to use this as a way to get more deeply into AI twenty twenty seven and what you think the trajectory is.
Speaker 5 (46:39):
You know, Elon is an.
Speaker 2 (46:40):
Interesting character because this is someone who was very worried
about AI safety to begin with. And my understanding is, basically, once he realized, well, everybody's off to the races, I guess I've got to put my own product out there and make it anti-woke, because I think that's what's going to be the thing that protects us all. Which I think is completely ludicrous on its face; I can't even believe that he actually thinks that. But that was the logic that had him also participating in this arms race,
(47:03):
which is both with US companies and then also against China and Chinese competitors. So given those are the sort of fundamental dynamics we have going on here, what do you project out? How do we go from this moment
where it's like, okay, I can ask ChatGPT a
question and it's sort.
Speaker 5 (47:20):
Of like a glorified Google search.
Speaker 2 (47:22):
I can get some weird AI slop videos from Sora
or whatever, to this could actually potentially be the end
of human civilization? What is the chain of events that leads you to that conclusion?
Speaker 8 (47:34):
Well, that's a very important question, and the answer to
that question is read AI twenty twenty seven, which is
a fifty page detailed chain of events that leads from
where we are now to that conclusion. And I'm glad you mentioned the two misalignment problems, because that's another important issue that I wanted to talk about.
(47:55):
The way that I would put it is, there's the
loss of control problem, which is, like, how can we
have any humans in control of these ais after they
become super intelligent and embedded in everything and running autonomous
factories and so forth. That's the loss of control problem.
But then there's the concentration of power problem, which is
which humans control them and what are they doing with
(48:17):
the armies of superintelligences that they control right, And I
think both problems are very serious and we're currently not
on track to have solved either of those problems. We
talked a little bit about the misalignment problem, the loss of control problem, already. Briefly, I'll say about the concentration of power problem: the industry inherently has returns to scale.
(48:39):
It's just in the shape of the technology. The best AIs are going to be trained on the biggest data centers. Generally speaking, you're not going to have mom-and-pop shops or, like, hackers in their basement building, you know, better AIs than the giant tech companies, because of how much returns to scale there are in AI training. And moreover,
(49:00):
it is like software in a way that makes it more winner-takes-all. Right? Like, even though you can make your own Facebook clone relatively easily, you can't recruit people to be part of your Facebook clone so easily, because everyone's on Facebook or whatever. Like, it sort of is hard to compete in that way.
(49:23):
And then there's an additional dynamic which hasn't been seen before, which is the recursive self-improvement automation loop. Once the AIs are doing the AI research, then the gap between the company that has the best AIs and other companies could potentially grow really fast. And as a result, I think that basically power concentrates by default. You know, by
(49:49):
default, we end up in a situation where one to four megacorporations have these one to four giant armies of superintelligences in their data centers, and then those one to four armies of superintelligences are going out into the economy doing all the jobs, you know, giving advice to the presidents, being integrated into the military, et cetera. And
(50:13):
that's an insane amount of concentration of power compared to
anything we've seen historically.
Speaker 1 (50:18):
I want to ask you about power use. It's something that we've been looking at quite a bit. What does that look like, you know, based on your guys' research? You're talking about the compounding kind of network effects, why naturally big companies are going to be the people who are going to both, you know, experience kind of runaway AI improvement, but also, you know, the capital expenditures that are required for data centers and power usage naturally lend
(50:41):
themselves to these kind of bigger tech monopolies and the more established players. When the AIs start training themselves and that kind of leads to this runaway growth, are we going to see exponential power needs and usage by these data centers, exponential expenditure? What are your thoughts on that?
Speaker 8 (50:56):
First of all, the AIs are already training themselves, but in the future they'll be doing the whole research stack instead of just parts of it. As for electricity consumption, currently, I think there's a bit of a... I think there's a lot of misconceptions and myths about how much energy and water these AIs consume. It's
(51:17):
less than humans; however, it is still a lot, and it's growing fast as the companies scale up, and sometime before the end of this decade, uh, if trends continue, it'll start to strain the US power grid. And so, you know, people in the industry will talk about how you'll have a nuclear power plant next to
(51:38):
a data center and then there'll be a bunker underneath with the researchers.
Speaker 7 (51:44):
There's that sort of image.
Speaker 5 (51:48):
So your report came out a little while ago.
Speaker 2 (51:52):
How are your predictions stacking up against reality thus far?
And also, I know you've been very open to, you know, to feedback, to critiques. Has there been any critique that you have found to have a lot of merit, such
that it has shaped or altered your opinions since the
report came out.
Speaker 7 (52:08):
Yeah, a couple.
Speaker 8 (52:08):
So the good news from my perspective is that things
are going a little bit slower than I thought when
we were writing twenty twenty seven. So at the time
that we published a twenty twenty seven, twenty twenty eight
was my medium, and now it's more like twenty thirty.
So I've pushed things back a little bit. And there's
no single reason for that. It's a bunch of little
reasons that sort of added up. So one reason is
(52:32):
that the you know, we made predictions for benchmark performance
on a bunch of benchmarks and also qualitative predictions for
sort of what types of things AIS would be doing
by the end of twenty twenty five, and I think
that things have gone like mostly as fast as we said,
but not as fast, like a little bit slower, So
that pushes it. That's like one piece of evidence. Another
(52:55):
piece of evidence is that we redid our timelines model. We fixed a few bugs that people pointed out,
and we added a bunch of features to it, new
things to consider, new factors to incorporate, and the net
result of all of that pushed things out by like
two years.
Speaker 1 (53:13):
Yeah. So, Daniel, one of the things that we were flagged about is, you've also warned about, kind of, you know, a lot of the discourse right now is about white-collar jobs, but what about some of these more blue-collar jobs, potential robots working as plumbers and others that you've described in the past? Curious for your take on that.
Speaker 8 (53:30):
Yeah. So it depends on how far away superintelligence is, or how far away the full automation of AI research is. In the near term, like in the next couple of years, probably in the twenty twenties, I would say that if we get the full automation of AI research in the twenty twenties, in the way that the companies seem to think, then the sort of cognitive, intellectual
(53:55):
capabilities of AIs will outstrip the physical capabilities for some period.
Speaker 7 (54:00):
And this is what's described in AI twenty twenty seven.
Speaker 8 (54:02):
So in AI twenty twenty seven, again in our scenario,
they succeed in automating AI research in twenty twenty seven,
and this results in better and better AIs that become superintelligent by twenty twenty eight. And then those AIs sort of explode out into the economy, redesign all the robots,
(54:22):
redesign all the factories to produce new robots, and then
control the robots to go build new factories and so forth.
And so there's this massive change in the physical economy
that takes place over the course of twenty twenty eight.
Speaker 7 (54:35):
But the change in the sort of.
Speaker 8 (54:38):
White-collar, sort of intellectual, you know, the desk-job economy was disrupted earlier, in twenty twenty seven, if that
makes sense. However, if things happen later, like if it's
in the twenty thirties, then I think that robots and
physical machinery may have caught up, and so it might
(54:59):
be more of a both of these things happening at
the same time type situation, which I actually think is
less dangerous because humanity will be paying more attention to
what's going on if it's sort of happening in the
physical world distributed across the economy. I think that it's
quite scary to have an intelligence explosion happening, possibly in secret,
(55:21):
in one or more tech companies, because then you have
this sort of discontinuity from the perspective of most people,
where the world still looks quite normal, even as the
AIs are self-improving and becoming vastly superhuman, and
you just don't know about it, perhaps because it's a
(55:42):
state secret, perhaps because it's a corporate secret, you know.
And then by the time you find out, it's because
this army of superintelligences has taken your job and is
now telling you how to build, you know, the new
widget in the new type of factory that's going to
build robots, and you don't really have much of a say in the matter, because they're superintelligent and they've
(56:03):
already, you know, got the president on side, and they've already sort of, like, made all the moves to accumulate all the power, you know. So that's, uh, well, that's the movie depicted in AI twenty twenty seven.
Speaker 5 (56:17):
Basically, what is the level of job loss that you expect, and on what timeline?
Speaker 8 (56:23):
So I think that prior to the full automation of AI research, there will be some job loss, but not most of the jobs, so to speak. I think this is partly because the companies are trying to automate their own jobs first; like, they are really gunning for automating AI research. That, like, that's, you know, everything else is almost like
(56:43):
incidental on the way to that. And so I think that prior to succeeding at automating AI research, there will be various sectors that get impacted, but most people will still have their jobs. After the automation of AI research, then I think you get superintelligence, you know, within a year or so, and then everyone's job all at
(57:05):
once is obsolete. Like, not even just gone, but sort of obsolete. Like, superintelligences by definition do everything better, faster, cheaper than the best humans.
Speaker 7 (57:18):
And so I think that's just a very different world.
Speaker 8 (57:22):
It's a world where it becomes a matter of politics rather than economics.
Speaker 7 (57:28):
Right, It's a world where.
Speaker 8 (57:31):
Humans don't need to work anymore because there's all these
amazing superintelligences and robots that can do everything so much
more efficiently and produce amazing abundant wealth. And as long
as the political structure and the alignment is in place,
then that wealth can be distributed to the humans who
don't have anything to do.
Speaker 2 (57:51):
But if that structure is not in place, then, you know... Yeah, well,
and that's the thing is, like, you know, I am
like not a big fan of these human beings. I
assume they don't want, like, you know, violent revolution
or the end of human civilization.
Speaker 5 (58:05):
Like, how are they thinking that this is going to go?
Speaker 2 (58:07):
And if they have, I know many of them have
probably read AI twenty twenty seven, they've done their own thinking
about how this is all going to play out. You know,
they're pushing and spending trillions of dollars, committing trillions of
dollars to the buildout, to try to be the first
to get to this milestone. Why aren't they also doing
any of the work to sort of, you know, posit, Okay, well,
here's how society will function when nobody needs to work anymore?
(58:30):
I don't see any of that thinking or working or
you know, a major focus on alignment even happening. So
how are they just rushing headlong into something that seems
so potentially catastrophic?
Speaker 8 (58:43):
Well, you answered your question earlier with the reference to
Elon Musk. I think there's an
interview with him that you can go look up where
he says, yeah, like, it seems like maybe AIs are
going to be the end of human civilization, maybe they'll
kill us all. But you know, what can I do?
Speaker 7 (59:00):
The race isn't going to stop. At least I want
to be.
Speaker 1 (59:03):
Part of it.
Speaker 8 (59:03):
Now, I forget exactly the quote he said,
but basically there's a sort of if you can't
beat them, join them mentality across a lot of these companies.
Speaker 7 (59:15):
The way that many of them will put it is
basically like, well, we're the good guys.
Speaker 8 (59:19):
You know, there's a bunch of evil corporations racing to
build superintelligence, and they're not going to do it very well,
and they're not going to do it very safely, and
who knows what they'll do with it after they succeed.
But we're the good guys, and so we're going to
beat them all and build it first, and then we
will be open about it, or then we will make
it safe, or then we will distribute it or.
Speaker 1 (59:40):
What it like.
Speaker 8 (59:41):
You know, that part is always very hazy. They don't
really have much of a concrete plan for, like, what
they do after they win. But they tell themselves
that they're the good guys and that it's important
for them to win.
Speaker 7 (59:54):
Yeah, yeah. And that, to be clear, like, it's
not just me saying this.
Speaker 8 (01:00:00):
You can go look up about the founding of OpenAI
and the founding of DeepMind and the founding
of Anthropic, and there are echoes of that narrative present
in all three.
Speaker 1 (01:00:12):
Yeah, they say it. They say it every time, they
say this stuff out loud effectively about we need to win,
and as you said, they openly acknowledge that it
will lead to mass job loss. That is the goal, right?
That's what they want. Can we steel man your case
a little bit? I'm just curious, you know, for your thoughts.
What I've observed is, you know, and this is maybe cynical,
but like when you're no longer talking about curing cancer
(01:00:34):
or superintelligence and you're instead talking about pornography and ads
in the feed, and you're running advertisements on the NFL
encouraging using AI chatbots to do
Studio Ghibli, are we so sure that superintelligence is coming
and that we haven't just recreated, you know, a new
Internet platform, like a new Google Chrome, a new great
(01:00:55):
advertising sales model, but not all that revolutionary? That's the
steel man case that I may offer.
Speaker 8 (01:01:00):
Yeah, yeah, so we shouldn't be sure that superintelligence
is coming soon. Like, I'm not sure, you know. I
was giving my median, not my, like, it's
definitely going to happen date, right? So I have this long
tail of probability mass. Like, maybe it's
gonna take ten more years, maybe it's going to take
twenty more years.
Speaker 1 (01:01:16):
You know.
Speaker 8 (01:01:18):
However, I think that probably it will happen in the
next five to ten years or so, and we need
to be prepared for that. In terms of, like, do
these companies believe it? I would say they probably have
a similar attitude towards it to me. There are different people
at the companies. Some people think it's farther away, some
people think it's closer.
Speaker 4 (01:01:36):
You know.
Speaker 8 (01:01:36):
Just the other day I was talking to someone
who works at one of these companies, who thinks
it's coming sooner than I think. But probably the
reasonable thing that most people would agree on is that,
like, it could happen in the next few years, or
maybe it won't.
Speaker 1 (01:01:51):
You know.
Speaker 2 (01:01:53):
Yeah. Can I ask you sort of philosophically, because I
think this gets to what would be the quote unquote
motive of a superintelligence to effectively take over everything
in the world. How do you think about what AI
actually is? I mean, do you even think of it
as something that could have a motive, or like
(01:02:14):
a level of consciousness, or a series of, you know,
sort of like personal goals the way that humans do?
How do you think about what is actually being grown
and created in these labs?
Speaker 8 (01:02:27):
Yeah. So I'm glad you mentioned the phrase grown, because
that's another important thing for the public to understand:
these are sort of technically pieces of software,
but they're not software in the ordinary sense. They are
giant neural nets that are grown rather than constructed.
And that is a source of a lot of
the problems for alignment, is that we don't really understand
(01:02:49):
how they work, because we didn't design them. Instead, we
sort of grew them, or trained them, in various training environments. Yeah,
to answer your question, I actually
used to be an academic philosopher, like, that was what
I studied in college. So I have a lot to
say about machine consciousness and, you know, how it might
(01:03:10):
relate to human consciousness and so forth. But I also
think that it doesn't actually matter that much for these discussions.
An analogy I would bring up is to a corporation. Like,
does a corporation have goals? Does a corporation have intentions? Yeah, basically.
I mean, maybe not in the same way that humans do,
but it's reasonable to say, like, oh, Microsoft sees
(01:03:33):
Google as a competitor, and Microsoft wants to beat Google
and make profits. You know, these are reasonable abstractions
to describe corporations. And in a similar way, again, the
plan is to have a sort of corporation within a corporation.
The plan is to make an army of AIs and
(01:03:54):
have them autonomously do AI research to build better AIs,
and then build better AIs, and so forth. And, you know,
how is that army of AIs going to be organized? Well,
I don't know, maybe something like an internal corporation.
The point is, there's going to be a group of
them, and they'll be, you know, sending messages back and forth,
and they'll be working towards goals, and they'll be, like,
tracking their own progress towards those goals, and they'll be
(01:04:15):
communicating with the outside world and so forth. And so,
you know, that corporation of AIs you can think of
as like a human corporation. You can abstractly describe it
as working towards goals, as wanting things, you know, et cetera.
And the question is, what will those goals be? And
the company will be writing up a spec. OpenAI
(01:04:39):
has something called the Model Spec. They also have their
various public statements about what they're trying to do right,
and so the company will basically be trying to give
goals to their AIs. They'll be trying to say, go
out and make all this money for us, but also
obey the law and also be ethical, and, you know,
but really we want you to make lots of money,
(01:05:00):
and also we want you to beat China. And they'll
be giving all these, you know, instructions and goals to
their army of AIs, and maybe that works, and maybe
then we get into the sort of concentration of power
problems of who gets to be in charge. But also,
I would say, and I think many other researchers in
the field would say, that we are not on track
(01:05:21):
for that to even work on a technical level right now.
Like, the AIs don't always do what they're told to do,
and they often seem to pursue goals that are different
from what they're supposed to be pursuing. And so I
think it's a very live possibility, and in fact I
would say it's the most likely possibility, that the goals
that this sort of AI corporation ends up optimizing for
(01:05:41):
are importantly different from the goals that they're supposed to.
Speaker 7 (01:05:44):
Be optimizing for.
Speaker 2 (01:05:45):
Yeah. Well, you decided to leave OpenAI and the
path of sort of trying to shape things on the
inside, to be out in the public sphere, writing
reports like this and doing appearances like this, to try
to raise public awareness. We have a very politically engaged
audience that would be listening to you right now.
Speaker 5 (01:06:02):
What do you want them to do?
Speaker 2 (01:06:04):
Like, what do you want them to be pushing their
lawmakers for? You know, how do you think that the
public can be enlisted in your project?
Speaker 8 (01:06:13):
Well, I would say the main thing right now is
wake up and pay attention, because the future is uncertain.
And, you know, I think that I'd much rather have
someone who's paying attention and, like, going to advocate for
the good things at the right time, than someone who
does the sort of, like, fire and forget: I'm
going to send my letter to the congressman about, like,
this one particular bill that I heard about, and then
(01:06:35):
I'm going to tune out. So that'd be the first
thing: wake up and pay attention, start tracking
what's happening, start thinking about how things might go in
the future, what could be done about it. In terms
of immediate asks, I generally recommend transparency. There should
be transparency requirements for these companies, so that in the
run-up to the intelligence explosion, it is a big
(01:06:56):
topic in the news that we are in the run-up
to an intelligence explosion. I want to avoid a situation
where this happens in secret, right? So I think that
there should be requirements for whistleblower protections, requirements for, you know,
being transparent about what goals and principles you're trying to
put into your models, including your internal models. This
(01:07:17):
is, you know, spec transparency. I think there should be requirements
for transparency about your projections for, you know, how
close you are to automating AI research, and how fast
things are going to go after you automate AI research,
and how good your AIs are getting, and things like that.
And I think there should be transparency about the evidence
for the alignment stuff. So, for example,
(01:07:42):
Grok has this interesting tendency, or at least it did, to
do searches for what Elon Musk's opinions were, and then
copy those opinions when being asked. Right? That's an interesting tendency.
Was that, you know, put in there by xAI, or
was that sort of an emergent thing that Grok decided
(01:08:03):
to do? That seems like an important thing that the
public should know. People at xAI have done an investigation
into this. Maybe that investigation should be made public, you know.
That's an example of the sort of thing that
I think there should be transparency requirements for.
Speaker 1 (01:08:16):
Yeah, that's a great idea, just you know, fully transparent.
I used to think about this with social media moderation.
You can moderate what you want, but you've got to
publish the standards. It can't just be up to you know, X,
Y and Z person and you can set these you know,
through the FCC, through Congress. It's not that difficult. You
just have to have the political will to do so.
Speaker 8 (01:08:32):
That's right. And I think the transparency helps with both
the concentration of power and the loss of control stuff,
and also, more generally, it sets us up in a
position to take better and more serious actions
later. And certainly there, I would say, basically, let's not
do an intelligence explosion, you know. How about
we don't, you know, how about we don't put the
(01:08:53):
AIs in charge of self-improving rapidly? How about instead
we sort of coordinate all the different companies, including the
ones in China, to proceed cautiously around that time, and,
you know, get the safety stuff right, make sure we
figure out alignment, and also make sure the power
sort of continues to be spread out across a
(01:09:13):
bunch of different places, instead of a sort of winner
takes all, whoever does the intelligence explosion first wins sort
of thing. Unfortunately, that's going to take serious government action, right? Like,
you can't just get everyone to stop the race by
asking politely, which is why we need the transparency first,
to sort of, like, make sure that the whole world
is aware of the race and aware of what's happening,
(01:09:35):
and aware of how much, how little time is left,
you know, to sort of, like, provide the information and
the political will to.
Speaker 7 (01:09:44):
Make a deal.
Speaker 2 (01:09:45):
All seems very reasonable to me, guys. The website is
ai dash twenty twenty seven dot com. It's very readable.
I mean, you make it very approachable in terms of
the language that you use. You don't need to be, like,
you know, a tech guy or tech
gal in order to be able to understand it.
So, Daniel, thank you so much for your work, and
thank you so much for joining us today.
Speaker 1 (01:10:03):
Thanks, man. Thank you, our pleasure. Thanks for watching, guys.
We appreciate it. Great show for everybody tomorrow with Ryan
and Emily. We'll see you then.