
June 8, 2024 119 mins

In this episode, Mark and Shashank are joined by a special guest, Matt from Cerebras Systems. Matt, a key figure at Cerebras and a regular at the South Bay Generative AI Meetup, shares his wealth of knowledge about the cutting-edge advancements in AI hardware. He discusses how Cerebras is revolutionizing the field with their specialized ML training chips, which compete with Nvidia by optimizing for specific machine learning workloads. Tune in to learn about wafer-scale computing, the challenges and innovations in AI hardware, and how Cerebras is poised to lead the future of AI infrastructure.


Episode Transcript

Available transcripts are automatically generated. Complete accuracy is not guaranteed.
(00:00):
Hello, everybody, and welcome. Today is a very special episode, because it's not just, you know,

(00:07):
you listening to me and Shashank rambling back and forth about nonsense. Today we have a guest.
And actually, I would even call him a friend at this point, right?
At this point, I think so, yeah. He's been one for quite a while.
So, Matt is a regular attendee at the South Bay GenAI meetup, and is always one of the most

(00:35):
interesting people, I would say, that I get to talk to whenever I can. It's always a pleasure when I'm able to talk to him. Matt is just a wealth of knowledge about a bunch of stuff. You know, whenever I talk about something, he tends to know a lot about it. He's the type of guy who

(00:56):
knows a lot about a lot. So I think that you guys are going to enjoy the conversation we're going to have today. So, Matt, he currently works at Cerebras, which is a company that is, in a certain sense, competing with Nvidia at the highest level. So they make the largest computing,

(01:20):
I don't know, what would you call it? The largest platform? Yeah, it's probably one of the most powerful hardware compute platforms in the world. And I'm just super honored that you'd really take the time to chat with us today. So anyways, Matt, why is Cerebras better than Nvidia? Mark, thanks for those kind words. Yeah, I could ramble

(01:44):
on about that for a long time, but the gist of it is that Nvidia makes GPUs, and the GPU was designed to move textures from a centralized memory through the graphics processor, and to process triangles and pixels and whatnot. And because of that, it doesn't have much knowledge of the content of the

(02:05):
data. It does, however, have a lot of processors in it. And back in the day, if you wanted, you know, tens of thousands of processors for 400 bucks, you could buy a video card. And once Nvidia opened up CUDA, and Google got a hold of it and made TensorFlow, and the ML guys realized that they could get access to all the microcode in all those cores, they were off to the

(02:30):
races. Actually, AMD was really late in releasing any kind of API for their GPUs, so Nvidia was the only game in town. But the GPU was primarily designed to move textures. And because of that, when you want to do a workload, you know, ML is often

(02:52):
moving an N by M rectangle of data from memory into the processors, and oftentimes back to memory. And what Nvidia doesn't know how to do is this: in an ML workload, after the very first layer, because of the nonlinear activation function, you often have a lot of zeros. And so there's

(03:12):
significant sparsity, where a good portion of your data, you know, from 20, 30, 40, even up to 50, 60, 70 percent, is going to be zeros. And Nvidia doesn't really know how to not move zeros from memory into the processor. So they've got to move that whole N by M rectangle, a good portion of which may be zeros.
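[Editor's note: the cost Matt is describing can be sketched in a few lines of Python. The layer size and the roughly-half-zeros activation pattern below are illustrative assumptions for this transcript, not Cerebras or Nvidia figures.]

```python
import random

random.seed(0)
N = 64  # illustrative layer size

# Toy activations: negative values clamped to zero, as a ReLU layer
# would produce, so roughly half the entries end up as zeros.
activations = [max(0.0, random.gauss(0, 1)) for _ in range(N * N)]

# A dense engine moves and multiplies every entry, zeros included:
# one multiply-accumulate per activation per output column.
dense_macs = N * N * N

# A sparsity-aware engine could skip every multiply whose activation is zero.
nonzero = sum(1 for a in activations if a != 0.0)
useful_macs = nonzero * N

sparsity = 1 - nonzero / (N * N)
print(f"activation sparsity: {sparsity:.0%}")
print(f"dense MACs: {dense_macs:,}  useful MACs: {useful_macs:,}")
```

At roughly 50 percent sparsity, about half the multiply-accumulates, and half the memory traffic for activations, do no useful work, which is the time and energy cost being described.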
That takes both time and energy to do. And when you do that and you spread it out

(03:38):
across all your processors, you've also got to do multiplies and adds. And they don't know how to not multiply and add zeros. And everybody from grade school math knows, you know, adding and multiplying zeros is a no-op. So they use bandwidth and processing power to move those zeros. Now, that's one thing,

(04:02):
right? And that's one advantage. The other one is, we pioneered this technology called wafer scale, and it's been tried in the past unsuccessfully. What wafer scale is, is rather than cut the reticles off the wafer to make individual chips and put them in packages, we leave them all on the

(04:24):
wafer and cut the rounded parts off to make a giant square of a chip. So it becomes the biggest chip in the world, by a long shot. It's roughly 12 inches on the diagonal. It's got, you know, many dozens of rectangles, or reticles, in it. It's got trillions of transistors. It's got close to a million processing cores in it, all in the same chip. And by doing that,

(04:49):
when you move data from one die to the next, it's just a tiny little trace that is extraordinarily fast, way faster than paying the cost of charging up the capacitance to get off of a single die, through a package, and then a board, and then getting through the inductance and capacitance of the chip that it's talking to, which is slower and takes more power. So we've got a couple of

(05:13):
things going for us. There's the scale of the chip itself, which saves power and time. And there's the ground-up design for ML workloads. We don't have any shared memory, so memory is local; there's no moving zeros in and out of memory. The ground-up design is dataflow, so whenever there's a calculation that results in a zero, it just gets dropped, and the next non-zero

(05:37):
data gets moved in its place. One of the problems with this, though, is that all the benchmarks are designed for dense data workloads. So we don't do benchmarks, because, you know, sparsity is where we get significant gains in our performance. And so we tend to talk more

(05:58):
about how fast our customers can get an answer from the product. So those are just two examples of why we think we have a better solution than our competitor. Okay. So, I think there's a lot in what you said there, so I'll just try to, you know, dumb it down for at least myself. Sure. And, like, you know, for the listener. So I guess what you're trying to say

(06:21):
is, you know, Cerebras is better because it's specialized for the particular workload of training LLMs. Would you say that's fair? I'd say training anything; you know, we're not specific to LLMs. Okay. We're a general-purpose ML training platform; LLMs just happen to be the rage right now. But

(06:41):
before that, it was computer vision and image recognition and things like that. That's where the first real progress got made. And then people realized, and I think our management realized, that what's really going to happen is LLMs are going to be the thing. And so a couple of years ago, we started investing really heavily in making sure we could run LLM workloads.

(07:01):
And that proved to be pretty prescient. I guess, do you have a sense for what kinds of models have more sparse weights as opposed to dense? It sort of doesn't matter, actually. In ML, there's an activation function called ReLU, and what it means is any value

(07:24):
that is less than zero becomes zero, and any value that's greater than zero just stays itself. And the nonlinear part is right at zero: you've got this sharp angle there, which makes it a nonlinear function. And so that means that, you know, after the first layer of processing, you could wind up with a significant number of zero values.
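[Editor's note: ReLU as Matt defines it is a one-liner; the sample inputs below are arbitrary, just to show how many zeros it produces.]

```python
def relu(x):
    # ReLU: values below zero become zero; values above zero pass through.
    return x if x > 0 else 0.0

inputs = [-1.5, -0.2, 0.0, 0.7, 2.3]
outputs = [relu(v) for v in inputs]
zeros = sum(1 for v in outputs if v == 0.0)

print(outputs)                            # [0.0, 0.0, 0.0, 0.7, 2.3]
print(f"{zeros} of {len(outputs)} outputs are zero")
```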

(07:49):
I guess you mentioned two things that Cerebras does much better than Nvidia. One is the wafer-scale computing: you guys just have bigger, more massive chips. And the other one is dealing with sparsity. So isn't that a software or design problem that Nvidia could also handle? Like, they could just

(08:10):
ignore sparse values somehow, with some fancy math or computer science?
They can. And I don't want to speak for their marketing or anything, but my understanding is they can do it on a stride basis. And if they can kind of align the zeros to that stride, they can skip them, but they can't do it completely asynchronously

(08:32):
to the way the data comes in at the moment. So you would need to drop the zeros; you would need to have special instructions and things to know that it can drop the zeros. Because, you know, oftentimes the matrix operation is: here's an N by M matrix, and here's a thing you're multiplying it by, and it just does all the values and fills them in, and

(08:53):
that's what they're really good at. But it's not dataflow. It's more of a SIMD kind of architecture, where you're processing multiple pieces of data, but they all have to process at the same time in order to get the data through. And then it has to go back to memory. And so you need special hardware to kind of figure out where the zeros and non-zeros are.
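[Editor's note: a sketch of the contrast Matt is drawing. The 2:4 pattern below is one published form of structured sparsity on recent Nvidia GPUs; treat the details as an outside illustration, not something stated on the show.]

```python
def fits_2_of_4(row):
    # Structured 2:4 sparsity: in every group of four consecutive values,
    # at least two must be zero for the hardware to skip work.
    return all(row[i:i + 4].count(0.0) >= 2 for i in range(0, len(row), 4))

aligned   = [0.0, 0.0, 1.1, 2.2, 0.0, 3.3, 0.0, 4.4]  # zeros match the stride
unaligned = [0.0, 1.1, 2.2, 3.3, 0.0, 0.0, 0.0, 4.4]  # same zero count, misaligned

print(fits_2_of_4(aligned))    # True  -> the zeros can be skipped
print(fits_2_of_4(unaligned))  # False -> processed as if dense

# A dataflow design needs no alignment: any zero is simply dropped
# and the next non-zero value takes its place.
dropped = [v for v in unaligned if v != 0.0]
print(dropped)                 # [1.1, 2.2, 3.3, 4.4]
```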
Whereas if you have dataflow, it just flows through. So you mentioned dataflow versus, like, SIMD architecture;

(09:18):
can you maybe define what that is? So, in a dataflow architecture, the data just goes from one stage to the next and isn't necessarily aware of anything else, any adjacent data, or things like memory size or whatever. It just moves as it moves through the pipeline. Whereas if you have

(09:44):
a matrix and you're trying to do an operation like a true parallel processor would, it just knows it's got N by M pieces of data to move: it moves it, then does its operation, then does the next operation, and the next. And that's more of a, I don't know what you would call it, a matrix flow; the chunk of data is much bigger. Okay. So, in a certain sense, to

(10:11):
kind of summarize: Cerebras is better, specifically for ML workloads, because the chips are optimized for it. Right. And Nvidia is more of a general-purpose chip, in the sense that they're going for mass-market appeal; like, those chips need to work for gaming and Ethereum mining and whatever. Therefore, they wouldn't be able to

(10:36):
make the optimizations that Cerebras is able to make. And because of that, Cerebras is potentially a far better chip for training ML workloads. Yeah, exactly. And early on in the company, our CEO, you know, he's a good marketing guy, Andrew Feldman, he basically said, you know, CPUs are like the

(10:59):
hyenas of the savanna, right? They'll scavenge for food or kill their own food; they can basically do anything. And, you know, they can move around; they're little and mobile. And then you get to GPUs, which are a little less like hyenas, but they do general-purpose workloads. And what we make is a cheetah: it just does the one thing really, really fast. And everybody, you know, has an

(11:22):
immediate visualization of that cheetah running through the savanna from whatever nature show you saw it on. And it was a beautiful kind of marketing thing, because not only does it illustrate the point that we're not a general-purpose thing, but, you know, when you visualize a cheetah versus a hyena, you have this positive image. I thought, well, that's really clever: let's just cast our competitors in this light as big hyenas, and us as these beautiful, majestic savanna animals. It's really hilarious.

(11:47):
So you mentioned you guys are cheetahs at training. Training, yes. And I guess that made me realize that Cerebras doesn't really focus on inference as much, and I was curious: can your chips even do inference well? Oh, absolutely. So there's this thing coming up, and we knew that we

(12:07):
needed to be in the inference game at some point. But if you consider that a lot of the end results of training, you want the models to work on cell phones, you want them to work on personal computers, you want them to work on much, much smaller things, then it makes sense to train a model on a big, hefty, you

(12:28):
know, resource-heavy compute thing, and then use sparsity to make that model really small, and then run it on smaller things, with maybe a few percentage points' loss in accuracy. But there's this whole market coming out now. Like, if you think about LLMs and some of the image generation networks right now, they're not what I would call real-time. You know, you give it some data and it

(12:52):
comes out with an answer. It can take milliseconds; it can take, you know, a minute or two to make a video, or music. But it's not really real-time in the sense that nothing is depending on that coming out. But now we have this thing, which, I don't know, I call it edge inference, where you're doing inference where the data is required to come out really fast and get used. And once you fill the pipeline,

(13:17):
your inference then becomes this sort of real-time model, which is a little different than what a lot of models have done in the past. So we actually partnered with, God, I'm blanking on the name of the company, recently. Well, I saw you partnered with Qualcomm. Qualcomm, right, to help us with the inference part of it. And the funny thing is, you know, some of these edge inference

(13:42):
tasks are so compute-intensive that we think we're actually faster at inference for those bigger jobs. And so you might train a model on Nvidia chips and then do inference on our box, because its throughput is faster, right? Because it's just a big chip, and everything is moving through very fast. And inference, I've heard, takes roughly a quarter of the compute required

(14:06):
for training, which has this whole backward pass, with weights and activations and things. So would Cerebras chips be better for inference for some of the larger models? Sure, absolutely. But remember, our box takes 20 kilowatts; it's, you know, 16U high and sits in a 19-inch rack. You're not going to put it in a car, and you're not going to carry it around with your cell phone or anything like that. So it's a very big,

(14:30):
compute-heavy system; it requires water cooling, for instance. So you can do inference on it, but you have to have a workload that you're going to want to commit that kind of resource to. It'll be some of the big dogs running some of the larger models? Or maybe some startup that's taking, say, Llama 3's bigger-parameter model, fine-tuning it, and serving that? Quite possibly, yeah. I mean,

(14:51):
there are those use cases. It's, you know, a generic box, right? You have a model, you've got weights, you've got activations, you know how big it is, you know how many layers you have. You can describe it with, you know, TensorFlow or PyTorch or something, and once you've described it and you have your weights and activations, they're just matrices of data. You just load it onto whatever system you want to load it onto, and you're off to the races. But maybe going back to your original point:

(15:13):
you don't market the Cerebras chip for inference; you market it for training. It's the cheetah of training, not the cheetah of inference. So, assuming no one wants to use it for inference, how big do you think the market is for people just wanting to continuously train models indefinitely? Because, like, let's say you train Llama 3, for example, and then you're done, and then you release

(15:37):
it to people. What are these chips doing in between successive model releases? Is there any point to having them, or is it just, like, dead weight? At the moment, well, I can't talk about our marketing and how they're positioning inference, so we'll leave that one aside. But as far as the training part: you know, you train a Llama model, and you're going to train it on data from, say, today, 2024, right?

(16:02):
And then two years from now, there's going to be, you know, another large increment of data out there in the universe that you're going to want to retrain the model on, so it's not dated. So I have a feeling they're going to be retraining models over and over again, all the time, and that's one point. The other point is, there is a ton of untapped training markets. Like, our

(16:24):
partner G42, that's bought the lion's share of our systems right now. They just released a 30-billion-parameter Arabic model, right, which, you know, nobody else has. And so I think, as far as workloads are concerned for the next decade, I can imagine there being a demand for training,

(16:50):
over and over again, bigger and bigger models, to get more accurate. And, you know, the training cost, say, of an LLM grows much faster than linearly with the number of parameters you have, right? So if you want to train a three-trillion-parameter model, I think GPT-4 is on that order of magnitude.
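[Editor's note: the scaling Matt is gesturing at can be sketched with a common rule of thumb, roughly six floating-point operations per parameter per training token, plus the compute-optimal habit of growing the training data along with the model. These are outside assumptions, not Cerebras numbers.]

```python
def training_flops(params, tokens):
    # Rule of thumb: ~6 floating-point operations
    # per parameter per training token.
    return 6 * params * tokens

TOKENS_PER_PARAM = 20  # illustrative data-scaling ratio

small = training_flops(3e12, TOKENS_PER_PARAM * 3e12)  # 3T-parameter model
large = training_flops(6e12, TOKENS_PER_PARAM * 6e12)  # double the parameters

# Doubling parameters while scaling the data alongside them roughly
# quadruples the compute: both "dimensions" of the job grew.
print(f"{large / small:.0f}x more compute for 2x the parameters")
```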
A trillion, two trillion, three trillion parameters. You might use X amount of hardware. Well, if you

(17:13):
want to go to a six-trillion-parameter model, it won't be two X the amount of hardware; it might be nine X, because it's just that much more data, that many more parameters. The network has two dimensions rather than one, so you might need significantly more compute to do it. So there's this sort of arms race going on over who can build the biggest compute

(17:33):
for training. Yeah, that's true. I think that Sam Altman is not trying to raise, you know, nine trillion dollars or whatever the crazy number is, just because he doesn't want to do more training, right? I think that right now, all these big companies are trying to do as much training

(17:55):
as possible. Actually, I remember in an interview with Mark Zuckerberg, he said that the biggest bottleneck right now for training these models is not actually hardware, but just straight-up energy. So if we're running into energy constraints, I think that we'll want to get as many optimizations

(18:16):
as we can out of the hardware. And with Nvidia GPUs, or any GPUs in general, if there are issues with sparsity, if there are issues with the chips being small, I could see a lot of these companies wanting to use Cerebras hardware, just because it's, you know, optimized for training ML workloads,

(18:39):
because these companies are spending billions and billions of dollars on training. So, to me, if you're going to spend that much money, I don't understand why you'd want to get some sort of general-purpose chip. I think that you'd want to get a chip that is specialized for that. Now, I guess the counterargument to that is, well, if you

(19:01):
spent a bunch of money on specialized hardware for training, then you could only use it for training, and you couldn't use it for other things. So I guess maybe some companies are thinking that, okay, we have this hardware, and we can use it any way we want, in a general-purpose fashion.

(19:22):
So it's like, we can use it for training, and then, I don't know, we could use it to help run our web servers, or we could do some Bitcoin mining with it, you know, whatever. You could do that, I guess. But still, I think that there should be a high enough critical mass of people who say, hey, look, if I'm going to spend 10 billion dollars on this training, you might as well get something that's specialized for the training. I'm pretty sure that

(19:48):
the optimal use of the hardware will be training for the next decade, and they'll just be adding more hardware over time. Now, I don't know; when successively newer generations of chips come out, maybe some of the old hardware gets a little obsolete, but a lot of models actually can run on multiple versions of hardware. So I think the training will just

(20:08):
continue to be there. And a lot of companies, like, for instance, we work with some pharmaceutical and medical industry customers, and they have very special privacy requirements in terms of their servers and how they need to be isolated. And so they're not going to want to put ML hardware in the cloud; they're going to want to have it isolated somewhere, so that

(20:30):
they can use their own data and protect their patients' data and whatnot, on their own premises. So I suppose, to solve some of these issues, you guys have your own Cerebras cloud, to maintain peak utilization across all your chips, so that someone doesn't need to

(20:51):
buy their own chip and have it sit idle after they've finished training? Yes and no. We do have a Cerebras cloud service, kind of like that. Okay. But I don't know that it's something you can just kind of spin up an instance on, like AWS, because you need a lot of compute for it. Most people that are serious about it and want to use our system

(21:12):
will come to us, and we have different levels of service, from white glove, where we have people that handle your model and all that kind of stuff, all the way to "I know what I'm doing, I just want to run on my cluster of your systems." And that points to another advantage I think we have, which is ease of use. Early on, when you trained on a bunch of Nvidia things, you could tell it how many GPUs you want to use and do

(21:35):
it, but it actually takes some amount of coding to scale out to a number of GPUs. I think our CEO pointed out that, you know, it might take many tens of thousands of lines of code to scale out to more GPUs, where for ours, you really, truly just put in another number, and the user doesn't know how many boxes he's putting his thing on. And as far as sharing

(22:00):
workloads, our biggest customer, G42, is kind of the largest cloud provider in the Middle East. We've got two 64-node clusters spun up, and that's 64 of our chips, each in its own box. While they use it most of the time, there are periods of time where we'll have other customers rent time on it. So, I grew up in the Middle East, in Abu Dhabi, and

(22:26):
I have a rough sense for the culture and, like, the interests there. They have a ton of money. They don't have as good of a talent pool as here in Silicon Valley, but, I mean, I think most of the researchers are trying to go where they have the most compute, so it seems like a great play, a very

(22:49):
forward thinking play and uh these kings and the shakes uh and the rulers there are very aware
that oil may not last them forever that's a good move on that part um I I think they were thinking
about moving into clean energy at some point but um they've been trying to get into the tech game

(23:09):
for a while. Yeah, yeah. And I think now it really makes sense for the Middle East to try to do something big, because compute is a very valuable resource. Sure, sure. I just hope the US is listening, and we decide that we want to have our own compute here as well, and not let it go completely overseas. Well, I did watch the Cerebras AI Day keynote, and they mentioned a couple of different

(23:33):
nodes there: Santa Clara, San Diego, and one other location, I think. So, yeah, the initial announcement with G42 included three 64-node clusters. How big is that? Is that a supercomputer, or maybe just, like, a big server? Oh, I can answer that: it's the equivalent of many thousands of

(23:58):
GPUs, right? So you could call that a supercomputer. It is a supercomputer, yeah. So, what happened is, we went into G42, and, like you said, they're not experts in ML. And so they'd been trying to get one of their models to work on a bunch of GPUs, and I'm not going to say how many; I don't remember the number exactly.

(24:20):
And they'd been trying for a bunch of weeks, you know, several weeks, to get it to run, and it hadn't finished yet. So we came in with our box, or a bunch of our boxes, and we finished the run in a week. It finished, and it gave them the accuracy they wanted, and they said, okay, we want this. Now, I don't think they're the experts at ML models, so

(24:43):
someone at one of the hyperscalers, like OpenAI or somebody, could probably make a model better and more efficient and have finished. But most customers are not like that. Most customers say: I've got a model, I've got data, can you help me run it? And I don't want to, you know, hire hundreds of engineers to optimize it down to the point where I get every last ounce. They just say, I just

(25:05):
want to run it, you know? Can we run it somewhere? And so G42 announced they're going to buy three 64-node clusters. Now, the initial 64-node cluster, the press was quoting at about 100 million. I just can't talk about how much they really cost, but the first one was quoted at 100 million, and then two more after that, each at, say, 100 million per cluster. And then right after that, they said, we want nine more 64-node

(25:28):
clusters, or rather, six more 64-node clusters, for a total of nine. Well, at the moment, it's easier to have them here in the US, because you don't have to do export stuff. So we originally were going to ship all our systems over there, and then there was going to be so much import/export paperwork that we said, well, you know, if we just put in a data center here, there's zero of that, and access to

(25:53):
it is immediate either way, so why not just host it here? So we're hosting it here. But to the point that Mark mentioned, energy is a big cost for training these models, and they're in the Middle East; they have all the energy. So I'm surprised that they didn't want to go through the paperwork to ship, what was it, close to a billion dollars of chips? You know, I don't know for sure. I can imagine that somewhere along the line there will be systems shipped to them, and they

(26:18):
will have an on-premises system. I mean, it's a very complex system, so it's a lot easier to support when it's here. We ship, like, 64 nodes, but we actually ship a number of extra nodes as spares, so if one of them shows a problem, we'll cycle it in and out. And as you can imagine, because we're a wafer-scale system, sometimes you find

(26:42):
latent defects. We have significant amounts of redundancy, and so we'll swap out that system, figure out how to repair that node, and then it becomes the next spare. So that's all fairly seamless. So, that was going to be one of my other questions. When you're making traditional chips, whether CPUs or GPUs, on the scale of, like, you know, a laptop, mobile phones, and even servers, they're

(27:07):
pretty tiny, and, I guess, you take a die design and stamp a bunch of different chips onto a large wafer, and see which of those stamps look good. And if any one of those stamped CPU or GPU dies doesn't look good, then you kind of just toss it out. But

(27:30):
with wafer-scale computing, you have a massive chip, I think maybe 10, 15, 20 times the size of a regular GPU. You don't want to throw that away. You can't, because you would have a yield of zero. We know, you know, what the defect density is; there are going to be defects. You know, back in the day, one of the technologies that I worked with was like 16 defects per so many square centimeters,

(27:52):
or something like that. So there are defects; silicon doesn't come out perfect. So, you know, they're going to be there. Now, whether they fall on a transistor or a trace or something that's going to kill you, that's one thing, but they're going to be there, and they're going to wipe out certain chips. So you have to have redundancy. And other companies have been doing this forever, right? Like memory companies: in order to get high yields, they have whole extra columns of bits that they can

(28:14):
swap in as redundancy, right? If they find that there are too many bit errors in a column, they'll just take it out, and so they'll get a really high yield. And when you apply power to it, the redundancy programming gets applied, and nobody using it is any the wiser. So we have a standard size that is smaller than the actual total size, and we use the difference for redundancy, to map

(28:38):
out places where the defects are. So I guess the chip is bigger than it needs to be, so that if there are any defects in the middle, you would block out that row and column? I can't talk completely about the technology, but suffice it to say that there's plenty of overhead. We can

(28:58):
map out where the defects are and still provide a standard size for the compute rectangle. I mean, you know, if you start with an X by Y rectangle of physical processing cores, software sees an N by M rectangle that's slightly smaller than that,

(29:20):
and we have enough overhead to guarantee that the software sees that N by M rectangle and can run on it. And so, you take all these designs for this massive wafer-scale chip and you send it off to TSMC, right? TSMC is our manufacturer, yes. Yeah,
they're the only guys who can do it. Is there any constraint there? Because I assume everyone in the

(29:47):
world wants TSMC's new five nanometer, or even smaller; they keep going, to three nanometer, which I think Apple's at. Are there any challenges you guys face in getting the latest and greatest from TSMC, or making enough chips? Our quantity is so small compared to everybody else's. I mean, a lot for

(30:11):
us is minuscule compared to their normal daily throughput, so they don't really care about that. So the manufacturing throughput is not a problem for us. I think, you know, in the early days of the company, we got access; you know, TSMC wanted to do this, near as I can tell. And that was before I joined. I was employee number 30, so I joined about a year after

(30:35):
the company started. But they had engaged with TSMC and got access to researchers there, which is pretty unheard of for a tiny startup, who wanted this technology to happen, and they made it work. And they continue to. You know, obviously it's a slightly different process, because you have to make the connections between the dies as one or more steps in the process afterward,

(30:57):
and that's different from what everybody else does; they just cut it up and put it in packages. So it's a slightly different process, and we need to engage with them on that. But for the most part, you know, the wafer-scale part, I think, was actually the easiest part. The much harder part is, if you imagine, like, your common video card: you've got a GPU on it, and you've got

(31:18):
this, you know, fairly large surface area to power it. You can put big fans on it, you can put power supplies all over the board, you can put decoupling caps and whatever you need to power that. When you have all the dies on a single wafer, you've only got the vertical space above the reticle and below the reticle to power and cool it. So you've got to imagine, you've got to power and cool

(31:40):
a GPU's worth of chip, but only in this tiny little volume above and below where that reticle is on the wafer. You don't send the power in from the sides? No, it comes in vertically. Oh, and you can look on our website; we show block diagrams of power planes and cooling planes and things like that you can look at. Yeah, it's crazy how, even from Nvidia's announcement, like, a

(32:04):
month or two ago, most of the challenges weren't in increasing the chip performance itself but everything that goes around the chip: the NVLink interconnects and memory bandwidth, and actually even memory allocation to store all these weights and move them around between these different GPUs.

(32:24):
It's a hard problem, and the mechanicals alone... our chief mechanical guy won an award, from the French I think, I forget what it's called. There's only like a couple hundred people that have ever won this award. For the mechanical design on our chassis, because it's so unique. And, you know, it's a good design. It's cool. Yeah.

(32:51):
So one thing I'm kind of curious about: you mentioned that there was a lot of paperwork to ship the chips to the Middle East. I don't know for certain; I'm an engineer, and I'm low level, so I have absolutely no idea. I just know that, if you look at the government, you have to get waivers and things for anything you want to ship over to the

(33:12):
Middle East. It can be done, and there are countries you can't ship to, but the Middle East is one where you have to get waivers and things. Yeah, because what I want to kind of touch on is, I'm just sort of curious if you've heard anything about, I think the US has their CHIPS Act, where they have been trying to bring a lot of chip manufacturing into the US. I think TSMC

(33:32):
built a plant here. I know they've been giving a lot of money to a bunch of different companies. Have you noticed anything? Since, you know, I'm a little bit farther away from hardware than you are, I don't know if you... We hear stuff, I mean, but none of it actually relates to our business. Okay. At the moment, and it won't for a really long time, because it's a really

(33:54):
hard infrastructure to build out. Not only is it special, like, you can build a fab, but, you know, one of the things that TSMC has is they've got packaging companies and parts supplier companies and testing companies and all this collateral business around their fabs. You don't just build a fab, because then you have to ship stuff here and there to get it packaged, to get

(34:18):
it tested, and that takes time and money, and everybody wants things really fast. So TSMC is truly optimized, in terms of they have a whole infrastructure, much like Silicon Valley is for startups and things; you've got a whole infrastructure in place that's really hard to replicate. So they're going to have to build fabs and they're going to have to grow their own infrastructure around them, all the other hard parts, which are, you know, packaging and testing.

(34:42):
So imagine you were, like, a government official in the United States and you were trying to bring a TSMC-like company here, or that infrastructure. What do you think you would do? Because I think a lot of people may be listening to this, I mean, hopefully some government officials are listening to this podcast, and they're thinking, you know, what should

(35:06):
be done? Like, what would you do if you had a magic wand, and you had, let's say, unlimited funds, and you were trying to bring some chip manufacturing into the United States? Like, a lot of the design is done here, but a lot of the manufacturing is done overseas, right? Right, right. Like, you know, what would

(35:28):
you do to bring that here? I'm the last person to be an expert on this, so the thing I say is going to be just totally off the cuff; I have very little expertise in this. I mean, if you had infinite money, you'd just build it and let people use it for free, right? So the problem is exactly that you don't have infinite money; somebody somewhere has to see some return on investment for building this out. But I think that in a certain sense, right,

(35:52):
like, the money part aside, you could give me a billion dollars and I would have no idea where to start, right? I would just start asking around, like, you know, guys like you, like, hey, where do I start building this? Because I just would have no... You would find experts, right? You'd find experts to find experts.

(36:14):
Exactly. So I think that in Taiwan they have a lot of people who know what they're doing when it comes to building the chips, and I don't know that we have that skill set here as heavily in the United States. I think you'd have to throw an enormous amount of money at it. Yeah, there's that aspect to it, and, you know, it's like any kind of technology you

(36:41):
want to succeed, take solar panels for instance: you have to put enough money into it so the product reaches an economy of scale to where it sustains itself, right? And that's a really hard thing to do with chip manufacturing, because the investment is really large and then you have to have the throughput to sustain it. So in a certain sense, in

(37:04):
order to do that, if you go back to the earlier thing, you'd want the fab, but then you would also want to have all of the things around the fab. So what are the types of things that you would maybe need around a fab for a steady supply of things? I know one thing is ASML, they have the lithography machines, and I assume that's one thing, like the

(37:27):
only company that makes that. Yeah, and there are a bunch of others that you need, too. You know, I think they're going about it the right way; they are encouraging companies like TSMC to build fabs here and giving them breaks, tax breaks and whatever, in order to sort of

(37:47):
seed that industry. But I mean, you've got, you know, engineers and technicians that you have to train, you've got other infrastructure that has to be built, like I said, and you just have to... you would have to pay attention to all those things. I would say, make sure the soil is fertile, you know; don't just throw a fab in the middle of the desert because

(38:08):
there's sand there, right? You've got to make sure that there's going to be, you know, housing, and engineering schooling; you've got to make sure there are graduate students who can do the research. You've got to provide all this stuff, all this fertile ground for the seed to take root, otherwise it's just, you know, Congress throwing money into a hole without

(38:32):
taking care of all the stuff around it. Yeah, and I think that's a really important part, because I think it's very easy, when all you have is money, to think money can solve everything. But in a certain sense, the money is useful, but all these things that you mentioned, you know, the right people, the right infrastructure, having

(38:54):
the right location... Like, for example, if there was a fab here, let's say in the Bay Area, I think a lot of people would maybe want to go work at it, right? But if you're going to build it in, let's say, I don't know, rural Alaska, where it's really cold so you don't have to worry about cooling, right, but, you know, it's dark like four months out of the year, and then in the

(39:19):
summer the sun never sets, so you might have to really do some convincing to get people to move there. Yeah, and it's even more than that; there's a certain serendipity to it as well. I mean, people have tried to replicate Silicon Valley in other places, and my thinking is one of the reasons it's really hard is, you know, there's a critical mass of talent here,

(39:41):
such that if somebody takes a job at a startup and the startup fails, they can go to another startup, and another startup, and another startup, or a big company for a while and then to a startup. Whereas in a lot of places, if there aren't enough startups for them to risk their career on, and they go to a startup and the startup fails, they don't get to go anywhere else; there isn't another startup

(40:03):
around the corner. Yeah, right. And so that critical mass just happened to build up organically here, and it's really hard to make something like that just happen; there are, I don't know, kind of random things that have to happen to make that work that you might not be able to predict. Yeah, that's very true. I mean, I think in the Bay Area there are so many smart people

(40:27):
highly concentrated in this area, and I think that's due to a ton of reasons, right? I mean, Stanford's right here, Berkeley, and now all of the big tech companies; you have all the money, it's here, right, like all the tech money. Yeah, all the tech money. Right, and you're all within a pretty small area. I mean, you could probably throw a rock and hit a software

(40:53):
developer in this area. Sure, sure. Which, you know, is bad for the software developer that just got hit by the rock, but it's good if you're looking for a co-founder or you're looking for money, and I think it makes it a really, like, smarter place. Sure, absolutely. And starting with, you know, Fairchild and National

(41:17):
Semiconductor, way back in those days, they had a ton of really smart engineers that came here and settled here, and it just grew from that. And I know tons of stories of people who've been here for a long time who talk about, you know, "yeah, my dad invented that, and as a kid I got to play with it," that kind of thing, and that's not true everywhere. Are you originally from

(41:38):
this area? I'm from Rochester, New York. Okay. And then college at Northwestern in Chicago, and then I did several years at Raytheon Company on the East Coast. Raytheon is a military... well, they're kind of a funny vertically oriented thing; they had, like, Beech Aircraft, they owned that, and the Amana Radarange came out of them. It's called a radar

(42:02):
range because some engineers put their lunch in the radar transmission cone, and when they took it out they realized it was warm. Why is that? Well, it turns out the radio waves will stimulate water molecules, and that causes them to heat up, and lo and behold, Amana was formed and the Radarange was born. And there's a bunch of... and then publishing companies, but primarily

(42:27):
military stuff: components for planes and things, and radar, which is where I kind of cut my teeth, doing radar. I guess, what made you want to leave that and come to Silicon Valley? I wish that I had come to Silicon Valley right out of college, I really do, because there were so many startups. At a company like Raytheon, it was great working there because they're big enough that

(42:49):
if you're any little bit motivated, you can take the time to learn technology and things, and that's really awesome. So I learned about radars, I worked in Virginia Beach on a radar for a long time, I wrote operating systems, did board design, all kinds of really fun stuff. But Silicon Valley had this startup mentality,

(43:13):
and this is purely a financial bit, because people would come here, and at startups you'd get some giant chunk of shares to begin with, of ISOs, and they would keep those really low until right short of IPO, and then the company would IPO and they'd be worth... you'd be done, right? In a heartbeat, you'd be done. It's much harder now. When I came here, you know, the rule was

(43:38):
one startup might pop really well out of, say, ten; two or three might kind of linger a while, and three or four would just kind of fail. But you knew in a couple of years, two or three years, whether the startup was going to fail, and you could go to the next one. And if you happened to place your bet on one of the big ones, the ones that popped big, you're kind of done,

(43:59):
and that's kind of a roll of the dice, whether that can happen, because I've been at startups that I really liked, that I thought were going to do well, and they didn't, who knows why. But if you try a bunch, eventually, hopefully, you hit on one that happens to do really well, and if I had come out here earlier, I think it would have increased my probability of not having to work at an earlier age, which is kind of a golden thing, I think. So how long have you

(44:25):
been at Cerebras? I am employee number 30, and I've been there seven years now. Seven years. So you said within two years you'll probably have a good idea of whether or not a startup is going to succeed. Well, let me... when I came to the valley in '92, it was what I was describing in terms of startups. Now it's nothing like that; the valley's been through a couple of expansions

(44:49):
and contractions, and VC money is much, much scarcer in a way. The probability that a company is going to IPO has gone way, way down; more companies are being acquired. And, you know, if you go back to '92: you go to a startup, you get

(45:11):
a hundred thousand shares, the company goes public at ten dollars a share, they go to a hundred dollars a share, and you're good to stop working. That mentality doesn't happen here now. You're not likely... you know, maybe one in, I don't know, 50 or 60 will pop and

(45:33):
do something interesting, and the rest will get acquired or fail. And now we have companies and venture capital and stuff gaming the system; they don't necessarily care that the fundamentals are good. If they can cash out early, regardless of what the fundamentals are, they'll do that, right? So, taking a step back, you mentioned Cerebras doesn't make

(45:59):
as many chips on the scale of, like, Apple, Nvidia, AMD; for TSMC you guys are a relatively small customer, and you guys focus on a specific niche, specifically training. What do you think the future is? Is this your retirement plan? When do you think Cerebras will have a big enough market

(46:21):
to say, okay, let's IPO? Who knows, who knows. I mean, you know, I think a CEO made a really wise comment: every CEO of a startup should have a price in the back of their head that they will sell the company at, so if somebody walks in and writes a check with that many zeros on it, they'll take it. So you do what the market tells you to do, right? You do what's right. You know, in order to

(46:44):
IPO you need to have the fundamentals down, I think, and I'm still a believer in that: you need to have some number of quarters of profitability, you need to be able to look to the future and see that you're going to be profitable, you have to have more than just one customer. There's a bunch of fundamental things I think you should have in order to IPO and not tank, right? You want to

(47:04):
IPO and stay sustainable. So what is that number for Cerebras right now? Yeah, you guys were maybe, at your last funding round, valued at something like two to four billion dollars or something. You would have to be psychic and look into the CEO's mind to figure out what that number is. I'd just have to add a couple more zeros to that. But if you compare that to Nvidia's market cap,

(47:28):
they're like a thousand times bigger than Cerebras. I wouldn't even begin to guess; that's just not something I can speak to. I think that means there's a lot of room to grow, though; the market's big enough for a lot of players. Actually, I believe that's fundamentally true: the market's big enough for a lot of players,

(47:49):
and I certainly don't begrudge Nvidia their market share. They were clever enough to do CUDA in the first place, not even aware that this was going to become a big thing. They just saw that, hey, this is an interesting thing; if we release it, it's kind of like a, you know, Field of Dreams thing: build it and they will come. And they did, they came in enormous numbers. But I think we were talking in the past, and

(48:13):
you mentioned that you can run CUDA code on Cerebras hardware. No, no, oh no. Okay. We've got our own custom assembly. Oh, I see, okay, gotcha. But, like, if I had already written things in CUDA, I could convert them over to... No. So, essentially, models are described...

(48:37):
It's interesting, because, and again, I'm not an ML guy, right, I'm just kind of associated with ML guys, my understanding is that models are described in TensorFlow or PyTorch as a series of layers, with certain operations that happen at those layers. And really, the very top level of that is describing a network, which can be abstracted and broken

(48:59):
down into smaller pieces, which are then mapped to kernels that do those operations and are balanced on the hardware that you're running on. Like, for instance, a big part of the problem is, if you've got a layer that is twice as fast as another layer, right, it's going to finish its work before the next layer is ready for it. So maybe what you do is you use half the cores for it,

(49:27):
and you make them do it twice as slowly, so that when that layer is done it exactly matches the throughput of the layer after it. And so this is a job that software does in balancing kernels: not just the selection of the kernels, but actually how many of them and where you place them, so that the data flows maximally through the thing with as few

(49:48):
bottlenecks as possible. So I know Nvidia has had CUDA for a while, but I think that's still kind of a little cumbersome to work with for software developers, and I was reading about Bend, a new high-level, Python-esque programming language, where you describe this concept of

(50:13):
threads and it just handles all this parallelization in the background for you, not just with a single GPU but across, like, massive workloads. So what is, I guess, Cerebras's equivalent to CUDA, and how do you guys make it simple? I know you aren't an ML guy, but if you had to, like...
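The layer-balancing idea Matt described a moment ago, giving a fast layer fewer cores so its throughput matches its neighbors, can be sketched roughly in Python. This is purely an illustrative toy, not Cerebras's actual placement software; the function name and the work numbers are invented for the example:

```python
# Illustrative toy, not Cerebras's real placer: balance a pipeline of layers
# by giving each layer cores in proportion to its work, so every stage takes
# roughly the same time per sample and no stage starves the one after it.

def balance_pipeline(layer_work, total_cores):
    """layer_work: ops per sample for each layer. Returns (cores per layer,
    time per sample of the slowest stage, which sets pipeline throughput)."""
    total_work = sum(layer_work)
    # Proportional allocation, with at least one core per layer.
    cores = [max(1, round(total_cores * w / total_work)) for w in layer_work]
    # Stage time if each core retires one op per tick; the pipeline as a
    # whole runs at the speed of its slowest stage.
    times = [w / c for w, c in zip(layer_work, cores)]
    return cores, max(times)

# A layer with twice the work gets roughly twice the cores, so the stage
# times come out equal: cores == [10, 20, 10], every stage takes 10 ticks.
cores, bottleneck = balance_pipeline([100, 200, 100], 40)
```

Halving a layer's cores makes it run half as fast, which is exactly the trade described above: you deliberately slow the fast layers so throughput matches end to end and data streams through with as few bottlenecks as possible.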

(50:34):
Well, we have, I mean, just like them, we have kernels that map to our assembly code, we have operations that map to our kernels, and you just have to do that mapping. And I guess the difference is that our assembly code is not open source; it's not something you can just look at and write your own code for. All of it's now controlled by

(50:58):
us. At some point, probably, they'll have standard developer kits out there; I think there is a standard toolkit that you can use, I think you can actually just become a developer and download it, but I'm not certain of that. But anyway, you know, I mean, they're not going to stand still; they're going to optimize, and that's great for them and the whole market. Making programming easier is a job everybody is kind of working on, I think. So what's

(51:23):
your stance on the whole closed source versus open source question, both with, I guess, the software stack and the hardware architecture design? Both have their places where they're useful. I mean, you know, let's take a more abstract question, like C versus Python, right? Two very different languages, but they serve very different purposes. There are some things that I

(51:48):
would absolutely want to code in C or C++, and there are some things that I would absolutely not want to code in C++ and would want to do in a scripting language like Python. And so it's the same kind of problem: there are things that you want to do with one tool and there are things that you want to do with a different tool, depending on what your use case is and how fast it is. And so open source, great stuff, it has enabled, you know, let's just take an obvious example:

(52:15):
you want to run a server, and you want to run software on a server. Well, let me tell you, they're all running some kind of free version of Apache and Linux on them, and if you want to run compute, it's probably all running Linux, and there'll be some layers of open source code on top of that to run the VMs or what have you. And so that's

(52:36):
all really great. But, you know, a long time ago I worked at Rendition, a graphics company, really one of the early graphics companies, and back in those days drivers were incredibly difficult. People probably less frequently hear the acronym BSOD, or blue screen of death; Microsoft Windows would crash all the time with this thing

(53:02):
called the blue screen of death, and it would just be a crash, and it would be fatal, and you'd have to reboot. And they got an enormous amount of heat for that, and 99% of the time it wasn't anything that Microsoft wrote; it was a third-party driver that did it, your sound driver or your graphics driver or some other kind of driver. And so they created this whole

(53:25):
certification methodology, called WHQL, to help certify drivers, so they wouldn't be shipping things... so that, ultimately, computers wouldn't crash, right? And that's kind of the goal: even though it's not their software, they want to enable it, so you have to lock down some stuff to prevent

(53:46):
instability. I mean, that's kind of where closed source stuff happens: you lock it down to prevent instability. I mean, that's one of the big reasons why I've loved the Mac for a long time. Yeah.
I use the Mac because it's got Unix underneath, you know, and I can open up a shell and do pretty much all the same stuff that I can do in a Linux shell. And even though Microsoft has tried to create a shell to do that, they have some very fundamental differences, like piping:
(54:12):
you know, piping in a Linux shell is piping between multiple commands, but in a Windows terminal window it's not; you can't just pipe stuff the same way you would, you know, it says this command doesn't accept that argument, and it's really like everything that you want to do you have to relearn for their shell, right? And I'm not saying it's wrong or bad; I haven't used it

(54:34):
that much. I think it's just that most computer science students learn to work with Linux, because it's free, and then it transfers easily to Mac. That was Apple's brilliance. I remember when the Macs first came out, which is dating myself; I remember seeing that printer, I remember seeing a word-processing document and it had pictures in it. Now,

(54:59):
it may be hard to believe, but that was a really hard thing. Like, when I started my career, you hand-wrote your document, you handed it to the admin, and she would type it up, right? And then if you had figures, you would reference the figure and you would have an attachment with all the figures in the back, and they would staple it to it and run it through a copier,

(55:20):
right? And when Macs came out, engineers could then write their own documents and embed the figures right in the documents; you'd see it right there, and it would print out on these crappy little dot-matrix printers, but at the time it was like, oh, it's all embedded. And so the admins went from one admin per five people to one admin for a whole department,

(55:41):
because all the engineers were typing their own documents and going through cycles of editing much, much faster. And so the whole sort of admin pool got, you know, kind of decimated by that. So computers took their jobs, pretty much. Yeah. But I mean, think about, you know, typewriters, right?

(56:04):
Imagine you're an IBM Selectric vendor, right? The moment computers came out, all the typewriters went away too. I'm sure it wasn't overnight, it took a while. It was pretty close to overnight, I mean, a couple of years; once computers like the Mac came out, all the typewriters went away. Do you ever worry that with these increasing technology gains, with AI, that

(56:31):
a lot of people are just going to lose their jobs? Like, it could be all of us. That's a great question, and that's kind of one of the things I was hoping to talk about on this podcast: some of the more optimistic views on where AI might be taking us. And one example that I used just the other day: a while back, they did this paper where they got together a few dozen radiologists,

(56:56):
and they were looking at brain scans, trying to identify tumors, and then they trained an AI to do the same job. The radiologists, I think, got a score of like 96%, but the AI got a score of 95%, right? The difference is that radiologists take a long time to train and have a very large salary, and they need to be on staff at every hospital, and at small hospitals

(57:22):
that's not possible. But if you have an AI that's almost as good as a real radiologist, every doctor everywhere in the world can run a copy of that AI, and in zero time and with very little energy and cost can have an expert view of an x-ray. And we're talking doctors in, you know,

(57:44):
poor countries, doctors in rural neighborhoods, small clinics, can get access to world-class radiology interpretation. Yeah, and also I think that would just help doctors and radiologists save time as well, right? Because before, if it takes you a lot of time to, I don't know,

(58:04):
I'm not a radiologist, but I would assume that it probably takes some amount of time to just look at the image and try to make sense of it, right? So I don't know how long that takes, but maybe a radiologist can do it in, let's say, X amount of time. Yeah, minutes. Yeah, minutes, probably, maybe tens of minutes if it's something complex. But I would think that the AI

(58:27):
might be able to do it within seconds, or milliseconds, as well. Sure. And so it won't eliminate the need for a radiologist, right, because there'll be some questionable ones where the AI is a little uncertain, and so you kick those back to your team of radiologists. You know, and also I would assume that radiologists do more than just interpreting images, because

(58:50):
they have to go and, you know, run the machinery, make sure they don't burn you with ultrasound or give you cancer from all the x-rays, right? I'm not sure that's the radiologists; I think the nurses in the radiology lab are doing that. Yeah, I think that's true, although my understanding from talking to some radiologists is that there is quite a bit of

(59:10):
training, okay, yeah, and they actually run a lot of that equipment. Yeah. So I think you're in danger of devaluing certain jobs because they can be done better by AI, right? Yeah. And then, if nobody goes into radiology, you lose the expertise. Right, right. That can be problematic. So, I heard this interesting

(59:32):
quote the other day. I don't believe that AI is going to replace jobs wholesale the way everybody fears. The quote I heard was: it won't replace jobs, but a person using AI will replace a person who doesn't use AI for the same job. And I think that's

(59:53):
true. Yeah. AI has, I think, today, its best use case in being supplemental to work that people already do: making them more efficient and maybe taking some tedious parts out of their jobs so they can focus on the more creative and interesting parts. And that's my hope. Yeah, I think that is probably true for a vast number of jobs, although I think that,

(01:00:20):
depending on the job, it may just completely eliminate it. So, for example, sure, not necessarily all jobs, but I think of truck drivers, right? If your entire job is just driving from point A to point B, or you're an Uber or taxi driver, right, and you've got

(01:00:41):
self-driving cars, then that whole population of people may be out of a job and be replaced by, like, a Waymo self-driving taxi, right? Maybe, maybe at some point, yeah. But, you know, the number of deaths on the road will go down and things like that, because I think AI will ultimately be better at driving. And maybe you'll need to have a truck driver in there,

(01:01:05):
in the vehicle, just not necessarily driving, and maybe the truck driver will have a train of two or three vehicles behind them, which is something that I think has been in the works from years ago, and that truck driver can then do something more productive during that time when they're manning the truck. Yeah, I think that's true,

(01:01:28):
and I think in the short term we'll probably definitely see something like that. I'm just thinking, let's say not in the next five or ten years, but the next 50 years or something like that. I think we'll probably agree that 50 years from now, driving might even be like a lost skill.

(01:01:49):
Practically, all bets are off at that point, right? You know, so what do you think, if the AI becomes smart enough to, let's say, replace all the jobs, what jobs, or what do you think people will do 50 years from now, or 100 years, whatever,

(01:02:10):
sometime when the AI is able to be smarter than any human, to the point of being like a god, which I think could be the case in the future? Maybe. Let's just go there a little bit, and I'll outline a couple of really interesting scenarios. One is, everybody will then be free to pursue whatever they want. Like, imagine, AI will

(01:02:36):
probably be running universities; imagine people learning to get degrees in philosophy and music and math and science and all kinds of really interesting things that they'd have time to study and pursue as hobbies, rather than clocking into a nine-to-five job. You know, maybe somebody wants to spend their time making music or something like that, and all their needs are taken

(01:02:58):
care of, and they can just make music, if the AI isn't making it for them. I think the biggest fear, though, is that things get done because people have influence and money to do them, and traditionally, in our species, the people with the influence and money have always wanted

(01:03:23):
to control the technology and the things it can do, and so you get a very stratified society. And so if AI can do everything, and say a lot of people are out of work, who's going to provide for them, right? Like, suppose you need a robot, right? The people making the robots, with their energy and time and influence and whatever counts for currency at the time, are they

(01:03:48):
going to just give you that robot, or are they going to demand something in return for that robot? Technology has traditionally found ways, always found ways, to stratify society, and so we as a species have to find ways to counteract that tendency, that feeling that everything is so

(01:04:11):
scarce and so dangerous that we want to protect ourselves by accumulating as much as possible, having as much as possible, having as much, let's call it defense, whether that be money to hire people to literally defend you, or big houses

(01:04:32):
where you don't have to talk to people, or any number of things, or enough financial gain to hire lawyers, or whatever. People have always been really afraid, and so they accumulate wealth and power to defend themselves against things that aren't really zero-sum games, against things that as a society we can figure out. But deep in our reptilian brain we're programmed to believe that things are scarce and that there's danger

(01:04:56):
around every corner, and that leads people to be very defensive and to want to hoard things, and by hoard I mean wealth and power and other things at the same time. So we have to crack that nut in society too, and that's a nut that philosophers have been trying to solve for millennia. Yeah, it's interesting because you mentioned

(01:05:20):
wanting to hoard wealth and power and whatnot, but in a certain sense I can envision a world of abundance, complete abundance, with AI. That's a Star Trek world, right? Yeah, where everybody has all their needs met, they can do whatever they want to do. And I think it's really hard to

(01:05:43):
make sure, not to say that abundance would necessarily be distributed equally to everybody, but to potentially raise the base quality of living for everybody. Yeah, and that's kind of the thing. If you can find a way to... it's not enough to raise the

(01:06:04):
quality of living of the lowest 50% by an increment while the quality of living of the highest 5% gets raised a million times that. You could say, well, healthcare has gotten better for everybody on the planet, it's gone up for everybody, therefore let's not begrudge the

(01:06:25):
economic discrepancies that are occurring for other reasons. But that's the question: should there be billionaires, should there be trillionaires? If it's okay for people to accumulate wealth and power, is it okay for one person to have it all? If you take that to its ridiculous extreme, is it okay for one person

(01:06:49):
to have it all and everybody else is poor and serving them? Yeah, I think we'd probably agree that taken to that extreme, if one person just had all the wealth, that would not be okay. And I think if that happened, if there's one person with everything and 99 people with nothing, they would just go beat the crap out of that person. But

(01:07:12):
the counterargument is, if that one person controls all the robots, all the wealth, all the technology, then the 99% can't do anything, and that's the problem, right? And we see societies throughout all of time being that way, from kings to popes to oligarchs to autocrats to

(01:07:32):
we've seen societies go through that same struggle all the time, where one person makes a grab for as much power and wealth as possible and subjugates the rest of society. Look at North Korea, the perfect example of that, and North Korea has been that way for 50 years, right? It hasn't changed, and I don't see it ever changing. I mean, the people are not going to rise up. I think authoritarian societies are awesome when they work well. You're trying to say a benevolent

(01:07:59):
monarch is better than, yes, any other system. The problem is you can't always pick the benevolent monarch. Yeah, you have to find some way of succession. Like, imagine, here's the scenario: imagine North Korea, and imagine the whole world that way, the whole world, right? There's no democratic country trying to balance that out or trying to reverse it. And that was the

(01:08:23):
whole fear, I think, of the Red Scare, the communist scare: if the whole world is that way, everybody's repressed, and there's no way to turn it around. If you've deeply programmed everybody, you've given them propaganda all the time and beaten out of them any thought of resistance, and the whole world is that way, what can you do? So that's

(01:08:45):
why this enormous amount of resources was put into going up against communism, at least according to some political scientists, because they didn't want the whole world to be that way. I guess democracy prevents the worst-case scenario, but there is a lot of friction, and especially with our capitalist version of democracy the income inequality has just rapidly expanded, especially since the pandemic. I think the amount of wealth that was created

(01:09:10):
in the top 1% was in the double-digit trillions. It hasn't just been since the pandemic, although if you just look at a short time frame of two, three years, their wealth doubled if not more. We should get back to AI, but along those lines, there was a time, as I understand it, when the multiplier between the highest paid employee

(01:09:35):
and the lowest paid employee was like 30, so highest to lowest. Now the multiplier between the average employee and the highest paid employee is 300 to 400. So it went from a highest-to-lowest multiplier of 30x to an average-to-highest multiplier of 300 to 400x. And yeah, I

(01:09:57):
do consider that a failure of our current economic system, but I'm not a political scientist, so... Same. But the reason I was talking about that is because that is one end of this extreme, where the other end is: yes, things have gotten way better, healthcare has improved, we have access to amazing technology. And you've been here in the Bay Area since '92, so I can't imagine

(01:10:24):
the amount of progress you've seen here, and how quickly things are moving now. And to give the billionaires credit, they're trying to make access to these technologies as easy as possible. I mean, GPT-4o is now free for everyone everywhere in the world, same with Google's

(01:10:45):
Gemini. So I don't know if we could have imagined a time 20, 30 years ago when we would have had this relatively superintelligent being in our pockets for free. The vast majority of people working have no need for anything like that. You don't think it's helpful day to day?

(01:11:06):
Not to the maid cleaning houses, or the garbage collector picking up garbage, or the farmer in the field, none of that stuff. It might be helpful to the corporation that has farmers in the field, but not the guy out there picking, not as an employee, but just as an individual. I think I've found myself turning to these LLMs more often than going to a search engine, so just for my personal

(01:11:33):
information, I think these are incredible tools. Yeah, but I think most people don't care, they're not searching for stuff, right? That's one of the conceits of tech people, and I have it too: we think that because it can make our life so much easier and better, why isn't it making everybody's life so much easier? But it's

(01:11:57):
really hard for us to put ourselves in the shoes of, you know, the plumber. They don't care about that kind of thing; it doesn't help their job hardly at all. So what do you think is the point of life then? What is the purpose? Oh man. And can LLMs help? Well,

(01:12:18):
let's get there, because that gets us into some things I really do want to talk about. My personal philosophy is that the point is to decrease suffering in the world. That's part of the point, at least; that's one of the edicts I go by, because it doesn't require any kind of supernatural belief, it's just something you can see and act upon. And I think there's a lot

(01:12:39):
of opportunity for AI to help decrease suffering, and one of the ways you do that is by trying to filter out information that is detrimental to the advancement of society. I've talked about a couple of ideas at the meetup on how to do this. One is, suppose you trained

(01:13:05):
an AI to recognize when someone's trying to convince you of something and they're using either logical fallacies or some kind of cognitive bias trick. Maybe we can train AIs to detect, hey, what is the real argument here that this person is making,

(01:13:26):
and what is this sideways attack they're making to try to win you over or get you to believe or follow them, that has absolutely nothing to do with the things that you want in your life? It's just a demonization, or an inciting of fear, or some kind of anger,

(01:13:48):
to get you to believe something, so that they can convince you to vote the way they want you to vote, or to act how they want, to support who they want you to support. And I can see a future version on a phone call, similar to how in Gmail you get an email and it marks it as spam: if you're talking to someone and they have a conversational pattern where they're trying to manipulate you, it marks that, okay, this is being manipulative, watch out. Yeah, no, exactly. And really,

(01:14:14):
the goal isn't necessarily just to tag people and punish them. The goal is to get people better at recognizing it, so that they can support the people who are less disingenuous, that are more straightforward in what they're saying and how they're presenting their argument, and in a way that doesn't trigger some kind of anger or some kind of

(01:14:43):
disgust or something like that. Disgust is a really powerful way for a politician to demonize a group of people, and there was a recent study that talked about how just calling someone a name, just saying something derogatory about them, makes it more likely that the people who've

(01:15:04):
heard it will believe something worse, right? So if you say, oh, that guy... and it may not be true, it doesn't even have to be true, it just, because of our cognitive biases, primes our brain to associate that person with something negative. So the next negative thing you say, we associate with them, and you can just ride that all the way to the point where you're hating somebody. And I think that's a trick that evolution has

(01:15:29):
played on us, because it favors treating everything in a negative way, because you survived better. But we're not animals anymore, we don't need to do that, but our brains still don't know that, right? So if the grass is rustling, 99.99% of the time it's not the lion, right, it's just the wind in the grass. But if you always assume that it's just the wind in the grass, the one time

(01:15:53):
it's the lion, you die and your genes are out of the gene pool. And the next guy that assumes it is a lion more frequently will survive more frequently, and those genes will carry on, and you'll have an evolutionary tendency to give more weight to negative things than positive things. That's kind of the way I think our brains

(01:16:16):
are coded. So you talked about a bunch of things that I wanted to get to, but one of those important conversations, which is important for everyone I think, is the upcoming elections, and how people can use LLMs to maybe generate more propaganda, do mass manipulation at scale,

(01:16:37):
create misinformation, because it's so easy to generate content, both text and images, and now soon maybe video footage too, and we won't even know what's going on, because they will send it directly through the platform to the person receiving it. And that was one of the takeaways from the whole Cambridge Analytica thing, I think:

(01:16:59):
I think they discovered that upwards of 3 to 4% of the population is prone to believe in conspiracy theories. Back then they were saying, well, could it be that bad, but we now know that it's actually worse than that, that up to 20 to 30% of the population can believe in conspiracy theories without any problem. We also know that it only takes one

(01:17:20):
they also proved that it only takes 1 to 2% of the population to swing an election one way or the other, and they practiced it in a number of countries, including Brexit, including the 2016 election. And while I don't think you can say that it made the election, that if it had not been used the election would have gone the other way, there are a number of things that you could say that about,

(01:17:42):
and this is just one of them. In this upcoming election, I'm not seeing a lot, and I don't know why. I'm hoping it's because there are more guardrails that technology companies are putting on to prevent that kind of behavior, and I'm hoping it's not because they've just gotten better at hiding it. But we do see on social media platforms just an enormous amount

(01:18:04):
of fear and rage and hatred, all these things that are stoked partly by information that AI could classify as, hey, look, this is not in your best interest, this is not in our species' best interest, let alone our country's or our democracy's best interest. Let's say you were a narcissistic megalomaniac, how would you take over the country if you were going

(01:18:30):
out for elections as a narcissistic megalomaniac? Well, I'm not going to share all the tricks and tell you that, but I think maybe it's a way to prevent it. Yeah, and I would encourage your listeners to look at this and maybe look for ways to use it to prevent it.

(01:18:52):
But if you look at Wikipedia, there are some 230 cognitive biases listed there, and marketing people have been really good at getting you to do something through your cognitive biases. For instance, think smoking, right? Smoking was made cool by having movie stars in movies do it, and having television ads saying it's really cool and all the cool kids are doing it

(01:19:15):
and stuff like that. They had marketing, you know, candy cigarettes for kids and whatnot, to make it cool, and that was all marketing. Well, to be fair, smoking also just sells itself, it just feels so good in the short term. And the long term? Why would you even start if you knew that it was a long-term detriment? You say it's a long-term detriment, but I eat, like,

(01:19:40):
a donut because it tastes good, but I don't know if you can fault me for that. Yeah. So, but anyway, my point is that these marketing people have long known that there are cognitive biases. If you look at some of the ads from the 50s and 60s, you're selling a radio and you put a sexy woman on it; look at the airlines, the way they used to dress stewardesses, right? They know that these cognitive bias things can influence you in favor of what they're selling or pitching or

(01:20:04):
whatever. But imagine an AI that uses not only that particular cognitive bias but all of them. You can train it on not only all of them, but they have a profile of you, so they know what works best just for you, and they can emphasize the ones that will convince you of something, stoke your fears, your hopes, or whatever, playing on any possible cognitive bias that they can find that does it,

(01:20:31):
and convince you to do literally whatever. I mean, convince you to storm the Capitol, right, to believe that the election was stolen and that you should go fight for democracy. I believe that a lot of people really believed that the election was stolen, despite enormous amounts of evidence, because the tribe they belong to and the

(01:20:55):
leader of the tribe they belong to told them that it was true, and they're loyal to their tribe. A lot of them did that in good faith, believing that, hey, this is wrong, we need to turn this around. So we need to find ways to ferret out that kind of truth,

(01:21:18):
or the possibility of falsehood in those narratives. And AI is uniquely positioned, like I said when I was talking about cognitive bias, to detect when an argument's being made and an ad hominem attack or straw man attack or some other kind of enraging thing is being made that isn't really the point, but it does get you inflamed. And rather than just say these

(01:21:42):
people are bad, they're doing these things, I would say, hey, it's the population that votes for somebody: you guys need to learn how to recognize this stuff so you're not conned, right? There's a science fiction story where one of the statements the guy made was, people will believe either what they hope is true or what they're afraid is true. And oftentimes the truth is neither

(01:22:05):
of those two things, and maybe somewhere in the middle. But if you can stoke their fears and hopes, you can get them to believe what you want them to believe, and that's the promise of a lot of religions, a lot of politicians: they hope it's true that this is happening, or they're afraid it's true, so they do that. And I would like to see

(01:22:26):
people learn how to recognize what the truth really is, and maybe AI can give them some guidance as to how they can recognize that, what to look for. If you had a speech, for instance, you could mark it up, or even a website: you point a URL at the website and

(01:22:46):
it would mark up all the things that were cognitive bias arguments, and even just the difference between editorial and news, right? News is a presentation of the facts and the things that are happening, things that you can prove or disprove: there was a hurricane, there was a riot, there's a fire. All these things may

(01:23:06):
be false, but they are at least things that you could prove true or false, as opposed to editorial, which is, you know, there was a fire because PG&E is a terrible company, or something crazy like that. Those are editorial. And if you could run text or speech through this AI filter, and you could see that 90% of it is things that could be

(01:23:30):
proven true or false, you'd say that's news, whereas if 90% of it is things that are just opinion, or something that's neither provable nor disprovable, then it's editorial. You know, it started way back in the days of Rush Limbaugh, where he was presenting as news not only editorial, but whenever

(01:23:51):
he was taken to task on things that he said that were wrong, he said, I'm just a comedian, this isn't news, why do you guys take it so seriously? So under the guise of being a comedian, he would be presenting things that people were taking as news which weren't. And we saw that on Facebook as well: a lot of people were spreading news, and I air-quote the news, that they

(01:24:14):
thought was really news, but it really wasn't, it was editorial or some other kind of opinion that they would claim is truth when it really wasn't. So you could have an AI try to differentiate between editorial and news, and maybe people would learn to detect what's news and what's editorial and weigh it in a proper way, instead of just going on and consuming

(01:24:34):
media thinking it's all truth, when most of it now is editorial and much less of it is actually news, and news is a much more important way to be informed because it's based on things that happened, facts and whatnot. Yeah, you know, I think this would be a really interesting project to build. So for those that are listening that want a fun little weekend project, I

(01:24:59):
feel like this would be a thing that you could build, maybe a nice little Chrome extension, where it can go and look at all the articles that you're reading; maybe as a bonus it could even help interpret YouTube videos or whatnot to look for the cognitive biases there. I think this could probably be built within a couple of days if you know what you're

(01:25:24):
doing. And if you want some more exposition on this, contact me through Shashank or Mark, and I'll be happy to outline more of my thinking on how this can benefit humanity, actually I would say democracy and culture. You know, we have a ton of

(01:25:44):
obligations in our lives: we have obligations to our family, to our city, to our high school, to our college, to our state, to our country, to the world, to humanity, to nature, and everybody has to balance those obligations out. But I think the obligation to humanity, which is a little different than the obligation to, say, nature, is to try to get people recognizing

(01:26:10):
ground truth in things and being able to move society forward rather than backwards. You know, I agree. I think it's really important that we can all at least get the same facts right, so if people are disagreeing on the fundamental things that have happened,

(01:26:35):
then it's really hard to have a meaningful conversation. I think AI could help get people at least on common ground and avoid biases, because I know for a fact that I am impacted by biases all the time, and it's really, really hard. So

(01:27:00):
you mentioned that a large percentage of the population believes in conspiracy theories, and to defend them for a moment, I think it's really easy to get swept up in conspiracy theories, because the problem is there are a lot of things that I think

(01:27:25):
in the past were told as the truth, but now in hindsight we realize that maybe that was wrong. So to take an extreme example, let's say 500 years ago slavery was maybe seen as a normal thing, and now we realize, okay, I don't think anybody's trying to defend that, right? But at the time it

(01:27:51):
was just a normal part of life and nobody was questioning it. So once you find out that one thing you were told was maybe untrue, then you start to think, well, what else is also not true? Because your bandwidth

(01:28:13):
is limited, right? If you're working 9 to 5, if you're busy with obligations, you have kids, you have to worry about your bills, all these things, people don't have time to actually go and vet whether things are actually true. And if somebody makes some sort of YouTube video or

(01:28:38):
some sort of video saying, hey, I think the government is full of lizard people, I mean, you might start to believe it, right? I think also, way down in our brains, it's more important for us to belong than it is for us to be honest, right? And

(01:28:59):
because survival depends on belonging to your troop, your clan, or tribe, or whatever, and it's not often dependent on you telling the truth, right? So if your tribe says, we're going to kick you out if you don't believe this, you have this tendency to want to believe it because you want to belong. And that's independent of... you know, as a science and tech person,

(01:29:25):
I'm a fact-based person, but facts are oftentimes uncomfortable, right? It might lose you friends, it might lose you family, it might lose you all kinds of things to be talking about facts, where you go, it's not that important for me to take this political stand if I don't get to talk with my aunt anymore, or my uncle, or my

(01:29:46):
grandfather, or my son or my daughter, or whatever. I would rather talk to my son or daughter and participate in their lives than insist on these uncomfortable things. And that's again one of those things: if we want to prove that we're more than just animals, we have to fight against that tendency to put being tribal over

(01:30:13):
more important goals for ourselves, for humanity, and for the planet. Yeah, and I think that hopefully AI will be able to help with that, because life is short, and while it would be nice to go and be logical about everything, sometimes it's just hard and we

(01:30:36):
need to take mental shortcuts. Well, find ways, I mean, people out there, find ways for AI to help with this. And I had this idea of a social media platform where a priori you tell people that half of the participants are avatars for agents of some sort, bots, but they're designed to be affirming, they're designed to be not hostile, they're designed

(01:31:04):
to be interested in you, right? And a lot of people won't care. And if there's a bully or if there's a troll, the AI is designed to engage with them and slowly wean them out of the conversation, try to get them to be better citizens. And the beauty of AI is, trolls always like to have the last word, but AI can always have that word. It never gets tired,

(01:31:28):
it never breaks character, never gets mad, it never gets fatigued. And it doesn't have to, you know, take a big stick and whack them with it; it can be very non-judgmental and really engage with the person to find out why they believe what they believe, and how they can maybe believe something different that will make them more productive in the community that they're engaging with. And that would be kind of like a

(01:31:52):
social media thing where friends and people can get together, and some of the agents will moderate and maybe be able to provide a little bit of lightweight therapy, and actually pull out destructive people rather than just banning them, pull out destructive people and try to coach them to engage in a more productive way. That's a really

(01:32:16):
interesting idea for a social media platform. I also think it would be interesting if you didn't know who the bots were that you're interacting with. So I thought of that question, I've turned it over, and a lot of people have said, well, what if they didn't know who the bots were? But I think trust is paramount in this, and you would have to tell them a priori. If someone feels betrayed,

(01:32:39):
you will never, ever get them back. No, they'll be enemies forever. So, like, do not betray their trust. What I'm saying is, imagine it was kind of like a game, where when you signed up it was super clear, maybe a big banner at the top of the page: 50% of the interactions will be bots, and then you don't know who it is. So

(01:33:03):
it's kind of like a game to try to guess, sort of like a Turing test. So they're doing that. There's a show, and I really don't like this show, it's called The Circle. I saw that, yeah. People in a building, and it used to be that there'd be a couple of people catfishing, they wouldn't be who they said they were, and they would try to get people to believe them, in the traditional style. And in

(01:33:25):
this season, or a recent season, one of the catfishes was an AI, right? They told them that there was an AI, and it would be trying to fool people into believing it was a real person. So nobody knows who it is, they just said one person's an AI. Yeah. So how does this show work, it's like 15 or 20 people, and everybody knows that there

(01:33:49):
are a couple of catfishes. Oh, okay. They know that they're there, they're just trying to figure out who. So are you just talking to, like, a wall? You're texting each other. Texting each other, okay. And there are pictures, and some of them are not real, right? There are pictures of everybody, yeah. And the reason I don't really like it is because, for one, it encourages people to be dishonest about things and it makes that into entertainment. And for two,

(01:34:12):
it's entertainment of the worst of humanity, right? To promote people being the worst of humanity as entertainment feels kind of gladiatorial to me. It's kind of like, I don't know, it just rubs me the wrong way ethically. Yeah, when you said entertainment is

(01:34:35):
the worst of humanity, I don't know if I agree with that, but I do think reality TV brings out the worst in people: it records them 24/7, forces them to be in uncomfortable situations and make mistakes as human beings naturally do, and showcases that on TV for the rest of us to see. And to succeed in this program you have to be dishonest, right, deliberately. And to some degree,

(01:35:01):
I kind of feel worse about the people who watch it, right? Because the people in it know it, they know what they're doing. It's kind of like a boxer, right? The boxer knows he could get beat up and whatnot, but the people deriving joy from watching somebody get beaten to a pulp, I don't know. But the interesting part, where they sent an AI in to pretend to be a human being, it seemed

(01:35:22):
very successful. So, yeah, I don't watch the show. I watched a couple of episodes of that season, because I was like, oh, this is fun. Yeah, it is a Turing test. And the AI was able to construct this persona, and I think it was some kind of an agent running in the background that had some system prompt that said, okay, be a likable contestant in a game show, manipulate, lie, do whatever

(01:35:49):
you can to get people on your side and have them vote other people off, and you've got to stay in the game, make alliances, be non-threatening, throw shade on other people. Yeah, so that was basically the last instruction: don't get disqualified. And it did a really good job. Oh, no doubt, no doubt, because I don't think it's hard to do, I don't think it's hard to trick people, because

(01:36:11):
people inherently want to believe, right, the things that people say. I guess, to be fair, in that reality show setting, everyone was primed to be a little suspicious of everyone. Sure. And statistically, there's like 10 people and one winner, so it's gonna be hard for you to, you know... I think I just don't like... I'm a little squeamish about normalizing that as acceptable behavior, right? You

(01:36:34):
see something in a show and, yeah, it's just Hollywood, yeah, it's just a show, yeah, everybody knows, but it normalizes the behavior so that if you see it in real life, you don't think twice about it. When I watched the show, it was funny, I liked the show, I enjoyed the show quite a bit. Yeah, I felt bad for this person or that, but I'm used to it. Agreed. This is going off topic a little bit, but I do

(01:36:55):
think, to maybe salvage some good out of this reality TV world, people do realize that
they have to live their lives outside of this television show and carry on that reputation that
they build in the show so I think some of the recent contestants are realizing that okay they can't

(01:37:16):
be absolute psychopaths, they need to be decent human beings, treat other people well, because it seems like they're all, on average, working-class people who would really benefit from winning the prize and helping their family, their kids. But holding that up as a prize, you know, for this period of time you have to be a horrible person and we'll give you

(01:37:38):
this prize, it'll make you act like that. That's a choice. It's a choice, you have the option. Sure, yeah, I guess, but in a certain sense people don't necessarily have the choice, because, I mean, if you have no money and you have a bunch of debt collectors calling, and somebody says, oh, you're a horrible person, we'll give you a hundred thousand dollars, it's a better choice than, you

(01:38:00):
know, going out and trying to rob a bank or a liquor store, to be sure. Yeah, society should find better ways to provide for its less fortunate. So, I had something I wanted to ask you. With one of your previous points, you mentioned we have obligations as human beings to the rest of humanity, to society, to nature, and I would say maybe those are lower down the list of your most absolute critical

(01:38:23):
priorities. Closer to home, maybe, is, you know, your immediate family, your partner, your kids, and so on. So remind me again, I think you have a son who's in middle school? Yeah, high school now. Okay, what advice are you giving him to be a capable member of society in this rapidly changing world, to maybe

(01:38:46):
use these new technologies? And what is he gonna study, what are you encouraging him to learn, what tips or things are you trying to get him to, you know... First of all, let me talk about the obligation hierarchy that I threw out. It's not static. Sometimes some of those

(01:39:07):
obligations outweigh others at different points in time, and, you know, oftentimes they're conflicting with each other: in order to help my family I have to do this negative thing for the environment, or something like that. So there's that. And then as far as the advice, you know, he's his own person. The advice is the same as you would give kids throughout the millennia, right? Don't

(01:39:30):
lie, be a good person, be compassionate, look out for the less fortunate. You know, maybe more specifically, since you're someone who's worked in Silicon Valley, in tech, if he's interested in thinking about his career and going to college soon: tech is a good career. Tech is actually one of the few careers where going to college makes more sense these days. Now, colleges have gotten a little bit

(01:39:56):
ridiculous in terms of the value proposition for students going into them, and tech is one that is showing to actually be worth the education. It's less expensive than some others, and, you know, you don't have to go to a Stanford or MIT to learn how to be a really good coder or some other kind of tech thing, and there'll probably be a job waiting for you.

(01:40:18):
They'll cover your college expenses, your debt. And do you still think so, when, by the time he graduates, maybe five years or something from now? I think the university and banking system has let students down, and so I'm going to encourage him not to go to a hyper-expensive top-tier school,

(01:40:38):
because I don't think it's a value proposition. I think if you have a million dollars in debt, even if you do get a dream job, you know, you're gonna be an indentured servant, a voluntary indentured servant, for the rest of your life, and that's not a value proposition. The public schools here in California are not too bad, right? The UC system is relatively

(01:41:00):
affordable. Oh yeah, they're all good. No, I like the UC system, especially if you're a resident. If you can get in, the price is good for residents. The problem is that there are a million international students that are completely willing to pay exorbitant tuition because they have access to the money, and there are more of them

(01:41:24):
than there are people that are residents, and the colleges love them because they pay three or four times the tuition a state resident does. So there's a lot of pressure on state colleges and universities to decrease the ratio of resident students and increase the ratio of non-resident students, because they can pay for all the really expensive gyms

(01:41:48):
and food. Like, I went on a college tour recently, and I can tell you the food was a thousand times better than what I had at college. What I had at college was like a high school cafeteria: you walked in and there were like maybe three different entrees you'd pick, and you got your tray out and they put it on it. On this tour, they had like 15 different stations with different kinds of ethnic food or

(01:42:10):
seafood, and then there was a grill. There were literally 15 different stations you could get food at, you know, and I'm like, they've gotta pay for that somehow, right? And that, I think, is what the increase in tuition is covering, that cost, because it makes them competitive, right? And so, you know, students... What's really funny is, when you want a loan from a bank, usually as an

(01:42:32):
adult, you have to have some kind of collateral, you have to prove that somehow you're worth it. And in the past, banks would lend money to students, I think, because they were banking on the future career the student would have as being the collateral for the thing. So there's no real collateral, there's no real understanding, there's no car or real estate or anything like that behind it. It's a future thing, and because of that, they could make it anything

(01:42:58):
they wanted, right? They could make it, you know, we think you're going to get a job that's going to pay a lot of money, so the banks would give bigger and bigger loans, and the universities were only too happy to go along. And I'm not saying this is a conspiracy theory, I'm saying this is just the free market in action, right? Banks would give bigger and bigger loans, colleges would charge what the students were willing to borrow, and it could just go up and up and up, and

(01:43:21):
it seems analogous to the health insurance industry in a way. Yeah, there's a lack of transparency in the transmission of the real cost to the consumer to make an informed decision. But banks can't make an informed decision, because they don't know what kind of job the student's going to have. So if they get no job, then they're totally out of luck, right?

(01:43:44):
If you spend $250,000 on college tuition for a degree in economics and there isn't a job waiting for you, well, now you've got this loan that's bigger than most people's house loans that you have to pay off through your entire life, right? People spend 30 years paying off their mortgage, right? When I got out of college, within two years I paid off my student loan. The student loan was

(01:44:06):
literally a fifth of my starting salary, and now a student loan is two to three times a starting salary. Yeah, you know, I think that's true, the increasing student loans with the banks, but I also think it has to do with the government giving loans as well, because there's both: I think there's the federally backed loans, and then there's also the private loans,

(01:44:29):
uh, so I think there's a lot of money from there as well. Well, the banks like to give loans because a lot of them are backed by the government, right? Yeah, so it's like those savings and loan scandals in the past, right? If they're guaranteed by the government, then banks are only too happy to let you take as much risk as possible, because it's not their dime. Yeah, especially

(01:44:52):
if the government, if Uncle Sam, is gonna have their back. There are some limits there, totally. I mean, I'm not saying that the government is good at that, or that the government student loan programs are all really good, but they do actually have a cap, right? They don't go up to $400,000. You know, the amount you can borrow from the government is relatively small. I think some of the doctors' and lawyers' loans do get up there. Uh, sure, but are they

(01:45:18):
government-backed loans? Are they government loans? That's a good question, I don't know. I think government loans have a cap. I don't know, I could be totally wrong. I was talking to someone from one of these loan refinancing companies, SoFi or Earnest or something else, and they mentioned their most expensive clientele are doctors or lawyers who are still

(01:45:39):
repaying their loans into their late 30s, and probably beyond. You know... I mean, the credit card companies did this too, I called it voluntary indentured servitude, right? You're choosing to get the credit card, you're choosing to run it up to its maximum. It's not something that someone is making you do. You're choosing to go to the school, you're choosing to take

(01:46:00):
these loans out, right? And so the institutions can hide behind the fact that it's a choice that an individual is making. It's tough to call it a choice when there is not a good alternative. You need a big-name college for a good job, so it's stacked, right? Sure, and it is, you're choosing, and that's what they hide behind. But some of the, you know, when

(01:46:24):
they give you a credit card and they tell a college student, well, here's the minimum you can pay, they don't tell you that if you only pay the minimum you'll never repay the loan, you'll never pay it off, right? So regulation needs to set that minimum amount so that you actually do pay it off in some period of time, or at least the person is made clearly aware of how long it takes to pay it off. Like, if you buy something with a credit card,

(01:46:47):
like a television, a two-thousand-dollar television, and you just pay off the minimum every month, you'll end up paying ten thousand dollars for that television set, if you ever pay it off at all, right? And even if you do pay it off in fairly big chunks, you might end up paying three thousand dollars for that television set that normally cost two thousand dollars. And if you're the kind of person that can't resist just buying whatever you want, right, because marketing is in your face, you

(01:47:10):
know, buy this big TV, buy this computer, buy this car, buy all the stuff, then you will rack up 20 to 30 thousand dollars in credit card debt that you'll never pay off, you'll never be able to get out from under it. And that's partly, I think, our system's problem, not just the individual's lack of willpower. Yeah, that's true, because, I mean, it's really hard to fight against that. I mean, you

(01:47:30):
have some of the smartest people in the world doing everything they possibly can to sell product to you, and they will play on those cognitive biases that you have, to have what your neighbor has, to keep up with the Joneses, to buy the big TV and the computer. They'll play on those. Yeah, it's a losing battle, it's so hard.
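The minimum-payment trap being described here is easy to check with a quick amortization sketch. The terms below are purely illustrative assumptions, not any real card's terms, and the function name and parameters are made up for the example: an 18% APR and a minimum payment of 3% of the balance with a $25 floor.

```python
# Sketch of the minimum-payment trap discussed above.
# Assumed, illustrative card terms: 18% APR, minimum payment of
# 3% of the balance with a $25 floor -- not any real card's terms.

def months_to_pay_off(balance, apr=0.18, min_pct=0.03, floor=25.0):
    """Simulate paying only the minimum each month; return (months, total paid)."""
    monthly_rate = apr / 12
    months, total_paid = 0, 0.0
    while balance > 0.005:  # loop until less than half a cent remains
        interest = balance * monthly_rate
        due = balance + interest
        payment = min(max(balance * min_pct, floor), due)  # last payment clears it
        balance = due - payment
        total_paid += payment
        months += 1
    return months, total_paid

months, total = months_to_pay_off(2000.0)
print(f"{months} months (~{months / 12:.1f} years), total paid ${total:,.0f}")
```

Under these assumed terms, the $2,000 television ends up costing roughly $3,500 and takes close to nine years to clear, the same ballpark as the figures mentioned above; and a minimum payment that only covered the interest would never shrink the balance at all, which is exactly the "pay forever" scenario being criticized.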

(01:47:54):
I mean, I think it's actually a really interesting regulation proposal to raise the minimum credit card payment. To say, hey, maybe the credit card payment is like 20 dollars, like, oh, I can handle 20 dollars. It's like, yeah, 20 dollars a month for the next 700 years, right? Right, right, right. I think they did change the regulations such that the recommended minimum payment

(01:48:17):
had to pay it off, right? Because for a while there, the recommended minimum payment would last basically in perpetuity. Yeah, I mean, I think that's kind of predatory, actually. Super predatory, yeah. But, I mean, you know, come on, let the buyer beware. It's a person's choice. Just because they didn't read the fine print, they're still liable for

(01:48:41):
the decision they made. I mean, for sure, yeah, but sometimes the fine print is, you know, really hard, like you need a law degree to interpret it, sometimes. Also, when is the last time any of you guys read the fine print? I mean, I plead the Fifth. But actually, one thing that I've

(01:49:02):
been doing, to bring the conversation back to AI a little bit, yeah, so one thing that I've been doing a lot is, whenever I, you know, click agree on the terms and conditions, I'll throw the terms and conditions into ChatGPT, like, is there anything I should be aware of? And, you know, typically it's the standard legalese, but sometimes you find some good stuff. But yeah,

(01:49:23):
I would recommend doing that, just so you kind of know what you're agreeing to. So I think, beyond that, I want to make people better at doing that themselves, right? I want to train them to recognize when these things are happening. I think AI can be really useful as a tutor or counselor

(01:49:44):
or advisor or whatever on decisions people might make, or understanding problems, so that they learn not to make those mistakes again. Now, I have a lot of faith in humanity, and maybe that's a little misplaced, but I really think AI can be used as an equalizer that way. Yeah, I think so too.

(01:50:06):
I have massive optimism for the future. I think that, for sure, there's a lot of problems, and it's important to recognize those problems so that we can fix them, but I think that, you know, the problems were created by humans, and I think humans, maybe along with the help of AI, can solve those problems too. So, I don't know, I think that the future

(01:50:31):
has the opportunity to be extremely bright, because the thing is, we collectively as humans build the future, right? So, you know, we can make the thing that we want. I would argue, and this is an argument I made 30 years ago, that our ability to make ethical decisions around technology

(01:50:51):
doesn't keep pace with the leverage that technology gives us to make bad decisions. And I just finished this book called The Alignment Problem, and one of the last things he says in the book is, luckily, we're really incompetent, because the technology we've created, if we

(01:51:12):
employed it perfectly, would probably have destroyed us, but the fact that we make mistakes and we bumble on things and whatnot has prevented it from doing more damage than it could actually do. So, maybe trying to end the conversation on a brighter note: what are you most optimistic about, what is the best-case scenario that you're looking forward to? I think there's a ton of

(01:51:37):
people out there that feel as I do, that are, like Mark, optimistic, and who are trying to drive humanity to better positions, and I think AI can do that. But it's a tool. Like any other tool, it can be abused. You know, it's just a tool; it's in people's hearts and minds. But I think

(01:52:00):
there needs to be... you know, unfortunately, there are incentives to use it, not for evil per se, but for personal gain, which causes certain bad things to happen. Just because everybody's looking out for themselves doesn't mean that anybody's looking out for, say, nature, or anything like that, right? Those things are really hard to change, but AI can kind of maybe level the playing field somehow. It can

(01:52:28):
play on our... like, imagine you could have AI play on our cognitive biases in a way that gets us to do good, rather than in a way that enrages us or makes us afraid, right? And I'm not saying trick people into doing good using their cognitive biases, but, you know, recognize when things like the desire to be with a tribe kick in. You're doing it because you want to be with

(01:52:56):
your family, and you don't change who you are based on that. You can still recognize, I've got facts, I've got information on my side that says this, and maybe you'll be able to fight the good fight and nudge people to a more honest position, one that looks at humanity in general and some of those other obligations beyond the ones to yourself.

(01:53:18):
So, Shashank, I know you're trying to semi wrap this up, but I want to ask one more follow-on question, which hopefully won't turn into much more of a rabbit hole, but you gotta stop me then. So you kind of mentioned how AI may be able to help society and whatnot, and I'm curious, do you think there's ever going to be AI in government? Do you think that there could be, maybe in the future?

(01:53:46):
And I don't know exactly how we would be able to do this with the current political process. It's going to be in weapons systems, yeah. But when I say in government, I mean leadership in government, like, do you think we could ever have an AI president, or, maybe starting small, an AI governor? I think an AI avatar has the potential, maybe not as a

(01:54:11):
governor or president, but as a voting member, let's say, with a certain amount of power, like, say, humans 51%, AI 49%, right? Humans have veto power ultimately, but it's an agent that can... So think about AI: it doesn't get offended, right? It doesn't have the emotional things that happen

(01:54:34):
in a lot of government positions. It can look at the information and the data, it can look at what's been effective and what's not been effective, and it can recommend things that it thinks are going to be more effective or less effective, and there's no BS, right? If it's programmed to, you know, solve a problem, say homelessness or world hunger or

(01:54:57):
something like that... Or it could actually balance all of the problems at once, and, you know, it can handle more data simultaneously, right, is what you're getting at, than a human could. Exactly. And it would have prejudices based on the data it was trained with, but if there's enough people

(01:55:18):
understanding some of the prejudices... I think some of the prejudices we have are very sort of isolated and personal to a particular politician or a particular thing. Just like with radiologists, you can average out all those things to an ideal of what we want a governor to be, or a legislator to be, or a law to be, and it can tell you what that

(01:55:43):
ideal is, crunch all the data, tell you how successful it can be within that system, with real information that people can follow or not follow. And then there won't be any BS, there won't be any spin, there won't be any of this, you know, is capitalism inherently evil or is it inherently good, should there be millionaires or shouldn't there be. You can say, well, I want everybody to have a minimal lifestyle, how do we achieve that? And maybe it'll say no billionaires, right? And

(01:56:11):
that'd be the solution. Maybe it'll say, and here's one that I think would be really radical: no taxes through all your life, except at the very end, a 100% inheritance tax, right? Everything goes back to the government. And so that way, you know, there's all this complaint about people on welfare and food stamps and SNAP and all those programs

(01:56:32):
taking advantage of the program and not working and being lazy. Well, you know, the offspring of wealthy people don't have to do any of that stuff, they can actually do nothing, right? So how do you incentivize them to participate in society with everybody? How do you prove it to the wealthy people? Well, look, let's mobilize everybody, not just the unfortunate people, everybody.

(01:56:55):
And so in your life, you can't count on your parents' inheritance, you have to make your own way in your life, right? And then nobody's taxed, right? So all the money you earn, you keep. You want to be a billionaire? If you can get there, you get there. But at the end of your lifespan, none of it goes to your children, it all goes into helping society become better. I think maybe

(01:57:16):
Mark might be thinking about something that I'm also thinking about: that only works until we start living forever. Yeah, exactly. Wow. Yeah, that's a long way out, and I think that's a longer way out than other problems we have. I think it's coming. I've read some great science on this. There's a number of problems, I think it's like 10 or 11 problems,

(01:57:40):
that, if we solve them with human biology, you can make longevity be in the many-hundred-year range. Or augmented human-computer... Sure, sure. I think then maybe we get into the definition of what it means to be a person, if you're living on a chip. Well, maybe a Cerebras chip. And, you know, this is where technology will stratify society, right? Like, if I'm

(01:58:05):
augmented, if I can afford to be augmented with, say, a super brain, right, that gives me access to information about everybody everywhere, and calculations I can do in my head, and all this information, I'm going to be a better employee than most people, only because I have this super brain, and so I might be able to do the job of 10 people. I might displace those 10 people, right? And

(01:58:25):
so there's that technology danger again. Technology is really like... the end of this book I was reading on the alignment problem talked about technology as being this stallion galloping across the field, and our ability to ethically handle that technology as being this weak-limbed foal just trying to find its footing, right? And it's kind of

(01:58:50):
true that, you know, a thousand years ago we didn't have the technology to destroy the world accidentally. No one person could probably do it. Now one person or a handful of people can do an enormous amount of damage, including possibly eliminating humanity as a whole, right, just by making mistakes, by using the leverage of technology. And one could argue it might be happening in tiny

(01:59:16):
chunks: the technology is being used, for instance, for finance, for politics, for, you know, fake news, and creating all these things that are really believable. Those tiny chunks are bites taken out of humanity's probability of surviving, right? And people need to be out there,

(01:59:38):
and that's what I want to encourage: people looking out for how to counteract that sometimes not-even-conscious erosion that happens just by people trying to make the best for themselves.