Episode Transcript
Available transcripts are automatically generated. Complete accuracy is not guaranteed.
(00:30):
Hey Chris, how you doing?
Good, how you doing, John?
I'm doing good.
We are one step in front of the other, you know, just trying to figure it out.
I feel like I say that every single week, but it's a mantra, man.
(00:52):
Just keep moving forward.
This would be interesting tonight to see how this all works doing it remotely.
Yeah, we're doing it remotely again.
It seems to be working just fine right now.
I think the real winner here is that you're on that audio Sigma device over there at your house and we don't seem to be having nearly the problems we were having before.
(01:20):
I'm excited. I mean, just buy gear that works. I guess that's the...
Yeah, I'd still love to know why I have all the trouble that I've had when I've had my RØDECaster here on this side of it.
Yeah, that's one of those things that I can't, like...
(01:45):
It's hard to be a problem solver for a problem solver.
I feel like all of my friends whenever I'm like, you're having trouble with something?
You're not supposed to have trouble with something.
(02:06):
I have trouble with something. You tell me how to fix the thing I'm having trouble with.
I'm not used to you having trouble.
I'm only human.
Yeah, I mean, I'm grateful because most of the time it doesn't seem like it.
Not in the technology side of things.
I guess that's the fun we all get to deal with.
(02:30):
I have got new toys.
Yes, you do.
My wife gave me new toys and because I got new toys, I started messing around with my setup here at the house.
So I've actually got a RØDECaster Duo now over here on this side.
(02:51):
So hopefully everything's fine and we don't have any weirdness because of that.
But everything's working okay right now.
And then I have an old Hero 9 that I have plugged into my computer as a webcam now.
(03:13):
So I'm actually getting...
Is that the same mic arm that you've had or is that different as well?
This is an XTM 100.
Okay.
So it's running into the RØDECaster, which is kind of funny.
Oh, so you're running USB-C into it?
Yeah, I'm running USB-C into the Duo.
(03:35):
So I have two more.
The idea would be that I use my Aston as one of the microphones.
The real reason, I don't think I said this while we were recording,
the reason that my wife got me this duo is I have a plan to...
(03:56):
I started a plan a long time ago or a few years ago to start recording my parents telling stories
because there's a lot of stories that my grandparents told that I'm starting to forget a little bit
or just not remember the exact details of.
(04:17):
And they have memories of their parents, my grandparents telling those stories.
They also have memories of their grandparents telling stories.
So I'm going to try to get as many of those family history anecdotes down as I can.
And so the RØDECaster Duo is going to help me do that.
(04:40):
I'm going to put an XLR microphone on my mom and an XLR microphone on my dad.
And then I'm going to use this guy.
Or if I can squeeze the $250 together, I'll put a wireless mic that RØDE has that can connect to this guy.
(05:01):
And that'd be pretty fun.
You can do just the normal, like, little microphones, but they also have like an actual, you know,
larger diaphragm capsule on an actual mic looking thing that you would use for like interviews and stuff.
I think that's their purpose behind it, but it also works for this.
(05:22):
Yeah, you're looking at, like, the... I think they call it the GO ME.
Something like that.
And then, no, I think the ME is one of the little ones, the kind where
you can buy two of them, and they're the squares.
Yeah.
Oh, sorry.
(05:43):
No, I'm talking about, this is interesting.
I mean, maybe we haven't talked about this.
Let me find the thing real quick.
It's the, it's the RØDE Interview PRO wireless handheld condenser microphone.
I want to put this in the chat for you so you can look at it.
(06:05):
It's a, or you can just look it up.
I've seen that one before.
Okay.
I think they've upgraded it over the last couple of years, but it's a pretty nice little condenser microphone.
(06:27):
So you don't have to deal with that little exciting bit.
But anyway, it's an idea at least.
And that thing only costs like 250, which, you know, looking at the other microphones that I've looked at buying for this type of thing.
(06:51):
That's not too bad.
So, or maybe I'll just go get a Lewitt.
Can't go wrong.
I definitely recommend that.
There's, there's a bunch of different options there for the other microphone, but the idea would be that I would be able to use the nice microphone that I always use and then a nice microphone that I haven't used in a little while and then another nice microphone that I'm yet to buy.
(07:19):
And I'd be able to record all these things for posterity.
Yeah, looks like what I was thinking of is the RØDE Wireless GO II.
And that gets you the two small square mics with the single receiver, but you can get their lav mics that would pop onto that.
Yeah. And I thought about that. I thought about just getting those, or just getting the DJI, or... RØDE has a new one that'll plug into your phone.
(07:50):
Sorry, the receiver will plug into your phone, and then it has two smaller microphones that clip on there, even smaller than the squares.
And you can clip those on and then record using their wireless technology to your phone.
(08:11):
And those looked, those look pretty good.
I, you know, it's just kind of one of those, I think the duo gives me the most flexibility without having to go all the way up to the pro.
But we'll see.
Maybe I'll have this for a while and go to the next one.
(08:34):
Yes, holy cow.
You're gonna sell something here?
Me? No, I'm not selling anything.
I didn't think so.
Okay. Hey, what are you drinking?
I am technically breaking the rules, I would say.
(08:56):
Well, we've had three shows in a row where the beer just hasn't been that great.
I mean, yeah.
I couldn't, I couldn't do a fourth in a row.
Okay. So I brought a go-to: good old Guinness.
I don't think that's breaking the rules.
Well, we've already done it before.
(09:17):
Well, I don't think we have to do it every time, right?
So, but I had to, I had to go with a guarantee, and I brought an extra just in case.
A Shiner Bock.
I like it.
I'm also breaking that rule, I guess, because I have my last dragon's milk.
I am excited to drink this bad boy and then go try to find some more.
(09:42):
That is a good one.
Yeah. So here's to opening.
I think we've read all these, right?
Yeah, we've read the blurb before in a different show.
All of the different...
That did not sound good.
That was a good one.
That did not sound good.
(10:05):
Good job.
Yeah.
So, cheers.
Cheers.
Tink.
Oh, that's the good stuff.
Oh my gosh.
Okay. Yeah.
I mean, when you don't...
(10:27):
Are you okay over there?
Your lights went off.
You got a friend outside?
Who knows what happened there?
I'm picturing... This is not true, but I'm picturing the closet where you're recording as having one of those buttons for when the door gets closed.
(10:49):
It turns off sometimes.
It does?
It does, really. It does.
Yeah, if I go pull on that door enough, that light goes out.
That's hilarious.
Okay, well, yeah, not bad.
I think probably for this show, we need to include in the notes a picture of at least your setup.
(11:11):
Mine's not really anything that you want to look at, but...
Well, mine looks like I know what I'm doing.
Yeah. That's why I want a picture of it in the show.
I'm very much still like, is this how this works?
I guess the one thing I have done that I hadn't talked to you about,
and so that maybe I'll have to take a whole new picture from the last one I sent you.
(11:35):
Okay.
But I had some of the equipment up on the desk.
Uh-huh.
And it was like, I really want to get that off the desk because I brought in the new monitors and the monitor controller.
So I actually bought a vertical rack.
Nice.
Got it off here to the side of me.
Nice.
So the switch is in there.
(11:56):
My power conditioner is in there.
And then I've got a, I guess you'd say power distribution,
but it has the toggles for the individual plugs on the back.
Awesome.
So I can turn those off and on.
Depending on what you're...
Depending on what I need.
(12:17):
Huh.
So...
That's a good idea.
I'll take an updated picture for what it's worth.
So...
But I think I'm set for a while now.
Well, I mean...
It's always amazing to me.
This happens to me as a musician.
I'm like, yes, that is my dream setup.
(12:39):
I have my dream set up and then like 15 minutes later, I'm like...
But...
Yep.
I really want that too.
Yeah, the only thing that I would eventually change, but I'm in no rush,
is I thought about either building something simple to go in here,
(13:00):
or I could actually take this rack gear and actually have it mounted into the desk.
Because I've looked at some of the purpose built furniture and it's nice, but it's expensive.
Yeah, no, it for sure is.
I was looking at a rack for underneath our TV in the living room.
I used to be able to just measure 19 and a quarter inches apart from each other and drill those ears in.
(13:31):
And then no big deal.
But anyway, seems like it's harder than it used to be.
Yep, I would agree.
What was I going to say?
Oh, okay.
I'm wanting to jump in, but also I feel like I should just say for anybody listening who wants to talk more about the beer,
(13:58):
this beer is really good.
Yes.
Because these beers are so good, we don't have anything else to say.
We just want to drink the beer and enjoy it.
Just moments of silence.
A lot of just sitting here looking at each other, drinking beer.
Well, not in that way though, the way you said that.
(14:22):
Just looking at each other.
Okay, well, we don't have to make it weird.
Fine.
Yep.
All right, but let's go ahead and jump in then.
Since we're going to make that weird, let's jump into the...
(14:43):
What we're talking about today.
This is a fun one.
Yeah, we've got a lot here.
There's probably at least two shows here if we tried to go through it all.
So I don't think we're going to get it all tonight.
I think this could be a series.
Well, it could be, yeah, but we are going to kind of dip our toes into the world of AI.
(15:05):
Yeah.
So much AI.
On another podcast, they joke that AI is "anonymous Indians."
Okay.
I appreciate the humor in that.
Yeah.
Dots or feathers.
(15:29):
Okay.
So you got some definitions of AI to start us off here.
Yeah.
Let's go with that.
Pulled it from a couple of different places.
So from Britannica, they define AI as the ability of a digital computer or computer controlled robot to perform tasks commonly associated with intelligent beings.
(15:52):
And I would agree with that.
I don't have any issues with that definition.
Oxford says the capacity of computers or other machines to exhibit or simulate intelligent behavior.
No issues with that one.
Sorry.
Cambridge, the use or study of computer systems or machines that have some of the qualities that the human brain has such as the ability to interpret and produce language in a way that seems human, recognizable or create images, solve problems and learn from data supplied to them.
(16:25):
I think that one's getting closer to what I would call it.
And because it references qualities of the human brain, instantly Dune comes to mind.
Yeah.
For me, because of the way that he approached it, it wasn't the terminator version.
It was humans using AI to control other humans.
(16:49):
Yeah.
And so it became illegal to create a machine in the likeness or image of a human mind, or with the ability to think like a human.
And then the last one here comes from Google and they say artificial intelligence is a set of technologies that enable computers to perform a variety of advanced functions, including the ability to see, understand and translate spoken in written language, analyze data, make recommendations and more.
(17:19):
And so that's, to me, that's very Google.
That's very Google.
Yes.
So that's kind of the core definitions.
I agree with these for the most part.
I think I take more of a old school purist of what I would define AI.
(17:40):
So if I take an AI system and I train it on all things football, history, players, types of plays,
what's the best play for defense against this offense, everything you could ever want to know about football.
And all of a sudden it starts teaching me basketball, baseball, tennis, talking about history of players.
(18:03):
Okay, something happened because I didn't train it on that data.
I didn't give it access to that data.
Right.
But somehow it pulled it all together.
So to me, that would be how I would define artificial intelligence.
Okay.
It truly learned somehow on its own in a manner that I didn't give it the ability to do.
(18:25):
You may have given it the root function of it, but you didn't specify that behavior or
correct, provide it with that information.
So, okay, let's talk about neural networks.
Yeah.
This is where it gets kind of weird.
And again, we'll have this in the show notes, but neural networks kind of ties into machine
(18:53):
language and deep learning.
So the concept of a neural network, it's trying to mimic neurons in the human brain.
Okay.
And so you have three main layers.
There's the input layer.
So whether that's something that you're feeding into a video, a picture, audio, text of some
(19:15):
kind, whatever that input may be.
So you can imagine that coming in from the left.
And then there is a layer next that's referred to as the hidden layer.
So in a simple neural network, you may have one or two of those layers in there.
And then on the far right, you ultimately have your output layer.
(19:35):
And that's the data, the information, the picture, the video, whatever it ultimately
outputs from the neural network.
Okay.
So, and then when you get into deep neural networks or deep learning, you get into lots
of hidden layers in between.
(19:57):
And that's what allows it to break down the input further and further and further, refine
it, refine it, refine it to try to make sure that you're getting the ideal outcome that
you're looking for in the end.
Whereas just a simple neural network may struggle to give you what you're after.
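As a rough sketch of that left-to-right picture (all the numbers here are invented for illustration), a tiny network with one input layer, one hidden layer, and one output layer might look like:

```python
import math

# A toy feedforward pass: input layer -> one hidden layer -> output layer.
# The weights and biases below are made up just to show the shape of the computation.

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def layer(inputs, weights, biases):
    # Each node sums its weighted inputs, adds a bias,
    # then squashes the result through an activation function.
    return [sigmoid(sum(w * x for w, x in zip(ws, inputs)) + b)
            for ws, b in zip(weights, biases)]

inputs = [0.5, 0.2]                                            # input layer
hidden = layer(inputs, [[0.4, 0.6], [0.9, -0.3]], [0.1, 0.0])  # hidden layer
output = layer(hidden, [[1.2, -0.8]], [0.05])                  # output layer
print(output)  # a single value between 0 and 1
```

A deep network is the same idea with many more hidden layers stacked in the middle.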
Well, a simple neural network is more, I mean, this is like the difference between, have
(20:27):
you ever seen somebody who, sorry, this is probably really primitive.
And I mean, we all had to take the computer class in high school, I'm assuming, the computer
literacy class in high school that you had to learn how to use Word and Excel and PowerPoint.
(20:50):
But when you initially started doing, not in PowerPoint but in Excel, when
you started using the different cells to create, you know, to do equations and things like
(21:12):
that, you're using formulas and you're seeing that like, okay, and then maybe like, so that's
one hidden layer is it's not just giving you the information back that you're putting
into it, but it is doing something with that information, but it's still a function that
you are telling it to do.
(21:32):
It's just not visible to you.
So that's like one hidden layer.
And then you had, you had to, at some point in the class, you had to take that information
put in there, make sure that it did what it was supposed to do in terms of the formula
or the function that it was supposed to have.
(21:53):
And then you had to, you know, it could create like a pie chart or a diagram for you or something
like that.
That's like another layer of hidden layer or something like that inside of it, but it's
still pretty easy for you to see what's going on.
These deep neural networks are more like, did you ever see Randy Collins Excel sheets?
(22:14):
Oh, yeah.
So if you ever met one of those guys that like, they're just wizards on Excel and all
of a sudden this thing is doing like amortizations for you that you just like, that's referencing
a different sheet or whatever they call it.
I think Sheets actually calls them sheets.
So sorry about that.
(22:35):
Yeah, it's a whole different Excel.
My bad.
But yeah, like different workbooks and you've got macros going on and things like that.
You've got these massive formulas that are being put in, but you can't see any of it.
That's more like deep neural networks.
Oh, yeah.
So you may like, we've all put some input into a computer and gotten a response.
(23:03):
You know, you've done the input layer and then received the output layer and it feels
really one for one.
Yeah.
It's just the like order of magnitude of those hidden layers is now much greater.
(23:27):
And that's what I've seen with artificial intelligence is this, it's doing a lot that
we can't see, but it has still been programmed to do those things.
Where we actually have artificial intelligence that's further than that.
(23:52):
That's when people are starting to get nervous and you can start to see that now because
it looks like magic.
Yeah.
I liken what they're doing to almost filling a database full of everything you could ever
want and then asking it to go retrieve that data and put it together, and it can just do
it way faster than any person could ever attempt to.
(24:17):
Yeah.
Yeah.
It's because it's not breathing.
Correct.
Don't have time for breathing.
Yeah.
Yeah.
So a few more things here on neural networks.
We call them neural networks for a reason.
(24:37):
Yes.
This is mimicking.
Absolutely.
So each one of these, I know you all can't see the picture, but you will if you look at
the show notes, and we'll definitely put this in the chapter art as well.
Once again.
But each of...
Podcasting 2.0, baby.
Podcasting 2.0.
Get you a good app.
So each of these, these colored dots, are referred to as a node, and each node
(25:02):
is meant to try to mimic a neuron in the brain.
So they say here neurons are the basic units that receive inputs.
Each neuron is governed by a threshold and an activation function.
The connections in between are links between neurons that carry information regulated by
weights and biases.
(25:22):
I want to stop real quick on weights and biases.
That's where this kind of gets in interesting and that's some of the pieces that you can
kind of change and define for this neural network.
Okay.
So you have an impact on how it perceives and what it gives you in the end.
So depending on what an individual is doing, they may put their own bias into those, I
(25:48):
know bias is one of the settings, but your personal opinion could determine how you set
the weights and the biases.
And so you could, in theory, get an output that you didn't mean to get, because what you
were putting in drove it to that outcome. Whereas another person
(26:10):
could be driven by their thoughts and opinions in how they set the weights and biases and
get a completely different outcome from the exact same neural network.
So you've got to try to be careful in what you're doing and how you're setting your weights
and your biases so you don't ultimately skew some of the output that you get.
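A single node with a weight, a bias, and a threshold, just to show how changing the bias alone can flip the outcome. The numbers are invented for the sketch:

```python
# One artificial "neuron": weighted inputs plus a bias, passed through
# a threshold activation. All values here are illustrative, not from any real model.

def neuron(inputs, weights, bias, threshold=0.0):
    total = sum(w * x for w, x in zip(weights, inputs)) + bias
    return 1 if total > threshold else 0   # the node fires or it doesn't

# Same inputs and weights, different bias -> different output, which is the
# point about how the values you set can skew what you get back.
print(neuron([1.0, 0.5], [0.6, 0.4], bias=-0.9))  # 0: stays below the threshold
print(neuron([1.0, 0.5], [0.6, 0.4], bias=0.2))   # 1: pushed over the threshold
```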
(26:31):
So that's a part that I think makes it feel more human to everybody because there's a
mirror that's happening inside of it.
And every time somebody uses ChatGPT and says, no, I want it a little bit more like
this.
It's moving a weight.
(26:53):
Yep.
It's saying this user prefers this.
Yep.
Which is why next time it may look closer, faster.
Propagation functions.
There you go.
That is propagation functions, mechanisms that help process and transfer data across
(27:17):
layers of neurons.
And that's really going to get into more when you're in a deep learning because you have
so many potential hidden layers in between.
And then the learning rule, the method that adjusts weights and biases over time to improve
accuracy.
Is what we were just talking about.
(27:39):
Absolutely.
Okay.
And so the next as we go through this long dissertation, learning in neural networks
follows a structured three-stage process.
First stage is input computation data is fed into the network, output generation based
on the current parameters, the network generates an output and then iterative refinement.
(28:03):
I think I said that right.
It's iterative.
You did.
The network refines its output by adjusting weights and biases gradually improving its
performance on diverse tasks.
So similar to what we already said.
Yeah.
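The three-stage loop above, shrunk down to a single weight being nudged toward a target. This is an illustrative toy using basic gradient descent, not any particular production learning rule:

```python
# 1) compute an output, 2) compare it to the target, 3) nudge the weight. Repeat.

weight = 0.0
target = 2.0          # we want weight * x to land on this
x = 1.0
learning_rate = 0.1

for step in range(100):
    output = weight * x                    # input computation / output generation
    error = output - target
    weight -= learning_rate * error * x    # iterative refinement of the weight

print(round(weight, 3))  # close to 2.0 after repeated small adjustments
```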
What's interesting to me is, like, man, I hear it.
(28:29):
I think it's one of the things that people that are using AI
value the most, that iterative refinement.
But the fact that even in itself, it's been loaded with the correct propagation functions
to be able to understand what you mean when you say, I need it to be less flowery language
(28:57):
or make that simpler or embellish that.
I mean, "embellish" and "simpler" are probably easier functions to be able to write
in there.
But if you say less flowery language, the machine learning can actually understand what
(29:23):
you mean by that.
And then it like that iterative refinement is trying to take you through a weight, a
bias that's different in order to find the right one for you.
And everybody... like, that's what I see people going crazy over.
(29:45):
And it's probably because it's what humans do worst. Whenever you've got somebody working
for you, it's really hard for them to break.
It seems to me it's really hard for people to break from their initial idea in order
to come up with something new as a creative.
I know there's, like, significant cost to doing that.
(30:15):
Even when I try to just do it myself, even when I try to say, like, oh, you've created that, now
put that aside.
Let's try it a different way.
Like there's a fatigue involved with that that computers don't have and just the ability
to cycle it to say like, oh, we're going to try a different one.
(30:36):
Oh, we're going to try a different one.
Oh, we're going to try a different one.
So much so that that seems to be the majority of the time what
they give you. Especially on the art side of... not ChatGPT, but on the
art side of AI, they are giving you multiple options from the outset.
(30:57):
Yep.
Anyway, yep.
There's going to be a future in people who are excellent prompt jockeys.
Prompt jockey.
Yep.
Okay.
So next, continuing on.
In an adaptive learning environment, the neural network is exposed to a simulated scenario,
(31:19):
a data set, parameters such as weights and biases are updated in response to the new
data or conditions with each adjustment, the network's response evolves, allowing it to
adapt effectively to different tasks or environments.
So that's taking those weights and then applying them to a different function.
(31:40):
Yeah, or a new, new data that was input into the model.
So, the more you use it, the better it gets.
Yeah.
So next we go into types of neural networks.
This is where I'm still trying to learn myself.
So there's the feedforward artificial neural network.
(32:04):
It says, as the name suggests, a feedforward artificial network is when data moves in one
direction between the input and output nodes, data moves forward through layers of nodes
and won't cycle backwards through the same layers.
Although there may be many different layers with many different nodes, the one way movement
(32:25):
of data makes feedforward neural networks relatively simple.
Feedforward artificial neural network models are mainly used for simplistic classification
problems.
Models will perform beyond the scope of a traditional machine learning model, but do not meet the
level of abstraction found in a deep learning model.
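Feedforward in miniature, with invented layers, where the data only ever moves from input to output and never cycles back:

```python
# Feedforward means exactly this: data only moves left to right.
# Each layer is a function applied to the previous layer's output;
# nothing is ever fed back. The layer math here is made up for the sketch.

def layer_a(xs):
    return [x * 0.5 for x in xs]   # hidden layer: scale each input

def layer_b(xs):
    return [sum(xs)]               # output layer: collapse to one node

def feedforward(xs):
    return layer_b(layer_a(xs))    # strictly input -> hidden -> output

print(feedforward([2.0, 4.0]))     # [3.0]
```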
(32:48):
Interesting.
So yeah, when I was reading that initially and it was talking about, it won't cycle backwards
through the same layers.
I'm like, oh, I thought it all just left, right, left, right, left, right, keep going
people.
Yeah.
Evidently not.
Apparently not.
So, but to me...
That's pretty straightforward.
(33:09):
Goes in on the left side, comes out on the right side.
Oh, no.
No, no, nothing goes backwards.
No vice-versa.
Yeah, it doesn't go backwards.
Yep.
Okay.
So next is the perceptron and multilayer perceptron neural networks.
A perceptron is one of the earliest and simplest models of a neuron.
(33:30):
A perceptron model is a binary classifier separating data into two different classifications.
As a linear model, it is one of the simplest examples of a type of artificial neural network.
Multilayer perceptron artificial neural networks add complexity and density with
(33:51):
a capacity for many hidden layers between the input and output layer.
Each individual node on a specific layer is connected to every node on the next layer.
This means multilayer perceptron models are fully connected networks and can be leveraged
for deep learning.
(34:12):
And they're used for more complex problems and tasks, such as complex classification or
voice recognition. Because of the model's depth and complexity, processing and model maintenance
can be resource- and time-consuming.
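A sketch of the perceptron idea from the reading: a linear model that separates inputs into two classes. The weights and the example points are chosen arbitrarily:

```python
# A perceptron as a binary classifier: a weighted sum plus a bias,
# thresholded into one of two classes.

def perceptron(inputs, weights, bias):
    score = sum(w * x for w, x in zip(weights, inputs)) + bias
    return 1 if score >= 0 else 0   # binary classification: class 1 or class 0

# Classifying points as above or below the line x + y = 1:
print(perceptron([2.0, 0.5], [1.0, 1.0], bias=-1.0))  # 1 (above the line)
print(perceptron([0.2, 0.3], [1.0, 1.0], bias=-1.0))  # 0 (below the line)
```

A multilayer perceptron stacks layers of these nodes, with every node connected to every node in the next layer.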
Interesting.
Yeah.
So it's intriguing that you've got your initial input.
(34:34):
It'll have each of those nodes connects to every other node in the next layer and so
on.
So from layer to layer to layer, all nodes connect to each other.
Yeah.
So you just, yeah.
There's not just a one for one translation there.
Or one for two.
Yeah.
(34:55):
I think the one for two is probably where that made sense to me before.
But all of them being connected, that's, yeah.
So then you jump into radial basis function artificial neural networks.
Yeah.
Radial basis.
Goodness.
You read it or me?
(35:15):
No, you read it.
I was saying goodness because that is a mouthful.
Yes.
So radial basis function neural networks usually have an input layer, a layer with radial basis
function nodes with different parameters and an output layer.
Models can be used to perform classification regression for time series and to control
(35:37):
systems.
Radial basis functions calculate the absolute value of the distance between a center point and a given point.
In the case of classification, a radial basis function calculates the distance between an
input and a learned classification.
If the input is closest to a specific tag, it is classified as such.
(35:58):
A common use for radial basis function neural networks is in system control, such as systems
that control power restoration after a power cut.
The artificial neural network can understand the priority order for restoring power, prioritizing
repairs to the greatest number of people or core services.
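A toy version of the classification case described, with made-up center points standing in for learned tags:

```python
import math

# Radial basis classification in miniature: measure how far an input is
# from each learned center point and pick the closest tag.
# The centers and tags below are invented for the sketch.

centers = {"cat": [1.0, 1.0], "dog": [5.0, 5.0]}

def rbf_classify(point):
    def distance(a, b):
        return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))
    # The tag whose center is closest wins.
    return min(centers, key=lambda tag: distance(point, centers[tag]))

print(rbf_classify([1.5, 0.8]))  # "cat": closest to the cat center
print(rbf_classify([4.2, 5.1]))  # "dog": closest to the dog center
```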
(36:19):
That's interesting because it sounded like, in that first part, it sounded like estimation.
Yes, it did.
But the fact that it uses estimation to then do prioritization is, that's an interesting,
I don't think I would have made that connection.
(36:41):
Obviously, that's really funny.
And then they reference tags there because that's important because in certain models,
when you're inputting the data, you can associate a tag to that data.
As it's learning, it tries to determine, is it coming as close to that tag as it needs
to be as it tries to learn and understand.
(37:04):
Right.
Is this close enough?
Correct.
So, the idea of this restoring power is kind of intriguing.
Yeah.
I don't know if good old Oncor or TXU do it this way, but it's intriguing that it could
go, okay, we've got a power outage.
This is the order of operations to get us back up and running, minimizing the input,
(37:28):
minimizing the effect on the people who are without power.
Yeah.
And that would obviously be way faster than a human sitting there trying to figure this
out.
I mean, yeah, most humans.
Yes.
There's some humans I trust more than a computer to make those calls.
(37:49):
Yeah, there are not many of them though.
Okay.
Recurrent neural networks.
Yes.
Recurrent neural networks are a powerful tool when a model is designed to process sequential
data.
The model will move data forward and loop it backwards to previous steps in the artificial
neural network to best achieve a task and improve predictions.
(38:11):
The layers between the input and output layers are recurrent, in that relevant information
is looped back and retained.
Memory of outputs from a layer is looped back into the input where it is held to improve
the process for the next input.
The flow of data is similar to feed forward artificial neural networks, but each node
(38:33):
will retain information needed to improve each step.
Because of this, models can better understand the context of an input and refine the prediction
of an output.
For example, a predictive text system may use memory of a previous word in a string of
words to better predict the outcome of the next word.
(38:53):
A recurrent artificial neural network would be better suited to understand the sentiment
behind a whole sentence compared to more traditional machine learning models.
Recurrent neural networks are also used within sequence-to-sequence models, which are used
for natural language processing.
Think of Dragon NaturallySpeaking, if anybody remembers that one.
(39:17):
Two recurrent neural networks are used within these models, which consist of a simultaneous
encoder and decoder.
These models are used for reactive chatbots, translating language, or to summarize documents.
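A caricature of the predictive-text example. This is just a lookup table, not a real recurrent network, but it shows the loop where each output is fed back in as the memory for the next step. The word table is invented:

```python
# Toy "memory looped back" prediction: the next word depends on what
# came before, and each prediction is fed back in as the next input.

next_word = {("one",): "step", ("step",): "in", ("in",): "front"}

def predict(words):
    state = tuple(words[-1:])           # memory carried over from the last step
    return next_word.get(state, "?")

sentence = ["one"]
for _ in range(3):
    sentence.append(predict(sentence))  # loop the output back into the input
print(" ".join(sentence))               # "one step in front"
```

A real recurrent network learns those associations from data and carries a richer hidden state than just the last word, but the looping structure is the same idea.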
I mean, I understand what it's saying inside of what these are used for, but this immediately
made me think of predictive text, which has been one of the greatest examples to me of
(39:42):
why I'm not scared of AI.
Yes.
It's super great at saying, oh, there's probably going to be a noun next.
You want this noun?
You want this preposition?
It's not really good at understanding what else is going on inside of it.
(40:06):
The more it gets looped back, the better it's supposed to be.
I do appreciate that it's retaining those sequences and measuring them against the one
before it, the frequency and what then turns into, I guess, priority of language, though
(40:26):
I wouldn't call it priority.
It's just more about what's normal in any given sentence.
The more you sound like a computer, the more the computer can guess what you're going to
say.
True.
I did think that this one was going to be where we started talking about
(40:54):
I guess because of the fact that it is doing the looping backwards into itself.
This looping backwards into itself is why we get the horse with the five legs in the
drawings.
Yes, and humans with six fingers.
Yeah, right.
(41:15):
Anyway, modular neural networks.
This is our last one in this grouping.
A modular artificial neural network consists of a series of networks or components that
work together, though independently to achieve a task.
A complex task can therefore be broken down into smaller components.
(41:35):
If applied to data processing or the computing process, the speed of the processing will
be increased as smaller components can work in tandem.
Each component network is performing a different subtask which when combined completes the
overall task and output.
This type of artificial neural network is beneficial as it can make complex processing
(41:59):
more efficient and can be applied to a range of environments.
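A sketch of the modular idea, with plain functions standing in for the independent component networks (the subtasks and data are invented):

```python
# Modular networks in miniature: independent components each handle a
# subtask, and their results are combined into one overall output.

def summarize_numbers(data):
    return sum(data) / len(data)         # one component: compute the average

def flag_outliers(data):
    return [x for x in data if x > 100]  # another component, run independently

def overall_task(data):
    # Each component works on its own piece; results combine at the end.
    return {"average": summarize_numbers(data),
            "outliers": flag_outliers(data)}

print(overall_task([10, 20, 300]))  # {'average': 110.0, 'outliers': [300]}
```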
This is like the idea being, I think, the idea being that instead of just nodes being
connected together in this matrix of nodes that are all talking to each other and everything,
(42:20):
you're getting whole networks connected to each other in the same type of matrix.
So it gets exponentially more complex but also more useful, I guess, inside of that.
(42:42):
Yeah, I kind of liken this to the SETI@home project.
Yeah.
Because you've got all these people who've opted in with the software and so you have
all these computers working together, crunching their piece of the puzzle to all optimally
put it all back together.
Yeah.
(43:03):
And I mean, this is the micro-trip at work in terms of, I guess, the same type of thinking
going into it.
Like if you're breaking these processes apart, if you're saying like, no, I'm going to, you're
going to focus on this and this one's going to focus on this, but we're still going to
(43:26):
be like everything's going to be connected, then you're not running into cross traffic.
You're waiting like everything gets done and then gets put into the same either line of
networks or you are then able to run it through more seamlessly.
(43:50):
All of this to just try to create a human.
Or give me a summary of a book.
Yeah.
You got a couple different definitions here, or not definitions, but like, I guess, takes
on these different parts of AI.
(44:13):
Yeah.
So, in looking into this, deep learning, machine learning and neural networks all kind of got
blurred in the way people were trying to explain it all.
They all kind of work together.
It's all, I guess you could call it sub layers of the larger context of AI.
(44:39):
Yeah, but they are independent.
They're not all the same thing.
Correct.
So, next we'll jump into machine learning.
It's focused on enabling computers and machines to imitate the way that humans learn to perform
tasks autonomously and to improve their performance and accuracy through experience and exposure
to more data.
(45:00):
Supervised machine learning models are trained with labeled data sets, which allow the models
to learn and grow more accurate over time.
For example, an algorithm would be trained with pictures of dogs and other things all
labeled by humans, and the machine would learn ways to identify pictures of dogs on
its own.
(45:21):
Supervised machine learning is the most common type used today.
In unsupervised machine learning, a program looks for patterns in unlabeled data.
Unsupervised machine learning can find patterns or trends that people aren't explicitly looking
for.
For example, an unsupervised machine learning program could look through online sales data
(45:42):
and identify different types of clients making purchases.
And then last, reinforcement machine learning trains machines through trial and error to
take the best action by establishing a reward system.
Reinforcement learning can train models to play games or train autonomous vehicles to
drive by telling the machine when it made the right decisions, which helps it learn
(46:06):
over time what actions it should take.
Yep.
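The supervised case described above — labeled examples teaching a model to recognize something on its own — can be boiled down to a toy sketch. The data here is invented: each "picture" is just one made-up feature number instead of actual pixels:

```python
# Hypothetical labeled data: (feature, label) pairs. A toy stand-in for
# "pictures of dogs and other things all labeled by humans" — here each
# example is a single number (say, an ear-floppiness score).
labeled = [(0.9, "dog"), (0.8, "dog"), (0.1, "not dog"), (0.2, "not dog")]

def train_centroids(examples):
    # Supervised learning in miniature: average the examples per label.
    sums, counts = {}, {}
    for x, label in examples:
        sums[label] = sums.get(label, 0.0) + x
        counts[label] = counts.get(label, 0) + 1
    return {label: sums[label] / counts[label] for label in sums}

def predict(centroids, x):
    # Classify a new, unlabeled example by the nearest learned average.
    return min(centroids, key=lambda label: abs(centroids[label] - x))

model = train_centroids(labeled)
print(predict(model, 0.85))  # "dog"
```

More labeled examples nudge those averages closer to reality, which is the "grow more accurate over time" part of the definition.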
Have you seen the, not The Terminal List, but have you seen The Recruit?
I don't have it.
Okay.
So there's a, now I can't remember which, it's probably Netflix.
(46:27):
There's a Netflix show called The Recruit.
Lawyer, like 20 something lawyer is in the CIA now, but he's in, he's in like legal aid,
I guess they're not legal aid, legal, whatever you call it for the CIA.
(46:47):
And one of the, one of the problems that they, actually one of his colleagues gets sent to
deal with is a interrogation robot that's, they're using machine learning to try to make
a robot that can interrogate and it, because it's part of the CIA has been given some enhanced
(47:15):
interrogation options or whatever, but they forgot to program it with the information
that the people or the individual it is interrogating is not a robot.
So like the first interrogation it does, it rips a guy's arm.
(47:39):
Anyway, it was, it was pretty funny whenever it was like, so you didn't program it to know
that it's not interrogating another robot.
And the guy was like, it's way more complicated than that.
But yes,
They forgot the simple rule: do no harm to people.
(48:01):
You know, it didn't think it was doing harm to a person.
Thought it was interrogating another robot.
Yeah.
See, made a mistake.
Yeah.
Deep learning.
Deep learning.
Here we go.
Deep learning networks are neural networks with many layers.
A layered network can process extensive amounts of data and determine the weight of each link
(48:24):
in the network.
For example, in an image recognition system, some layers of the neural network might detect
individual features of a face like eyes, nose, or mouth, while another layer would be able
to tell whether those features appear in a way that indicates a face.
Like neural networks, deep learning is modeled on the way the human brain works and powers
(48:48):
many machine learning uses like autonomous vehicles, chatbots, and medical diagnostics.
The more layers you have, the more potential you have for doing complex things well.
Yeah.
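The "many layers" idea reads well as code: each layer's output feeds the next, and stacking more of them is what makes the network "deep." This sketch uses tiny made-up weights — a real network would learn millions of them from data:

```python
import math

def layer(inputs, weights, biases):
    # One dense layer: a weighted sum of the inputs for each neuron,
    # squashed through a sigmoid. Deep nets stack many of these.
    return [
        1 / (1 + math.exp(-(sum(w * x for w, x in zip(row, inputs)) + b)))
        for row, b in zip(weights, biases)
    ]

def forward(x, layers):
    # Each layer's output becomes the next layer's input — early layers
    # might detect eyes or a nose, later ones whether it adds up to a face.
    for weights, biases in layers:
        x = layer(x, weights, biases)
    return x

# Hypothetical tiny 2-layer network: 3 inputs -> 2 hidden -> 1 output.
net = [
    ([[0.5, -0.2, 0.1], [0.3, 0.8, -0.5]], [0.0, 0.1]),
    ([[1.0, -1.0]], [0.0]),
]
print(forward([1.0, 0.5, -1.0], net))
```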
(49:08):
That's the one that kind of freaks me out.
I think it's the one that has the most potential for what you hear.
Someone like Elon Musk who's running around saying that we should be afraid of AI and
what it's capable of.
I'm not seeing anything that backs up his fear, at least not yet.
(49:30):
That always makes me nervous because I feel like guys like that see the things that I
don't see.
Correct.
Meaning, not meaning that, well, they probably do understand things that I don't understand.
I'm not on the same level, but that they have access to things that I don't have access
to.
Yes.
(49:51):
It definitely, the way that some of our friends were talking about the AI that they were experiencing
when they went to a conference and got to, they were in the, is it the sphere?
Oh, yeah, in Vegas.
In Vegas.
Yeah.
And U2, I think, opened it up with that concert.
(50:13):
Yeah.
And they had the AI that was talking to them and it was like making conversation with people.
And it was, I think it was Cody that was talking about how he thinks that they actually programmed
to act a little more robotic because it would have freaked everyone out if it did what it
(50:42):
did and didn't seem robotic.
Potentially.
So there was, there was definitely part of that that I was like, I'm intrigued because
I've seen AI robot.
Some scary stuff there.
Oh yeah.
And then there's.
(51:04):
Terminator, scary for a different reason.
And then you got Johnny.
I, Robot was scary because it's starting to look real.
Yep.
Can't forget Johnny Five though.
Man, Johnny Five is alive.
Johnny Five is alive folks.
Love that show.
That's how we all thought it was going to happen.
(51:26):
We put some programming into something and then lightning would strike it and then it
would become human.
Yeah.
The new version of Frankenstein's monster.
There you go.
Yep.
Okay.
And so the last bit I've got here is challenges of artificial neural network models.
Although there is huge potential for leveraging artificial neural networks and machine learning.
(51:47):
The approach comes with some challenges.
Models are complex and it can be difficult to explain the reasoning behind a decision
in what in many cases is a black box operation.
This makes the issue of explainability a significant challenge and consideration.
That one is the one that kind of just strikes me as odd.
(52:11):
You built the model.
You're training the model.
You know the weights and measures that you're, I'm sorry, weights and biases that you're
putting into the model.
So why would you run into so much that you couldn't explain?
I think it's because we don't really understand the way that we do things.
Perhaps.
(52:33):
That's one of those things.
It's always been, what I've seen as the potential for a Pandora's box here is that we consider
ourselves experts on so much now.
Like, we got this, we know this and there's just too many things that we don't know.
(52:57):
And you start messing around with stuff and you know, that not to sound too paranoid or
whatever, but it is not surprising to me when there are things that, I mean, I've been working
in IT for too long to not, or to be surprised when I do everything right and it doesn't
(53:23):
work right.
Sure.
I mean, maybe in this regard, I take too much of the traditional programming approach to
this and going, if you're not getting out what you expected, you coded something wrong.
And maybe there's something unique in the way these models function that it does introduce
(53:46):
some uncertainty there.
And maybe that's why sometimes they're surprised on what they get.
Yeah.
I think it has a lot to do with that and just the complexity that's inherent in it.
You know, like I don't, I think we take a lot of things for granted and I think there's
a lot of things that we don't understand and then we think it's binary.
(54:10):
But it's not.
Fair.
Okay.
With all types of machine learning models, the accuracy of the final model depends heavily
on the quantity and quality of training data available.
Data, data, data.
Good data in, good data out, bad data in, bad data out.
(54:33):
Cali.
Yep.
That's why I still have people emailing me and calling me.
They want me to sign on to their service that I'm already using their service.
Don't do that.
Not a good look.
A model built with an artificial neural network needs even more data and resources to train
than a traditional machine learning model.
(54:55):
This means millions of data points in contrast to hundreds of thousands needed by a traditional
machine learning model.
And then next, the most complex artificial neural networks are often referred to as
deep neural networks, referencing the multi-layered network architecture.
(55:16):
Deep learning models are usually trained using labeled training data, which is data with
a defined input and output.
This is known as supervised machine learning, unlike unsupervised machine learning, which
uses unlabeled raw training data.
A model will learn the features and patterns within the labeled training data and learn
(55:36):
to perform an intended task through the examples in the training data.
Artificial neural networks need a huge amount of training data, more so than more traditional
machine learning algorithms.
This is the realm of big data, so many millions of data points may be required.
And this makes sense for why they keep trying to get as many data points from you as they
(56:01):
can.
Correct.
It's important to all of them that you keep plugging in.
It's important to all of them that you keep clicking on things.
It's important that they are allowed to have access to the app failures or whatever information
they can get off of any app.
(56:23):
They want to know more about you so that they can program better or better, I guess, predict
all that weighting and biasing.
And that brings me back to Johnny Five.
Need more input.
Need more input.
(56:44):
Yeah.
Such a good movie.
Yep.
All right.
So next, the need...
Way before its time.
Yes.
Such a large array of labeled quality data is a limiting factor to being able to develop
artificial neural network models.
Organizations are therefore limited to those that have access to the required big data.
(57:06):
The most powerful artificial neural network models have complex, multi-layered architecture.
These models require a huge amount of resources and power to process data sets.
This requires powerful resource-intensive GPU units and system architecture.
Again, the level of resources required is a limiting factor and challenge for organizations.
(57:30):
And that's why so many of these data centers are talking about building their own power
plants.
Yeah.
Because they need more power.
More power, more reliable power, more...
I mean, just every single time.
I don't need to have to worry about whether or not Homeboy flicked on his lights.
(57:50):
Right.
I mean, he doesn't...
All of this needs to work all of the time.
Yep.
And it makes sense why this is being dominated by these big companies.
Oh, yeah.
Deep pockets.
Yep.
All right.
And then last, the method of transfer learning is often used to lower the resource intensity.
(58:12):
In this process, extensive knowledge from other models, an existing artificial neural
network can be transferred or adapted when developing a new model.
This streamlines development as models aren't built from scratch each time, but can be
built from elements of existing models.
Yeah, that makes sense.
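That transfer idea — reuse an existing network's layers and only train a small new piece on top — can be sketched like so. Everything here is a toy: the "frozen base" is a stand-in for expensive pretrained layers, and the numbers are invented:

```python
# Transfer learning in miniature: keep the "base" of an existing model
# frozen and fit only a small new head on the new task.

def frozen_base(x):
    # Stands in for the pretrained layers we reuse as-is, unchanged.
    return [x * 0.7, x * -0.3]

def fit_head(examples, lr=0.1, steps=200):
    # Train just the new output layer (two weights) on top of the base —
    # far cheaper than training the whole network from scratch.
    w = [0.0, 0.0]
    for _ in range(steps):
        for x, y in examples:
            feats = frozen_base(x)
            pred = sum(wi * f for wi, f in zip(w, feats))
            err = pred - y
            w = [wi - lr * err * f for wi, f in zip(w, feats)]
    return w

head = fit_head([(1.0, 1.0), (2.0, 2.0)])  # teach it y = x through the base
pred = sum(wi * f for wi, f in zip(head, frozen_base(3.0)))
print(pred)  # close to 3.0
```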
And that, I think, is how the Chinese pulled off what they did with DeepSeek, but they're
(58:36):
just not one to talk about it.
I mean, why would they?
That doesn't help them come out on top, but they didn't start from scratch.
They started from existing data.
They started from existing models.
They started from, oh, but it didn't cost us that much.
(58:58):
I'm like, okay, but probably it would have if you started from zero.
From the ground up, absolutely.
And you know, like there's just, I don't know.
To me, that's why you don't take it seriously when somebody says that it wasn't that hard
(59:20):
to do that.
Well, the idea was already out there.
Yeah.
I don't, it takes a lot of time to come up with the idea and how to actually execute
it when you've got an idea of what it may be or what some of the problems may already
be, then it's easier to tweak it.
(59:41):
But regardless, like I think all of these are just going to keep getting better as long
as we don't let things like that kind of setback mess with us.
(01:00:03):
Yeah, what was so exciting to me about talking about this is kind of funny.
I was actually thinking about this from a different perspective or a different, yeah,
I guess perspective whenever you brought it up to me, because I realized how much I was
(01:00:27):
being fed in social media AI solutions to different problems.
I have a friend who was talking to me recently about how he thinks that what young people
(01:00:54):
need to be putting effort into now is becoming AI gurus or even like AI sherpas, I think
was the term that was used in the conversation that we were having.
Knowing so much about different AI tools and what AI is, which tools are using which, I
(01:01:23):
guess, functions of AI really well in order to be the guy that when people ask a question
about, well, what does my business need to be using AI for?
This person would be able to say like, oh, these are the best resources for that.
This is maybe a way that you haven't been thinking about using AI that is out there and whatever.
(01:01:50):
I just took a second, not a second, I took 30 minutes of my scrolling and I wasn't actually
scrolling on Instagram in order to see things that would interest me on Instagram.
(01:02:11):
I just scrolled through Instagram to see how many AI related advertisements came up.
In 30 minutes, one, two, three, four, five, six, seven, eight, nine, 10, 11, 12, 13, 14,
(01:02:40):
15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26 different advertisements for different
AI products.
Now, three of those are AI classes, which I thought was really funny considering the
conversation that I had had with a friend of mine about people needing to be like gurus
in AI and to be able to know what tools are out there and how best to use AI in any given,
(01:03:11):
I guess, sector.
And one of them is straight up from the University of Texas at Austin.
I think the other two were just people's classes on AI, which seemed more homegrown than anything
else, but I mean, there's a class at the University of Texas in Austin.
(01:03:38):
And I don't know if that's a full degree.
Kind of seemed like it from the advertisement, but maybe it's just like an add-on.
It didn't seem like a certificate.
It wasn't like that.
But anyway, it's a lot.
You know, I had been looking at VR technology lately for some other things that I was thinking
(01:04:02):
about and the hand in hand of AI and VR right now is kind of a next five years growth model,
it seems like.
I think both of those will see just a different level inside of both of those in the next
(01:04:24):
five years is what I'm predicting.
I mean, it may take 10, but I would think five will see pretty serious growth.
Anyway, there was a lot there.
And I mean, it ranges from the huge majority of what I think about whenever I think about
(01:04:49):
AI right now because of these advertisements are these like handhelds or wearable tech,
whether it's glasses or little lanyards or just a voice recorder.
I kid you not, Chris, there is a little device that's I did.
(01:05:14):
Again, I didn't spend a whole lot of time looking into each one.
I was just trying to get through how many there were in that period of time, but it's
a stinking breathalyzer for your diet.
Really?
It's analyzing your breath and then telling you what you need to eat next.
(01:05:35):
That I.
Okay, so I'm just going to say it.
I thought it made me think of a movie.
Half Baked.
Which one?
Huh?
Half Baked.
I think it's, well, maybe I'm wrong.
Maybe, maybe I'm wrong.
Maybe it's actually, no, I'm wrong.
(01:05:56):
It's, I think it's Bio-Dome.
Oh, yeah, no, it is Bio-Dome.
I know which one you're talking about.
They burp and blow it in each other's faces and they take a breath and tell them what
they ate.
Yes.
Yes.
It's actually one of the Baldwin boys with Pauly Shore.
Yep, that's it.
I can't remember which one.
(01:06:16):
Yeah, so, yeah, sorry, wrong movie reference, but we ultimately got there.
Bio-Dome.
Bio-Dome.
Yes.
Yep.
So, that's what that made me think of.
You're saying that it's a breathalyzer for what you eat.
Yeah, I mean, it's not telling you what you ate.
It's telling you what you need to eat, what you're deficient in based on what you, you
(01:06:42):
know, what your breath smells like.
I, it was, I mean, it was hilarious.
It's not smell, you know, anyway.
Sure.
That one, that one got me because I was, I was, you know, all of these seem to be, or
not seem to be, most of the like apps and different wearables and glasses, it's pretty
(01:07:04):
obvious the productivity, sorry, productivity side of this that it's aimed at.
This is, this is trying to take things off of your plate.
It's trying to make certain decisions easier, you know, I think there's more ADD, ADHD in
(01:07:26):
our world today.
Yes.
And, and that like executive function paralysis and the, the idea that you, you can offload
some of that is really attractive.
I mean, the number of these apps, especially that are related to calendar management or,
(01:07:53):
or just transcription that can be turned into either notes or tasks.
I was like, wow, it is, it is for sure making long strides in this.
And I think, you know, the, the nervousness that people have is they see things like that
and they're like, oh, it's stealing my information.
(01:08:16):
I mean, it is, I don't, I don't know how to tell you like that, that, that iterative
aspect of AI where it has to take what you did, like what you, what it received from
you, what you asked of it, and then what your response to what it gave you, like it
(01:08:37):
has to keep putting that up against everybody else's to see what's, you know, normal where
that, where that weight needs to be, where the bias needs to be.
So it's really, it's really funny to me.
I think the real, it's not, you know, it's not the danger here in my mind is not machine
(01:09:07):
is taking over in like a, you know, the, the, you know, Skynet fear that we have, it's more
herd mentality.
Yes.
It's more, it's, I wouldn't even call it, yeah, I wouldn't even call it echo chamber.
It's just the, at some point, I think we choose what the machine gives us in its first attempt.
(01:09:45):
And when that happens, when that's the normal or even second attempt, whatever the like
predicted option is there, like if, if we always wait until the second attempt or if
we always take the first attempt or whatever, like that becomes the, I mean, it's going
(01:10:10):
to learn that that's the thing.
It's waiting for the second option or it's, it's going to give us what it thinks the first
time and then they always take it and then it doesn't learn anymore and then we're stuck.
Yep.
If we rely on the computer to think for us, then the computer will think for us.
(01:10:33):
And that's, that, that to me is more makes me more nervous than the other side of it.
Oh, well, you said you wanted peace, so that means you need to be destroyed.
Yep.
Okay.
Well, yeah.
Yeah, I think for me, we may have given it too much credit there.
(01:10:54):
It's two things.
It's one, it's, you're going to give up more and more control to the computer than you
should give it.
Right.
And that leads you down a path of a thousand paper cuts until you realize, oh, I've given
up way more and it's too late.
Yep.
Type of a thing or the idea that this starts to become so common that you get a couple of
(01:11:21):
generations that just grow up and this is just what you do.
This is life, kind of like what's happened with smartphones.
And you just have people who just like, oh yeah, this is just kind of the way it is.
And they just inherently trust the AI and don't question it, don't know how to reason
(01:11:42):
and essentially fact check and think for themselves.
And you just kind of end up in a, in an odd world that leads to an idiocracy.
Yeah.
I think that those are valid concerns.
(01:12:03):
We try to make life easier and then you realize by making life easier, you took away the things
that made you better.
That almost sounds like something Jordan Peterson would say.
I'm just saying I'm not Jordan Peterson.
I'm not smart enough to be.
(01:12:24):
Anyway, super interesting.
We may have another show on this just because of the amount that there is to it.
But thanks for bringing this up.
I'm grateful to get to talk about it and drink this beer while I'm doing it.
Absolutely.
Never hurts to have a good beer with a friend.
(01:12:46):
Nope.
I'd agree with that.
Well, and not just because you said it.
Yeah.
No boostagrams tonight folks.
Jason Corden didn't keep up his end of the show.
Jason, I mean, I guess we've got to have him on again.
Yeah.
I actually had somebody ask me if we were going to have more people come on the show
(01:13:12):
as a guest.
We just need to recruit.
Yeah.
If you want to be on the show, let us know.
Yeah.
Anyway, I'm all for it, especially if there's, I mean, this would be a good, this would be
a good show to have Cody on.
Yeah.
He's probably got a lot of thoughts in this area.
(01:13:33):
Yeah.
Him and probably Robbie Jones.
Oh, Robbie Jones.
Oh, Brad Wofford.
Yeah.
Brad Wofford would love to talk about at least the, what are the two that he uses?
(01:13:54):
Oh, for the art.
Yeah.
He uses one for art, he uses another for either copy or like even songwriting.
Jordan shared the song that we used in our D Now video that just got, like our church
(01:14:14):
just did Disciple Now this past weekend.
And the song that they used or that the guy who made the video used for the video was
completely AI generated.
I was like, you're listening to the song, you're like, oh, it's like a contemporary
(01:14:37):
Christian music.
And then all of a sudden it said like our church name and I was like, what?
Oh, no, that, like that happened.
This is not a song that was independently recorded by an artist.
(01:14:58):
Yeah.
So I wonder if Brad is either using Leonardo, there's Da Vinci and there's DALL-E for the
art at least.
I didn't think that's the one, I could be wrong.
I bet Chad knows all of this.
I'll just ask him.
But yeah, we should definitely, let's put our heads together in the next week, try to
(01:15:22):
figure out if there's somebody that can come talk to us about their particular use of AI.
Yep.
Especially if they like beer.
Oh yeah.
Absolutely.
That sounds like a good show we could put together.
I like it.
Thanks, Chris.
Enjoyed it, man.
(01:15:43):
Cheers.
Cheers.
One of these remotely.
Empty.
Right?
I feel so much better about this now that we've done it.
Yeah.
And it worked.
Yep.
I'm afraid I'm empty too.
Thanks, Francisco.
I appreciate you making this possible.
Do love the Pod Mobile.
(01:16:04):
Pod Mobile, get you one.
Okay.
See you all later.
Yep.
Excellent.
I could listen to another hour.
(01:16:32):
There is no way anything can be this good.