September 12, 2024 · 29 mins
Robert Greenway from the Heritage Foundation talks about who's to blame for Alabama's growing fentanyl crisis. Then Michelle Causton talks about the ethical implications of A.I. We end with Dr. Parminder Jassal, CEO of Unmudl, who talks about the tech skills most needed in today's job market.

Episode Transcript

Available transcripts are automatically generated. Complete accuracy is not guaranteed.
Speaker 1 (00:01):
In Alabama, there is a rapidly expanding fentanyl crisis,
and the root cause might not be Mexico. It could
actually be China. Hello, I'm John Mounts, and today on
Viewpoint Alabama, I'm speaking with Robert Greenway. He's the
director of the Heritage Foundation's Allison Center for National Security.
Robert, welcome to the show.

Speaker 2 (00:19):
Thanks very much for having me.

Speaker 1 (00:21):
We always hear about fentanyl, and we think of drug
cartels in Mexico as being the root cause. But I guess
they get their fentanyl from somewhere, and I understand this
new report says they're getting that fentanyl from China. And
this isn't a one-off. This is like a concerted effort.

Speaker 2 (00:36):
There's no question about it. And it's not just us.
It's also the work of the House Select Committee on China.
They came to the same conclusion in a report. We
expanded on that work and looked at the responsibility, the
culpability of the Chinese Communist Party, and it's clear nearly
all of the fentanyl that's been killing over seventy thousand Americans
each year for the last three years is coming from China,

(00:57):
through Mexico and into the United States.

Speaker 1 (01:00):
And fentanyl is a fairly easy drug for them
to make because the chemicals are readily available.

Speaker 2 (01:07):
Yeah. What's interesting is that it is illegal in China
for good reason, and yet its mass production for export
to the United States is incentivized. So the Chinese
Communist Party knows the deadly threat that it constitutes. They
are providing massive tax benefits, in some cases the highest
benefits they provide to manufacturers, and ensuring that the product
is shipped seamlessly to Mexico for infiltration into the United States.

(01:30):
So it is a very deliberate, a very conscious effort
to kill Americans.

Speaker 3 (01:34):
Robert.

Speaker 1 (01:34):
Is there really any good use for fentanyl or is
it exclusively used to heighten the high given by other drugs?

Speaker 2 (01:43):
No, there is in fact a legitimate use. It was discovered
really as an opioid, as a painkiller, and it is still
used in hospitals, where it can be prescribed. One
of the ironies is that if you overdose on fentanyl, accidentally
or otherwise, you're probably going to be given fentanyl as
a painkiller when you're admitted to the hospital. Ironically, it's
unlikely to do a great deal of good. It's all

(02:04):
too often fatal. So it does have a prescription purpose,
but it is illicitly introduced into the market in this case,
and all of what China is shipping to Mexico for
infiltration into the United States is illicit, and it
ends up in all kinds of things that it's not
supposed to be in. And so you may think you're
getting an off-the-market or illicit Xanax, but you'll

(02:26):
be getting just enough fentanyl to kill you. And unfortunately,
it is killing. More Americans now are dying each year from
fentanyl than have died in all the conflicts since the
Second World War.

Speaker 3 (02:36):
Is the reason?

Speaker 1 (02:37):
Is the reason it's so deadly because it's just poorly measured?
Or is it because the people who are taking it
aren't expecting it, you know, their bodies not having built up
a tolerance? Much like you hear cases with
first-time users of, say, cocaine, where that can
cause a heart attack. Is it something similar to that?

Speaker 2 (02:56):
It can be, but chiefly it's the power of the
drug itself. It doesn't take much, and when it's introduced
into the system, if it's introduced in quantities that exceed
the human capacity, and that's what happens all too often
on the illicit market, it will kill you. And so
it's the quantity and the power of the drug that
make it so incredibly deadly. It's also super addictive. And

(03:19):
again, with anything on the illicit market it is going to
be difficult to control purity levels and other contaminants. And
there's also the combination of fentanyl with other things that
can exacerbate the effects of it.

Speaker 1 (03:30):
And Robert, in China, you said it is illegal for
Chinese citizens to use, but I guess it's not
illegal for Chinese companies to manufacture the drug.

Speaker 2 (03:42):
Correct. And so synthetic opioids, including fentanyl, are almost entirely
illegal in China, and they're made under a tax subsidy
for export only. And that tells you all you need
to know about the Chinese Communist Party.

Speaker 1 (03:56):
Is it that they just don't care, or
is it more sinister, like they actually mean to
use it as a way to, sort of... because I
still have my questions as to exactly how the
COVID virus made it out of that lab. I still
think it came out of a lab, and how it
made it over here. Like there might have been some
research going on; there was a reason why it was

(04:17):
introduced to the world. Is it the same case here,
where they would like this to be out there
affecting the Western world, even though it doesn't affect them
because it's illegal there?

Speaker 2 (04:25):
Yeah, so you're right to bring up the COVID-19 comparison.
I think minimally the facts all support negligence at a
catastrophic level, and a cover-up which made it even worse,
in the case of COVID. In the case of fentanyl,
it's much more sinister. They know exactly what it's doing.
They are the sole source in many cases of this
particular chemical, and they know exactly what's happening as they

(04:47):
export it. And you know, the worst we've done
is bring it up to them as a topic of conversation.
But they've paid no price, which is also the reason
we wrote our research paper. At the end of the day,
this is very deliberately done to undermine the United States
and the fabric of our society and kill Americans, and that's
what it's doing.

Speaker 1 (05:04):
So Robert, of course we should be aware, and I
think most people are aware, of how dangerous fentanyl is.
But in addition to that, is there something that the
United States government as a whole can do to China
to hold them accountable for this happening?

Speaker 2 (05:16):
We are dumping tens of billions of dollars a year
into China in the form of foreign direct investment. We're
giving them the massive advantages of Most Favored Nation status
to import Chinese goods into the American marketplace, often at
the expense of American workers and jobs. All of that
occurs to their benefit. They're getting paid while they're producing
this deadly threat to the United States. That all has

(05:37):
to stop until they correct this problem. Second, Mexico, as
you said, is not completely without culpability here. They also
have to bear a price, and they also have a
tremendous economic connection to the United States. None of it
has been impacted as a result of fentanyl. So the
anger needs to be there, directed at our elected officials,
to point towards China and Mexico, and concrete steps

(06:00):
can and should be taken right now. They're getting paid
to kill Americans. That's the fundamental reality.

Speaker 1 (06:05):
Great work. Robert Greenway with the Heritage Foundation, thank you
so much for joining us today on Viewpoint Alabama.

Speaker 2 (06:10):
My great pleasure.

Speaker 4 (06:11):
Thanks for having me. You're listening to Viewpoint Alabama, a
public affairs program from the Alabama Radio Network.

Speaker 1 (06:17):
Welcome back to Viewpoint Alabama. My name is John Mounts.
I'm hosting the show this week. And joining me right
now is business and ethics expert Michelle Causton. She
is here to talk about AI and the ethical implications
of utilizing AI.

Speaker 3 (06:33):
Michelle, welcome to the show.

Speaker 5 (06:35):
Hey, John, thanks so much for having me. Delighted to
be here.

Speaker 3 (06:38):
So, AI is just like a lot of things.

Speaker 1 (06:41):
It's a tool, and tools can be used for good
or they can be used for bad. So how do
we differentiate between that when using AI to perform tasks
that we would rather not do ourselves?

Speaker 5 (06:54):
Yeah, exactly, or sometimes even tasks we like to
do, but we can see that perhaps it would be
more efficient. So yeah, there's a lot of temptation to
use AI, and really it's very simple. It's always
difficult when you try to do it, but the
simple thing to start with is: is it wrong

Speaker 2 (07:13):
To use AI?

Speaker 5 (07:14):
So any decision that you're trying to make, you start
with that lens and think about it. Is it wrong?
And from an ethics perspective, there are three ways to be wrong.
And so the first is: is it against the law
or a specific company policy or rule, something that's quite clear?
And if it is, of course, then you probably shouldn't
do it. The second thing is to look around and say,

(07:36):
in my socioeconomic group, and you want to expand that
to include everyone that's going to be affected by what
you're planning on using AI for: would they think it
was wrong if, you know, if I told them? And
anytime you're tempted to not tell somebody something, you know,
that probably should give you some discomfort. And the third
way is that it's a departure from the truth. And

(07:59):
so if you're planning on using AI and presenting that as
absolutely original work, if you're being paid to do creative work,
if you're misleading someone to think that you're more competent
or better at it than you really are, obviously you
probably shouldn't do it. So once you think about that,
if you're still not one hundred percent sure, you know,

(08:21):
maybe it's a little bit wrong, so you're
still wondering about it, you want to then look at
it from the ethical paradigms. And in this case, for
this topic, I think the one that is relevant is
simply one that we call individual versus community. So whenever

(08:42):
we're doing something that benefits ourselves personally, you know, we
might be suspicious that that's being kind of self-serving.
You need to then think about who else is benefiting
from this, and often it's making everything easier, the company's
doing better. So there may not be an issue there,
where you can balance what you're getting out of it
(09:03):
versus what the broader community is getting out of it,
and if you can see that it's fair and equitable, you're
probably okay to do it. We've been using boilerplates
and templates for years for communicating in business, and nobody
questioned whether that was ethical or not.
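
(The three-part "is it wrong" test Causton describes is essentially a checklist, so it can be written down as one. Here is a minimal Python sketch; the function name and inputs are hypothetical, added for illustration, not anything from the broadcast.)

# The three ways to be wrong, per the discussion above:
# 1) it breaks a law, company policy, or rule;
# 2) the people affected would think it wrong if you told them;
# 3) it's a departure from the truth.
def is_it_wrong(breaks_rule: bool,
                affected_would_object: bool,
                misrepresents_truth: bool) -> bool:
    """Return True if any of the three tests flags the AI use as wrong."""
    return breaks_rule or affected_would_object or misrepresents_truth

# Example: passing off AI output as wholly original paid creative work
# fails the truth test, so the checklist flags it.
print(is_it_wrong(False, False, True))  # True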

Speaker 6 (09:23):
Right, that's true.

Speaker 5 (09:23):
I wanted you to use it.

Speaker 1 (09:25):
Right, because it's a cut-and-paste situation of,
you know, we've got somebody applying for a job. Okay, well,
here is the application, and there's a bunch of stuff
that you just don't worry about because it doesn't apply in
this situation. You're right, we've been doing that for a
long time, and I guess AI is just able
to generate stuff that seems customized and yet isn't exactly,

(09:46):
because it's not, well, a human doing it. You mentioned truth,
and I think that's interesting, because AI has
this strange ability to create its own truth. I've noticed
where you ask AI to do something, it will generate...
I'm not talking about, like, say, garbage in, garbage out,
but it actually goes and looks at an amalgam
of truths and makes a brand-new truth that never

(10:07):
before existed.

Speaker 5 (10:09):
Yeah. Yeah, it's still hallucinating, which is great fun when
you can make it do it. However, yeah, that creates
a problem. And whenever anyone is relying on an expert
for something, they need to vet that. So whether it's
you phoning me and talking about this, you know, if

(10:29):
I'm, you know, if I'm talking nonsense, then you shouldn't
be listening to me. It's as simple as that. And AI,
we shouldn't just assume that it knows everything. We, to
the extent possible, want to take a look at where it's
getting its information. That one's difficult. It's a black box.
We don't really know. So you want to make sure

(10:50):
it makes sense, and then you just maybe check it
against some other places that are already AI as well.
Google searches and things, they're all AI-driven. This isn't new;
it's been around since the nineteen fifties, when
they created the term. But the same basic rules apply:
be thoughtful, be mindful. This is an opportunity to really

(11:12):
elevate our critical thinking skills. AI will take the drudgery out,
and then we can really look at things critically and
make, one would hope, better decisions.

Speaker 1 (11:22):
And I think a big part of it is
the fact that AI probably still should not be placed
in an executive position, a decision-making position. It's more
of a place to create, as you said,
the boilerplate, and then check over that boilerplate. But
it really shouldn't be deciding, you know, whether we
should do this or not do this, whether we
should fire this person or kill this person, or do

(11:44):
the surgery on this person, because those will probably result
in very bad decisions.

Speaker 5 (11:48):
That's absolutely right, John, you nailed it. Because with AI,
one of the problems that exists is it doesn't create
anything new, although as you say, it does create some
strange little truths that it makes up. It's getting better
at that, so one would hope that'll go away. But
what it is doing is basing its information on
what it already knows. And we do know that when
we do that, even in our own thought processes,

(12:11):
we overlook things. So what are the unknowns? What are
the variables in this data that we're not immediately seeing?
And you know, one of the things that we've been
using a long time in banking is that they use
algorithms to make a determination as to whether you should
get a loan or not. And we know that that
has resulted in some bad decisions, both ways: where they've

(12:32):
granted money when, you know, they really shouldn't have done so,
and the other way, where people are disenfranchised because of
certain socioeconomic conditions that exist.

Speaker 1 (12:42):
Well, another ethical implication I was just thinking of is
the fact that AI essentially has no ethics of
its own, number one. And number two, it's not a person,
and as such, if it makes a bad decision, who's
to be held accountable? The AI? Because you can't put
AI on the stand. If you could, it would...

Speaker 5 (13:01):
Well, this is something, of course, that's being tested already,
where people are saying, well, we relied on this. And
I think there's a, you know, we would like to
forgive people for doing their very best and still coming
up with a bad decision. When I say a bad decision,
that's in hindsight. At the time, it was a good decision.

(13:22):
It was based on all of the information that was
available and the parameters that were set out. We used financial modeling, for example,
and it all made perfect sense. And yet in hindsight
we look back and we go, oh, I wish I
hadn't made that decision. We're seeing it all the time
in business. Business loves numbers, right? They think that

(13:42):
numbers are so precise, so accurate. And I've been talking
for years about the danger of looking at what your
computer spews out to you: the fact that it's correct
to seventeen decimal places does not actually make it more correct.

Speaker 3 (13:56):
Right. In science they call that significant digits.

Speaker 1 (14:00):
As you multiply two numbers together, or you divide
two numbers, and you get this number that is, you know,
like you said, seventeen decimal places. But in reality, the
two numbers that went in were not seventeen decimal places.
It's not that accurate. And that's the same thing with AI.
It can create a lot of noise, we could call it,
that really can't be relied on for that granular truth

(14:20):
that we think it's providing.
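
(To make that significant-digits point concrete, here is a minimal Python sketch. The measurements are made up for illustration; nothing below comes from the broadcast.)

# Two measurements, each known to only two significant figures.
length = 1.2    # metres, uncertain in the last digit
width = 0.73    # metres, uncertain in the last digit

ratio = length / width
print(ratio)            # ~1.6438356164383561: a long tail of digits
                        # that the inputs cannot actually support

# Rounding to the precision of the inputs is the honest answer.
print(round(ratio, 1))  # 1.6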

Speaker 5 (14:23):
Exactly correct. Yeah, this is the problem. I mean, we've
been trying to automate tough decisions for years. Why? Because
as human beings, we often find that we have not
made the optimal decision, again with hindsight, right, where we
wish we had known more, we wish we had had
more information available. We feel that even in our personal

(14:43):
lives when we're making decisions. So now we've come along
with this tool, and we've been building this kind of concept
for years, where information isn't the problem. We've got that,
and compiling it now isn't the problem, because AI is
going to do that so quickly. It's amazing. But
we have to remember that it still requires human intervention,

(15:05):
and I think it's going to require that for, well,
certainly the foreseeable future, maybe at higher and higher levels.
I can't predict the future any more than anyone else can,
but we have terrible difficulty as human beings realizing that
we cannot predict the future. We've been trying. We've been,
you know, rolling dice and doing all sorts of mystical

(15:26):
things to try and predict the future, from reading your
horoscope or the lines on your hands to using algorithms
and very, very sophisticated modeling, and it still doesn't always
work the way we thought, and we shake our heads
and we think we can do better. Maybe we have
to come to grips with the fact that at some
point this is imprecise and will remain imprecise, and we

(15:48):
can only do the best that we can do.

Speaker 1 (15:51):
This is Viewpoint Alabama on the Alabama Radio Network. My
name is John Mounts, speaking with Michelle Causton. She's an
MBA, business expert, and seasoned business leader, and we're talking
about AI and ethics. And Michelle, let me ask you
this question: how do we spot, because I know it's
getting more and more difficult, how do we spot and
challenge AI when we're presented with it? Because it's not

(16:14):
to say that it's wrong, but at least it should
be, I guess, you know, inspected at a more critical
level than if it's done by a real person who,
you know, has some skin in the game. And we
know that they don't want to crash the plane, because
here's what you always think about when you're on an
airplane: you always have a little bit of assurance of,
hey, the guy in the cockpit,

(16:36):
he might not even be that great, but he at
least is flying on the same plane as me, so
it's in his best interest not to crash this plane.
Now the AI, it's not necessarily in its best interest
not to crash the plane. So how do we spot
and how do we challenge AI?

Speaker 5 (16:53):
Oh my, this gets, of course, to the real heart
of it, and part of it is unfortunate: we're a
little behind on this, because we need to change what we're teaching.
We're still, in a lot of business
programs, teaching people how to do things that computers
do and do very, very well, and so we need

(17:15):
to stop doing that, and we need to spend more
time talking about the implications and what we garner from this,
how we can use that information. We really need to
amp up our critical thinking skills and not just leave
those in the social sciences, but bring them into business.
We've got to stop relying on catchphrases and things that

(17:35):
were true fifty years ago to make decisions today. So
part of the solution will be in how we train
people going forward, in how we hire people, what skills
we would like them to have, and all of us
paying attention. So in the short term, the best we
can do is to pay attention and do our best

(17:56):
to try and spot and to challenge, to ask the
important questions.

Speaker 1 (18:02):
Because really the cat is out of the bag. We're
not going to be able to make AI go away.
It's here to stay, and we have to do our
best to, I guess, mitigate the damage and use it
to make our world better and not worse.

Speaker 5 (18:16):
That's exactly right. And I think that the idea that
it's not going away is really important, because I hear
a lot of people getting very, very anxious when I
talk to people about AI. Everyone has an opinion, and
of course often very informed, and they're concerned about identity
theft and all sorts of things that are things to be
concerned about, but aren't necessarily relevant to the discussion at hand,

(18:38):
which is around AI. We kind of didn't do a
great job with computers, but again, we've learned a
lot just from that. We can take some of that learning,
we can apply it here. The government may, and I'm
hesitant to say this, may play a role in how AI
is developed, try and intervene with some limitations, because of course,

(19:01):
in the wrong hands, the technology, as you pointed out,
can be misused, and as a user, you'll be stuck.
You won't know, you'll have no way of discerning that,
so those ethical decisions are at a different pay grade. I
don't know anything about how they actually write these things
or, you know, get it out there searching things. There are

(19:22):
copyright issues, there are all sorts of things to be concerned about.
I have great faith that we will do a fairly
good job of getting there, provided again that we're mindful
and we don't only focus on how can we make
more money quicker, how can we replace people with machines,
because that's not a good approach.

Speaker 1 (19:43):
But machines do the drudgery stuff, the stuff we
don't want to have to do anymore. That makes sense.
That's the reason why we have, you know... we don't
have people out in the field picking things one
seed at a time. Now we have a great big
harvester that comes and does that work for us. A
computer or AI can do a lot of the same
sort of things, and it does amazing things. You've seen
some of the creative stuff where it melds two different

(20:05):
works of art together and makes this great
and wonderful thing that it would take a person... they could
do it, but it would take them forever. So it does
do some good work. But like you said, we
just need a check and a balance on what it does.
And like I said, I wouldn't put it in charge
of anything, at least not anytime real soon.

Speaker 5 (20:23):
You got that right. I think some of the possibilities
in medicine are quite fascinating, because of that ability to
take a huge amount of data and apply it in
certain circumstances, and the results are more obvious
quite often. And also they are dealing with human beings,
so they're very mindful about their mistakes and how they're

(20:44):
going forward. In business, we don't have that. We have
lagging indicators. By the time we realize something's gone wrong,
it's too late. So this is a slightly different problem.
And you know, as an accountant, I hate
to keep saying we've got to stop blindly believing
the numbers. If I hear one more time "numbers don't lie,"

(21:06):
I'm going to have to scream. Right? It's just, yes,
they do, and they do it very well, thank you
very much.

Speaker 1 (21:12):
Very true, very true. Well, all things to keep in mind
as we go forward into the brave new world, as
Aldous Huxley would say, of AI. I'd like to thank you
so much, Michelle Causton. She is a business expert and
an ethics expert.

Speaker 3 (21:25):
Thank you so much for joining us today on Viewpoint Alabama.

Speaker 5 (21:29):
Thank you very much for having me. It has been
a trip. Thank you.

Speaker 4 (21:32):
You're listening to Viewpoint Alabama, a public affairs program from
the Alabama Radio Network.

Speaker 1 (21:36):
There's something that I really worry about, because there are a
lot of people out there who have what I call
do-nothing jobs. They go into work and they answer emails,
and they get on some Zoom calls, go out to
lunch, come back, do some more of that, go home.
And it seems like here in America, we're not making stuff,
we're not building stuff, a lot of people
are not contributing. And that's the reason why I wanted

(21:56):
to talk with Dr. Parminder Jassal. She is the
CEO of Unmudl, a leading skills-to-jobs marketplace for
developing skills through hands-on training, propelling workers into roles
as people who do stuff. Dr. Jassal, welcome

Speaker 3 (22:11):
To our show.

Speaker 6 (22:13):
Thank you so much for having me.

Speaker 3 (22:15):
John.

Speaker 1 (22:16):
And you see... you probably see this all over the place,
where people, they're not doing anything. Sometimes, okay,
we say we're more of a labor economy, or
a skills economy, than, say, a product manufacturing economy.
And even then, sometimes the skills... I can't figure out
what skills people have. So tell me what it is
that you guys do and how you're helping school the

(22:37):
next generation of workers to do something with their hands
and with their minds and create a product.

Speaker 6 (22:45):
Love that question. We focus on it twenty-four hours
a day. So I think what is really missing out
there in the conversation is this whole idea of a technician.
And when we start thinking technicians, there are semiconductor technicians, electronics technicians, manufacturing, controls, instrumentation,

(23:07):
all kinds. It's this emerging layer of the workforce, and
there's this technician gap. It pays a minimum of fifty-five
thousand with full benefits as a position, but nobody really grows up
thinking about that. So what is a technician? What are
the skills that you need? It's a four-part skill

(23:28):
set, and that skill set is around, number one, technical knowledge,
kind of electrical, mechanical, programming. Hands-on is number two; those
are skills and experiences, like getting your hands on safety
procedures and equipment, like, in the semiconductor industry, wafer steppers and spectrum analyzers.

(23:53):
Number three is analytical problem-solving and troubleshooting, especially with
attention to detail. And then, lastly, what we all
need: people skills. You need to be able to talk
to people. You need to be able to work with them,
you know, work through a conflict, resolve it, work on a team,
or lead a team if that's what you're doing. So
it's that combination.

Speaker 1 (24:14):
And that last one is so important. I hear this
all the time from our managers here at the radio station,
that quote, we all work in sales. And that's not
just in the radio industry, that's really every industry. We're
all, to a degree, salespeople. Because we're selling a product,
we're also selling ourselves and the value that we bring
to the product. And I think that's something that's often
overlooked in the job force these days.

Speaker 6 (24:38):
You're exactly right. I think with sales, everybody thinks salesman
or salesperson, or, you know, somebody that's slimy and
trying to get away with things. But you're exactly right.
Sales is just... simply think of TikTok, think of Instagram,
think of all social media. What are people doing on it?

(24:59):
It's all being able to set yourself up and to
be able to communicate, whether it is communicating in voice,
whether it's communicating visually, whether it's communicating using gestures or writing;
it's all around communication.

Speaker 1 (25:20):
And Dr. Jassal, another thing that I'm concerned about: I
have a daughter, she's fourteen, almost fifteen, and
she sometimes asks me, like when she's doing her math homework,
why do I have to know this Pythagorean theorem or
this thing or that thing? And I tell her, look,
the reality is, you're probably not going to use this
unless you're actually going to go into, like, say, I
don't know, building houses. You might not ever use the

(25:42):
Pythagorean theorem again. But it is the learning how
to think, and that's really what education is about.

Speaker 3 (25:49):
Learning.

Speaker 1 (25:49):
Teaching you not facts, at least it shouldn't be memorizing
the facts, memorizing the test, but learning how to think
and learning how to solve a problem. Problem-solving skills
seem like they're so overlooked and so important in being
able to really take on the challenges and move to
the next level in whatever job you have. And that's
a lot of what you do at Unmudl.

Speaker 6 (26:09):
Right. Yeah, that's exactly right. I love the advice that
you're giving your daughter. And even in a
time period where folks are still learning the Pythagorean theorem
or whatever it is, high-level calculus, in high school,

(26:30):
we do need to take a look at that, because
you're right, the goal is to develop critical thinking. How
do you attack a problem? How do you address the problem?

Speaker 1 (26:40):
And Dr. Jassal, one of the things that always bothers
me is you hear about these kids going to college
and coming out with a four-year art history
degree. Not to say there's anything wrong with that. Look,
I have a degree in communication, so I understand. But
there are a lot of people going to college and getting
degrees that are not going to be useful or
profitable in our marketplace. If you could offer those who
are entering college now or soon an idea: what is

(27:04):
the skill set, what is the major they should be pursuing,
so they're prepared to go on, enter the workforce, and
have a job where they're able to support themselves?

Speaker 6 (27:14):
The jobs that are out there are jobs that solve problems.
How do I build an AI farm? How do I
make sure that cars can be controlled to avoid hitting
a passenger, or avoid hitting people in the street? Yeah, pedestrians,

(27:35):
there we go.

Speaker 3 (27:36):
Or passengers either.

Speaker 6 (27:41):
I think the problem-solving is where we're at. Computers
can do, calculators can do, the work of information processing.
So what humans do is solve problems, and we do
it really well.

Speaker 1 (27:55):
And it's garbage in, garbage out with a lot of
these things. The calculator will absolutely solve whatever
problem you put in, but you've got to put the
right stuff in. And it's a lot more complicated than
a calculator. But when you put stuff in... and that's
what you look at with AI, because this AI,
it can do amazing stuff, but it can only do
amazing stuff if you give it the right parameters. If
you give it the wrong parameters, you are not going

(28:16):
to get something you can use. And I imagine it's
the same thing in your field, Dr. Jassal. When you're
developing a product, if it doesn't meet the needs of
the person who hired you to do it, if it
doesn't do the job well because you didn't put the
parameters in right, you're not going to get a workable
product, and it's not going to be useful.

Speaker 6 (28:35):
That is exactly one hundred percent right on, John, and
that's why it's so important to get skills that
lead to some kind of job, which goes back to
your question about art history. Art history is really, really important. Again,
what problem do you want to solve? That's why we
have jobs, to solve problems; otherwise we get rid

(28:58):
of those jobs as well.

Speaker 1 (29:01):
Dr. Parminder Jassal from Unmudl, thank you so much for
joining us and kind of unpacking this, because I think
it is such an important issue for us to understand
as we think about going to work tomorrow and thinking, gosh,
I wish I'd spent more time learning the important stuff
and, well, I hate to say it, the Pythagorean theorem.

Speaker 6 (29:19):
Thanks so much for having me, John, and check out
unmudl dot com or Techs of Tomorrow. Techs of Tomorrow
is where we educate everyone around us, including companies, on
that technician role and how important it is to our economy.

Speaker 3 (29:36):
Absolutely, so thanks for having me.

Speaker 4 (29:38):
You've been listening to Viewpoint Alabama, a public affairs program
from the Alabama Radio Network. The opinions expressed on Viewpoint
Alabama are not necessarily those of the staff, management, or
advertisers of this station.