
June 5, 2025 16 mins
In this episode of Betting on Me: Inspiration Moments, Coach Lynn F. Austin shares insights from her doctoral research on how artificial intelligence and data ethics are transforming the future of higher education.

From dynamic capabilities and innovation strategies to the ethical ownership of learner data, this episode blends academic frameworks with personal reflection—making complex ideas accessible and empowering.

Whether you're an educator, a lifelong learner, or a professional navigating change, you'll walk away with fresh perspectives on how to adapt and thrive in a world shaped by AI.

🌐 Explore more reflections and resources related to this episode at:
www.BettingOnMe.com

Timestamps:
  • 00:00 – Welcome & Introduction
  • 01:40 – Why AI Matters in Education
  • 06:00 – Business Models and Innovation
  • 11:30 – Ethics and Data Ownership
  • 15:45 – Spiral Dynamics and Adult Learners
  • 20:00 – The Delphi Method & Forecasting
  • 23:30 – Final Reflections & Close
👣 Let’s continue the conversation!
👉 Follow me on LinkedIn for more insights and updates:
https://www.linkedin.com/in/lynnbonneraustin/
🎧 Catch more episodes of Betting on Me: Inspiration Moments:
https://www.spreaker.com/podcast/betting-on-me-inspiration-moments--4129457
🌐 Visit my website for articles, newsletters, and coaching resources:
www.BettingOnMe.com

Stay mindful, stay focused, and remember: every great change starts with a single step. Keep thriving, understanding that life happens for you, not to you, so you can live your purpose.

Follow Lynn “Coach” Austin for more episodes, articles, and updates:
🌐 https://www.lynnfaustin.com

📩 Connect or share feedback: https://www.lynnfaustin.com/contact-us/

Episode Transcript

Available transcripts are automatically generated. Complete accuracy is not guaranteed.
Speaker 1 (00:06):
Hello, and welcome to today's episode of Betting on Me
Inspiration Moments. I'm your host, Coach Lynn Austin, and I'm
truly excited to dive into a topic that sits at
the intersection of my doctoral work and my passion for
innovation and education. Today's episode is titled Learning in the

(00:28):
Age of AI, where I've been exploring how artificial
intelligence and data ethics are reshaping the...

Speaker 2 (00:37):
Future of learning. But it's not just about academics. It's
about how we all grow, how we learn, and how
we adapt in a world that's changing faster than ever.

Speaker 3 (00:58):
Welcome to the deep dive. We're looking at something huge
happening right now in higher education.

Speaker 4 (01:04):
Definitely AI integration. It's incredibly rapid.

Speaker 3 (01:08):
It really is. It feels like AI is suddenly everywhere
on campus, you know, changing research assignments, everything.

Speaker 4 (01:14):
Yeah, the adoption rate is stunning.

Speaker 3 (01:16):
But here's the thing. This question keeps nagging at me.
We have the tools, yes, but are the people, the leaders,
the faculty teaching students, are they really equipped...

Speaker 4 (01:27):
Equipped to understand what the AI is actually telling them.

Speaker 3 (01:31):
Exactly, to effectively use the outputs. There seems to be
a disconnect.

Speaker 4 (01:35):
A pretty significant one, based on the research. Having the
tech isn't the same as knowing how to interpret it wisely, right?

Speaker 3 (01:41):
And that tension, that gap is what we're really digging
into today. We've got a stack of sources and we'll
be focusing quite a bit on Lynn F.

Speaker 4 (01:48):
Austin's work, things like Preparing to Teach in AI and Faculty
Roles for Ethical AI. She really dives into this interpretive challenge.

Speaker 3 (01:56):
Yeah, and we'll use other research to see what skills
faculty and leaders actually need now in this AI world.

Speaker 4 (02:02):
It's crucial.

Speaker 3 (02:03):
So if you're involved in education or just interested in
where learning is going, you'll want to stick around for this. Okay,
let's dive right into that core challenge. Austin's work highlights
a study by Johnson and others from just this year,
twenty twenty four. The numbers are quite stark.

Speaker 4 (02:19):
They really are. What did it find?

Speaker 3 (02:21):
It found nearly seventy percent of US higher ed institutions
had adopted AI tools. Seventy percent, which honestly feels about right.

Speaker 4 (02:30):
Yeah, that sounds familiar. But what's the catch?

Speaker 3 (02:32):
The catch is this: less than thirty percent of their
leadership teams felt they actually had the interpretive skills needed.

Speaker 4 (02:38):
Wow, less than thirty percent.

Speaker 3 (02:40):
Yeah, to use AI outputs effectively for making decisions. That's
a massive gap between having the tool and knowing what to...

Speaker 4 (02:47):
Do with it. And that gap is precisely what Austin
identifies as a central problem. It's not about whether universities
have the tech anymore; most do. It's whether the leadership
and, crucially, the faculty can truly make sense of what
these tools are generating. Austin emphasizes these aren't just basic
reading skills.

Speaker 3 (03:05):
No? What are they, then?

Speaker 4 (03:06):
Well, it's the ability to critically evaluate, to contextualize, to
really understand the nuances and limitations of AI output, especially
in a university setting. It's complex.

Speaker 3 (03:16):
So it's less about being an AI programmer and more
about being a critical thinker who understands the educational context.

Speaker 4 (03:24):
Exactly that. It's applying human judgment to machine output, and
Austin points out the consequences when this skill is missing.
What happens? Well, transformation efforts stall. Institutions buy the shiny
new AI tool, but then nothing really changes strategically because
they can't leverage it properly.

Speaker 3 (03:43):
Okay, I can see that.

Speaker 4 (03:44):
And stakeholder trust erodes if leaders can't confidently explain why
they're making decisions based on AI, or if faculty seem
unsure how it impacts learning...

Speaker 3 (03:53):
The students, parents, everyone starts to lose faith.

Speaker 4 (03:55):
Precisely. Austin calls it a critical gap in practice: there's
just not enough specific training focused on developing these interpretive
skills for leaders and faculty.

Speaker 3 (04:05):
That makes total sense. You can't just hand somebody a
powerful tool and expect magic. Okay, so let's unpack this more.
Why is interpretation so critical? Why not just let the
AI do its thing?

Speaker 1 (04:16):
Well...

Speaker 4 (04:16):
That gets to the heart of what AI actually is.
The Artificial Intelligence Literacy for Higher Education source really clarifies this.
AI like ChatGPT, it's a tool, a powerful one maybe,
but still just a tool.

Speaker 3 (04:30):
And tools aren't perfect.

Speaker 4 (04:31):
Exactly. As Siek and others pointed out back in twenty
twenty two, AI isn't infallible. Its outputs can be inaccurate,
they can be incomplete, they might lack crucial context, right?

Speaker 3 (04:42):
It might sound plausible but be subtly wrong or missing
the bigger picture you need for say, a specific student situation.

Speaker 4 (04:49):
Precisely, and the AI Literacy source warns about this false
sense of learning if students or even faculty just rely
on AI uncritically.

Speaker 3 (04:58):
They might get the task done, but haven't actually
learned the skill or thought it through themselves.

Speaker 4 (05:02):
Exactly, they haven't developed the critical thinking, the research skills,
the problem-solving abilities. That's why interpretation is vital. Faculty
and the students they guide need to learn to cross-check
AI outputs, verify the information, yes, verify it, understand
the AI's limitations, including potential biases, which is huge, and
just recognize when something feels off or needs a deeper look.

(05:22):
It's about using AI to support thinking, not replace it.

Speaker 3 (05:27):
And this whole interpretation thing gets even trickier, doesn't it,
when you factor in who today's students actually are?

Speaker 4 (05:32):
Oh, absolutely. Fenzi's source points this out clearly. We're not
just talking about eighteen-year-olds straight from high school anymore.

Speaker 3 (05:38):
Right. Over forty percent of undergrads are twenty-four or
older, adult learners, and in distance education it's over half.

Speaker 4 (05:46):
Exactly. These learners often have jobs, families, complex lives, and
when you combine that with online learning environments, interpretation becomes
even more nuanced. How so? Well, think about sources like Machado
and Terresen. They discuss using AI and learning analytics, LA,
to track student engagement, maybe even inferring emotional states,
like is a student discouraged or satisfied?

Speaker 3 (06:08):
Okay, I can see how LA might try to flag
that based on what, login times, forum...

Speaker 4 (06:13):
Posts, things like that. But interpreting that flag, that takes
serious human judgment. Kaiser et al. and Fenzi talk about
managing cognitive load and fostering real interaction online. These are huge
challenges for diverse learners.

Speaker 3 (06:27):
So the AI says student X seems discouraged. But...

Speaker 4 (06:31):
Why, exactly? Is it the course material? Is it a
tech issue? Is something happening at home or work? Is
it just a moment of struggle before a breakthrough? The
AI can't tell...

Speaker 3 (06:42):
You that. The human instructor needs to step in, understand
the context, maybe reach out.

Speaker 4 (06:47):
Absolutely. Relying only on the AI metric risks misinterpreting the situation completely.
You need pedagogical experience, empathy, knowledge of that specific adult
learner's context. Kaiser et al. and Lock et al. also
touch on the potential for technology to feel disconnecting,
adding another emotional layer that needs human sensitivity.

Speaker 3 (07:06):
It's that hearts-on aspect the museum source mentioned, right?
Not just clicks and data...

Speaker 4 (07:10):
Points, precisely. It's about interpreting data about humans within a
learning context. And you mentioned Otto et al. earlier, AI
for adaptive study support.

Speaker 3 (07:18):
Yeah, that sounds like it needs careful interpretation too.

Speaker 4 (07:20):
Definitely. If an AI system suggests this student needs remedial
module B, well, does that align with the faculty member's
understanding of the student's struggle? Is that really the best
pedagogical approach right now?

Speaker 3 (07:33):
You can't just blindly follow the algorithm.

Speaker 4 (07:36):
No, you risk depersonalizing learning. This ties right back to Austin's
work, like Preparing to Teach in AI. Faculty need training
not just on clicking the buttons, but on critically weaving
AI insights into their actual teaching, like adapting active learning strategies,
making sure the tech serves the student, not the other
way around.

Speaker 3 (07:55):
Okay, this is clearly a massive shift. It feels like
institutions need some kind of map to navigate this. How
do they even start to get a handle on such
a complex transformation?

Speaker 4 (08:04):
That's a great question, and it's where using analytical frameworks
can be really helpful. Lynn Austin actually uses one quite
effectively in her work to structure this exact problem.

Speaker 3 (08:12):
Frameworks help make sense of the chaos.

Speaker 4 (08:14):
Essentially, you could say that they provide a lens. The
main one Austin uses is the Dynamic Capabilities Framework, or...

Speaker 3 (08:20):
DCF. The Dynamic Capabilities Framework.

Speaker 4 (08:22):
Yeah, it comes largely from the work of David Teece.
The core idea is about how organizations build the capacity
to sense, seize, and transform themselves when their environment is
changing rapidly, like with AI hitting higher ed.

Speaker 3 (08:39):
Sense, seize, and transform. Okay, break that down for us. How
does that apply here?

Speaker 4 (08:42):
Well, Austin finds DCF really useful for looking at strategic
AI literacy among academic leaders because it highlights adaptive leadership,
that interpretive judgment we keep talking about, and aligning the
whole institution strategically.

Speaker 3 (08:56):
Gotcha. And Teece talks about microfoundations. What's that?

Speaker 4 (08:58):
Those are the underlying building blocks: the skills
people have, the processes they use, the organizational structures,
the decision rules. That's where this interpretive capacity needs to
live, deep inside the institution.

Speaker 3 (09:10):
Okay, so let's walk through the three parts.

Speaker 4 (09:12):
First, sensing, right. Sensing, according to Teece, is about actively
scanning the horizon, monitoring technological shifts like AI, yes, but
also changing student needs, what other universities or providers...

Speaker 3 (09:24):
Are doing. So, paying attention.

Speaker 4 (09:25):
But it's more active than that. It's about interpreting what
you see, making sense of AI platform data, student feedback,
faculty anxieties, research papers, conference chatter.

Speaker 3 (09:35):
All of it, trying to understand what it all means
for your institution.

Speaker 4 (09:39):
Exactly. Teece says this requires knowledge, creativity, really understanding your users,
students and faculty, and practical wisdom. It's about forming hypotheses
about AI's impact, maybe testing them out, synthesizing insights. For
a university, it's actively exploring AI, interpreting its implications for teaching,
for research, for everything.

Speaker 3 (09:58):
Okay, that makes sense. Actively figuring out what AI means
for you. Then, seizing?

Speaker 4 (10:03):
Seizing is about identifying the actual opportunities that arise from
sensing this new AI landscape and then deciding how to
act on them, how to capture value. So, making choices, right?
In higher ed this means choosing and designing new ways
to teach that effectively use AI, deciding how learning and
skills will be delivered and, importantly, assessed, something Austin really
focuses on in her work on assessment. It means picking which

(10:26):
AI tools or strategies to invest in.

Speaker 3 (10:28):
That requires real judgment.

Speaker 4 (10:29):
Huge judgment, strategic insight, creativity to maybe rethink old models,
and judging what will actually work for learners and the institution.
Teece points out you need to validate these new approaches:
pilot things, collect data, see if they're sustainable. It's about
committing to a strategic direction for AI.

Speaker 3 (10:49):
Okay, sense, seize, and the last one, transforming.

Speaker 4 (10:54):
Transforming is about making the necessary changes inside the organization
to actually do the things you decided on during seizing,
changing how things work, exactly. Teece talks about restructuring, changing processes,
developing your people. For universities, this could mean rethinking faculty
roles, like Austin explores in Faculty Roles for Ethical AI.
It might mean changing department structures, adapting admin processes, and

(11:17):
definitely building those skills.

Speaker 3 (11:18):
The interpretive capacity we started...

Speaker 4 (11:20):
With, right, back to that. Developing the faculty, the staff,
the leaders. It might involve changing HR practices, fostering more teamwork,
maybe adjusting incentives to support these new ways of working
with AI.

Speaker 3 (11:30):
So the DCF framework gives a whole life cycle view.
Understand the AI disruption, decide how to respond strategically, and
then actually change the institution to make it happen.

Speaker 4 (11:40):
Precisely. Austin uses it because it frames AI integration not
just as adopting tech, but as a deep, dynamic strategic adaptation.
She considered other frameworks too, including one that adds sustainability
and ethics.

Speaker 3 (11:55):
Ethics feels pretty crucial with...

Speaker 4 (11:56):
AI. Absolutely, and while that framework highlights ethics explicitly, Austin weaves
the ethical dimension throughout her work on faculty roles and
responsible AI use, fitting it within the broader DCF adaptation process.

Speaker 3 (12:09):
Okay, let's bring this back down to earth. Practically speaking,
what's the takeaway for universities, for faculty, for administrators listening
right now?

Speaker 4 (12:17):
The big takeaway is the urgent need to build this
interpretive capacity and strategic AI literacy, as Austin calls it.
It needs to happen at all levels.

Speaker 3 (12:25):
It's not just an IT problem.

Speaker 4 (12:27):
Not at all. It's core for leaders making strategy, and absolutely
essential for faculty in the classroom, designing courses, assessing students.

Speaker 3 (12:35):
It's not enough to just get the software licenses. You
need the human capability.

Speaker 4 (12:39):
Exactly, and we have some clues about what that capability
looks like from sources like the AI Literacy for Higher
Education one. Faculty development can't just be here's how you
log into the AI. It needs to be deeper, much deeper,
covering the basics of AI, sure, but critically things like
prompt engineering, how do you ask the AI good questions
to get useful, reliable answers?

Speaker 3 (13:01):
Learning how to talk to the machine effectively.

Speaker 4 (13:03):
Yes, and grappling directly with the ethical implications: understanding potential biases
in AI output, how to spot them, how to mitigate them,
knowing the limitations, and exploring how AI applies, or doesn't,
apply within their specific subject area and teaching style.

Speaker 3 (13:20):
That sounds like what Otto et al. were calling for:
prospective qualification of university staff, planning ahead to build these skills.

Speaker 4 (13:27):
It is, and crucially, this isn't just tech training. Mastering
interpretation in education means blending that tech understanding with
deep pedagogical knowledge and human judgment.

Speaker 3 (13:37):
Right. It's about interpreting AI information in the context of real...

Speaker 4 (13:41):
Students, exactly. Understanding diverse adult learners, like Fenzi and Kaiser
et al. discuss; recognizing their social-emotional experiences online, which
Machado and Terresen touch on; thoughtfully weaving AI into good pedagogy,
like the active learning approaches from the GW...

Speaker 3 (13:58):
Source. Connecting the AI data to the minds-on and
hearts-on parts of learning we mentioned earlier.

Speaker 4 (14:04):
Precisely. Austin's work on Preparing to Teach in AI and Faculty
Roles for Ethical AI really drives this home. The role of
faculty has to evolve to include this mix of AI literacy,
interpretive skill, and ethical responsibility.

Speaker 3 (14:18):
So let's try to summarize this deep dive, then. We've
established that while AI tools are flooding into higher education,
and quickly, the absolutely vital skill of interpreting what these
tools produce is seriously lagging behind, especially among leaders and faculty.

Speaker 4 (14:33):
Yeah, there's a concerning gap, and frameworks like Dynamic Capabilities
show us this isn't just a tech issue. It's a
fundamental strategic challenge. Universities need to get better at sensing
these changes, seizing the right opportunities with AI, and transforming
how they operate, particularly how they develop their...

Speaker 3 (14:48):
People. Ultimately, building this human interpretive capacity, this strategic AI
literacy grounded in solid teaching principles and ethics, that seems
to be the key.

Speaker 4 (14:59):
It's essential if we want to navigate this effectively, responsibly,
and make sure technology actually helps education and supports all
our learners.

Speaker 3 (15:06):
So here's a final thought to leave you with. If
this ability to interpret AI outputs, blending data skills with pedagogy, ethics,
and human judgment, is truly the new critical skill for educators
and leaders, how do institutions actually measure that? How do they
cultivate it? It feels much deeper than just a technical skill.
It feels more like, yeah, well, like practical wisdom. How

(15:26):
do you teach that? That's something to ponder as this
AI transformation continues.

Speaker 1 (15:33):
Through our Inspiration Moments podcasts, we provide information, coaching, and
guidance to help you make meaningful choices, choices that usher in a
richer life and foster personal and professional growth. If today's
episode resonated with you, I invite you to visit my
website for articles and insights on how technology is reshaping education.

(15:57):
While there, please check out our powerful resources to support
your personal and professional development. You can find it all
at BettingOnMe dot com. Take care until next time.
