
September 14, 2025 · 42 mins

 

Leaders in the Loop – Pilot Episode #1: Show Notes

Welcome to the very first episode of "Leaders in the Loop," where we’re exploring the intersection of leadership, learning, and the rapidly evolving landscape of AI. In this debut episode, your hosts Dan Jenkins (leadership professor and development expert) and Gaurav Khanna (Cisco executive and AI educator) lay the groundwork for their unique partnership—and the podcast itself.

Episode Highlights:
  • Origin Story: Dan and Gaurav share how they met via a LinkedIn cold call and quickly found common ground as educators passionate about generative AI. Their collaboration grew out of the International Leadership Association (ILA) Global Summit, where they co-chaired sessions and later co-authored a scholarly article.
  • The Value of Networks: Both hosts discuss how community-building and professional networks have played a pivotal role in advancing their understanding of AI, especially at this early and ever-changing stage of the technology.
  • Teaching AI and Leadership: Gaurav walks through his journey of developing an AI course series at Stanford Continuing Studies. He shares insights on curriculum design, scaffolding complex concepts, and weaving ethics throughout the learning experience.
  • Real-World Applications: The hosts talk in-depth about how generative AI tools like ChatGPT, Claude, Gemini, and NotebookLM are integrated into both teaching and daily workflow—whether it’s for writing, curriculum development, or analyzing qualitative data.
  • Ethics and the Human Element: The conversation turns philosophical as Dan and Gaurav discuss their biggest hopes and fears about AI. They stress the importance of remaining “in the loop”—ensuring that AI enhances rather than replaces human thought, and that ethics remain a central conversation.
  • Rapid Fire Round: To finish, the hosts answer quick, fun questions about their favorite AI books, influencers, teaching prompts, musical tastes, and more.
Key Takeaways:
  • The future of AI is not just for engineers; leaders and educators from every discipline must develop AI literacy and ethical awareness.
  • Real stories and practical tools are at the heart of integrating AI into leadership, education, healthcare, business, and beyond.
  • Building communities—and keeping humans at the center of AI conversations—will be critical as the technology continues to evolve.
Next Up:

Stay tuned for the next episode, where Dan and Gaurav dive into their co-authored article for the Journal of Leadership Studies, breaking down actionable takeaways for bringing generative AI into learning and development.

If you enjoyed this episode, please rate us five stars and subscribe to help more leaders stay in the loop!

Mentioned Resources & Influencers:
  • Co-Intelligence by Ethan Mollick
  • The Alignment Problem by Brian Christian
  • The Coming Wave by Mustafa Suleyman
  • StatQuest (YouTube, Josh Starmer)
  • The Neuron newsletter
Connect with Us:

Questions, comments, or want to join the conversation? Reach out via our channels and stay tuned for more episodes!

Thanks for listening to "Leaders in the Loop"!


Episode Transcript

Available transcripts are automatically generated. Complete accuracy is not guaranteed.
(00:06):
[Intro theme song: "Leaders in the Loop," where we're finding our way through the new digital world of AI's brand new day...]

(00:36):
Welcome to Leaders in the Loop where leadership, learning and AI meet.
We're Dan Jenkins, a professor, author, and leadership development expert, and Gaurav Khanna, a computer engineer, educator, and generative AI specialist.
We're modeling how to navigate generative AI by figuring out which tools to use, how to use them ethically and when they can make a difference.
Along the way, we're also building and sharing the AI literacy that leaders and educators need.

(01:00):
Because understanding these tools is just as important as using them.
We believe the future of AI won't just belong to engineers.
It will be shaped by accessible, ethical leaders and practitioners across every field.
Each episode will share real world tools, challenges, wins, and discoveries.
Because whether you're in education, business, healthcare, or tech, we're all using the same engines and facing the same problems.

(01:22):
So if you find value in these conversations, please take a second to rate us five stars and follow the show that helps more leaders stay in the loop.
So let's stay in the loop and make the future of AI together.
Welcome.

(01:48):
Welcome to Leaders in the Loop.
I'm Dan Jenkins, a professor of Leadership and organizational studies at the University of Southern Maine.
And I'm Gaurav Khanna, an AI executive at Cisco, based out of San Jose, California.
Right.
Well, we're excited to jump in today.
This is our pilot episode.
We're gonna take a few minutes to share a little bit about our backgrounds and our journeys and how we got to starting this podcast.

(02:11):
So I'm gonna jump in a little bit with how we met.
I'm, as I shared, a faculty member at the University of Southern Maine, where I teach leadership and organizational studies.
And that is my primary discipline, although I did come from a background in higher education; my terminal degree is actually in that field.
It focused on curriculum and instruction and, basically, the power of designing effective learning systems and evaluating effective learning.

(02:38):
And I bridge that with
the science and discipline and, I guess, the application of leadership education, training, and development.
And so where those two things meet is kind of my world.
And through that I had the opportunity, and we'll get into some more depth about this in just a bit, but I had this opportunity during my second-ever sabbatical, last fall, which was 2024.

(03:02):
To chair a global summit for the International Leadership Association, which is my biggest professional association.
And through that opportunity, planning started as a grassroots effort with some members of the community within the association, around August or maybe September of last year, 2024.

(03:24):
And I was also adjuncting for an institution, TKA Nazarene University,
during that sabbatical, teaching a graduate-level course called Introduction to AI and Implications for Leadership.
One of the sessions that I wanted to curate
for this global summit was one where I would have an opportunity to share a little bit about what I'm teaching in that course, but also find some other individuals on this planet who were teaching courses that had some intersection of AI and leadership as well.

(03:52):
And lo and behold, I found this guy, Gaurav Khanna, who was working with Stanford's Continuing Studies program out on the other coast.
He was working with a leadership series about demystifying machine learning and AI algorithms, and it was literally a LinkedIn cold call.
I think I sent him a direct message on LinkedIn having no idea if he would ever respond to me or care what I was working on.

(04:18):
But lo and behold I got lucky and he reached back out.
And quite an interesting partnership emerged.
We've had some great conversations and great opportunities to the point where we decided to check in almost weekly.
And one of the other culminating things, besides preparing together for this panel during the actual event, was on January 9th, 2025, this big global summit, where we ended up having over 400 folks from about

(04:44):
30 different countries join us, which was just fantastic.
But we also ended up co-authoring an article for the Journal of Leadership Studies, which we'll talk about a little bit later as well.
But am I getting all this right? Well, the one great part of this story, Dan, is I didn't actually use GPT for this, but I'm like, I don't think this is spam.
I think this guy's legit, 'cause of how he is reaching out. Because at first you get a lot of people reaching out on LinkedIn and you can tell it's automated, but I felt this was

(05:10):
somebody genuinely reaching out, and I could tell from our first conversation that a lot of our
mutual interests aligned.
In fact, one of the things I'm really passionate about this podcast, and I think we shared this as well, is that it sits at the intersection of so many interests and threads in both my personal and professional life.
You mentioned teaching.
Teaching is a relatively new gig for me.

(05:30):
You're a veteran.
You've been doing this for a long time.
I'm really enjoying this.
I started teaching this class that you mentioned at Stanford, which we'll unpack a little bit more,
about three years ago. But I've been an individual contributor and a leader at Cisco for a very long time.
And really,
there's a duality here that I think is emerging,
which we're gonna talk about a lot more, which is this AI for leadership and then leadership for AI.

(05:54):
I think those things are just fascinating topics to discuss, and it was very clear in our first conversation that we were both very interested in exploring this a lot more.
And it's still a relatively new field, Dan.
Right? So we are, we're very early in this game.
I pride myself on being an early adopter of some things.
It just turned out that generative AI was one of those things.

(06:14):
I mean, I remember where I was when I was listening to, I believe it was an episode of the New York Times podcast, the Daily, and they were sharing some of the early applications.
I feel like this was like December of 23.
And they were talking about how they were using this new, publicly available model, ChatGPT, to
write scary stories and write mafia vignettes, just really using it for, and showing off, the power of the large language model.

(06:39):
But, doing it in a way that was somewhat humorous.
And I don't think we had really, at that point, certainly on something like The
Daily, which was meant for broader consumption, talked about coding and things like that.
And as I was thinking about the ability of these large language models and generative AI to manipulate and use language in the way that they were designed to, I thought, wow, I wonder how this could be used for developing or revising curriculum.

(07:04):
I geek out like crazy on how we teach leadership, the most
effective ways to facilitate leadership learning.
So I spent a lot of time in my profession designing and facilitating programs for other leadership educators or people that work in leadership, education, training and development spaces.
Sometimes it's faculty that teach in leadership programs like mine or business faculty or folks that work in human resources and training and development.

(07:29):
For me, I've
always been passionate about helping to develop the capacity of individuals to do that work.
And for me it was like, oh wow.
So there's this great tool.
It's going to make us better at doing this work, at designing leadership learning, at evaluating leadership learning.
How can I learn how to do this well? And also in turn create opportunities to develop literacy and capacity and competency in others around this work.

(07:56):
Yeah, and I shared that same passion too.
But back to that leadership summit, since that was the origin story of how we got together that was a wonderful summit.
So I really appreciate the invitation to participate.
We were on a panel together.
There were quite a few AI topics that were woven in throughout the different sessions in that leadership summit.
I'm curious, can you dive a little bit more into how this idea came about, and what were some of your key takeaways from

(08:21):
organizing it? When you look at the survey data that came out of the summit, or what the participants said, what were one or two key takeaways from that whole experience you had organizing this in January?
It was interesting 'cause I didn't go into my sabbatical with this idea.
But I had been a part of a, maybe I'll call it a think tank, of some individuals who were faculty or in instructional technology or administrative roles at universities around the US, and we co-facilitated workshops in the fall of 2023 and the fall of 2024 at the International Leadership Association. The ILA does a global conference every year, and it was in Vancouver in 2023 and then in Chicago in 2024.

(09:03):
And this group of us started to do these
sessions on how we were using it in our classrooms and in our working environments, what the ethical concerns were, things around academic integrity, or what we might just call cheating.
What kind of policies were being created around these things.
But also wanting to not just talk about these things, but demonstrate examples of how we were making some of these decisions at our institutions.

(09:28):
Like I sit on our institution's AI task force as an example.
But also how we were using these tools to enhance our teaching and learning spaces and all of us sharing this discipline of leadership and leadership development.
And there's someone that I work closely with at the ILA who helped to support planning this summit.
And before the idea even evolved, I had applied for, and been asked to be, a fellow, kind of a faculty fellow.

(09:55):
The association takes on individuals to do different research projects or support the association: build communities, build resources, a variety of different things. I hadn't quite materialized exactly what we would do in that space until Deborah said, Hey, do you have any interest in maybe putting together some type of meeting or summit or event based on all this work you're doing with AI? And the idea immediately started to materialize in my brain, and I thought, Hey, this could be really big; this could be great.

(10:21):
I have a sabbatical to work on this and I have some flexibility in my projects.
I would also love to engage members of the association in this work, and did it in a very grassroots way.
We sent out, kind of an open call of interest to the members of the ILA.
They've got, several thousands of members that attend the global conference and a part of the association.

(10:42):
And I was very excited to see that I wanna say around 30 folks showed up for that first meeting.
And of those, more than half
said, Hey, yeah, I'd like to be part of the planning component.
And we met a few times on Zoom in September and October.
We started to flesh out what the sessions should be on, understanding that we wanted to focus on leadership education, training, and development.

(11:06):
That was kind of our umbrella, right? There's obviously so many things we could have focused on with ai, but that's where we wanted to focus.
And through those conversations, this agenda started to materialize.
And also, everybody had great networks, and so I would say
most of the folks that were part of this early planning committee did end up being presenters at the summit, but they also said, you need to call my friend, or you need to call this person.

(11:29):
Or, I just attended this amazing webinar and you should have this person.
Or I saw this person speak. And you know what materialized? Fifteen different
breakout sessions that were workshops and panels, all 75 minutes each, plus plenaries.
We talked about ethics; we had keynotes.
I mean, it was great, and I can go on and on about the different topics as well, but it sounds like you wanted to jump in and ask me something.

(11:52):
Yeah, it's interesting, because this power of the network is, obviously, what we started with, right? The network, whether it's people or the people that you might be connected to and search from.
But I think that has been a big part of the AI story so far, especially in the early years, when I wanted to just learn about things.
My network was so important.

(12:13):
to getting people to explain stuff to me or join me on talks and panels. Because, and I think we need to appreciate this, the field is still evolving.
It's new.
It's not like chemistry, which has been around for 300 years.
We're really talking about something that evolved in the past decade and accelerated since 2023.
So the power of the network has been really important, and I anticipate it will be for the foreseeable future, just because of how new everything is.

(12:37):
Yeah, and that's why I think we had this opportunity to be somewhat on the cutting edge of what was, or what is, the intersection of this
incredible new technology and how we're going to consume it, use it, and apply it in the places and spaces where we teach and facilitate leadership learning.

(12:58):
And so, as I mentioned before, we had everything from sessions on policy, in organizations, in industry, and in higher education,
as well as how to use generative AI tools like ChatGPT and Claude and Gemini to generate leadership curriculum and learning activities.
We had sessions on prompt engineering and ethics.

(13:18):
We had several sessions on things just like choosing the right tools, right? We've got this,
it's the wild west.
There's this grab bag of different tools that you can use, and you try to understand what does what and why you'd need to use it. Do I need to create a presentation? Do I need to create a video, or do I need to use Canvas? What's my context? Am I doing research? Am I doing qualitative or quantitative research? What's gonna help me build my literature review for a paper, or just work on some type of project within an industry

(13:46):
perspective.
And I think those were the big streams and themes.
It was ethics for sure, but also applying generative AI tools for both research and learning, and of course everything kind of being within this leadership umbrella.
And so I think folks really did
leave with tools that they could use, not only to develop their own capacity, but tools to help develop the people that they work with, whether in their organizational setting or in a teaching and learning setting, to be better users of these generative AI tools that are now kind of ubiquitous in our places and spaces.

(14:22):
Yeah, yeah. Gaurav, you have been working with this Continuing Studies group over at Stanford and focusing on the intersection of AI and leadership, and again, we wouldn't be talking right now if I hadn't come across that course.
Tell me a little bit about, and of course, if there are any takeaways from the summit or some of the intersections there.
But tell me a little bit about how that work

(14:44):
has become so important to you and continues to be.
Yeah.
And so the story behind this is, when I got into AI, I was just overwhelmed learning
so much of the topics. I'd never intended to teach the material, but I was sitting on top of a lot of training material, because, to the point we were talking about earlier, when I started in data science in 2017, I could count on one or two hands how many people at my company were working on this.

(15:10):
And so I found that part of getting people excited about AI was training them on AI and showing them early examples of how these algorithms were fundamentally more powerful than say, just plotting things on an Excel spreadsheet and making some predictions that way.
And so much of our business and I would imagine so many businesses even still today, are just run, on spreadsheets.

(15:31):
And so I was personally inspired by what I saw in terms of the predictive capability of what we now call traditional algorithms, right? This is the pre-GPT world.
And so I ended up teaching quite a bit here and there:
an hour here, an hour and a half there.
Eventually I graduated; the first time I ever taught, in 2018, it was a series of three lectures on AI that were three hours each.

(15:54):
So my sympathies and great admiration for all teachers, Dan, because standing on your feet for three hours is
hard.
Okay, let's just put it that way.
But very rewarding.
And I've been a fan of Richard Feynman, as I think a lot of people are, the Nobel laureate in physics.
And the phrase that's commonly attributed to him is you don't really know something until you teach it.

(16:16):
And that was the basis of the Feynman technique, right? You master complex topics by explaining them in simple terms either to a real or a hypothetical audience.
And that was something I had
really just adopted in my life since graduate school.
So I just kept teaching internally within Cisco, and at some point I had so much material, I was like, there's really a class here.

(16:36):
Hundreds and hundreds of slides' worth of explanations about the spectrum of algorithms that we use in data science.
So I reached out to Stanford Continuing Studies, and it was a similar story, Dan, to how we met.
It was just a cold email to them.
The fact that you're a Stanford graduate, they don't know that when you email them about a proposal for a course.

(16:57):
I got a rejection, but my spidey senses told me that rejection was not a computer-generated email.
It seemed to come from a human, so I said, Hey, you know what, what the heck, let me reach out.
There could be a person on the other end of this.
And sure enough, there was.
And that person, Angel Evan, and I have had a two-year-long great partnership, where he's been my coach for this class and has taught me an incredible amount, not just about AI, but also pedagogy.

(17:27):
And so it started a partnership where he really helped me develop a class that was coherent.
I threw all these topics at Stanford and said, look, I wanna teach a soup to nuts AI class.
Well, that's not really.
How you design curricula.
Again, I'm talking to a veteran here, so you could, and I expect you will be guiding me as you have through this journey as well.
But one of the key things that Angel and I talked about is this concept of outside in approach to teaching these topics.

(17:52):
And the concept here is, a lot of my
curriculum that I developed just started with the algorithms.
It's like algorithm A and B and C and how they build on each other.
That's actually not the best way to teach these, I found, especially to an audience that doesn't really know data science.
You wanna start with the business case.
You wanna start with why it's important, and you want to pick business cases like, for example, using AI in e-commerce, or using

(18:16):
AI in healthcare.
These are all things that are very visceral to us, because we experience them, and you want to use those business cases.
And then the next thing is the intuition behind the algorithm, how it can help, and then the math.
So rather than starting with the math and potentially turning people off, this outside-in approach turned out to be just one of the great lessons I learned in teaching it.

(18:38):
And then it was a standalone class, and because it was successful, the good folks at Stanford said, let's construct it as a three-part series.
We called it AI and Leadership.
And so I teach the kickoff class, the foundations class, but then the next set of courses teaches how to build AI into a business.
So, build a business out of it, and also build a product, and how they relate to each other.

(18:59):
And then how you can deploy these.
So the three part series is gonna be a once a year thing at Stanford, but I'm also still teaching the standalone class.
And then, the other great thing that I learned, well, two great things. One was how to
structure topics in a sequence, right? Pedagogy is a thing, but part of that thing is sequencing and scaffolding topics, not just within a class, but between the different courses.

(19:24):
So it looks coherent to the students, and you build on concepts slowly.
And then, every time you get to a more complex topic, you can remove a little bit of the support that you gave them in terms of the instruction, and the complexity
you can increase, primarily because they've been seeing
increasingly complicated topics as they go along.

(19:44):
So it's just amazing to watch students go from simple linear regression algorithms all the way to advanced concepts in generative AI.
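For readers following along, the first rung of that ladder, a straight-line fit by ordinary least squares, takes only a few lines of code. This is a generic illustration, not material from the course; the data points are made up.

```python
# A straight-line fit (y = a*x + b) by ordinary least squares,
# computed in closed form. The data below is made up for illustration.

def fit_line(xs, ys):
    n = len(xs)
    mean_x, mean_y = sum(xs) / n, sum(ys) / n
    # Slope = covariance(x, y) / variance(x); intercept from the means.
    a = (sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
         / sum((x - mean_x) ** 2 for x in xs))
    b = mean_y - a * mean_x
    return a, b

a, b = fit_line([1, 2, 3, 4], [3, 5, 7, 9])  # data lies exactly on y = 2x + 1
```

From here, a course can scaffold upward: the same "fit parameters to minimize error" idea underlies the neural networks behind generative AI, just with vastly more parameters and iterative optimization instead of a closed form.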
And then the last thing I would say, and this is more recent and certainly an area I want to explore more and just be better at, is how do you weave ethics into the conversation? So it's not, Hey, today's lecture will be on ethics, and then we move on to other algorithms.

(20:06):
I'm working very actively on weaving ethics into just the entire fabric of the course.
So it starts in lecture one and goes all the way to lecture eight.
We have some conversation about ethics, because it does matter.
And even with the simplest stuff, I think it's resonated with a lot of the students.
They're not gonna be data scientists, but I think a lot of 'em see the writing on the wall, Dan.

(20:27):
They see how their jobs are already being impacted by ai.
A lot of 'em want to pivot to work adjacent to data scientists.
So they need to understand the concepts and vocabulary.
So that's a little bit of the story of the class.
One of the interesting things we've had a chance to talk about is not just our respective fields and what we're doing, but what we find really valuable in this. Because while we're doing different things with AI in terms of our respective domains, I think fundamentally what's driving us, Dan, is the value we get out of just using this stuff every day.

(21:00):
Yeah, totally.
And for me, I think it came from being exposed to Ethan Mollick's work really early on. He wrote a book called Co-Intelligence and
was also writing newsletters that were coming out every month.
He was really also an early adopter in the world of generative AI, really pushing to see what was possible with some of these tools.

(21:24):
Really, of course, starting with things like ChatGPT that were readily accessible, but as different tools
came down the pipe,
he would play with these and test them in certain ways, putting them through the same kind of sequence of tasks to see what the outputs would be.
And he was someone who shared lots of resources on prompt engineering, putting libraries of prompts all over his website.

(21:49):
And I think it's called One Useful Thing, the newsletter he has out there.
And I
started to think of, or I guess conceptualize, these generative AI tools as, it's not replacing us.
It's our co-[fill in the blank].
It could be our co-teacher, it could be our collaborator, right? It could be our co-author, our co-writer.
As I think about the things that I'm doing most often in my work as a faculty member, it's a lot of email.

(22:15):
Although, I'm a recovering department chair.
I held that role for six and a half years
in my department.
But I am still not only working with people at my own institution, but with people across the world, through the different professional associations that I'm a part of, different consulting work, and webinars or workshops online on things like generative AI and how to use it in different leadership learning contexts.

(22:37):
But even more, for my writing:
research tools like Claude are incredible for writing.
Things like Perplexity, and both ChatGPT's and Gemini's Deep Research,
I found those to be really helpful for developing literature reviews,
and for writing projects, either by myself or with graduate students or with other faculty members or professionals that I engage with on different writing projects.

(23:03):
But even
on a day-to-day basis, just the amount of email that I have to send.
And it can be really helpful to say, Hey, ChatGPT, this is the context, this is the email I'm responding to.
Can you help me with the tone?
I wanna be kind, I wanna be empathetic to, maybe, a student who just lost a loved one or is dealing with some real challenges, and be able to make sure that I'm responding in a way that is clear and kind.

(23:26):
And obviously I edit it before I send it, but it can really help with that first draft.
And I can say, Hey, I need an email that does this, or I need to respond in this way.
And I find that it really does streamline
my day-to-day.
And I find that, more often than not, when I think about the tasks that I have to complete on a given day, which changes every day, whether I'm preparing a conference presentation or grading papers or what have you, I think about how I can integrate generative AI into my workflow, into my processes.

(23:56):
And I kind of feel like there's no going back.
I would say it's the dark side.
But I think about, how do I streamline, how can I be more efficient but also authentic in my day-to-day? What about you, G? Yeah.
And you bring up a couple of really good points.
One of my mentors at Cisco has a great phrase.
And the way he describes inflection points is that not only are they an important

(24:19):
shift, whether it's technological or sociological, but inflection points mark periods where there is no going back.
You simply cannot imagine doing things or interacting with things the way you used to before.
And I think that's really why AI fundamentally represents an inflection point for a lot of people: because it has changed

(24:40):
how we work. And we're gonna explore that in this podcast.
I mean, for me it's just the sheer amplification of effort.
I mean, Dan, I have found my lifelong learning partner, and it is OpenAI, Claude, Gemini, you name it.
These are my learning partners.
And my approach to AI was a lot more theoretical and mathematical,
'cause that was just my background. And I came from, well, I was never a good Python

(25:02):
coder, because I just didn't have to be. I was blessed to manage a data science team.
I had two amazing data scientists.
They were also just incredible programmers.
So they took care of that, because, as I mentioned, in the early days a lot of my time was spent getting stakeholder buy-in on AI, right? What is this AI thing, and why is G asking me for funding for this thing? What are we selling with AI? And I just got mired in a lot of politics

(25:29):
of trying to get AI to be important.
I mean, I'm very grateful; it's a very different world today, night and day.
But the bottom line was I couldn't spend a lot of time coding because as a leader, I had to spend a lot of time managing the stakeholder relationships.
I mean, Dan, you should see me now.
I mean, you should see the code that I'm cranking out.
I mean, it is just great.
But not only that: a lot of websites, and people who are explaining or writing code, put comments in, and even very well done websites have comments that explain the code.

(25:57):
But even sometimes that's not enough, because a lot of people, they just don't show up every day and say, look, let me teach this.
They want to get some really good content out there, and they want you to just be able to get up and running.
And so one of the greatest things AI has done, besides just generating code, is that I can sit there and ask it: look, what is this code doing? Can you walk me through what this means? Why is this important? Is there a better way of writing it? We should not expect this of all the people who are publishing websites and blogs about coding and AI, right? They simply don't have the time, or they don't have the inclination.

(26:29):
So being that sort of co-collaborator, a learning partner, besides just generating output, I mean, that has been of immense value,
beyond anything that I can imagine. And I will guarantee I would not have had the time;
I would be doing Google search after Google search for hours to find what ChatGPT and Claude can generate for me in a minute.

(26:51):
So it has been really cool.
I think Dan, we've explored a lot of cool things here.
I'm really looking forward to continuing with this podcast.
I think we wanted to include a rapid-fire lightning round of kind of cool topics, a way we'll just get to know each other and the guests that we're gonna bring on towards the end of the episode.
Should we wrap things up with the rapid-fire lightning round? Yeah, let's try it out.

(27:15):
All right.
I'll go first.
So what's your biggest fear about AI? And it can include anything, including ethics.
Yeah.
Things like that.
Yeah, for sure.
I think my biggest fear is that generative AI is going to replace human thought instead of enhance it.
Right? I think that people are just too trusting of the output, not because they're lazy, but because the output is so voluminous that it's becoming difficult to scrutinize everything.

(27:39):
So that's my fear.
Awesome.
All right.
So what AI evolution do you think would be the greatest benefactor to what you do? Agentic flows.
Call me a fanboy of agentic flows, Dan. And I think one of the reasons why it's important is that not only will agentic flows help you execute tasks, but there will also be checks and balances built in, which will be really powerful.
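The "checks and balances" idea in agentic flows can be sketched as a generate-then-validate loop: one step produces a result, a second step checks it, and the flow retries until the check passes. This is an illustrative toy, not any particular framework; the `generate` and `validate` functions below are hypothetical stand-ins for real model calls.

```python
# A minimal sketch of checks and balances in an agentic flow: a generator
# step produces output, a validator step approves or rejects it, and the
# loop retries until something passes (or we give up).

def run_with_checks(generate, validate, max_attempts=3):
    """Run `generate` until `validate` approves the result, up to a retry limit."""
    for attempt in range(1, max_attempts + 1):
        result = generate(attempt)
        ok, feedback = validate(result)
        if ok:
            return result
        # In a real agentic flow, this feedback would be fed back into
        # the next generation step as extra context.
        print(f"Attempt {attempt} rejected: {feedback}")
    raise RuntimeError("No result passed validation")

# Toy example: "generate" a summary that must stay under a word limit.
def generate(attempt):
    drafts = {1: "a very long rambling draft " * 5, 2: "a tight summary"}
    return drafts.get(attempt, "fallback")

def validate(text):
    within_limit = len(text.split()) <= 10
    return within_limit, "ok" if within_limit else "too long"

print(run_with_checks(generate, validate))  # the second draft passes
```

The design point is the separation of roles: the validator never generates, so it acts as an independent check on the generator's output.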

(28:04):
How about you? Yeah, I think for me it's RAG, retrieval-augmented generation, these tools.
I think about it for things like research journals in my discipline, where instead of doing Boolean searches and things, I can have all of the leadership journals at my disposal and be able to say, hey, agent, what are all the articles that talk about using simulations in leadership learning? It will just give me that, and then I can have this, quote unquote (being careful not to anthropomorphize these things), conversation with the literature and be able to interact with it in that way.

(28:37):
But I think we're close, right? I know that some library scientists and programmers are working with some of our big databases, like our JSTORs and our PsycNETs and our Google Scholars, to get us to that point.
But we're not quite there.
So I'm on the edge of my seat with that evolution.
For sure.
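The retrieval-augmented generation workflow described here, pulling the relevant articles first and then handing them to the model as context, can be sketched in a few lines. The corpus, query, and word-overlap scoring below are illustrative stand-ins; a real system would use a vector index over embeddings rather than simple word overlap.

```python
# A minimal sketch of the retrieval half of RAG: score a tiny "journal"
# corpus against a query, then assemble the top matches into the context
# that would be handed to a language model.

def retrieve(query, corpus, k=2):
    """Rank documents by simple word overlap with the query; return top k."""
    q_words = set(query.lower().split())
    scored = sorted(
        corpus,
        key=lambda doc: len(q_words & set(doc.lower().split())),
        reverse=True,
    )
    return scored[:k]

corpus = [
    "A classroom study of simulations in leadership learning",
    "Quantitative survey methods for leadership research",
    "Using role-play simulations to teach leadership skills",
]

hits = retrieve("simulations and leadership learning", corpus)

# The retrieved passages are then prepended to the model prompt, grounding
# the "conversation with the literature" in actual sources.
context = "\n".join(hits)
prompt = f"Answer using only these sources:\n{context}\n\nQuestion: ..."
```

The key property is that the model answers from the retrieved passages rather than from whatever happens to be in its training data.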
Cool.
Well, what anything about leadership that excites you about ai? Yeah, I think the evolution of.

(29:03):
using these chatbots.
And I guess ChatGPT is kind of the leader in this way, but I know that folks like Gary Lloyd and others are creating tools where you can have coaching conversations and be able to bounce ideas around in a safe space.
Right? I think there's this huge opportunity with these chatbots for having conversations in psychologically safe environments.

(29:27):
Right.
If you tick off your chatbot, it's fine, right? But if you tick off your coworker, there are severe implications.
So I think that there are some huge opportunities there, and we're just starting to explore that.
What about you? Yeah.
I think there's an opportunity, Dan, and I think you're in a wonderful place with this, through both your own teaching and the work you do with ILA.

(29:49):
And through the teaching that I'm doing, to influence the first generation of leaders for AI, like a true generation of leaders. Because as I mentioned, the vast majority of people taking my class don't want to be data scientists, but they want to work in the field, and it's so wide open. And I think having the ability to get people to understand it, so they look back and say, you know what? I learned it from this person, or they influenced how I thought about something that's gonna be with me for the rest of my life.

(30:15):
That's just super exciting as a teacher.
What more could you want, right? Your students tell you you were very influential, and that just warms my heart.
Yeah.
So that's what I'm psyched about.
Yeah, well, G, what's the book we should all be reading about artificial intelligence? I say this with a bit of trepidation. First of all, I'm a fan of Co-Intelligence, and I think you are too, but the book I'm reading now, one I was skeptical about but am kind of warming up to, is The Coming Wave by Mustafa Suleyman.

(30:44):
He's a bit too dystopian for me, but I think we should heed the warnings that he has.
'Cause he is really, I think, an important voice in this field.
How about you? Yeah, I read that in the last year as well, and definitely another shout-out to Ethan Mollick and Co-Intelligence; if you're in education at any level, read that one.
But one that was introduced to me about a year and a half ago, and that I just pored through, was a book by Brian Christian called The Alignment Problem: Machine Learning and Human Values.

(31:12):
And kind of the thesis of that is this idea of how we align human values with machine learning. We're creating these algorithms, we're teaching machines to learn certain processes for us, in some ways to replace some of the tasks that humans have to do.
But in doing so, we need to make sure that we remain that human in the loop, and that if it's making mistakes or showing bias one way or the other, that's on us, right? We need to augment the way that algorithm works, or provide different examples for these AI bots to learn from.

(31:45):
And that book really puts that in a very accessible and digestible way.
It definitely had a huge impact on how I think about some of these generative AI tools, and some of the ways that organizations may need to, or I should say must, ethically adopt them, should they choose to use them in their day-to-day.
Yeah.

(32:05):
Yeah.
Sticking in a similar vein, in terms of influence, what's an AI tool you were just blown away by recently?
Yeah, so I'll try to keep the example brief, but I think it's a combination of some of Google's new tools.
So they've got AI Studio, and then they also have NotebookLM.
So I am teaching a summer seminar on qualitative methods for the PhD students in our program at my university.

(32:31):
And I love teaching by and with examples.
And so I had written a qualitative study that was published about seven or eight years ago.
And of course I know the outcomes, and I know those transcripts well.
And I wanted students to be able to use NotebookLM to analyze transcripts, and to learn how to use that tool for qualitative analysis

(32:52):
of interviews.
But I wanted them to use my use case, so that they could then learn how to use it when they do their own qualitative projects during the semester.
But I knew that I couldn't give them the raw transcripts, because they would have all the identifying information of the people I had interviewed for this study about seven or eight years ago.
And this is no, what's the word I'm looking for, no criticism, or maybe it is, of, like, ChatGPT or Claude.

(33:14):
But I asked each one to anonymize the transcripts, and I found that it couldn't quite do it with the full transcript.
The documents were just too long.
But I had been on a webinar with Ethan Mollick about a year and a half ago, and he was showcasing this Google AI Studio, and how it could handle these really large tasks and had a lot of tokens available.
And so I thought, well, let me see if it can do it.

(33:37):
So I asked either ChatGPT or Claude to help me generate a prompt that I could use to explain what I needed AI Studio to do, right? So it's kind of funny that I'm asking one generative AI tool to give me a prompt for another generative AI tool. But in the end, it was able to completely anonymize everything in these transcripts, obviously taking out all the names and the locations and the names of the universities and things.

(34:01):
And so in the end, I ended up with 13 interview transcripts that were pretty long.
I mean, some of these were 3,000 or 4,000 words long, completely anonymized, and then I was able to load them into NotebookLM. And we're a Google campus.
So now the students can use that feature and that tool as a teaching tool, and can learn some of the basic skills I need them to learn as they build and scaffold towards more advanced skills with qualitative analysis using some of these generative AI tools.
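The workaround for transcripts that exceed a model's input limit, splitting them into pieces, processing each piece, and rejoining the results, can be sketched as a chunking step. The word budget and speaker-turn format below are hypothetical; real limits are measured in model tokens, not words.

```python
# A minimal sketch of chunking a long interview transcript so each chunk
# fits a model's context window. Chunks break on speaker turns, never
# mid-turn, so each piece stays coherent for the model.

def chunk_transcript(turns, max_words=1000):
    """Group speaker turns into chunks whose total word count stays under max_words."""
    chunks, current, count = [], [], 0
    for turn in turns:
        words = len(turn.split())
        if current and count + words > max_words:
            chunks.append("\n".join(current))
            current, count = [], 0
        current.append(turn)
        count += words
    if current:
        chunks.append("\n".join(current))
    return chunks

# Two long speaker turns that together exceed the budget get split in two.
turns = ["Interviewer: " + "word " * 600, "Participant: " + "word " * 600]
chunks = chunk_transcript(turns, max_words=1000)

# Each chunk would then be sent to the model with the anonymization prompt,
# and the anonymized chunks concatenated back into one transcript.
```

A production version would also pass a little overlapping context between chunks so names introduced in one chunk can still be recognized in the next.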

(34:27):
Blown away by both.
And I'll definitely be talking about NotebookLM in a future episode, 'cause there are just so many opportunities for it with some of the things that I do in my work.
What about you? I'll second the motion on NotebookLM.
Oh my goodness.
I mean, not only did it blow me away, Dan. When I teach this, there are people who worked in AI, and they may or may not have heard of it or used it. But when I demoed it in my class... the last time I did it, when I had just finished teaching my class in March, I woke up in the morning to three texts from three different students saying, OMG.

(34:59):
This is unbelievable.
I mean, it really just also speaks to where things are going, which is that the large foundational model players, like Google, Meta, OpenAI, and certainly Claude, are all moving towards multimodal models, where they're integrating images, video, sound, and text into sort of one integrated generative AI tool that you can work with.

(35:21):
And I think, I mean, it's a game changer within a game changer.
So I couldn't say enough about NotebookLM.
Yeah.
That's great.
What about prompts? I know we talked about prompt engineering earlier.
What's your favorite prompt right now? Alright, well, for multiple reasons, the prompt that starts, explain it to me like I'm a novice, or like I'm in high school, is just so great.

(35:42):
And this was one of the powers that we saw right away.
With even chat chip team when it was launched, its ability to innovate a persona and then use that persona as the context for how it even wrote things and the context and the wording and even the sophistication of the vocabulary was all determined by that persona you gave in a prompt.

(36:02):
Partly also, I liked... I had a good time in high school.
It was a great journey for me, growing up in New York, and so I imagined myself
asking this prompt while walking around with a Walkman and my football gear on, 'cause that was me in high school.
So yeah, it brings me back to my days in Chappaqua, New York.
But anyway, how about you? Yeah, I think I'm gonna go with having it play a role.

(36:23):
That's something where I can't remember where I came across it, some webinars, some professional development, but I found that these generative AI bots, they are agents.
They just wanna play a role.
And I'll push it; I'll say, hey, play the role of the world's greatest qualitative researcher, right? And so I can have a back-and-forth conversation with that. Or, hey, play the role of... for me, a lot of times it's, be an expert in this discipline, or this approach, or this model, or this theory, or whatever.

(36:48):
So that I can expand my understanding of how to use something, or get some advice on some type of project or paper that I'm working on, or something like that.
But it's really incredible to see how it pulls from the world's knowledge that it's trained on to really play the role of an expert, or a coach, or even, I've heard it even does a decent job of being a therapist of certain types, where you could have some intense conversations with it about those types of things, if that's what you need.
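The role-playing prompt pattern both hosts describe can be captured in a small template helper. The wording, personas, and parameter names below are illustrative, not taken from any specific course or tool.

```python
# A tiny sketch of the persona/role prompt pattern: wrap a question in an
# instruction telling the model what role to play, and optionally what
# audience level to explain for.

def role_prompt(role, question, audience=None):
    """Build a prompt that asks the model to answer in character as `role`."""
    parts = [f"Play the role of {role}."]
    if audience:
        # The "explain it like I'm a novice" variant layers an audience
        # level on top of the persona.
        parts.append(f"Explain it as if I am {audience}.")
    parts.append(question)
    return " ".join(parts)

prompt = role_prompt(
    "the world's greatest qualitative researcher",
    "How should I code these interview transcripts?",
    audience="a novice",
)
print(prompt)
```

The persona line shapes the register and vocabulary of the reply; the audience line shapes its difficulty, which is why the two are kept as separate knobs.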

(37:18):
Yeah.
But no substitution for a trained professional, right?
Yeah, absolutely.
Well, we've discovered we have a lot of academic interests.
We've also discovered we have quite a few interests outside of just academia.
So one area is music.
Dan, how would you describe your musical taste? Yeah, I would say it's this balance of classic and alternative rock, hip hop,

(37:41):
and jazz.
And definitely more the bebop, hard bop, and big band eras of jazz than some of the contemporary stuff.
Yeah.
So, yeah, you and I remember the days when you actually listened to songs in a sequence, 'cause that's how it was intended to be on vinyl, right? That's right.
All right, well, for me, my musical taste, I would say I can sum it up with just the words Pink Floyd.

(38:03):
I mean, you can just imagine the derivative bands that I would like based on that one, which has been a long-time favorite.
And for a while I was collecting a lot of Pink Floyd stuff. Just a great band; I wonder what they would have to say about AI.
But no, let's not ask them for another Wall, right? Yeah.
I don't know that I want Roger Waters's opinion on AI.

(38:25):
I think that would be another three-hour diatribe.
But anyway, yes, Pink Floyd. Yeah.
Yeah.
So speaking of influences, right? Music would be an influence, but what's an AI influencer, or a newsletter that you're reading every day, that folks should check out? The one I'm reading a lot is Josh Starmer.
He has a YouTube channel called StatQuest.
Oh, what a great guy.
I actually had a chance to meet him.

(38:45):
He is one of my heroes; he simplifies topics with cartoon drawings.
He's got little characters, like a Sasquatch and a dinosaur, explaining the most advanced concepts in gen AI.
But it just speaks to how there are people out there who, through the sheer love of this, are making this topic so accessible to everybody.
I mean, he said that to record one of his YouTube episodes, and he's got hundreds, takes four or five hours; sometimes it takes a whole day.

(39:12):
And I'm like, my gosh, the amount of time he's invested is amazing.
But how about you? For me, a colleague of mine, Scott Allen, turned me onto this one: it's called The Neuron.
It's a daily newsletter, all about AI.
And they cover everything from some of the big news stories around ethics or industry or the next model that's coming out, to different tools and how to use them, and just tons of links and resources and good commentary.

(39:35):
I just think it's a nice bridge between industry and teaching, and it's just really accessible stuff.
I always have a takeaway from each newsletter that I read, and it maybe takes five minutes to get through the whole newsletter.
Yeah.
Yeah, it's not spam at all, definitely digestible, and just really good to have a good sense of what's going on with AI at any given time.

(39:57):
Yeah, Dan.
So since we've met, I have a folder called Dan's Suggestions, and it's a list of articles I need to read.
I have a wishlist on Amazon called Books Dan Recommended, which just grew every time we talk.
And now there's another list called Stuff to Check Out, and I've just added The Neuron to it.
So I'm gonna be spending quite a bit of time just catching up on stuff you recommended.

(40:19):
So great.
My pleasure.
Yeah.
So before we take off from this first pilot episode: next time we're gonna put together what will, I guess, affectionately be called our pilot episode number two, before we go into a regular rotation, which is our goal.
But I mentioned earlier in today's episode that G and I worked together to eventually co-author an article, which we wrote for a special issue of the Journal of Leadership Studies that came out earlier

(40:45):
this year, in 2025. And it was titled... well, I think we actually changed that at the end.
Oh, that's right.
That's right. One of the last edits we made was replacing the iterations of just the term AI with generative AI, 'cause it turned out we were actually talking a lot more about generative AI than we thought.
So in any case, the title is AI-Enhanced Training, Education, and Development: Exploration and Insights into Generative AI's Role in Leadership Learning.

(41:11):
And we'll definitely put a link to that in our show notes.
But I'm excited to dive into that, to talk about kind of the journey of writing that article and some of the key points and applications. We'll tell some stories and make some connections to things from, certainly, my wheelhouse of higher education and leadership learning and working with organizations,
but also G jumping in with his connections to industry and technology and engineering.

(41:36):
And it's just... we have so many opportunities to learn from one another, from the different places and spaces where we spend the majority of our professional time.
Yeah, just really excited to get this out there and to get into our next one.
Thanks so much for joining us, and please keep an eye out for our next episode drop.
Thank you.

(41:58):
for listening to Leaders in the Loop.