Episode Transcript
Available transcripts are automatically generated. Complete accuracy is not guaranteed.
(00:00):
In November 2024, we moderated a panel at the OLC Accelerate Conference that used
the universal design for learning (or UDL) framework to consider the impact generative
AI has on equity and access. This episode is the live recording of this session.
(00:24):
Thanks for joining us for Tea for Teaching, an informal discussion of innovative and effective
practices in teaching and learning.
This podcast series is hosted by
John Kane, an economist...
...and Rebecca Mushtare, a graphic designer...
...and features guests doing important research and advocacy work to make higher education more
inclusive and supportive of all learners.
Welcome to “Equity and Access: Artificial
(00:57):
Intelligence in Support of Universal Design for Learning - an expert panel.” I'm John Kane,
an economist and Director of the Center for Excellence in
Learning and Teaching at SUNY Oswego.
I'm Rebecca Mushtare, a designer and Associate
Dean of Graduate Studies at SUNY Oswego. The widespread adoption and rapid evolution
of generative AI tools like ChatGPT in the past two years has sparked many conversations about
(01:21):
the impact of AI, as we've seen at this conference so far, on many aspects of higher education. This
panel, however, will focus on AI's relationship to accessibility and universal design for learning.
Our panelists are Liz Norell, Sherri Restauri, and Thomas J. Tobin. Liz is a political scientist and
Associate Director of Instructional Support at the University of Mississippi Center for
(01:45):
Excellence in Teaching and Learning. She is also the author of The Present Professor: Authenticity
and Transformational Teaching, which has recently been released as part of the Oklahoma University
series on teaching and learning. Sherri is a faculty member in the Department of Psychology
at Coastal Carolina University, having recently left administration in her role overseeing digital
(02:07):
learning and access. She has been working in the field of digital and online learning for 24 years
and now runs an educational consulting business to provide support to educational companies and
institutions alike throughout the world. Sherri's research and work focuses on neurodiversity and
mental health in higher education, and she has published, as well as presented, extensively
(02:30):
on these topics over the years. Tom is a founding member of the Center for Teaching, Learning,
and Mentoring at the University of Wisconsin, Madison, and the author of the forthcoming book,
(02:40):
UDL at Scale: Adopting Universal Design for
Learning across Higher Education, as well as
(02:46):
Reach Everyone, Teach Everyone: Universal Design
for Learning in Higher Education, and several other
works related to teaching and learning.
This wouldn't be a complete episode of Tea
for Teaching if we didn't ask about tea. So today's teas are: Liz, are you drinking tea?
I mean, I already know the answer, but…
I am drinking two non-conventional teas. You
may have heard of them:
(03:07):
One is Diet Coke, and the
other is water with a little bit of Mio flavoring
in it, which I think counts as tea.
Tom?
I'm drinking a lovely decaf rooibos. That sounds good. Sherri?
So I actually am drinking my favorite iced coffee. I have a salted caramel mocha with me.
Great.
And I am
drinking a Tea Forte black currant tea. And I'm back to a good old favorite that I
(03:32):
picked up when I was at Epcot yesterday, which is English afternoon tea. Our last segment for today
will feature questions from the audience. Because we're managing two sets of mics,
as you can see, one for amplification in the room and one for our recording,
we ask that you use the QR code on the screen to submit your questions for the panelists.
(03:53):
Technological advancements like touch interfaces and speech recognition were initially developed to
provide access for people with disabilities, but are now standard features used by many. Likewise,
early AI innovations, such as real-time captioning and transcription services
and tools like spelling and grammar software, initially designed in support of accessibility,
(04:13):
have been widely accepted and adopted as useful supports for all students. In what
ways can instructors use AI tools to efficiently develop accessible digital content or design
multiple means of representation or options for interaction? We'll start with Sherri.
Thank you. So I had an opportunity to think about this question in advance, and in my opinion, based
(04:36):
on a couple of decades of watching AI come to the forefront, I feel like AI tools themselves are,
in their current state, something that provides faculty and students, as well as instructional
designers, with the ability to have different perspectives than we would otherwise. I kind
of highlighted, and I wanted to go ahead with our questions by highlighting two of my favorite tools
(04:56):
that I figured you guys may not have found yet, because we all know what ChatGPT is, but one of
the tools that I have evaluated, and I feel like is going to change our world in course development
is called LearnWorlds, and I encourage you to take some time to dive into their free 30-day service,
because it is amazing. The one that I use most extensively with my students is called Goblin
Tools, and both of those tools are ones that, over the last three years, have extensively changed
(05:21):
the space of building what is available for us as faculty and helping us if we don't know anything
yet about UDL, it's actually helping us to create those without having to have too much background
knowledge, because UDL is built into the central mechanism of how these tools work as AI. So I
encourage you to take a look at Goblin Tools and LearnWorlds if you've not looked at either of
(05:42):
those yet, because they are very interesting, and it will get you kind of started with a tool that
was pre-built on the foundation of UDL.
Tom?
Well, the key here is that it used to be up to the designer or the instructor to create and provide
the multiple means of engagement, representation, action, and expression for the learners. The 3.0
(06:04):
version of the universal design for learning guidelines recognizes that artificial intelligence
and other tools now put the ability to create in everyone's hands. So the verbs changed from
provide to design. This acknowledges that we can both give UDL options and teach learners how to
recognize and craft their own customized options as well. For example, before large
(06:28):
language models came along, if I wanted a text version of a video clip or I wanted to hear it
in another language, I had to rely on humans for those transformations. Now I can ask AI tools to
create the alternatives I'd like, and then, and this last part is crucial, do a double check with
the humans who have the skills to be able to say, yes, that's accurate and trustworthy.
(06:53):
As noted earlier, AI tools can also help students with grammar and spelling and
basic writing and organizational skills. Will these uses of AI allow students and instructors
to focus more attention on the development of higher-order thinking skills? Liz?
I just want to say that when I was invited to be on this panel, I thought these people all know a
(07:14):
lot more than I do. So when I was thinking about how to answer these questions, I did what I think
many of our students might do, and went to ChatGPT and said, like, “give me some thoughts on this.”
And I found that to be really helpful to kind of clarify how I want to answer this question
about whether AI tools can help us focus more on the higher-order thinking skills. So in response
(07:38):
to this question, I would say “yes, and…” because generative AI can create opportunities for us to
focus on those higher-order thinking skills, and to do so as educators, we have to invest time and
thought into creating assignments and projects that do that, that engage those higher-order
thinking skills. And this is something that I've tried to make clear to my colleagues when talking
(08:02):
about generative AI, both in my discipline of Political Science and with the faculty I work with
in our center for teaching. And I'm just going to do a quick nod to the 1992 presidential campaign,
because that's my age, so please don't take this as any sort of like derisive comment, but it's
the pedagogy, stupid. It's always been about the pedagogy, and John Warner's book Why They Can't
(08:24):
Write: Killing the Five-Paragraph Essay and Other
Necessities sort of gets at this in his title,
that a five-paragraph essay may not engage those higher-order thinking skills. It's a formula,
and he wrote that book well before ChatGPT was a going concern. So I think we have to do the really
hard work of figuring out what an assignment looks like to engage those higher-order thinking skills,
(08:47):
and that requires work, not just from our students, but from us, as we work with faculty
and do the hard work of education. Sherri?
So, I agree with Liz on her premise that it is really about the pedagogy. It's always been
about the pedagogy. I am in a position now where not only am I teaching at my home institution,
but I also teach at a couple of other institutions as needed, and each of them has exceptionally
(09:10):
varying policies and restrictions on the use of AI. I won't name the one that has a restriction,
but I will point out that because my work is extensively in students with what we would call
hidden disabilities, a lot of times there's a misunderstanding about the need for AI. AI is
absolutely a requirement. And for example, Goblin Tools I use with all of my students who have any
(09:32):
kind of learning disability, ADHD, like myself, if they have any kind of mental health concerns, then
Goblin Tools is an essential tool for allowing them to bolster their executive functioning. And
I've had to share that research and share that type of understanding with my colleagues at the
institution that has a restricted AI policy that tells students they will fail if they are known to
(09:53):
use AI. And I've had to tell them that this is an equity issue. It is a requirement for these
students to be able to utilize this and there was a presentation two sessions ago, earlier
this morning, where they shared that less than 20% of institutions in the United States have college
students that disclose a disability. That means we're missing about 80% of them, and that's an
important fact to keep in mind. They may not tell us they have a disability, but it is important
(10:16):
for us to advocate at all of our campuses that AI is actually helping individuals who
may struggle to do these basic skills of spelling and grammar and outlining without these tools.
Students enter our classes with quite a bit of variation in their prior learning experiences,
and that can create a lot of challenges for instructors. In thinking about designing
options to sustain student effort and persistence, how might AI tools
(10:40):
be used to provide enough challenge to individual students and enough individualized support that's
tailored to the needs of each student? Tom?
Well, beyond just creating multiple formats
of materials, and that's something that large language models and predictive AI do very well,
instructors and students alike can use AI tools to create self-quizzing questions about materials and
(11:03):
differentiated study pathways based on learners’ own engagements. The most promising access barrier
we can lower with AI has to do with customizable study methods that learn about you and help you
practice, prepare, engage. Of course, many of us know that the least effective way to
study is to print out a text and review it with your highlighter in hand. The better
(11:25):
way is to read and create application and recall questions as you go along, and then after a while,
quiz yourself about the materials. Our learners may not yet be experienced in crafting their own
self-quizzing questions; they can ask AI, not for a summary of the reading (that would be doing the
work in place of the student), but for questions to ask about the reading. That's augmenting and
(11:50):
customizing the study skills of the learners. Liz, you got thoughts on this too, don't you?
I do, and something that I don't hear discussed very often, although, Sherri, I'm glad you brought
it up, is that these AI tools can be really helpful to support executive functioning by
offloading some of the things that executive functioning is all about. So that can be really
(12:12):
helpful for our neurodivergent learners, and also for us if we're neurodivergent. So for those who
struggle to get organized, AI tools can help. If you need to break a project into discrete goals,
AI can help. If you need to create a schedule or reminders, or you want to come up with some ideas
to gamify, AI can help with all of those. That's a really nice affordance of the technology. Students
(12:37):
can take advantage of these tools to help them identify areas where their work might need some
strengthening, or provide feedback on where they've made mistakes in some of their work,
which can help students who might otherwise feel left behind or feel lost to catch up
and stay with the class. So just two days ago, I had a study session with my statistics students,
(12:58):
and one of them told me, “When I get to a question in the homework that I don't know how to do,
I open up ChatGPT. I say, don't give me the answer, tell me what are the steps to do
this question.” And that's a really nice way for them to learn the process and then take
those steps and apply them themselves. This is a really smart use of AI tools by a conscientious
(13:20):
student trying to learn and improve.
Continuing on with the principle of
designing multiple means of engagement, one UDL guideline is to design options for welcoming
interests and identities with considerations like providing students with a higher level of
autonomy and choice or making sure that learning experiences are meaningful and relevant. What
roles do AI tools have in assisting faculty and supporting student identity and interests? Sherri,
(13:45):
do you want to start on this one?
Absolutely. So I had a different answer for
this until yesterday. So I want to tell you that I'm editing on the fly here, because I feel like I
have utilized the principles of inclusive design and UDL and built in identity, until yesterday,
when somebody presented a wonderful session about how they submit their syllabi into AI with a
(14:07):
specific prompt telling them what your background and what your history and what your varied
identities are. And it changed my perspective, because I haven't done that. I've been trying, on
my own, to tell students I am also neurodivergent. I also come from this background, and I've tried
to use that to identify with them, but the solution that was presented yesterday was really
unique, because they submit their syllabus and they say I come from these five or six identities.
(14:31):
Help me make sure that my language in my syllabus is open and inclusive to all identities. That's a
technique I'm now going to be implementing as of yesterday, because I've been trying to do this all
on my own with thinking of the other identities. But the real value of artificial intelligence…
and just allow me to nerd out for a second as a psychologist… is intelligence is just what
the general community knows. That's how we define intelligence. Artificial Intelligence, they know
(14:55):
a lot more than us, because it's big and broad, and it touches every culture and every continent
on the entire world. And so from that perspective, I will always get a less biased response from AI,
instead of trying to rely on what my own brain can tell me about trying to be open
and inclusive of everyone's identities. So that's a tip that I'm taking directly from our colleagues
(15:18):
from yesterday, of just even starting with a syllabus and saying, “These are my identities,
help me make sure that my work is representing all other identities who might be in my class.”
It's a great example… continuous development, continuous improvement. Liz?
Yeah, so I want to echo what Sherri said, and just say that one of the uses of AI that I think we
often overlook is this idea that it can help us do some reflective practice about our own potential
(15:45):
cultural gaps or lived experiences and those of our students. And so if we want to support student
identity and interests, we probably need to have at least some fluency with that, or the ability to
get that, and especially for many of us who are not the same age, or even close to the same age,
as our students, or for those of us who teach very large classes, this can be very hard. So I would
(16:08):
like to suggest that AI tools can help. Sherri, I don't want to go off on too much of a tangent,
but I was listening to another podcast episode this morning with a conversation with Maha Bali
of the American University in Cairo, and she was talking about the implicit bias that comes
from large language models, because they're reflecting back the text that they have been
trained on. And so if you ask these generative AI programs, what is terrorism? or who are the
(16:34):
people who are terrorists? It won't answer. But if you ask it to give you five examples,
they'll all be of a certain kind. And so I think that there are opportunities here as well for
students to think about and for us to learn about implicit bias in the culture by doing a
critical analysis of the stuff that generative AI gives us. So Bonni Stachowiak talked about asking
(16:59):
ChatGPT or some other program to give her a picture of a classroom that was a philosophy
class at Harvard, and they were all men, and the women in the class noticed that,
the men did not. So these are opportunities for some critical reflection. I also just want to say
one more thing, AI tools can be really helpful for our students who are not native English speakers,
(17:19):
because they can help them build fluency by parsing text and correcting their own writing.
Traditional online assessment techniques such as discussion boards and essay assignments on
traditional topics in our disciplines may not align well with the diverse interests
and lived experiences of our students. How might AI tools be helpful in designing well-scaffolded
(17:40):
assignments that better connect to the diverse interests and lived experiences of students,
providing them with multiple means of action and expression? We'll start with Liz.
So, from instructors’ perspectives, AI tools can help those of us who are subject matter
experts, and who may have a hard time adopting a novice mindset, break down projects into smaller steps so
(18:02):
that our students can approach those in a more accessible way. If someone asked me to write a
journal article in my discipline, I would know how to do that, soup to nuts, right? No problem. But
a sophomore or junior in a class who's asked to write a term paper may not even know where
to get started or what kinds of things will help them get there. So AI tools can help instructors
(18:22):
who need some help returning to a novice mindset and understanding what those discrete steps are
to create that scaffolding that you mentioned. From the students' perspectives, AI tools work
well here if and when instructors allow students at least some choice over what the topics of their
assignments or projects are, and especially to create assignments in different formats, be
(18:45):
that video, audio, visual, written, etc. In those cases, AI tools can help students identify topics,
refine ideas and then create structures for their eventual work products. That helps
the instructors meet the students where they are, and as an instructor, it makes grading those
assignments far more interesting, because you're not reading 100 papers on exactly the same topic.
(19:08):
I love getting to know my students through their assignments, and I suspect many others do as well.
Tom, I think you have some more thoughts.
Yeah, to build on what Liz is talking about here,
we're most often engaged in creating assessments of learning: tests, quizzes,
papers, exams. AI can help us construct those kinds of things and suggest alternate ways that
(19:30):
learners can show those skills. But where I see the greatest potential for artificial intelligence
is in thinking about assessment as learning. In our 300-person online lecture courses,
they're a terrible format to begin with… don't get me started… there are scarce opportunities
for engagement and for showing what you know. Designers can craft ways for learners to use
(19:54):
artificial intelligence tools, almost like a private tutor. Ask AI to create study flash cards,
self-quizzing questions, like we talked about earlier, fill in study guides and the like. The
key is to make sure that what the AI produces is good information. Vet the content yourself during
the creation process or design in a tech check as a course activity so that the instructional
(20:19):
team can assess the quality of the self-assessment materials that AI is generating.
So we've talked a lot about how AI can facilitate learning, but haven't yet addressed some of the
equity issues in accessing and using AI tools. What barriers might some students face in using
AI tools in their online learning experiences? Tom, do you want to start this one?
(20:40):
Yeah, I can take this one. There's a lot of ethical considerations to using generative AI
tools in online education. I'll spotlight four of them here. First, we already see the haves
and the have nots, the folks who can and can't afford to use customized targeted data sets and
tools. We have customized tools for the C-suite and hallucinations and errors for the rest of us.
(21:03):
Second, water and electricity usage… for every image that you ask AI to create of a butterfly,
unicorn, kitten flying through space eating a cheeseburger, three liters of water and
10 watts of electricity are consumed. Third, we preach respect for intellectual property,
while the most common LLMs and generative AI models have been trained on oceans of copyrighted
(21:25):
content without consent. And fourth, and perhaps the biggest barrier for widespread adoption of
AI tools for UDL purposes, the high prevalence of racist, sexist, and pornographic inputs to
the most general models. Ask AI for an image of a doctor, and it will always create a white, male,
older doctor. These are all elements that we should share with learners about the tools that
(21:50):
they're using. That was a bit of a pessimistic turn. Sherri, do you have something different
here, or do you want to follow in the same way?
Mine is going to support what Tom said
and actually what Liz brought out earlier. So at one of my universities, because I use AI
as a teaching tool in the field of psychology, I actually build into week one and two of our
(22:11):
classes how to use AI effectively. And then, after my first semester, because so many of
my students love and are familiar with the free tool, Canva, that you might be familiar with,
Canva now has a free functionality to generate images. However, Canva is, to date,
the single most biased image generator I have ever seen. And as soon as I recognized that
(22:35):
when they input things like mental health or ethnic diversity or poverty or crime,
it always provided the same specific images of individuals, that led me to needing to
narrate and modify the way I teach them about which types of AI tools to use and what to trust,
(22:55):
because as future counselors, they do not need to be creating content or using AI's incorrect LLMs,
as Tom and Liz have pointed out, to create a belief instead, that only these individuals are
representative. Tom mentioned doctors, and they always have one particular group. In my area,
we're talking about mental health disorders, and it is so, so problematic as a professor to have
(23:19):
the student potentially create a presentation that only represents certain ethnicities and
certain genders when we're talking about mental health disorders, when that is incorrect. And Liz,
you highlighted this so well, even without knowing what I was going to say, because the intelligence
behind AI is growing. It is growing every day. And one of the statistics I wanted to share with
you that just came out last month was some of the original developers of AI estimate that AI is in
(23:44):
its infant stage, it's not even a toddler stage, but it currently has an IQ of 160. That's five
points below Albert Einstein, but it's only a baby, and so once it grows, it, we hope,
will become less biased, because it will receive input from us, but it gets its knowledge from
us. So if we are inputting biased information, it will continue to output biased information.
(24:06):
So it's only as good as we are good at not being biased about the information that we're publishing
as well. So thinking about that diversity and the equity, I think that we cannot encourage
students and we cannot utilize AI ourselves without also making sure we're teaching them
about the inequities. And if they understand the dynamics of how AI is created, its intelligence
(24:28):
is based on the fallacy of humans, who sometimes are also inherently biased, then they know to be a
little bit more conservative about evaluating the quality of the material it presents as well.
So as we move into the audience questions segment, we'd like to remind you that this session is being
recorded, and if you'd like to ask the panelists a question, please do so using the Google form
(24:49):
that we provided. And we do have some questions in there. Do you want to ask the first one, John?
Our first question is from Elizabeth Blythe-Lee, from Arizona State University
Online. Her question is:
(24:59):
“How do you see AI
being used to support personalized learning
and supporting UDL? What will that look like?”
Alright, I raised my hand so I get to go first. So
one of the things that I see it doing… if you guys have ever heard of the concepts of choice boards,
or of the idea of student agency, where students have a choice, sometimes it's
(25:21):
difficult for new faculty, new instructional designers, to come up with enough choices to
make learning personalized. And so one of the things that I've seen happen, whether you're
using the built-in AI idea generator or content generator in Pearson or Cengage or LearnWorlds,
or any of the ones that are about to come out in our LMSs, colleagues, they will give you ideas
of potential projects that may suit different types of learners. And what I've always done
(25:45):
in my classes is I've created a final project that has allowed students to make a choice:
which modality suits your strengths best, pick this modality versus that modality versus that
modality, but I can only come up with so many, and so using AI to not only come up with the ideas,
but say, “AI come up with these ideas and build the rubric.” It's a huge time saver for me. And so
(26:07):
I already had the ideas about potential projects, but AI has enhanced that and truly made it more
personalizable for my students going forward.
Many of you might know the concept of
differentiated instruction, or DI, it started in the K-12 realm,
and a lot of us in higher education are doing this as well. It's very difficult to do
(26:28):
differentiated instruction when you're teaching a lot of people. Differentiated instruction is
paying attention to patterns that appear among the learners who are in your class now, and then
doing design work in how you respond to them. To pay attention to those patterns and respond to
those patterns. Universal Design for Learning is what we do before day one of our online courses,
(26:53):
it's how we design, not knowing who's going to be there, and so we assume that there's going to
be wide variability. Differentiated instruction, on the other hand, is what we do after day one,
and artificial intelligence helps us, not only in the proactive design in terms of UDL, how do
we give people more on-ramps to get started? How do we give people more representative samples or
(27:18):
methods or means for the content? How do we give them more than one way to show what they know?
The flip side of the coin is also intriguing. We can design in opportunities to ask students to use
artificial intelligence to do personalized, differentiated instruction for themselves,
and then share that information with us so we start seeing the patterns more clearly.
(27:41):
Our next question comes from Marcus Popetz from Harmonized Learning. And the question is:
“How do you feel about the AI note takers that record the class and offer recall and
quiz questions to help the student? Is the note taking and creation of questions the
important part, or the quizzing and recall?”
I appreciate this question a lot. So first of all,
(28:02):
I just want to acknowledge that we are having this conversation in a context where being recorded in
our classroom can feel unsafe for some instructors and some disciplines. And so I think that there is
reasonable concern from faculty when a note taking AI tool might be recording everything
they're saying, getting a transcript and then generating questions. Because many of us feel
(28:27):
quite rightly, under increased scrutiny. And so I sympathize with the idea behind this question,
that the summarizing and the question generation is an important part of learning,
but it may not be as accessible to all students as we would want it to be. And I think to what we've
been talking about here throughout this panel, is that we want to be accessible to learners
(28:49):
at different stages of their learning. And so to say no, you can't do this, because question
generation and summarizing is an important skill of learning, might then preclude some students from
ever getting to that higher-level thinking. So I'm personally not comfortable with saying no,
but to engage in some reflection about what is our resistance, where is it coming from,
(29:11):
and how can we think about this in as inclusive a way as possible for different learners?
So I was sharing with Tom right before our presentation, that I am doing something for
the first time in 24 years of teaching that I never thought I would do. I'm excited to no
longer be in administration, but this semester, I'm teaching eight classes,
and somehow I've survived it. And so I just want to point that out, that in those eight classes,
(29:35):
across eight different classes themselves, I may have 60 students with ADA accommodation
letters. It's a lot. It's grown significantly, and out of those 60 accommodation letters,
at least 75% have note taking as a core component of their varied accessibility needs. Again,
I want to throw this back to executive function, which, regardless of what type of disorder you
(29:59):
have, executive functioning, your ability to pay attention in class and take notes is likely to be
impacted if you have any kind of diagnosis, and if you don't have a diagnosis, and by chance you have
trauma in your background, your executive function is also negatively impacted. So you might not
have a formal diagnosis, but you need these note taking functionalities, as Liz kindly pointed out,
(30:20):
in order to even have a level playing field. So yes, in psychology, we talk about some things that
get very uncomfortable, and more than myself, I'm more protective of the other students’ comments
and those being shared in the note taking. It's not myself, it's what's being disclosed in our
classroom, because these are individuals being trained to become clinicians, and they learn best
by sharing their own stories. So I'm protective of it, but in the same way, I want to recognize the
(30:45):
fact that it's important for everybody to see that these are useful tools for individuals who may and
may not ever be diagnosed in order to have a level playing field in their learning experience.
We have a question from Richard Powers from City Colleges of Chicago, and his question is: “Beth
Stark and Jérémie Rostan developed Ludia, an AI tool that reviews lesson plans, syllabi, and other
(31:08):
documents through UDL lenses. Reactions have been really good. Do you see educators using specific
UDL review tools such as these in the future?”
Yes. No, the Ludia tool is a splendid thing. It
was developed just at the end of last year, in late 2023, and what the researchers, whom Richard is
(31:28):
referring to, what they did was they took a chatbot model and customized it only on a data set of
universal design for learning documentation, so if you're putting in “Hey, I have this challenge,
or here's this barrier, or my students are having these kinds of challenges,” Ludia will say, “Hey,
this sounds like this is the barrier. This sounds like this particular checkpoint from the
(31:53):
universal design for learning guidelines applies.” And I use the word checkpoint here because it's
still on version 2.0 of the UDL guidelines; they're working right now to update that to 3.0. I
can see that one of the have and have not barriers that is likely to be lowered in the near future is
(32:13):
that we're in the adoption curve with artificial intelligence, where the tools are still designed
for expert users. They're still designed for the coders, the instructional designers. They're not
designed for everyday folks. Here I'm going to hold up my mobile device. Think about every app
on your phone. All of those apps started out as something that only people with specific skills
(32:36):
and knowledge could use, and they've now been designed into things that everyday people can do
and use and participate with. So that's going to happen with artificial intelligence tools as well,
and with that have/have-not divide right now, the people with all the money and the specific
business cases get to train their AI on sort of higher-quality or narrower, niche kinds
(33:00):
of data sets. And Ludia is a wonderful step in that direction because it's open source and
everyone can use it. So good plug for that.
Thanks for your great questions. I know we
didn't get to all of them, but we got to the majority of the ones that came in. So
thank you. We always wrap up by asking:
(33:14):
“What's
next?” and we'll start with Liz for this one.
Okay, I actually want to get some clarification on this question, because I don't know if
“what's next” means for me or for this topic.
We always leave it on our podcast to be very open.
It can be like, I'm gonna go eat lunch…
…or I'm going to Disney World…
(33:35):
Yeah, or it can be the big existential question, so it's really up to you,
Liz, or it could be both/and… either/or.
My particular flavor of neurodivergence is autism,
so I just need, like, some clear expectations. So what's next? I think my relationship to AI
continues to evolve. I think when it first came on the scene, I was curious, then I felt overwhelmed,
(33:58):
and so I just started saying, “but the environment” to avoid thinking about it
very much. Now I'm kind of in this uneasy frenemy sort of relationship with generative AI, where I'm
sort of like, “okay, I'm curious, but also kind of skeptical.” So in terms of what's next, I'm just
gonna say our relationship continues to evolve, and I'm not quite sure where it might end up.
(34:21):
And from my perspective, I also feel like we're gonna see some big things happen. My big takeaway
that I would love for you to think about is remembering that underneath AI are humans.
The intelligence of AI comes from humans. And I think there's some real value in solving big
world problems by the combination of all of our human intelligence together. There's some
(34:44):
major issues that we've not been able to solve in medicine and other areas that I think we're gonna
see AI actually give us some solutions for. So keep an eye out for that. And I think it's
dependent on us. It's not AI that's solving it. It's our ideas joining together to find really big
world solutions that we can't find unless everybody puts their information in.
And if the question is, “what's next?” we will soon see a diffusion of innovation
(35:07):
shift around AI use in UDL practices. We will move from training people how to craft
effective prompts to how to use AI tools to shortcut already expert processes.
Well, thank you all for joining us. Thank you to our experts on the panel,
for providing us with your responses, and thanks to everyone who's attended. [APPLAUSE]
(35:29):
Thank you.
Safe travels home too.
If you've enjoyed this podcast, please subscribe and leave a review on iTunes
(35:57):
or your favorite podcast service. To continue the conversation, join
us on our Tea for Teaching Facebook page.
You can find show notes, transcripts and other
materials on teaforteaching.com. Music by Michael Gary Brewer.