Episode Transcript
Available transcripts are automatically generated. Complete accuracy is not guaranteed.
(00:00):
Student feedback is important for improving teaching, but may not
be aligned with evidence-based teaching practices. In this episode, we discuss
a midterm student feedback instrument focused on critical teaching behaviors,
an AI-assisted tool for analyzing the feedback, and strategies for debriefing with students.
(00:25):
Thanks for joining us for Tea for Teaching, an informal discussion of innovative and effective
practices in teaching and learning.
This podcast series is hosted
by John Kane, an economist...
...and Rebecca Mushtare, a graphic designer...
...and features guests doing important research and advocacy work to make
higher education more inclusive and supportive of all learners.
(00:55):
Our guests today are Lauren Barbeau and Claudia Cornejo Happel. Lauren is the Assistant Director
for Learning and Technology Initiatives at the Georgia Institute of Technology. Claudia
is the Director of the Center for Teaching and Learning Excellence at Embry-Riddle Aeronautical
University. Lauren and Claudia are the authors of Critical Teaching Behaviors: Defining,
(01:16):
Documenting, and Discussing Good Teaching as well as a whole series of other resources related to this
book. Welcome, Lauren and Claudia.
Thanks, John.
Thank you.
Today's teas are… Lauren, are you drinking any tea today?
I sure am. I'm always drinking tea. So I am drinking a passion fruit peach tea,
courtesy of my friend and colleague Karen, who went to the SoTL conference in Savannah
(01:41):
and brought this back for me from the Savannah Bee Company. So if you're listening, Karen, thank you.
That's a nice little story. How about you, Claudia?
I am also drinking tea that comes to me by way of a conference, mine was in Albuquerque,
New Mexico, and the tea that I'm having is a Rajasthani chai tea.
Oh, yum. How about you, John?
(02:03):
I am drinking tea that came from a Wegmans,
which is a black raspberry green tea from the Republic of Tea.
And I have a Blue Lady tea from Scotland.
Very nice. We've invited you here to discuss the critical teaching
behaviors framework that you have jointly developed and some recent extensions of
that framework. Could you tell us a little bit about how this project got started?
(02:26):
Sure. So this project really started when we were working together at Georgia Southern University,
and at the time, we were very much in need of a campus language in which to discuss
what good teaching is. What happened after that is we found out lots of other people
were in the same place. They also needed some sort of a common language in which to
(02:47):
ground their conversations about teaching on campus. And so what started off as meeting a
campus need ended up expanding to meet a broader need in higher ed.
Can you provide a little bit of an overview of the framework?
Sure. So the critical teaching behaviors framework is based on an extensive literature review that
Lauren and I did, starting with the Scholarship of Teaching and Learning educational research,
(03:12):
primarily in the context of higher education, but also some K through 12 insights found their way
into it as well, and based on that literature review, we narrowed it down to six important
behavioral categories that all of us, as we teach, should ideally engage in, in some way,
(03:33):
shape, or form to support student success, and those six behavioral categories are align,
making sure that our course materials and activities align with each other; include,
making sure we're creating an environment and opportunities for all students to be successful;
engage, making sure that students are taking an active role in their learning journey;
(03:55):
assess, making sure they have a chance to self-assess, to get feedback on their learning;
integrate technology, because we cannot really think about teaching and learning without
technology anymore, and since it's such a big part of teaching and learning,
we need to think about it intentionally; and then finally, reflect, which comes back to us
and the important role we play in continuously growing and improving our teaching.
(04:20):
You've also developed a number of instruments for faculty use in alignment with this framework.
Could you tell us a little bit about these tools and why faculty might be interested in using them?
So, this book was designed to be a one-stop shop for faculty. The first half of the book is an
overview of the research we did on those six categories of behaviors, as well as reflection
(04:42):
questions to help faculty readers think about how they can implement those behaviors. The second
half of the book is really all about helping faculty document the fact that they're doing
those things. So we start the second half with some narrative starter templates and some deep
reflection to identify your core teaching values, because our values should guide the principles
(05:05):
that we use when selecting strategies to implement in our classrooms. That should frame everything
that we're doing. After that chapter, we do a deep dive into peer observation of teaching,
which, coincidentally, is our next project. I know I'm jumping ahead to the end where we
talk about what's next, but we do have a book under contract with Routledge to explore peer
(05:26):
observation of teaching further. But in this chapter, you get just the basics of what you
need to go out and do this right now. So that includes email templates, logistical information
collecting templates, some reflection starters for the faculty being observed. It also includes
a note-taking instrument for faculty who need help figuring out, “What should I take notes on when I go
(05:50):
into the classroom?” And finally, it includes a report form for faculty to collect what they saw
in the classroom, with space for the faculty being observed to write down their own reflections. That
way, we're giving faculty agency and voice in the process of creating this artifact. The chapter after
that moves into thinking about student feedback, and how we collect student feedback from our
(06:12):
students, and how we use that to make decisions about our teaching as well. That chapter really
focuses on student feedback, in particular at the midterm, not necessarily at the end of the course,
which may be what most of our listeners think of when they think of student feedback,
but a lot of the work that we've done more recently has been about collecting midterm
(06:32):
student feedback. As we know from research, this can help boost your end-of-course evaluations
when you debrief it correctly. I always want to add that caveat when we say this, it isn't
a magic wand that we can wave and ensure that we get better evals at the end of the semester,
but when we debrief it with our students correctly and meaningfully, one of the things we found when
(06:54):
we looked at the research on it is that, yes, it can boost your end-of-term evaluations.
So we're hoping we can dig into that midterm feedback aspect a bit more. In the Fall 2024
issue of To Improve the Academy, you published a formative midterm feedback instrument that
you developed and validated. Can you talk a little bit about this instrument and why
faculty might want to use it, and about midterm feedback more generally?
(07:17):
Yeah, happily. We started implementing the critical teaching behaviors midterm feedback
instrument at Embry-Riddle in 2021, and we have since collected data on its
use across the different departments and across different instructors. And one thing that I'm
super excited about is that from the first year that we implemented it, we had about 20 or so
(07:42):
faculty using the instrument per year, and now we are reaching about 20% of the Embry-Riddle faculty
with that tool, which is pretty amazing. It has been very well received. It differs
from some other midterm feedback processes in that it combines multiple-choice rating questions with
(08:03):
open-ended feedback. Generally, midterm feedback focuses on only the qualitative responses,
that is, open-response feedback from students, and the combination has really resonated with the STEM faculty,
specifically at our institution, because it gives them a place to look at, okay,
here's where we started, here is where we are next semester. This is what we had at midterm. We made
(08:26):
changes, we talked to students. Now, looking at end-of-course feedback, we can see that there has
been growth or not. So that's been really helpful for a lot of faculty here. Because we've had
so much participation with it, it also gave us really good data to look at: “Okay, does the tool
that we're using, is it actually statistically reliable and valid?” And so that is the study
(08:49):
that we published in To Improve the Academy, we validated the tool and did a confirmatory factor
analysis, and it was valid and it was reliable, but importantly also, as part of this study,
we asked faculty about their perception of the tool, and we asked students to also tell us,
“Okay, these are the questions that we're asking in the survey, how do you interpret it? What does
(09:12):
that mean to you when we're asking this question? What would you look at in a course to give you
information on how to answer the question?” And so triangulating all of that different
information and data, we found that the instrument is working really well for students, for faculty,
and also the data that we have shows that it is reliable and valid. So that was exciting to see.
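For readers who want to try a similar reliability check on their own survey data, here is a minimal sketch in Python. It is illustrative only, not the authors' analysis code: the file name, the item column names, and the 1-to-5 coding are assumptions, and a full confirmatory factor analysis like the one described above would typically use a dedicated package (for example, semopy in Python or lavaan in R).

```python
# Minimal sketch: Cronbach's alpha per CTB category for a Likert-style survey.
# Hypothetical data layout: one row per student response; columns such as
# "align_1", "align_2", ... hold ratings already coded 1-5 (never ... very frequently).
import pandas as pd

def cronbach_alpha(items: pd.DataFrame) -> float:
    """Internal-consistency reliability for a block of items (rows = respondents)."""
    items = items.dropna()
    k = items.shape[1]
    item_var_sum = items.var(axis=0, ddof=1).sum()
    total_var = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_var_sum / total_var)

responses = pd.read_csv("ctb_midterm_responses.csv")  # hypothetical file name
for category in ["align", "include", "engage", "assess", "integrate_tech"]:
    block = responses.filter(regex=f"^{category}_")  # all items in this category
    print(f"{category}: alpha = {cronbach_alpha(block):.2f}")  # ~0.7+ is commonly acceptable
```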
(09:33):
It's always nice when we take on a project like that, and it works out, right?
Absolutely.
A lot of the teaching evaluations used in many departments are not very well designed,
and they often include many categories for which students may not be the best source of judgment.
One of the nice things about your evaluations is that they focus on observable behaviors that
(09:54):
are evidence-based and should be observable to students,
and if they're not observable to students, the instrument would reveal that in ways
that suggest the faculty member may need to address it. So one of the things you suggest is that
this instrument might also have some impact on the bias that has been demonstrated to show
up in student evaluations of teaching. Could you talk a little bit about that?
(10:17):
Yeah, and you see me nodding along and getting really excited, because that is another thing
that, when we first published the book and the article, we had some preliminary data that
suggested, yes, it reduces bias broadly, but we actually had an opportunity to gather additional
data on that and to look at that, and we are still in the final stages of preparing the data
(10:38):
and analyzing it, but we have done some analysis on this now that shows that if we're looking at
multiple factors that all impact teaching and learning and student feedback in some way,
the questions that we are asking with the critical teaching behaviors midterm feedback instrument
actually show now that, statistically, gender in particular is no longer a significant factor.
(11:05):
So that means that, basically, gender is not one of the demographic categories that seems
to be influencing the student responses to the questions that we're asking. We do see,
in the data analysis that we have now, that teaching experience, so years of teaching,
as well as age, are still statistically significant, but gender is one of the
(11:29):
categories that is no longer statistically significant. So that was exciting to see.
You mentioned that age and years of teaching experience were still significant. What was
the nature of the impact of teaching experience and age on the evaluations?
So if we're looking at the data that we have, it shows that teaching experience and age
go in parallel: the longer the teaching experience you have, and the older you get,
the higher the ratings students give you on how frequently you are
engaging them, on how frequently you are creating an inclusive environment in the classroom,
and on how frequently you are providing feedback that is meaningful to them. So overall it
(12:13):
increases until around the 55-plus age group, where it falls off. But that is probably a limitation
of the data, because we had very few respondents in that age group. So, that's what we found.
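To make the kind of analysis described here concrete, below is a minimal sketch of one way to test whether instructor demographics predict ratings, using ordinary least squares in Python. This is illustrative only, not the authors' actual model: the file name, the column names, and the mean-rating outcome are assumptions, and their study may well use a different specification.

```python
# Minimal sketch: do gender, age, and teaching experience predict midterm ratings?
# Hypothetical data: one row per survey response, with the instructor's
# demographics joined on and "mean_rating" averaging the 15 frequency items.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("ctb_ratings_with_demographics.csv")  # hypothetical file name
model = smf.ols("mean_rating ~ C(gender) + age + years_teaching", data=df).fit()
print(model.summary())  # the p-values show which factors remain significant
```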
I can imagine at this point in the conversation, audience members might be like, yes, this sounds
good, but what does it look like? Can you talk a little bit about what some
(12:36):
of the questions are? You mentioned that there's some multiple-choice questions,
some open-ended questions. Obviously, “midterm” implies when in the semester you might
implement this tool. Can you talk a little bit about what those questions actually say?
So let me talk first about what midterm student feedback is. Obviously this is something we
collect at midterm, specifically between weeks five and ten, which is what the research recommends. There
(13:01):
are a lot of tools out there to collect midterm feedback. The standard forms are all open-ended
questions. Typically they're two to three questions long, maybe a little bit longer if
you as an instructor choose to add an open-ended question or two. The most common questions that
are asked are:
(13:19):
“What's working well in this class? What's not working? What additional feedback do you want to share?” There are also versions of this that look like,
“What should I start doing? What should I stop doing? What should I keep doing?” So I just
want to put that out there that we are not the originators of midterm feedback. This
has been around for a while, and there are lots of different ways of doing it, although they're
(13:40):
fairly similar and standard in the fact that they are qualitative and open-ended. What makes ours
different is that we did choose to depart from the purely qualitative aspect of this by introducing
some quantitative questions. So with our feedback tool, there are still the standard questions:
“What's working well, what's not working well?” From a data analysis perspective,
(14:01):
I think it's interesting to note that we chose to put the quantitative questions first on our survey
for students. So we have on the validated tool 15 quantitative questions that students answer
on a never-to-very-frequently scale, and those questions are broken down by each of the CTB
(14:22):
categories, except for reflect, because students really can't see us reflecting on our teaching, so
that's not something we ask them to give feedback on. But an example question would be… so from
the align category, “The instructor states the learning outcomes, development of specific skills
and knowledge to be accomplished in the course assignments and activities.” And then students
would answer anywhere from never to very frequently. So each of these questions is designed to capture how often
(14:47):
an instructor engages in particular behaviors that we know from the framework research are proven
to promote student success. When the instructor gets this report, they see a breakdown of their
behaviors by category, so that they're able to look at the quantitative data and see overall
strength by category. So for instance, you might see that you're doing really well in “engage,” but
(15:10):
students said that you only rarely or sometimes gave feedback that was helpful for them. So maybe
the “assess” category is an area for development. So what's nice about these quantitative questions
is that it's allowing you to get the nuance of individual behaviors while also capturing overall
pictures of what's going on in these categories from the student perspective. Now, I mentioned
(15:33):
that we put the quantitative questions first on the survey. There's some interesting stuff going
on there in terms of what students end up putting down in their qualitative feedback, and this is
something we would like to study in more depth: how much does the language of the quantitative
questions prime the students in terms of what they give feedback on and the language that they use?
(15:55):
This is not something we've run a study on yet, because we're running all sorts of other studies,
but it is a study that we do plan to run on the data we've collected through midterm feedback,
because if this bias study has shown us anything, it's that the tool does help mitigate bias to some extent.
If the tool also does some priming of students to help direct their feedback to particular
(16:16):
areas of teaching and away from comments like “I love her shoes” or “I
hate his shirts,” then it has an additional use, which is that the quantitative questions
are doing some work to teach students how to give good feedback. That's our hope, anyway.
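As a concrete illustration of the category breakdown described above, here is a minimal sketch of how the 15 frequency items could be rolled up into per-category scores. It is illustrative only, not the actual report code: the file name, the column naming scheme, and the intermediate scale labels are assumptions.

```python
# Minimal sketch: roll 15 Likert items up into per-CTB-category averages.
import pandas as pd

# Assumed label coding; the instrument's anchors run never -> very frequently.
SCALE = {"never": 1, "rarely": 2, "sometimes": 3, "frequently": 4, "very frequently": 5}

responses = pd.read_csv("ctb_midterm_responses.csv")  # hypothetical file name
categories = ["align", "include", "engage", "assess", "integrate_tech"]
summary = {}
for c in categories:
    # Select this category's items, map labels to numbers, and take the grand mean.
    block = responses.filter(regex=f"^{c}_").replace(SCALE).astype(float)
    summary[c] = block.mean().mean()

print(pd.Series(summary).round(2).sort_values())  # lowest score = candidate growth area
```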
So let's imagine that I'm an instructor, and I use the midterm feedback form and
(16:38):
I get some feedback about areas that maybe I'm doing well in,
and areas that I might need to improve upon. What do you suggest are the instructor’s next steps?
Well, I would suggest that you consult with your Center for Teaching and Learning. Oftentimes,
this is a service that Centers for Teaching and Learning offer, and there are trained consultants
like me and Claudia who can help you make sense of what you get out of that experience. In fact,
(17:03):
we can help facilitate that experience as a neutral third party, which can help students feel
like they're not necessarily going to be judged on their responses. Even if you make it completely
anonymous, there are always those students who say, “Well, but if you really want to,
you can figure out who said what.” But if you have a neutral third party who is facilitating this
experience, it can make it a better experience for students overall. Not everybody has the
(17:27):
luxury of access to a Center for Teaching and Learning. And when we were designing our tools,
we were thinking about that. So the next thing I would say is, if you don't have a Center for
Teaching and Learning and trained consultants, you do have colleagues. One of the things that is very
important to us in this project is that we're creating dialog around teaching and learning.
So if you have a friend or a colleague who would also like midterm feedback, maybe you can conduct
(17:51):
this for each other and have conversations with each other about this instrument. Now,
of course, you can always just administer the tool yourself, collect the data and sit down to
analyze it. If you're going to sit down to analyze your own data, here's what we have for you. If you
request access to our folder of materials, you'll see in there an Excel spreadsheet that Claudia,
who is an Excel guru compared to me, has created that will help you decipher the
(18:16):
quantitative data. It'll spit out some really nice graphics for you to be able to analyze,
but I'm super excited to be able to share that we've also developed a way to help you make sense
of the qualitative feedback on teaching. This is where we are debuting the CTB FIT, the Critical
Teaching Behaviors Feedback Insight Tool, an AI-powered platform that we developed in
(18:39):
conjunction with a student at Georgia Tech, Elliot Roe, and his partner in crime, Duncan Johnson,
who's at Tufts. And these students helped us train the AI platform on the critical teaching behaviors
categories, again, not including “reflect,” because it's not something that students
can give feedback on. And what this platform will do is you feed it the student feedback,
(19:03):
and it will output an Excel spreadsheet for you where it tags the qualitative data with the five
CTB categories. It'll pull out snippets and code them into each of the CTB categories. Then it will
do a count of the number of times it tags the category and it will produce a nice bar graph,
so you get a visual of how many times something shows up, for instance. This has been really
(19:26):
useful to the faculty who are conducting midterm feedback. We've gotten some great
preliminary insights from faculty saying that this helps them make sense of the data
they have collected from students faster, and it helps them focus less on that one-off comment,
you know, the one that gets under your skin. You have all these other wonderful
(19:48):
comments praising your teaching, but there's that one that sticks with you, because we just can't
shake the one negative comment. This is why it's so helpful to have a consultant do this,
or to have a peer do this, because they can move you past the one negative comment. But
if you're doing it for yourself, it’s so easy to get hung up and miss the bigger picture.
What FIT does is organize the data and provide you with AI summaries in each of the CTB categories
(20:14):
that it tags, so that you're not focused on that one negative piece of feedback; you can now focus
on the big picture and take away insights that allow you to report back to your students and
make informed decisions about how you're going to teach your class. Claudia, what would you add?
Just reiterating, I think, the importance of using this as a tool for conversation as well.
(20:36):
Lauren already mentioned that colleagues can play a really important role in helping us make
sense of this. The tool is really available for anybody who's interested to use.
But I think the other important thing to mention also is when we're looking at the feedback that
we are receiving, one important conversation partner is also our students, and so making sure
that whatever we're learning from the feedback, whether that is positive or room for improvement,
(21:01):
we go back to the students and say, “Hey, I learned this from you. I appreciate you for
providing the feedback. And here's something that I can change, maybe in response, here's
something that I will not change because I have a good reason. Let me explain that.” But yeah,
so I think just reiterating the importance that students have in this process,
not just in providing the feedback, but also being a conversation partner with us.
(21:22):
And I would close that thought by adding that, to do this recent study that we're working on,
we ran a couple of focus groups with faculty. One of the really interesting insights that
came out of that focus group for me was that faculty who do midterm feedback and debrief
it with their students go back and talk to them about what's not working in the class,
what they can do to change it, and what they don't have the power to change.
(21:46):
But there was this big revelatory moment in the conversation where a faculty member said, “I have
never talked to my students about what they say is working well. I've never gone back and said,
‘Thank you for letting me know.’ I have never debriefed that with them.” So if there's anyone
else out there who is realizing that they also only focus on what's not working and what they
can and can't change, this was a really great moment where faculty came together and said,
(22:10):
“Oh, we also need to talk to our students about the positive feedback that they're giving on our
teaching and what is working. So that's part of the loop that we need to close as well.”
Now, you mentioned that this is an AI tool. Was it created as a GPT in ChatGPT,
or did you use some other system for creating it?
This is a question primarily for Elliot Roe, who's not here today, but Elliot created a pipeline that
(22:33):
is trained on the critical teaching behaviors, and it goes to ChatGPT. What we were careful to do, keeping in mind
(22:39):
that this is student data and faculty data, is protect privacy. We don't
store your data. First of all, that costs a lot of money, and second of all, that gets into all
sorts of privacy concerns. So if you run any sort of data through the platform, you need to download
that report immediately, or the next time you go to run a report, it will be gone. So it's actually
stored in your cache; it is not stored on our site, so that data belongs to you. As we've set
(23:06):
it up, it's not saving the data in any way, shape, or form, and it's not being used to train ChatGPT
on anything, as far as we can tell from talking with Elliot, who is our technological guru here.
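To give a feel for what a stateless tagging pipeline like this can look like, here is a minimal sketch using the OpenAI Python SDK. This is illustrative only, not the actual CTB FIT code: the model name, the prompt, and the category handling are all assumptions, since FIT's own pipeline is described above only in broad strokes.

```python
# Minimal sketch: classify student comments into the five CTB categories.
# Stateless by design: nothing is written to disk; results live only in memory.
from openai import OpenAI

CATEGORIES = ["align", "include", "engage", "assess", "integrate technology"]
client = OpenAI()  # reads OPENAI_API_KEY from the environment

def tag_comment(comment: str) -> str:
    """Ask the model to place one comment into one CTB category."""
    prompt = (
        "Classify this student feedback comment into exactly one of these "
        f"teaching behavior categories: {', '.join(CATEGORIES)}. "
        "Reply with the category name only.\n\nComment: " + comment
    )
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # assumed model choice
        messages=[{"role": "user", "content": prompt}],
    )
    return response.choices[0].message.content.strip().lower()

comments = ["The weekly quizzes helped me see what I still needed to review."]
tags = [tag_comment(c) for c in comments]
print(tags)  # e.g. ["assess"]; counting tags per category yields the bar graph
```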
We have encouraged faculty to consider using ChatGPT on anonymous student surveys on our
campus, because it has all those nice advantages that you've talked about,
(23:29):
that it can summarize things very nicely, and it can focus on the major themes that occur,
and it can also offer some suggestions on ways in which faculty might even address
that if that's programmed into the chatbot. So it sounds like
a really great tool. Are you sharing this more widely beyond your campus?
Yes, that is the goal. It's free to use, and we want to keep it free to use,
(23:51):
because our goal is to improve teaching in higher education, and we believe that this
is a tool that can do that. To be clear, you're not going to sign into ChatGPT when you do this;
CTB FIT is its own platform that you sign into and put the data into, and it will output the
Excel spreadsheet for you through that platform. We do ask you to sign up. One of the ways that
we're trying to protect privacy and security is to have sign-ups, but that's not because
(24:13):
there's a paywall; it's just that having a sign-up is an additional security measure.
So we're putting it out there. We hope that faculty use it. Our idea behind this is that, yes,
you can run your data through any AI platform you want, and it will give you some insights,
and it will probably do some analysis for you. The problem I've seen when I run data
(24:35):
through it is that it's not trained on any sort of teaching database in particular,
so depending on its mood that day, you will get all sorts of different answers out of it,
and some of them might be good, but as we know, some of them might not be. We think that
the value of FIT is that it's grounded in that common language. So if you get your report back
(24:57):
and you see that students are seeing a dearth of behaviors, for instance, in the “Engage” category,
and you want to know what behaviors you could implement in the “Engage” category, well,
we've got a whole book on that for you. You can go to the chapter, you can reference the research
that we've done. You can dig deeper into that category. There is a real knowledge base behind
(25:20):
this platform that allows faculty to dig deeper into their own teaching. You can dig as deep as
you want. You can go reference the chapter in the book. You can start looking at the references and
go look at the research for yourself, and then the book, of course, is structured to produce
reflection, so that the reflection that faculty engage in produces growth. So if you then want to
(25:42):
set a goal for growth, we've also got tools in the book to help you do that. So really,
the idea of FIT is that it can be as surface-level an analysis of the data as you want it to be, but because
there's this knowledge base behind it, you can dig deeper and deeper and deeper as much as you want.
One of the things that you've talked about is that debrief,
(26:04):
but I imagine the setup of the midterm feedback is equally important. So can you talk a little
bit about what the setup is as you present the opportunity for feedback to students,
as well as what that debrief ideally looks like? You've hinted at some of those things,
but I know people are always looking for very concrete, tangible ways of doing things.
Yeah. And so when we are facilitating midterm feedback as a third-party facilitator from the
(26:30):
Center for Teaching and Learning, we usually go into the classes, if it is a face-to-face class,
and give that introduction to students, telling them that this is something that's voluntary
for the faculty. They are not required to do that. They are engaging in midterm feedback because they
really want to know how things are going for the students and whether the strategies and materials
(26:50):
that they have chosen are meeting the students’ needs, and so emphasizing that it's voluntary,
but then also telling students that not all feedback is equal. If a student just says, “Oh,
this class is awesome,” that is great affirmation, but it does not really help me get better at my
teaching. And so we also tell the students how important it is to think about providing feedback
(27:16):
that is specific and actionable, pointing out that, if they are saying the presentations are amazing, they should give
some more feedback, such as “the PowerPoints are great” or “it's a great interactive lecture that
helps me think along.” So really providing some of that detail. If we are not in a face-to-face
classroom, we do have that information also available as a script that faculty can use
(27:38):
and then share with their students through the learning management system, as an announcement,
again, mentioning: it's voluntary; it's really helping me, as the instructor in your class,
grow; it's helping you as a student provide me feedback now, rather than waiting until the end
of the term; and please provide feedback that is as specific as possible. And so that is usually what
the setup looks like. For the debrief for our campus, we are meeting with the faculty member
(28:04):
one-on-one to facilitate the debrief, and it is really about the conversation, where we are
looking at the data primarily and then making sense of it together. So for us, it is really important
to not be prescriptive, even if we're saying, “Oh yes, maybe engagement is a category where there's
room to grow,” it is not that we're saying, “Oh okay, engagement is not good right now,
(28:28):
let's do think-pair-share every single day of the week.” So really thinking through that there are
many ways in which we can engage our students, and what are some options that really work for
your course, for your context: Are you online or face-to-face? What is your personality like? What
is your teaching persona? And really finding strategies that are helpful for the individual,
(28:49):
rather than a silver bullet that works across the board, which doesn't exist.
One of the things I really liked about your book is part two. Part one I really liked as well: it was
very comprehensive, and it provided some really good strategies that are evidence-based. But the
second part of it, and the appendices as well, are very helpful, because a lot of faculty, especially
when they start their teaching career, but often continuing for quite a while, suddenly get hit
(29:13):
with this need to write up a summary of their teaching activities and a reflection on their
teaching that's used for promotion and tenure. And there are very few resources out there that
address that, and your book does a really nice job of giving faculty information on what they
may want to include in these reports and what they may want to submit as part of the reflection. This
(29:34):
seems like a gap in the professional development literature that your book fills very nicely.
First of all, thank you, that's a lovely compliment. When we were developing this
framework, the need to have the conversations around teaching grounded in the same language
was a paramount issue on the campus we were working at, but we were very attuned to the
(29:55):
need to document teaching, because this is part of the conversation. So we can have conversations
with peers, we can have conversations with administrators, we can have conversations
with students. Those are sort of the three different types of conversations we have,
and they all need to be grounded in the literature of good teaching. But it's really hard to have
that conversation in any sort of personalized way if you have none of your own data to back it up.
(30:19):
This is a very timely conversation, because just yesterday, I ran a two-hour session on reflecting
on teaching for faculty at Georgia Tech, and part of what we did is we started with, “What do you
think are your teaching strengths and what are your areas for growth?” We just put that out there
into the ether and let them put something down. But after we had talked about the CTB framework,
(30:44):
we went through a more structured teaching practices inventory and had faculty sit down
and go through the framework and say, “What are you actually doing in each of these categories?
Now look at it across categories. What is your teaching strength and what is your area for
growth? How does that compare with what you said when we first started this session and you had
(31:05):
no common language, you had no scaffolding to help you make that decision?” It was a really
interesting conversation, because someone said “it was hard,” like, “well, it was hard in the sense
that it was work. It was a lot of work to sit down and really look this closely at my teaching.” So
I think part of the reason this doesn't happen more often is because faculty are busy. We're
(31:26):
busy people already, and making time to formally reflect on teaching is difficult because it's just
another thing that we don't have time for, and it seems like there's always another fire that we
need to put out. So it doesn't become a fire to put out until you need to create your materials
for promotion, and then it's an issue because you haven't created any of this documentation along
(31:47):
the way. The goal of our book is to help faculty reflect as they go to build in just a little bit
of time to do that structured reflection so that they can identify areas of strength and
set intentional areas for growth. Why? Because a narrative of growth over time is persuasive,
and when you go up for promotion, being able to show intentional goals, data you've collected in
(32:12):
support of those goals, matters. And the data doesn't have to be from a full-blown research study. It can be student
feedback over time that shows you're getting stronger in these categories. Having all of these
aligned CTB tools gives you a common language to collect data from students, peers, and yourself,
so that when you go to create that portfolio to showcase what you've done, you no longer have to
(32:37):
reconcile “My peers are talking about my teaching this way, but students talk about my teaching this
way, and admin understands it this way. How do I put all these different perspectives together in a
way that's going to make any sense when they seem to all be speaking different languages?” Well, the
tools we've developed are all aligned to the same common language, and when you go to put together
(32:58):
your portfolio, it's that much more powerful and persuasive to have the common language,
to have collected the data over time, as opposed to sitting down at the last minute saying,
“I've got to put this together, and now I've got to figure out what I did,”
whereas 10 minutes a week could allow you to create a much stronger teaching portfolio.
(33:18):
And you provide a number of resources in various appendices at the end of
the book that faculty can use for some of these purposes. Could you
just briefly talk a little bit about what's included in these appendices?
Yeah, so in the appendices we have the framework, of course, but also we have
the peer observation report form, as well as associated materials that help peers set
(33:42):
up the observation and gather information before we even go into the classroom,
or before we go into the online class. It has a prior version of the midterm feedback instrument;
there is an updated version available in our online shared documents that anybody can ask for,
but the appendix gives a nice preview. It also has some reflection questions, as Lauren said,
(34:04):
that help prompt some of the reflection for faculty to start putting together the material
into a coherent format for a portfolio. Also you mentioned it is Creative Commons licensed,
and that is true, that was also an intentional choice, because we recognize that while the
critical teaching behaviors framework, we hope, provides a great starting point for everybody,
(34:27):
there are institutional preferences. Sometimes, when it comes to language, there are institutional
focus areas that may not necessarily be captured with what we are providing. And so
it is intentionally Creative Commons licensed. So it is a starting point that people can build on,
based on departmental needs, based on their individual needs, and based on institutional needs.
(34:51):
It was important to us that these materials be Creative Commons licensed for meeting contextual
needs. I think as much as we recognized that a common language was necessary in higher ed,
we also recognized that institutional context matters, individual context matters,
and that there can't be one definitive way of doing things. The Creative Commons
(35:11):
license doesn't just allow you to adapt to your context. It allows you to make these materials
multifunctional. So let's say that you really want to do a deep dive into your “assess” behaviors,
and you want to have a peer observation report that's just focusing on your “assess” behaviors.
You can do that. Or if you have an institutional initiative… down here in the South, we have
(35:32):
quality enhancement plans for our accreditation. Let's say you wanted to focus on just one aspect
of the critical teaching behaviors to align with institutional objectives. You can do that. The
only thing that's not Creative Commons licensed is the framework itself. So that still stands,
but we've had a number of institutions take that as a place to develop their own institutional
(35:54):
framework. So Harvey Mudd, in particular, has done a great job with this. Most recently,
they even added a whole new category on mentorship, because that meets their campus need.
But we've worked with a number of institutions now that have really done a good job of taking
this not as
(36:08):
“we told you, this is the language,” but as “somebody went out and did the research for us so we can develop the language that we use here at our institution.” We see more
and more of this happening, and it's most effective when it's grounded in evidence-based
practice. Not everybody has the time to go out and do that research, but apparently we did,
(36:30):
so we went out and provided that for people to go and build their own frameworks.
It's always nice when there's a starting point and you don't have to start from scratch,
and that there's templates and things that you can iterate on so that you can move
forward faster. So we always wrap up by asking, what's next? You've given a little preview, but…
(36:51):
We just look forward to talking more about critical teaching behaviors no matter what,
because we believe it can make a huge difference in fostering conversations
across campuses. But then also, as Lauren said, we are really looking into peer observation,
more specifically, as a next focus to really dig deeper, as a way to continue those conversations,
(37:14):
as a way to apply critical teaching behaviors, but also as a way, again,
to have a starting place to then customize something that meets somebody else's context.
So we always have a little bit too much in the way of what's next.
I think we would like to continue the research projects on midterm feedback in particular. So
(37:37):
midterm feedback, we would like to do that study on whether the quantitative questions
have any bearing on the qualitative responses we're getting from students.
That is an area we'd like to follow up on, but we do have that book contract,
and we're supposed to have that manuscript done by June of 2026, so that's gonna have
to take precedence over any other projects that we might want to delve into in the next year.
(38:03):
Well, thanks so much for joining us today and sharing some of your work and the framework.
Thank you for having us.
Thank you for having us. It’s been great.
Thank you for joining us. And again,
this book is an excellent resource for faculty at any stage of their teaching.
(38:24):
If you've enjoyed this podcast, please subscribe and leave a review on iTunes
or your favorite podcast service. To continue the conversation,
join us on our Tea for Teaching Facebook page.
You can find show notes, transcripts and other
materials on teaforteaching.com. Music by Michael Gary Brewer.