Episode Transcript
Available transcripts are automatically generated. Complete accuracy is not guaranteed.
(00:00):
Eric, really a pleasure to have you with me on The Road to Accountable AI.
Thanks Kevin, it's great to be here with you.
You have been leading what was the Wharton Analytics Initiative for a long time.
How long has it been actually?
Well, it depends how far back you want to go.
I mean, we started 20 years ago.
My colleague, Pete Fader, and I started the first ever center in any business school in the application of, if you'd like, analytics and data science to business problems.
(00:29):
I've been vice dean of, well, then analytics for eight years.
And now for the last two years, since everything's about AI today, I'm now the vice dean of AI and analytics.
So it depends how you want to count.
I've been a researcher for 30 years.
A center at Wharton for 20 years, I guess officially vice dean for eight years.
So what then is new and different about AI from your perspective?
(00:51):
Yeah, so I think one has to look, the way I've always liked to bifurcate it is AI has been around for 50 plus years, but that's more of what I'll call traditional AI, which is, you
know, I have a video file, I have an audio file, I have text data, or if you like, I have non-numeric data, and I want to encode that non-numeric data.
(01:14):
I mean, obviously in my home field of marketing, that's a big deal because we have advertisements, we have online ratings and reviews, and
all of that stuff is non-numeric, but we want to code that non-numeric data.
That's not the new part.
That's the traditional AI that computer scientists, statisticians like myself have been working on for years.
The new part is, of course, the word generative, generative AI, where if you like, the large language model engine can actually generate content for you.
(01:45):
And that could be generating text, audio, video, pictures.
PowerPoint slides, all kinds of things.
So the new part is the ability of these models to generate new content.
But also I think the new part is its facility to handle what we call multimodal data.
So maybe you have a text data file, you have a PDF file, and a picture, and an audio file, and you want these all simultaneously ingested.
(02:13):
So I think there's new stuff, what I'll call, on the ingestion side.
And I think there's new stuff, especially on the content generation side.
So what does that mean in terms of our businesses that you interact with?
And also for those of us in academia as scholars, so many of the basic techniques have been around for a long time.
And if we broaden it out to all of analytics, anyone using Excel is doing analytics at some level.
(02:39):
You talked about what's new with this concept of generative AI, but what's really changing?
Yeah, the way I view it, and I think it's a good term to use on a podcast on The Road to Accountable AI, is it's really a form of democratization.
Which is, you know, I've been programming in, what do you want to call it, S and then R and then free versions, online versions, then Python for like 30 years.
(03:04):
But that's because I'm a statistician.
Most people don't do programming.
But if you actually think about what large language models can do,
they transform business because they put the power of analysis, data science, data cleaning, querying data in interesting ways, collecting data, cleaning data into the hands
(03:26):
of so many people today.
So the reason I think it's so transformative to business is twofold.
One is I can't imagine anybody listening to your podcast or my show on SiriusXM, Wharton Moneyball.
I can't imagine anybody that says,
wow, there are parts of my job that are rote and mechanical, I really don't enjoy doing them.
But a large language model could do it for me.
(03:48):
Everybody has that.
And then there's the second part of it, which is there are just people that want to be able to analyze data and do data-driven decision making.
And now large language models allow a much broader set of people to do this.
Okay, and you mentioned you were the Vice Dean of Analytics, we had the Analytics initiative, now it's AI and Analytics.
(04:11):
So just in terms of what you're leading at the Wharton School, what has changed?
Yeah, so I think it's important that, you know, the way I view it is there's kind of at least, I probably will skip a few, but there's at least four audiences we're trying to impact
through, you know, we think we have a pretty cool acronym, WAIAI, which is the Wharton AI and Analytics Initiative.
(04:32):
We think there's at least four audiences.
One, of course, is not surprisingly researchers.
So we're trying to provide resources, whether financial, data sets, connections to companies,
so that the scholars at the Wharton School and around the world can do better research because of access to data.
The second thing, of course, is students.
So one of the things I'm trying to innovate a lot on, including in my own class that starts tomorrow, is bringing AI and data science into the classroom.
(05:01):
But my course is, of course, in data-driven marketing strategy.
I'm the chair of the marketing department.
It's not a statistics course, but statistical methods can be used.
to understand marketing strategy.
So how do we use AI to do that?
So first, researchers, second, educators.
Third part, of course, is we're the Wharton School, so we're trying to impact businesses.
(05:22):
So how can we partner with companies, which has always been our secret sauce?
The way I like to describe it, at least it helps me sleep well at night, is other schools hold conferences on AI and data science.
We partner with companies and ingest data.
So we actually provide students with experiential learning opportunities where instead of interviewing Eric Bradlow at a company and saying, so what have you done in AI and data
(05:47):
science?
And I'm like, well, I took a bunch of courses, and they're like, next?
Well, or how about I worked with, I'll make this up, I worked with Google on a project doing X, oh really, tell me about that project.
You're hired, Mr. Bradlow.
And then the fourth part of course is,
what we'd like to call, I don't know, hashtag analytics or AI for good, we want to have a positive impact on society.
(06:07):
And so Wharton has the opportunity to take a leadership role, whether it's on things like responsibility, accountability, or the application of AI for public good, whether it's for
nonprofits, issues like policing reform.
There's tons of educational benefits where Wharton can play a big role.
So really, WAIAI's mission, again, is fourfold.
(06:30):
Research, students, business, and society.
Those are the four areas we're trying to impact.
And as you know, the way we do it is think of me as the vice dean as being the CEO of a parent holding company, where we have 10 centers that sit underneath the parent, WAIAI,
one of which is the center we just launched that you're running on Accountable AI.
(06:54):
But there's 10 centers that make up the heartbeat of what we do at Wharton around AI and data science.
Great, and we'll get into that in a minute, but I just want to get you to paint the picture a little more broadly about the overall initiative.
When most people think about AI research centers in academia, their first thought is probably computer science or some of the more technical fields.
(07:18):
So can you say a little bit more, give some more examples about in the business context,how is this relevant?
Sure.
So you could even find this, for those listeners of yours that want to, just go to your favorite browser and type in WAIAI.
You'll see our mission statement is actually very clear.
We're the business school.
We're the applied group.
(07:39):
So if you're looking to develop new statistical methods, whether it's machine learning, transformer models, et cetera, around AI, that's exactly the kind of research that is done
at the engineering school and is very well done at,
whether it's Penn Engineering, et cetera.
We're an applied school.
And so our mission statement is about the application of AI to a bunch of different problems.
(08:04):
And so while we borrow the methods of statisticians and computer scientists, and while there are many of us, like myself, and maybe I'm making a number up, Kevin, 15 to 20 of us
at the Wharton School that are applied or methodological statisticians that do develop new methods, most of the Wharton faculty in their research want to use state-of-the-art
methods
to solve a research problem in their domain.
(08:26):
So that's really the difference is that most people at Wharton aren't developing new methods around AI, although some are.
Most of us, most people, are actually using AI and data science to solve either substantive questions, policy questions, statistical optimization questions, things like
(08:47):
that.
What's been the biggest challenge since we made this pivot to have more of an AI focus?
Yeah, it's a good question.
I would say that one challenge, which is not going to be that surprising, I'm sure many of your guests have said this, it's a constantly moving target.
So what are you building for?
Are you building for what exists today, like the capabilities of large language models today?
(09:13):
Are we building for five years from now, 10 years from now, even though we have no real ability to forecast what that's going to be?
So I would say that's the number one challenge, which is in many fields, statistics, machine learning, optimization.
It's not that there aren't breakthroughs, but they're not at the rapid pace that we see happening today.
(09:34):
So my concern as the Vice Dean, and when I think about the investments we make at the school level, I almost like, you know, it's kind of like as someone that's written a lot
of code in his lifetime.
I almost don't want anything hard coded.
Like any parameter I put into something that said,
Well, assume ChatGPT-4, well, that's not necessarily true.
(09:54):
Or assume the large language model can handle this type of data, but not that type.
Well, that's not true.
Assuming it can only handle a data set of this many tokens, well, that's not going to be true forever.
So it's almost like the way to stop yourself is any time something literally gets hard-coded, like you're typing in a number, or a specific company, or a specific large
(10:15):
language model, you probably aren't building something for the long run.
So this is a challenge that many companies have as well with AI.
And so what do you do in your organization to be adaptable in that way, given how fast technology is moving?
Yeah, that's the great question.
So we're doing stuff as quickly as we can, which means we're actually utilizing large language models.
(10:44):
We're noting what they're good at, what they're not good at.
We're learning how to train them better so that they can become better at what they're doing.
And we're doing our best to speak to thought leaders about what's coming next.
Is the next great benefit quantum computing?
which will allow for even smarter trained models.
(11:05):
And so it would still be of the same format, just the models will do better or handle more types of data or larger data.
Or are large language models going to become, this is the classic agentic problem, are large language models just going to become embedded in everything we do?
And therefore it's not about using large language models.
It's like you will have no choice because whether, you know, whether you're a Microsoft Office user or Salesforce or
(11:31):
speaking into your phone or whatever, large language models are going to get built into them anyway.
And so my guess is we're in the very active phase now, what I would call active learning, where you actively have to do things to use large language models.
But my guess is, Kevin, I don't know if it's two years, three years, five years, but I think probably most of us agree we will be in the passive phase sooner than later.
(11:57):
where large language models, just like other types of algorithms, are embedded in all the things that we do.
And we just won't know it because, and then we could talk about whether that's an ethical thing, but they will just become embedded in the software and other programs we use.
If that happens or when that happens, how do you think that's going to change what we do as academics and how do you think it's going to change what business people do?
(12:26):
Yeah, I think one of the questions we have to ask ourselves and you know, this type of, I'll call it, revolution.
Look, when COVID happened and you know, everybody went virtual for a while.
The question that was asked then is, is brick and mortar university based education dead?
And then we figured out that actually not only do learners learn better in a collaborative environment, they're more engaged.
(12:51):
And as a matter of fact,
how many times, Kevin, did you as a scholar, me as a scholar, me as an educator, you as an educator say pre-COVID, wow, it would be great to teach online?
And I don't know anybody afterwards that didn't say, wow, this is the worst.
There's no way I'd want to spend the rest of my career teaching online.
I'm not saying some.
I'm actually the counterexample on both of those, but that's a separate question.
(13:13):
But your point is well taken.
I also know, because one of my sons took your course, that you led at Penn, certainly at Wharton, one of the largest online courses that existed during COVID.
And it was, my son told me, highly, highly successful.
But most people still prefer in-person teaching.
(13:34):
And so I think one of the questions we have to ask ourselves as educators is,
What are we educating students in a business school to do?
Now, I may be, you know, I'll put it this way, given my political leanings, I would never describe myself as far to the right on any scale.
So I'll say far to the left.
What does far to the left mean for me?
It means I want to teach students how to use large language models to address the kind of business questions I want them to learn from my course.
(14:03):
Other faculty might want to prohibit the use of large language models.
To me,
Again, I think of this as an applied school.
I know for a fact that most of our students are going to use large language models in their jobs.
Therefore, my view is I should train them how to use them in a responsible, ethical way in the type of work that they're doing.
(14:27):
And so that's my goal.
And that's why my job as an educator, as I mentioned, I'm starting to teach tomorrow, has changed dramatically because I have to think about if this were me,
taking Professor Bradlow's stuff and actually trying to apply it, what would I actually do with this now that I have this, if you'd like, AI, large language model assistant that can
(14:49):
help support me?
I think all of us as educators have to answer that question.
So you mentioned the ethical dimension to it a couple of times, which I appreciate.
And you've always been very supportive of the work that I and others are doing on legaland ethical aspects.
And so I'm curious just to start, and then we do want to talk about the Accountable AI Lab that we've just launched.
(15:13):
As a statistician and a marketing expert, why does it seem important to you to have lawyers, ethicists,
philosophers, behavioral scientists, people like that working in this space?
Yeah, people ask me this all the time, like, why are you so, it's not what I do, why are you so interested in this?
(15:33):
I think there's a couple reasons.
One is more, I don't want to call it, there's nothing, there's no subterfuge or anything about it.
One is just, it's a unique differentiator of, I have multiple reasons, let me start with the first one.
If I was trying to build the most differentiated product at Wharton around AI and data science,
(15:55):
that would make Wharton unique and that would have an impact on society.
Well, we have something almost no other business school, certainly no competitor business school, has.
I don't have to tell you, you're the chair.
We have a legal studies and business ethics department.
So why wouldn't I leverage that as an asset?
So that's number one.
Number two, I spend half my day talking to companies.
(16:17):
They're asking questions about ethical, transparent, responsible AI.
The third,
purely as a statistician, one can view problems in ethics, let's call it accountability or privacy, one can treat those as mathematically constrained problems.
(16:39):
I mean, there are ways to, it's not like, actually, and that's why it's interesting when you talk about all the different people that you bring together in accountable AI.
Computer scientists have a lot to say around accountability, transparency, responsible use of data, privacy, etc.
And their job and my job is to translate those types of things into mathematical algorithms that satisfy certain properties.
(17:06):
Now we can debate about what it means to be ethical or fair or responsible, but if you tell me what that definition is,
I can then translate that into something that I'll incorporate into my algorithm.
I do it, really, again, you asked me the question, for three reasons.
One is, why wouldn't we leverage one of the most prized assets of Wharton, which is our Legal Studies and Business Ethics Department?
(17:29):
Two, companies are demanding it.
And three, it's actually a set of fascinating statistical and mathematical problems to think about how to bring these fairness, responsibility, accountability principles into algorithms,
and what are the constraints of that algorithm before someone can actually build it.
I leave that to you.
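To make the idea of treating fairness or accountability as a mathematical constraint concrete, here is a minimal sketch in Python. It is not Wharton's or anyone's production method; the synthetic data, the logistic scoring model, the demographic-parity-style definition of the group gap, and the penalty weight are all illustrative assumptions. It simply shows how a definition handed over by ethicists or lawyers ("keep the average predicted score similar across two groups") can be folded into an algorithm's objective.

```python
import numpy as np

# Synthetic toy data: features X, outcome y, and a hypothetical binary group attribute g.
rng = np.random.default_rng(0)
n, d = 500, 3
X = rng.normal(size=(n, d))
g = rng.integers(0, 2, size=n)                    # illustrative protected-group indicator
y = (X[:, 0] + 0.5 * g + rng.normal(scale=0.5, size=n) > 0).astype(float)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def loss_and_grad(w, lam):
    """Logistic loss plus a penalty on the gap in mean predicted score between groups."""
    p = sigmoid(X @ w)
    nll = -np.mean(y * np.log(p + 1e-12) + (1 - y) * np.log(1 - p + 1e-12))
    gap = p[g == 1].mean() - p[g == 0].mean()     # demographic-parity-style gap
    grad_nll = X.T @ (p - y) / n
    s = p * (1 - p)                               # derivative of the sigmoid
    grad_gap = (X[g == 1] * s[g == 1, None]).mean(axis=0) \
             - (X[g == 0] * s[g == 0, None]).mean(axis=0)
    return nll + lam * gap ** 2, grad_nll + lam * 2 * gap * grad_gap

w = np.zeros(d)
lam = 5.0                                         # fairness penalty weight: a modeling choice
for _ in range(2000):                             # plain gradient descent
    _, grad = loss_and_grad(w, lam)
    w -= 0.1 * grad

p = sigmoid(X @ w)
print("score gap between groups:", p[g == 1].mean() - p[g == 0].mean())
```

Swapping in a different definition, say equal error rates instead of equal average scores, changes only the `gap` term, which is exactly the kind of hand-off between the ethics discussion and the algorithm that Eric is describing.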
(17:49):
And this is why I joke actually in my own research.
You know, for years people, since my background is statistics, computer science, but if you look at my CV, I've written tons of papers with quote unquote consumer behavior colleagues
of mine.
And people say, why do you do that?
I said, because I know enough math and computing.
I need people that know something I don't know.
(18:10):
And that's why, you know, I try to stay in my lane.
I'll let you
and the ethicists and the linguists and the philosophers, all of them, you know, hash out all kinds of interesting definitions, and then we'll work together on it and then I'll say,
okay, well, here's how we would implement that in some sort of algorithm.
And so, one of my favorite things in life is I try to stay in my lane, that type of philosophical or, you know, legal, because there's a difference between philosophically
(18:38):
and legally.
Those discussions are had by others and I love being part of them, but I don't see myself as a contributor to them.
You mentioned that companies are asking for this.
Can you say a little bit more?
I agree.
I think people have this misconception that businesses are only concerned about profits and therefore anything where you say, you know, do it responsibly, they run away.
(18:59):
But what has been your experience with the organizations you talk to for Wharton AI and Analytics?
Yeah, one is just due to policy.
I mean, lots of companies in the tech space, for example, that you and I both deal with a lot, they actually, there's all kinds of policy restrictions.
Now, under the new administration, which started yesterday, those might change.
(19:20):
We don't know.
But I mean, there's literally, whether you want to call it gates that are put up between, this type of data cannot be used for this type of, let's call it, whether it's targeting
or identification, et cetera.
That's one thing.
It has to do with companies that care about policy.
Second is consumers care.
Consumers care about the respo- not everyone, there's heterogeneity, but many consumers care about the responsible use of their data, the ethical use of their data.
(19:50):
So if in some sense you have to, you know, I could make more profit as a company, but on the other hand, the backlash could come if consumers know that I'm using the data in this
way.
I think every company, it's kind of like a risk-reward trade-off.
I mean, why can't we just view it as what it is?
Yeah, I mean, and by the way, I do not agree with the premise, and I'm sure you, I'm guessing you agree with what I'm about to say.
(20:15):
I do not agree with the premise that one has to trade off profits to be ethical or responsible.
I think that's a, whatever they call it, it's a false dichotomy.
I don't think it's the right way to think about it.
And people will say, well, you're constraining profit making to be this.
And I don't think that's the right way to look at it at all.
Yeah, no, I totally agree.
(20:36):
You know, going back to your earlier point, no one is entirely sure how we're going to maximize the profitability and the economic benefits of this technology anyway, given how
fast it's changing and the uncertainty.
So yeah, it's just, it's another important piece of it.
So, what do you think?
So we've just, you know, now as we're recording this, recently launched the Accountable AI Lab, which, as you mentioned, I'm leading as part of this larger Wharton AI and Analytics
(21:00):
initiative.
What's your hope for this activity?
Yeah, so there's different, let's call it a pyramid, there's different pyramids of hope.
So one hope is I'll call it just purely from the brand perspective, when companies and other universities think about what school or business schools are leading in accountable,
(21:27):
responsible AI, they think of Wharton.
So that's number one.
And that helps us make connections with companies.
I can imagine many alumni interested in supporting us.
I can imagine it making it easier for us to hire.
There's all kinds of benefits that come with that brand value.
So that's number one.
Two, at the heart of what we do, we're a research and educational center.
(21:51):
So I'm hoping that by forming this lab, whether it's post-docs, doctoral students, whether it's that
you can hire better faculty in legal studies and business ethics because of the existence of this.
The way I describe it, Kevin, is that I'm on an intellectual capital campaign.
My job is to help Wharton acquire, whether it's faculty, doctoral students, postdocs, the best students, RAs, et cetera, around AI and data science.
(22:21):
And I think there are many, many faculty, students, et cetera.
that will be more excited about Wharton's work because of the Accountable AI Lab.
And so it could be the brand level, the research level.
I also think, you know, I get questions all the time.
Matter of fact, I'm predicting starting tomorrow when I'm teaching the Wharton MBAs, someone's going to ask me a question and I will say, well, you should go to Kevin Werbach and you
(22:49):
should go to the legal studies department because this is an ethical question to which I'm not sure of the answer.
Students all the time ask me questions like, do you think it's okay for companies to do this?
And I think students are the eyes of the industry.
They reflect what everyone's thinking.
No, absolutely.
I mean, it works the other way too, that there are people who are legal scholars or business ethicists who, if you're too unmoored from both practice as well as the
(23:19):
technical, I mean, I can say, this is what I think the law should say, but if companies can't actually implement it, if you can't actually build a statistical model that does
what the law says, then it's not very helpful.
So I do think this is a partnership that really is essential and doesn't happen enough.
Yeah, no, I mean, I'm excited about, I mean, obviously you've been teaching a course on this and related topics for a long time.
(23:45):
And now the Accountable AI Lab, and I know you're trying to grow the faculty around this and hire scholars around this.
And this is what WAIAI does where, you know, I joke with people, more cocktail party kind of stuff, which is they ask me, so what do you actually do as the vice dean?
And I tell them what I do, but I say, one of the things I do,
(24:06):
is I'm a venture capitalist.
I invest in what I think, it's my opinion, the school has entrusted me with this, I invest in what I think is the next big idea in AI and data science.
And I think Wharton having a leadership position in accountable AI will put us in a unique and differentially advantaged position.
(24:27):
And that's why I'm doing it.
It's my opinion for all the reasons I said, the brand, the connection with companies, student interest, research,
teaching, I think it will affect all of them.
And admissions, I think people will come to Wharton because of the work we're doing in Accountable AI.
Sounds good to me, not surprisingly, but no, it's great to hear.
(24:49):
And I think, I mean, for me, I also keep coming back to when I talk to companies, these are issues they're struggling with.
And it's also, there are lots of conversations about regulation, but I find a lot of companies that I talk to, I'm curious if you have a similar perspective, they are really
worried about being in an environment where the law is not clear, where the government says anything goes,
(25:14):
from a legal standpoint, from a regulatory standpoint, because then something's going togo wrong.
You're going to deploy some system and someone's going to say, look, this is causing terrible bias or look, a user of this AI companion service killed himself and we're going
to sue you and we're going to go to the press.
And I talked to a lot of companies that say, well, we want to be able to say that's terrible.
(25:36):
We didn't want to see that happen.
Here are all the things we did and we did everything that we were expected and required to do, and the best practices, to mitigate and prevent those harms.
I mean, cars crash and blow up and sometimes things go wrong.
And I hear from a lot of companies saying, well, actually we are more worried about being in a situation where we can't say the government has told us this is the standard.
(26:00):
We can say we're here, we're above it, or at least we're there.
Now we're on our own.
So I'm curious if that reflects what you're hearing from the organizations that you're talking to.
It is, but I think part of it is, I'm sure you know this, it's part of who you're talking to in the organization.
So probably not surprisingly, the organizations I'm talking to, I'm talking to, usually, maybe it's the CMO, as a marketing professor, but not necessarily.
(26:26):
It could be the chief financial or chief revenue officer.
It could be the chief information officer, the person that runs data science.
It's rarer
that I would speak to somebody that might be in legal or compliance, et cetera.
The reason I get involved with people in legal and compliance has to do with sharing data or access to data.
(26:47):
But in some sense, as you said, I like the way you framed it, which is let's suppose we have students or faculty work on some algorithm based on AI or data science to solve some
sort of business problem.
How much are we thinking about, okay, so we've provided some algorithm.
What are the downstream consequences of this?
(27:09):
Like suppose it leads to some, as you mentioned, horrific type of outcome, which can be measured in all kinds of ways.
Well, how culpable, how responsible is the firm for deploying the algorithm that led, whether it's to someone's death or to some loss of a job or to something else?
The answer is we probably
(27:31):
don't think about it enough when we're developing the algorithms.
I'm just speaking for WAIAI.
I'm not saying, well, maybe no one does.
But I think it's a fair question to ask.
And partly also maybe because the people we speak to aren't really telling us or they don't know necessarily what those regulations are.
That's possible as well.
(27:52):
Yeah, no, everyone is really struggling with this and the point that you make about, you know, responsibility and liability is a really crucial one and a really uncertain one
because it's in the nature of these technologies that there is a gap between, you know, especially talking about something like an LLM.
It's, you know, it's pre-trained and it's trained and then, you know, you get some
(28:17):
intermediate process at the inference stage and so forth, and then some other company is deploying it somewhere and then something bad happens.
It's a really hard question about what's the right thing to do, or even the formal legal thing to do, about redress for those harms.
So those are definitely things that we hope to look at.
And I think there are companies that think about it more actively and there definitely are companies in regulated industries where they have existing
(28:44):
structures, in finance and healthcare and so forth, that they have built for data governance or otherwise. But no one really knows the answer to those questions.
Yeah, I think the two areas that I think are the most mature, in the kind of lane that I sit in, are, I would call it, privacy and data governance.
Those are the two that are a little bit more mature.
(29:04):
Like I think people have some sense about, you know, whether it's, you know, de-identifying data or what can be shared, et cetera.
I think people have some sense about, you
know, data governance and security in some sense.
I'd say, I'm not saying those are fully mature, but I'd say those are more mature than, I would call it, you know, ethical and other kinds of moral types of investigations, at
(29:30):
least from the lane that I sit in.
And as a matter of fact, a lot of the algorithms I've been working on now, it's literally been for 15 or 20 years, most of the work that I do in my research career is about using the
most granular data
possible.
But I also did forecast, it's no Nobel Prize for this, that there were going to be limitations on data availability.
(29:54):
And so a lot of the algorithms I've been working on for the last 15 or 20 years are not assuming that you have data as granular as the individual level.
And what can you do as a firm, you know, given I'm in the target marketing, you know, business?
What can you still do when you lose access to data, where maybe it's captured at that level, but you don't have access to it?
(30:15):
Or it's not even captured at that level because the firm's not even allowed to store it at that level.
So there are many of us in marketing and other places that have been working on these, you know, what happens if you're not living in the land of granular data plenty and what do we
do in those cases?
Matter of fact, I will say,
my colleague Peter and I wrote a paper maybe ten years ago, which we presented at a legal studies conference, which was, it's not obvious in many situations that profits, from
(30:45):
a purely profit perspective, have to be compromised
by using more aggregated data.
And of course, this audience was eating that up because they're like, wait a second, the classic argument is, how do you expect us for-profit businesses to operate if we can't?
And we actually showed that that's not actually true in many cases.
So that's a problem that, purely as a marketer, as a statistician, I'm interested in.
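As a toy illustration of the kind of result he is describing, here is a small sketch, entirely synthetic and not the actual analysis from that paper: a regression fit on individual-level customer data versus a size-weighted fit on segment-level averages. When the marketing variable (here, a hypothetical discount offer) only varies across segments, the aggregated fit recovers the same coefficients, so the firm loses nothing by working with the coarser data.

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic individual-level data: five customer segments of unequal size,
# where spend responds to a segment-level discount offer.
sizes = np.array([100, 150, 200, 250, 300])
segment = np.repeat(np.arange(5), sizes)
offer = np.array([0.0, 5.0, 10.0, 15.0, 20.0])[segment]
spend = 30 + 1.8 * offer + rng.normal(scale=8.0, size=segment.size)

def wls(X, y, w=None):
    """(Weighted) least squares via the normal equations."""
    w = np.ones(len(y)) if w is None else w
    Xw = X * w[:, None]
    return np.linalg.solve(Xw.T @ X, Xw.T @ y)

# 1) Regression on the full individual-level data.
X_ind = np.column_stack([np.ones(segment.size), offer])
beta_ind = wls(X_ind, spend)

# 2) Regression on aggregated data: one row per segment (mean spend), weighted by segment size.
seg_ids = np.unique(segment)
mean_spend = np.array([spend[segment == s].mean() for s in seg_ids])
seg_offer = np.array([offer[segment == s][0] for s in seg_ids])
X_agg = np.column_stack([np.ones(len(seg_ids)), seg_offer])
beta_agg = wls(X_agg, mean_spend, w=sizes.astype(float))

print("individual-level coefficients:", beta_ind)
print("segment-level coefficients:   ", beta_agg)   # identical up to rounding error
```

The equivalence in this sketch is a textbook property of least squares when the regressor is constant within groups; the paper's actual setting is surely richer, but it captures the spirit of why aggregation need not cost the firm anything.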
(31:08):
No, absolutely.
This is a really great example you're giving of this conversation that we described earlier that really needs to continue happening between different disciplines because we
identify a problem like privacy.
It's true about bias and explainability, any of these things.
Typically, there's a simple answer.
If you're giving me all the data, here's what I'll do.
(31:28):
But if you don't give me all the data, or even if I have all the data but I'm not going to use it that way,
that doesn't mean that necessarily I'm inherently disadvantaged, but I need to
think differently about how I'm going to use it.
No doubt about it.
So, we're almost out of time.
What do you see going forward as some of the most crucial issues?
We can talk about the accountable AI piece or generally in terms of, you mentioned agentic systems and so forth.
(31:54):
What do you see coming that is going to be most important to think about in this realm of AI more broadly?
Yeah, I mean, it relates to a meeting that you and I were just at earlier today.
I think the thing I'm trying to do both as an educator, but also as a researcher, as a department chair, as a vice dean, is I'm trying to reduce barriers.
(32:15):
And I think the big challenge right now is, you know, we have a large fraction of the population that are actually sitting there that haven't used large language models,
haven't tried them.
don't know where to get started, don't know necessarily what their limitations are, don't realize that, you know, I was working on something today using large language models and I
(32:39):
had to have identified, in just two hours of using them, 15 to 20 errors.
Like the answers were just plain wrong and I know they're wrong.
There's no ambiguity.
These weren't subjective.
Like Eric thinks it's wrong but it could be right.
No, it's just.
But you know they're wrong, you know to ask whether they're wrong, because they might be.
I asked it and then it said, you're right.
I'm like, well, if you knew I was right, I mean, it's like I'm having a conversation with a person.
(33:01):
I mean, once I pointed out it was wrong, it knew it.
But then of course, the next time it gets it right.
See, that's the thing about training and stuff.
But I think one of the things I'm trying to do is, I would like, to me, it's a tool that people should explore.
I don't know how anybody today can't see it as, you know, in some sense, you
(33:24):
If we think about it, we've all been doing search for a long time.
Google's made a lot of money off a box, typing stuff into a box, learning about you, and selling advertising based on that.
Well, now we have something much more powerful than search, because it's not just retrieving information, it's generating information.
(33:47):
And by the way, we could argue...
If you think about Google, think Google's Gemini, right?
That's Google's product, right?
If you think about it, they're building a future where they're going to put their own Google search out of business.
I mean, they have to be realizing that search in a box is not necessarily, and just retrieval is not, the future.
(34:10):
And AI has made it so.
So I think that's what I'm thinking about going forward is
How do I use it more in my own research?
How do I get more of my colleagues to use it?
How do we teach people to ask the right questions?
I mean, I'm sure you've had people on that have talked about prompt engineering.
And also, how do we train these models to do better in the future, but in a way that secures the data and privacy?
(34:37):
And the way I describe it to people is, they're like, well, ChatGPT is free.
How are they going to make money?
I'm like, well.
Some versions are free, but not the enterprise version.
As a matter of fact, that's the same with Google.
I get Google for free, but the enterprise version's not for free, and it's not free to advertisers.
And that's the same thing that's going to happen here.
(34:57):
Let's imagine in your area of the world, let's imagine I'm a law firm and I want to build a large language model.
I don't want all my, matter of fact, I'm not even allowed to have all my content sucked up into the cloud and used for training.
But localized, so localized large language models,
are going to be a big, big part of the future, not just for privacy and regulatory reasons, but because they will perform better.
(35:21):
Localized models perform better.
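For listeners who want to see what "localized" means mechanically, here is a minimal sketch using the open-source Hugging Face transformers library; the model name and prompt are purely illustrative, and a firm would substitute an open-weights model suited to its domain, possibly fine-tuned on its own documents. The weights are fetched once (or pre-downloaded), after which all inference runs on the firm's own hardware, so prompts and documents are never sent to a third-party API.

```python
# Minimal local-inference sketch with the open-source Hugging Face `transformers` library.
# The model is illustrative only; a firm would swap in an open-weights model that fits its
# domain and hardware. Weights are downloaded once and cached locally (or shipped offline).
from transformers import pipeline

generator = pipeline(
    "text-generation",
    model="distilgpt2",        # small public model, used here only to show the mechanics
)

# The prompt, including any firm documents pasted into it, is processed on this machine;
# nothing is transmitted to an external service at inference time.
prompt = "Summarize the key obligations in a standard nondisclosure agreement:"
result = generator(prompt, max_new_tokens=60, do_sample=False)
print(result[0]["generated_text"])
```

A model this small will not give a useful legal answer; the point is only where the computation happens. In practice, the quality argument comes from fine-tuning or retrieval over the firm's own corpus on top of a locally hosted model.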
So that's what I'm thinking about.
And how do we do this at scale?
Because again, I'd love to not only impact Wharton's researchers and educators, but I'd love to do stuff that impacts hundreds of thousands of learners around the world.
But that means we have to figure out how to do things at scale.
All right, but it's a great challenge to have.
(35:42):
We could keep talking for a while.
We will be talking about many other things outside of this podcast, but Eric, it's really, really great to have you.
It's exciting for me to be part of an institution that thinks in such a creative and innovative way about this area and the opportunity to integrate the accountability and
legal and ethical piece with the other aspects of AI.
(36:04):
I really think there's a lot of exciting things coming.
So thank you so much for being part of this.
You're welcome.
And thank you for having me on your podcast.