
February 26, 2025 • 34 mins

Are you wrestling with ethical questions about AI while also feeling curious about its potential?

In this thought-provoking episode, Sara welcomes Dvorah Graeser, an "old internet" technologist who brings a unique perspective on AI democratization. From programming for the Human Genome Project to founding RocketSmart, Dvorah shares insights on how we can approach AI with both skepticism and agency.

Discover why those who shy away from AI might be surrendering power to tech giants, and learn practical considerations for responsibly engaging with generative AI tools in your work.

Episode Highlights:

  • Dvorah's background programming before the GUI and her journey from the Human Genome Project to AI development
  • The ethical considerations of generative AI and how to navigate them as business owners and creators
  • How to evaluate AI models based on their transparency, data policies, and public commitments
  • The democratization of technology and why bottom-up AI adoption benefits everyone
  • Why small businesses might leapfrog large corporations with open-source AI models like DeepSeek
  • How generative AI affects workplace satisfaction differently across roles and experience levels
  • Practical advice for protecting your intellectual property in an AI-driven world

Key Concepts Explored:

AI Ethics & Transparency

  • The challenge of retrofitting ethics onto models already trained on questionably sourced data
  • Evaluating providers on data policies, transparency, and public commitments
  • Protecting clients, employees, and downstream users of AI-powered products
  • Limitations and open questions

Generative AI for Work

  • The role of AI in automating tasks vs. augmenting human work
  • AI as an enabler vs. a driver of outcomes
  • Practical applications of AI in different industries

Small Business vs. Big Tech

  • Can AI level the playing field for solopreneurs and startups?
  • How large corporations control AI access and development
  • Opportunities for smaller businesses to leverage AI effectively

AI for Strategy & Execution

  • Integrating AI into decision-making without losing human creativity
  • Using AI for data analytics and predictive modeling
  • Limitations and considerations for AI in strategic planning

Episode Chapters:

[00:00:00] Introduction: Welcome to Thinkydoers and introduction to Dvorah Graeser

[00:03:00] Dvorah's background: From programming before GUI to AI development

[00:05:00] Ethics of generative AI: The challenge of retrofitting ethics

[00:08:00] Choosing trustworthy AI models: Evaluating data policies and transparency

[00:10:00] Democratizing technology: The historical context and importance

[00:14:00] Advice for Thinkydoer leaders: Focus on process integration

[00:17:00] IP concerns for creators and business owners: Strategies and policies

[00:20:00] AI and the future of work: Research on workplace satisfaction

[00:25:00] The potential of open-source models like DeepSeek for small businesses

[00:29:00] Individual action: How to participate in shaping ethical AI

Notable Quotes

"If a company is using AI in a way you don’t like, let them know—preferably on social media, so others can join the conversation." – Dvorah Graeser (00:31:00)
"Generative AI is more about curation than creation. It gives you 100 ideas, but you still need the expertise to pick the right one." – Dvorah Graeser (00:22:00)
"Most small businesses don’t need the latest AI model. They need AI that works with their data and processes." – Dvorah Graeser (00:26:00)
"AI isn’t just a tool for big corporations. Small businesses that use AI strategically can be more agile and...

Episode Transcript

Available transcripts are automatically generated. Complete accuracy is not guaranteed.
(00:01):
Welcome to the Thinkydoers podcast.
Thinkydoers are those of us drawn to deep work, where thinking is working.
But we don't stop there.
We're compelled to move the work from insight to idea, through the messy middle, to find courage and confidence to put our thoughts into action.
I'm your host, Sara Lobkovich.

(00:23):
I'm a strategy coach, a huge goal setting and attainment nerd, and a board-certified health and wellness coach, working at the overlap of work, life, and well-being.
I'm also a Thinkydoer.
I'm here to help others find more satisfaction, less frustration, less friction, and more flow in our work.

(00:46):
My mission is to help changemakers like you transform our workplaces and world.
So let's get started.
Hello, and welcome to this week's episode of Thinkydoers.
I am excited to introduce you to Dvorah Graeser, founder and CEO of RocketSmart.

(01:08):
Dvorah brings a unique perspective on AI, having started programming before the graphical user interface and the widespread adoption of the internet.
Dvorah is an old internet technologist like me.
Before we dive in though, we are getting closer to the release of my upcoming books, and I need your help to get the word out.

(01:32):
You can join the launch server at findrc.co/launchsquad.
I need advanced readers, social media amplifiers, and really just for folks to help me be as excited as possible to get this book in the world when I get nervous or scared.
Alright, so, today in part one of this two-part conversation, we'll

(01:53):
explore the ethics of the generative AI tools that are multiplying like bunnies, and we'll get Dvorah's take on democratizing technology and how to approach AI with both skepticism and curiosity and openness, like I do.
So if you've been wrestling with questions about responsible AI use or feeling anxious about AI's role in our work and world, this episode is for you.

(02:19):
We don't have all the answers.
But you're not alone in asking questions.
I would like to welcome Dvorah Graeser to the show today.
Dvorah, do you remember how we got connected?
I do, because of course, I'm a huge admirer of you and everything that

(02:40):
you do with OKRs and the Thinkydoers.
I've been a fan for a long time.
So it had to do with OKRs.
I figured we had to do OKRs, knew nothing about OKRs, went and looked it up, and you were the only person who gave a really human presentation on OKRs.
There are other wonderful humans in OKRs.
I think I'm just kind of doing a lot of online content right now,

(03:02):
so you're seeing a lot of me.
But I am increasingly inviting those other awesome people on, to do lives and things like that with me too.
So we'll spread the love around.
So, for our guests, I'd love for you to introduce yourself, tell us a little bit about who you are, what you do, and where you're based.
Happy to do that.
So I'm Dvorah Graeser.

(03:23):
I'm the founder and CEO of RocketSmart, rocketing your IP out of the laboratory and into a great licensing deal.
So that's to help universities with commercialization.
I am mostly based in the Netherlands, although sometimes I'm in Chicago, Illinois, freezing in the winter—yay.
I actually started programming when I was 16.
I learned how to program before the GUI, the graphical user interface.
I learned how to program before the internet really got started, when

(03:45):
everything was still on dial-ups.
And what I've seen since then is that technology is great when it is democratized, when it helps benefit all of us.
But there always seems to be a tendency for technology to get into the hands of the powerful and stay there.
One example, I got my PhD in pharmacology.
I was programming.
I programmed for the Human Genome Project.

(04:06):
Everything was great, but a lot of people were afraid because Craig Venter's company was getting patents on all the genes, and the researchers were like, "We won't be able to do anything with genes.
We're gonna be stopped."
Now, in the end, it wasn't like that, but that was a really big fear.
And I have to admire Craig Venter; he did a great job with the human genome.
He was a very big part of it.
But there was this tension, and this tension has continued throughout

(04:29):
my life as a U.S. patent agent, helping people protect their ideas.
But now, also through AI programming, I try to help people find others and connect with them for commercializing early-stage innovation.
That, overall, is what I do, and I want AI to be a force for democratization.
But, of course, like every other technology, it can start getting into

(04:53):
the hands of the powerful and making the rest of us feel like we don't have a way to come into it and make use of it.
And I think that's wrong, and that's the thing I want to change.
It's always so much fun to me to talk to other early internet people, and early programming people.
Early as in our generation of early programming people, because I swear that mid-'90s internet changed me forever in how I interact with the

(05:16):
world, and communicate, and operate with people, and behave online.
Let's just start with the elephant in the room of ethics.
And what's your point of view on ethics and how people can ethically participate in generative AI, if we can?
Ethics has always been a concern of mine with AI.
So, if we rewind a bit, I started training our company's AI models back in 2015.

(05:40):
So they were not generative AI.
They weren't even, like, really neural nets.
They were really simple kinds of AI.
We had to bring the data.
We had to clean the data.
What it meant was that the output was predictable and we controlled the input.
So if it was being used ethically, that was totally on us, right?
The only way we could be unethical is if we screwed up.
It wasn't because the AI was doing something.

(06:02):
Now, we fast forward to generative AI.
And, to be perfectly honest, we're not fully in control of what happens when we're using it.
And a large part of it is because, of course, these models have been trained using a ton of data.
Not our data.
Not data that has necessarily been ethically sourced, given the number of people who are suing, for example, OpenAI, or some of these

(06:22):
other companies that are being sued.
Some of them have come to agreements.
There are a lot of thoughts about how we can make this fair.
But the fact is that the generative AI models were trained first, and now we're trying to retrofit fairness and ethics onto an existing technology.
So, here I would look at it in two ways.

(06:42):
One is, how do we protect those around us?
So, our clients, our employees, those may be further downstream.
If we're making a product that will be used by our clients for other clients, what will happen to these other clients?
That, I think, is a very strong concern because that's where we have more control.

(07:03):
How we design it, how we allow our clients to use it, how we instruct them to use it.
Maybe they don't understand; that's on us.
Now, when it comes to how generative AI was originally trained, that is a problem.
Because it's already done.
You can't unring that bell.
So, here, I think we need to keep an eye on what is going on.
I do believe that in the future, there'll be more of a split between AI

(07:27):
models where they pay more to creators, where they're transparent about where they're getting their data from.
They're transparent about how they're using their data.
They try to make the best of an ethically dubious situation, and they try to fix it.
But right now, it's kind of a fog, and so I don't have a clear-cut answer for

(07:47):
what to do with the existing generative AI models, because it's a problem.
Yeah, it makes me wonder, for someone concerned with ethics and what's right, and we're both heavy users of these models.
How do you choose what models you work with?
Or what would you tell people who are concerned to look for when they're

(08:11):
reading the fine print on a model?
Well, so, first of all, it depends if it's a general-purpose model or if it's a special-purpose model.
Now, if it's a special-purpose product, let's say it's intended to help you write marketing content, the question then is, are they using an existing model?
Are they using an OpenAI model, or did they make their own?
If they made their own, then you have the opportunity to query how
they got the data, what they did.

(08:32):
If they're a company that just has a lot of access to a lot of data, then that's a completely different situation.
But let's talk about the more general generative AI model.
So we have ChatGPT from OpenAI, we have Claude from Anthropic, Google Gemini, and now we have DeepSeek, which is available as an open-source model being hosted in the US.

(08:53):
So for example, through Hugging Face and other hosts.
Or you can get it as an app.
And I believe that the servers are in China, but to be honest, I don't even know.
So there, I would want to know, where are the servers?
What data was it trained on?
And also, will it be training on what I'm inputting into it?
Now, Anthropic makes a very big point of stating that they do not train on data

(09:16):
that their users put into their model.
They have a different way of doing things, and they are trying for AI safety and for AI ethics.
So there, if I had to pick any model or any generative AI, I would probably stick with Claude from Anthropic for that reason.
Also because they do try to be transparent about what it is they're
doing and how they're doing it.

(09:38):
ChatGPT will train on your data.
They do say this, more or less, up front.
I mean, it's there, you can look at it.
Some of the other ones, like if you're using the DeepSeek app that they are hosting, I don't have a clue what they're up to.
It could be anything.
So that's why it's important to look at not only the fine print of the legal agreement but also, what are they stating in their public-facing

(10:01):
voice and their brand voice?
What are they telling you about how they handle data and what their beliefs are?
Because for me, that does go a long way.
You talk about democratizing technology and the role that AI plays in democratizing technology.
What's your point of view, and where does that come from?
Well, so my point of view, it comes a little bit from the kind of early days

(10:24):
of software when we widely believed that everyone could have access to it, that everyone should have access to it.
We were against the model of, like, the earlier IBM, which is one big computer in the room for the whole company, and we're going to have gatekeepers who could control how we can use it.
We said, no, individuals should have the right to use it and to make it do the things that they need it for.

(10:45):
And I've always believed that to be true with technology.
Technology is not something that should have gatekeepers, to the extent that we can make it open and available.
And a lot of times we don't, more for reasons of money or power than, well, because you don't want someone who's not trained flying a plane, right?
It's a different kind of a thing.
A computer is not like that.
So I always believed that. I was very happy when everyone got a computer,

(11:08):
and then there was the internet.
And I said, okay, great, we're all going to be able to talk with each other and share things.
And people who don't have access to all the research will still be able to get access.
And then we just ended up with a ton of gatekeepers, is what it came down to.
Every possible thing, from not being able to access scientific articles.
There were, like, a few attempts by people to gain access to that.

(11:29):
And there are copyright fights, and it kind of settled down to an uneasy truce.
In the case of things like AI, I hear a lot of corporations, big corporations, who are saying, yes, we must go full steam ahead.
But really, where I see the benefit is for solopreneurs, individuals, and small businesses.
Just because the efficiency is so much lower there.

(11:50):
And I also see it as being something which could potentially help people all over the world.
It could help small businesses in Africa use their phone to access something.
So maybe their phone isn't that powerful.
Maybe it's only a feature phone, it's not even a smartphone.
But if they could message and chat with an AI, they could get that information.
So the power doesn't have to be on an expensive phone they can't afford, in

(12:11):
a computer they certainly can't afford.
It can be handled upstream and they can get the downstream benefits.
The issue is I don't see that happening right now.
I do see folks are trying to make it more widespread by offering relatively less expensive subscriptions, but there's no clear path forward.
Even with the big companies I've spoken to, a lot of times they're

(12:33):
cramming the AI from the top down.
What I see is, we need to actually have a groundswell from the bottom up.
And that will help in a few respects.
First of all, it'll help us individually understand what we feel about AI ethics.
But this means we have to educate ourselves about AI, and we have to want to take the power back into our own hands.
I believe that those of us who shy away from AI are actually letting the powers that be kind of run roughshod over the rest of the world.

(12:56):
We all have to get into the fightand decide what's important.
So that's one aspect.
But another aspect, and this maysound surprising, if we as individuals
learn more, and we get into thefight, and we want to democratize
it, we take it into our own hands.
Yes, it will help us as individuals andas a society, but it'll also help smaller
businesses and even larger corporations.

(13:17):
One of the big problems right now is, even in a big company, they'll have an AI
specialist who's way the heck over there.
Oh, I just vanished.
Boy, I vanished into that wall there.
There we go.
But that is indicative of what happens.
They're behind a wall, and then all the people who need the AI, they're
on the other side of the wall.
But because the people who need it aren't learning about it, wanting to empower

(13:41):
themselves, wanting to say this is what we feel should be done, the corporation ends up with folks on two sides of a wall, never the twain shall meet.
And then it doesn't work.
So, you see, bottom-up democratization isn't just good for individuals and society, it's even good for big companies.
You mentioned the power of these models and tools for solos and small businesses.

(14:04):
And a lot of my people, a lot of my people are employees and trying to build happier careers in those environments you just talked about.
But a lot of my people are also, a lot of our listeners are solopreneurs or entrepreneurs, they're leading companies.
So, what would you tell Thinkydoer leaders who are so busy running

(14:28):
their businesses or just trying to keep up with what they have to?
What would your recommendation be if there was one place for them to become more aware?
It isn't even about usage of AI, but what would you tell them to be aware of if you
only got a little bit of time with them?

(14:49):
I would actually ask them to look at their process.
And the reason why is that generative AI works best when you have a process and when you integrate the generative AI into your process in a way that feels comfortable to you.
Now, if we think about running a small business, I run a small business, you run
a small business, lots of people do it.

(15:09):
Even if you're a solopreneur, you probably still get help with taxes and accounting and other things that I'd rather not do.
So, you know, I try to get help with those.
So even there, you still are working with a team.
There is still someone else who is working with you.
So then there's a process.
And one of the things I have found is that where things break down is
in communication between humans.

(15:31):
That is where processes run aground, that's where time is wasted.
That is where John didn't talk to Jane, or Jane didn't talk to George, or you end up with something that comes back like it's a broken telephone.
And at the end there's, like, five people down the line, and the last person down the line, let's say, like, Bill—Bill's like, "What?
What?
This is not what I was expecting."
So, there, it's a matter of process.

(15:52):
When we're doing things manually, if we're in the same office, we can just go and knock on Bill's door and say, "Hey Bill, I'm sorry, that was a little confusing. Can I talk with you about it?"
When we're working remotely, when we have a widely distributed team, maybe when our teams are super part-time, or when we're trying to do more with less and we're all under a lot of stress—that's
when process becomes super important.

(16:14):
And process is very important for AI, also for larger companies.
I spend a lot of time talking to big companies about their process as well.
And somehow, even with larger companies, there's this idea of, 'Well, there's these humans, and there are these softwares, and we're just gonna smoosh it all together with AI.' Doesn't work.
I giggle because that's what I see with OKR software implementations as well.

(16:36):
It's the same pattern.
So, what would you tell people, you know, creators, artists, writers, business owners, who are generating IP?
I know this isn't a conversation about IP law, but you're a fellow business operator who generates IP.
So, what would you tell folks they should be aware of as we all continue to generate

(17:00):
and publish IP in this new world order?
Do you have any thoughts or recommendations for folks?
First of all, any content that is put out there is likely to end up in some kind of generative AI engine if we're publishing on social media.
So, you know, I like LinkedIn; other people publish on Facebook, Meta, or Twitter/X, if you're publishing on a social media channel, I would

(17:22):
assume that that material is going to be sucked up into some giant generative AI training session.
Even if you're publishing on your own blog or on your own website, there is a way to ask via robots.txt, etc. And you can play with that, but that can also affect how well the search engines can find you, in my experience.
Now, there might be people who have, like, different ways of doing this, but

(17:44):
to be honest, I've talked to a bunch of people and they're just like, "Assume that if it's out on the internet, if you want it to be found, then you have to assume someone's going to be using it for training."
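
For readers who want to see what that robots.txt opt-out can look like, here is a minimal Python sketch under a few assumptions: the crawler tokens below (GPTBot, CCBot, Google-Extended) are commonly published AI-training user agents, honoring robots.txt is voluntary on the crawler's side, and the helper names are illustrative rather than taken from any tool discussed in the episode.

    # Illustrative robots.txt that asks AI-training crawlers to stay out while
    # leaving ordinary crawling unrestricted. Honoring these rules is voluntary.
    from urllib.robotparser import RobotFileParser

    ROBOTS_TXT = """\
    User-agent: GPTBot
    Disallow: /

    User-agent: CCBot
    Disallow: /

    User-agent: Google-Extended
    Disallow: /

    User-agent: *
    Disallow:
    """

    def is_allowed(user_agent: str, url: str = "https://example.com/post") -> bool:
        """Check whether `user_agent` may fetch `url` under ROBOTS_TXT."""
        parser = RobotFileParser()
        parser.parse(ROBOTS_TXT.splitlines())
        parser.modified()  # mark the rules as loaded so can_fetch evaluates them
        return parser.can_fetch(user_agent, url)

    if __name__ == "__main__":
        for agent in ("GPTBot", "CCBot", "Google-Extended", "Googlebot"):
            print(f"{agent:16} allowed: {is_allowed(agent)}")
        # Expected: the three AI-training agents are disallowed; Googlebot stays allowed.

A rule like this only expresses a preference, which is why it gets framed here as a tradeoff to weigh against discoverability rather than a guarantee.
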
So then the third part comes in.
Well, what about the super sensitive stuff that I would never publish on the internet or in a social media channel?
What happens with that?
That's where you've got to be careful.
You want to read the policy, the data and privacy policy of every

(18:05):
single AI tool you're using.
I don't care if it's Gen AI or not Gen AI.
You need to read those policies carefully.
If you're not sure, get on the phone with them.
So earlier in the days of generative AI, like a couple of years ago, when folks were starting to use it, I had one software, which will go unnamed, where it wasn't clear what they were doing, and I just got them on the phone

(18:26):
and I said, "Look, this isn't clear.
I also train AI models.
Here's why it's not clear." "Oh, you know, you're right.
We meant to make that more clear." They changed it.
So get them on the phone, get it in writing though.
Once they get on the phone, do get it in writing.
And then you have to make the best balanced choice.
So this is especially true if you're worried about your own data
coming in, but also about ethics.

(18:48):
There's always going to be a tradeoff and a balance here.
And unfortunately, I don't have a really nice clear-cut, tied-in-a-bow answer.
It is more thinking through it with yourself.
I do recommend that small businesses and large businesses have an AI policy.
What are employees allowed to do or not allowed to do?
What is sensitive data?

(19:08):
So I did talk with one firm, and they said, "Well, yes, we do have a policy that you're not allowed to use these generative AI softwares with sensitive data." And I said, "Great.
What is sensitive data?" "Oh, we know when we see it." And I said, "No, no, because everyone will have their own interpretation." So you need to have a policy, and it needs to be something that you and your
employees are comfortable with.

(19:29):
It needs to be clearly articulated, and then you revisit it periodically.
But there isn't going to be, unfortunately, a super great solution to a lot of these issues at this time.
Yeah, it's funny.
That "What do we trust which model with?" question is one of the ongoing side jobs I think all of us business owners have right now.

(19:54):
I do think it's really important, though, your point that if it's on the internet, it's likely to be vacuumed up, is a really good one.
I think about people affirmatively training models on my IP.
I didn't think so much about the vacuuming up that is still happening.
I worked with one of the large multinational global technology

(20:16):
companies. It wasn't this era of AI.
It was more like when we were forecasting this era of AI.
One of the talking points that was always made was AI isn't going to eliminate human jobs.
AI is going to change human jobs and improve worker experiences.

(20:40):
And I've heard that for years.
I know that's the talk track.
I struggle to see it, even though I do see the ways that the generative AI tools we have now can improve people's workplace experiences and even workplace satisfaction by using the tools.
As someone who's seen it from the very beginning and who comes at it from

(21:01):
this bottoms-up kind of viewpoint, what's your perspective on the role of AI when it comes to human labor?
Well, it's complicated.
Unfortunately, I don't think there's going to be like a clear yes or no answer to "Will it improve workplace experience? Will it make it worse? Will it replace jobs or add jobs or do something else to jobs?" There is some research that I've

(21:24):
seen, which has been informative for me.
So in one case material scientists were studied.
And this was published in, I want to say Bloomberg, but I wrote about it also on my LinkedIn.
People can check it out or hit me up if you want me to send you the link.
He did a study on material scientists.
And what happened was, it was a large material science company.
They rolled out generative AI tools to their scientists across

(21:48):
a two- or three-year period.
So not everyone got it at once.
It was an experiment.
And what they found was that the most experienced scientists got the most out of it, because it required a lot of knowledge to kind of curate, right?
So generative AI is more about curation instead of about building.
You get 10, 20, a hundred things.
You're like, "Whoa, what am I going to do with all these?" So yes,

(22:08):
they could curate it, but they also expressed less job satisfaction.
Even though they were more productive, even though they could see more things being chosen and more products being made.
And they still felt less job satisfaction because they liked solving the puzzle themselves.
They liked going through and doing the work.
They liked sitting with the different options and playing around with them.

(22:29):
And generative AI did take away some of that.
On the other hand, it has improved job satisfaction in call centers for very junior call center people because they have access to immediate coaching.
They're not getting on the line with some person who's screaming at them, which isn't their fault, but they're still getting screamed at.
And then AI, in that case, generative AI can help them get

(22:50):
out of that situation, defuse it.
Either solve the problem or at least make the person calmer and able to have the conversation.
Make them feel less like they're under attack.
They feel that they have tools.
So that's where it's really, really tricky.
Two completely different situations.
I agree.
But in one case, generative AI was beneficial to the most junior people.
In the other case, it was most beneficial to the most senior people.

(23:11):
One group liked it.
One group hated it.
And I think also, in terms of the kinds of jobs we'll end up doing, it will end up taking away a lot of kind of busy work.
Things that, quite frankly, could have been automated, but maybe people were a little bit nervous because there are some edge cases.
So they wanted a human to take a look at it.
So that will go, but what it'll mean is it'll end up changing a lot of our jobs.

(23:31):
On the third hand.
All right.
So I got all these hands going here, but on the third hand, what it might also do for some of us who have specialized experience and skills, we may find ourselves potentially not working for a single company, but instead specializing deeply in one particular area using generative AI and then working

(23:51):
for multiple companies doing that.
Because with our experience and being deeply specialized, you match that with generative AI, you're going to have a lot of power to get a lot of really great things done.
But within the corporation, where you have multiple pieces and the idea behind the corporation is the pieces are working together, but quite frankly, even if the pieces are bored or doing repetitive work, as long as

(24:14):
the system works, that's one thing.
Generative AI is going to change that.
So we're not going to have the same system.
Now, where is that going to lead?
I gave one example of what it could be, but I don't know.
Okay.
It brings me back to your original point that this is being done to us.
And it is also possible, instead of sitting on the sidelines, for folks

(24:35):
who are concerned and thinking about these things: A, to get involved.
And I keep hoping to, B, see alternatives to the mainstream or mass-commercialized kind of approaches.
And that's my early internet showing, that I think there can be alternatives.
My favorite social media platform is Mastodon.

(24:57):
I run my own server.
It's early-internet-like.
It's very light on commercialization.
And so those non-commercial or less commercial options are out there, if we build them.
Uh-huh.
This is tough because I'm springing it on you and it's brand new.
The news is all about DeepSeek.
Have you looked at DeepSeek?

(25:18):
Do you have any point of view on it yet, or is it just too soon to say?
I've looked at it.
I tried it through another software, not through DeepSeek itself.
I tried it through another software that gave me access to the model.
I liked the reasoning that it went through, it was nice to see that reasoning because that is helpful to avoid hallucinations.

(25:38):
You can say, "Aha! That's where it went wrong," and you can come back.
So I think that part is quite good.
I think what DeepSeek shows is that it is possible to have quite good models, which can be released as open source and run on a variety of platforms, which I believe could actually lead to better specialization.
That's my feeling.
My feeling is that with an open-source model like DeepSeek, a small business

(26:01):
owner could take their data, could either fine-tune or train it, or could use something called RAG, or retrieval-augmented generation, which is basically taking all your data and shoving it into a format that the AI can easily access, right?
A small business owner could take one of these open-source models, and
these are hosted in various places.
You could even host it.

(26:21):
You could even make a copy of the model and run it yourself if you wanted to, and so you can completely control what's going on with your data.
That, I think, offers the really big chance for democratization, because most small businesses, small to medium-sized businesses, do not need the latest and greatest in AI.
What they need is AI that works with their data, that is set up to work

(26:42):
with their data and their special sauce, and all their specialties
to give them that big boost.
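
As a concrete illustration of the retrieval-augmented generation (RAG) setup described here, below is a minimal, self-contained Python sketch under simplifying assumptions: the word-overlap scoring is a stand-in for a real embedding model, the documents are invented examples, and the assembled prompt would in practice be sent to whatever open-source model you host; function names like retrieve and build_prompt are illustrative, not from any particular library.

    # Minimal sketch of retrieval-augmented generation (RAG): put your own
    # documents in a searchable form, retrieve the most relevant ones for a
    # question, and prepend them to the prompt you send to a (locally hosted
    # or API-based) model. The scoring here is a toy word-overlap measure
    # standing in for a real embedding model.
    import math
    from collections import Counter

    DOCUMENTS = [
        "Our refund policy allows returns within 30 days of purchase.",
        "Support hours are Monday through Friday, 9am to 5pm Central.",
        "Enterprise plans include a dedicated onboarding specialist.",
    ]

    def vectorize(text: str) -> Counter:
        """Toy 'embedding': a bag-of-words term-frequency vector."""
        return Counter(text.lower().split())

    def cosine(a: Counter, b: Counter) -> float:
        """Cosine similarity between two sparse term-frequency vectors."""
        dot = sum(a[t] * b[t] for t in a)
        norm = math.sqrt(sum(v * v for v in a.values())) * math.sqrt(sum(v * v for v in b.values()))
        return dot / norm if norm else 0.0

    def retrieve(question: str, k: int = 2) -> list[str]:
        """Return the k documents most similar to the question."""
        q = vectorize(question)
        ranked = sorted(DOCUMENTS, key=lambda d: cosine(q, vectorize(d)), reverse=True)
        return ranked[:k]

    def build_prompt(question: str) -> str:
        """Assemble a prompt that grounds the model in the retrieved context."""
        context = "\n".join(f"- {doc}" for doc in retrieve(question))
        return (
            "Answer using only the context below.\n"
            f"Context:\n{context}\n\n"
            f"Question: {question}\n"
        )

    if __name__ == "__main__":
        # In a real setup this prompt would be sent to your self-hosted
        # open-source model; here we just print it.
        print(build_prompt("What is the refund policy?"))

The point of the pattern is the one made above: the model does not need to be the latest and greatest, because the business's own data does the heavy lifting in the prompt.
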
Now, small businesses in the U.S. have not been growing in terms of their ratio of the GDP.
And it's not because they haven't been growing.
It's because big companies have been growing faster.
Small businesses are estimated to maybe be 40 to 50 percent as

(27:04):
efficient as large companies.
In some cases, I've seen lower numbers.
Generative AI with a model like DeepSeek, don't run it on their platform, bring it into another platform.
Lots of ways to do it.
This could actually be a great business.
You could have like your own thing where it could be set up for you.
You run it with your data and your processes.
And then that is the kind of thing that could allow small businesses to

(27:26):
actually leap ahead of large businesses because small businesses are flexible.
Large businesses have this giant system, and they have to be careful.
The system has to keep going.
You break part of the system.
The whole thing falls down.
Small businesses can get everyone together and actually make this change.
And then, in my opinion, they could actually leap ahead of the big businesses, become more efficient, but also make more personalization

(27:49):
for their customers because they have access to places like Mastodon.
They have access to relationships that they make because they're more people-oriented.
In my experience, small businesses are more people-oriented.
So that plus generative AI could really enable them to grow at a really great rate.
And outdo the big businesses in a lot of areas.

(28:11):
So you can see, I actually see this as a mark of something great, something that could really help smaller companies get a leap ahead.
It's really cool.
I hadn't thought of it that way.
And I have had an increasing number of my intake calls start with, "I Googled. I saw you on Google. I asked ChatGPT for an OKR expert, and it recommended you." I'm

(28:35):
like, I never would have thought that that would be an inbound method, but it is.
If people are using it that way.
Awesome.
I'm also hopeful.
I was excited to see DeepSeek happen with the hope that it is slightly more environmentally responsible.
That if we can have models that run more efficiently, then we can

(28:59):
do a little less damage from an environmental perspective in terms of resources needed to run these models.
Because they just vacuum everything up, the resources to operate as well as content.
So, before we make the pivot to talking about practical application, is there anything I should have asked you or that you want me to ask you that I haven't?

(29:21):
We touched on this briefly, but I would like to get back to the question of what we can do as individuals.
So in my opinion, as individuals, we should educate ourselves on how these models are being trained.
There is lots of information available out there about what's going on.
There's lots of news stories.
If you find like a great source that you trust, you can continue to review

(29:44):
that source, but there are different ways to get this information, and I suggest that we each find a way to get the information and do it.
Now, I do it because I'm a geek, and I like these things.
But also, I'm doing it because people come to me with questions, and I also come to others with questions.
And so we want to be fully informed.
The second point is not to shy away from it.
So some folks' response is like, "Well, we should just shut it all down.

(30:06):
We should shy away from it, and we should stop it." I'm not a lawyer, but I honestly don't see the Supreme Court shutting down this business.
So I'm a U.S. patent agent.
I have seen patent decisions which did not make a whole lot of sense in terms of the law but were done to preserve an industry.
So people do pay attention to the industry.
It's not just what the law is or what logic is.

(30:27):
So I just honestly don't think that this whole industry is going to get shut down.
So the question is then, okay, if this is the case, if we assume that it's not going to get shut down, what are we going to do about it?
And that is where we can join together in groups, understand how it works,
you can join various nonprofits.
So I'm a member of ForHumanity.
We do AI ethics and guidance.

(30:47):
We work with the European Union, and the Austrian government, and the UK government, but there's lots of them out there.
Find one, join it, join with others to make your voice heard, and make certain that others know how you feel.
If a company is using AI in a way you don't like, let them know.
On social media preferably, so that others can jump in and say, "You

(31:07):
know, I don't like that either." And so you have the force of numbers.
It is very important, instead of trying to sweep the AI under the rug or hoping it goes away (neither of which is likely to happen, in my opinion), for us to take a stand, to get together, to figure out how we as individuals want AI to work, to talk to brands, talk to our companies, but also think about how we can benefit our

(31:30):
employees and our clients by using AI.
Because we can do that.
And I think we also have an obligation to at least consider that.
As solopreneurs or small company owners, or even as employees in small to medium-size or even large corporate businesses.
We do have that responsibility, in my opinion.
So this means we each need to take action.

(31:51):
So, that is a perfect segue into what will be our next episode.
Dvorah is going to come back for our next episode and we're going to talk about actually using generative AI and what we do with it, and the benefits that Dvorah has seen in her business and work.
That wraps up part one of my conversation with Dvorah Graeser.

(32:15):
Join us in our next episode for part two, where we'll dive into practical applications of AI tools and how to shift from anxiety to agency in using them.
As always, you can find episode links and resources at findrc.co/pod.
If you enjoyed this episode, please share it with other Thinkydoers in your world.

(32:36):
Your shares really help.
All right, friends, that's it for today.
Stay in the loop with everything going on around here by visiting findrc.co/newsletter and joining my mailing list.
Got questions?
My email addresses are too hard to spell, so visit findrc.co/contact

(33:00):
and shoot me a note that way.
You'll also find me at @saralobkovich on most of your favorite social media platforms.
For today's show notes, visit findrc.co/thinkydoers. If there's someone you'd like featured on this podcast, drop me a note.
And if you know other Thinkydoers who'd benefit from this episode, please share.

(33:22):
Your referrals, your word of mouth, and your reviews are much appreciated.
I'm looking forward to the questions this episode sparks for you, and I
look forward to seeing you next time.