Episode Transcript
Available transcripts are automatically generated. Complete accuracy is not guaranteed.
Speaker 1 (00:03):
I'm Titi and I'm Zakiya, and this is Dope Labs.
Welcome to Dope Labs, a weekly podcast that mixes hardcore
science with pop culture and a healthy dose of friendship.
We're back with a follow up on AI. So we
(00:24):
previously talked about the environmental impact of AI, specifically large
language models, or LLMs, like ChatGPT, and the need
to have a way to measure the resources required to
execute tasks. Yeah, but this time we're taking a look
at AI from a different angle and we're diving into
more of what's possible, the tools that are available, and
(00:46):
what's needed to leverage them if you choose to use them.
So let's jump right into the recitation. What do we know?
There's been a lot in the news on AI and
social media, and there was an environmental impact that we
talked about in Lab ninety four. Yeah, there have been discussions
of AI and education, particularly students' use of it. I've
seen everybody writing about that and calling them cheaters. Yeah.
Speaker 2 (01:10):
Yeah, yeah.
Speaker 1 (01:12):
Trump issued an executive order about early exposure to AI and AI integration,
plus the opportunities for lifelong learners, by creating a task force. Yeah,
let's move on to what we want to know. Yeah,
I feel like there's so much on the market. For me,
I'm like, how do we keep up? There's ChatGPT, you know,
there's like this new AI tool.
Speaker 3 (01:33):
Now.
Speaker 1 (01:33):
I'm getting ads on Instagram where people are like, I
created this AI tool and it does xyz you can
take a jpeg and blah blah blah. And I was like,
what am I clicking on? Did they think this is
what I want? It's AI overload at this point. It's like, oh,
you can use this AI to brush your teeth. I'm like, whoa,
okay, I am overwhelmed. And so what I want
(01:57):
to know is, what's the key if you're opting in
and using AI? Like, how do you sift
Speaker 2 (02:03):
Through all of those AI tools?
Speaker 1 (02:06):
Yes. And on, I guess, the other side of
that coin, Titi, is what should you consider if you're
not using AI? Because just because you're not using it
doesn't mean you shouldn't know things about it, right? That's right.
So I think we're ready to jump into the dissection.
Perfect for this lab, we reached out to a creative
entrepreneur who is an expert at AI and teaches others
(02:27):
about it. Lauren is that girl when it comes to
branding. She's a designer and educator. Yeah, she calls herself your AI auntie.
Speaker 4 (02:34):
I'm Lauren DeVane and I branded myself as your
AI auntie on Instagram and on the Internet. I used
to run social media creative at Ulta Beauty before
I started my own things. So I kind of come
from a creative background. My degree is in graphic design.
So I'm the creative AI girly that you find helping
(02:56):
creatives and entrepreneurs figure out how to be using AI
in their creative workflows but also just like in their
everyday life and figuring out ways that it can really
be beneficial to us in all of the things.
Speaker 3 (03:09):
So yeah, that's what I do.
Speaker 2 (03:10):
Oh my goodness.
Speaker 1 (03:11):
So what made you start weaving AI into your workflow? Like,
what was the thing that popped up where you said,
uh huh, I've got to tap in.
Speaker 3 (03:21):
So for me, it was discovering Midjourney.
Speaker 4 (03:25):
So ChatGPT existed before Midjourney, but it was early days, right,
it was still like twenty twenty two. It couldn't do
that much yet. And so I tried ChatGPT, I
tried DALL-E, which is OpenAI's image
Speaker 3 (03:38):
Generation tool, and I was like, this is so bad.
Speaker 4 (03:41):
Fast forward a couple months later, we're at like December
twenty twenty two. I keep seeing these images that are just
so surreal, and I'm like, these aren't photos. Like,
where are these coming from?
Speaker 3 (03:51):
How are people making these?
Speaker 4 (03:53):
And so I found a YouTube video about Midjourney,
but it was on Discord at the time, and I
had no knowledge of Discord, and so I needed to
find out how I could be using this tool.
So I went into like a forty-eight-hour deep dive.
I just dove so deep in. I was just
making all these images, learning what I could do,
and I came out of it with such
(04:15):
a different perspective based on what I had done for
the last ten years working at Walgreens and working at Ulta,
being a designer and a creative and an art director
and a stylist and photographer, and I just was able
to see, oh my god, the way that this could
impact and change the way that creatives are executing photo
(04:35):
shoots or any sort of creative execution. How
this could change it in terms of speed, in
terms of intent, in terms of cost, in terms of everything. And it all just
kind of crescendoed at the right time,
and I just started teaching designers how to use AI,
and that just turned into what it is now.
Speaker 2 (04:54):
Amazing, amazing.
Speaker 1 (04:56):
The reason I thought it was so great that we
had the chance to talk to you is because I
just saw MIT drop a study that was saying the
more we rely on AI tools, particularly like ChatGPT,
the less our brains light up, like our neurons are
just saying I'll pass. And they're calling it cognitive debt.
So basically they're saying, your creative muscles can get lazy
(05:19):
when you use these tools to do a lot of
the thinking. But there's also a twist because there was
another study and they were saying, if you pause and
reflect and get kind of meta, which is not just thinking,
but thinking about how you're thinking, that AI tools can
actually boost your creativity and originality. And I want to
know your take, because you've gone from traditional design and
(05:41):
branding support to AI-enhanced work. Do you think the
AI tools are making us sharper or too comfortable?
Speaker 4 (05:49):
I think the harder thing to grasp is that pre-AI,
when we look at different tools, they generally
Speaker 3 (05:57):
Are doing like one thing for you.
Speaker 4 (06:00):
Whereas with AI, it's like I could use it to help
me learn how to code, while you could use it
to learn how to come up with an entire strategy
for your business, while someone else could be using it
to learn how to understand what their dietary needs are,
or whatever it is.
Speaker 3 (06:13):
Right, So there's so many.
Speaker 4 (06:15):
Use cases for AI that I think when we are like, oh, well,
it's gonna make you stupid, or it's like but in
what capacity?
Speaker 3 (06:22):
Right?
Speaker 4 (06:22):
And then I also think that like the other bigger
piece of that is most people don't know how to
use it properly.
Speaker 3 (06:29):
And that tends to be the issue. Because if you're
using it, and this is one of the biggest things
that I teach my students and all my followers, it's
process over prompt.
Speaker 4 (06:38):
It's conversation over command. So it's not just like telling
it to do this one.
Speaker 3 (06:42):
Thing and then like that's it.
Speaker 4 (06:44):
It's like, okay, you ask it to do this thing,
and then it gives you an output, and then you say, oh, well,
based on that, can I have three ideas that would
you know, blow up from here? And then we pick
one of those and we say, okay, I have this, this.
Speaker 3 (06:56):
And this idea. So it's more about the way that
you use it.
Speaker 4 (06:59):
I think it probably maybe isn't going to make you
any smarter if you just take the first output and
run with it and say that's it.
Speaker 3 (07:05):
But if you.
Speaker 4 (07:06):
Understand how it works and how to actually work with
it as a collaborator and a partner, and you have
the domain expertise yourself, and that's another thing that is a.
Speaker 3 (07:17):
Huge piece of it.
Speaker 4 (07:18):
It's like, if you're already good at what you do
and you understand all that this is really just like
asking another person to come into the room that's also
at the same level as you and have a conversation,
whether it's a person or a group of ten different
people that you say I want you to come in.
From this perspective, it's like when you know how to
prompt it and you know the ways that you can
use it, I absolutely do not believe that it's making
(07:39):
us stupider, dumber, or slower. I mean, since I've been
using it, I feel like my creativity has only exploded
in the way that I'm able to think about ideas
and the way that I'm able to quickly iterate on
ideas and then be like this is the best option, right,
Like I just think it's all about the way you
use it and depending on how they ran these tests, right,
(08:00):
what does that look like? Is it people
that know how to use ChatGPT? Is it just
a bunch of students that have never learned it, right?
Speaker 3 (08:06):
So I think there's always going to be this side
or that side of the coin when you look at it.
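A minimal sketch of the "conversation over command" workflow described above, using the OpenAI Python SDK. The model name, prompts, and follow-up question are illustrative assumptions, not anything specified in the episode:

```python
# Hypothetical sketch of "conversation over command": keep iterating on the
# model's output instead of taking the first answer and running with it.
# Assumes the official `openai` package and an OPENAI_API_KEY in the environment.
from openai import OpenAI

client = OpenAI()
messages = [
    {"role": "system", "content": "You are a creative partner for a brand designer."},
    {"role": "user", "content": "Suggest a concept for a summer skincare campaign."},
]

# First pass: get an initial output.
first = client.chat.completions.create(model="gpt-4o", messages=messages)
print(first.choices[0].message.content)

# Conversation, not command: feed the output back in and build on it.
messages.append({"role": "assistant", "content": first.choices[0].message.content})
messages.append({
    "role": "user",
    "content": "Based on that, give me three ideas that build on it, "
               "then recommend one and explain why.",
})

second = client.chat.completions.create(model="gpt-4o", messages=messages)
print(second.choices[0].message.content)
```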
Speaker 1 (08:11):
The other thing that I always tell folks is that
the tool is only as good as you are, Like
the analogy I always use because people are like, oh, no,
using AI is cheating.
Speaker 2 (08:22):
All these undergrads are cheating. They're cheaters.
Speaker 1 (08:24):
They're cheaters, they don't know anything, they're dumb, all these
things like that. And I'm like, listen, back when the
abacus was made, mathematicians thought using an abacus was cheating.
When the calculator became a thing, they thought that that
was cheating. But the thing is that the calculator
is only as good as the user. I can give
(08:45):
you a really advanced calculator and say, calculate the area
under this curve. If you don't know what to put
into the calculator, you won't get the right answer. You
can get an answer, it probably won't be right, and
so I always say, all of these AI tools are
only as good as the user. It'll only get you
but so far, which is the reason why when students
(09:08):
are using AI, it can only get them to a point,
and then their professors or teachers or whoever will ask
them for more, and that's when they have to start
really thinking critically and using it as an assist rather
than as the only thing, the only
tool that they're using. They have to use their brains.
Speaker 4 (09:29):
Yeah, I totally agree, and I think you know, when
you look at the whole idea of like students and
the way that they're using it and whether it's cheating
or not, I think we also probably need to look
at like the way that teachers are evaluating students in
their work. So maybe it's less about what is the
final output, but like show me how you got to
this output. Like, what was the thinking? If the
(09:51):
technology is shifting, I think so does
Speaker 3 (09:54):
the way that we are teaching children how to use it.
Speaker 1 (09:57):
Right. I mean, it should definitely include assessing process, not just output.
That's what we think about for Dope Labs. It's like, yes,
I want you to know these facts, but I'd like
you to think about the string of questions that got
us here. We start with what we know, and then
what we want to know and what questions do we
ask to get to those answers?
Speaker 3 (10:16):
And I think so many people don't know the questions
to ask.
Speaker 4 (10:19):
This is what I tell people. Context is key. The
more context you give it, the better the output's going
to be. Because without any knowledge of what it is
you're actually trying to do or achieve, how is it
able to know what you want? And then also,
being able to write a prompt that is actually going
to get you good stuff out of it. So never
taking something and just running with it on its own,
(10:42):
but always looking at it and evaluating it and saying,
is this what I actually.
Speaker 3 (10:46):
Want out of it?
Speaker 4 (10:46):
I think people look at AI and ChatGPT as
like the scary techy thing, and it's not. It's all
about communication. Like, I think teachers and journalists, those
are the people that are going to be using these
tools the best, because they already know how to
communicate properly, right, and ask questions well.
Speaker 3 (11:03):
And I think that's so much about what this is.
Speaker 4 (11:05):
And if you are, like I said, a domain expert
in what you do, then you're gonna know those questions
to ask.
Speaker 3 (11:11):
You're gonna know.
Speaker 4 (11:12):
When something's wrong, and you're gonna be like that sounds
like a terrible idea.
Speaker 3 (11:16):
Let's not do that.
Speaker 4 (11:16):
Why did you give me that idea? So it is
all about that, like back and forth with it.
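In the same spirit, a small sketch of the "context is key" point: the same request sent with and without background context, so the difference in output quality is easy to see. It again assumes the OpenAI Python SDK, and the brand details are invented purely for illustration:

```python
# Hypothetical sketch: the same request with and without context.
# Assumes the official `openai` package and an OPENAI_API_KEY in the environment.
from openai import OpenAI

client = OpenAI()

def ask(messages):
    # Send one chat request and return the text of the reply.
    response = client.chat.completions.create(model="gpt-4o", messages=messages)
    return response.choices[0].message.content

# Without context: the model has to guess who you are and what you want.
print(ask([
    {"role": "user", "content": "Write an Instagram caption for my product launch."},
]))

# With context: audience, voice, product, and goal are spelled out up front.
context = (
    "You are helping a small candle brand aimed at twenty-something renters. "
    "The voice is warm and a little funny. The launch is a lavender candle "
    "called 'Sunday Reset' and the goal is pre-orders."
)
print(ask([
    {"role": "system", "content": context},
    {"role": "user", "content": "Write an Instagram caption for my product launch."},
]))
```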
Speaker 1 (11:22):
Yeah, what are your go-to AI tools? Folks are
always talking about ChatGPT, but there's so much more.
Speaker 2 (11:43):
Can you talk about those?
Speaker 3 (11:44):
Yeah. So Chat, I would say, Chat is my like
go-to right now.
Speaker 4 (11:50):
We are in a place where stuff moves so fast, right, yes,
but right now, Chat has deep research that is so good.
It has image generation that is so good. It has
these three advanced reasoning models that are like next level.
I'm not on the free plan. I'm also not on
the Plus plan. I'm on the Pro plan. I'm paying
two hundred dollars a month for this thing, which is
(12:11):
insane to me. I remember saying, when I was paying
the twenty dollars a month, would I pay two
hundred dollars a month for this? And I was like, no,
I wouldn't. But now I'm like, absolutely I would. Because
with the Pro subscription you're getting almost unlimited messages on
all of the models, and these advanced reasoning models are
like next.
Speaker 3 (12:30):
Level what they can be doing.
Speaker 4 (12:31):
I mean, filling in for what you'd be paying five,
ten thousand, fifty thousand dollars to a consultancy to do,
you can run and do in an hour on
your own. Like, it's crazy. So Chat, I do
think, is a really great option. You've obviously got Gemini
from Google, which, if you follow me on Instagram
Speaker 3 (12:52):
A while back.
Speaker 4 (12:53):
I used to just hate Gemini, anything
Gemini, at the time.
Speaker 3 (12:58):
It's just like it's so bad, Like it's so bad.
Speaker 4 (13:01):
And now I'm like, no, dude, it is good, and
like their video model is so good. I had like
a moment a couple of weeks ago when it came
out where I was just like, Okay, maybe I'm a
little bit nervous about this because it's creating these videos
that are so lifelike.
Speaker 3 (13:18):
With native sound.
Speaker 4 (13:19):
And what's scary is you've got voice that matches lips
that matches something that's totally fake, and someone can just
put in any text
Speaker 3 (13:27):
They want for it to say.
Speaker 4 (13:28):
So the video models are getting really, really good.
And then Anthropic has Claude. The brother and sister that
founded Anthropic actually started at OpenAI with Sam Altman
and with Elon Musk. They left and they were like, we're
gonna go and create
Speaker 3 (13:45):
A safer AI.
Speaker 4 (13:46):
So there are a lot more guardrails around Anthropic and Claude.
But it's so good at creative writing. It's so good
at creative writing, for real, and it's good at code.
Speaker 3 (13:56):
Right. But therein lies the problem: they all
freaking leapfrog each other every week.
Speaker 4 (14:01):
So it's like, this is the best one, and now
this is the best one, and it's like, I can't
keep up. But Anthropic, OpenAI, and Google are going
to be the ones that remain for large language models. And then for imagery,
Speaker 3 (14:13):
I love Midjourney.
Speaker 4 (14:14):
They just came out with video last week and it's
really really good.
Speaker 3 (14:18):
I was not expecting it to be good.
Speaker 4 (14:20):
They've got a bunch of new stuff, like Omni Reference,
so being able to you know, take your company's product,
take a shot of it on your counter, bring that
in and have it reference it and be able to
get it really close and then turn that into video, right,
And it's just like, what you can do
now is so crazy. But those, I think, would probably
(14:41):
be my... well, I have one more, okay, okay.
So, it just opened in beta recently to everyone.
Speaker 3 (14:51):
I've been using it in alpha for a long time.
Speaker 4 (14:52):
It's called Dia and it's a browser, so it's from
The Browser Company. They have another browser called Arc,
but Dia is an AI browser. So basically on any
website that I'm on, that is the context. So I
could just open up a chat and be like, hey,
(15:13):
can you tell me x y z and it'll just
take everything on there, or a YouTube video you can
be like, can you summarize.
Speaker 3 (15:19):
This YouTube video?
Speaker 4 (15:20):
So say I'm like planning a trip right and I've
got like six different Airbnb tabs open. I just tag
in all of them and say, can you create a
comparison chart for me for this? Give me the pros
and cons, like whatever it.
Speaker 3 (15:30):
is. But no more switching tabs. Yes, it is. And
Speaker 4 (15:34):
it's so good because it's running GPT-4.1
on the back end, so it's quick, and it
just came out in beta like a week and a
half ago, two weeks ago.
Speaker 1 (15:44):
So the way you felt about, what was it, the
way that you didn't like Gemini? Yeah. The way you
felt about it is the way I currently feel about Copilot.
Speaker 3 (15:52):
I never even tried. I mean I used Copilot for
like two days.
Speaker 4 (15:55):
But that was it. I don't know Windows, I don't
know Microsoft, I don't know any of it.
Speaker 1 (15:59):
I'm like, it's such a waste of time. Love Claude, though,
Love Claude.
Speaker 4 (16:03):
Okay, speaking of Google, though, NotebookLM is another.
Speaker 2 (16:07):
Really, NotebookLM is good, so good.
Speaker 1 (16:13):
I have to tell you my
Speaker 2 (16:14):
NotebookLM story.
Speaker 1 (16:16):
Tell it. At work, with my supervisor, we were,
we had an off-site, so we were doing all
this stuff.
Speaker 2 (16:22):
And this is part of my next question.
Speaker 1 (16:25):
A lot of industries are now making it so that
everybody that works in their company knows how to use
AI and knows how to use it effectively. So we
were doing these kind of like experiments where we were
using AI to build certain things, and we used Notebook
LM to create a podcast. And do you know, the
input we uploaded was a picture of something that
(16:46):
he drew on this whiteboard in his handwriting. It's like
chopped-up words, and it just kind of showed
this ecosystem that we were trying to create,
and NotebookLM took it and made like a twenty-
five-minute podcast, and it was spot on. There were
two hosts, it was crystal clear, everything made sense.
Speaker 2 (17:08):
I was like, we barely gave it anything. It was amazing.
Speaker 3 (17:13):
It's nuts.
Speaker 4 (17:14):
It's nuts, and what's even crazier now... So, for
anyone listening, basically what you can do is you can
give NotebookLM, which is a Google product, you can
give it sources. So I could just go to
like five different websites and be like, use these as sources.
Or I could upload a transcript. Or, I actually
had ChatGPT do a deep research on myself
(17:36):
and my brand, fed it that, and then it basically
was a podcast of two people talking about me. And
I was like, this is so like weird, meta, narcissistic,
but also like amazing, Like this is so crazy because
now they've introduced like the like live version, so like
basically they can be having their conversation and you can
(17:58):
be like, I want to raise my hand and ask
a question, and it'll respond to you.
Speaker 2 (18:03):
It's oh my.
Speaker 3 (18:03):
God, it's so crazy. It's so crazy.
Speaker 4 (18:06):
There are so many, so many tools, though, too, where you
can clone your voice, which is a little bit scary.
Just as a quick aside, I think everyone and
their family should have like a safe word now, so
that if you're getting phone calls from someone and
it sounds a little bit weird and you're
Speaker 3 (18:22):
Like, is this really?
Speaker 4 (18:23):
Then, oh, you know, we have this. Because this is
what's happening: people are getting scammed. And so if
you have a word that you can say, hey, what's
the safe word or whatever, and they don't know what
that word is, then they're probably not that person, right?
So just something to think about. It's a weird world.
Speaker 1 (18:39):
Yeah, thinking about recreating voices, how about recreating your own persona?
I tried out HeyGen, which is a generative AI
tool that lets you make avatars of yourself. Now I
wasn't sold, but I'm curious, Lauren, what's your go-to
for this sort of thing? Do you make AI videos
of yourself?
Speaker 4 (18:59):
So I've like messed around with it a little bit,
Like when it first came out, I made a reel
that wasn't actually me. But I don't do it a
lot because here's my kind of thought on it.
Speaker 3 (19:12):
With all of the AI happening,
Speaker 4 (19:16):
People are really looking for personality and want to connect
with people still. So if the content that I'm creating
on my stories is a fake version of me, then
it's like, well, where do I actually come in there?
Speaker 3 (19:31):
Right, And like, I get it.
Speaker 4 (19:32):
There are plenty of people that want to have this,
and like I'm all about creating a custom GPT
that's maybe trained on your knowledge and trained on your
IP that people can go and chat with and like
be able to like learn from. But I just don't
really see the value in terms of like using it
to create I mean, there's plenty of creators that are
doing it and their videos are going viral and like
(19:54):
it works for them, But for me, it just seems
like just as much work for me to like go
and type out what I want to say and then
go and generate it. Where I'm like, I don't really
have much of a.
Speaker 3 (20:06):
Strategy when it comes to my social media.
Speaker 4 (20:08):
When I have an idea, I will make the real
and I will post it right then, and like that's it.
Speaker 3 (20:13):
So for me, I'm happy to just.
Speaker 4 (20:15):
Pull up and start talking and post rather than going
and having a fake version of myself do that. But
I don't know, there are use cases for everything, right? So
maybe it's on your sales page and you want to
have like a little version of yourself down there answering questions,
or whatever. But I think HeyGen is
probably the leading tool for that. I've used
(20:40):
fish.ai for audio cloning just because I had heard
about it and it was different than ElevenLabs. I know
ElevenLabs just came out with something. But this is
what's so wild is there are so many tools for everything,
and it's it's really hard to keep up, especially when
you have to pay for all of them to like
even test them.
Speaker 3 (20:58):
Right.
Speaker 4 (20:58):
So yeah, I'm kind of entering this version of
myself where I'm
Speaker 3 (21:04):
like, okay, Lauren, you need to just focus, because
if I try and be the AI auntie to everything, right,
it's like, I
Speaker 4 (21:10):
Truly like the last two months, I've kind of just
like taken a step back because it's so much and
especially with everything going on in the world, it's just
like to try and keep up with all of it,
and it's just too hard. So I'm like, I gotta
I gotta pick like one or two or three and
focus and like because I am, you know, a creative
director and a designer, and that's kind of my world.
(21:32):
I think that's where it's gonna kind of end up
living and landing. But it's just hard for me because
I have ADHD and I want to you know, I
want to help everyone because I see the value in
what it can do for everyone, and so I'm like, oh.
Speaker 3 (21:43):
But what if, like what about these people, what about.
Speaker 4 (21:46):
The marketers or what about these people? And it's like, yes,
it can help everyone, but I I gotta like figure
out how to like keep it on, keep it on
the tracks, otherwise I'm gonna end.
Speaker 3 (21:54):
Up off the tracks because it's just so crazy.
Speaker 5 (21:57):
Right now.
Speaker 1 (22:12):
You're bringing up so many good points, which is making
my head like grow bigger and bigger with all these questions.
Like I mentioned before, it seems like now jobs are
expecting you to know how to use AI when you
start the job. So Morgan DeBaun, she's the CEO of Blavity,
and she created her own GPT so that her assistants
can write emails. They can do a lot of things
(22:35):
for her because they just prompt her GPT, which has
basically her brain, and they can do things for her
and those are the types of skills that folks want
you to have. But the fear is that people
are like, oh, I'm going to lose my job. There's
not going to be jobs. All the jobs are going
to be AI. Can you talk about the future and
(22:55):
what that might look like, and like technology that you
feel is going to change the game, and how people
can navigate these new AI waters?
Speaker 4 (23:06):
For sure. And actually, funny story, Morgan, I actually did branding
for her matcha brand.
Speaker 2 (23:10):
Oh my god, that's amazing.
Speaker 4 (23:14):
A little bit of a full circle there, But no,
I think that you know, I get it. I understand
why people are nervous about it, and I understand that
there is this fear, but I also think that that's
kind of been what has happened forever. Right, So, like
you gave the calculator example, but if we look at
the example of.
Speaker 3 (23:33):
Like in the creative space.
Speaker 4 (23:35):
Okay, so we had people that were like painting portraits, right,
and then the camera came along, and it was
Speaker 3 (23:41):
like, well, we don't want to sit for a portrait
for eight hours. The goal of technology, generally,
is to get
Speaker 4 (23:47):
things done faster, whatever it is. Right? So okay, now we've got
film photography. Yeah, you still got to sit there for
a while when it first came out because it takes
a while for that to work.
Speaker 3 (23:56):
But then okay, now.
Speaker 4 (23:57):
We're getting into like better cameras and it's quick clicking,
push and boom and we're done. But now we've got
digital cameras, right, So what happens to all of those
people that are working at like the film processing plants
or building cameras that are not digital?
Speaker 3 (24:08):
Right, those are going to change. And now we've got photoshop.
Speaker 4 (24:12):
But now we have a company like Adobe, right, So
like where did all those jobs come from?
Speaker 3 (24:16):
So I truly.
Speaker 4 (24:18):
believe that it's all about, like, we have to evolve.
And yet, just because you had one job and
that's what you learned doesn't mean that
that's life
Speaker 3 (24:28):
And that's what you get to do forever. And I
know that maybe.
Speaker 4 (24:30):
Comes across a little bit like privileged. But at the
same time as if you want to stay ahead, it's
going to be on you to make sure that you
are evolving with the technology and changing with the technology
and shifting what your role even is. Right? Now
you can offer, you know, these other things that you
couldn't offer before, or now you can say, hey, I can
give you five different ideas in the same amount of time,
(24:53):
or I can not give you five ideas, but now
I can spend more time on your project because I'm
taking less time upfront to come up
with the ideas.
Speaker 3 (25:01):
Right, So it's all about.
Speaker 4 (25:03):
kind of shifting how you're bringing AI into your
workflows and understanding it. Because, like, for me, like I said,
I have ADHD. So, a lot of it's... there are a
lot of people out there that preach, okay, I'm gonna
teach you how to use AI to save
Speaker 3 (25:18):
You, you know, twenty hours a week.
Speaker 4 (25:20):
That is not what I preach. I don't tell people
I'm gonna save them time because for me, just because
I'm using AI doesn't mean that I'm spending less time
on something. It just means that I'm spending different time
on different things, I would say, right, So it's more
about like I'm using it as like this creative lever,
whereas other people are using it as this like automation lever,
(25:42):
and some other people are using it as like a
time saving lever or all of the things combined.
Speaker 3 (25:47):
But I do, I really do think that it's up
to everyone.
Speaker 4 (25:51):
to make the determination of, like, okay, I know I
need to learn how to use AI. I know. Like,
and if they don't want to learn to use AI,
then that's their decision. But I just don't see
the value of ignoring something as large as what we
have here. It's bigger than the Internet, bigger than all
of these things. Like, if you look at somebody
(26:11):
that said, oh yeah, I don't, you know, I don't
need a website for my business, you'd be like, well,
good luck out there, right? And it's like, what do
you think is gonna happen when all of your competitors
are adopting AI and can offer your potential clients stuff faster, stuff cheaper,
whatever it is, because they are using these tools? Yeah,
(26:32):
or better.
Speaker 3 (26:32):
Yes, because we have a better idea exactly.
Speaker 4 (26:36):
And if I can go and say, oh, yeah, sure,
I'm going to run an entire deep dive on your brand,
I'm gonna be able to tell you exactly.
Speaker 3 (26:42):
Who your audience is.
Speaker 4 (26:43):
If you don't know who your audience is, We're gonna
figure out who your audience is based on all of this. Like,
there's just so much more value when you're using it
properly that you can provide to customers and clients. Like,
you know, it makes me nervous
for people that don't want to even at least try
because they think it is cheating, or they think it's stealing,
(27:06):
or they think it's whatever they think it is. But
it's almost like somebody, you know, giving a restaurant like
a one star review and they've never.
Speaker 3 (27:14):
Even been inside.
Speaker 4 (27:14):
It's like, how can you make that determination when you
really like, maybe you played with it twice and you
didn't know what you were doing and so it gave
you a bad answer, and now you've written it off
and it's like, all right, well okay, I don't know.
I just I'm so I feel so strongly that it
is going to change the course of everything that if
(27:37):
you don't understand it, you are not in a driver's
seat position anymore. You are following everyone else and you're
hoping that you can understand what's going on. So
it's like, just start learning now so that it can
get easier, rather than you jump in later and you're like,
Speaker 3 (27:52):
What is all this? You know?
Speaker 1 (27:54):
Yeah, I think that's so great. Recently, Titi, I don't
know if you know this, my mom has been taking
an AI class. Okay, they have them for seniors. Because
I was telling her, I was like, oh, for this I'll
use ChatGPT. And I think it's so important to
keep your brain active and to be thinking about what's
new and understand technology. And it also helps you,
when you see scamming and phishing attempts, to ask: how are they
(28:15):
using these things to trick me? Right? So, you know,
I don't think these tools are inherently nefarious.
I think it is, just like any other tool, about how
someone is using it. A kitchen knife can be great
for chopping onions, or it can be used to commit
a crime. So I think it's just like, yes, exactly exactly.
Speaker 3 (28:33):
That bad people are going to do bad things. It's
about, like, you know what I mean. And it's like, the
Speaker 4 (28:38):
technology and the tool itself, on its own, without
anyone driving it, it's nothing. It's just sitting there.
Speaker 1 (28:44):
But you have to know what's possible. So I
showed her HeyGen so she could understand. I said, this
is making a video of me, and if you saw it...
And I gave her multiple examples, like, what do you
think is a real video of me versus what do
you think is generated, just so she could
see the limits of how far it could go
and how much it could look like me. And I think
that makes a better case for what Lauren just explained
earlier about having a password, so even if it's not
(29:05):
something you're gonna adopt, you can get out of the
way when it's coming.
Speaker 2 (29:09):
And that's the same way I feel about AI.
Speaker 1 (29:10):
If you won't get in the driver's seat, if you
won't get in the passenger seat, you do need to
know how fast it goes so you can get out
the way or see it coming.
Speaker 3 (29:18):
And put your seatbelts on, buckle up.
Speaker 1 (29:21):
Yeah, Lauren, Titi shared an AI-generated creator, and
they had on like a bonnet. It was a Black girl.
She was talking, she was using Black vernacular. I
was like, oh, it was giving homegirl, but not connected
to anything. And you know, I worry about like the
(29:44):
adoption of different group aesthetics and kind of digital blackface
in this space and what it means for groups that
may not kind of know that this is happening. And
I'm curious, like, have you seen these kinds of personas?
I guess they're not people. And then, as a designer,
(30:05):
a brand designer, and a person that's teaching other folks
how to do this type of work, Like, what responsibility
do you feel like we have when we consider the
ability of tools to replicate races and cultures and identity,
whether it's for financial gain or just for the aesthetic value?
Where do you land on that?
Speaker 3 (30:23):
For sure?
Speaker 4 (30:24):
So I will say I'm not really on TikTok because
I've told myself if I got on TikTok, I would
never come out of it.
Speaker 3 (30:31):
But I know, I've seen what you're talking about.
Speaker 4 (30:33):
But I don't think there's quite as much like on
Instagram yet. Again, it's all about like the user's responsibility,
and I think a lot of the people that are
doing stuff like that are not the most responsible people, right,
And I think they're just like, oh yeah, sure, and
I'm gonna throw this up there and like I don't
care if it's true, I don't care if it's biased.
I don't care about any of this stuff, because they,
(30:53):
they're generally trying to just.
Speaker 3 (30:55):
like make money and don't care.
Speaker 4 (30:56):
Especially since they can kind of hide behind this fake
person now, so there isn't as much accountability as well.
The other piece of it is like generating like images
in general, whether it's like a clone of a fake
person or like just an image, you know. Like I
worked at Ulta. I would never say like, yeah, let's
use AI people for our photography, right, I would say, hey,
(31:20):
let's use AI to help us like cast who do
we want to cast as models?
Speaker 1 (31:25):
Right?
Speaker 4 (31:25):
Or what are the shots that we want to get,
or what should the makeup look like? But then of
course we're going to go and hire a real person
and have that shoot because I just like, again we
come back to the thing of like people being real
and authentic, and we have to make sure that like
we're not misrepresenting. And to be fair, so much of
the data in large language models is biased because everything
(31:48):
on the Internet is biased, and it's the way that
we have this information that it's like learning from.
Speaker 3 (31:54):
Right, Gemini got in trouble.
Speaker 4 (31:55):
They had to shut down their video or their image
model for a while because it was misrepresenting, like, historical things.
Speaker 2 (32:02):
You know.
Speaker 4 (32:02):
People would ask for, oh, can we see like this,
and it was like, no, no, that's not what that
should be. And so they shut it down because they knew it
wasn't doing it properly. But there's just so much still
that can happen that's not right and
not appropriate that's going on. Like, we're seeing deepfakes,
all sorts of stuff. And I think again it
(32:23):
comes back to like it's about the person using it,
having the morals and the values of what it is
that they're creating. But I do think like when we
talk about, you know, creating people of all different colors
and races and nationalities and sizes and skin tones and
hair types and body types, like there's so much and
I think, you know, as somebody that did work at
(32:45):
Ulta, where we were trying to cast diverse models,
you know, or you know, we go and try and
find stock photography, and it's like, okay, well, when you're
looking for stock photography, you do have to get
specific, about like, okay, yeah, I'm looking for like a
plus-sized Black woman, or I'm looking for like a smaller
Asian woman, or whatever it is. You do have to
be specific. And so I do think people look at
(33:06):
it and it's like, oh, well, now you're having to
like say what you want, and it's like, well, you
still would have needed to do that when you're casting
a model.
Speaker 3 (33:12):
Or you're casting anyway.
Speaker 4 (33:13):
So yeah, but it's a matter of like, okay, now
when we see this image, is this realistic to what
that person would actually look like based on real shit,
you know what I mean, rather than what you
think or what AI thinks it is. Because AI imagery,
the way that it's created through diffusion models, is all,
Speaker 3 (33:34):
I mean, that's a whole other thing.
Speaker 4 (33:35):
But I think that's the piece too that you have
to be able to like have again that domain experience
of like somebody to look at that and be like,
we're not using that. That doesn't look appropriate, that doesn't
look like someone from Jamaica, or that doesn't look like
someone from wherever, right, and we're able to look at
it and make those determinations for ourselves.
Speaker 3 (33:56):
And I think the other piece of that, too, is,
when we... a lot of people talk about, okay, you know,
when we're putting information in and we're giving information to
ChatGPT, it's learning from us.
Speaker 4 (34:07):
And I do think that it is a little
bit of our responsibility to put stuff in there now, too,
that's new, to kind of negate a lot of the
biased information that it has, because if we don't, then
like how is it gonna ever be able to kind
of change?
Speaker 3 (34:22):
And I saw something today that was like, Elon
Speaker 4 (34:24):
Musk is going to try and use Grok to like
rewrite history because he thinks that all of the data
that we have is like bullshit or something.
Speaker 3 (34:32):
And I'm like, oh, good, good, good good.
Speaker 4 (34:34):
That's who we want rewriting history.
Speaker 3 (34:37):
So yeah, I don't. I mean, it's all it is
all very like.
Speaker 5 (34:42):
Scary.
Speaker 4 (34:43):
And that's the thing, right, is like awareness and understanding,
and like that's what needs to happen is people need
to understand how these tools work, what's wrong about them,
what's right about them. Sam Altman, the CEO of OpenAI,
just came out and was like, why are you guys
trusting these things?
Speaker 3 (34:58):
You shouldn't be trusting it. This is the one thing you
shouldn't be trusting. And people are like, what, why are
you saying that? It's like, because he's right. We shouldn't
just be blindly trusting AI models. That is not what
should be happening.
Speaker 4 (35:10):
So I think, just generally, I think the main message
here is like learn what these things can do, learn
the good they can do.
Speaker 3 (35:19):
Learn the bad they can.
Speaker 4 (35:20):
Do, and really just have a good understanding of what
it's capable of so that we can understand just like
your mom, like is that fake?
Speaker 3 (35:26):
Is that real?
Speaker 1 (35:27):
Like? What? Like?
Speaker 4 (35:28):
Awareness is so important right now, just across the board
to understand, like what's.
Speaker 3 (35:33):
Real and what's not.
Speaker 2 (35:35):
I love this.
Speaker 1 (35:36):
This is the information that people want to hear. I mean,
you talked about all of the good things with AI
and then some of the potentially really scary things with AI,
and like you were saying, it's good to just know,
Like you might say, oh, I don't really want to
engage even though people are engaging with AI and they're
not really realizing it. Understanding and having awareness of what
is going on, where we're moving to as a world,
(35:59):
Like what is going to be possible in the next
six months?
Speaker 2 (36:03):
It is wild.
Speaker 1 (36:04):
It's wild, and regulations are trying to keep up and
it's just a wild landscape, right.
Speaker 2 (36:10):
And I'm excited to try all of these tools. Oh, somebody,
I'll have them drop those things.
Speaker 1 (36:15):
I'm gonna drop these in the episode description and you
can see all of these things because I think that's
gonna be important for sure.
Speaker 2 (36:22):
Thank you so much, Lauren. You are amazing.
Speaker 1 (36:25):
I am so excited that Zakiya introduced me to you,
and so I'm excited to continue following you. I'm gonna be
diving even more into AI because I was like, I
feel like I'm an AI expert.
Speaker 2 (36:36):
After listening to you, I'm not. There's still so.
Speaker 1 (36:38):
much I could be doing, and it just got
everything firing in my mind. I was like, oh, I could
be more productive in this way and that way and
this way and that way.
Speaker 2 (36:46):
Let's get to it.
Speaker 1 (36:54):
You can find us on X and Instagram at Dope
Labs Podcast. Titi is on X and Instagram at
dr underscore tsho, and you can find Zakiya
at z said so. Dope Labs is a production of
Lemonada Media. Our senior supervising producer is Kristin Lapour and
our associate producer is Isara Savez. Dope Labs is sound
(37:17):
designed, edited, and mixed by James Farber. Lemonada Media's Vice
President of Partnerships and Production is Jackie Danziger. Executive producer
from iHeart Podcasts is Katrina Norvil. Marketing lead is Alison Kanter.
Original music composed and produced by Taka Yasuzawa and Alex
Sugiura, with additional music by Elijah Harvey. Dope Labs
(37:40):
is executive produced by us, Titi Shodiya and
Zakiya Whatley.