Episode Transcript
Available transcripts are automatically generated. Complete accuracy is not guaranteed.
(00:00):
(upbeat music)
(00:02):
Welcome to the Good Fit Careers Podcast,
where we explore perspectives on work that fits.
I'm Ryan Dickerson, your host.
Today's guest is Chad Connally,
a senior conversation designer with Amazon's Alexa team.
Chad, thank you for being here.
- Hey, Ryan, how are you?
- I'm doing great.
I wanted to bring you on
because I find your work truly fascinating.
(00:24):
You've done everything from product management with IBM
and the Weather Company through conversation architecture
and conversation design with Google and Amazon.
I would love to learn more about your perspective.
Would you tell us a little bit about
what you were like as a kid,
what you wanted to be when you grew up?
- Sure, I always say like when kind of
you get the like pre-roll of my career,
it's a nice way of saying that I'm old now.
(00:45):
(laughing)
Which I always tell people too,
it's a good way of saying like the opposite of that is worse.
Like if you're not old,
then something bad happened along the way.
- Right, lucky to make it here.
- Right, I think like as a kid,
I feel like I was like a weird kid,
but a very well adjusted weird.
I was a gay kid and grew up in West Texas
(01:07):
and somehow I was also well parented for that reality.
Was sort of raised to think that I was no better
or no worse than anyone else,
but also very aware that I was different than everyone else
just by the fact of waking up every day.
And I think somewhere along that journey,
I just learned to embrace that difference
(01:29):
and kind of lean into it and do what I do
and do what I wanted to do, which is still true to this day.
I was an avid reader.
That's a thing that was true then and is true now.
I don't know if I have one story where I was like,
where I wanted to be an astronaut or I wanted to be a doctor.
I think maybe there was a period
where I wanted to be an artist,
(01:50):
but history became my favorite subject and was my college major.
Coming out of school in 1999, technology was blowing up
and sort of took jobs in that space and never looked back.
- Sure, so what did you study?
When you were thinking about your education
in college and whatnot, how did that transition
to that first full-time job after school?
(02:13):
- I think I was fortunate with timing.
At least as far into the late '90s,
consulting firms would still hire full classes
of like 50, 70, hundreds of new grads.
And you would graduate having gone to school
and go back to school for consulting school.
(02:34):
- Oh yeah, exactly, teach you how to actually do the job.
- For us, sometimes a summer
before you actually start working,
I think mine was I started consulting
the training program in June
and didn't start client work until September.
I think having a liberal arts background
and just kind of knowing how to learn
really stood me well in that space, but yeah.
(02:57):
So I think that prepared me well.
I think I was surprised that the technical things
weren't as challenging as I thought they would be.
- What do you mean?
- I was such a like, we were learning how to write
like SQL queries, for example.
And I thought, oh, like I thought it was like code
at the time, like hardcore, like little did I know.
(03:18):
But I thought, oh, that's gonna be like really hard.
And it turned out like it wasn't that hard for me.
It was just as simple as learning like syntax
and kind of like a, not a vocabulary,
but learning like a grammatical structure almost.
- Yeah, a little more intuitive than you may have thought.
- Yeah, and then having like familiarity with language
and learning how to read and to process
(03:41):
and structure language, I was like,
oh, I was able to figure it out pretty easily.
- Right, SQL, got it, right.
- Some things that I thought would be easy
were hard and some things that I thought would be hard
were easier than I thought, so.
- So you started out in consulting.
Where did it go from there?
When did you feel like you started to hit your stride?
- Late.
(04:01):
- Yeah, sure.
- Late bloomer, as I think you know
from previous conversations.
So it took me a long time.
So that was '99, 2000, somewhere around,
oh gosh, I wanna date things.
Like President Obama's first election,
I got a job in advertising as a program project manager
(04:24):
and managed, or helped manage, the redesign of usps.com,
which was the big project in that space.
Not to go into too many details,
but stayed in kind of the advertising, digital program,
product management really up until 2016, '17.
And by then I had made my way to IBM,
(04:46):
was in product management roles.
And by then had kind of grown like disillusioned,
not disillusioned with tech,
but just kind of program project management.
I just was, wanted to do something
that was a little more challenging, a little bit richer.
By then I was turning 40 as well, or about to turn 40.
Somehow I inherited the voice and conversational AI products.
(05:08):
And I remember my manager at the time asking me like,
hey, start figuring out what kind of resources you need.
And so I started reading, talking to people,
I started learning about this conversation design thing.
And when I started digging into it a little bit more,
I thought, hmm, I think with the amalgamation
of like skills and experiences I have personally
(05:30):
and professionally, I think maybe I could do this.
I also knew I wasn't gonna get a person.
So I self nominated and had to kind of do
like a hyphenate thing and be the conversation designer
and product manager for an Alexa skill
for the Weather Channel, but I did it.
And that was kind of the aha moment.
(05:52):
But it didn't happen until I was, you know, 40 or so.
- Remind us the story here.
IBM acquired the Weather Company.
That was one of the predominant
kind of weather apps out there.
Alexa was really just emerging.
Large language models weren't really a thing
that anybody talked about outside of the research world.
How did you get into a position where you were building
(06:15):
the first Alexa skills and how did you design
some of those first interactions?
- Conceptually, I don't think I obsessed too much
over the tech.
I had good enough technical partners
that I didn't need to understand
like all of the under the hood stuff.
I needed to understand kind of the basics
of intent discovery, intent detection,
(06:36):
how to define kind of basic system logic
and then how to kind of write prompts.
And that's kind of how I describe the profession in general,
or at least the profession as it was kind of pre LLMs
is to kind of put it in three pillars, which is language,
what are the things people say, logic,
(06:57):
what are the ways that a system needs to handle
the things people say, kind of the mediating logic
between two parties in a conversation.
And then prompting, what are the things
the agent needs to say back, right?
So kind of got to handle all those three elements
and was able to figure out the rest by trial and error.
Really, one of the better teams
(07:18):
I think I've been on across my professional career
at that place.
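To make those three pillars concrete, here is a minimal sketch in Python, purely illustrative; the intent names, sample utterances, and prompt wording are hypothetical and not taken from any actual Alexa skill.

```python
# Illustrative only: language (what people say), logic (how the system
# routes and mediates), and prompting (what the agent says back).

# Language: sample utterances grouped under intent labels.
SAMPLE_UTTERANCES = {
    "get_forecast": ["what's the weather today", "give me the forecast"],
    "get_precipitation": ["will it rain today", "do I need an umbrella"],
}

# Prompting: designer-written responses with slots filled at runtime.
PROMPTS = {
    "get_forecast": "Today in {city}, expect {conditions} with a high of {high}.",
    "get_precipitation": "There's a {chance} percent chance of rain in {city} today.",
}

def respond(intent: str, slots: dict) -> str:
    """Logic: pick the right prompt for the detected intent and fill it."""
    template = PROMPTS.get(intent)
    if template is None:
        return "Sorry, I didn't catch that. Could you rephrase?"
    try:
        return template.format(**slots)
    except KeyError:
        return "Sorry, I'm missing some details for that request."

print(respond("get_precipitation", {"chance": 40, "city": "Austin"}))
```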
- When you're thinking about intent,
you mentioned intent detection.
Was this something that you had to code out at this point?
Would the models recognize what people intended?
- In legacy NLU systems, you absolutely define that.
So there are different ways to do it.
(07:38):
It has often been like human identified.
So you'll have teams of labelers
who look through essentially a spreadsheet
and say rows three, seven, and 17 mean the same thing.
Different rows mean different things.
So essentially you kind of create these like bucketings
of meaning, and then these
(08:02):
buckets become intents.
You point each intent toward a different path of logic
so that they're handled differently.
I don't know if that may be hard to process like orally.
It's usually easier with a visual.
- Well, I mean, I think I get it.
So it sounds like if you've got a few hundred examples
(08:23):
of Alexa, tell me how to do this,
or tell me what the weather is like,
and you have different people say it in different ways,
you'll tag or label or categorize those things
and say, hey, look, just spit out the weather, right?
Tell us what the temperature is
or what the forecast is gonna be.
- You'll wind up with things that line up
to just something that's like,
give me a general weather report.
(08:44):
You'll have another lane that means like,
tell me about a general weather report,
but focus on precipitation.
Like, will it rain?
You may have another one that's precipitation,
but that's about snow.
You may have like pollen, allergy focused.
So just kind of, so sometimes you'll go from like
that kind of like very specific down to some nuances.
(09:07):
But yeah, you're generally like grouping things
kind of around like topically, subtopically,
and in a weather case.
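As a rough illustration of the labeling and bucketing described above, here is a small Python sketch; the rows, intent labels, and handlers are hypothetical, not from any production weather skill.

```python
# Hypothetical labeled rows: labelers decide which utterances mean the
# same thing, and each bucket of meaning becomes an intent.
labeled_rows = [
    ("what's the weather like today", "general_weather"),
    ("give me today's forecast", "general_weather"),
    ("is it going to rain this afternoon", "precipitation"),
    ("will it snow this weekend", "snow"),
    ("how bad is the pollen right now", "pollen"),
]

# Each intent points toward a different path of logic.
def handle_general_weather(): return "Fetching the general weather report..."
def handle_precipitation():   return "Checking the chance of rain..."
def handle_snow():            return "Checking expected snowfall..."
def handle_pollen():          return "Looking up today's pollen and allergy levels..."

routes = {
    "general_weather": handle_general_weather,
    "precipitation": handle_precipitation,
    "snow": handle_snow,
    "pollen": handle_pollen,
}

for utterance, intent in labeled_rows:
    print(f"{utterance!r} -> {intent}: {routes[intent]()}")
```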
- And weather data itself is so complex.
There's so much depth to it.
Did you think that in 2016, 2017,
that voice interaction was gonna be
how humans interact with computers in the future?
Was this the vision that you saw?
(09:29):
- I think I've been around, had been even then,
I thought I had been around long enough
to know that every time a new hype cycle comes along,
the new thing is the one everyone says
is going to upend the others.
And the reality is always that the new gets added
to the existing mix.
And it's just that the proportion shifts,
and the range of things we can do just expands.
(09:53):
So for me, voice just becomes another input,
or let's just say conversational inputs,
because you can text with agents now
in Facebook Messenger, WhatsApp, and others,
conversational inputs are just another means
of interacting with bots.
So I think it just gives another level of convenience
(10:15):
and lets us all kind of do what we need to do
the way we need to do it whenever we need to do it.
- Sure, just another layer of complexity
that makes a lot of sense.
- Or another layer of convenience,
if you wanna say it that way.
- Yeah, yeah, yeah, sure.
- So you were in product management,
you were one of the kind of emerging conversation design folk
at IBM and the weather company.
(10:37):
Was there anything that you really struggled to master
or to become proficient at to be good at your job
at that point?
- Patience?
Is that a weird thing to say?
- No, yeah, elaborate.
- I saw the potential of the technology like very clearly
like for like weather use cases,
(10:58):
gosh, I wanted to put like AR layers on cameras.
Show me the tornado and like,
is it coming toward my house or is it somewhere else?
And then just learning that's really far away
for lots of reasons.
So I would say that patience in one sense
(11:18):
of just like the potential versus like
what are kind of the incremental steps to get there,
which maybe is like an inventor or designer's dilemma.
Sometimes it's just that that's the reality.
That's unfortunately the reality.
But I think like where I certainly got better over time
is just depth of technical understanding.
(11:39):
So that's maybe where I started with frustration,
but that's also where, over the course of moving,
you know, from IBM to Google, to Amazon,
is just the level of technical understanding
has grown exponentially.
- So from there, you eventually landed at Google.
Can you tell us a little bit about your experience there?
- So caveat, I was a TVC, or, I never remember
(12:01):
what the acronym means, but I was a contractor
on a huge program there, Contact Center AI, CCAI,
servicing large telecommunication clients.
I wound up managing a pretty large team
of conversation designers, data labelers,
data scientists, data engineers, and QA analysts.
(12:25):
I think by the end it was 65 people.
It was very, very big, which coming to Amazon,
I have no people and I think I prefer having fewer
rather than that many. - A little bit of help.
- Having fewer rather than that many.
I like to know people's names
and know what's going on with them.
But the experience there was very good
(12:47):
just in terms of, I mean, Google is Google
and you kind of see like what the best, you know,
among the best of the best,
like what is kind of going on in the industry.
So just kind of lessons in like scale and organization.
- At that stage, in terms of what sort of interactions
at a high level you were thinking about,
(13:07):
were there specific things that you were excited
about being able to build,
or perhaps were there things that you were limited by
technology-wise or, you know,
whatever you can safely talk about.
I'd love to get a little bit more insight
into how one would actually begin to help
such a large kind of population of people
that the people who would be calling into call centers,
you know, that kind of thing.
(13:28):
- It really is an interesting proposition
coming from weather and even some of the time
prior to that in the advertising space.
You know, I mentioned USPS, I had done work for Nike,
I had done work for Audi of America, Hershey, Nickelodeon.
I had done a lot of like consumer products stuff.
So kind of being in a,
you could say like contact centers
(13:48):
don't seem as exciting maybe.
But if you think about the amount of time
a person spends on hold with customer service
or in the middle of like customer service transaction,
if you actually can make that easier
with technology somehow,
it provides a lot of value to whomever
(14:11):
is having to resolve a problem with a bill
or a question they're having about a product.
So kind of being able to just reframe
what seems like a more mundane problem in a positive way.
I think maybe being closer to some of the data operations.
Just, I was, you know, it's a little bit like,
I guess people use the term,
have used the term dogfooding for a long time.
(14:34):
I didn't have to do data labeling myself,
but I had to, I guess I had to supervise it
in some instances.
So I had to kind of help the team craft standards
in terms of how we were gonna review huge volumes
of transcripts, like how we were gonna define,
we were talking about intents earlier,
how we were gonna define categories of meaning
(14:57):
and really get down to like a really granular level
of detail that I think was beneficial.
So you kind of learned that like a company,
like a Google, like an Amazon, like even an IBM
are as great as they are because of the mastery
of the most minute details.
(15:17):
- It's all in the nuance, I guess.
Can you tell us a little bit about what conversation design
is today and kind of how you see that function
or that profession?
- I think we're at a point of inflection or profound change.
It's definitely something that we all talk about as peers,
even as we go about, you know, kind of our day-to-day jobs.
(15:39):
I don't know of any unemployed conversation designers.
I don't get those calls like, "Hey, I got laid off.
I'm looking for something.
Can you help?"
That doesn't happen.
It seems like we're at, you know,
as far as I can tell pretty close to,
I don't have the stats, but I would guess
like fullish employment, let's say.
So everybody's busy, let's say,
(16:00):
but the AI revolution of 2023,
as I think it will be remembered,
I don't think it was only 2023,
I think 2023 was the Bastille moment, was the first.
And I think, you know, we will be one of the many professions
changed by the AI revolution in ways that I think
even we don't totally understand yet.
(16:22):
So I think it's hard to say
because I think it's changing in real time.
- So what's the job like today?
What would be a day or a week in the life
or however you want to describe the function
that it has evolved into?
- You know, I think I can say this, and it's not just true here,
it would be true beyond Amazon,
which is everybody still has things they have to deliver.
(16:44):
Like I said, I don't know anyone who isn't employed.
So everyone is still ostensibly doing project work,
strategic or tactical.
People are still trying to reap the benefits
of NLU based systems and contact centers
and all kinds of functions.
Generative AI, LLMs are mature enough for some things,
(17:06):
not mature enough for others.
So I think today it's not too different
from what it was a year ago, two years ago.
Where it might differ is for individuals
who work at companies that are heavily invested
in either developing or deploying generative AI,
(17:28):
large language models.
That's where I think you would start to see more disruption.
- So let's just pretend that I'm lucky enough
to join your team and I'm a total newbie
at conversation design.
Would you walk me through perhaps how to,
or teach me how to design an interaction like you would
in your normal day to day?
- Well, the first question I have to ask you, Ryan,
(17:49):
is are you a control freak or are you not a control freak?
- I would lean on the less control freak side of things,
I think.
- Then this is a good time for you
because this is not a time to be a control freak
as a designer.
- Tell me more.
- Because the AI should be able to do a lot of this for you.
Where the rubber really hits the road is in how smart
(18:10):
the AI is and how much it learns how to do by itself.
And this is sort of out there in YouTube videos
and this, that, and the other.
So if the AI has picked up sufficient data patterns,
references the right conversations,
right types of conversations,
then we don't have that much to do
because it should have seen examples of conversations
(18:35):
and be able to detect the pattern and kind of do it itself.
- To get a little bit more detail,
when we're thinking about a project
that you would have to deliver on,
is there a way that you would think about structuring
what am I trying to accomplish in building this interaction
or architecting this sort of conversation?
Is there a framework that you use to think through
(18:58):
how the AI should learn to do this gracefully on its own?
- You're kind of going back to kind of the basics
of UI, UX design and customer empathy
and thinking about and understanding
their intentions and their goals.
What are their need states
and what tasks are they trying to perform?
(19:19):
And then from there, it's going to be language-based.
So how are they going to conversationally
accomplish that task and what tools are needed, if any,
by either the AI or the person to accomplish those
and then how do you do the integration?
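One rough way to capture that framing as a working checklist is sketched below in Python; the field names and the insurance example are illustrative assumptions, not a prescribed template.

```python
from dataclasses import dataclass, field

@dataclass
class UseCaseBrief:
    """Illustrative design brief: who the user is, what they need,
    how they'd say it, and what tools the AI needs to fulfill it."""
    user: str                                   # who is this for
    need_state: str                             # the situation they're in
    task: str                                   # what they're trying to accomplish
    sample_utterances: list = field(default_factory=list)  # how they'd say it
    tools_needed: list = field(default_factory=list)       # integrations required

brief = UseCaseBrief(
    user="existing policyholder",
    need_state="has a billing question mid-cycle",
    task="find out their current deductible",
    sample_utterances=["what's my deductible", "how much is my deductible"],
    tools_needed=["policy lookup API"],
)
print(f"{brief.task}: needs {brief.tools_needed}")
```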
- When you're thinking about broadly conversational AI,
(19:42):
image, video, the like, what's your view on the space,
the industry or the technology as it stands?
- It feels a little bit like the '90s.
It's kind of like a tool explosion.
I mean, there was pets.com, cats.com, dogs.com,
like a website for this, a website for that.
(20:02):
There's an AI for this, an AI for that,
one that can record your face, one that can record your voice
like, it will be kind of interesting to see
like where the dust settles and kind of who's left standing
a few years from now, maybe everybody, but maybe not too.
Where is the real utility and where is novelty?
(20:23):
- So what do you think?
Where is the real utility?
- Honestly, I think that there's just so much information
in the world and in our personal lives
that anything we can use as a tool to process and summarize
is gonna feel like a win.
- How interesting.
Do you think we're there?
Like, do you think that's gonna be a 2024 thing
or perhaps closer to 2030?
(20:45):
- I don't know.
I really don't.
- Yeah, me neither.
It's gonna be fascinating to see how it goes.
I would love to be able to, at the end of the day
or at the beginning of the day,
have some sort of large language model, AI, GPT,
whatever it be, just get me up to speed
on all the news that I would find interesting.
I mean, that would be exciting.
(21:05):
Switching gears a little bit,
tell me a little bit about how you approach hiring.
- Well, my current role, I don't hire.
I don't currently manage people.
But how I have hired in the past is
I think I look for aptitude, attitude, and appetite.
And appetite, I would say, could be drive or curiosity
(21:27):
or maybe a little bit of mixture of both.
Yeah, obviously every role has some skill requirements.
You need to make sure the bases are covered
or that the person can get there quickly.
But ultimately, I'm looking for somebody who can grow with,
and especially in a field that changes so much,
I'm looking for somebody who can grow with
(21:48):
and push the boundaries of the job and push me.
- What do you mean by that?
- Push me to, if I'm the leader of the team,
push me to stay ahead of them or get out of their way.
Like if that's what has to happen, like seriously,
I wouldn't stand in someone's way.
- So let's just say we've got somebody who's excited
about the concept of conversation design.
(22:09):
They see the opportunity there.
Is there anything that you'd recommend
that they become proficient at or learn on their own
before they would kind of toss their hat in the ring
and say that I wanna be a conversation designer?
- I would say there are three,
I've noticed like three major patterns.
There are people like me who did a job switch
somewhere along the way who are maybe already seasoned
(22:30):
professionals and for whom it was a natural transition
at some point in their careers.
There are people who started in some form
of conversation design and they usually went
through a company like Nuance or 24/7
that did more like traditional IVR services
and then transitioned into Alexa, Google Assistant, Siri,
(22:52):
as those started coming online.
And then the third category, which I would say
are more of the like junior, more juniors,
came through some form of like data labeling role.
- Interesting, so those little tiny details
that you were saying that Google
and these other large companies have mastered,
it seems like the pipeline for new candidates
(23:13):
is to come through and understand those little tiny
kind of kernel level building blocks.
- Yeah, but I wouldn't say even that
as like an established career path.
I would say it's people have still had
to kind of self-direct, acquire skills, network to get there,
but those are kind of the three patterns
(23:33):
I think I've observed across all the places
that I've worked and people that I've come
to know in the space.
- Interesting, so in either of your last couple of roles
when you've been very much so focused
on conversation design and conversation architecture,
what have your hiring processes been like?
What was it like for you to interview with these groups?
(23:54):
- I mean, these big companies have such defined
recruiting processes, like it is what it is.
You can't do much more than just try to be prepared
and ultimately be yourself
and try to represent yourself well.
- We'll get back to the conversation shortly,
(24:15):
but I wanted to tell you about how I can help you
find your fit.
I offer one-on-one career coaching services
for experienced professionals who are preparing
to find and land their next role.
If you're a director, vice-president or C-suite executive,
and you're ready to explore new opportunities,
please go to goodfitcareers.com
to apply for a free consultation.
(24:35):
I also occasionally send a newsletter,
which includes stories from professionals
who have found their fit, strategies and insights
that might be helpful in your job search,
and content that I found particularly useful
or interesting.
If you'd like to learn more,
check out goodfitcareers.com and follow me on LinkedIn.
Now, back to the conversation.
If someone's gonna be jumping into this field,
(24:56):
if they're gonna commit to saying,
I wanna be a conversation designer,
should they theoretically be prepared
to build a conversation or to write some prompts
or to, is this a whiteboarding sort of activity?
Or how do you think about being able
to just kick the tires?
- I would say expect to have to do a test task of some kind.
(25:16):
So some kind of project.
Having done several of these, how bad do you want it?
This is just, this is me being me.
I am extra, I wanna look buttoned up,
I wanna look like a professional professional.
So I spend time on them.
(25:39):
I don't spend like 40 hours on them,
but I don't throw something together in two hours
and walk in the door at Google and say, here you go.
Like it's gotta be good enough for Google
to want to put their name on it
and show it to one of their clients.
Think about it like that.
That's how I would advise to think about it.
(26:00):
- And what is your perspective on good enough?
Like let's say that you're moving
into a new interview process
or you're building one of these processes
to hire for your team in the future.
And you put together an example task
that you want someone to work their way through,
design an interaction, design a conversation.
Is there a way that they can understand
(26:20):
through your perspective what good enough looks like
and when to say, okay, I've spent however many hours on this,
I can make peace with this
and like submit this and move forward?
- I don't think it's a defensive posture
or a defend your life.
I don't know if I said, like when I said that before,
(26:41):
if I was saying it too strong.
I was trying to get across like the way I pursue it,
which is like, that was me talking to myself.
Like you need to be ready, ready, ready.
I would underline, like think about it.
If one of these huge companies,
they're gonna put their brand logo on it.
Like it's gotta be that good
(27:01):
or they've gotta see that it could be that good
if powered by all the resources
they're gonna be able to put on it.
So it's gotta kind of convey that trajectory.
I think what it is is it's gotta convey a point of view
and you've gotta be able to articulate
like what customer or business problem you're solving,
maybe what design problem you're solving.
(27:22):
Sometimes that's interesting too.
- What do you mean by design problem?
What does that mean in this context?
- Trying to see if I can think of one in the,
it's kind of hard to think of one in the abstract.
A customer problem would be like,
I wanna pay my phone bill.
And the conversation is I wanna pay my phone bill.
- What if we took our conversation today?
So we're gonna talk for 45 minutes or so.
If I wanted to be able to say,
(27:44):
hey, can you help me extract the questions that were asked
and then help me build just kind of a story arc,
a narrative to say, here's where we started,
here was the peak of the show and here's where we closed.
How would you think about building
architecting that conversation?
- It's a little harder because we've met
prior to talking today.
- Sure.
(28:04):
- So we kind of knew,
we kind of knew what we were gonna talk about,
but it's a little more open-ended.
Like what conversation design,
it generally is some form of a service interaction.
For a bank, it's like pay the balance on your credit card
or pay your credit card bill.
I don't know if you can do this yet on them,
(28:25):
but like I'm going on a vacation,
order currency for my trip to Europe.
- Sure.
- So I have cash when I get there.
Those kinds of like transactional tasks
that they would have to pay a human representative
of some kind or pay for a retail location for you to do.
Not only do you not have to call a 1-800 number
(28:45):
or use an app, you can talk to a robot and get it done.
- So imagine maybe we're like an insurance company.
This is an example actually from one of the people
that I'm working with right now.
They built the chat bot
or they're implementing this off the shelf chat bot.
And one of the things that it kept getting stuck on
was people would engage with the chat bot
and ask what's my deductible?
(29:06):
And instead of actually going into their file
and being able to say that your deductible is $100,
it would come back with the definition of a deductible
and it would drive everybody nuts
who was actually trying to figure out what is my deductible.
How do you think about, I don't know,
debugging that conversation or refining that?
- On the surface of it,
it's either a training data problem or a software problem,
(29:28):
depending on what tool they're using
because NLU based systems are very boxes and lines.
Like imagine you're like-
- And that's natural language understanding,
is that right? - Understanding, yeah.
Like your legacy systems, which I guess will soon be legacy.
So imagine like a boxes and lines flow,
like a Visio flow or there are a bunch of those tools.
(29:50):
So somebody asks, what's a deductible?
That's an arrow from there to a point in the flow.
That point in the flow needs to know,
or should know to go to the backend
and look for the deductible variable
and get $100 and bring it back.
So that when you as a conversation designer
are writing the answer,
(30:12):
you're writing a sentence that essentially is,
your deductible is blank.
And when it's rendered by the IVR,
it reads your deductible is $100
because it got a hundred back from the backend.
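A minimal sketch of that flow in Python follows; the backend lookup, member ID, and values are stand-ins, not a real policy system.

```python
# "What's my deductible?" should route to a node that fetches the value
# and fills it into the designer-written sentence, instead of the node
# that returns the dictionary definition of a deductible.

def fetch_from_backend(member_id: str, variable: str) -> str:
    """Stand-in for the real policy-system lookup."""
    fake_policy_db = {"12345": {"deductible": "$100"}}
    return fake_policy_db[member_id][variable]

# The designer writes the sentence with a blank to be filled at runtime.
PROMPT = "Your deductible is {deductible}."

def handle_deductible_question(member_id: str) -> str:
    value = fetch_from_backend(member_id, "deductible")
    return PROMPT.format(deductible=value)

print(handle_deductible_question("12345"))  # -> "Your deductible is $100."
```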
- Anything else you wanna share
in terms of the mastery of the art
of conversation architecture and conversation design
before we move on?
(30:32):
- Conversation design is something that we all do
all day, all the time.
Imagine going to like your favorite coffee shop
and you talk to the person at the cash register,
like that's a conversation design.
Like you know how to have that conversation.
The person at the cash register
knows how to have that conversation.
That conversation has been designed
by like the social construct of the place you're in,
(30:55):
the activity you're performing.
So we're either like actors in, or participants,
participating in conversations
that are designed for us all the time.
We just don't perceive them that way.
As a discipline in technology, it's new,
but as a form of attaining goods and services,
(31:16):
it's, let's say, been around for millennia.
- Sure, as long as language has interacted
or has been a thing.
If I can pry just a little bit more,
we, the humans are trained by that same social contract
that you were describing.
We've mastered these interactions.
We've had a few of our own awkward interactions.
I'm sure, get up to order a cup of coffee
(31:37):
and totally just failed terribly.
Are there things that machines
or the models that we're talking about now
just like don't get it
or seem to just have a really hard time
understanding the social norm of,
hey, I'd like a cup of coffee.
Okay, that'll be a dollar, right?
Like they just fail on?
- I mean, if they're more like legacy systems,
(31:58):
again, ones that are built not on emerging technologies.
When you get to like extra hot, no whip,
all these customizations,
it gets down to like how well designed and coded is it
to handle potential variation
on something as personal as a coffee order.
- So when you're thinking about the kind of
(32:21):
making the complicated art of conversation design
a little bit more simple, a little bit more common sense,
what are your thoughts?
- It's interesting because,
I kind of joke around with people that like,
we always make it like way harder
the way we like contort ourselves into pretzels
to ask an assistant, like a virtual assistant,
(32:42):
like what time is it?
We overthink it, we do it with people too.
Whereas like, I kind of suspect that
even if we are like Regina Georging it
towards some other person,
like we actually are pretty forgiving toward one another
with like chatbots or any kind of virtual assistant.
Like I know from like some research we did,
like I was a part of like a long time ago,
(33:05):
like when I first started doing conversation design,
like we noticed this like behavior of like,
we called it like key wording.
So this is like 2017, something like that.
Like way, where people like yell at it,
they're like, like they're talking to someone
who doesn't speak English.
(laughing)
(33:26):
Right?
And so, because they're not sure if they're being understood,
like again, they're contorting themselves
to these like patterns or whatever.
But like, I wonder like when they have like chatbots
or any kind of virtual assistant
that's more fluent or more fluid,
like how will people kind of natively talk,
(33:49):
especially like in voice?
I think chat or typing is completely its own thing.
And I'm sure the OpenAIs of the world
and the people who have been GA
with those kinds of products have data,
but I'm really interested in how people will speak.
I think speech is really interesting.
(34:11):
- Sure, especially when, like I think about,
I don't know, I feel like the mom analogy is an easy one.
When I think about my mom interacting with Siri or Alexa,
there's like all these like errors in translation.
She gets so frustrated and sometimes she'll try
to make it so simple or try to overly manipulate
what she's saying or asking,
that it makes it very hard for the bot
(34:32):
on the other side to actually get it.
I'm also so curious, once we feel that level of trust
or confidence that like, okay, it's gonna get it,
whether I'm speaking quickly or slowly,
or I'm being obtuse or if I'm doing a good job
of very specifically describing what I need,
I'm really curious if we're gonna get more comfortable
and laid back, if we're gonna start to get,
(34:53):
start to use more advanced, like prompting techniques
in the way that we speak naturally, like what do you think?
- Yeah, I mean, my mom also, keywords,
but I think she's just like lazy.
I don't wanna say lazy, my mom is not lazy,
but my mom is like, well, just trying to say
the fewest words possible, because she's got,
she's watching Lifetime and whatever else.
(35:16):
She's just like timer 10 minutes, like she's retired.
She's living her best life.
I don't know, like, I don't know how like,
'cause I know like the prompting that I do is like,
I do like in my own time,
I do mostly like image prompting these days.
I'm in an AI art class with this professor
(35:36):
from the Maine College of Art that a friend of mine
hooked me up with, so I'm doing a lot of that.
So a lot of like, I'm not gonna,
like I think like a Midjourney prompt,
I'm never gonna say "--v 6.0 --s",
whatever you have to say there.
I don't know like how much like verbal prompting,
(35:58):
I think will always have to be pretty simple,
just because you're never gonna be able to speak
the syntax of a complex prompt.
- Do you think any of these models
are gonna get like personality, a little bit of sass,
a little bit of like, come on, bro,
just tell me what you want.
- ChatGPT has custom instructions that you can play with.
(36:19):
I told mine to like give it a little bit of zhuzh
to make it a little bit like more,
trying to make it sound a little bit more like me honestly.
- Sure, I don't even know how to spell zhuzh.
- I didn't either, it took me like a week to figure it out.
I finally did, I saw, I don't know how I did it,
(36:41):
but I was like, oh, and then I sent those instructions
to a friend who wanted to, so I just screenshotted
like where it was.
And my friend was like, I never knew how to spell that word.
I was like, I didn't either.
(laughing)
Yeah, but you can do that.
And I suggest for anybody who wants to see what agents
are like with personality, go to ChatGPT
(37:02):
if you have an account and play with that feature.
I don't know like how good it is,
like I more or less like told it,
you can be a little bit funny, but don't be like too much.
And I told it to give me deep cuts on certain like topics
where I think I already, like if I ask it about,
like I read a lot, so if I ask about books,
(37:22):
like can you give me like, I'm like, I already know that,
like give me the deep cuts, skip to the deep cuts.
For a while I said the shortest possible version
of an answer 'cause they were really long,
but then it got to like where every answer
was seven words long and I was like, okay, like.
- You mentioned you were in an AI art class,
(37:42):
what is that like?
- It's super fun.
It's like my personal learning and development,
a friend of mine, a colleague, took a class with this guy
at the Maine College of Art and then told me about it
and was trying to do something on her own afterward
and I asked like, can I do that too?
So we get together like once every couple of weeks,
(38:05):
go over industry trends.
I spend spare time trying to make things
and I kind of will find, I guess I treat it
like you would treat like kind of like a,
like a user research testing project,
like I'll find like a thread.
It's like one week I worked on like Narcissus pictures,
like I don't know if everyone knows that story,
(38:27):
but Narcissus was the figure in Greek mythology
who looked into a river, thought he was so beautiful,
fell in love with himself and turned into a flower.
And the reason I picked that story is because
like in terms of a painting,
it's always a dude looking into a pond.
It always looks the same.
Like, and then you can play with all the variables
(38:49):
around that to see all the different things
that Midjourney or DALL-E 3 will do
to produce different kinds of painting.
So I do a lot of stuff like that
and then I put it in like a notion doc
and then share out what I've learned,
prompting technique.
- What have you learned about prompting
when it comes to images, not just conversation?
(39:09):
- Personally, I always try to get it.
I always try to get these services
to break their own content policies.
- Ooh, interesting, okay.
How would you even begin to patch those kinds of holes?
Let's say you were on that team.
Let's say you're at mid journey or wherever it is.
How do you start to combat against?
- For in an art space, I don't think I would.
(39:31):
So, well, who's the user for like DALL-E,
it was DALL-E 3 that I use.
Who's the user?
If you think it's like a graphic artist at some company,
then that, to me, is on the company to set their policy.
You're making, let's say it's like Airbnb or something.
(39:52):
Like you're making like pictures for people
to hang in their, the homes they're Airbnbing.
Like you wanna set a policy that is,
those are gonna be like whatever they need to be.
If you're opening that tool for artists,
it needs to be like whatever artists need to use to create.
I don't think there should be a policy.
(40:14):
I think that there needs to be,
we need to as a society or as a legal framework,
like put some consequences on misuse of AI output.
It should cost you if you put a deep fake out
that causes somebody harm.
- Sure, oh yeah.
(40:35):
And I mean, I think about the Snapchat filters
of the future here where it's like, look, this is my style.
Let's say that I'm an iconic artist, right?
Like how do you even begin to say, oh, this is actually me.
This is my art, my style versus I'm stealing
from someone else.
- I don't know, Picasso said all,
(40:56):
I don't know the quote exactly, all great artists steal.
- To a certain, I mean, to a certain degree
and that same concept of like, I'm a writer,
I read a book, I read these classic pieces of art
or these classic literature.
How would it not influence your own training data
and influence a little bit about what you put out there?
- It always does.
(41:17):
I mean, I have done some creative writing.
I'm theoretically writing a novel right now.
Say theoretically, 'cause I have an idea.
- Is Chad Bot writing the novel?
- I wish Chad Bot could be writing it for me
while we're talking right now, actually.
(laughs)
- Right, gotta work on that conversation.
- If I were up to date on my other goal for the year,
(41:39):
which was to learn Python, maybe that would be happening.
(laughs)
- Feel like Chad's GPT is so good at Python.
Do you think that you're really gonna need to learn
how to Python effectively?
- It's early enough days that I feel like,
like all of these tech, 'cause like in my,
(42:00):
like I said, I was saying, this is all of the stuff
I do in my free time as I sit here at my computer
and play with all the other toys that are out there.
It's like with this art stuff,
like I feel like the reason my output is good
is because I do know a little bit about art
and art history, like I know how to prompt it,
but I feel like anybody can still get something
(42:20):
that they'll like, but for somebody like me,
I'm going for like a little more like precision.
So, you know, knowledge helps.
I don't know, like I feel like--
- The general awareness.
- Yeah, I feel like maybe you need to have the computer science
background to know how to set up like the environments
and like how to execute code and like what all the like
(42:43):
code pieces I would, you know, like what kind of like,
how many scripts, like what scripts would I need?
And I feel like maybe even just the foundation,
you know, we'll see how it goes with foundations.
Maybe if I got through like foundations one, two, and three,
then after that I could start having some AI
do some of it for me.
(43:05):
I had a friend who became my friend because I took a lot
of creative writing workshops with her.
She was my professor, teacher,
but one of the things she suggested was like,
sometimes it's fun to like put your writing
into Google Translate and like translate it to Japanese
and then to Irish and then to, you know,
(43:26):
do all these transformations with it
and see what comes out.
Sometimes like garble it up and see what it comes out as.
And like, I feel like there's like ChatGPT and others
are just like tools like that to really kind of like play
with your own creative output and see what you can get.
I mean, a piece of prose doesn't have to be what it's,
(43:50):
you know, we have the tools now to really reinvent
these art forms, you know, it could be totally different.
- Interesting.
Is there like an art project or a prompt or a, I don't know,
like in terms of the class that you're taking right now
or an idea you have for the creative output from the class,
is there like a project that you could suggest
(44:11):
to anybody out there who might be, you know,
playing around with DALL-E 3 or Midjourney or whatever it is?
- I feel like Midjourney 6, to me,
it feels like they're onto something.
It seems to have higher fidelity to the text.
Like one of the things that I've been really interested in
(44:32):
again, 'cause I, you know, I start with writing
and then images are like newer to me
is also for like Instagram sometimes,
like when I started first, 'cause I'm old,
I started Instagram like when I was 37 or whatever, like,
oh, and then a friend told me like,
if you're ever gonna publish anything,
you need to have a presence because like post things
(44:55):
on the odd chance that I ever publish something
and need to promote things.
But I was like, I do, I read and like,
how exciting is that to like look at like, oh,
here's a book, like.
- Sure, sure.
- But I was like, it would be cool
and I've always wanted to do this.
It's like, I could take like a passage and like illustrate it
(45:15):
and by hand, that's really tedious.
But if I could say, here, DALL-E 3, Midjourney,
here's a passage I liked, image-ify it.
I did it last night when my husband and I went to see
a 4K restoration of the film "Alien".
- Ooh, I bet that was awesome.
(45:37):
- It was awesome, but I gave it the,
we've seen the thing on social media where people say,
alien is a, I'll see if I can get it right.
"Alien" is a film where no one listens to the smart woman.
Then they all die.
- Classic story.
- Except the smart woman,
the smart woman gets away with her cat.
(45:57):
And it gave me, this is Midjourney 6.
It gave me a woman in kind of an olive jacket
with dark brown curly hair.
She looked like she was like 12, which I had a problem with.
Not quite "Scorny Weaver", but close.
(46:17):
And a cat.
And the word alien on it.
It was basically the film poster.
I was like, oh, there we are.
- So if anybody wants to join Chad's team,
pick your favorite passage out of literature,
write the prompt for Dolly,
we'll try to get you an interview.
(46:38):
- Yeah, I like bonkers and crazy.
And I think all of this should be fun.
We should feel a little like mad scientists.
And enjoy it because this is the time to do it
before it gets really professionalized.
I don't think we're there yet.
And I think it'll be a while, so have fun.
(47:00):
- All right, so thinking about
if we're gonna give any advice or share any insights
for folks who might be aspiring in the field
or perhaps even going back and giving advice
to your younger self, what do you think?
What would you share?
- I don't know because I feel like
I found the right thing professionally.
At least in the places and spaces that I was,
I don't know if it existed until I found it.
(47:23):
So on one level, I don't know.
But on the other, I would say,
don't take chances willy-nilly.
I think the way that I tell my story sometimes,
you know, I kind of,
'cause I like to joke around a little bit,
I tell it, well, I kind of did this and then I tried this.
I didn't take any of those changes, of course, lightly.
(47:44):
I thought about them.
You and I worked together through some of those transitions.
- Oh yeah. - But I wasn't afraid of them.
I just thought about them before I did it.
And thought I was gonna put my name, my reputation,
not my entire career because I had skills
(48:04):
and could do other things.
I wasn't afraid of failing,
but I thought about it before I did it.
But I have a niece who's,
she's, is she 19 or about to,
she's either 19 or about to turn 19, freshman in college.
I told her, I was like,
I wish the thing that they were teaching y'all
in some kind of life skills course is
you will get laid off at some point.
(48:26):
Probably. - Oh yeah.
- Probability is that will happen to you.
I was like, it happened to me.
It's not fun. It sucks.
You will recover.
But I had a career counselor,
or they gave us career counseling,
and I met with this woman and she was like,
really like sat down with me and helped me like think about
like how to look at,
(48:48):
'cause I was telling her,
I was like, I wanna stretch in roles.
And sometimes I'm like,
nervous about applying to things
if I don't 100% fulfill the job requirements.
And she was like, well, you don't have to.
She's like, if you're close, apply.
And she's like, and sometimes you can determine
like what skills you need to either acquire
(49:10):
or how to like talk around or actually like think about ways
that you do have those skills.
So she showed me like a way to like go like,
kind of old school,
but to break down a job req and put it in a table
and kind of in a corollary column,
kind of map out like your experiences,
their skills that map to those and see which ones are blank.
(49:32):
And that's your kind of go-gets or things to think about.
I kind of do that more mentally.
I think it's time consuming to do like that kind of
table approach and do that much writing.
But yeah, like, so that I'm that deliberate
in every choice that I make,
but I'm not going to not take a risk
(49:52):
because technology changes, the times change.
I love the quote from Martha Stewart
in her recent MasterClass commercial,
which is, when you're through changing, you're through.
I think I'm adopting that as a personal mantra.
- Change is a good constant.
If somebody were to try to join your team,
(50:12):
you know, if I have the aptitude,
the attitude and the appetite,
but I don't want to just throw that in the summary there,
is there anything that is a good clue
or an indicator for you when reviewing
before actually meeting the person?
- Well, I think one thing you'll see,
you know, as we talked about the test task,
that will tell you a lot.
Well, that will feed into like kind of the interview process,
(50:33):
depending on, so for those roles at, like, Google,
the interview process was, sometimes
those were 30 minutes, sometimes longer.
So it'd be like part of it's a presentation,
part of it is Q and A.
So you could tell a lot by the person's like thought process
(50:55):
how they think, what they know, what they don't know,
but just how they present things
and how they address the questions that are put to them.
And then after that, you're giving them,
you're putting them through one-on-one or panel interviews.
I think you get a lot from the, just the test task.
(51:17):
And then in the one-on-one,
you can really dive into skills
and you can really see how far off are they.
And if it's something that is,
have they ever labeled before?
No.
Do they know how to read?
Yes.
Okay, then they can probably figure that out.
Did they know Dialogflow CX?
Oh wait, that product wasn't publicly available
when we were staffing people.
(51:38):
So did they know another conversation design tool?
Okay.
Sure.
Yeah.
Switching gears here to bring this to a close.
When you're thinking about the future here,
next couple of years, maybe the next decade,
what are you excited about?
Retirement, just kidding.
Not that close.
(laughing)
Come on, Jeff.
I'm not that close.
(52:02):
I honestly have no idea where this goes.
I really don't.
I mean, somebody like cracks AGI tomorrow
and you know, who knows?
I guess specific to my field,
and I guess like generally what I would say is,
I would, my advice would be what I said earlier,
(52:25):
don't fear change, embrace evolution,
learn from an Amazon leadership principle,
learn and be curious always,
always try and learn new things.
Even sometimes learning small things,
you don't have to acquire like a third language,
second language, learn how to program,
(52:46):
just small things sometimes keep you fresher.
That's all you really need to do.
I would say also don't fear automation on the face of it.
My hope for automation is that maybe there's a future
where I don't have to do things
that I just don't like to do.
(laughing)
Sure.
Personally, professionally, all of the above,
(53:08):
like Chadbot does all of that for me.
Chadbot.
(laughing)
And I do the things that are higher value,
connecting with friends, family, colleagues,
more strategic work.
Like I think that that's certainly a possibility.
I think like taste, editing, duration,
(53:30):
all becomes more important.
I think there may be some nuance
in terms of like specialization.
I don't totally know what that means,
but I just think I kind of see some like slicing and dicing
even of existing specializations.
So specific to my field,
I think everyone who understands AI systems,
(53:51):
how they work, how to mediate interactions
between humans and computers using language,
will one million percent have jobs.
We may even have like higher value and more important jobs.
Will we be called conversation designers?
(54:13):
I don't know.
I mean, conversation design
didn't even exist 10 years ago, right?
Or it did in very specific niches
like Nuance and 24/7 where people were designing
call flows for customer service.
- Any other thoughts, reflections, inspiration,
things you'd wanna share with anybody listening?
- I should preface this.
(54:34):
I haven't read the book, but I'm familiar with the concept.
The Daniel Kahneman book, "Thinking Fast and Slow."
That may not be the title.
I would say cultivate your second level thinking,
your slow thinking, your deep thinking.
I don't know if other people do this,
but I sometimes have a fear that people do,
(54:55):
which is like we work however many hours we work a day.
Then we like close our work laptops,
open our personal laptops or phones,
and then are reading like technical articles
or industry happenings,
and you never give your brain a chance to process.
And I think that processing is critical.
So I would say like take up some hobby
that occupies like your hand, like another sense,
(55:18):
like your hands or your eyes.
Like I took up digital drawing during the pandemic,
and I find that I feel like I'm much more creative
and thoughtful because I do something for an hour or two
here and there that just puts me in a different zone.
(55:39):
I mean, some people meditate and they can do that.
Some people get that with exercise.
That's cool.
But something like that, that gives you a little break,
gives you a chance to think on it,
kind of like I said,
just process whatever's happening in your mind,
work or not work on a different way.
- Well, Chad, thank you for coming on today.
It was really great to chat with you.
I appreciate it. - Thank you.
(55:59):
Great as always.
- Our next episode is with Nate Bush,
Product Manager at Dell Technologies.
- You can't excel at the role
without really being passionate about what you're building
and who you're building it for.
- If you enjoyed this episode,
make sure to subscribe for new episodes,
leave a review and tell a friend.
Good Fit Careers is hosted by me, Ryan Dickerson,
(56:22):
and is produced and edited by Melo-Vox Productions.
Marketing is by Storyangled
and our theme music is by Surftronica
with additional music from Andrew Espronceda.
I'd like to express my gratitude to all of our guests
for sharing their time, stories and perspectives with us.
And finally, thank you to all of our listeners.
If you have any recommendations on future guests,
(56:44):
questions or comments,
please send us an email at hello@goodfitcareers.com.
(upbeat music)