Episode Transcript
Available transcripts are automatically generated. Complete accuracy is not guaranteed.
Sarah (00:00):
Welcome back to Ag Geek Speak, and this week we have a very interesting topic and a great guest to help us talk about this topic.
First of all, I'm going to introduce the topic and then we'll introduce the guest, because he's been here before.
So the topic for this week is actually going to be about the human interactions with AI
(00:28):
and precision agriculture.
So, in other words, what part of the decision-making process should humans be responsible for, as opposed to AI helping us be more efficient with making these decisions, and what can AI do effectively?
So, to help us with this conversation, of course we have our very own Jodi Boe, who is on every episode of Ag Geek Speak, and then, of course, our guest
(00:49):
for this topic conversation is Travis Yeik.
Travis, do you want to introduce yourself and make sure that we all know what you sort of do with AI here at GK Technology?
Travis Yeik (01:02):
Yeah, thanks, Sarah.
I'm excited to be here and talk with you guys on the subject.
I have a background. I grew up on a farm and ranch in southeastern Wyoming, and we did a lot of irrigation and we raised beef cattle, as well as some dairy cattle for breeding.
(01:23):
And from there I went to the University of Wyoming, and I have a degree in geography, or remote sensing and GIS, and a minor in soil science, and then went on to the University of Nebraska-Lincoln for a graduate degree to get a better education,
(01:45):
or further education, in remote sensing, and specifically dealing with remote sensing in agriculture.
And from there I worked for Valley Irrigation for a season or so, and I was their agronomist for variable rate
(02:07):
irrigation.
And then I joined GK Technology back in 2014, and I've been with them for 11 years, and I developed the ADMS software that is widely used by farmers and consultants. And so I guess one of my main goals is to develop software that's easy
(02:29):
to use and intuitive and just helpful throughout precision agriculture.
Sarah (02:34):
Well, that's great.
Thank you for that introduction.
So I think, you know, one of the things that we talk about in precision agriculture a lot is that interaction, or maybe the complicated relationship, that exists between the practical agriculture people, the agronomists and farmers actually in the
(02:55):
field, and the computer programmers that are behind the software, trying to get the software to work with that practicality.
But it's interesting that in your background, you actually have that agricultural background and you've been out in a field, so you understand some of the nuances and the
(03:17):
unpredictability that can come with just common agriculture and farming practices every day.
Travis Yeik (03:23):
Yeah, for sure.
I think there's probably not a whole lot of people that go into the tech side, or the coding side, I guess, after doing some farming or having that background, to relate to people on that level.
And I think it's super important, though, rather than
(03:47):
having somebody from another country developing software without being able to interact and provide that support, to get that feedback directly from the people who use the software, really.
Sarah (04:01):
And I think, as we're prepping this conversation, to that same point, you know, okay.
So Jodi and I, we work on the sales side of GK Technology, and Travis is working on the product development, the computer programming side of things.
So Jodi and I are working with agronomists, we're writing maps,
(04:24):
we're writing prescriptions, we are really on that practical side of making things work, out in the meetings where we're trying to understand for sure how things are working on the farming side of things.
How do you think about this and those sorts of things, to make
(05:11):
sure that what you're programming is really, you know, practical going into things?
And I do think it's interesting, because there's been times where I feel like maybe I've asked some questions about how things work for you. I probably don't ask enough questions like that, though.
Travis Yeik (05:22):
It is totally like a symbiotic relationship, right? I mean, we have to understand each other to be able to move toward that goal.
Yeah, yeah.
Sarah (05:31):
I honestly think that's one of the greatest strengths of our company, to be real about it.
But, you know, we wanted to talk really about AI, you know, artificial intelligence, because you have done some programming with artificial intelligence for some products that hopefully we're going to be incorporating into our software and our
(05:52):
daily practice.
So, Travis, we hear AI all the time, and we're not in the cattle industry, so really, what is AI when it comes to precision agriculture?
Travis Yeik (06:18):
Yeah, it used to be artificial insemination, right? That's how I knew it growing up.
Jodi (06:26):
Yeah, I've talked to Dad about it, and I wonder if he thinks, which one is it? Yeah, Travis, he's working for a software company, but he keeps talking about AI, like, what the heck is going on?
Travis Yeik (06:37):
No, yeah.
So artificial intelligence, in its very simplest terms, is being able to mimic what humans can do to achieve processes, and it was created back in, like, the 1950s, which is really hard to believe, that it was that long ago.
Sarah (06:56):
I had no idea.
Travis Yeik (06:57):
Yeah, yeah, and this term was coined then. There were some models that were called Markov decision processes and Bellman equations, and these were used back in, like, the 1960s and 70s. Right, you watch those old-time videos, I don't know if you've seen them, probably, where they have, oh, what is it, like WarGames, that was one that was popular.
(07:18):
I think it came out in the 80s. But, yeah, where you have this AI that's playing games and it's taking over the world, and that's, you know. So they had this stuff clear back when, and it really isn't until recently, 2014-ish or so, when it has become popular and it's grown tremendously, partially because of the
(07:40):
equations themselves. There was a big influence, or big kickoff, with that, with Google developing some open-source AI software and researchers being able to use this software in universities, and then the other part is now that we have computers that are able to handle the processing, such as the GPUs needed for
(08:05):
processing all the data, and the memory and storage that way, and so that's a big part of why it's grown here within the last, what, even 10 years or so.
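For anyone curious about the math behind the Markov decision processes and Bellman equations Travis mentions, one standard textbook form of the Bellman optimality equation is sketched below; this is general notation, not anything specific to the episode or to GK Technology's software.

```latex
% Bellman optimality equation for a Markov decision process:
% the value of a state s is the best available immediate reward plus the
% discounted value of the next state, averaged over transition probabilities.
V^{*}(s) = \max_{a \in \mathcal{A}} \left[ R(s, a)
         + \gamma \sum_{s'} P(s' \mid s, a)\, V^{*}(s') \right]
```

The same recursive idea, choosing actions to maximize expected future reward, underlies the reinforcement-learning systems discussed later in the conversation.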
Jodi (08:15):
So what I'm hearing is, like, this concept of artificial intelligence, what it is, is basically taking data, putting it into an equation and kind of predicting what would happen, or mimicking how humans would take in information and then, quote unquote, make a decision. And up until now, we've been limited in terms of what
(08:38):
those equations are themselves. So Google has helped to bring out a better one that can be worked on by researchers, and we also have better processing, and these two things coming together have made it so that when we turn on the TV or look at the news, artificial intelligence is something that we see. Almost every time we look at something in the news, it's
(08:59):
there. So that's huge, and I feel like this probably isn't going to go away.
Travis Yeik (09:05):
Yeah, no, it's interesting. I was looking, there's a very popular equation, and it's with reinforcement learning, and it was developed by DeepMind, Google, back in 2013, 2014 or so, and it was one of the first ones that could truly
(09:25):
play games such as Atari, back in the 1980s, the Atari gaming system, and this equation was able to, after it learned these games, play as well as a human can. And then you fast forward now, seven years
(09:47):
later or so, and since then they've had other equations come out for reinforcement learning. Some of them are called AlphaGo, you might've heard that one, that was popular here about five or six years ago, being able to play chess, or the game Go, which is a Chinese game, and
(10:08):
beat the top Chinese player in Go, which is just huge, because this game has hundreds of different actions or possibilities that you can take within this game to complete your goal
(10:29):
there. And since then, now we have recent ones, such as equations called, again, DreamerV3, or EfficientZero, or MuZero, and these ones are now 500 times better than that equation that came out in 2014, right, and these ones came out in 2021. So that's seven years later, and we're already 500 times better.
(10:50):
So what's it going to be in another seven, 10 years? We're going to be, you know, that much more improved again. So this thing, I like to compare it maybe to when we had computers, the internet, come out in, what, 93, 94? I remember getting the first email address back then and being able to surf the net, and since then, you know,
(11:12):
computers are just in everybody's life, no matter what you do, like if you're a secretary or a contractor or a farmer even, right, everybody uses them now. And yeah, it could be that way with AI, where it's just going to be so necessary for our jobs, and yeah, so it's interesting how things are progressing and how we're at the
(11:35):
beginning of this revolution, probably, is what it's going to be, and where we're going to be here in the future.
Jodi (11:46):
So I've got a question of clarification. When you mention equations, like you mentioned the AlphaGo equation, would even, like, ChatGPT be considered an equation?
Travis Yeik (11:53):
Yeah, so they use several different ones. In its broadest term, ChatGPT is called an LLM, which is a large language model, but also, in the back end of those, there is some reinforcement learning. So an LLM, it uses a variety of, I guess, equations.
(12:19):
It's hard to explain.
Jodi (12:20):
Probably don't put this on air, but I'm just thinking out loud here, because I guess the point I want to make is that it's more than just, like, a Y equals MX plus B, like a linear equation that we might have all heard of or think of something about. It's a lot.
(12:40):
It's like a big giant Excel, like, if-then statement, is that kind of how to think about it? Like, how do we think about an equation? Is it as simple as thinking about having an input, and then that input goes through the model and we get out what happens next? Like, how do we think about these things compared to what we assume is an equation?
Travis Yeik (12:58):
Yeah, so nowadays, when we think about AI, we really are thinking about machine learning, and that breaks down into deep learning or into reinforcement learning. In deep learning there's training, whether it's
(13:19):
supervised training or unsupervised training, and a lot of that is used for image analysis or for these large language models that ChatGPT uses, and it can then generate responses, right, or generate images in some sense. And so, compared to that, reinforcement learning is more of your
(13:42):
input and output, where we have a state or an observation that we're looking at, and based on this observation we can have different actions, and it takes that observation and says, hey, based on this, let's do this action, and it is able to learn, just similar to what babies or humans or dogs can do.
(14:02):
Right, if you want to tell a dog to sit, you train it and you give it a treat as a reward, and after a while it says, oh, hey, if I come to you and I sit down, I'm going to get a treat, and that's kind of what reinforcement learning is. And so, like ChatGPT, they use both of those in some sense,
(14:23):
yeah, so they're a large language model, and in the background they kind of use some reinforcement algorithms to help progress things and learn as your conversation goes on with it.
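To make that observation-action-reward loop a little more concrete, here is a minimal sketch of tabular Q-learning in Python. The "sit for a treat" environment, the reward values, and the learning parameters are made-up illustrations of the idea Travis describes, not anything from GK Technology's ADMS software or from how ChatGPT is actually trained.

```python
import random
from collections import defaultdict

# A toy "dog training" environment: the agent observes a cue (the state)
# and chooses an action; sitting down on the "sit" cue earns a treat.
CUES = ["sit", "stay"]
ACTIONS = ["sit_down", "wander_off"]

def reward(cue, action):
    # +1 treat for the right response, 0 otherwise (illustrative values only).
    return 1.0 if (cue == "sit" and action == "sit_down") else 0.0

# Q-table: learned estimate of reward for each (observation, action) pair.
q = defaultdict(float)
alpha, epsilon = 0.1, 0.2  # learning rate and exploration rate (assumed values)

for episode in range(5000):
    cue = random.choice(CUES)                        # the observation
    if random.random() < epsilon:                    # sometimes explore
        action = random.choice(ACTIONS)
    else:                                            # otherwise exploit what was learned
        action = max(ACTIONS, key=lambda a: q[(cue, a)])
    r = reward(cue, action)                          # the "treat"
    # One-step update toward the observed reward (no next state in this toy case).
    q[(cue, action)] += alpha * (r - q[(cue, action)])

# After training, the learned best action for the "sit" cue should be "sit_down".
print(max(ACTIONS, key=lambda a: q[("sit", a)]))
```

In a full reinforcement-learning system the table would be replaced by a neural network and the update would also account for the value of the next state, but the loop, observe, act, get rewarded, adjust, is the same idea as the dog and the treat.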
Sarah (14:36):
So what kind of treats do
computers eat?
Travis Yeik (14:40):
That is a good
question.
That's a tough question.
Jodi (15:01):
It's really just a negative one or a one, right? A negative one would be a negative. My computer, like, freezes, and now I'm just going to scream that negative one at it and hope that it never does that again.
Travis Yeik (15:11):
Yeah, but I think about this, like, on that. So this is probably out of context and you can totally cut this out, but based on that, though, we keep hearing things of, oh, you know, AI is going to take over the world, it's going to take over farming, it's going to hurt humans in the future, right, based on what it does.
(15:33):
To me, it's all based on what we teach it. If we have, say, an algorithm that says I want to make the environment safe, clean, the best thing possible, it's going to do that, and it's going to get rewarded to do that. So all of a sudden it says, okay, well, humans are causing the environment to degrade, or polluting it, or whatever it is.
(15:56):
Well, it's going to learn that solution to fix that, right, but it also has to be trained then, like, how does it get rid of humans? Does it get trained to kill them? Does it get trained to poison them? And so then it has to learn that as well, which are, you know, I don't know. So, to me, I don't see AI at that level where it's learning
(16:21):
several different tasks that are completely unrelated. Really, we're training on one single task, and to be able to kill humans, to me it's like intuition, which it doesn't have. It's just training based on what we want to teach it.
Sarah (16:42):
This is a really great conversation right here. This is really interesting and very pertinent to agriculture right now, because one of the things that I am hearing in the countryside is fear over not being able to drive a tractor
(17:03):
anymore. My ability to go out to the field and spend time in the field, which is the part of farming that I love, is going to be taken away from me because of AI. And so, really, what you're saying, Travis, and correct me if I'm wrong, is that unless we train it to do that, it's not going to be that way.
Travis Yeik (17:25):
Yeah, totally.
Jodi (17:26):
And adding on to that too, like, from what I think I just heard from you, Travis, is that these models need some sort of definition of what winning is and what their goal is. They need a human, you know, this human interaction part of AI. They need to know what they're moving towards, right? Like with Atari, it's really easy to see, like, or a video game, you have
(17:47):
an end goal most of the time, unless it's like Animal Crossing or some open world, but you're still rewarded in that game setting of, like, what to do next and how to move and how to operate. Yeah, so it's like this tiny little model they've learned.
Travis Yeik (17:59):
They've learned to just play that one game right now. You can't just turn around and have it intuitively know how to tie a shoe, right? They're completely unrelated things, and so they would have to be taught on this task and that task, and then to put those tasks together, whether they're related or not, that's like truly human interaction, like, you know,
(18:22):
what we do in our brains and deductive reasoning, and to do that with an AI, boy, I feel like we're generations away from that still. That's a really good point. Yeah, this is a great conversation. Yeah, you were talking, though, and I think when you first sent out the invitation for me to talk, I think you sent it
(18:45):
out as, is AI going to take over agriculture?
Jodi (18:50):
With that guy with the
beard?
Travis Yeik (18:51):
Yeah, yeah.
So what's your thoughts?
Is it going to?
Sarah (18:57):
What do we mean by take over agriculture? I mean, that's a great question, like, you know, do we mean that humans will literally not be farming anymore? Because, essentially, if AI were to theoretically, down the line, take over agriculture and we don't need farmers for the entire decision-making process anymore, why do we need farmers?
Jodi (19:22):
That's a good question, and I think it comes back to what we just talked about, right? Like, right now it seems like AI models can be trained to do one thing, but in agriculture there are so many things going on all the time, right? That's why we talk about farmers as wearing all these hats, because they're mechanics, they're expert tractor drivers, they're decision makers, they're agronomists, they're financiers,
(19:44):
they're CEOs. There are all these different things that don't really connect, or they don't really fit into one area. So I feel like, in order to really create a model to replace a farmer, I mean, you're asking a computer genius to build that, to make it, I think, successful enough to rival a farmer.
Travis Yeik (20:04):
Yeah, maybe a better thing to say is, like, what can it do now? And, you know, we're still in the beginning of the AI revolution, and what can it do now, and what is the future of it going to be, and what are we really looking for? Are we looking for it to take over, you know, all human
(20:25):
interaction in farming? I would highly doubt it. I don't know if that's the end goal or not. It would seem to me that there's always going to be that farmer that owns the land. Right, we're not going to have robots, AI, owning land, and so it's going to be a farmer making those important decisions.
Sarah (20:47):
Okay.
So I think it's interesting because, really, the three of us come at this whole angle from trying to use technology and precision agriculture to make good agronomic decisions, right? To try to be the most efficient that we possibly can be with our agronomics, whether that's drainage, whether that's
(21:09):
fertilizer, seed, chemical. How are we going to manage that? But farmers, you know, they are making economic decisions. When do I sell the grain? How do I sell the grain? When do I decide to buy that next piece of land? When do I decide to buy that next piece of equipment? Oh, the equipment broke. How do I fix that equipment?
(21:30):
What is the right? So there's all of these other decisions that also go into being a farmer, and I think that's something that's really important for the industry to remember, you know, because in our little agronomy world, when you think about what a farmer does
(21:52):
every day, to Jodi's point that she brought up earlier, a farmer has to wear all these hats, right? But in our little agronomy world, that's just such one small piece of the puzzle, and even within that it can be so hard to have a technology make good decisions, because we are
(22:13):
dealing with life science for our decisions. You know, it's not just an equation out there that makes plants grow. We have biochemistry in that plant. We've got enzymes in that plant that can make chemistry equations balance that would never be able to balance outside
(22:35):
of that system. We've got slightly different shades of green across different plants that you could never describe, except that they're just naturally a little bit of a different green. So there's all of these things that are occurring. Oh, and by the way, did I mention the weather, which always tends to throw a monkey wrench into a lot of things that
(22:56):
we're doing? So these challenges, from a computer programming standpoint, have just got to be complicated.
Jodi (23:05):
My thought in this, too, is I just don't think there's enough data out there to train a model to do these things. Like, I think about, you know, as a farmer, what I would want AI to help me with on the farm. One thing I would love to get help with is, hey, do I decide to replant or not? But guess what the AI model doesn't have? It doesn't have what my soil temperatures have been.
(23:26):
I don't record that. I should, but I don't have that data. I don't have a yield monitor on my combine, so it's got no yield data about my specific area. But even outside of that, too, it needs data about what canola yields when it's planted on a specific date, that specific hybrid, and we have, you know, research that's done by universities, but that's a
(23:50):
low amount of data, and that would also need to be inside of a model. There needs to be good data, I think, to teach these models. And right now we collect some data in agriculture, and that has been, I think, a critique of precision ag these last 10 years, that there's so much data and not, you know, enough things being decided on that data, because, I mean, who
(24:12):
has the time to do that? Maybe that's where AI helps us, but there's not even enough data, I don't think, that we could build a model to help us make some of these hard decisions that, as farmers, we have to make.
Sarah (24:23):
And, Jodi, to that very point, when you think about replanting canola, that is one small decision, one small agronomic decision, for an entire farming year, and think about how much data is required for just that
(24:44):
one small decision.
Travis Yeik (24:46):
Yes, so you guys both bring up great points. So yeah, like the weather, AI is now actually doing pretty decent with weather. It used to be, here five years ago, it could predict out maybe six hours, right, and now it can actually predict out maybe a day, two days, and in the next 10 years it might do better.
(25:08):
But as you say, that's one system, and so again, kind of like what I said earlier, saying, hey, this can play Atari, but now can it tie your shoe, well, it's the same thing. Well, it can predict weather really well. Well now, how does that relate to the soil nutrients, and how does that relate to disease?
(25:30):
And all these systems come together and they kind of mesh into one big net, and to be able to have an AI say, okay, these are related in some tiny way, and to be able to model that, and now change it just a little bit and put us in North Dakota and now put us in California or wherever else, right, and how the entire system just changes completely again
(25:53):
based on all these different factors. That goes right into Jodi's points. How much data is needed to model that? That would take an enormous amount of resources and time to be able to gather all that data. To say that it could, yeah, it might be able to understand
(26:15):
that, maybe not with our algorithms now, but 10, 20 years, sure, why not, based on, you know, how we're progressing, but you have to get that data to do it. Wow, that's quite the feat to come to.
Sarah (26:31):
So let me ask you this, to this whole point of the conversation. When you think about, like, weather data, you know we just talked about taking that decision model from North Dakota over to California, just based on the weather data alone. Do you think, based on your knowledge, your work in AI,
(26:54):
that we would have to have two separate AI models for those environments to make good decisions work? And I know that's a loaded question.
Travis Yeik (27:05):
That is a loaded question. I'm good at those. I personally don't deal a lot with weather, but from what I know, AI can take in just tons and tons of data and filter through all that and say, hey, this tiny little bit of information here is important. And so when it goes through,
(27:28):
this AI, they go through these neural networks, is what they call them, right, and so it's kind of like the branches on a tree. At the bottom we have that trunk, and then we have a little bit of data that separates out to the different trunks, and then it separates out again and again, and finally we've got a million different nodes that reach out to all the different branches. And so it's kind of like our weather system, right? Like, we have some things, the trunk of the tree, which might be
(27:57):
universal to all the weather or whatever, such as the rotation of the earth or however. But then as we keep going into little branches, each one of these observations changes and changes and changes, until we get different actions with each different observation. So, yeah, AI can take in and process all that weather data into one single algorithm, and that's a lot of data that it has
(28:17):
to process. But now to take all that weather data and also take all of another system's data, such as disease or nutrients, and add that,
(28:38):
and then add another one and another one, that's where I don't think AI has the ability to do stuff like that yet, yet being the keyword. I'm not very good at predicting the future, but that's fair.
Sarah (28:47):
Yeah, and especially for this conversation, because I think there are people out there that want AI to predict the future for us.
Travis Yeik (28:56):
Yeah, and that's important too. Right, we don't want to replace farmers. Even me as a coder, I don't want to replace farmers, they're super, super important, and I don't want to replace jobs. It's a tool, right, that you use, and that's really what it is, to help make decisions, right? And if you can have, I don't know, let's say, 50 different
(29:19):
models, one that predicts each and every little thing, you as a human can put those together and say, okay, I see, based on this information, this might be my best decision for this small, tiny little aspect. And now you can relate that to say, okay, well, based on this, I need to make this financial decision, or this time to plant,
(29:41):
or this amount of nutrients to put in, or whatever it is. And I think putting that all together is a tool to help us make better decisions.
Sarah (29:52):
And let's be real about this. Farmers have to make big decisions every day, and the market is demanding that. The margins that farmers have to work with are so tight. The market is demanding, you know, nutritious food that's affordable, and the farmers have tight margins on the back side, and so
(30:15):
the demand on them to go through and be as efficient as they possibly can, that's there. That's why we see increasing farm sizes, economies of scale. So, you know, farmers are getting over larger acreages at a time, but we're able to variable-rate so we can address those nuances within the fields to make sure
(30:41):
that we're the most efficient, right? That is what the market is demanding. And at the same time, if those farmers are larger, they're dealing with more decisions as well, and more detailed decisions, than they ever have been in the past.
Travis Yeik (30:58):
Yeah, we are inundated with the amount of data coming in from all these monitoring systems, and to be able to make knowledgeable decisions or process all that data coming in, you've got to take time out of your day to do that. And how important that is, you know, financially,
(31:20):
supporting financially on whether that data is, or if we need. Yeah, where am I getting at there? Sorry.
Jodi (31:28):
I think this is really good fodder for a second part of the episode, because what we're talking about is that there is more and more incentive for farmers to be more efficient with their time, and there's a lot of data that could potentially help farmers be more efficient with it. The missing piece is, you know, could AI be that tool that condenses this data down and helps farmers become more
(31:50):
efficient? And I think we could have a really good conversation about what are some ways that would work, and where are some ways that humans still need to be a part of that, because I think there still would need to be. But that's kind of where I think this is all leading to, how does AI help us be more efficient, not replace us as agronomists or farmers, but how does it make our jobs better or
(32:11):
our lives better? So thank you so much for joining us on this first part of our conversation with Travis. This has been fantastic thus far, so please stick around. We'll continue this fantastic conversation in part two, and with that, with GK Technology, we have a map and an app for
(32:33):
that.