
July 25, 2025 62 mins

AI isn’t the future—it’s already transforming freight. In this episode, we chat with Thilo from Levity AI about how tools like ChatGPT and Levity are automating quoting, tracking, and sales insights to boost broker productivity and eliminate soul-crushing tasks.

Support Our Sponsors:
QuikSkope - Get a Free Trial: Click Here
Levity: Click Here
DAT Freight & Analytics - Get 10% off your first year!
DAT Power - Brokers & Carriers: Click Here

Recommended Products: Click Here
Freight Broker Basics Course: Click Here
Join Our Facebook Group: Click Here
Check out all of our content online: Click Here


Episode Transcript

Available transcripts are automatically generated. Complete accuracy is not guaranteed.
Speaker 1 (00:00):
Welcome back everybody.
It's another episode of the Freight360 podcast.
We got a good, special one here today for all of you tech junkies, or AI junkies, like myself and Ben.
We're going to talk some AI and some technological advances.
On today's episode, we're joined by Tilo from Levity.

(00:20):
We've had him on the show in the past a few times, so, Tilo, welcome back, man.
How's, uh, how are things?
You're in Europe right now, right?
Yeah, yeah, I'm in Berlin, Germany, at the moment.
Very nice, very nice.
Nice, uh, summer in Germany.
I've actually never been to Germany, but we're gonna get into what's new with AI, what's new with Levity, you know, how the landscape of, um, the application of AI in

(00:43):
transportation has evolved, and some of the challenges.
But first, for everyone out there, if you're brand new, you've got hundreds, literally over 300, other episodes to check out, as well as over 100 Q&A sessions where we answer your guys' questions in a shorter-form podcast.
We've got educational videos, downloadable content, blogs, the Freight Broker Basics course if you want a full educational

(01:07):
option for you or your team, all on our website, freight360.net.
A lot of content on YouTube.
Share us, leave comments.
You know, reviews, all that good stuff. Ben, real quick in the sports sector: did you watch? Scottie Scheffler, I was going to say, did you watch?

Speaker 3 (01:24):
Absolutely.

Speaker 1 (01:24):
I didn't watch it, but was there like what's the 20
second highlight?

Speaker 3 (01:29):
He kind of ran away with it.
At some point he was up by like eight strokes between number one and number two, first and second place.
I think the closest it got on Sunday was maybe six strokes.
He's just super solid, really steady, plays every hole exactly the same.
I mean, not really a lot of ups or downs.
There were a couple of mishits that I saw on Saturday where he

(01:50):
looked like he was getting alittle frustrated, but literally
he had it pulled back togetherby the next shot, like he's just
very consistent, very solid.
Other highlight is I think itis 1197 days between his first
major and his fourth, and thatis the exact same amount of days
between Tiger's first major andhis fourth.
So literally to the day theyhave the exact same amount of

(02:13):
majors.
I think Tiger might've starteda little earlier career wise.
I think he might've been alittle younger when he won his
first.

Speaker 1 (02:19):
But still super impressive.
I think Scheffler's in his twenties still, though, isn't he? Like 28?

Speaker 3 (02:24):
Love, I think, schaefer's in his 20s still,
though, isn't he like 28?
Oh yeah, he's still young.
Yeah, love, watching him play.
I was also.
You know, I'm a huge fan of.

Speaker 1 (02:35):
Brooks Koepka. Loved watching him miss the cut. He wasn't even there, anywhere to be seen.

Speaker 3 (02:40):
Doesn't seem to be able to show up to a major
anymore.

Speaker 1 (02:44):
Aka the guy it was.
Was it Ireland or?

Speaker 3 (02:46):
Yeah, it was in Ireland, so everyone was pulling for Rory, obviously.
Royal Portrush, I think. I can't, I don't, I've never been there, so, like, conceptually, the course has kind of seemed the same to me.
It wasn't the Old Course.
I think I'd have to look it up exactly, and I kind of should know that, I watched it all weekend, but my mind's blanking. Oh good.

Speaker 1 (03:04):
Anyway, let's, we'll shift to news, so we'll just get right into the AI topic here, because, Ben, I know you've been big on some of the recent releases, and we're going to segue this right into our conversation with Tilo.

Speaker 3 (03:16):
We can segue right in, and I was sending this to you yesterday, Nate. I used this on a project that Tilo and I were discussing yesterday, so it's like a really good segue.

Speaker 1 (03:31):
ChatGPT just released the agent model in ChatGPT, not to be confused with the agent model of freight brokering.

Speaker 3 (03:34):
Correct.
So they've had deep research,which you know, to my
understanding, it definitelytakes longer and I believe the
model like goes and checks itswork against itself and will
like, step by step, to try tofind the hallucinations, come up
with better answers.
And I've listened to interviewswith people in like the like
finance world that are doinglike case studies with like MBA

(03:56):
students for companies andresearch and they're like these
are.
Some of them are as good aswhat we're paying to have done
with consulting companies.
So, like anecdotally, I'veheard really good things about
deep research.
Then the agent model just gotreleased like two days ago, and
then it got released to all ofthe accounts, like pro, premium
and plus.
So I got it yesterday, watcheda YouTube video on somebody

(04:19):
using it, and, like, to explain what it does: basically, a little window will pop up.
You give it a prompt and itwill go out into the web and
just search different websitesto pull information back, check
it against itself, to come backand give you not just what's
based on their training data,which is theoretically
everything on the internet, butlike real time going to places

(04:42):
that you can prompt it to, and,like I was able to do some cool
stuff.
Like I saw someone do a demowhere they're like, hey, find me
, all of the orthodontists inAustin, that's who I want to
market to.
Then I'd like you to compareall of their publicly available
email addresses to a metaphor,maybe to cross-reference them.
Then it said right after thatwrite in a very effective cold

(05:04):
email to each of these people, make it eerily stalkerish, as if I know what is going on in every one of their social media platforms.
It pulled up all the dentists,all their email addresses,
created an Excel sheet and thenwrote an email for everyone.
And it was like hey, I noticedthat you went to school at
Harvard and then you did yourdental training at University of

(05:24):
Penn.
Noticed, you're into kayaking.
Hey, you're an overachieverlike me.
And then it correlatedeverything it knew about the guy
using that model to write avery descriptive, very good hook
to each of the people it foundand their email addresses of
which he just plugged in andsent emails to them.
So those are some really coolthings I saw like literally two

(05:45):
days ago.
I used it yesterday and itstill hallucinates.
It still makes errors, becausethe stuff that I did I could
find errors in both deepresearch and the agent model and
the thing that Tilo and I wereworking on and this is what I'll
segue with because I wantTilo's thoughts is we were

(06:05):
trying to figure out anintegration we're doing for AI
between our TMS, our trackingsystem, and our teams, and we
wanted to be able to pullinformation from our TMS for
updates.
But our TMS doesn't get all of the information from the tracking system.
In the tracking system, their user interface, like when you

(06:28):
go to their, their login, youcan see more things like this
driver's behind.
This driver can't make it.
This driver is on schedule.
Our TMS just shows where he'sat and when he checks in.
So we wanted to pull all thatand then we want to feed that
into like we use G suite tocommunicate with our all of our
you know, carrier, sales reps,track and trace and basically we
wanted to have 100% visibilityinto if this driver is not going

(06:50):
to make the pickup or delivery.
We want to be able to send amessage to our team and just say
hey, MacroPoint, for instance, shows this driver's behind.
You might want to take a lookat it, for example or hey, this
guy's tracking has turned off,maybe you want to give him a
call.
Then we want to either layer insome AI phone calls to maybe
just touch base with the driver,see if it can resolve it, or
kick it over to a chat.

(07:12):
So it was like how can we get100% visibility into the
problems before they occur?
And then Tilo and I had to lookinto the integrations and
basically I think for anyone outthere it's like some
information gets pushed from oneplace to another, some gets
pulled I'll let Tilo explain howand why that's the case,
whether it's an API or a webhookand then some of them basically

(07:32):
the API documentation is just abig sheet that says hey, if you
connect your computer and yoursoftware to our software, we can
send this to you from thesefields, and this is how the
information comes back and forth.
So after we outlined how wewanted to do all this, we had a
bunch of questions after we readit and went okay, what
information gets pushed, whatgets pulled, how often, how

(07:52):
frequently and how can we get itto where we need it and what
needs connected?
So I went to start doing what Inormally do was read all of
these things to see if I couldfigure it out, send emails and
ask questions, and I was like,oh, this tool just came out,
let's see how good it is withthis.
So I put in every singlequestion we had everything.
I wrote all my notes, all myquestions into deep research and
said see what you can do tofind a better understanding of

(08:16):
how we should do this, what isthe workflow, and also, please
write me a prompt for your AIagent.
I let deep research run, thenput it into the agent, because I
knew the agent could godirectly to their websites live,
find any information thatwasn't in the training data that
might have been changed, andthen I asked it to do the same
thing again.
Then it gave me a breakdown andit answered like I think, almost

(08:38):
every question we had.
And when I read it and checkedit back, I'm pretty sure I could
see some of the things thatweren't correct that I've
manually gone and changed forlike our document for you and I
to work.
But I'm like it looked like itgot like 95% of some of the
harder questions that wecouldn't get.
And the last thing I wanna sayis I emailed some of these
questions to both MacroPoint andour TMS.

(08:59):
The answers I don't think theygave me from my TMS are actually
correct.
I think the AI agent got usbetter information than the tech
support team at the TMS.
We'll see as we go through theprocess, but I'm pretty sure it
gave me more accurateinformation than the human being
that works at that company.
Using AI to enhance AI hey writebetter prompts for yourself

(09:21):
than I can write this outline, kind of blows me away.
I did that in 45 minutes; that would have taken me like a whole day.
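For anyone trying to picture the push-versus-pull distinction Ben describes (webhooks versus polling an API) and the "ping the team when the tracking shows a driver behind" idea, here is a minimal sketch in Python. Every URL, field name, and status value in it is a made-up placeholder, not the real MacroPoint, TMS, or chat schema.

```python
# Hedged sketch only. Placeholder URLs and fields; not an actual MacroPoint or TMS integration.
import requests
from flask import Flask, request

app = Flask(__name__)
TEAM_CHAT_WEBHOOK = "https://chat.example.com/incoming/abc123"  # placeholder chat webhook
PROBLEM_STATES = {"BEHIND_SCHEDULE", "TRACKING_STOPPED"}        # hypothetical status values

def alert_team(text: str) -> None:
    """Post a short heads-up into the team's chat space."""
    requests.post(TEAM_CHAT_WEBHOOK, json={"text": text}, timeout=10)

# Pattern 1: PUSH. The tracking provider calls this endpoint whenever a load's status changes.
@app.route("/tracking-events", methods=["POST"])
def tracking_event():
    event = request.get_json(force=True)  # e.g. {"load_id": "12345", "status": "BEHIND_SCHEDULE"}
    if event.get("status") in PROBLEM_STATES:
        alert_team(f"Load {event.get('load_id')}: {event.get('status')}. Might be worth a call.")
    return "", 204

# Pattern 2: PULL. We poll the provider on a schedule and only react when a status changes.
def poll_tracking(load_ids, last_seen, base_url="https://api.tracking.example.com"):
    for load_id in load_ids:
        status = requests.get(f"{base_url}/loads/{load_id}", timeout=10).json().get("status")
        if status != last_seen.get(load_id) and status in PROBLEM_STATES:
            alert_team(f"Load {load_id} just flipped to {status}.")
        last_seen[load_id] = status
    return last_seen
```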

Speaker 1 (09:27):
I want to throw Tilo.
Can I get your perspective,like your side of all this, so
from you know, from levity, oreven, if you want to go broader,
like from the from thetechnology side of it?
How does like?
I guess, where are we at in thelandscape of like the
implementation, implementationof AI, and all of this for the

(09:47):
logistics industry?

Speaker 2 (09:49):
Yeah, I mean, ChatGPT is obviously a general-purpose tool, right, for any kind of request that you might have.
And the question really is,what tools do you use for which
you know task, right?
So for these kinds of researchthings where it's like, hey,
find me all of the dentists inyou know Minnesota and you know,

(10:14):
do research on them and putthat in a spreadsheet, or, you
know, create a presentation forthis topic, or these kinds of
you know research-heavy thingswhere you have to visit a lot of
pages, accumulate a lot ofinformation.
But it's also there's not thatrequirement, right, that you
have to be a hundred percentaccurate or you have to have

(10:35):
certain information in there forthis to be successful, right,
because you can add the missinginformation and you can remove
the incorrect or the unnecessaryinformation.
So there is not thatrequirement to be super, super
accurate.
But when it comes to the kindsof tasks that we're automating
for our customers, we have to bealmost perfect, Because if we

(10:59):
automatically create an order ina TMS and then we mess up
something there, right, thenworst case, the truck is showing
up at the wrong location, and these kinds of things need to meet a way higher quality bar than

(11:21):
something like that, right?
So it's more of these.
You know I don't want to dothis right.
I don't want to click on 40different pages and read them
all, you know, combine them intoa document or something like
that and just automating thatperfect, right.
For other cases there are othertools and you just have to get

(11:41):
accustomed to that.
I think the best advice that Ican give anyone is just to keep
working with that stuff.
See what works, what doesn't.
You know.
I mean you could ask the agenthey, find me, you can make it
higher level, right, becausefind me new customers.
Or, you know, sell freight, seewhat happens, right, we're not
there yet, I'm sure, but to kindof see where the limits are,

(12:05):
what it does and these kinds ofthings.

Speaker 3 (12:07):
So to that point I want to say something that
happened yesterday that Ithought is kind of funny but not
, is it like?
I talked to an executive thatyou both know of a TMS and he
was talking about how they'rehaving so many issues.
To what you're saying, the barin our industry is 100% accurate
.
If you're wrong, it costs moneyor something really bad happens
.
Send a truck somewhere, paidattention, whatever, right,

(12:29):
there's lots of repercussions.
So getting it to that bar, he'slike, is incredibly difficult.
And he was telling me some ofthe things that he read about
uses in the medical field ofwhere like, yeah, it will catch
lots of things, but it alsocatches lots of things that
aren't wrong, meaning like thereis also a risk at saying
somebody has cancer that doesn'tand you give them chemo because

(12:50):
it's just wrong on that sidetoo, right.
And the funny thing thathappened yesterday I guess it's
not really funny there was alittle kid that I think had pink
eye at the place that mydaughter, my wife, went and my
wife was like, oh, like you know, is your kid okay, like what a
normal parent would say.
Right, I'm gonna say like, hey,like we have a doctor we use
locally if you need somebody tobe happy to give you their
number.

(13:10):
And she says, oh no, he's had it for about four weeks, and ChatGPT said it should go away in a month or two, and I put some, like, oh-my-god remedy on it. And she came back
like she was so visibly upsetlike it upsets me still even
thinking about it of like I'mlike I just it was literally
when I got done doing what wewere just talking about that she

(13:31):
came in and told me this andI'm like Jesus, I'm like honey,
like I just found all theseerrors and things that like I
know because I'm doing what Tilosaid, of like going back and
checking to see where it's wrongand where it's right, but yet
you have people just going.
Well, the computer told me, I guess I'll just put my child's welfare in what ChatGPT said. And I'm like, this is terrifying to some degree.
Wow.

Speaker 2 (13:53):
Oh my gosh.
Yeah, yeah, I mean just athought on that right, like it
really depends on the task andwhether you are able to loop in
human oversight or not.
Right, because in some cases,getting the human oversight and
checking every single case, Imean you're not winning anything

(14:14):
there, right, like the AI isdoing something and then you
know you're still doingeverything manually, so you're
not winning any time there.
There are other cases, though,where you know.
What we've seen, for example,is we have lots of customers
that get a ton of emails fromtheir customers about damages
and you know saying packages,right, and then they get those

(14:37):
to a central inbox, but theyneed to forward it to the right
person, and that data needs tobe sort of prepared for that
person to then work through thecase.
Right, so they're still doingthe same work, but the prep work
of taking all of the availableinformation and making it very
easily digestible is done by AI,and you're saving I don't know
15 minutes for every case.

(14:59):
And then the other case that Iwas thinking about right, like
the cancer case.
It's not bad, I mean, it's a little bit bad, right, if you tell someone, oh, you might have cancer, the AI detected something.
And then you double check andyou're like, oh, it's actually
not cancer, right, so okay.
But the reverse is very badTelling someone oh the AI said

(15:23):
there's no cancer, but there isactually cancer.
That's way, way, way worse.
So that's why these kinds ofcompanies that are also doing
cancer screenings with AI,they're optimizing for that
right and they're optimizing forfalse positives to be higher
than the false negatives.
And you can do the same in, forif you, if you use ai to to

(15:45):
process invoices, you want to,you want to check every single
invoice is it correct?
Does it match what was agreedright?
And then in the past you mighthave to check every single one
for correctness.
And then you have a thousandinvoices and nowadays what you
would do is you pre-scan it withAI and AI sorts it by the

(16:06):
likelihood that it has someerrors and highlights those
errors.
And then you just go one, two,three, four, five, and at some
point you're like, okay, thereare no errors anymore and I can
just, you know, approve the rest.
And then maybe you only need to check a hundred, right, because maybe in 1,000 invoices you have 10 that are, you know, faulty, right, and have something in there.

(16:27):
But in the previous case, you would have to check all 1,000 to get the 10, right, and now we only have to check 100 to get the 10, right.
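Thilo's invoice example, reduced to code: score everything up front, review from the most suspicious down, and stop once you hit a long run of clean ones. This is only a sketch of the pattern; the scoring function is a stand-in for whatever AI pre-scan you actually use, and the human still confirms each flag.

```python
# Sketch of the "pre-scan, sort by error likelihood, review from the top" pattern.
# score_invoice() is a placeholder for a real AI check; here it just reads a precomputed score.

def score_invoice(invoice: dict) -> float:
    """0..1 likelihood the invoice disagrees with the agreed rate (stand-in value)."""
    return invoice.get("error_score", 0.0)

def human_confirms_error(invoice: dict) -> bool:
    """The reviewer still makes the final call; simulated here with a precomputed flag."""
    return bool(invoice.get("actually_wrong", False))

def review_queue(invoices: list, stop_after_clean: int = 20) -> list:
    ranked = sorted(invoices, key=score_invoice, reverse=True)  # most suspicious first
    flagged, clean_streak = [], 0
    for inv in ranked:
        if human_confirms_error(inv):
            flagged.append(inv)
            clean_streak = 0
        else:
            clean_streak += 1
        if clean_streak >= stop_after_clean:
            break  # roughly the "check 100 of 1,000 instead of all of them" point
    return flagged
```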

Speaker 3 (16:35):
In that use case.
We've talked about this.
Like, I'm doing lead research for the salespeople, right, where I can do that using the tool you showed me, which was Clay, that I started using. It's, like, basically step-by-step AI that goes in and then checks the last column so I can look for errors, and say, like, 5,000 leads of, like, people, to a company, to a

(16:56):
lane, that all match up with who we want to talk to, but then I can find those errors. Like, I'm literally going to do it this afternoon. Like, I have a list of 5,000 people, their email addresses, their company, and what lanes we think would be a fit with what they ship, and 240 of them came back incorrect.
So now, instead of me manuallychecking 5,000 leads, I'm going

(17:18):
to cross-reference that 240 to my original set, and it's probably going to take me an hour where that would have taken a human being a week, before we email, reach out, and pay human beings to do the actual work of building relationships with those people.
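The cross-reference step Ben describes is a small pandas job. The file names and column names below are assumptions about how the lead sheets might be laid out, not his actual files.

```python
# Rough sketch of cross-referencing the flagged leads back to the original list with pandas.
import pandas as pd

leads = pd.read_csv("leads_5000.csv")      # original enriched list (assumed columns: email, company, lane)
flagged = pd.read_csv("flagged_240.csv")   # the ~240 rows the checking step marked as suspect

# Pull the full original row for every flagged lead so a person reviews only those.
to_review = flagged[["email", "company"]].merge(leads, on=["email", "company"], how="left")
to_review.to_csv("leads_to_review.csv", index=False)
print(f"{len(to_review)} of {len(leads)} leads need a manual look")
```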

Speaker 1 (17:31):
Let me ask you this, um, how are you?
And, like, because I really want for the audience to be able to understand this on the most basic level, and what I'm getting at here is this whole conversation, like, you know, what AI can do and some of the risks. Like, I always think of it

(18:05):
as, like, give me time, as the expert, to be able to make decisions that help me increase revenue and profit, and take the tasks off my plate that stop me from doing that, right?

Speaker 3 (18:15):
So take me through how you're implementing it.
We can go through track andtrace, but I think we can talk
through one that we haven't doneyet that I want to do.
That I think is really valuable.
So, the other person that I work with, the owner of TLX, we manage the company together. Call it a $25-30 million company with, like, 25-30 people, give or take, right. Now, the questions we

(18:37):
are trying to figure out that wewere talking about and spending
a lot of time on looking at ourTMSs which of our customers
have increased in business,which have decreased.
Some TMSs will tell you thatwhen you kind of look at reports
.
But then we wanted some morebetter data, meaning like okay,
well, like how many orders,offers for loads are coming in
versus how many we're actuallybeing awarded.

(18:59):
Because one question is howmany loads do we move with this
company?
And the next question is isthat company shipping more or
less right now, like are theysending more quotes that we're
missing your batting average,right?
And it's exactly the analogy.
We wanted to know what ourbatting average is how many
pitches were thrown, not justhow many we hit, right?
That gives you a percentagewhich will help you gauge those.

(19:20):
Then we wanted to go a littledeeper, because we know that
days of the week pricing changesa lot on some lanes.
Like we'll do well on a Mondayon this lane and we'll lose
money Tuesday for the same rateon that lane and then do well
Wednesday and Thursday and maybenot Friday.
And we know that information isin the TMS but we can't really
get it out and sort it and likeI'll spend hours in Excel trying

(19:42):
to figure out are our highestmargins on a Tuesday or a Monday
?
Are they always on a Tuesday?
Are there certain weeks in amonth where that change?
And then we wanted to see okay,out of all of that information,
what we're really trying tofigure out is our sales reps.
Who do we need them to reachout to more?
Who are they not talking toenough and are they quoting too

(20:17):
aggressively, like too low, oncertain days of the week?
Do they need to quote a littlehigher on certain days of the
week?
Is that based on a region ofthe country or is that just
there's a lot of trucks in theAtlanta market, for example, on
a Monday and none on Tuesday?
So we either need to pricehigher, but if we can see that
our sales reps can talk to thatshipper and say listen, hey, it
looks like the market ischanging a lot on Tuesdays.
Would it be really difficultfor you guys to load a little
more on Monday and Wednesday?
We can keep your costs down.
There's more trucks, you'll getbetter rates.
The trucks need the loadsanyway.
It works better for thecarriers, better for the
customer.

(20:37):
Everybody saves money by finding efficiencies that are there, but you can't really see this information in a TMS. And in my head, I'm like, OK, I really want to be able to load all our TMS data into, let's say, deep research in a project in GPT and
ask it these questions whichcustomers are going up or down,
when, why, what days of the week?

(20:59):
Who is doing more, who is doingless?
Now, here's where I ran into what AI couldn't do: it's not really that great at checking its own math, so it would constantly give me different answers, and I spent an afternoon trying to get it to work that way.
What Tilo and I talked about ismaybe the data needs structured
differently, like it needs toeither go into a database in a

(21:19):
way that GPT can go and do this,but the other answer and we
didn't really talk about this isthere are workflow steps where
basically you can have GPT doone step, then it goes to
another place, maybe a databaseor a CSV file like an Excel
sheet, then does the next step,then does the next step.
So it's like a string ofbasically a calculator, a

(21:41):
computer, a calculator, aspreadsheet, then a computer,
because ideally you don't wantto spend four hours to get the
answer to six questions.
You want to just be able to askthe question and say, hey, where
do my reps need to spend moretime on what customers and why?
And if it can give you thateasier, you spend way less time
talking to people that don'tneed help, more of your time on

(22:01):
the people that do need help.
And then you can do the samething with your carrier base
which of our carriers are mostlikely to need this lane, on
which days of the week, whichlanes do they need?
So when our carrier reps callthe carriers, we have a better
understanding of what they'vebeen doing with us, what they
need, and spend less timecalling carriers like, yeah, I
don't run that lane anymore.
Yeah, I don't need that, wedon't even run that anymore.

(22:22):
That's been three months.
We're over on this side of thecountry, but yet we know there
are carriers that you want tospend time with.
The biggest waste to what Tilosaid is doing this manually,
which all I mean, nate and Ihave done is like you're calling
120 trucking companies a day,you find two or three where you
have a good conversation.
The rest was just pitches andmisses, just throwing the ball

(22:43):
across the plate and you're justwatching it go past and you
have nothing actually of valuebeing created.
So that's like one of those, Ithink, really valuable use cases
for any brokerage, and I don'tknow any TMS that really does
that well and I'm like that isincredibly valuable to give
insight to the executives, themanagers or the brokers and
carrier reps where to spend yourtime and who you should be

(23:04):
speaking with, to create more value for everyone in the chain: carrier, brokerage, all the way up to the shipper.
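The "batting average" and day-of-week questions Ben raises are easier to answer deterministically from a flat TMS export than by asking an LLM to redo the arithmetic. A hedged sketch follows; the column names ("customer", "quoted_at", "won", "margin") are assumptions about what such an export might contain.

```python
# Sketch: quote win rate per customer and average margin by weekday, from a TMS/quote export.
import pandas as pd

quotes = pd.read_csv("quote_history.csv", parse_dates=["quoted_at"])
quotes["won"] = quotes["won"].astype(bool)            # assumed to be a 0/1 column in the export
quotes["weekday"] = quotes["quoted_at"].dt.day_name()

# "Batting average": pitches thrown (quotes received) vs. hits (loads awarded), per customer.
win_rate = quotes.groupby("customer")["won"].agg(pitches="count", hits="sum")
win_rate["win_pct"] = (win_rate["hits"] / win_rate["pitches"]).round(2)

# Day-of-week economics: average margin per customer per weekday, only on loads we actually moved.
weekday_margin = (
    quotes[quotes["won"]]
    .groupby(["customer", "weekday"])["margin"]
    .mean()
    .unstack()
    .round(0)
)

print(win_rate.sort_values("win_pct"))
print(weekday_margin)
```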

Speaker 1 (23:10):
My initial thought on a lot of that is like AI will
like the, when implemented inits most effective sense, will
like help leadership identifytheir bad eggs sooner and
recognize their studs sooner.
To be able to like, because ifI know it's, you know, it's kind
of like, if you're raising,I'll just, you know, make up

(23:33):
racehorses Right, like, and youfigure out like, hey, we're
going to stop wasting our timeand money on this one.
It's not going to win us anyraces down the road, but this
one, hey, in two years from now,is going to be, you know, a
rock star.
And I think you could probablystart to identify your reps that
way and who's who's best toserve in certain roles.

Speaker 3 (23:49):
There was an interview with I can't remember
who it was which one of thefounding members of Google that
is back at Google working onthese things he's not the CEO,
but ah, my mind's blanking.
He ran all of theircommunication data through an
LLM and he started playing withit and said which team members
in this group of our company areunderappreciated but probably

(24:11):
need promoted.
It identified like this onewoman, so he then went by, like
reading their emails and stuffor what.
Yeah, like I guess all theirinternal, their Slack messages,
probably like theircommunication of employees
talking to each other and itidentified somebody and he then
did the manual work that we'retalking about of went and talked
to her manager and said, hey,how's this person performing?

(24:31):
What are they doing?
What are your thoughts on likeupward mobility?
And he said it's interestingthat you pointed her out because
I've noticed like she's reallyeffective, she probably has the
personalities to be promoted andlike I've been thinking about
possibly doing that and I justhaven't really brought it up.
And he's like you know it'sanecdotal, it was one instance

(24:52):
he's like but it accurately didhelp me identify one of our
employees across tens ofthousands, that we should maybe
spend some time and do a littlemore whatever you know HR work
to see if they're fit to be ableto be promoted.
And it was like exactly what yousaid, nate, like it identified
who had the potential and who togo spend some more time with.

(25:14):
I mean, there's alwaysunintended consequences, I think
, to even things like that, butlike that was, I think, a really
good recent use case.
I heard of exactly that thing.

Speaker 2 (25:22):
Interesting, probably a little bit off the rails from
the initial discussion, but ifI now think about it, right,
especially in large, largecorporations, it's super
difficult for managers topromote the right people right,
because they're busy and oftenthese decisions are made based

(25:43):
on you know who?
I know right, who I like, and alot of people that are
optimizing for that.
Right, they're just being veryvisible, right.
They're, you know, talking tothe right people, they're saying
the right things, but, you know, sometimes that's
representative of whetherthey're actually doing good work
or not, and sometimes, oftenit's not.
And you know, I think you canprobably find some more diamonds

(26:10):
in the rough in that way.
But it reminds me of what we'redoing with our control tower
product, which we launched acouple months back, where a lot
of the customers that we nowhave initially approached us and
said, yeah, we want to dosomething, but there are so many
things we could do and they'reall.

(26:30):
They all sound cool, but wedon't know where to start.
Right, we don't know how toassess, we don't know how to
assess the potential and thefeasibility of these use cases.
We realized that we were doingthe same thing with all of them
over and over again where wesaid okay, please get me an
export of your last, you know,three months of email data or

(26:54):
communications data, and weanalyze that and we find the use
cases for you and we alsocalculate how much potential
there is right.
Are you getting more track andtrace related emails or more
quoting related emails fromwhich customers?
How much are you winning?
All these kinds of things?
Nobody knows what's going onbecause it's a black box, and I

(27:15):
remember that we did somethingalong those lines also for you,
right, where you gave us anexport of all of your emails.
I think it was like I don't know, a hundred thousand or
something, right.
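For anyone curious what that kind of email-export analysis can look like at its simplest, here is a toy sketch: bucket each message into a use case and count volume per sender. The keyword rules are a crude stand-in for whatever classifier Levity actually runs, and the column names are assumptions.

```python
# Toy sketch: categorize an email export and count volume per customer per use case.
import csv
from collections import Counter

CATEGORIES = {
    "track_and_trace": ("eta", "check call", "where is", "tracking"),
    "quoting": ("quote", "rate request", "rfp", "spot rate"),
    "claims": ("damage", "claim", "shortage", "osd"),
}

def categorize(subject: str, body: str) -> str:
    text = f"{subject} {body}".lower()
    for label, keywords in CATEGORIES.items():
        if any(k in text for k in keywords):
            return label
    return "other"

counts = Counter()
with open("email_export.csv", newline="", encoding="utf-8") as f:
    for row in csv.DictReader(f):              # assumed columns: sender_domain, subject, body
        counts[(row["sender_domain"], categorize(row["subject"], row["body"]))] += 1

for (customer, label), n in counts.most_common(20):
    print(f"{customer:30s} {label:16s} {n}")
```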

Speaker 3 (27:25):
And then we got you exactly the data out that you
needed to work with, right. And sifting through all this stuff is very similar to, you know, what the agent and deep research stuff is doing on websites, but with your internal data, and getting you that stuff out.
So, and I want to go a little further, because that is a thing: like, as there are more remote employees, it is incredibly hard to see the things that I think
(27:46):
employees, it is incredibly hardto see the things that I think
I was used to see in the first20 years of my professional life
Right, which is like you're inan office, you see who's vocal,
you see who vibes well withother people, who, at the water
cooler, seems to be having thepersonality that is the fit for
their role, and you see thepeople that are kind of heads

(28:06):
down, that are really fits fortheir role.
Like I was an analyst and asalesperson, so like I like we
used to joke we're like, hey,we're the squirrels in the
closet when you're the analyst,but like there's a fit and
there's a reason that some folkslike move towards that role,
and when you're not with peoplelike you don't have so much
information you see beingpresent with them and like we've
talked about doing similarthings of like we just want to

(28:28):
know, like who's doing well inthe role, who maybe needs some
help, where do we need to focusour training?
Because like what we were doingis like we're just spending the
same amount of time witheverybody and like that's not
efficient and like some peopledon't need as much time and
don't appreciate it.
Some you're not spending morewith that need it and there's no
way to identify that unless youcan see and analyze all of the

(28:49):
communication.
Who's sending the most emails?
Who needs work to your point,nate, like on formatting grammar
and where, and who they'resending them to right?
Who needs help on communicationskills with their customer
service?
Who is doing well by buildinggood relationships with carriers
?
Who's just treating themtransactionally and needs
coached up a little on that?

(29:09):
And it is a giant opaque blackbox.
And if you ask somebody wheredo you need help, the irony is
that's an unknown, unknown tothat that person.
You know that person doesn'tsee their blind spot because by
definition it is their blindspot, so they can't tell you
where they need help In mostcases.
You have to identify it.
That's the job of a goodmanager, or I would say leader

(29:30):
is to know where to focuspeople's, your energy to help
increase people's.
I don't want to saydeficiencies, I would say blind
spots or areas where they wouldbenefit the most from coaching.
Areas for improvement, right so.

Speaker 1 (29:42):
Tilo, I got a question for you.
One of the things at mybrokerage that we're doing right
now is we're evaluating a newTMS to move to, and there's a
lot of AI powered stuff and,regardless of AI, like people
seem to be resistant to changewhen it comes to like anything
new.
So I'm curious when it, whenit's kind of implementing levity

(30:04):
with some of your customers or prospective customers, what are, like, how has the adoption been? Are people embracing it? Are you getting a mix of, like, oh, this is overwhelming? What does that look like? Because, I mean, the end goal is great, right, but getting there,
there's probably, like you know,some hurdles or, like you know,

(30:26):
a resistance to adoption.
I'm kind of curious what?
What does that look like?
And then you know, once you getover, how is it?
How's it all pan out?

Speaker 2 (30:35):
Yeah, I mean, in our case, you always have, I would
say, three groups of people thatyou need to get on board, right
?
One is the managers of a team,right, that just want their team
to excel and become moreefficient.
Then you have one level above.
You need to convince obviously,c-level that this makes sense.

(30:56):
You know, economically right,there is strong ROI.
And then, even more importantly, you have the people that need
to adopt it, because if theydon't adopt it, the whole thing
falls apart, right?
So the team of these managersneed to be, you know, first of
all, educated, right, because alot of the people have they
probably know ChatGPT andthey've used it for some stuff,

(31:18):
but there's so much more to AIand how it's deployed in
different kinds of use cases.
So maybe you know, just givingan example from one of our use
cases in quoting right, whereemails are coming in people
asking for FTL rates from onezip to another, and you know,
know, we're plugged into toolslike green screens or sonar or

(31:39):
you know that, to to kind of get, uh, these rates and then
assist them with rating faster,right, especially when it's
these mini rfps of, like youknow, 20 different lanes and
they would have to go to greenscreens for every single one
right to kind of understandwhere the rates are and then
apply additional markup logicand all these kinds of things.

(32:01):
So they know that it's painfuland they know that it's like
this task that they don't wantto do, but they also don't want
to lose control over what'sgoing on.
So we're usually coming inwhere we say, okay, we're not
gonna take this away from you,but we're gonna automate these
steps.
And what's happening for you isthat you only see the email

(32:22):
when we've done something withit.
So there's already a draftreply that you can review and
you can check the rates and youcan see.
Okay, you know I might want toadjust this a little bit, right,
but I know it's taken away acouple minutes of really
dreading work.

(32:42):
And then over time people startto trust it more and more and
more.
And you know, we've seen itwith a large client recently
where in week two the usersalready started sending 85% of
those drafts that we created forthem without any adjustments.
They just said, okay, looksgood, send, send, send.
Right, and at some point wemight send certain emails right

(33:05):
away, but we're not doing it yet, right?
The ROI on this is also prettystrong, and the other thing
that's helping them is that theydon't have to learn another
software, another tool, anothertab that they have to use in
certain you know context.
It's just their email, right,it's happening right.
There we're more like abackground tool and that makes

(33:25):
adoption much easier.
The other challenge is more inthe how do you set everything up
so that you get to this pointright, and that's more working
with the manager and the peoplethat have the operational you
know, process knowledge, andthat's what takes really a long
time right.
So sometimes you spend severalweeks to set everything up,
account for every edge case, andthen you present it to users so

(33:47):
they're not you know like oh,this is a wrong rate, this is
bad.
They don't know how it gets tothat right.
So you need to be also carefulto when do you expose it to a
larger audience?
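To make the quoting flow Thilo describes concrete, here is a hedged sketch: take the lanes from a mini RFP, look up a benchmark rate, apply a markup rule, and produce a draft reply that a rep still reviews before sending. The benchmark_rate function is a placeholder, not the actual Greenscreens or SONAR API, and the markup numbers are invented.

```python
# Sketch of rate-assisted quoting with a human review step. All numbers and lookups are placeholders.

def benchmark_rate(origin_zip: str, dest_zip: str, equipment: str) -> float:
    """Placeholder: in practice this would call whatever pricing source you're plugged into."""
    return 1850.00  # dummy value so the sketch runs end to end

def quote_lane(lane: dict, markup_pct: float = 0.12, floor: float = 150.0) -> float:
    base = benchmark_rate(lane["origin_zip"], lane["dest_zip"], lane["equipment"])
    return round(base + max(base * markup_pct, floor), 2)

def draft_reply(lanes: list) -> str:
    rows = [
        f'{l["origin_zip"]} -> {l["dest_zip"]} ({l["equipment"]}): ${quote_lane(l):,.2f}'
        for l in lanes
    ]
    # Key design point from the conversation: this is a DRAFT the rep reviews, not an auto-sent email.
    return "Hi,\n\nHere are the rates you asked about:\n" + "\n".join(rows) + "\n\nLet me know if any of these work."

print(draft_reply([{"origin_zip": "30301", "dest_zip": "60601", "equipment": "Van"}]))
```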

Speaker 3 (33:59):
in that sense, here's one on that use case, right,
that is close to one that we use, that we want to do too, right?
Say you have a customer rightthat you've worked with often,
right, and they're sending overa request for a quote, like on
the same shipment types everyweek and that person is manually
doing what Like exactly whatwe're talking about.

(34:20):
In the final mile they go andlook at their company's TMS
history.
What have we been paying forthat as a company?
Then they're going to go to DATand go what is the average,
what's the high, what's the low?
And also like what is theaverage posted rate on the load
board right now?
Okay, so you got four pieces ofinformation.
Say they got five lanes,they've got to manually go in

(34:41):
each of those, write that emailand then make that decision.
What we found is, like, we found some plugins to try this, Tilo, of, like, basically email plugins that will auto-populate the response. And it'll say, like, so, I don't know, whatever, Acme Shipment, says, hey, these five lanes. And says, hey, can you quote these, Ben? These go out

(35:01):
tomorrow, and that happens every Tuesday or Wednesday, right? The email auto-populates and goes, hey, Frank. Based on the way I responded, it literally writes that email, very similar to every other email I sent him every week, just pops up and then it just drops in.
It says, right next to that lane: here's your rate, here's DAT high, low, average, and average posted.

(35:24):
So now, all the person has to do is look and go, okay, DAT average is right where what we paid last week was, and the high's around where it was. Boom.
Delete this, leave that one.
Then go to the next lane and gooh, I want to add a hundred
dollars to the rate because it'sgone up a little bit from last
week.
This one went down.
Subtract 100.
Now all they've got to do isread it to make sure it looks

(35:44):
good, pick the numbers that arealready right there in their
email, instead of going to threeother websites and then hit
send right.
Like that saves that personlike 15 minutes worth of time to
do the exact same thing withthe same information, just by
presenting it in a moreefficient and effective manner.
Right, and to me, like it's wetried that with a similar way.

(36:06):
Like I kind of manually hackedit together and like the rep
that used it for like a littlebit, he's like dude, this saved
me so much time, like, and hewas quoting so many lanes a day.
He's like dude.
If we could do this at scalelike this would make everything
so much more effective.
And if you could layer that onthe thing I said earlier of like
what days of the week do youpay more and what you don't, and
now it just goes hey, it'sTuesday.
You've been paying more everyTuesday all month.

(36:28):
You should be around here.
You're helping that person getbetter at their job too.
You're not actually dumbingthem down.
You're helping them with morefeedback, right where they see
it.
Because the big drawback I seewith some of these is like,
literally, people that are usingthis, they're doing studies and
they're like you're using GPTevery day.
Like your writing's gettingworse, your critical thinking is

(36:49):
getting worse, just like whenyou use GPS to drive everywhere,
you forget how to go to yourkid's school because, like
you're just used to the cartelling you and you forget which
direction you're driving, andthat for sure happens Like your
brain atrophies the less youstress it and the less you do
hard things.
I think I want to find ways touse the tool, like you said, to
help them learn better, fasterand give them more feedback in a

(37:11):
way that's more usable.
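The "hey, it's Tuesday, you've been paying more every Tuesday all month" nudge Ben wants could be as simple as comparing the draft rate to recent same-weekday history on the lane. A tiny sketch, with the history shape assumed rather than taken from any real TMS:

```python
# Tiny sketch of a weekday pricing nudge. `history` is an assumed shape, not a TMS schema.
from statistics import mean

def weekday_nudge(lane_id, draft_rate, today_weekday, history):
    """history: list of dicts like {"lane_id": "ATL-CHI", "weekday": "Tuesday", "cost": 1900.0}"""
    same_day = [h["cost"] for h in history
                if h["lane_id"] == lane_id and h["weekday"] == today_weekday]
    if len(same_day) < 3:
        return None  # not enough history on this lane/weekday, stay quiet
    avg = mean(same_day)
    if draft_rate < avg * 0.95:
        return f"Heads up: {today_weekday}s on this lane have averaged ${avg:,.0f}; your draft is ${draft_rate:,.0f}."
    return None
```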

Speaker 1 (37:12):
Yeah, and so this is just kind of generally speaking
here.
This goes back to what I saidbefore is like the, the good use
of automation and AI in generalis like remove the the time
that or, if you can, the timeyou take, spending the tasks
that don't produce revenue andprofits.
If you can take that off yourplate and give you time to spend

(37:33):
more, um, more of your day, youknow, building relationships
with customers, developingbusiness, producing more profits
, that's the goal, right.
So, like I think about when I'mas I'm evaluating TMS platforms
right now and I see how quickly, with some of the automation
now a load or an order can becreated in a TMS on some of

(37:54):
these new platforms and Icompare it to our existing.
Like we, my company uses McLeodright now and if anyone
listening uses McLeod, they knowit's McLeod.
It can take you 10 times aslong to build a new order in
McLeod than some of these newones.
So I've got to come in here Onone screen, I've got my rate

(38:15):
counter, my tender for mycustomer.
Now I have to go and I have toadd a new order.
I've got to enter the pickupinformation, the address, and
I've got to create this newlocation and I've got to add
this.
No, I've got to add this commodity and I've got to add this equipment type, blah, blah, add the customer, the rate, and all, and I'm on 10 different screens.
Right, where some of theseplatforms now it's like it just
drop your email in here and it'sgoing to read it, and right,

(38:38):
and it's like the larger usecase with with levity, is it
goes beyond just creating anorder, right, you guys are
talking about, like sendingemails back to a customer to
respond to a request for a quote.
Um, so I think about, like, the average, like, the frontline user, right. You talked about the three different tiers, Tilo, like, that frontline user.
Like, if, if you, if you thinkabout like you don't, you're

(39:00):
going to save four hours a day,maybe, like, with all the little
things, you're basicallydoubling your productivity in a
day, with being able to go getmore business and penetrate
accounts.
Um, because you're not doingthings like drafting an email.
You ever see people that, likeit takes them 20 minutes to
write an email because they'relike it's got to be perfect, and
I go back and I'm like, justsend the freaking email and you

(39:22):
have a lot of these mundanetasks done for you.
Now you can spend more timeactually doing the things that
produce revenue.
Here's the relationshipsTalking to your customers about
what's coming up next quarter,things like that.

Speaker 3 (39:34):
So to what you said, nate.
I wanted to.
I wanted to connect those Right, because the thing that levity
and we're doing just on like thetrack and trace piece and like
we have that scoped out to dosome things all the way beyond
that and go down to keep doingthis Right.
But like he made a really goodpoint and this is the thing that
I've really enjoyed aboutworking with Tilo, cause, like I
talked to, at least a half adozen companies that are trying

(39:55):
to compete in the space dosimilar things I don't know a
month and like Tilo and his teamare very upfront with the time
it takes to do this and the workit takes to get it right, where
lots of the other companieswill just be like, oh, it'll
just do all this.
And then I ask it the questionsI've learned just from working
with him, and then it's like, ohwell, maybe not yet or we'll

(40:16):
see.
And I'm like, okay, well, ifit's only doing 90% of it, like
we don't really want to use iton that thing because like
that's going to create a lotmore work than it saves.
And like they're upfront.
And just to what you just said,to tie this in, just that track
and trace thing right For acompany of our size.
We figured we have four human beings that are doing a task that I can tell you they don't enjoy, which is staring at MacroPoint

(40:39):
updates, looking for whether or not the GPS has slowed down, did it fall off, and then going back and forth
and just staring at these thingsall day, like a human doesn't
want to do that.
They want to be able to benotified when they need to pay
attention to it so that whenthey're not, they can do things
that add more value and are morefun for them, like carrier

(41:02):
sales, building relationships,talking with dispatchers, like
doing things that create realvalue.
Not staring at little numberson different user interfaces to
see when they changed enough todo something like that is to me
like a mind numbing work for ahuman that nobody should be
doing if they don't have to andlike that's the thing we want to
take off their plate to dothings like.

(41:24):
But relation, like the wholeindustry runs on relationships
between people.
If you're just staring atnumbers changing all day like
that is incredibly life sucking.

Speaker 1 (41:34):
To take your point a step further, like not, is it
mundane and mind-numbing?
It doesn't develop people.
If you're just doing data entry, or if you're just looking at
one screen and comparing it toanother, you're not developing
or learning or growing as anindividual within your, a team
member within your organization.
You're literally doing whatautomation and AI should do.

(41:56):
Yeah, you're not growing at all.

Speaker 3 (41:58):
The first six months of my job at a bank as an
analyst and this is 30 years agothey would print out a ream of
paper like this thick of likelegal paper of every bank
account for every business inthat department and we had to
manually go through that bypaper and type those numbers off
paper into a computer for sixmonths and like at the end of it
, I'm like I learned nothing,like I didn't develop, I didn't

(42:21):
grow as an employee, I didn'tget a better understanding of
the job.
I will be doing Like I'mliterally just taking
information from one place andputting it over.
It's like, oh, I'm going topick up a book and put it over
here, pick up a box, put it overhere.
Like you're not, you're noteven developing physically doing
that.
Your brain just hurts by theend of the day.
Yeah, I would ask to you Goahead.

Speaker 2 (42:42):
Yeah, just another thought, and I'm curious about
your opinion on this as well.
I met a guy recently that is ina completely different field,
but he's also building AI tooling, in this case for nurses,

(43:03):
where, you know, there are a lot of nurses out there, but there are even more patients, right. They are chronically overworked because, you know, they have, like, 12-hour days and, you know, whatnot.
So they have these tasks wherethey call patients to remind
them to take their medicine.
Right, and now you could thinkof, you know it, through the

(43:24):
lens of oh yeah, we're, you know, taking away this task and
we're saving them one minute.
Now, the other way to thinkabout all this is what is it
that you can do additionallythat you are not doing right now
because it wouldn't beeconomically viable or people
don't have the time to do it,now that you basically have

(43:46):
unlimited resources in certainways?
So he said like, hey, if I'mcalling with an AI these old
people to remind them of takingtheir medicine, then I have all
the time in the world, right, Ican talk to them for 30 minutes.
I can listen to them.
Yeah, you know, in Vietnam backthen you know you can really do

(44:08):
the work that you usuallycannot do, because the usual
call is like hey, have you takenyour medicine?

Speaker 3 (44:14):
No, please do it.

Speaker 2 (44:16):
Okay, goodbye, right, and now you're not constrained
anymore in that way, and maybewe should also think more about
not only things that you'recurrently doing, which you can
automate, but also things thatyou're not yet doing or have
never done because it wasn'teconomically viable.
Example that I have from,actually, one of our customers

(44:37):
is they had a team for a while,like three or four people, and
they were calling customers orformer customers all day long,
and basically what they did theywent through their TMS or CRM,
I remember and then they checkedokay, who has moved this lane
with us in the past, but stopped doing that for whatever reason?

(44:59):
Right.
And then calling them and justlike, hey, are you still moving
this lane?
And then they say, oh yeah, wedo.
Then why haven't you called us?
Right?
And they're like, oh yeah,sorry, I forgot, right.
And then they're creating thisincremental revenue on top of
that.
Right, but they stopped itbecause it was so expensive.

Speaker 3 (45:17):
Right, to have people on the phone all day.
It's funny I'm doing that thisweek.
We're literally manually doingthis.
I scraped our CRM, matched itback out, compared it to the
data I was talking about earlier, to see which lanes we aren't moving, for whatever reason, and it's always just out of sight, out of
mind.
And then I build an automationin a TO to basically create

(45:40):
tasks for all our reps to touchbase with these companies.
And then I got an emailsequence built in that will
follow up if they forget andthen set the next task.
Because, to your point, it'ssuper tedious to figure that out
and I'm like if we can just getthe information at least to the
people, then now it's just likeoh, it's Tuesday.
I completely didn't realize Ihaven't spoke to Frank over at
you know whatever producecompany.
He hadn't sent me an email andI just forgot.

(46:04):
Just a quick little phone call.
Build some rapport like makingthat more efficient is like a
huge, I think, value add and Ithink that's what we want humans
to be doing more of doingthings with other humans, right,
like at the end of the day, ifthey're doing more of that, your
business is probably creatingvalue, whether it's carriers,
shippers or even internally witheach other, like we said,

(46:25):
identifying who needs on yourteam to spend more time with
more FaceTime.

Speaker 1 (46:29):
I got a question.
So I think about you know,think about emails, and
oftentimes, like an email comesin, it gets deleted, like the.
The information in that emailnever gets connected to anything
else.
Right, the amount of emails Iget every day?
And if I were to look, if Iwere to aggregate the entire
company that I work for, right,the amount of emails that I get
every day or that we get everyday from carriers telling us

(46:52):
they're available trucks, andthe amount of emails we get
every day from customersrequesting quotes?
Right, does levity or could itin the future, right Connect
those to talk to the TMS?
A lot of the TMS is now havelike, oh, it'll look at
historical trucks that wereavailable, or hey, if a carrier
calls me or an email comes in, Ican log that truck as an

(47:13):
available capacity.
Is there the ability for levityto take those two and be like
hey, quote requests came in fromcustomer Also, this carrier
mentioned capacity available atthe same time.
Does that capability exist oris that possible in the future?

Speaker 2 (47:29):
Yeah, I mean, both cases work very similarly, right
?
The only thing that would bemissing here is where do you
store that information?
Right, because I mean we're nota capacity platform, you know,
like Parade, for example.
Right, I think they're alsobuilding something there.
But if you have an ability to kind of track that in your TMS,

(47:51):
that would be best, right?
Uh, just to say, okay, we'rewatching all of the incoming
emails from carriers, we'regetting the truck lists, we're
organizing them, you know.
So it's like a real-time feedof you know what's available and
matching.
That it's its own thing, ofcourse.
Right, what you can do is justsee okay, I have this lane, I

(48:13):
have this lane here, equipmentthat matches came in, you know,
five hours ago and now someone'srequesting.
You know, you can do thesekinds of things, but we're not a
system of record, right, like aTMS.
So the question is always likewhere do you move that data and
what's the logic for matchingand how do you present that
information to your users?

Speaker 3 (48:35):
So that use case and what Tilo pointed out is we're
working with Garrett at LoadPartner because they're building
an open source TMS and that isa use case, nate, that I've run
into a lot and like let's go onestep further.
One is the emails about trucksand where they are and your
customers' needs.
So you got, hey, the supply andthe demand that is just there

(48:55):
and you're not matching.
But also, like, we have Genlogs.
We've got other sourcing toolsthat are telling us where
carriers are and it's like whenI tried to integrate this with
the TMS, I'm like it just youcan't feed it in and there's no
good way to do anything with it,even if you could.
So one of the things we'retrying to outline is like how do
you get that all into one placeso that all of your emails are
connected, so it can pull allthe carriers that are sending

(49:18):
over where their trucks are?
And theoretically you have justa screen where it goes hey,
customer has been asking for atruck on this lane the past
three weeks.
Gives you some identifier.
These carriers have beenemailing you about this lane.
Oh, and, by the way, Genlogs shows that carrier has been

(49:39):
running it for the past threeweeks and I don't know what you
do after.
Maybe it sends an email to thecarrier, maybe it just tells the
rep to call the shipper, maybeyou have that information.
Go to the rep and the carrier.
Rep.
Like we haven't really figuredit out, but like your point,
nate, is I'm like there's somany inefficiencies just because
you're not matching this andfeeding it to a person in a way
that they can do something withit.
That, like the opportunities tojust do more with what is

(50:00):
already there, I think, is huge.
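A stripped-down version of the matching Ben is sketching out: pair recent carrier capacity (parsed out of emails or a tool like Genlogs) with open customer lane requests by market, equipment, and freshness. The data shapes are assumptions, and as Tilo notes, the real question is where this lives, since none of this code is a system of record.

```python
# Simplified sketch of matching carrier capacity to customer lane demand. Data shapes are assumed.
from datetime import datetime, timedelta, timezone

MAX_AGE = timedelta(hours=24)  # only treat capacity seen in the last day as live

def match_capacity(lane_requests, capacity, now=None):
    """Both inputs are lists of dicts with origin_market, dest_market, equipment;
    capacity rows also carry carrier and seen_at (a timezone-aware datetime)."""
    now = now or datetime.now(timezone.utc)
    matches = []
    for req in lane_requests:
        for truck in capacity:
            fresh = now - truck["seen_at"] <= MAX_AGE
            same_lane = (truck["origin_market"] == req["origin_market"]
                         and truck["dest_market"] == req["dest_market"]
                         and truck["equipment"] == req["equipment"])
            if fresh and same_lane:
                matches.append({
                    "customer": req["customer"],
                    "carrier": truck["carrier"],
                    "lane": f'{req["origin_market"]} -> {req["dest_market"]}',
                })
    return matches  # hand these to a rep (or an email draft) rather than acting automatically
```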

Speaker 1 (50:03):
Yeah, I see, like, the two big things I see long-term in the future is, like, these kinds of
developments and enhancementswill allow us to be um, to
provide like the best servicepossible to our customers, right
?
So if so, on one side you'vegot like develop new business
and on the other side it's like,well, let's service our

(50:24):
existing business.
And by service I mean if I knowahead of time that this issue
is coming and I need to addressit, if I can address it
immediately and I'm promptedlike, hey, this service failure
is likely going to happen.
I can call my customer rightaway.
Or I can call the carrier, thedispatcher, whoever, right away
and take care of that.

(50:44):
Or an automated email gets youknow, gets out there and gets
ahead of the issue.
Right, we always talk about, like, bad news: bad news gets worse with time, right? If, two days down the road, I say, hey, by the way, two days ago, you know, this happened, so your delivery is going to be two days late, right, they're gonna be pretty pissed off about that.
But if I know right away, I'mlike hey, this happened just now

(51:08):
.
What can we do to get ahead ofit and prepare for a change in
the delivery?
Any of those use cases?
Right, the level of servicewill be top notch, and that'll
really set apart a good brokerfrom someone who's just being
lazy, not adopting technology.

(51:34):
It's almost like if you go back20 years and people didn't want
to use a new type of you know,they didn't want to use a load
board that was online, right,and eventually, if you don't
adopt that new technology andthose new ways of doing business
, you probably won't have abusiness, you know, at some
point.
So that's my because I don't.
The technology and the techstack that I'm currently using
is what I would call, you know,somewhat archaic, and that's why
we're, you know, we'reexploring new products and

(51:57):
things, and I'm like there arefor sure companies out there
that will not exist in fiveyears or will not exist in 10
years if they don't start usingwhat's out there and
implementing it in the best waypossible for their customers.

Speaker 3 (52:08):
Well, I think this is a good thing to kind of wrap with, but I want both of your thoughts on this, because Tilo mentioned something with Control Tower that I think highlights how to look at all of this technology, and what you just said, Nate. Meaning, there are lots of things you can do, but the real question isn't what can you do?

(52:29):
It's more what you should do, what is going to create the most value, and how much you're willing to spend to get it, whether that's time or money. When there are so many options, it's the paralysis-by-analysis issue: I can do everything, and there are so many things to choose from, that I end up doing nothing. I think what is really helpful for anybody looking at this is working with a company like Levity to first just get

(52:53):
those first questions answered. Then you can start to think about these things and try to determine, where should we be spending our time? Where are we going to get the most return for that effort? Then go to what is possible. And the next question after that is, what can the tools I'm using actually do? What are they not able to do, and where do we need to move

(53:16):
from there? Because it is not just buy this, plug it in, and it's going to work the way we expect it to. And I want both your thoughts on that piece.

Speaker 1 (53:23):
Yeah, I'm with you on the what can you do versus what should you do. We talk about it a lot. Ben, you and I help the TIA with coaching the new brokers that join their new broker program, and when we do the technology discussion, which I think we actually did recently, one of the things I always tell them is,

(53:45):
hey, there are all these things out there and we're going to talk about a lot of them, but just because they exist doesn't mean you need to have them right now, on day one, for your bank account, number one. And just on the functionality: just because I can do X, Y and Z doesn't mean I need to do X, Y and Z. It depends on where you're at in your journey as a freight broker.

(54:05):
So that's my take on it. As generally as I can say it: understand what's out there and its capabilities, then figure out, through consultation and whatnot, what is best for my company at this point. And think long term: as we grow and as we scale and as we change, maybe we pivot the type of business or the markets we're targeting, and things like

(54:26):
that. Where can I use these things in the future, and what is being developed now that in a year or two I can use and leverage? That's my biggest take on it: just understand what's there, how you can best use it, and what you could do in the future. That's my best advice to anybody when it comes to any kind of tech or automation or AI, just the understanding of it.

Speaker 2 (54:47):
Yeah, what I would add to that: you have a lot of, you know, kind of FOMO-driven activities as well, right? People doing things for the sake of doing them, because supposedly everybody is doing them, and that's not really a good approach in general. I would always recommend checking the math, and not only

(55:07):
checking the math on a unit level of, if I automate this, I save two minutes every day. Also check how much effort it is to get there. Because there is this fallacy, especially among software engineers, who look at something and go, oh, this takes a long time, what if I now spend 200 hours building something that

(55:30):
automates that? But then, if you think about it, you only spend two hours a year on it, right? So you need to check that math. And there are certain things that only really make sense at a certain scale. We realize that a lot of the smaller brokerages are very interested, but then you realize, okay, you're not at that scale yet where it's a real pain; it's more like an inconvenience for

(55:53):
you, and it would be nice to automate this. But then you talk to a brokerage that's 10 times the size, and they have the exact same issue, everything's the same, and then it's a big issue. We talked to a guy that recently started a brokerage, they were like three people, and he told me, oh yeah, that would be nice, but we're

(56:14):
not at that scale yet. So he realized that himself. Then, a couple months later, we talked again, and now there were 15 people, and he told me, oh yeah, now I see how this becomes a problem down the line. And I guess when we catch up again in two or three months, and he keeps growing, then at some point it does make sense. But you have to do the math on every single case, you have to look at it on a case-by-case basis. Not everything is equally easy to do, and some things are not really possible to the level of accuracy that you need at

(56:37):
all. So yeah, be very cautious about that.
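Tilo's "check the math" point reduces to a simple break-even calculation. The numbers below are the hypothetical ones from the conversation (a 200-hour build for a task you only spend about two hours a year on), shown only to illustrate how quickly the payback period rules an idea in or out.

```python
def payback_years(build_hours, hours_saved_per_year):
    """Years of use needed before the build effort pays for itself."""
    return build_hours / hours_saved_per_year

# The software-engineer fallacy: a big build for a tiny recurring task.
print(payback_years(build_hours=200, hours_saved_per_year=2))    # 100.0 years: don't build it

# The same automation at a scale where the pain is real (about 5 hours/week).
print(payback_years(build_hours=200, hours_saved_per_year=260))  # ~0.77 years: worth a look
```

The same automation can be a clear no for a three-person shop and an obvious yes at ten times the volume, which is exactly the scale effect described above.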

Speaker 1 (56:50):
Yeah, that's a great point. Well, I guess before we wrap it up, Tilo, for folks that want to learn more about Levity, or just have a conversation about what it could look like for their brokerage, how do they find you? How do they get in touch?

Speaker 2 (57:06):
Very simply: you go to our website, levity.ai, and yeah, fill out the form or email us.

Speaker 1 (57:13):
Awesome. Ben, do you have any other AI-related thoughts to wrap up the conversation?

Speaker 3 (57:17):
Yeah, I mean, we should probably talk the rest of the afternoon. I have lots of other thoughts and questions. Like once a week I want to send you a 45-minute video on what I'm thinking, to get his thoughts on it. Because it's exactly this: every time I see something, I'm like, this would be a really big time

(57:37):
saver and add a lot of value. But then I don't have the technical expertise to answer the second question of, well, if it saves six hours a week, is this a 200-hour project or a 12-hour project? And even if it's 12 hours, am I getting 90% accuracy, 60% accuracy or 99%? Because if it's not really doing what I think it can, that

(57:59):
changes the whole math, right? It's like, hey, this could be a good bargain, or maybe not a great bargain if you've got to invest 10 times the amount of time to get back out what you think you put in, and it can't do it to the degree you need it to yet. So it's interesting times, man. For anyone out there, definitely dig into these things, play with some of these tools, and I encourage everybody to use them on things where you

(58:23):
can go and check the answers, to really find where it is working and where it isn't. When you do really big projects, it's very easy to go, oh, it wrote me a large report, it must all be right. But you actually have to do the work of reading it line by line, then going back and asking, wait, where was this right? Where wasn't it right? What is true, what isn't true?

(58:44):
Because it's definitely not perfect, and it's definitely not 100 percent in a lot of these cases.
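Ben's accuracy question can be folded into the same break-even math: if the tool is only right some of the time, every wrong output costs review and rework, which eats into the gross hours saved. A rough way to model that, with made-up numbers (the 1.5x rework factor is an assumption, not a measured figure):

```python
def net_hours_saved(gross_hours_saved, accuracy, rework_per_error_hour=1.5):
    """Hours actually saved per week once errors have to be caught and fixed."""
    error_share = 1.0 - accuracy
    rework = gross_hours_saved * error_share * rework_per_error_hour
    return gross_hours_saved - rework

for acc in (0.99, 0.90, 0.60):
    print(f"{acc:.0%} accurate: {net_hours_saved(6, acc):.1f} of 6 hours/week actually saved")
# 99%: 5.9 hours; 90%: 5.1 hours; 60%: 2.4 hours.
# The same 12-hour project can be a bargain or a wash depending on accuracy.
```

Which is why checking the answers, as suggested above, is part of the math and not an afterthought.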

Speaker 1 (58:50):
Trust but verify, right? That's it. Yeah, exactly. Well, cool, Tilo, thanks for joining us again. We'll definitely get you on again later this year, and we'll see what things have progressed and how things have changed. It's super exciting. This is literally the future of our industry, and I'm super excited to see everything that it does. So thanks again for being with us.

(59:10):
Anything you want to wrap up with? No, thank you for having me.

Speaker 3 (59:14):
Awesome. Ben, final thoughts? Whether you believe you can or believe you can't, you're right. And until next time, go Bills.