Episode Transcript
Available transcripts are automatically generated. Complete accuracy is not guaranteed.
(00:00):
An AI native solution, the framework is not just that AI is replacing what a human is doing, but how would you design the model with AI in mind?
I think most of the material benefit you're gonna see is when you clean sheet any process to be like, how would I design this process knowing all the AI tools I have from scratch and how do I use both technology and humans.
And by the way, I think the example for that is gonna involve both for a long, long time.
(00:23):
In fact, I think humans are a core part of this solution.
I think in Invisible, we believe that's the human machine interface where all the value sits.
But it's not necessarily just giving all your people on an existing process a tool.
It's redesigning the process to use all the tools at your disposal.
So let's talk about Invisible.
Give me some specifics on how the company is doing today.
I joined in mid January.
(00:44):
We ended 2024 at $134,000,000 of revenue, profitable.
We were the third fastest growing AI business in America over the last three years.
So how will DeepSeek affect Invisible?
The viral story was that it was $5,000,000 to build the models they did.
The latest estimates that have come out since, in the FT and elsewhere, would say it's closer to $1,600,000,000.
(01:04):
I think the number that you've been citing from a compute standpoint is, like, 50,000 GPUs.
So if you had just told that narrative as the exact same story but with $1,600,000,000 of compute, I don't even think it would have been a media story.
The fact that it costs over a billion dollars to build that model means it is just a continuation of the current paradigm.
Look.
There are some interesting innovations they've had, mixture of experts, maybe some interesting
(01:25):
stuff around data storage that does have some benefits on reducing compute cost.
But I think those are things we've seen other model builders experiment with already.
If I think about types of data, they basically went after things that are base truth logic like math, where there's a fair amount of synthetic data available.
That's a fairly small percentage of the overall training tasks that I'd say most model builders are focused on.
(01:46):
Tell me more about that.
Think about training as kind of three main vectors.
So you have base truth information where a lot of synthetic or kind of Internet broad-based data exists.
So math is a really good example of that.
Then you have tasks like creative writing where there is no real kind of AI feedback and there's no synthetic data existing.
There's no way to train those models without human feedback.
But the most interesting one is you have a whole set of base truth information where you
(02:08):
also don't have enough synthetic data.
So an example of that I would give would be computational biology in Hindi.
The corpus of that is just not broad enough.
Each branch of that tree and each topic you train off of will have a different approach.
And tell me about what Invisible Technologies does exactly.
We have two big components of our business.
One is what I call reinforcement learning and feedback, which is the process where, on any topic where a model is being trained, we can spin up a mix of expert agents on that particular topic.
(02:32):
So that could be everything from, I'm gonna use the example of computational biology in Hindi.
Our pool has a 1% acceptance rate, and about 30% of the pool is PhDs and masters.
So these are very high end, specific experts.
But the funniest one I talked about recently is somebody who's an expert in, like, falconry in the eighteen hundreds.
Things where there's just not a lot of good, existing data.
And look, I think models are gonna be built on the full corpus of information that matters
(02:53):
to humanity.
So there's a lot of branches of that tree, and we bring all of the different experts to help train those models.
But that's only half the business. Where we're seeing increased focus and demand is on the enterprise side.
The big challenge today and the kind of chasm that exists between, let's call it, Silicon Valley and the enterprise is there's a demand for broad-based model development, which is really important.
(03:14):
But I think what a lot of the enterprise is looking for is how do I get those models to then work at 99% accuracy in my specific context?
Tell me about some examples of enterprise models that have worked.
Therein lies a great question.
The stat that I've seen most frequently cited is that about 8% of models today make it to production.
(03:36):
The two largest high profile public enterprise cases I've seen are Moody's had a chain-of-thought reasoning example.
And then probably the most often cited one, Klarna had a contact center where they basically built up an entirely Gen AI contact center to replace the old contact center they had.
The realized impact in the enterprise has not materialized the way people expected it would.
I am very bullish on where it will go, but to date, those are the only two examples I can cite.
(03:57):
I can cite some pretty public struggles, but there have not been many other realized examples that I see.
So there's hundreds of billions of dollars being put into this problem set, only two successful examples.
Where are the main frictions, and how do you see that evolving over the next five to ten years?
Most of that money to date has gone into building models that are extensible, generalizable, and moving towards greater levels of intelligent chain-of-thought reasoning.
We've seen unbelievable progress in that phase of the model building process.
(04:19):
The challenge is, let's say you're an insurer and you need to build a claims model.
What you need to know is that your model works with perfect accuracy, or 99% accuracy.
The investments have led to material improvements.
It's that the motion of taking those models and fine-tuning them in an enterprise context has not been standardized yet.
The motion of how do I deploy a machine learning model with accuracy, you've seen a
(04:41):
bunch of really good examples of that, like straight-through processing of mortgage loans is one example where those are being productionized.
They're working.
There's a ton of examples of impact coming from machine learning deployments.
AI has not really figured out what I call its production paradigm yet.
The OpenAIs, the Anthropics, the xAIs of the world are developing these incredible generalized models.
And then you only have really two use cases for enterprises.
(05:02):
You mentioned fine tuning.
What are the other steps that a company needs to go through in order to make their AI work?
Let's take an asset manager that is gonna build a system to do ongoing reviews of its assets based on its internal investments.
Right?
The first step is you need all your internal data organized and structured and in a place where you can use it and access it.
That's probably the biggest challenge most people face. There's a joke I like to say
(05:22):
that when good AI meets bad data, the data usually wins.
And I think the challenge is, if your internal data environments, if you don't have a clear definition of your assets, your products, if you don't have kind of what I call organized core data domains, it's very hard to even use AI until you've got that sorted.
That's probably the biggest challenge I think the enterprise faces right now: most of the data is on systems that are a decade or more old.
(05:44):
It's not organized or mastered across those systems in a way that they can use it in general.
So that's one big problem.
I think the bigger issue then is, so let's say you get that all organized.
David, I'll put you on the spot with an example.
So let's say that you built a model, a Gen AI model, to produce summaries of investments in the financial services space and just kinda look at new investment ideas.
How would you build a chatbot to do that?
You spend money to do it.
(06:04):
At the end of that, you have a model that will start generating these kinds of investment memos.
How would you define a good memo or a bad memo at scale?
So let's say it generates 10,000 memos.
How do you know it works?
If you run a fund, then you know.
Having instant access to fund insights, modeling, and forecasting is critical for producing returns, which is why I recommend Tactic by Carta.
(06:25):
With Tactic by Carta, fund managers can now run complex fund construction, round modeling, and scenario analysis.
Meaning, for the first time ever, you could use a single centralized platform to do everything from analyzing performance to modeling returns with real industry benchmarks.
Most importantly, communicate it all with precision to your LPs.
(06:48):
Discover the new standard in private fund management with access to key insights and data-driven analysis that could change the trajectory of your fund.
Learn more at carta.com/howIinvest.
That's carta.com/howiinvest.
It's difficult to do at scale at 10,000, but I think on an individual basis, a good model is a model that hits all the points and then has more clarity and more details on the subpoints.
(07:13):
So I would evaluate it based on, did it get all the main key points of the investment thesis at a high level, and then were the sublevel points sufficient or did they cover the main topics?
What you're saying makes complete sense, which is you have a set of parameters or a set of outcomes you're looking for in the memo.
Even if you have that though, the question then becomes, how do you evaluate that
(07:33):
consistently across 10,000 memos?
And I think this is the difference between backtesting of an ML dataset versus Gen AI.
You need a way to actually go back and validate that what is produced works.
And I think that has been the real challenge that the enterprise has struggled with: you may have a sense for what good looks like.
You might say, for example, the definition of a good investment memo would be, like, at least a paragraph summary of the competitive set, some context on the market including growth rates.
(07:56):
Like, you could set a set of parameters that you're looking for it to answer.
But then you have to wade through and kind of assess all that.
And so what we spent the last eight years doing for the model builders and others is building what's called semi-private custom evals, where we effectively set parameters privately that would say these are the definitions of good.
These are the outcomes we're looking for.
And then we use human feedback to score those parameters.
(08:17):
So we could go at big scale and say, does this outcome cover what you're looking for?
And we bring subject matter experts to bear to actually do that scoring.
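To make the rubric-plus-human-scoring idea above concrete, here is a minimal sketch in Python. The criteria, the 0.8 pass threshold, and the function names are illustrative assumptions rather than Invisible's actual tooling; the point is only that subject matter experts score each parameter and those scores roll up into a pass rate across a large batch of outputs.

```python
# Minimal sketch of a rubric-based eval scored by human experts.
# Criterion names, the threshold, and these helpers are illustrative only.
from dataclasses import dataclass
from statistics import mean

@dataclass
class Criterion:
    name: str
    description: str  # what "good" looks like for this parameter

RUBRIC = [
    Criterion("competitive_set", "At least a paragraph summarizing the competitive set"),
    Criterion("market_context", "Context on the market, including growth rates"),
    Criterion("thesis_coverage", "Covers the main points of the investment thesis"),
]

PASS_THRESHOLD = 0.8  # assumed bar; a real eval would calibrate this

def memo_passes(expert_ratings: dict[str, float]) -> bool:
    """expert_ratings maps criterion name -> 0.0-1.0 score from a human reviewer."""
    scores = [expert_ratings.get(c.name, 0.0) for c in RUBRIC]
    return mean(scores) >= PASS_THRESHOLD

def pass_rate(all_ratings: list[dict[str, float]]) -> float:
    """Fraction of generated memos that meet the bar across the whole batch."""
    return sum(memo_passes(r) for r in all_ratings) / len(all_ratings)

# Example: two memos, each rated by an expert against the rubric.
print(pass_rate([
    {"competitive_set": 1.0, "market_context": 0.9, "thesis_coverage": 0.8},
    {"competitive_set": 0.4, "market_context": 0.5, "thesis_coverage": 0.6},
]))  # -> 0.5
```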
I think that's actually been the big gap: these are often things you can't score with a random person off the street.
You can't just put it into the market and hope it works.
You need a subject matter expert to say this looks generally good before any organization
(08:37):
gets comfortable launching it.
One way I've seen enterprises do that, and I've seen a couple customers experiment with this already, is they'll actually have their own employees evaluating this at scale.
But if you think about the time suck of, like, having large numbers of people just reviewing, that's very hard to do.
So I think a lot of what we've now evolved to on the enterprise side is a mix of these kinds of evals and assessments of the models,
(08:59):
which we then use to help customers fine-tune and improve their models.
Is there a gap between what a generalist searcher might want and somebody domain specific?
In other words, if I'm making a hundred-million-dollar investment decision based on a memo, that has to be much better than if I wanna find out if, you know, dogs could eat a certain type of food and, you know, what's the best practice for raising a healthy dog.
Post-earnings reports are more than just a data dump.
(09:19):
They're a gold mine of opportunities waiting to be unlocked.
With Daloopa, you could turn those opportunities into actionable insights.
Daloopa has dynamic scenario-building tools that integrate updated earnings data, letting you model multiple strategic outcomes, such as what happens if a company revises guidance.
With automated sensitivity analysis, you could quickly understand the impact of key variables,
(09:42):
like cost pressures, currency fluctuations, or interest rate changes.
This means you'll deliver more actionable insights for your clients, helping them navigate risks and seize opportunities faster.
Ready to enrich your post-earnings narratives?
Visit daloopa.com/how.
That's daloopa.com/how today to get started.
(10:03):
One of the questions you're asking here is what is the bar, or the risk bar, for production depending on the use case?
And I do think it's different.
As an example, if the goal of a chatbot is just to, say, something like review restaurants, and it's consumer facing, the risk bar on that is, does it say anything toxic?
Is there any bias?
You can put some risk parameters around it, but you don't really need kind of subject matter expert feedback on it.
(10:24):
I'll give an interesting one.
Legal, like law. The bar for accuracy on that is materially different than a consumer example.
And many law firms are experimenting with this, but it's hard to assess something like a debt covenant agreement without subject matter experts weighing in at scale that the outputs are consistently good.
How do AI models evolve when the parameters change?
(10:45):
Is this something that'll always need to be refreshed?
There are two ways that models generally get consumed.
Many models that are consumed by consumers are effectively just gonna be what the model builders produced.
Usually, the way the enterprise is using these models is they're tailoring those models to their corpus of information.
I'll give an example.
Let's say that you have two wealth managers.
Let's say, I'm gonna make up two, Fidelity and T. Rowe Price.
(11:06):
And they wanna, you know, experiment with things like robo-advisory or question answering.
They're not gonna just use an off-the-shelf framework for that.
They're gonna tailor it off of all of the information that exists in their communications and their training documentation.
Any model that's being trained at the enterprise is usually being trained off of the internal knowledge management corpus that that institution has.
And so you're using the large language model from the model builder, and you're tailoring it
(11:28):
to your specific context.
That process is called fine-tuning.
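As a rough illustration of that fine-tuning step, the sketch below turns internal question-and-answer material into the JSONL chat format that OpenAI-style fine-tuning jobs accept. The system prompt, file name, and example pair are hypothetical; the real work is assembling pairs from the institution's own knowledge management corpus.

```python
# Sketch: preparing supervised fine-tuning data from an internal corpus.
# The prompt, file path, and sample pair below are hypothetical.
import json
from pathlib import Path

SYSTEM_PROMPT = "You are an assistant for our wealth management firm. Answer using our internal guidance."

def build_example(question: str, internal_answer: str) -> dict:
    """One supervised example: the model learns to answer the way the firm's documentation does."""
    return {
        "messages": [
            {"role": "system", "content": SYSTEM_PROMPT},
            {"role": "user", "content": question},
            {"role": "assistant", "content": internal_answer},
        ]
    }

def write_training_file(pairs: list[tuple[str, str]], out_path: str = "finetune_train.jsonl") -> None:
    with Path(out_path).open("w", encoding="utf-8") as f:
        for question, answer in pairs:
            f.write(json.dumps(build_example(question, answer)) + "\n")

# Hypothetical pair drawn from internal communications / training documentation.
write_training_file([
    ("What is our recommended rebalancing cadence?",
     "Per internal policy, portfolios are reviewed quarterly and rebalanced when drift exceeds 5%."),
])
```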
Prior to becoming CEO of Invisible, you headed McKinsey's QuantumBlack Labs, which is their AI labs.
What did you do at McKinsey?
I focused on three main things.
One, I had all of our large-scale data transformation, data lakehouse, data warehouse builds.
So the first thing I mentioned, which is if your data is messy, it's very hard to use AI.
I spent a lot of time focusing on that.
(11:49):
I spent a lot of time doing custom application development, so building all sorts of different applications, whether that be for retention, pricing, contact centers, kind of custom software that people could use to deploy models.
And I do think that's an understated part of a lot of this: there is what a model does, but then there is the way that somebody can understand it and interpret it.
And a lot of that is the user interface by which they consume it.
And so I think that's something the enterprise is spending a lot of time on: what is the
(12:12):
user interface by which people consume and think about and make decisions around these models.
And in the third area, I oversaw the Gen AI Lab, which is McKinsey's kind of global Gen AI tool build.
When I was there, we were doing anything from 220 to 240 Gen AI builds at the top.
I wanna double click on the enterprise side of what you did at McKinsey.
You mentioned those two high profile use cases for successful enterprise builds.
(12:34):
Did you build any successful enterprise use cases while you were at McKinsey?
We definitely did.
There's a public case you can reference that a couple folks in QuantumBlack built for ING, where it's effectively a chatbot.
And one of the things they mentioned in it, very similar to what I'm saying now, is a lot of what was required to put that in production was getting it to 99% accuracy.
So there were a lot of parameters and fine-tuning we did around testing, quality controlling it,
(12:56):
building audit elements to make sure that the outcome is good.
We definitely did a lot of that.
But the rough math you see across the industry is about 8% of Gen AI models make it to production.
And that's broad-based.
Like, the amount that kind of stalls around the proof-of-concept pilot phase is pretty material.
I think it will get better over time.
But to date, the challenge has been a lot of the things I mentioned: challenging data, an
(13:17):
unclear definition of good, and an unwillingness from the folks in the field to actually use and believe the outcomes of the models.
And I think that's gonna take time.
There's this concept in the productivity space, which is 80% done is a hundred percent good.
Is there, like, an eighty-twenty rule here where you could use AI to solve many things and dramatically decrease your need for sales representatives, for customer support?
(13:40):
And does it have to be a hundred percent good?
That is a really complicated question.
So there's another analogy I shall use, which is manufacturing lines in a factory.
And so I ask you the question: if I have 10 people on a manufacturing line and every one of them saves 5% of their time, what is the line savings?
I'm guessing half a person.
Zero.
(14:01):
Because you can't take out half a person; no person can be taken off that line.
You effectively just move to a world where everyone has a little bit more free time.
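The arithmetic behind that answer, spelled out as a tiny sketch using the hypothetical numbers from the example:

```python
# 10 workers each saving 5% of their time frees 0.5 of a full-time equivalent,
# but only whole people can come off the line, so realized savings round to zero.
import math

workers = 10
time_saved_per_worker = 0.05  # 5%

fte_freed = workers * time_saved_per_worker   # 0.5 FTE, spread thinly across everyone
removable_headcount = math.floor(fte_freed)   # whole people you can actually take off the line

print(fte_freed, removable_headcount)  # 0.5 0
```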
And I think that's the challenge of the eighty-twenty here: things like Copilot have had a very interesting kind of last two years, in that they're helpful, coding Copilot, legal Copilot, all these things.
(14:21):
But it's unclear the degree to which they actually save any work.
They kind of tweak a lot of things on the margin.
And I think the difference that I'd say in the eighty-twenty is, I think to do that well, you actually have to reengineer processes.
You have to say, what does my end-to-end workflow look like for claims processing or whatever that might be?
And how do I take out two full steps to actually get to a better level of efficiency?
That's hard to use a software tool for.
(14:42):
You need kind of people on your team to think about the workflow design.
You need to redesign the actual process flow.
That's been a bit of a challenge of the last few years: a lot of people have just focused on all different types of copilots across all different industries.
And I think that's helpful, but I think the next phase of this is actually process redesign and moving to ways where you can actually totally restructure the way the line works, as an example.
(15:03):
An AI native solution.
The framework is not just that AI is replacing what a human is doing, but how would you design the model with AI in mind?
Most of the material benefit you're gonna see is when you clean sheet any process to be like, how would I design this process knowing all the AI tools I have from scratch, and how do I use both technology and humans?
And by the way, I think the example for that is gonna involve both for a long, long time.
(15:24):
And humans are a core part of this solution.
I think in Invisible, we believe that's the human machine interface where all the value sits.
But it's not necessarily just giving all your people on an existing process a tool.
It's redesigning the process to use all the tools at your disposal.
Thank you for listening.
To join our community and to make sure you do not miss any future episodes, please click the
(15:44):
follow button above to subscribe.
So let's talk about Invisible.
Give me some specifics on how the company is doing today.
We ended 2024 at $134,000,000 of revenue, profitable.
We were the third fastest growing AI business in America over the last three years.
You just joined as CEO.
What is your strategy for the next five to ten years, and how do you even conceptualize a
(16:05):
strategy given how fast the industry is changing?
We've had explosive growth in the current kind of core of the business, which is AI training, and we plan to continue to focus on that.
Our goal is to work with all the model builders to get these models as accurate as possible and support them any way we can with lots of human feedback.
So if you think about what Invisible has there, we have this kind of AI process platform where we tranche out any individual task into a set of stages and then insert kind of feedback
(16:28):
analytics in all of those different steps.
We then have the AI training and evals motion I described, which is a set of modules.
On the back of that, we have a labor marketplace where we can source all of those 5,000 different expert agents on any given topic.
The core of that will remain our focus.
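As a loose sketch of the "tranche a task into stages with feedback in each step" idea, here is one way that shape could look in Python. The stage names, the route_to_expert stub, and the example flow are assumptions for illustration, not Invisible's actual platform.

```python
# Sketch: a task flows through ordered stages; flagged stages route their
# output to a human expert before the next stage runs. All names illustrative.
from dataclasses import dataclass
from typing import Callable

@dataclass
class Stage:
    name: str
    run: Callable[[str], str]          # the automated step (model call, transform, ...)
    needs_expert_review: bool = False  # whether a human checks this stage's output

def route_to_expert(stage_name: str, output: str) -> str:
    # Placeholder: in practice this would go to a review queue / labor marketplace.
    print(f"[review] {stage_name}: sending output to a subject matter expert")
    return output

def run_pipeline(task: str, stages: list[Stage]) -> str:
    artifact = task
    for stage in stages:
        artifact = stage.run(artifact)
        if stage.needs_expert_review:
            artifact = route_to_expert(stage.name, artifact)
    return artifact

# Example: a three-stage flow for drafting and checking a document.
pipeline = [
    Stage("draft", lambda t: f"draft for: {t}"),
    Stage("fact_check", lambda d: d + " [checked]", needs_expert_review=True),
    Stage("format", lambda d: d.upper()),
]
print(run_pipeline("claims summary", pipeline))
```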
The shifts I envision are kind of twofold.
One, deepening our focus on using that for fine-tuning in the enterprise.
This is something I think all the model builders are hopeful for as well: the
(16:50):
more that we can help all of the enterprise clients figure out how to get the most out of the model builds they're focused on, how to get those working, that's better for everyone.
Everyone is hoping to see many more examples.
And I, by the way, am very, very optimistic that over the next five, six years, we're gonna see many, many more examples of great Gen AI use cases in production.
It's just been, I think, a period of learning.
The last few years have been kind of a proof of concept phase for the enterprise.
(17:11):
Really helping the enterprise get these use cases into production is a core focus for us.
The other big area that I'm gonna evolve Invisible into, the analogy I would use is we're gonna build a modern ServiceNow anchored in Gen AI.
So Invisible's process platform will include much more data infrastructure.
It'll include an application development environment and process builder tools, and it'll include our kind of really, really good services delivery team
(17:34):
around that.
So one belief I have is that it's very hard to do any of this with the push of a button.
I think the age of software has kind of relied on the idea that you build something and people take that as is.
And I think AI is much more around configuring and customizing different workflows exactly for what any given customer wants.
You can envision what Invisible will evolve into as kind of our AI process platform with lots of process builder tools where people can build very
(17:57):
sector specific applications, like claims for insurance, or onboarding for food and beverage, or fund admin for private equity.
So you'll have a bunch of different verticalized use cases we'll go after and a lot of really interesting core data infrastructure tools, like data ontology, master data management, things like that, to help people get their data working.
How do you avoid being the victim of your own success?
(18:18):
So you come into enterprises, you streamline their AI models using the services model.
How do you avoid making yourself obsolescent?
Funny piece of context out there: 70% of the software in America is over 20 years old.
The rate of modernization of that has been glacially slow.
I know there's been a lot of kind of hype that says suddenly the whole world is gonna be hyper modern.
Everything's gonna work in two years.
(18:38):
I think this is a long journey over the next two decades where we get to a world where every enterprise runs off of modern infrastructure, modern tech stacks, and functions much like the digitally native companies that have been built over the last five years.
That will take time to get to, but I'm very excited about what our platform can do to enable that.
Said another way, your total addressable market is every enterprise, minus the two that have
(19:00):
built models.
Even in those two companies, I'm sure they're looking to streamline other parts of the business.
I think that's right.
Interesting thing: if you look at what I would call the application modernization market, so all of the modernization of legacy systems that happens annually, no player right now is more than 3% of that.
So it's actually a very fragmented market that is painfully slow in how it moves, and it's the
(19:22):
main frustration point for most enterprises.
Like, if you ask the average CEO in any company that's over ten years old how happy they are with their core data, the kind of tools they use on a daily basis, most are pretty frustrated.
So I don't think this is something where what exists is really good and everyone's really happy.
There's a lot of frustration that we are hoping to help fix.
And I think Gen AI will be the root of doing a lot of that.
(19:42):
I think there's a lot of tooling you can do to generate insights faster, to pull up reporting faster.
And so we will be a Gen AI native kind of application development platform.
You have a very unique vantage point in that you're the CEO of one of the fastest growing AI companies.
You ran McKinsey's lab.
Walk me through the AI ecosystem today in terms of how you look at the ecosystem.
(20:03):
I've talked to a bunch of VCs about this in the past couple days.
The infrastructure layer is where most of the capital is going today, and that's a mix of kind of things like data centers as well as the model builders.
And you asked about the gap from kind of investment today to enterprise modernization.
The challenge is that above the infrastructure layer, you have what I call the application layer, which is individual tools for individual use cases.
Right?
And that could be, I mentioned claims, it could be legal services.
(20:26):
It's all the verticalized applications that exist anchored in Gen AI to solve problems.
All of those applications today, for the most part, are SaaS or traditional software based.
So they are designed, like all software of the last twenty years, to be a kind of push-button deployment of a specific use case that functions like traditional SaaS software.
I am skeptical that that is actually gonna be the way that impact is realized with Gen AI for a
(20:48):
couple different reasons.
Software as a paradigm has existed that way because the idea was it took so long to get data schemas organized and structured, and it took so long to build any custom tool, that you had to invest all the money up front in building a perfect piece of software.
Once you got data locked in on that software, it was very hard for anyone to ever migrate off of that.
The term for this is your systems of record.
(21:09):
Once you're locked in on any sort of a system of record, whether that be an ERP system, whether it be an HR database, you basically never leave as an enterprise because the data is really painful to move.
And so that's been the conventional wisdom on how to build software for a long time.
You've had some really public examples.
Satya Nadella mentioned it.
What Gen AI may enable is a movement where the value moves from the system-of-record layer to
(21:33):
the agentic layer.
So you actually move to a world where people don't stay on software that's sticky just because of the data.
They actually want the best possible software for their specific institution.
So you might have a world where people are building tooling that is much more custom to their enterprise.
You might have a world where I have a React workflow that uses analytics that are customized to my enterprise in a cloud environment, and I can stand that up in a
(21:55):
couple months.
And I think that paradigm is a very different way forward for technology.
Now I'm sure there are some that would dispute that.
I'm sure there are some that will say software will exist as it always has.
But I would say that the main feedback I heard from a lot of VCs is that most of the application layer today focuses on the standard software paradigm.
And I think we're looking at something very different, which is we wanna have kind of an
(22:17):
application development environment with a lot of configurability and customizability, the ability to build verticalized applications for specific sectors.
That will allow us to say, not this is our tool, take it, but much more what is the workflow you need.
Let's bring that to life.
Let's say a Fortune 500 company is looking to create a CRM.
What would an AI native CRM look like for a Fortune 500 company versus just using a
(22:41):
Salesforce?
What you would usually end up doing in that world is you'll look at Salesforce or Dynamics or ServiceNow or one of these, and you will buy out-of-the-box functionality.
Like, you'll buy, let's say, their contact center tooling.
But then you will end up customizing a fair amount of that to your enterprise.
So you'll say, my contact center is gonna have this flow for services, this type of calls.
And so even though you're buying the tool, you're gonna spend a year customizing and
(23:05):
configuring for your workflow.
CRM is a little bit different in that you do have several large players, you know, Salesforce, Dynamics, and ServiceNow, that have built fairly good builder applications for that use case.
If you're successful as CEO of Invisible, what will the company look like in 2030?
So I use the analogy of Salesforce.
I do have a ton of respect for what they've done.
(23:26):
My main North Star metric is that every Gen AI model we work on will reach production.
And so I'm really excited about working with all the model builders over the next several years to continue to fine-tune and train their models and get that working at huge scale in the enterprise.
And I think that's something that we will be a huge driver of.
What would you like our audience to know about you, about Invisible Technologies, or anything else you'd like to share?
Well, I don't want any of what I've said to come across as pessimistic.
(23:47):
There's nobody that believes more than I do that AI will be positive for the enterprise over the next five to ten years.
I think the last two years did not live up to the hype cycle partly because there was a belief that you could just buy a product out of the box, push a button, and suddenly all your Gen AI will work.
My kind of advice, or view on the path forward, is I don't think that will be the paradigm.
I think every enterprise will have to build some capabilities around what do I wanna get
(24:09):
out of these models, how do I train and validate these models, how do I make sure my data is adequately reflected in these models.
And that's a very doable thing.
When we sit here five, ten years from now, there'll be some really exciting deployments of this.
Like, the ability to stand up new software, new digital workflows, companies based on Gen AI is gonna expand significantly.
But I do think it's been a bit of a reality check for the last two years that, you know, this is
(24:30):
not like I just stand up a piece of software, push a button, and everything works.
How should people follow you and Invisible?
You can add me on LinkedIn.
I'll be posting about some of the updates we'll be having there.
And we're building kind of a, I call it, data and insights function at Invisible as well.
We're gonna start to bring as much of the truth that we're seeing, what's exciting, and what we recommend to our enterprise clients, so we can help them navigate what is a
(24:51):
very complex and difficult world.
Thank you for listening to my conversation with Matt.
If you enjoyed this episode, please share with a friend.
This helps us grow and also provides the best feedback when we review the episode's analytics.
Thank you for your support.