All Episodes

January 6, 2026 44 mins

Get Noticed! Send a text.

If you’re excited about building with AI—shipping apps, spinning up agents, or using “vibe coding” tools like Replit, Lovable, or n8n—this episode will change how you think about risk, security, and long‑term value.

In this conversation, Jim James sits down with Dave Horton, VP of Solutions at Airia (AIRIA), to unpack the hidden risks behind today’s AI gold rush—and how to keep innovating without accidentally putting your customers, your IP, or your investors at risk.


Why you should listen
1. The “Oh no…” real‑world AI failure story

Dave shares a true story of a company using an AI coding platform where:

  • The production customer database was deleted/truncated
  • The AI denied doing it
  • The team had to forensically unpick what happened to recover the data

If you’re letting AI touch prod data or infrastructure, this story alone is worth the listen.

2. Guardrails, not guesswork: How to build safely with AI
You’ll hear:

  • How to use agent constraints so AI can’t drop tables, delete databases, or leak sensitive info
  • Why “just ship it” with AI agents can quietly build massive compliance and security debt
  • How Airia acts as an integration + orchestration layer across Microsoft, Google, AWS, Salesforce, ServiceNow, and more

Perfect if you’re a founder, CTO, or builder who wants speed and safety.

3. Compliance made real: GDPR, EU AI Act, HIPAA & beyond
Dave breaks down:

  • Why AI agents typically do cross‑border data transfers (often across 10+ countries)
  • How that collides with GDPR, HIPAA, FCA, EU AI Act, and others
  • Why a single breach could trigger multiple fines from multiple regulators

If you ever plan to raise serious money or sell into enterprise, this is essential listening.

4. What VCs are starting to ask about your AI stack
We cover:

  • How investors now view AI as a distinct risk vector in due diligence
  • The thorny IP questions when your product is built with or on top of LLMs trained on unknown data
  • Why business continuity, backups, and DR still matter even in a “no‑code / AI‑built” world

If you want your AI startup to survive due diligence, listen to this.

5. AI under attack: Red teaming and “AI pen testing”
Dave explains:

  • How prompt injection, data exfiltration, and DLP abuse really look in practice
  • How Airia uses swarms of attacking agents to red‑team your own agents before launch
  • Why you should schedule recurring tests as models and data drift over time

Riverside - Your online recording studio
The easiest way to record podcasts and videos in studio quality from anywhere. All from the browser.

Search Engine Optimisation from the UK
Rank higher on Google with SEO. Fill out the form to receive a FREE quote.

Look Great with AI enhanced headshots
Headshots you can actually use. 16 million headshots made for over 50,000 Fortune 500 executives

Disclaimer: This post contains affiliate links. If you make a purchase, I may receive a commission at no extra cost to you.

Support the show

Subscribe to my free newsletter

https://www.theunnoticedentrepreneur.com/

Mark as Played
Transcript

Episode Transcript

Available transcripts are automatically generated. Complete accuracy is not guaranteed.
Jim James (00:00):
Hello. If you're as excited as I am, and the whole
world is about AI and aboutcoding and building apps that
can change your business andchange the world, then you may
or may not be aware of some ofthe risks that you're facing
while you're doing it. So veryhappy today to have Dave Horton,
who's the VP of solutions at acompany called Aria, joining me.
Dave, welcome to the show.
Thanks so much for having me.

(00:23):
Well, I'm excited because I'minto the AI. I'm trying coding,
I'm building apps, but I alsoknow that there are some risks
associated with building, if youlike, in an unprotected
environment. What's yourexperience of what is happening
in the real world when peopleare building out apps?

Dave Horton (Airia) (00:41):
Yeah, I mean, you're exactly right. We
we see a lot of innovation, alot of excitement around AI,
what it can do for us as apersonally, as well as a
company. But really, to speak onthe innovation without speaking
about some of the risks involvedis, you know, where we can
really kind of have aconversation about, well, what

(01:01):
are the options to make this asafe and secure innovation,
rather than one that sprawls andmore risk and more danger?

Jim James (01:08):
Now risks, on the whole, are not publicised. So
before we dive into, you know,Aria and what, what ARIA does,
can you give us a couple ofexamples of maybe big companies,
small companies, or even evenpeople that have built things
and it's gone wrong, and there'sbeen maybe a financial cost to
them, yeah, absolutely.

Dave Horton (Airia) (01:26):
I mean, interestingly, this, you know,
the way that we are with newswhen something does go wrong is
very widely known, and sothere's a lot of really
interesting examples where, youknow, we've had this innovation
come along and it's producedmaybe unexpected results,
negative and positive and so,you know, some really good
examples I can think of, youknow, replit So, you know, was a

(01:48):
vibe coding software platforminnovation is, is incredible.
You can, with natural language,build out some simple
applications. And ultimately,without having to employ dozens
and dozens of developers, you'reable to get a working
application that links with yourdata, and you're up and away.
Now this is also kind ofinteresting, because where you

(02:12):
have aI calling the shots,making calls on how to connect
to databases, how to leveragethe data that you're pushing in.
Replit was a really interestingexample of where it can also
unexpectedly go wrong. So we sawa scenario where a company was
using replit, they were buildingout an application. Everything

(02:35):
was working great, you know?
They were kind of innovating.
They were making new versions oftheir platform, but for some
reason, there was an issue wherethe database, which contains all
of the production informationabout their customer base,
deleted or truncated from, youknow, from that data set, and
ultimately, they didn'tunderstand why. And so when

(02:55):
querying the the AI like,where's my data gone?
Interestingly and kind of on atangent, the AI actually lied
and said it didn't do anything.
It didn't delete anything. Andso the company had to go and do
a bit of investigationthemselves and discover exactly
what happened. And so unpickingwhat an AI has done, why it did
something in a certain way wasreally kind of an interesting

(03:17):
case study in how things canunexpectedly go wrong. The net
result of it was that, ofcourse, the application data was
lost, but it took quite a lot ofeffort to retrieve that
information and get back to abusiness as usual kind of norm
after after the fact.

Jim James (03:35):
So do you think a lot of people are using sort of
these vibe coding platforms, youknow, almost an irresponsible
way, because it seems so easy,doesn't it, to build out
something. What's your view on,on, like, the responsibility
that people need to take? Idon't,

Dave Horton (Airia) (03:51):
I don't think it's irresponsible to use
platforms to help you to buildbut I think you do need to be
aware of some of the risks ofassociated so, you know, in that
example, what could have? Couldhave been a safer way to maybe
use the platform, and you know,there are technical measures
that you could put in place. Soinstead of having the AI execute
a new version of your code oraccess a production database or

(04:14):
have certain commands to deletethat data, what if we put some
guardrails in place? What if weput some constraints on what
that agent has the capability todo, we could have mitigated a
particular issue. Butironically, you don't know it's
an issue until it's becomeeither widely known publicly or

(04:35):
you've experienced that Falloutyourself, and so and is the case
with AI and many otherinnovations before it's people
aren't aware of the risks untilit actually happens. It's like,
Oh, that is quite unique. Thatis quite interesting. How that
occurred and we didn'tanticipate it.

Jim James (04:51):
So you just come from a black hat conference, like a
three day conference. What aresome of the like, the trends
that you. People are talkingabout when it comes to coding
and AI and the adoption byenterprises and entrepreneurs.
Yeah.

Dave Horton (Airia) (05:07):
I mean, it was an interesting conference
because there are lots of legacysecurity vendors there, and just
like with any new innovation,slapping an AI badge on your
existing legacy products doesn'tnecessarily mean that you're
solving AI security issues. Andso, you know, part of our task
this week was kind of almost reeducating, you know, people that

(05:30):
came along to speak to us aboutwhat it actually meant to be an
AI security platform. But I'dsay the thing that people are
most interested in when it comesto AI is, well, what are the new
threat factors? What are the newissues that you know we need to
anticipate? Because, again, justlike the replit example, really
looking, looking to the vendorsfor a bit of a Knowledge Quest,

(05:52):
a bit of a an education on whatdo I need to know? What do I
need to be concerned about? Andso for a lot of people I spoke
to, it was really they weren'tnecessarily interested in buying
a new product. They were reallyinterested in kind of what
problems we're seeing from ourcustomer base and how we're
solving those problems for them.

Jim James (06:09):
So you mentioned sort of legacy vendors that are maybe
from security, like the softlosses of this world and the bit
defenders, and then you've gotthe new AI players. How does
ARIA fit in to that becausepresumably some enterprises and
CEOs and CIOs are looking atkind of their existing vendors
and saying, Can you can yougraft on traditional security

(06:31):
onto our new AI? And some peoplesay, Well, we've just adopted
this new AI. How can we make itsafe? So how does ARIA play into
that space?

Dave Horton (Airia) (06:39):
So the way that we've looked at the market
is that many companies havedozens and dozens and dozens of
different technologies. They'renot just all developed on a
Microsoft stack or a Googlestack or an AWS stack. They've
got multiple differenttechnologies. And so where we've
always played very well is bybeing a bit of a Switzerland of

(07:00):
the space. We're not tied to anyone particular monolith. We can
play with anyone. If you've gota Microsoft product here that
you want to connect to, yourGoogle model there, if you've
got tools in Salesforce orservice now that you want to
also connect with essentially,we're a unique platform in that
we can, without politics makethat happen for many

(07:22):
organisations. That's reallykind of the first play is that
we're really much theintegration layer for a lot of
these enterprises that you know,have acquired technology over
the last 20 years, and they'vehad mergers and acquisitions,
and they've got a multiplex ofdifferent technology platforms
in their in their purview,right?

Jim James (07:43):
I see, so really you've got, like this
orchestration platform, thenAvenue where they can the tech
team can develop things,plugging in the legacy data,
incorporating some of the new AIagents. And then how does that
work, in terms of interfacingthose new apps to, for example,
HR or customer service or tomarketing, because the

(08:05):
deployment of those apps, thenis really where the money is to
be made by people, isn't it?

Dave Horton (Airia) (08:10):
Yeah, absolutely. And I think, you
know, another angle that I'd saywe've got a distinct advantage
over the bigger players is that,if there is a new innovation,
I'll give you an example, like ato a as a standard or model
context protocol from anthropicMCP. You know, a lot of these
acronyms did not exist 12 monthsago. And so where the big

(08:33):
monoliths struggle is kickingout new products for new
features on day of release orwithin the first few weeks of
release for even and so again,because we're agile, we can
really put some of that R and Dto good use. Customers can
benefit from, you know, beingable to get the latest and
greatest from from the platform.
But you know, apart from that,you know, one thing that we are

(08:55):
acutely aware of is, if I'mlooking to, let's say I'm in HR,
or I'm in legal, and I want toinnovate, you know, with AI,
we're really trying to helpcitizen AI within the business.
So it's actually a business userinitiative, typically, that is
actually driving how they wouldlike to use AI. It's not the CIO

(09:17):
necessarily, or it, in fact,they probably would rather not
get involved in some someregard. And so what we've tried
to do is also build the productaround a user that maybe is not
technically savvy, maybe theydon't exactly know what an
integration or an API would looklike into their specific data
sets. And so really building aplatform that's very simple to

(09:40):
use, even for people that arenot of the IT world.

Jim James (09:46):
Dave, just we've mentioned ARIA a little bit.
Just tell us, then a little bitmore about the company. Then,
how has ARIA come about, andwhere does it sit in the overall
space? Obviously, it's aplatform other people can use to
plug in different tools, almostlike in the old days we. You
have what we call middleware,just tell us a little bit about
the background. Then for ARIA,where's it come from and where's
it

Dave Horton (Airia) (10:05):
going to Yeah, absolutely, I think kind
of got a unique story. So, youknow, we've been developing the
platform for two years, andreally, we've only been out of
stealth just over a year. Andyou know, for a company that has
200 employees, that's a reallysteep growth in a year, and I
would attribute that to the DNAof the company. So the senior

(10:27):
leadership within the areacompany is actually spawned from
two previous successfulcompanies that have gone on to
be wildly successful in theirspecific domain. So one was
AirWatch, which was mobilesecurity platform. It
essentially allowed you to getemail on your iPhone when you
know BlackBerry was, was thekind of the product of choice

(10:49):
for email and so again, a lot ofthe problems that we're solving
for AI today are actuallylessons learned from you know
that innovation wave, where youknow consumer is really pushing
enterprise to develop newtechnologies, and so, you know,
our CEO senior leadership werecontributing members of that

(11:09):
company where GDPR and globaldata privacy regulations were
really driving some of thechallenges that enterprises
needed to cater To, but it alsogave us an appreciation to that
security and compliance efforts.
And so, you know, when we'vebuilt out the platform from day
one, we've already got a reallygood understanding of what

(11:30):
enterprises need from a, youknow, technology innovation way
through mobile, but also where,you know, the regulators in
Europe and you know, some of thenew challenges from a compliance
standpoint, might be introduced,and so it's kind of given us a
unique perspective in how webuild the platform, but also how
we might go to market with theplatform, with our customers.
Tell us

Jim James (11:51):
a little bit more about compliance, because we
were obviously GDPR, we've gotHIPAA, we know for healthcare,
we saw as Health Trust in theStates got, got breached in
there with data. It costs acouple of billion dollars
actually. What are some of theconsiderations then, if you're a
CEO CIO, and you're looking atintegrating new AI apps, and

(12:12):
then the development of

Dave Horton (Airia) (12:15):
them, yeah, it's kind of interesting. AI is
not a single application. Whenyou create an agent, you're
typically using a large languagemodel, and that might be hosted
in a different country to theone you're in. So for example,
if you try and build anapplication with open AI, you
know, more than likely you'regoing to the United States. And
so you know, in the GDPR, that'scalled a cross border data

(12:37):
transfer. And so you know, hasissues when you've told
customers that you're using atechnology and you didn't tell
them that it was going to crossborder, for example, that could
be a consideration. But also,when you're building these
agents, what are the downstreamtechnologies are you connecting?
Where are you sending that data?
When you ingest content thatmight help to be a data source
for the agent that is going togive your end users feedback,

(13:01):
where is that content sitting?
Where's that data source? And sowhat you see is, if you look at
a typical agent, it mightactually cross, you know, 10
different countries by the timeit's kind of giving you that
answer. And so a considerationis really mapping out, well,
where we're building this agent,rather than just going for the

(13:21):
default. Maybe we need toconsider, based on the
criticality of the data, whereexactly it sits, and whether my
end users would be happy orunhappy about us using that. And
if we, if we need to be verytransparent as well about what
that looks like.

Jim James (13:36):
Why is there a risk when data crosses borders?
Because, I mean, we were used tobuying, for example, cars where
parts have crossed borders. Allof our products have crossed
borders back and forth. For Whyis it? Why is Is it a risk? If
you're sending that data acrosscountries, what happens to it?

Dave Horton (Airia) (13:56):
Well, it's kind of interesting. I think the
the fear factor, let's say I'm apatient in the UK, and I, you
know, my doctor has kind ofpatient summary notes. For
example, it gets an AI tosummarise, you know, the
conditions I have, you know, gethave very personal information.
Now, it would be the same as ifthat doctor took the
transcription of ourconversation and kind of left it

(14:19):
on the streets. You know, Idon't know who's got access to
it. I don't know what's you knowwhat standards are in play. And
the fear when you go crossborder is, is the country that
I'm sending this data to of thesame standard as we have in the
UK or in the EU, for example.
And so it's really about datastandardisation, what what is
actually protecting that data.

(14:40):
So the GDPR is not just about orcan I share data, it's also
about, how do you secure thedata, what standards you have
for correcting the data, etc.
And so it's really a kind of aninsurance policy for your end
user about the standard that youhold yourself to, because the
law is actually backing up,you'll get fined. Cetera, if you
if you get it wrong.

Jim James (15:01):
So it's an interesting mix of sort of
technical, political andfinancial consideration. But if
you're either an entrepreneur,for example, a business owner,
and you're not taking that intoaccount, if, for some reason,
there's a breach, then you'reliable, right? And it's the idea
that if you just carry ondeveloping things without

(15:21):
considering it, you're buildinga risk in into the business.
Honey, absolutely.

Dave Horton (Airia) (15:26):
And, you know, I think, you know, we got
very used to talking about thefines associated with GDPR, but
when it comes to AI, it's notjust the GDPR. It's also the EU
AI act, for example, it is also,maybe, if you're in financial
services, you've got FCA as aand what you might find is a
single breach of data might meanfour different fines for four

(15:49):
different reasons. So the impactis getting bigger and bigger,
depending on the use case, butdepending on the issue.

Jim James (15:56):
And do you have some examples of some breaches?
Because otherwise just soundslike scare mongering.

Dave Horton (Airia) (16:00):
Yeah. I mean, you know, some very famous
examples where, you know, I'llpick Microsoft copilot as an
example. You know, they'reobviously very early into into
this market. You know, arguablyas well, a lot of customers get
copilot free of charge, on an e5licence with them. So it's, you
know, it's a natural kind oftesting ground for your first

(16:22):
kind of iteration of your AIprogramme in the business. Now,
One famous example was that whenyou look at SharePoint or
OneDrive, where you hold all ofyour content, you know you have
permissions on these folders.
And so there are certain filesthat I can see that you can't
see, for example, if we're inthe same business. Now, one of
the interesting aspects of abreach within a company was,

(16:45):
well, payroll data, for example,I have access to that, but you
don't, but the AI agents thatcopilot were were producing make
that distinction on thepermissions, and so everyone
could see everyone else'spayroll data if they asked the
right question of the LLM inthat instance. And so, again, it
wasn't an issue until someonediscovered it is a good example

(17:07):
of how I mean new, excitingtechnologies maybe introduced
some risk factors that could bequite serious. You know, payroll
data can be quite sensitive, youknow, in the wrong hands or with
the wrong purview,

Jim James (17:22):
and so you've got risks for enterprises, right? If
you're bringing in what could bea bit of a Trojan horse, what
about if you're an entrepreneurand you're and you're building
out an app? How would it? Howwould it work, for example, with
Aria, versus working on an eightend for example, or replit, or

(17:43):
even lovable, maybe you couldjust help us to understand for
the for the entrepreneur who'sbuilding something that they get
to use themselves or sell on,does that work for them?

Dave Horton (Airia) (17:53):
I mean, one of the, one of the, the other
aspects you know, as well asbeing able to orchestrate and
build AI agents within ourplatform, we do have the side of
it where you can build yoursecurity and your compliance and
governance component as well.
And so a good example might belike the replit example. I would
probably want to if I, if I wasin that situation, again, with

(18:14):
replit, I would have what wecall an agent constraint in
place. And what an agentconstraint does is it looks at
all the tools that the AI hasaccess to to do so it has to be
able to read databases and writeto databases, for example, but
maybe it doesn't need thecapability to drop a database or
truncate or delete a database.

(18:35):
And so we could actually have apolicy that says, well, we never
want the agent to be able to dothat. And if the AI again is
deterministic, it doesn'tnecessarily know, or you can't
anticipate, what it's going todecide it wants to do with a
decision of the information it'sbeen given by. What I definitely
don't want it to do is certainthings with my database. So I
can actually have a policy thatsays, Well, this AI has access

(18:58):
to do all of these capabilities,but I don't want it to be able
to be able to drop a table ordelete a database. And if I put
that policy in place, I'vesolved replit, I've solved that
issue altogether. And theseguardrails are not just for
tools, but they could be forthings like maybe I've got a bot
on my website. I don't want itto talk about certain things,

(19:20):
like my competitors, I don'twant it to be, you know, prompt
injected or be manipulated inany way that would give false
information to the websitevisitor. Having guardrails in
place mean that I can monitor,track and manage the language
that goes in and also out of thethe LLM so gives a little bit

(19:43):
more control than you otherwisewould have had, had you had no
guardrails.

Jim James (19:46):
And mindful of with that, thinking about how when
you have a new company and youhave you have an office and you
bring people in, you havepolicies, employee policies, and
you have some guidance and someguidelines written down. And.
But what you've really raised tome there, Dave, is the real
risks that I'm actually kind ofletting almost anybody into my

(20:07):
new office and saying, do whatyou like, kind of thing. We're
working on this. But I haven'treally got any security on the
on the doors. I haven't got anyway to keep an eye on them when
they're actually in there aswell. Let's just move on a
little bit. As an entrepreneurbuilding something often we we

(20:28):
then look for funding. What doyou think the implications are
for risk and for VCs? Because ifthey're looking to scale the
company, the entrepreneurs oftenlooking for series A or Series B
after friends and family. Whatdo you see as the implications
of this kind of no securitybuilding on vibe versus, for

(20:51):
example, an AI orchestrationplatform, where you've got some
guardrails and you've got somesecurity policies, what do you
think is going to be the impact?

Dave Horton (Airia) (21:00):
I mean, certainly, you know, from what
I've seen, a lot of VCs areactually considering AI as its
own threat vector and its ownkind of additional set of risks
when they're making evaluationsas to who do I invest in and
where do I put, you know, my mycustomers money when it comes to
these technologies. Now, therisk that you see quite

(21:21):
naturally, is the llms. They'retrained on data sets that might
not belong to you. They mightbelong to they might not even
belong to the model provider. Insome instances, we've seen quite
famous case studies on and so ifyou're building an application
and it is leveraging some ofthis data that ultimately feeds
into your intellectual property,and there is some kind of

(21:43):
dispute. Then, you know, if I'mfunding a company, that might be
a bit of a challenge for me toevidence or be able to justify,
where did that data come from?
What is actually my intellectualproperty as a company, and what
was derived by the AI that Ileveraged to build my product?
So it's, it's a complexquestion. Well, it is.

Jim James (22:04):
But I think also you've raised a couple of things
there about the IP that ifyou're inventing this as within
lovable, for example, in thesame way, if you use Dali, for
example, to generate an image,you don't own the rights to it.
So I guess at some stage you mayfind entrepreneurs being
questioned about whether theyreally own the IP, yeah, and

(22:26):
that the VCs are going to beasking, as you say, for some
verification, also that peoplecan maintain the quality of that
product. I mean, how does thatplay out? Because if you are
doing vibe coding, and somethinggoes wrong, and you've got
investors, what's theimplication for kind of the risk

(22:46):
that the investor is buying intoa company that can't really
manage business continuity?

Dave Horton (Airia) (22:51):
Yeah, I mean, you know, with citizen,
AI, anyone can build anapplication, and so it's
incredibly easy for me to go andbuild some software. But, you
know, I think when we're sellingto enterprise, they need that
level of kind of, H Ha, youknow, dr, they need to be able
to have some of these standardsthat mean that the code is

(23:13):
version controlled, theinformation within it is backed
up. You know, there's enterprisestandards behind the scenes that
go into it. And so I think a VCneeds to consider not just are
they using vibe coding, but havethey built the infrastructure
around that first phase? Thatmeans that, okay, code pushes
backups of that data. All ofthis are also considered in the

(23:36):
grand scheme of things.

Jim James (23:37):
Yeah, I read somewhere that 70% of
institutional investors now arelooking at part of their due
diligence being on the coding,and whether it's if you like a
regional source coding, orwhether it's coming from a
generic platform, which, asyou've said earlier, might have
been duplicated and sharedsomewhere as well. What about

(23:59):
the kind of defence of theproduct or the app that you've
built. How are people, if youlike red teaming or trying to
break software? Because we'vetalked about compliance. But
there's also risk threat, whichis bad actors. And I've worked
with clients like f5 before, andbeen frankly shocked and scared

(24:20):
at the level of malice, butit's, you know, often large bad
actors that are well paid, evenstate funded, sometimes, that
are trying to break and steal.
How does ARIA help with with, ifyou like, testing within a
secure environment?

Dave Horton (Airia) (24:34):
Yeah, it's actually very interesting. I
mean, I, I'm a cyber securitypractitioner by by trade. You
know, that's where I kind of,you know, gravitated my career.
And as much as I'm excited aboutAI, I'm also intrigued by, you
know, some of the innovationaround the, you know, the red
team, the hat, the hackers, andso, you know, one of the one of
the use cases that that we seewith AI is that it's actually

(24:58):
opened up a lot. New threatvectors that were not
necessarily understood even ayear ago. There are, there are
new technologies that have beeninnovated in AI, but now there's
also a counter play, where thereare new threat vectors or attack
services that are vulnerable,that need to be understood fully
by a company developing theirown applications with AI now red

(25:20):
teaming in itself. There's a fewways that you can, you know,
kick off a programme where youcan just see, well, susceptible
is my AI that I've I'm veryproud of, but it's how well does
it perform? And some scrutiny,you know, with high level
attacks. And so an attack that Imight go and perform on an agent
might be a prompt injectionattack where I try and get it to

(25:43):
break outside of the rules thathave been defined within the
prompt itself, for example. Soif it's a HR bot, for example,
maybe I try and get it to saysomething it was not designed
for, or give me information itdidn't it shouldn't necessarily
be giving me and so I might usea library attack to essentially
go in and maybe throw 100different inputs into my agent

(26:06):
and see, well, how much of thatcould get flagged, flagged back
as a as an issue. And there'sother things as well, like, you
know, DLP, if I try and insertDLP, data loss protection. So
let me try and extract somepersonally identifiable
information, or even put some ofthat personally identifiable
information into the agent andsee, will it accept it? Will it

(26:27):
continue with the that line ofquestioning? And so red teaming
allows you to, in the firstinstance, just see, well, what
are my guardrails not doing if Idon't have any guardrails, you
know, what's the LLM allowingthe attacker to extract from
this agent that might haveaccess to some pretty sensitive
data if you're integrating itwith your existing applications.

(26:48):
But an extension of that is thatwe actually have a swarm of
agents that can actually betasked with attacking an AI
agent and seeing what it canextract. So just with natural
language, I'll give my swarm ofagents the task of trying to
exfiltrate some credit cardnumbers, for example, from an
LLM that we've got, we've gotset up, and it can go and just

(27:12):
try, you know, multi turn, somaybe over a conversation of 30
different utterances, what canit extract and see if there's
success or failure? So it reallygives a bit of a benchmark as to
without me finding out the hardway. You know, seeing before we
launch an agent, before we goand productionize it, is it
susceptible to anything that wemight need to consider a

(27:32):
guardrail to protect against.

Jim James (27:33):
So to be fair, to say that maybe the analogy is that
you can, you know, you can buildand you can, you can test drive
it, but in private securitywithout prying eyes, in the same
way that they test drive, youknow, cars in in the Arctic
Circle, for example, before itcomes back to being driven on
the main road,

Dave Horton (Airia) (27:50):
exactly right? And it's good practice to
see, well, in the worst ofconditions, how does that? How
does this agent or car performin these circumstances? And
obviously, you know, thefeedback might be that there
were, you know, 10 differentavenues that you know, weren't
protected when we we have theguardrails that we went out for.
So let's go and enhance those.
Let's go and add thoseenhancements into what we would

(28:12):
productionize and retest. Andinterestingly, you know, the
we've talked about thedeterministic element of AI over
time, your LLM might get somekind of drift. That might be
changes from when you launchedit to day. And so you want to
actually test on a regularcadence, so maybe even schedule
every day I'm going to run thesame test and see if there's any

(28:35):
change in the security posture.
And if there is then I kind ofalert my team to go and monitor
or how did that happen? Do weneed to make an additional
guardrail? Is there anythingthat we would do to enhance that
security?

Jim James (28:49):
Is that something then, within aria that people
can set up and it becomes, ifyou like, a controlled,
repeatable experiment

Dave Horton (Airia) (28:56):
exactly, it's kind of like penetration
testing, but on your agent. Soit really just gives you the
ability to get up to the minutekind of feedback on what is it
is your agent susceptible to?
And you can, you can schedulethat. Most companies are looking
at standards like SOC two andISO 27,001 they're usually a
yearly pen test on yourapplication. Is what is is

(29:17):
required. But this gives you theability to do it every day or
every week if you wanted to tosee, you know, what are those
threat factors

Jim James (29:28):
we've talked a lot about, you know, compliance,
about security, about investorrisk, which are, I think,
important, because people thinkabout the opportunity of AI
without necessarily thinkingAbout in Downstream what might
happen? Because if they'resuccessful, of course, they
become a potential target forattack. But Ari is not just

(29:48):
about defence, it's also aboutcreativity and about engagement.
Tell us a little bit about theWilliams collaboration, which,
of course, is famous f1 andlet's talk a little bit more
about the. Engage with thecommunity and and how people are
actually using ARIA, and howyou're getting ARIA into the
market. Yeah.

Dave Horton (Airia) (30:08):
I mean, you know, the the Williams
connection is obviously a prettyexciting one for, you know, a
motor racing fan like I am, butyou know, when you look at
Formula One, everyone thinksit's about the cars and the
drivers. But what they fail torealise is that it's a, you
know, each team is its owncompany. Each team thrives on

(30:28):
data. And, you know, they're notjust competing with the car and
the driver, but also the thetechnology stack is a, you know,
a component of the success ofany particular team. And it's
quite ironic looking at, youknow, the 2025 Formula One
season, each car has probablygot an AI sponsor because, you

(30:50):
know, it is such a component ofof that data analytics, etc. And
so area we chose the WilliamsFormula One team. They've
obviously, you know, had alegacy and a history in Formula
One, arguably very competitivethis year, you know, being fifth
in the championship, which ishigher than they have for some
time. Now, I can't attributethat to strictly to area or to

(31:13):
AI, but certainly, you know, ifwe're considering that AI is an
unfair advantage, if youcapitalise on it in a certain
way. That's really what theFormula One teams are doing
right now. They're looking at,well, how can we have aI
interpret the regulations, forexample, and maybe give us some
insights, rather than having aswarm of people go through the,

(31:35):
you know, 1000s of pages oftechnical documentation and
interpret that, AI is fantasticat looking at natural language
and maybe interpreting or seeinghow the language could be
construed in a way that wouldgive us a an advantage. But
really, the, you know, there areso many different use cases
that, you know, in anyparticular interaction that we

(31:55):
have with Williams, there'salways some someone that has a
new idea as to how we can useAI, and it's, it's not just on,
you know, the performance of thecar. They're a company like any
other. They have a hiring team,they have a HR team, they have a
legal function, a financefunction. And lots of the agents
that we, you know, work withsome of our largest customers,

(32:17):
are very transferable betweenany company. And so you know,
going back to your question oncommunity, we have an agent
community where customers canbuild their own agents, and if
they want to, they can actuallyshare it with with the
community. So if I've got areally unique idea, I've spent
time developing the perfectAgent with the right tool set, I

(32:37):
can release that to thecommunity and get some kudos for
being able to develop somethingquite so innovative, but it also
allows others to maybe get 80%in the way to a use case being
complete within theirorganisation, without having to
start from scratch every singletime.

Jim James (32:54):
Yeah, so Ari is going to help Williams to go faster,
and you've got the presume thehuman side as well of monitoring
and evaluating how the how thedrivers are going. You mentioned
about the agents. Dave, can Iask you a question? Do you have
a favourite agent? And if youdo, what would it? What does it
do?

Dave Horton (Airia) (33:12):
Yeah, I mean, one of the ones that I
commonly use is I'm, every day,I'm speaking to customers and
prospects of area. And one thingthat I take quite a lot of time
to do, or at least prior toworking for area, was, well, it
really pays to understand whoyou're about to speak to. You
know, what's their background?
What's their specific job role?

(33:34):
What sort of technology havethey worked with before? What
are the values that theircompany has so that I can align,
you know, how I were to speak tothem. And so, you know, really
simple kind of research agent. Ican create a an agent that will
connect to my calendar, and Ican ask a question, like, you
know, research the meetings Ihave today. It'll go look at my
calendar, see all of themeetings that I have. And then

(33:56):
with prompt engineering, I mightsay, Well, I'm only interested
in the ones that have customerson them, for example, and it can
go and pick up the attendeelist, go off to to do
essentially a Google search, dosome research on who they are.
Maybe that tags on to, you know,their LinkedIn profile, whatever
they've got out there. And itkind of builds me up a map of

(34:17):
like, what's important to this,this person I'm about to speak
to. Is there any particular areathat I might be better to know
about going into this meetingrather than not you know the
company itself, if they're anoil and gas company, for
example, then I know whichagents might resonate better
with that particular audience.
And so it's a very simple agent,which is an LLM with maybe two

(34:38):
or three tools that hascapability for but it saves me,
over a course of a few months,hours and hours of time of just
doing research, and at best, itgives me a better visibility
into how to approach customers,how to speak to them about what
they care about. Great, although

Jim James (34:55):
I won't take it personally that you said the
important meetings are where youmeeting someone might be a
customer. Well podcast are alsoequally important. How difficult
it would it be for someone likeme to build an agent, maybe
taking using one that's alreadythere and modifying it? How
accessible is it for people touse ARIA? Yeah, so

Dave Horton (Airia) (35:18):
we've kind of taken the approach that we'll
try and be all things toeveryone. And so there is an
angle where we actually developthe product. So it is drag and
drop interface. It's very muchkind of like other orchestrators
that that you've mentioned aswell. So we would call this sort
of the low code approach to theplatform, where I don't need to

(35:38):
do any kind of coding. I don'tneed to touch any kind of Python
scripts or or anything likethis, I can just sort of
configure it's all click, clickthrough. I can drag and drop,
connect the links, and then Ican I can run test it, deploy it
how I want. But we also do caterfor some of those more pro code
scenarios where I do want to dosomething clever, where I'm
maybe having a full kind ofagentic flow. Maybe I'm using

(36:02):
machine learning models toconsume data. Maybe I have to
use some Python script tomanipulate the data for my
particular use case. So we'retrying to give customers the
tool set, whether they are kindof citizen AI, with very little
kind of technical knowledge, thesame platform for the player.
The Pro coders that you know,wants to do very elaborate kind
of connectivity within theirorganisations

Jim James (36:24):
data, okay, but then they can do all of this coding
with a with a peace of mind, asit were, that they've got
compliance and they've gotsecurity, and they're minimising
their risk. And ultimately, ifthey've built something useful
and valuable that they couldmonetize it without any threat
from from outside, Dave, thislisten to a little bit about you

(36:45):
as well. Tell us a little bitDave Horton and your role as
well. Yeah.

Dave Horton (Airia) (36:49):
So, I mean, you know, for the last 15 years
or so, I've been in theSolutions Engineering realm, and
essentially, the way I kind ofexplain this to, you know,
analysts and customers when I,when I introduced myself, is
that my team has the highesttouch point with our customers,
you know, the people actuallyinnovating with AI, which means
that, you know, we're really onthe the front line as to, you

(37:12):
know, what is it that people aredoing in terms of use case, or
what are the important aspectsthat they want to consider When
building out these agents? Andso the challenging aspect is
that, you know, you're literallygiving people the flexibility to
do 10,000 different things for aparticular use case. You know,
how long is a piece of stringis? You know, is quite hard to

(37:33):
answer when you you don't knowtheir technology stack and such.
And so my team really works withthem to understand, well, what
does your technology look like?
Where is the data that would beuseful to build out an agent,
ultimately build out that thatagent show them the value of the
platform, being able to do thisincredibly quickly and then
ultimately secure it as well.

(37:53):
So, you know, not just that onthe innovation side, but might,
might even be a completelydifferent stakeholder within the
business, we would have a verydifferent conversation about,
well, this team is innovating,but you're probably concerned
about, you know, some of thesafeguarding and responsible AI,
here is your section of theplatform that allows you to
protect and manage that side.

Jim James (38:15):
And one of the reason I ask that is because, you know,
I've been to the ARIA website,and I've had a demo, of course,
and, and it's really, reallyimpressive, not only the
platform, but actually we cantalk to a human. And I'm using,
you know, lovable and Nan, butthe best you get is to talk to
another AI bot. So I thoughtthat was a really interesting

(38:35):
approach that Ari is investingin, in the human side, that
actually you can make anappointment and have a one to
one call to get your needs metand to give you guidance, plus
you've got this community. So Ithought that was very
interesting, that you're theretoo.

Dave Horton (Airia) (38:50):
Yeah, I think, you know, the the natural
instinct of everyone is that AIis going to, like, solve every
single problem, but you can'tsolve human interaction, you
know, with AI necessarily,obviously, we want to keep, you
know, my if I look at my team,for example, we've got a global
team of solutions engineers anddifferent geographies. And the
reason that we have that at allis that, you know, customers do

(39:12):
like a face to face. They dolike to be able to to speak to
someone about, you know, theirparticular issues. And you know,
by having a global team that'sthat's really ready to support,
you know, it really opens upsome doors and into maybe
additional use cases they hadn'tconsidered. So I'm, I'd like to,
I might be lying, but, you know,I'm quite glad right now that AI

(39:33):
is not coming directly after myjob. I think you still need some
kind of level of humaninteraction to kind of truly
understand and articulate value.
But I think as well, people dolook at AI as maybe keeping
smart people working on smartproblems, rather than it's
replacing smart people, youknow, the agent I mentioned
earlier on about, you know,doing some research for me, that

(39:54):
is a task that I no longer haveto do. I can outsource. That to
AI, but I can still have thatcustomer interaction. I can
still make the best of my time.
And so I probably encourageeveryone to kind of look at your
role and think, Well, what arethe areas that I could outsource
to an AI so that I can be morefocused on, you know, my

(40:17):
specific skill sets, my specifickind of value add when I'm
interacting with my customersand my employees.

Jim James (40:23):
Yeah, you're right, David. And this idea that people
that lose their jobs not be the,you know, the people that work
with AI, he'll be the peoplethat ignore AI, isn't it? And so
you're using it to optimise yourperformance. Okay, I'm going to
ask you a question that you Ididn't prepare you for. Okay,
where do you see AI going in thenext let's say, can I say three

(40:44):
to five years? I know it seems along time away, considering how
much things have moved in thelast 12 months, but can I ask
you to give us an idea where yousee AI going and where you see
ARIA fitting in with that?

Dave Horton (Airia) (40:57):
I think the real kind of interesting aspect
for me is that we know. We don'treally know where it's all going
just yet. I think the there aresome guesses that I could make
around how we will interact withAI in the future. I think the
model that you know chat GPTwent down where you have kind of
a textual input, and you askquestions, you get responses,

(41:18):
but it means I have to leave theapplication where I had that
question is something that is itneeds to be addressed. I think
people want AI where they'reworking. They don't want to be
redirected to where they're notworking. So I think there's some
technical elements where we canbring AI closer to where the
user is in terms of theirworkload. But I'm pretty

(41:39):
excited. I mean, if you justlook at, like, video and image
creation, in the last year, it'sadvanced, you know, I think
there's going to be some reallyinteresting kind of arenas where
we can't even anticipate whereit's going to go. And, you know,
I'm always looking at the AIinnovation from the slant of an
enterprise. So ultimately, isimage generation and video

(41:59):
creation, is that an enterprisekind of value add, or is that
kind of a, you know, you know,consumer curiosity. So from an
enterprise standpoint, I thinkthere are standards around.
Well, how do we get end usersauthenticating to the right
applications? How do we secureto make sure that they're we're
not giving too much liberty tothe AI to deliver what it what

(42:22):
it needs to be. Maybe a boringkind of topic there, but I'm
actually quite interested to seehow the security side and also
the AI governance side, youknow, with the EU AI act coming
coming online, it's going tobecome more commonplace that
you're going to have to evidencequite strongly. What was, what
was your thought process? Howdid you build privacy by design
into, you know, some of theseagents that you're building.

Jim James (42:44):
Dave Horton, VP of solutions, that are, if people
want to find out and connectwith you and maybe get a demo,
how can they do that?

Dave Horton (Airia) (42:50):
So again, we're really easy to work with.
So, you know, the website isobviously a great place to go
and see kind of high levelinformation you can sign up for
for a trial. There's also kindof communities like discord that
you can sign up to and kind ofask questions. We also have the
capability to run trials of ourplatform so people can get

(43:10):
access to it. And within theplatform, you can request the
one to one with any of the thesolutions team to kind of help
you through as well as havingkind of video content, etc,
around some of the core pieces.
For me personally, you know, I'mon LinkedIn, you know, it's
probably where I'm most active.
You can see where, you know,area is, is kind of operating.

(43:31):
And so if you just wanted to getmaybe not just an understanding
of area, but also, you know,what is AI? What the
possibilities there? Hackathonsare really great kind of
community events that I'd alsoencourage to look at. And we've
got a calendar list that thatyou can see on the website, that
you can go and join one in yourregion, your city.

Jim James (43:49):
Great. And Dave, I'll, I'll put the link to that
on the show notes. Dave, thankyou so much for joining me today
in the London studio. Thanks somuch. So we've been talking to
Dave Horton, who's the VP ofsolutions at Ari, that's a, I R,
I A, as you can see from thelogo behind me, fascinating. The
opportunities of AI are immense,but also some of the risks of

(44:11):
compliance and threat. So if webuild products and apps, we must
think of the opportunitiesentrepreneurs, but we must also
think of the long term exposurethat we create for ourselves and
those people we work with. Sohope you've really enjoyed this.
Obviously, all of the show noteswill detail Dave's details and
where you can sign up forhackathons and trials, and as

(44:32):
always, if you've enjoyed theshow do please share it, because
we don't want any entrepreneurto get left behind.
Advertise With Us

Popular Podcasts

Dateline NBC

Dateline NBC

Current and classic episodes, featuring compelling true-crime mysteries, powerful documentaries and in-depth investigations. Follow now to get the latest episodes of Dateline NBC completely free, or subscribe to Dateline Premium for ad-free listening and exclusive bonus content: DatelinePremium.com

Stuff You Should Know

Stuff You Should Know

If you've ever wanted to know about champagne, satanism, the Stonewall Uprising, chaos theory, LSD, El Nino, true crime and Rosa Parks, then look no further. Josh and Chuck have you covered.

The Breakfast Club

The Breakfast Club

The World's Most Dangerous Morning Show, The Breakfast Club, With DJ Envy, Jess Hilarious, And Charlamagne Tha God!

Music, radio and podcasts, all free. Listen online or download the iHeart App.

Connect

© 2026 iHeartMedia, Inc.