
December 6, 2024 • 39 mins

Can (and should) AI replace expert intuition? Kris Braun shares how AI transforms data workflows, the difference between delegating to AI and abdicating responsibility, and why we can now solve previously unsolvable problems.


Episode Transcript

Available transcripts are automatically generated. Complete accuracy is not guaranteed.
(00:00):
But what these organizations have learned is that BI didn't actually replace the need

(00:05):
for data experts providing insights from the data.
What it comes down to is there is a level of expertise in human judgment that's quite
essential in these high consequence decisions.
To remain human, we need to keep doing some things that are hard.
If you're a business leader, if you are abdicating your decision making, you're going to get

(00:26):
less good at making good judgments.
You just want to get to those experiments as soon as possible so that you could become
the expert in your domain for what are the opportunities.
No one else will know that for your specific domain, for your specific customers.
You're the one that will have to discover that.

(00:46):
Welcome to Artificial Insights, the podcast where we learn to build AI people need and
use by interviewing product leaders who have launched AI products.
I'm your host, Daniel Manary, and today I'm joined by Kris Braun.
Kris is a seasoned leader with experience launching serverless products at Google, where
he drove adoption to over 85,000 customers.

(01:09):
As a founding CTO and repeat entrepreneur, he has built impactful products like KidsWifi,
and is now applying AI to enhance data insights and management at RunQL.
Currently part of the Google Startups Accelerator, RunQL is transforming how data teams collaborate
and innovate.
Kris, would you introduce yourself to our audience?

(01:29):
Yeah, I'm Kris Braun.
I'm the CTO at RunQL, where we're on a mission to empower data teams to handle the ever-increasing
volume of requests from business for insights into the data.
We know that there's more and more data all the time, and these teams need to provide
the trusted insights, and so we're helping them do their craft amid ever-increasing amounts

(01:53):
of requests and ever-increasing amounts of data.
I do that within a series of product creation, product founding.
I'm very much usually the first person involved with tech products, and so I've done that
a number of times with a variety of different outcomes, but I've almost always had fun

(02:14):
with it.
In this case, you guys have gotten into an accelerator or some other program as well,
haven't you?
Yeah, that's right.
So, Google for Startups has a number of cohorts, and we are in a cohort of mainly
AI-focused companies that are based in Canada.

(02:36):
It's a great program.
I was working at Google for several years, and while Google is in general not where
a zero-to-one product founder might end up, I happened to be working on a product that
was getting newly launched, and so it was good for that, but I definitely sought out
the startups within Google and where I could help.

(03:00):
I was a mentor in this program, so knew about it, but didn't realize that we were going
to enter the program until our CEO said, oh, by the way, we just got into this program.
I said, it's a fantastic program.
I know the folks in it.
I'm pretty sure that we got in on our own credit, and not because of any connections,

(03:20):
but it really is nice to come full circle.
Yeah.
That's funny.
Okay.
First, on a podcast like this, I think one of the big questions on everybody's mind is,
Kris, are you an AI?
I heard you ask that to a mutual friend of ours, and I can confirm that Anwar is not

(03:42):
an AI.
Maybe you're going to have to ask him to confirm.
He should do a reverse Turing test on me and confirm.
Yeah.
Within computer science, I'm sure many people feel like outliers in some dimension.

(04:03):
Everybody has that thing where they feel like, oh, I don't fit the norm, but I've always
been not just philosophical, but had a real interest in the intuitive, the emotive.
There are things in me that if I am an AI, then I've been trained well on not always

(04:24):
being rational, maybe.
I think hopefully I could pass a Turing test.
Yeah.
For RunQL, how do you guys use AI?
Like a lot of companies, it's at a few different layers.
The first place that many try to bring AI in is an enhancement.

(04:47):
It's something that you're already doing that you could do without AI, but it gets better
with AI.
That's often the first thing that you look for.
Unless you have a product that couldn't have existed before, you already have verified
a user need.
For us, it has to do with helping data pros with their workflow.

(05:12):
Some of the elements of their workflow include documenting queries.
AI is great at summarization.
We can pre-populate certain amounts of documentation in a way that just saves them that effort.
We can also use AI, for example, to help detect when a schema changes and suggest updates

(05:40):
to their queries to address the schema changes.
We'll probably get into some of the details of where's the line on AI and everything,
but there's an approach to... You can detect schema changes quite programmatically.
You don't necessarily need an AI to say what is the difference between version A and version

(06:01):
B. Perhaps you could even go as far as applying those changes very programmatically as well,
but there are elements to this that do require some of the things that an LLM is better at.
That's a way that we can make that feature better where the suggested updates to the

(06:21):
queries are a little more grounded either in what an LLM would predict are the right
adaptations that take a bit of a leap that might be harder for an on-the-nose translation
function, but also ones where it maybe doesn't get led astray.
It gets the point and it makes a good quality recommendation.

(06:41):
Those are some examples in that enhancement area.
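To make that split concrete, here is a minimal sketch of the kind of pipeline being described: the schema diff is computed programmatically, and only the judgment-heavy query rewrite is handed to a model. This is an illustration, not RunQL's implementation; the schema format and the `call_llm` helper are hypothetical stand-ins.

```python
# Illustrative sketch only (not RunQL's code): diff schemas programmatically,
# then ask an LLM just for the judgment-heavy query rewrite.
# `call_llm` is a hypothetical stand-in for whatever completion API you use.
from typing import Callable, Dict, List

Schema = Dict[str, List[str]]  # table name -> column names


def diff_schemas(old: Schema, new: Schema) -> List[str]:
    """Purely programmatic diff: no AI is needed to list what changed."""
    changes = []
    for table in sorted(old.keys() | new.keys()):
        old_cols, new_cols = set(old.get(table, [])), set(new.get(table, []))
        changes += [f"column removed: {table}.{c}" for c in sorted(old_cols - new_cols)]
        changes += [f"column added: {table}.{c}" for c in sorted(new_cols - old_cols)]
    return changes


def suggest_query_update(query: str, changes: List[str], call_llm: Callable[[str], str]) -> str:
    """Ground the model in the detected changes instead of letting it guess."""
    prompt = (
        "The database schema changed as follows:\n"
        + "\n".join(f"- {c}" for c in changes)
        + "\n\nUpdate this SQL query for the new schema, keeping its intent unchanged:\n"
        + query
    )
    return call_llm(prompt)


if __name__ == "__main__":
    old = {"orders": ["id", "customer", "total_cents"]}
    new = {"orders": ["id", "customer_id", "total_cents"]}
    print(diff_schemas(old, new))
    # ['column removed: orders.customer', 'column added: orders.customer_id']
```

The design point is the one from the conversation: keep the mechanical diff deterministic, and use the LLM only where a leap in interpretation is actually needed.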
Then we can talk more about the other domains, but some of the things that are really interesting
to us are the problems that we're just getting to now.
As we've become good at helping the Data Pro teams, the next domain for us is solving the

(07:02):
whole workflow from the business user making a request to them getting a trusted insight
from the data team.
That's where a lot of bigger, thornier problems come up, like, well, how do you enhance the
ticketing workflow so that you don't bother the Data Pros when that question has already

(07:23):
been verifiably answered?
But how do you know that it's the same question and you're not missing a nuance?
If it's close or if two questions combined answer it, how do you recognize those two
questions and combine them and still preserve the fidelity of the response and the trust

(07:43):
that is there?
All of those kinds of things are the next frontier of applying AI, and they're really
quite interesting.
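As a rough sketch of what "has this question already been verifiably answered?" could look like, here is one hypothetical approach: embed incoming questions, compare them against previously answered ones, and only return a stored answer above a confidence threshold, otherwise escalate to the data team. The `embed` function is an assumed stand-in for a sentence-embedding model; none of this is RunQL's actual design.

```python
# Hypothetical sketch (not RunQL's design): route a business question to a
# previously verified answer only when similarity clears a confidence bar.
import math
from typing import Callable, List, Optional, Tuple

Vector = List[float]


def cosine(a: Vector, b: Vector) -> float:
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm if norm else 0.0


def route_question(
    question: str,
    answered: List[Tuple[str, str]],   # (past question, trusted answer)
    embed: Callable[[str], Vector],    # assumed embedding model
    threshold: float = 0.9,
) -> Optional[dict]:
    """Return a stored answer plus a confidence score, or None to put the
    ticket in the data team's queue because the nuance may differ."""
    q_vec = embed(question)
    best_score, best_answer = 0.0, None
    for past_q, past_a in answered:
        score = cosine(q_vec, embed(past_q))
        if score > best_score:
            best_score, best_answer = score, past_a
    if best_score >= threshold:
        return {"answer": best_answer, "confidence": best_score}
    return None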
That sounds a lot like customer support too.
Right.
Oh, well, okay.
It's interesting.
There is another company in the Google cohort with us and this is some of the value of these
programs.
You talk to others who are solving, in some ways, a similar problem

(08:08):
in a very different domain.
They, I believe it's called Easy Assist, and they work with franchises and these large
franchises have lots of franchisees that have lots of questions.
They spend a lot of time answering these questions and they basically need a similar automation
to a ticketing system which is, we've got all these questions, have we answered them

(08:31):
already and could we answer them?
But if we haven't, then we need to answer it and feed it back into the knowledge base.
That is probably more qualitative data and answers compared to the quantitative,
large-scale data that our customers are working with, but a similar kind of application of AI gets involved
for sure.
Yeah.

(08:52):
I think I might know those guys actually, so I'm going to follow up with them after.
That's super cool.
Yeah.
That's an interesting distinction, because often when I think of generative AI and ChatGPT-style AI, I think
of that kind of qualitative data like writing better marketing articles or something like
that.
But what would you say is the difference between what that enables for you guys to do with

(09:15):
those data science questions versus the maybe less-AI, more programmatic approach you mentioned
before?
Yeah.
The domain really does matter.
So on one side, you have hard data, but you are trying to get insights.
So an insight is always an interpretation of the data.
So you're grounded in hard facts, just the data driven piece, but you need to make some

(09:40):
level of interpretation of the data.
That's what's interesting on that side.
On the other side, many business questions have high consequence and require a level
of trust in the answer.
So for example, today, I hope this is not happening.

(10:02):
I hope executives and CEOs facing an existential crisis in their business
aren't skipping the people on their team, the best and brightest they've hired,
and going to ChatGPT to say, we've got competitors and we've got this.
What should be in our next quarterly plan?

(10:23):
Tell us what to do.
You're going to get an answer.
And the answer may inspire some thoughts or ideas of what you could do, but it's really
not something that's going to be able to make all the judgments and own the decision in
the way that is needed for a lot of high consequence stuff.

(10:44):
And so at a very human level, there's actually an analogy or an analogous situation that
doesn't require AI, but it helps us understand what's at play.
So over the last several decades, BI, business intelligence has been a big thing and companies
have spent billions of dollars effectively trying to democratize access to data.

(11:10):
That's been the promise.
The promise was you've collected all this data.
Now your whole organization needs to benefit from it.
So you need to kind of open the access, open it all up and let everybody have a free-for-all
on the data to get the insights.
And it has a ton of benefits, and there's definitely, you know, no benefit in keeping

(11:31):
it inaccessible.
But what these organizations have learned is that when you look at the number of ad hoc
business requests where a business user says, I need this insight, it has not decreased.
BI didn't actually replace the need for data experts providing insights from the data because

(11:54):
the interaction will go something like this.
The business user will say, hey, I need the Q4 sales for this region with this nuance,
whatever.
They'll ask the data team and the data team will say, we built you a dashboard.
And you know, the response will be some variation of, I don't know how to use it.
I don't know how to interpret the data.
Effectively, I don't trust myself.

(12:14):
I could use it, but I'm not sure I would trust the insight that I got from it.
And what it comes down to is there is a level of expertise and human judgment that
they need: hey, this person who is actually an expert at interpreting data, you
know, did well in their stats course, they are giving me this interpretation.
And so I think that's the same thing with AI.

(12:35):
AI can be assisting in providing all the data points and the insight or the things that
the right person needs, but there is some human judgment that gets into the mix that's
quite essential in these high consequence decisions.
Yeah, I think you mentioned two really key points there, which are AI can provide insight

(12:56):
that's sort of a derivative of the data, not the data itself.
And then potentially it can provide more trust.
But I don't want to put my business decisions, my whole business on the line over something
ChatGPT said; I want the person I've entrusted with that decision to make that decision.
And even personally, I've had an instance where like an exec says, look at this survey

(13:18):
we collected.
It clearly shows a correlation between when we introduced this feature and our net promoter
score and they're negative.
So net promoter score is going down.
Tell me why that is.
And ChatGPT could tell you, yes, they're correlated.
Yes, it looks like it's going down.
Maybe you should roll back this feature, for example.
But if you look at the data, they don't actually correlate those two data points.

(13:40):
So there is no way to make a meaningful connection between introducing this feature and the net
promoter score going down, because your volume of customers went up.
You weren't collecting, did you like this feature?
And so there's no way to meaningfully correlate them, which is something a human who has
judgment and domain expertise should be able to talk about.
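As a toy illustration of that point (the numbers are made up, not the survey from this story), a naive correlation check happily confirms the scary narrative, while an equally strong correlation with respondent volume shows why the attribution needs human judgment:

```python
# Toy numbers only: NPS dips after a feature launch, but respondent volume
# also jumps, so the "correlation" can't attribute the dip to the feature.
from statistics import correlation  # Python 3.10+

feature_live = [0, 0, 0, 1, 1, 1]              # feature launched in month 4
nps          = [52, 50, 51, 44, 43, 45]        # score dips after launch...
respondents  = [120, 130, 125, 480, 510, 495]  # ...but the audience roughly quadrupled

print(round(correlation(feature_live, nps), 2))          # about -0.97
print(round(correlation(feature_live, respondents), 2))  # about 1.0
# Both correlations are near-perfect, which is exactly why neither one,
# on its own, tells you whether to roll the feature back.
```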

(14:01):
Interesting.
Yeah, I think I hear in that example, you are able to bring in something that's maybe
outside the scope of consideration for the LLM that helps inform your judgment.
It's interesting that plays both ways.
LLMs can bring in things that are outside.

(14:22):
They bring in things outside of my scope.
So sometimes I'll ask a question and it'll bring something in, and it's now drawn into
my scope when it wouldn't have been otherwise.
By their nature, they've consumed much more information than I have.
So it can be helpful.
So there is a bit of a symbiotic process of enhancing our judgment by using LLMs in that

(14:43):
way; they can bring more into our judgment.
That's great.
But it is different when you fully outsource the judgment and say it's just going to happen
within the LLM itself.
And for the way that you're using LLMs at RunQL, right now it sounds like it's to automate

(15:06):
processes, to inform you when things change, and potentially to update them as well.
Where do you see the future of that going?
You mentioned there are some opportunities that are exciting.
Yeah, I think the interesting areas are handling larger parts of the problem.

(15:26):
So it's one thing to provide some assisted writing to documentation.
You could quantify the number of minutes saved in a regular workflow for doing something
like that.
The next harder problems that you might solve could be if the status quo is a business user

(15:50):
asks a question, it needs to go into a queue.
They're going to get no response until it's gone through the queue.
Several days later, someone looks at it, they fully shift their human context to think about
it, analyze it, and they come up with questions because they can't fully answer it yet.
So they send back some questions.

(16:11):
This cycle repeats a few times.
You get to the end of a large workflow and the business user gets the answer.
It seems like there's opportunities to apply things that shorten the cycle, are able to
provide answers with levels of confidence to the business user immediately, are able

(16:32):
to extract necessary information from the business user ahead of time so that it's fully
prepared for the data pro when they go to look at the ticket.
And other answers that other people on the data team have already produced that are relevant are
surfaced for that data pro.
So you can kind of imagine when you sit down to do your craft, and there's a limited number

(16:55):
of people on the data team, you've kind of got everything in front of you.
You've got everything from the business user.
You've got all the relevant information.
Not only does that help you answer better, but it does help you close the loop quicker
and provide that answer.
All of those things are very interesting and have a level of challenge that is kind of

(17:17):
beyond, is this a good summary of this one query or this one question?
Because there's levels of confidence, there's amounts of accuracy, there's a huge human
interaction component.
You have to interact with the business user and you have to fit within a data pro's workflow.

(17:38):
They write SQL queries.
They are technical folks.
They don't want Minority Report, 3D-type things flying at them.
They're like, I work with SQL and so I need data in a thing that's not going to pull me
out of my skill set and my workflow, but it's going to enhance it.
And all of those dimensions make it kind of more interesting than simple summarization.

(18:00):
Yeah, my goodness.
There's a lot of factors.
And I think we touched on this topic in our discussion, but what would you say to someone
who would view AI as taking their jobs?
Yeah, I hope AI doesn't take any good jobs, any real human work.

(18:21):
Actually, let's back up.
There's kind of like a framework that I think about the role of AI that we've touched on,
but I kind of want to name it explicitly because I think it helps answer this.
So I see three levels of applying AI.
The first is augmentation.
So that's where AI, you know, it's the thing when you're writing your email message, you're

(18:45):
getting like a smart compose.
And if you choose to hit tab, you have an option that saves you some keystrokes.
It's not drastically changing what you're doing.
It's just an enhancement to something you're already doing.
Then the next level is automation, where you actually turn things over to a system that

(19:06):
includes AI.
It's kind of like delegation.
So I think like on a team that I'm working with, there's some benefit to having some
extra insights from people, but there really is a big boost if I can just say, hey, could
you just take this on and take care of it for me?
Delegation or automation needs to be done carefully.

(19:26):
You need to know that the person that you're delegating to or the system that you're delegating
to is trustworthy for the domain that you're delegating to.
And so I think the phase that we're at right now is: enhancement, good.
We can always step in.
Automation, the initial use cases for automation are there.

(19:48):
And I think we'll be able to do more as we go, but we'll have to understand what are
the limits.
When, you know, when are there safety or judgment things happening there that we can't fully
automate?
The level that I'm concerned about, particularly with jobs and what we do is, I'd call it
abdication.
So it's like, I'm responsible for choosing my business direction, or I'm responsible

(20:12):
for picking what's most important for me to work on today, or for expressing care and
kindness to others around me.
If those things become things that you say, I'm just going to hand that over, I want to
be a caring person, but I don't have time.
And so I'm going to outsource that to an AI.

(20:37):
The danger in that is twofold, I think.
The obvious one is, can an AI replace a human?
That will be a debate, and people will argue that at some point, in certain
domains, AI could perform better than people in some measurable way.
I could see that.
I actually think there's perhaps more danger on the other side, which is that to remain

(21:01):
human we need to keep doing some things that are hard, such as caring for each other.
If I didn't have to change my plans and do, you know, the acts of care and love for other
people, my very soul would probably start to shrivel over time.
An AI might be doing a great job of sending those cards and those letters and messages,

(21:25):
but I as a human being would be less loving and less kind.
Or if you're a business leader, if you are abdicating your decision making, you're going
to get less good at making good judgments.
If you're not good at pushing through a hard to think through problem, you're not going
to be good at thinking through hard problems and you're only going to be able to outsource

(21:47):
them.
So for all those reasons, I hope those are not the kinds of things that we get rid of.
And yeah, I do hope that people are able to play this role where they are themselves delegating
to AI so that there's a shift in work, but that it's not completely replacing the people.
Yeah, I love that summary because I think there is a big discussion on the horizon of

(22:11):
how much do we need to keep as humans and education is an area I care a lot about.
And it's changed with ChatGPT.
You can tell when someone doesn't know their stuff; they've just delegated, they've abdicated
knowing things to ChatGPT.
And that's not healthy.
I'm optimistic in the short term that we can augment many jobs and automate many jobs.

(22:35):
And it can be a fun experience because I was going to ask you as a developer, as someone
who's in the code, what's changed for you with coding with AI?
I'd love to get to that, but your teaching example helps me make concrete what
I was trying to articulate, which is, I think we all know how important teaching

(22:58):
is and how much better it could be.
Teachers are strained, under-resourced, and we notice that these elements of teaching
and developing children well are under-invested in.

(23:19):
And so the optimistic path would be if AI could augment and delegate certain teaching
tasks such that teachers had more capacity to do many of the human development pieces.

(23:40):
I hear about this from my wife, who works with children in public schools, primary schools, and there
are children that have issues that are not noticed by the teacher in a class of 30 kids.
They go the whole year, and it's just that the teacher didn't have the time to recognize
and work on one or two things that might've unlocked that year for that student.

(24:03):
If teachers had that kind of capacity, it would be game changing, it would be amazing.
Now the dark path is the teachers are just like, whew, auto teacher is rolling.
In our era, it would be like, I stuck the film projector reel in and I can go out for
a smoke or sit at the back of the class because it's teaching the class.
There is a human tendency to be like, whew, I'm off the hook, it's running on autopilot.

(24:26):
If optimistically teachers could actually lean into the things that they always really
wanted to get to in teaching, then it is a better world.
Absolutely.
I'm excited to see if we can make that happen.
Yeah.
So you were asking about development in coding.
Me personally, I'm a Vim user, so I'm not using any of the fancy IDEs or anything like

(24:51):
that, but I definitely have Copilot and LLMs wired into my development environment.
So it's a pretty typical mix of in-editor code completion,
along with, I think, quite a lot of prompting and conversational use.

(25:13):
I would say I use that a lot more actually, which is a form of delegation for me.
I code with an intent and so I'm usually writing, and the other one's basically just fancy
autocomplete to me.
It's ah, okay, you got a little ahead.
It's a little quicker than typing.
But the other is recognizing points where it's like, oh, I need this component.

(25:37):
I need this thing.
And particularly when it's something that I feel like I can specify really well, then
I'll just pop over, do that, iterate a bit on that.
Just use it with a snippet of code and then put it back in.
And it's absolutely a multiplier, right?
It's the kind of thing where a day could have been spent crafting this nice little algorithm

(26:01):
for something, but actually you can get it close with AI, refine it a bit, and then move
on to the next thing, which I love.
I love the craft of coding, but I really like the forest.
If I had to choose, I'd choose the forest over the trees.
And so although I like a real deep, good deep dive occasionally, it's nice not to kind of

(26:21):
feel like you got sidelined by, oh shoot, I have to spend the week doing this thing
in the corner to get unblocked.
You know, with the larger thing I'm trying to do, it feels like I can much more operate at the
higher level and say, this is the outcome.
And the smaller, dive-in pieces often get knocked off quicker, which is wonderful

(26:43):
as a product creator.
Yeah.
So how has your experience with entrepreneurship, the business side of things, shaped the way
that you look at the code, that forest over the trees maybe?
I think there's a lot of levels to that.
My business partner, Robert and I will talk about kind of like your orientation towards

(27:05):
the code as an entrepreneur.
There's an element of knowing that all code is disposable on some timeframe.
There are examples of code that's been running for decades, but the majority of stuff that
we're working on now, probably at least in a PCS ship kind of thing is going to be like

(27:26):
completely replaced within some amount of time.
And so yeah, it does orient you a bit towards that.
That said, there's a high cognitive overhead to code that you can't make sense of.
And so there is this line.
You kind of surf this line of like, how can I structure things so that I can still conceptualize

(27:49):
the system and it's not going to bog me down when I think about making changes while not
over-polishing things that might get completely rewritten or thrown out when we make changes
based on what we're learning from customers and the market.
I guess both of those are optimized for future flexibility and velocity.

(28:12):
Yeah.
So both of those things cause you to write in a particular way, with a certain
level of humility of, I know this is not optimal, but it's good enough for today and we can
always go back.
I like that distinction because I feel like it's tempting from a business perspective
to say it could be cheaper to abdicate our responsibility to write good code.

(28:35):
Right.
But if you do that, you can't modify it.
No.
You're stuck.
No.
So it ends up constraining you and reducing your flexibility in the future because you
wrote things in such a way that your only option is to rewrite for certain changes and
you usually don't have the capacity for that.

(28:55):
So then you're restricted in your business decisions to the things that are possible
with your initial poor implementation or whatever.
So yeah, that maximizing flexibility along with velocity, there's some interesting mix
there that keeps teams humming.
It's like one of those things that you just know when you've done this with teams a number

(29:18):
of times, you know, when it feels right.
It's like the hum of the engine or whatever.
You're like, yeah, it feels right-sized, the amount of effort that it takes to deliver that feature.
We're doing it right.
When it kind of feels like, oh, why is it going to be so hard to do that?
Is something off, or why is it taking so long?
I think we're overthinking this one and need to, like, turn down the solve-everything-for-

(29:42):
everyone-for-all-time level that we're operating at.
Yeah, those are some tricky levers that I'm sure we could talk a lot more about, but I'm
curious what you've seen in AI that's changed, I think in the past year or two and why now
is a good time to build something like RunQL or the other projects that you might have

(30:05):
seen through your work with Google or the incubator?
There are a number of dimensions that will continue to progress.
You know, similar to Moore's law affecting the processing power of chips.
Decades ago, products were designed with a set of assumptions about what computers could

(30:25):
do that were, I think in retrospect, too rooted in what they could do then and
not understanding what exponentially increasing processing power would unlock.
We were a little too on the nose with what would have seemed like a lot of waste back
then.
Actually, once you're willing to waste that many CPU cycles on something, more becomes

(30:49):
possible.
There's some dimensions like the breadth of knowledge, things like context window.
You know, context window would be a perfect example of something that we were originally
thinking we have to make design decisions around, because we can only fit so much in a window,
and the windows keep doubling in size.
Response speed, right?
You know, LLMs are slow to respond.

(31:09):
If you need something that responds immediately, you cannot use an LLM.
Response times are coming down.
Ability to reason.
That is increasing.
So, there's all those dimensions that it doesn't make sense to write use cases off based on
the current capabilities.
So, yeah, I think in terms of thinking of ways to apply it and where to jump in, it's

(31:34):
best to start where you're a little ahead of the curve.
Go for something even if the latency and response time is going to be a little too slow now
or you have to constrain, you know, you feed less context in than you want.
Start working with it now because by the time you've learned enough to have your feature

(31:54):
working well, chances are some of those limitations will have been raised, had a doubling.
And that helps you kind of continue that back and forth.
And then you get to know the edges too, right?
You get a feel for, oh, we thought this would be a blocker or it wasn't, or we thought this
would be great.
And we now, you know, know from trying it end to end with customers, the latency really

(32:17):
just kills the experience.
Like they can't, you know, there are things where it's like, ah, it was fun to build,
but not yet.
And you just want to get to those experiments as soon as possible so that you could become
kind of the expert in your domain for what are the opportunities?
What are the constraints?
What are the things that we can do now?
What are the things we're waiting for?

(32:38):
Because no one else will know that for your specific domain, for your specific customers.
You're the one that will have to discover that because it hasn't all been figured out
yet.
And it sounds like access to data, going back to even the reason why RunQL exists, access
to data is a huge limiting factor and that separates domains.

(33:00):
Right.
A combination.
It is access to data, but it is that human element too.
You know, if you have worked with those customers or in that domain for a decade or
more, I really do believe that there's certain judgments and intuitions that you'll
be able to make that are unique as a human creator involved in this process, as well

(33:21):
as you have access to data that may not be publicly available.
And so the combination of the two give you a unique opportunity, unique insight into
a problem to solve.
Yeah.
That to me, it can feel like if you haven't gotten into AI yet, maybe the opportunity
has passed you by.
But it sounds like what you're saying is take that domain expertise that you've got and

(33:44):
then get into AI and put those two together.
Oh, absolutely.
A bit.
Yeah.
For a number of reasons, I really hate that vibe that it's too late.
I even started to feel like there was something off when there was that kind of furor around

(34:07):
like you need to be experimenting with AI like right now, otherwise you will be left
behind.
I think on one level, no, AI is going to evolve quickly and people will be coming into the
market at all times.
And if you skipped a couple of stages, you will be able to join at some later stage.

(34:31):
It's just evolving at that pace.
But on the other side, I just don't think that's a helpful reason or way to do it.
Because I think that kind of thinking leads to the, oh, we need to launch an AI.
Like where's the AI feature on our roadmap for Q4?
We need it.
What can we do?
And then it's like, we need to put in an AI-centric feature and AI is what's driving the

(34:55):
feature.
That's rarely good product management.
It's not good product design.
It's so much better to say we're serving a customer.
We've identified a problem.
It's either a problem that we're already solving, but whoa, it could be that much better if
we were able to unlock an AI component to it that actually made it better than we've

(35:19):
ever been able to make it before.
Great.
Apply it there.
Or we've been avoiding or we've thought we couldn't solve problem B. Our customers have
it.
We just have always said, sorry, that's the one.
We don't solve that problem.
If AI enables you to actually enter into that and solve a new problem, again, fantastic
reason to do it.

(35:40):
I really would wait until you have those, even from the learning standpoint, because
it's so much better to learn with something real, like a real world thing.
It forces you to learn the real edges rather than the theoretical pieces.
The best way to learn new things is just applying them to something worthwhile.
Don't get caught up in the hype.

(36:01):
No, no, especially the guilt part, right?
Yeah, no way.
Have fun with it instead.
That's a great point.
It is fun.
And personally, it just brings in context that I never would have thought about.
I love it.
Yeah, agreed.
So just maybe as our last thought, would you like to share anything about what you're working

(36:24):
on right now or how we can connect with you?
Absolutely.
Excitingly, we have just made public releases of some of our products that we were working
with a smaller set of customers on.
So anyone can go to runql.com.
You can use either the web or download desktop versions of our app, which is particularly

(36:49):
for data pros.
Now, a data pro is really anyone who is working with databases.
These include SQL and NoSQL databases.
One thing we learned working with our early customers is while we were originally targeting
more data analyst roles, it turns out that the roles are very fluid and teams often want

(37:11):
to use the same tool.
And so we ended up adding in all the pieces that say a data engineer or even like a regular
software developer like myself would use because we are also working with databases all the
time.
So the invitation is open if you work with databases at all, you can download for free

(37:32):
and use our apps, and we think that they could be the best modern SQL IDE that you've used
for doing your work, with the upside that if you work on a team or you often go back and
reuse a lot of your queries, that's where we're really going to add the enhancement:
oh, I've looked for this kind of information before.

(37:56):
I've done this thing before.
We help you find that and reuse it rather than rewriting the whole thing, share it with your
team, and save you a lot of that work, so you can reuse the hard work that you did
in order to get the insights.
So can I use it to abdicate my responsibility when the CEO asks, do these two things correlate?

(38:18):
You know, I think by design, we maybe even made that a little hard.
You should try to force it to see if it'll do that.
We do have an AI agent called Runa and Runa is there primarily to help you modify your
query, write a query, do work for you to optimize the query, that kind of thing.
But you could probably try to push the boundaries and say, just give me the answer.

(38:43):
What should I do?
Let me know.
Get some first-hand domain expertise and push the boundaries.
Yeah.
That sounds like fun.
Thanks, Kris.
And thank you for joining us here today.
It was awesome to have you on.
Absolutely.
Daniel, thank you so much for running this podcast.
It's an important conversation and I've enjoyed listening to the other guests and the things

(39:06):
that I'm learning from them.
And so, yeah, I really appreciate you bringing folks like us together to talk about these
things so that we can evolve the field together.
Thanks for listening.
I made this podcast because I want to be the person at the city gate to talk to every person
coming in and out doing great things with AI and find out what and why, and then share

(39:30):
the learnings with everyone else.
It would mean a lot if you could share the episode with someone that you think would
like it.
And if you know someone who would be a great person for me to talk to, let me know.
Please reach out to me at Daniel Manary on LinkedIn or shoot an email to daniel@manary.haus,
which is Daniel at M-A-N-A-R-Y dot H-A-U-S.

(39:57):
Thanks for listening.