
July 9, 2025 • 51 mins


In this episode, we sit down with Bogdan Knezevic, co-founder and CEO of Kaleidoscope, to talk about a growing challenge in life sciences R&D: making smart decisions when data is scattered across teams, tools, and partners.

Bogdan explains why disconnected systems lead to costly delays, duplicate experiments, and missed opportunities. He shares how the shift from academic to industry research, where projects are shared, not siloed, requires better workflows, clearer handoffs, and more thoughtful tools.

We discuss:

  • Why real-time access to decision-ready data matters more than connecting every system
  • How delays between experiments quietly waste months of progress
  • The hidden cost of repeating work because past data is hard to find
  • Why user-friendly tools are just as important as powerful ones
  • How better data management can strengthen trust with partners and investors

If your organization is working to scale R&D, improve collaboration, or simply make better use of the data you already have, this conversation is for you. Even small changes today can lead to huge gains tomorrow.

Learn more about Kaleidoscope Bio at https://www.kaleidoscope.bio/

Connect with Bogdan at https://www.linkedin.com/in/bogdanknezevic/

Ready to assess your organization’s efficiency? Connect with us at leanbydesign@sigmalabconsulting.com to uncover high-impact improvement opportunities. 🚀

Learn more about us by visiting: https://sigmalabconsulting.com/

Want our thoughts on a specific topic? Looking to sponsor this podcast so we can keep generating content? Or maybe you have an idea and want to be on our show? Fill out our Interest Form and share your thoughts.


Episode Transcript

Available transcripts are automatically generated. Complete accuracy is not guaranteed.
Speaker 1 (00:00):
Thanks for joining us on another episode of Lean by Design. Today we have a really special guest, somebody I connected to years ago, before we started Sigma Lab Consulting, and we've continued to maintain that relationship. I'm excited to see the things that he's doing, and so today we are here to talk to Bogdan Knezevic, who is a co-founder

(00:21):
and CEO of Kaleidoscope. I met him a long time ago when he was initially piloting it and wanted to get some feedback. Honestly, Lawrence, I probably gave him much harsher feedback because at the time, I think I was getting a lot of emails asking me, can you look at our platform? Can you do this? Can you do that? I had a little bit of a harsh response, but I got to hand it

(00:44):
to you, Bogdan. You carried that so well, and then I was like, okay, this is a real guy, this is a real company here that he's trying to build. Like, let me, you know, give him a little bit more of my time. I think a part of what you and I do, Lawrence, is we're really interested in what other people are doing and, however we can impart some knowledge to somebody to help their progress

(01:05):
at their company. You know, that's something that we like to do. We like to really be a part of that. Bogdan comes with over 15 years of academic and industry experience, spanning immunology, neuroscience, regenerative medicine, and genomics, with an emphasis on preclinical drug discovery. So it's very similar to our space, where we're coming from the labs and seeing things that are in front of us and going, why

(01:29):
isn't anybody doing anything about this? So today we're really excited to have a conversation with Bogdan, because we're going to start talking about those R&D decisions and how well those decisions are being made. When we're looking at the scientific data, do we have the data that we need to make the right decisions at the right time? Are we missing things, or are we just going based on a hunch

(01:51):
because of some data that we saw previously, without really looking at the trends? So this is a big topic and, I think, a big challenge with a lot of younger companies that may be really trying to hunt for that one piece of data that they can string through a set of hypotheses and really drive their operations. So I'm excited. Bogdan, welcome, and thank you for joining us

(02:14):
today.

Speaker 2 (02:15):
Thanks for having me, Oscar. And I remember those conversations. I didn't have any bad memory of it, so I'll take your word that it was harsh feedback. But it was really helpful speaking to you and to people like you that are generous with their time, because obviously all that I care about, and all that we care about at Kaleidoscope, is solving things that actually matter to people. And so understanding what that is, and what is maybe

(02:37):
perceived as important but not actually important, and the reverse, is really useful. So I appreciate, obviously, the time you spent then and since then. So thanks for having me.

Speaker 1 (02:48):
Absolutely, and I'm glad that you don't have any bad feelings. Sometimes I think about these things. I'm like, how did I meet them? And then I go, wow, I was very crude. But I'm glad that we are here and having this opportunity to talk to each other. When we started thinking about R&D and the pace of research, you're constantly looking for updated data.

(03:08):
It has to be scrubbed, it has to be massaged, presented a certain way. When we talk about our ability to make high-stakes decisions, why is this such an issue when it comes to R&D? Why is it that we are making decisions that are incomplete in terms of the context or in terms of what data is telling us,

(03:30):
or that we have delays, et cetera? Why is this such a common issue?

Speaker 2 (03:36):
Yeah, that's a really meaty question. I wish I had a simple answer. It comes down to an understanding of what you're trying to do when you're running R&D at an org or cross-team level, which means that it's reliant on data that might be

(04:08):
distributed heavily across different people and teams and tools. And so when you bring those two things together, the fact that it's this distributed data and that every data point has its full milieu of context behind it, it becomes a very, very challenging thing to manage, especially in a world where you're relying on humans and human time and human memory.

(04:32):
If you don't have the right combination of behaviors, processes and tools, if you don't have those elements, you're inevitably doomed to let things slip through the cracks, and those kinds of things compound over time and over the course of R&D.

Speaker 1 (04:47):
They really do. You know, as you mentioned just now, it's sort of this series of permutations. It's just hypothesis after hypothesis after hypothesis, depending on where the data is going, and understanding that the data is there, but the context may not be there, because we are dealing with much more complexity in the projects

(05:11):
that we're doing. We're doing projects that really span, you know, data and strategies from other functions that help us determine which direction we want to take the research. And it is in that framework that we've all grown to love or hate where those opportunities often get missed, creating gaps in knowledge.

(05:33):
And well, why did we go in this direction? Why did we run that? I thought we had the conversation with this team and we realized that that was not the best move forward. But things are not necessarily documented the right way. Perhaps the communication, as you said, as it changed from hand to hand, the communication of a new strategy, was not really discussed. So are these issues that you see only in earlier companies,

(05:59):
growth companies, or even at the enterprise level?

Speaker 2 (06:02):
I think we see them across the board. I think there's only more at stake and more opportunity as you scale, because it goes from the problem being, you know, simply, and I use quotes here, how do we manage handoffs or make faster decisions, to one that is, how do we best leverage the corpus of knowledge we've built? And anyone who has been in biotech and worked in biotech

(06:23):
knows that after a certain point you have accrued so much data and knowledge that you very easily lose the ability to remember what was done, or what was deprioritized, or did someone leave in the middle of a certain screening campaign being run. And that matters now, not just for you to do the work you know

(06:45):
you have to do for a given compound, but also because you might be sitting on a treasure trove of knowledge and data that might lead to entirely new programs, more pipelines being launched, that you just literally would not know about, because six months ago you didn't get the result back and forgot. And now that compound is just sitting there, shelved.

(07:05):
There are entire businesses now that are built around this premise of, let's find things that are collecting dust. And I think that there's just a tremendous opportunity for companies internally to improve how they stay on top of their decision-relevant data and how they get the maximum value they can out of any data package that they have.

Speaker 1 (07:24):
I think it's pretty remarkable, as you're explaining it. When you really start to dig within your role and your function, you start to see things that were created that you've never known about. You start to see initiatives that started and stopped, because there's this inherent lack of continuity and lack of knowledge base within organizations. There was a point in my life where I thought that going into

(07:46):
biopharma was going to be similar to my dad at NASA, working 40 years. But in some cases these are revolving doors. People are taking roles to work two or three years, to then move on to another organization to take on a different responsibility. So, especially when you see this for people within the life sciences, if you go from academia through industry, you

(08:07):
have technicians, graduate students, postdocs and researchers on grants, and they're there for blips of time. But you could very well be sitting on a treasure trove, or at the very least have a flow of work that has been done that gives you more information on the direction that you should be going, preventing any duplication of experimentation.

(08:29):
That's really just going to slow down your bottom line.

Speaker 2 (08:33):
Yeah, and all these things are linked together. So I like that you brought up the revolving-door analogy, because the time horizon on R&D is massive, several years on the low end. And so when the pieces are all interlinked and you have that kind of movement, then you can't as easily say, okay, well, someone will be here, even if they're here for just a year, and they're really great,

(08:55):
it's self-contained work, and it's one and done. We use this analogy sometimes when people ask us about project management and what's different about project management in life science R&D versus elsewhere. I think a big thing is that things are just not as ephemeral. In biotech you're not just doing a checklist and completing

(09:15):
it and moving on. You're going to have these artifacts that are all interrelated, that might come up repeatedly in the future, and so you're not just trying to do something for the sake of forgetting about it. You're going to need to access it in some way, shape or form later.

Speaker 1 (09:28):
I love that perspective. It fits well with the things that, Lawrence, you and I have been doing in the projects that we've been running with organizations, where you really try to build a system that you can reference. Because think about how often project teams in R&D, and in other spaces, are really collecting things but never take a moment to reflect. You never spend an hour

(09:51):
or two to say, okay, let's look back at what we're doing. All you're doing is looking at the very end, at what happened at the finish line, when really it's all about the journey. What did we learn? What did we improve? How did we get faster? How did our decisions become more clear, or lack thereof? And then what can we do in the next one?

(10:12):
There's so much power in reflection, and there's a saying that experience is not wisdom; your ability to reflect on that experience is where you get that wisdom. And I think that when people come into the organization, having some semblance of what has happened before is really going to prime you in the way that you make decisions, in the way that you set up your R&D space and drive

(10:33):
toward those goals that seem like a checklist. But to your point, it's not. It's not a checklist. It's being able to look at the past and, in some ways, try to predict the future.

Speaker 2 (10:42):
The iteration piece is exciting to me because it manifests in slightly different ways depending on the stage of company. So I think a lot of early-stage companies really need to be embracing the fact that your goal isn't to get 10 half-baked programs to clinic. Your goal is to get your main lead candidate and program to clinic. And so what you actually want to be doing is testing and

(11:05):
iterating very rapidly in the early days, so that you know what you can kill, and kill it fast, and then focus more and more resource on the things that are looking good, so that when you get to clinic you're better positioned to take on the kind of challenges that come with clinical trials. So that's on the early-company side. So, having that awareness of, what are we iterating on?

(11:25):
What is the decision? Are we getting better? And then the flip side is, if you're a mature company, maybe you now have clinical-stage assets or a growing portfolio of programs. You're two, three, four hundred people or more. Of course, if you're a global top-10 pharma, that goes without saying. But if you're one of these now-mature, mid-sized biotechs, this becomes how you build an engine that keeps

(11:52):
innovating and keeps generating novel IP. The improvements in iteration are now at a macro, meta level. It's no longer, we get one thing to clinic and it's okay. Now it's, well, this is the engine, so how do we get it to be as efficient as possible? Because it's still immensely expensive to run these iterations. So you're going to need to show, especially if you're public, you're going to need to show your board,

(12:12):
hey, this is how we're getting either better or cheaper over time. Otherwise, you're not really driving value across the org.

Speaker 3 (12:21):
To your point, the engine itself, at some point in time, was designed for specific outcomes. And so, as your portfolio is growing, your team is changing, and decisions are being made, that machine needs to get modified over time, right? And I think you guys have touched on this idea that, obviously, all of us are generating data all the time, and that data is being used to make decisions.

(12:42):
Right, but oftentimes that data is not very accessible, and so, even if that data exists, somebody can't see it, or maybe can't use it for what they need it for. So there's the accessibility piece, and then there's the organization of the data itself. Right, when you are generating the data, are you leaving breadcrumbs for people to be able to follow if you are not there or if you move on to something else?

(13:03):
And those are really important pieces for making high-quality decisions. And, Bogdan, from your experience working with clients, and from what your company does, what are some of the flags when you come across a growth company versus an enterprise company that may be dealing with some of these issues? It sounds like there are different sources of data. It's one thing to have it all connect within a company, but we

(13:26):
all know that we have partners outside of the company itself too, and you have to make decisions off of other people, people that are outside of your organization. So how do you help them navigate that? And what's one of those red flags where you talk to a client and say, okay, hold on, let's just align on what the actual problem is here?

Speaker 2 (13:44):
Yeah, I like that question. A couple of things came to mind as you were talking. One is a pretty simple question: what's your best-performing compound? You should be able to answer that without having to email your six reports, who then have to dig into their data. If you can't, that's a problem, and it's a very common thing in this space

(14:05):
that it's a hard question to answer. And yes, there are going to be follow-ups, like, how do you define best-performing, whatever. Sure, you can define it how you want, but if you don't have an easy way to do that without a bunch of manual back and forth, then that's an issue. Another one that I see is downtime, this concept of downtime being really expensive in biotech, which is when you

(14:27):
actually know what the next thing is that you could be doing, or you have the information there, it's just that nothing is happening. I've seen this manifest where teams will, you know, get together. Let's say two distinct teams, biology and chemistry or some kind of combination, will get together maybe once a month to go over things that are relevant to both teams.

(14:47):
I've seen this happen a couple of times, where one team will ask, so when can we have those designs ready so that we can go and screen? What do you mean? They've been ready for, like, six weeks now. And you realize that someone had saved a file somewhere, but there wasn't even a simple system. I'm not talking about all the powerful things we can do with Kaleidoscope, that aside, but even a process by which, okay,

(15:08):
when this completes, notify this person, or post it here, or add it here. And so when you think about all the ways that kind of downtime can happen, and then you factor in external collaborators and people that you're requesting work from that are not in your organization, and the mistakes or delays that can happen there, you're now stacking or compounding delays over time.

(15:29):
So in the course of a year, this could easily be a quarter of time. That's just from being better at visibility into who's done what, and whether you can action the data as soon as you have it.

Speaker 1 (15:41):
You've touched on so many things that, for me, bring a little bit of a visceral response, because I've been in those situations too, where you're sort of sitting around going, didn't anybody actually ask, where's that data? For the last six weeks, what have we been doing? And usually you're talking about R&D teams that are working

(16:04):
on multiple projects, so they're not just doing one, and that's a transition that happens. When you talk about root cause, why do these things happen? Well, I think in some cases we were trained to be like that. If you think about where training starts, typically it's going to be in academia, and you are going to have a self-contained project where I get to determine the

(16:25):
experiments that I'm going to do. Or, if you're a technician, you're working with a postdoc or maybe a graduate student, so you're just really helping them on whatever they need, whenever they need it. Now that you're moving into an industry role, you need to talk about these things. You need to alert people when things have been completed, or when you're near completion, as a project manager would, because you need to make sure

(16:46):
that the other side is ready to receive what you are doing. If you tell them last minute, they may say, oh, I didn't set up any experiments, so it's going to take four days. How often is that, Bogdan? Oh, I didn't grow my cells for that experiment because I didn't know you were ready, so I just split them yesterday, so I'll do them again

(17:06):
in three days, to set up the experiment that I can do two days later. This happens all the time. And then you start to bring in partners, CROs, other organizations that probably have a lead time, and no one really has a picture of the flow of the timing. As you said before,

(17:28):
it's sort of this, I'm an individual contributor, and okay, now that I'm done, I'm going to move on. And it's just left on the desk, it's left in the cloud. In these earlier companies it's rare, in my experience, to find a research operations specialist, a project manager, something of that nature, in R&D spaces, because they don't need it,

(17:49):
apparently. But these are the things that happen. We are not trained as scientists to pass the baton, because a majority of what's in academia is self-contained. It's the experiments you're running, whether you're supporting somebody or you're a grad student with your own things to do. You are creating the pace. In these scenarios, everybody is a part of that pace, and you

(18:10):
lose motivation. You lose those crisp ideas, where you're ready to knock out the next one. You get sidetracked, you get distracted. And when we talk about why these things happen, aside from the tools that we're using, there's a little bit of ownership that gets lost, whether it's the expectations or behavior that has not changed or has not been communicated across the

(18:33):
organization.

Speaker 2 (18:36):
Yeah, spot on. I think the academic culture piece is a big one, and there are a lot of layers to it. Everything you just said, 100%. There's also the aspect of time frames and time horizons: you have funding for two or three or four years guaranteed, and you're actually chasing the thing that's most intellectually stimulating for you. The emphasis is on novelty, but not enough on reproducibility.

(18:57):
I don't need to talk about the reproducibility crisis, but all the incentives reward novelty. Obviously, novelty is at the heart of why biotechs sprout, but I think the realization that hits a lot of people like a wall is, when you're working in biotech, your biotech is the engine to commercialize IP. It's not the place to go roam blue skies and follow

(19:20):
curiosities. We've seen this pattern over the last couple of decades of how we've moved away from a world where pharma does everything to one where really it's the academic labs doing the fully innovative, explorative testing, and then there's something, an inkling or a seedling, that you can take and commercialize, and so you create a biotech around that. And then you have these specialist CROs and CDMOs and

(19:41):
other orgs that you collaborate with to support the proof points you need, things that are not core to your IP but are important in building that package. And then pharma gets involved when you're phase one, phase two, wherever it is. So this kind of distributed nature, this whole commercialization engine, is something that is very foreign to academics. It definitely was for me as well. It's a whole other pace of work when you come out of that world.

(20:05):
I think that is a cultural aspect that is really important to educate people on and make them aware of, unless you're joining a super, super, super early discovery team at a massive organization and you have free rein. Of course that's different, but I'm talking about the average biotech.

Speaker 1 (20:23):
It takes me back to when I had that transition. When I came in from academia, I had this, is this what industry is like, kind of perspective, because things were not really connected. My understanding was that the reason it's so hard to get into the industry, why they want experience, is because it's, you know, super complex and it's fast-paced and

(20:46):
everybody's... And I get there and I'm like, I see a bunch of people at their computers, so I'm like, okay, this is different science than I'm used to, and no one communicating. There was more through emails, but then it was almost like everyone was doing their own thing, and then, at the last minute, it was a mad-dash effort to develop insights. Let's try to pull these things together so that we can have a

(21:06):
strategy for the next stage. It was underwhelming, and something that I've grown to understand now. But it expanded your horizons of awareness: who should be aware of the work that you're doing, regardless of the fact that your office and your lab are over there? I was in a lab down the hallway that I could easily have just disappeared from every day, because there was a back exit to the parking

(21:27):
lot. But my work wasn't just my work; it was everybody else around me that was working with it as well. And I think that understanding and that development come with time, but I don't think there's enough attention paid to it. The habits that we bring in, or lack thereof, contribute to some of that slowdown, contribute to some of that disjointed nature, where you might be coupling new scientists

(21:47):
with folks that have been doing it for 15 years, that have expectations that have never been communicated.

Speaker 3 (21:53):
There's a balance, though, right? Because we went from, even 15, 20 years ago, a world where a lot of this stuff wasn't connected, to now opening the floodgates, where you have this massive flow of information. And, if you guys are familiar with the paradox of choice, when you have too many options, people have difficulty making decisions. So not everything that people are communicating to you is

(22:14):
actually useful. I think there needs to be some sort of boundary: okay, if we're going to make this decision, we should actually be considering A, B and C. And what is the balance between what things you include and what you don't include? And I'm sure, Bogdan, you come across this, because your company is focused on connecting these systems. So how do you help your clients out, and how should the industry really think about what things should be connected?

(22:36):
And because it's a difficult choice, right? There are trade-offs, and I would imagine things get very expensive when you try to connect every single thing you're using, and then you look at it and go, we don't even use most of this stuff for what we need it for. You know, how do you think about that from your company lens and, just broadly speaking, for the industry as a whole?

Speaker 2 (22:56):
That's a really great question. We took a very clear stance on this early. I'll caveat by saying there's lots of tooling out there that will help with various things, so this is just how we chose to approach it, which is, focus on the data you need for the decisions that you're making as a team.
(23:16):
So we have this concept of, we call them data slices as a technical term. It's a slice because it might cut across a bunch of contexts or teams or places, but it's the data slices that are important for whatever phase of work you're in. All of the other data is really important, and all the other data compounds and drives that, and all the other data will be audited at some point, of course. But in our worldview,

(23:37):
it's, what data do you need to reach a decision? And so a big thing with the best teams we work with, and also the ones that start in an okay place and report huge gains, is that they really can leverage, yes, our tool, but also just the mindset that we bring, which is, define the thing that you need. What's the purpose, and what's the decision

(23:59):
you're trying to make? Start with that, and then work backwards and understand, okay, well, this is the goal, for these reasons. To achieve that goal, we need, let's say, this data package. So what experiments or studies or assays or things do we need to run to achieve this data package? Let's plan those out, and then you go and execute on the science. It's been interesting to see how people interact with our platform, where you have the teams that are really well

(24:19):
established, and for them it's figuring out, okay, great, how do we expose the views you need and make you work faster? But you also have the teams where Kaleidoscope becomes their framework for defining what we're doing and why. And so after implementing, we've noticed these really great, unexpected, incidental, indirect results, which is, the team is no longer planning, you know, tomorrow's experiment

(24:42):
today. They have clarity on, this is what we're trying to achieve this month or this quarter, and this is why it matters. So I, as an individual, understand that the work I'm doing contributes to this white space, or contributes to novel data here, or contributes to fixing this thing that doesn't look good here. You have more of this alignment. Everyone from the research associate up to the CSO feels

(25:06):
like they're on the same page when working in this way. Lawrence, that's kind of answering what you asked, but it's not about bringing everything together all the time, all in one place. It's being methodical and principled: what are you trying to achieve, and what do you need in order to achieve it?

Speaker 3 (25:22):
Oscar and I have been really talking about the decisions that these companies have to make. What is the level of risk associated with it? Because if you're trying to make a really risky decision, you'd say, okay, let's just pick and choose what we want, because we still don't know what that eventual target is going to be.

(25:50):
But it depends on the risk level of the decision-making that you are in. Every company is different, depending on the life cycle of the asset that you're developing. There are some things where we'll just have a little bit of data for this, versus other things that are much further developed, where it's, no, we need to have this, this is a non-negotiable item that we need to have. Oscar, do you have any opinion on this as well?

Speaker 1 (26:12):
I think what we're talking about here is the difference in how some R&D groups are set up. Either they are looking for a specific target, something that specifically addresses this disease and this patient population because of X, Y, Z; or, in other cases, you may be looking at folks that are developing platforms, where

(26:32):
they're trying to develop a new technology. We don't exactly know where it's going to go. It's going to have a lot of different things and a lot of different features in it, but we want to create a platform that can do X. So you sort of get there, and then you're going to have to start making decisions of, all right, where does it make sense to... How can we partner this with people?

(26:53):
Where should we be doing our own internal experimentation? Your goals are going to shift, your goals are going to change, and I think it's really important for folks to understand that, even though you're setting up a goal, trying to reach a target that might be, like, a year out, the way you get there might be different than what you think. Really having the ability to pull these things together, and running the experiments that answer the right questions, and

(27:15):
that's the other part that we don't really talk about. We go with a hypothesis without really saying, what are the different questions that we want to answer with the $2 million that we're about to spend? You know, looking from high-level to very specific. Obviously, once you get further into development, these are the questions that you're starting to see now as being a pinnacle of R&D research:
(27:37):
pinnacle to R&D research.
your target product profile and your target compound profile. You know, what is the compound, what should the compound do? And, as a product, what should it do for people? Those are two very different questions that you eventually need to answer when you're doing drug development, because they're going to take you down certain paths.

(27:57):
You need to make sure that your compound is going to nail it, and then you also need to figure out, well, how is this going to turn into the right product for those patients? Are they elderly? Do they have trouble accessing oral medication, versus making this an IV, versus it being inhaled? Those are all vastly different directions, and they're going to cost a little bit of money to get that answer.

Speaker 2 (28:18):
They're also going to be very intertwined with what your teams are executing on in the lab. That's something that we also try and really emphasize, which is, you need to keep a dynamic view of what that target profile is. And so we have, you know, these leaderboard dashboards in Kaleidoscope. But when we get this in front of people who've worked in

(28:39):
pharma, one of the first things they say is, oh, this is basically a way to benchmark against a target profile, because you can spin up whatever compounds you're trying to compare, and you pick the parameters you care to track, and you see them side by side, compared across all iterations of work that you've done. And so it's interesting. Again, something that I

(29:01):
learned through just working in the field and through the customers that we serve is, there are going to be things that change, in either your understanding or in the competitive landscape, and it might be a very strategic or commercial change. You have to then propagate that strategic direction through to what your R&D team is doing, because if you change the

(29:22):
delivery mechanism, or if you change something about that for commercial or strategic reasons, well, now your team has to go and build or optimize for different things. So, having a way to connect that, and not just a way to propagate that information backwards, but then to also know, do we actually have things that fit this new profile?

(29:43):
Well, because we've now spent four years doing R&D, we might actually already have candidates that are excellent fits, and you wouldn't have known that a priori, because that wasn't the profile that you were originally chasing. So with a lot of this stuff, it becomes very clear, at least to me and to the people I talk with, why you need to have good systems in place when it comes

(30:06):
to data.

Speaker 1 (30:12):
And so what you're suggesting is, when you have these strategic changes that cascade from leadership, from your manager, et cetera, it might put you in a position to go back into your system and say, you know what, this is where the data has to bifurcate, depending on which direction we go. However, with the core data that we have, now taking in this new, you know, target product profile, we can see, you know what, we have 50 constructs that actually match up with

(30:34):
what needs to happen over here. Let's resurrect those and see how far we can go. I mean, I think that's a great example of how, you know, understanding not only your data, but pulling in those things and maintaining that historical knowledge and data for the company, can save you a lot of money and a lot of time compared to going back and restarting the screening.

(30:55):
The lucky ones are the ones that can nail that strategy from the beginning and say, we might go in three directions; all three need this set of data, so let's go there. As we continue to get more, we might do a side experiment here and there to see if we get a blip that we can then carry out. How long has Kaleidoscope been active?

Speaker 2 (31:15):
We started the company at the end of 2021.

Speaker 1 (31:19):
In the time since then, can you share a story of an organization that you went into that was perhaps as lost or disconnected, where you were able to help them take a more thoughtful approach to establishing better systems and better processes, and hopefully closing those gaps?

Speaker 2 (31:38):
The first thing that right away popped into my head of where I experienced it most was just in my own project, and this was in grad school. So this is where it became painfully obvious to me; the anecdote here is that I experienced it myself. And then, as soon as I started talking to people immediately around me, the amount of times where everyone

(32:00):
was nodding along and saying, yep, and sharing stories, was pretty eye-opening. But there were examples of things like a colleague who was interested in sequencing a number of MS patients, because there was really exciting work happening on the RNA sequencing and genomic side, and spending

(32:20):
whatever it was, three, four, five months prepping that data set, to go and pair with the hospital's patient recruitment and get blood and get consent, and all that. Then finding out at a team meeting, the once-a-quarter team meeting that happens, that they had already pre-generated a hundred patients' worth of samples from that exact target

(32:41):
cohort, and it was in a freezer, and a spreadsheet somewhere in some folder had logged that the samples are in this freezer. But good luck finding that when you're at an institute as big as the Wellcome Centre at Oxford. So the work that went into the redundant work, the time and the resources and the energy and the mobilization of other people, and patients being involved,

(33:02):
all of that, obviously, is extremely painful to grapple with. So that was maybe my most acute example. And then, where I've seen us be able to shift teams, I think one is this example that I gave earlier of managing the handoffs. And this happens quite often: individual teams will have great internal tooling for what data they have.

(33:26):
Maybe they have an electronic lab notebook where the molecular biologist can look, or the computational bio team has their own database they can query pretty easily. They have these great tools there. But what about when you're relying on work from someone else, or when you're requesting work? I've come across teams that have tried to then, Lawrence, to your point earlier, build more, integrate more, try and

(33:48):
do that. But then it becomes very unruly, because now you are trying to effectively maintain and build software systems in-house yourself. You're now also thinking, okay, well, wait, how do I notify people? So I have to have a notification system. And then, wait, how do I manage permissions and authorization, and how do I make sure that the wrong person doesn't get access to the wrong data? And so teams that shared that they did a lot of that

(34:09):
realized, oh my God, we're incurring so much tech debt now and putting our IP at risk, and it's just not something that you want to touch. The alternative is, well, I guess, keep pumping money into that, or go back to the really broken way, where you might go six weeks without knowing you have data that you can action. And so there are examples like that where we've come in and

(34:29):
given teams, again, something that sounds very straightforward, but that's the whole point, which is, great, use us to request the work that you need, find the data that you need for the go/no-go decision, and then, as you get it, have a way to action it and say, great, this is a go, I'm going to request it from this vendor. And then they get a notification.

(34:52):
It goes to their email. Again, we're big believers in, don't force too much behavior change if you don't have to; meet people where they are. So it goes to their email. These are, like, major companies in the world, so we want to stick with what they're used to. They open a simple page. They have a way to change the status, drop a file in, add a comment, the same things that they would do via these sprawling email threads. They can just do it on one page, hit submit, and now that notifies the biotech: hey, you have data, it was marked as complete.

(35:13):
Here are annotations for it. Here's the file; you can review it and see. Or you can have us do more automatic QCing, but you can have a human come in and review it and say, great, this has all the things that I expected, I'm going to mark this as complete. Or, no, this was a mistake, we have to understand this better. But the point is, you're tightening that feedback loop immensely and going from a world where you'd have to do that

(35:36):
yourself, or risk losing three, four months a year, to one where it's managed for you and you can focus on executing the stuff that you need to execute in-house.

Speaker 3 (35:47):
Thinking about just how the users input the data is very important, because if you make it very hard, people are less likely to put in the right data. They get frustrated, they get mad, they shut their laptops down, and we know the end of that story. So it's beautifully said that you guys have thought a lot about what that flow of information is, not just from the standpoint of management or decision makers, but also at the other end. If you're requesting information from these people,

(36:08):
you've got to make it easy. You can't make it some difficult thing where you have to jump through six different hoops and five different doors; you're just not going to get the same result. That's a really important piece that we try to drive home as well, and one thing that we do is try to make things simpler, because usually that's just better. You try not to make things complicated, because people are naturally resistant to change.

Speaker 2 (36:28):
A hundred percent. I can't take credit for this. This is all my design co-founder, David, and the brilliance that he brings, which is, how do you hide away all the complexity you can, because humans hate complexity. He's a designer by training who then moved into product, and who then moved into health and life science. But it's been really cool, and a big privilege, to see how he thinks through those things, things that seem very simple but

(36:51):
can make big changes. Like, I remember when we were first productizing how you annotate why you made a decision. We had a flow that was still pretty quick; I think it was two or three screens that you would click through, and we'd ask you different prompts. And then David, with his kind of obsessive mind of, make it simpler, make it simpler, make

(37:13):
it simpler. We got to a point where now it's add data point, and the screen just highlights anything that you could click that's a data point, and you can just scroll and click the things that you want, and it pins them. And so we've removed the idea of these click-throughs, because we know humans are going to start it and be like, oh, I don't have time for this, and then leave it or do it wrong. And so just making it dead simple, so that it's just like

(37:33):
the way you would intuitively do it, if someone at the table presented you with things and said, what do you like? And you're like, that thing. That's the magic that we tried to bottle, and I love that we're doing that as a product-first team.

Speaker 1 (37:45):
I mean, it's such a great approach, because you're really taking a human-centered design approach. There are people that are at the front of this that have to do this work. You know, the science is already complicated. The strategy is complicated because, let's be honest, there's a lot of data we wish was available, but it's not available. So you have to do it. You have to come up with the right experiment, you have to

(38:07):
come up with the right hypothesis to answer. And I think what you're expressing here is a refound vision that things don't have to be so complicated. And we're used to complicated. Why? Because humans are creatures of survival. So if we get a new position and we go, wow, this place has really garbage process, what do we end up doing?

(38:28):
Making our own. We decide, okay, now, when I get this document, I'm going to do this, I'm going to transform it this way, and then I'm going to create something else, blah, blah, blah. And now you start getting all of these, like, process ninjas that are all coming up with their own process, expanding the variability in the output that you're getting. Oh yeah, this is cool. Like, can you do what David did?

(38:52):
He did something really cool. Can you...? You know, you start to see those questions all the time, and it slows down the progress. It slows down the continuity that we talked about in the beginning, the knowledge base. Like, if you can't tell me how you're going to do what you're going to do, how can you expect anybody else to follow suit? How can you expect the consistency to be there? How can you expect the project to progress? When we stop trying

(39:15):
to figure out the most perfect thing and we start looking at simplicity, we create the right vision. What are you seeing in your clients, in your customers? What are you seeing change in the way that they work?

Speaker 2 (39:28):
I think a lot of it is understanding what the purpose of the work that you're doing is, and why, so you get that kind of autonomy. We're all gunning for the same goal, we can get together as a team to make major decisions, and otherwise I have clarity on why the work I'm doing is important, how it contributes, and how I can contribute better. So I think that's a human-level change that we see happening. There's all the productivity stuff that I mentioned, which is

(39:51):
obviously immensely important, especially when markets are what they are today, where you're under the gun to deliver as fast as possible. So saving a day, a week or three months a year is massive, and it can be life or death. Another interesting byproduct has been, and this emerged as a natural use case, with the teams that are really leaned in and really embrace this mindset, some have

(40:12):
shared that their, you know, C-suite will ask, why is this not a Kaleidoscope dashboard? Which, I love to hear that, that it's an internal household name that way. But some of these companies have shared that the huge value here is just the distilled, where are we, what's the state of our affairs when it comes to R&D, and what data do we have to support that? They've then started sharing that also with their pharma

(40:34):
partners, their investors, their board, different stakeholders that are external to the org. And the result of that has been, in the pharma-partner case, whoa, your data is really easy to understand and navigate. And the industry is so rapport- and trust-oriented that, if that's the effect you can have, well, now you're that much more likely to work with that pharma partner, for them to

(40:57):
pay you major dollars to work with you on assets or co-development or a platform or whatever it is. And so those kinds of downstream effects have been really cool to see, because now it's the company putting its best foot forward. And if you fail because the science doesn't work out, in my view, that's fine, because that's the risk that you're taking. But if you fail because of preventable reasons, or things

(41:19):
that you could have done better, that to me is the real shame, because that means there was something that could have been in the hands of a patient and could have completely changed the trajectory of someone's life, that now won't, or will take a year or two or three or four longer, for completely preventable reasons. So that's the world that we want to eliminate. When I get asked, what's the goal of Kaleidoscope?

(41:40):
Well, one is every biotech on the planet using us, because we believe that it's valuable. But two, the goal is that every piece of science translates and gets to the people and populations that need it the most in the fastest possible time. And if that takes 10 years, the science is fine. It should take the minimum amount of time possible, because people are waiting at the end of the line.

Speaker 1 (42:02):
Making the right decisions. You touched on a few things there. We talked about how that really develops a clearer foundational knowledge for the people that are conducting the work, so that they know what the end is. I've had my own experience in a number of organizations, academic and industry, where the cynicism of the people around me

(42:22):
was that my work doesn't matter. It angered me a little bit, because I really spent a lot of time to try to get myself into this role; this is where I want my career to be. Like, no, no, no. I don't care how small everyone thinks my work is. It means something. I am here because they need me here. How do I show people more of what I'm doing? And now you're communicating with the partnership, the leadership, the

(42:43):
CROs. When you have this level of clarity, oh, the confidence that people feel. When you're able to talk about it, where you're not sort of, well, this came from stats, you are on your game, and that changes the relationship, not only the relationship between partners, but also the way that leadership looks at

(43:03):
your group and says, you know what, I'm going to back up a little bit, because they have it together, they've figured out a flow. And these are sort of those byproducts that we don't think about, because it's not something that shows up on your balance sheet, it's not something that, you know, crosses off a corporate goal, but it

(43:24):
really does so much to drive the mindset that is so critical when we're trying to improve the way that we do work, when we're trying to see what's possible and understand the value of the things that you have around you. I mean, these are multi-million-dollar projects. Get out of that Word document or that Excel spreadsheet and get into something that makes sense, to manage the right way,

(43:47):
to give you the ability to expedite those decisions and to have alignment across people that may or may not be at your organization. Super, super critical. So, as we start to close: we're talking about this gap between R&D decision-making and data, how we connect across organizations,

(44:11):
how we connect across partners and CROs, and really, what are the signs that we're not doing things as well as we could be doing? What is one piece of advice? It could be scientific, it could be operational, it could be personal development. What is something that you would give to leaders on bringing clarity to their decision making, on, you know,

(44:33):
navigating through the complexity, where they're trying to find the right way, the right place to start?

Speaker 2 (44:39):
One thing that came to mind is, it's never too early or too late to start making gains, because gains will compound. Even small gains will compound a lot. And so if there's a shift you could make today, a shift towards a better process, a shift towards a better tool, a shift towards anything like that, it's worth making. The best time to plant a tree was, like, 20 years ago, and

(45:00):
the second best time to plant a tree is today. Especially first-time leaders, they feel either, oh, it's too soon, like, why, who cares, and they don't realize that's the easiest time to make a change, because it just becomes second nature, and then your future you is going to be singing your praises. And on the flip side, people who are like, oh, there's a lot of sunk-cost fallacy, we've tried, and sometimes it's

(45:23):
understandable, I get it. We're a software company, so I sometimes understand that people feel burned because they've used really bad tools. They're jaded; I get that. Sometimes it's, yeah, but we're so complex, how could we do this? And again, it's about defining, well, what's the minimal thing that you could do today that would move the needle on it, and how do you do that today?

(45:45):
And so it could be as simple as, hey, use the same naming convention. That's a very simple change. It doesn't require bringing on a new vendor, it doesn't require anything like that, but it's going to move you in the right direction. And maybe it is, find a vendor that can integrate these 10 data sources and give you a demo. Sure. But I think that's the biggest thing: when's the right time? The right time is now. It's literally now.

(46:06):
There are very few exceptions to this, obviously, but it is sooner than you think.

Speaker 1 (46:11):
I can empathize with that so much. Really, what we're talking about here, guys, is, don't wait. Don't wait to fix things that can be addressed. Don't wait to have the conversation. You may not have the bandwidth to do something right away, but these are things that create ripples through your organization. We've seen it as well, where we're with a client

(46:32):
for, you know, one year, two years, and then you start to see people that you have not even directly engaged with starting to come in your direction to say, hey, I want to jump on board with what's happening in that group, I want to jump on board with what's happening with this group. And you start to see this appreciation of how things have been made easier, and sort of the clarity that people get.

(46:54):
And how do you know about it? Because people stop talking about it; people stop talking about how bad it was. No one's going to come and give you a trophy, unfortunately. You know, no one's going to throw a party for changing the process and doing a new thing. But people stop talking about the problem and start talking about the value, they start talking about what really matters. And I think that's what we talked

(47:17):
about here today.
So, Bogdan, thank you very much. For anybody that's interested in learning more about Bogdan's work and about Kaleidoscope, where can we send them?

Speaker 2 (47:26):
Yeah, best for me would be to connect on LinkedIn; I'm always there. Our website is kaleidoscope.bio, so you can check that out. And then we also maintain a resource blog, where we post mostly opinion pieces and perspectives. That's at blog.kaleidoscope.bio, and for anyone who wants to subscribe there, we've gotten great feedback from the

(47:48):
community that they appreciate that content.

Speaker 1 (47:50):
Awesome, Bogdan. It was a fantastic conversation. I feel like there are about six more conversations we could go into, so thank you for joining us. We'll thank Lawrence; he jumped out, had another call to go to. But it was an absolute pleasure, and I look forward to doing this again.

Speaker 2 (48:05):
Thanks so much for having me, Oscar. Likewise.