Episode Transcript
Available transcripts are automatically generated. Complete accuracy is not guaranteed.
(00:00):
Will AI be your guardian angel? Imagine AI helping you
anticipate your well-being needs every day.
A system that helps you define your optimal state and
anticipates the subtle shifts that affect it, all while
learning from you. But our personal data is
personal. With leaks and breaches,
companies with questionable ethics, and malicious actors, we
need to be careful about how we share our information.
So let's explore building trustworthy AI systems for our most
(00:22):
sensitive data. Because it's time that your
personal data started working for your well-being.
Welcome to episode 42 of Tool Use, the weekly conversation
about AI tools and strategies for powerful thinking minds, brought
to you by Netek. I'm Mike Byrd, and this week
we're joined by Michael Tiffany, co-founder and CEO of Fulcra, a
foundational data platform for human information.
Michael, welcome to the show.
It's a pleasure to be here.
Pumped to have you on. Do you want to give us the
(00:43):
background how you got into AI and what brought you here today?
Yeah, I really think of myself as a hacker.
I belong to a transnational hacker gang called Ninja
Networks, ninjas.org, mostly known for our parties;
the hacker party, the ninja party at DEF CON was a really
(01:04):
fun thing to do for a very long time.
And I worked professionally in cybersecurity for
really quite some time. I founded a company called Human
that's one of the winningest anti-botnet companies on the
Internet. And in that company, I was
leading a team that was in many ways fighting off evil AI,
(01:25):
because what you want to do as a botnet operator is make a
million computers under your control look like they're
actually real people. And botnet operators got
increasingly good at that. So it was a really fun
spy-versus-spy game where the bot operators are trying to fake
(01:47):
us out and we're trying to actually fake them out and
detect them without them realizing that they were being
detected. Really fun business to grow,
and it made us all feel good, because we were literally taking
the money out of crime, making cybercrime pay less.
Well, yeah, with Fulcra, my colleagues and I have been
(02:09):
applying our security talents to what we think of as a
foundational, generational challenge, which is: how can we
build a trustworthy repo for all the data that our lives produce
that is under our control? What we've built is
cool. It is helping people see their
(02:31):
lives in new ways. So it is useful; it
turns out that your data becomes more useful when you
de-silo it. Just bringing it together is
empowering our customers to do cool new things.
But broadly, what I'm hoping to do is create a company that's an
important force for individual data sovereignty.
(02:52):
I think we need to own our data, we need to control it, and
especially individuals need to have control over whether an AI
has access to their data or not, which means the repo needs to be
under our individual control.
Excellent.
Yeah, I find myself at this unique point: I've been very
privacy- and security-centric for a while now; I've made it a
(03:14):
priority to make sure, you know, you lock things down.
But I also, with a medical science background, see vast
potential for AI in the health science space by just giving it
more data. I'm just trying to find a way to walk that
line. So what's the reception been
like to Fulcra so far?
Well, even though I'm obviously
very proud of my hacker background and the privacy by
design that we built into the system, I would say that people
(03:35):
don't buy security features or privacy features very
much. Most of our customers are finding us and using our
software because they want to get more out of their
wearables or other data sources, or they want to
(03:56):
see their data in some new way.
So I would say that our customers are paying us for the
data transformations that are possible and the visualizations
that we have created, and they trust us because of the
privacy-by-design principles. But it's not a big
(04:18):
buying trigger. It's maybe just a buying
consideration, if that makes sense.
Yeah, absolutely. One thing I'm interested in,
just because you're able to aggregate all these different
data sources: have you come across, or been told by any of
your customers about, some unique correlations, or potentially even
causations, from unique data sources that end up working
well together? Yeah, yeah.
And I'm so thankful that people do show up in Discord or
(04:40):
e-mail us about these findings, because one of the friction
points for a company like ours is that, because of our privacy
principles, we don't necessarily know how our
customers use the software, right?
Like, we're not seeing our customers' data.
So if we've unlocked something really cool, we
don't know about it unless they show up and give us a
(05:01):
testimonial; but happily, they do.
So thank you to everyone who shows up in Discord or
has written us via e-mail with some of those
stories. I'll start with one of my
own stories, which is: I didn't even know until
(05:23):
we shipped this feature and I'm looking at my own dashboard
that my Apple Watch is continuously monitoring the
volume around me. Right?
Like, literally how loud is it? And I had a clue for this:
every once in a while, the watch will give you a warning if
it detects that you're in a really, really loud place.
(05:43):
What I did not know is that I actually have a continuously
updated time series of dB SPL in my environment all the time.
This isn't surfaced by any watch app, but
it is the kind of thing that shows up in my Fulcra data
store because I have Context installed on my phone.
(06:06):
And so I'm displaying that next to my sleep data.
And I can see how noise in the night affects my phases of sleep,
which is wild, because this is probably affecting many, many
people. But if it doesn't wake you up,
you don't know that you're being affected. Or, a pattern
(06:30):
that I've noticed is that I will actually wake up in the middle of
the night because of some noise that stops by the time I'm fully
conscious. So I have no idea that it was
noise that woke me up, but I can see it the next morning when
I go to the tape. So that is a
simple correlation. It was always there to be found,
but was just hiding. And so it was super eye-opening
(06:52):
when I saw it. Now, getting into some customer
stories, there are a whole bunch of variations on
my story that are about debugging your sleep.
Probably the biggest life upgrade that Fulcra is
unlocking for people has to do with debugging their sleep.
(07:14):
And sometimes this is deliberate.
So people will try new supplements or new habits.
They'll record them in their Fulcra timeline, and then
they'll go to the week view and
see, week over week: OK, how am I trending?
Something I'd never heard of before was a user who, under
(07:36):
the advice of his doctor, has been paying attention
specifically to his ratio of REM to deep sleep.
This guy, as a matter of health, was trying to decrease
core sleep and increase REM sleep and deep sleep together,
aiming for a one-to-one ratio.
(08:00):
And the surprising driver that increased deep sleep
in general and also raised REM sleep, so that these two were
going up together, had to do with the timing, not just of eating,
but of eating carbs specifically.
(08:22):
So I don't know if this generalizes, but if
you want to do this investigation for yourself,
the trick was: if you're going to eat sweets, eat them before
noon. I think it's just great.
Pay attention to your day-over-day
deep sleep numbers and see if you get better deep sleep on
(08:45):
days when you shift your glucose spikes to earlier
in the day.
OK, nice.
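For anyone who wants to try this day-over-day check on their own exported data, here is a minimal Python sketch. All the numbers and the "spike hour" field are invented for illustration; they are not real Fulcra data or a real study result.

```python
from statistics import mean

# Invented sample data: (date, minutes of deep sleep, hour of the first
# big glucose spike). Real values would come from your own sleep and
# food logs.
days = [
    ("2025-03-01", 72, 10),  # sweets before noon
    ("2025-03-02", 48, 21),  # late-night dessert
    ("2025-03-03", 80, 9),
    ("2025-03-04", 51, 20),
]

# Split days by whether the glucose spike landed before noon.
early = [deep for _, deep, spike_hour in days if spike_hour < 12]
late = [deep for _, deep, spike_hour in days if spike_hour >= 12]

print(f"early-carb days: {mean(early):.0f} min deep sleep")
print(f"late-carb days:  {mean(late):.0f} min deep sleep")
```

With a real series you would feed in weeks of data rather than four hand-written days, but the comparison itself stays this simple.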
And actually, I wouldn't mind sticking on sleep for a bit,
because I find it super interesting and it's one of
those health hacks or health improvements that seem to
be beneficial across the board.
From personal experience, I've experimented with taking
some magnesium and zinc before bed, especially on days where
I exercise. It tends to have a good effect,
(09:06):
but I've never actually measured it.
So if I were to get into a hardware device to help, whether
it be an Apple Watch or a ring or anything else, what would you
recommend as something that's actually reliable to track,
say, if I took a different magnesium blend, to see whether
it actually affects things differently?
Oh cool. All right, so let me speak to
my book here, because I'm really proud of how we've done this.
I believe that the state of the art in self-tracking
(09:27):
and doing this kind of self-experimentation is layering
passive monitoring, that you set up and then don't have
to think about again, which you'll get from a device.
The devices that I really like are the Apple Watch or a ring.
And I must say, I love my smart
bed. I'm using an Eight Sleep bed, which is not only giving me cool sleep
(09:48):
telemetry, but is also keeping me cooler on my side of
the bed than my wife, who prefers to sleep more warmly.
Great little hack for marital bliss there.
My devices are just streaming into my Fulcra data store.
I don't have to think about it again.
And then you layer on top some deliberate monitoring.
(10:14):
By deliberate, I mean you set up some annotations so that you can
keep track of what supplements you're taking when, and
what the formulations are. In my case, for
instance, I got a minor head injury, a concussion.
And then Dave Asprey told me, dude, you need to dial up the
(10:34):
creatine in a big way. So he put me on 10
milligrams of creatine twice a day.
And so I started tracking that manually.
Literally every time that I would take creatine, I would,
you know, swipe the annotation in my Context app on my phone.
So I was recording the dosage and the exact timing. And,
(10:59):
partly this was so that I could also share with Dave and
show him, like, I'm following your instructions.
And then secondly, those habits were showing up in
my timeline for my weekly retro.
I'm looking at the past week on Fridays,
and I arrange the manually recorded data
(11:22):
along with the passively recorded stuff.
So I'm getting this full superset.
And then, once you've dialed that in, you can
start playing with things like: OK, let's try doubling the dose
this week. Let's halve the dose this week.
Let's change the timing of when we take it. And
then you can get visualizations. Sometimes your
(11:48):
meatspace brain is all you need to do pattern recognition; or,
for more subtle changes, you know, it might be time to whip out
the Python notebook. So all of the data that is
being collected by Fulcra is also available as nice clean
time series, with something called the Life API.
(12:13):
And some of our users are using, you know, a Python notebook,
sometimes just a Colab notebook in the cloud, loading our Python
library from PyPI. Others have no idea what the
words I just said mean, but can connect a code-generating AI
to the Life API and just instruct the model in
(12:36):
English to make the calls that you need in order to look
at your own data and interpret it.
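As a rough illustration of what pulling a time series from an API like this can look like, here is a hedged Python sketch. The base URL, query parameter names, and response schema below are invented stand-ins, not Fulcra's real Life API; consult their actual documentation and Python client for the real routes and fields.

```python
import json
from urllib.parse import urlencode

# Hypothetical endpoint -- a stand-in, not Fulcra's real Life API.
BASE = "https://api.example.test/v1/timeseries"

def build_query(metric, start, end, sample_rate_s=60):
    """Build a URL asking for one metric over a time window.
    Parameter names here are assumptions for illustration."""
    params = {"metric": metric, "start": start, "end": end,
              "sample_rate_s": sample_rate_s}
    return f"{BASE}?{urlencode(params)}"

url = build_query("heart_rate", "2025-03-01T00:00:00Z", "2025-03-02T00:00:00Z")

# Invented response body, just to show the parse-and-summarize step.
body = json.loads(
    '{"metric": "heart_rate", "points": [[1740787200, 61], [1740787260, 63]]}'
)
values = [v for _, v in body["points"]]
print(url)
print(f"{len(values)} samples, mean {sum(values) / len(values):.1f} bpm")
```

The point of the sketch is the shape of the workflow — construct a window, fetch, parse into plain values — which is exactly what a code-generating AI can be asked to do in English.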
Nice. So actually, I have a question
for the builders in the audience, on both sides of
the Fulcra step in the flow.
What has your experience been working on integrating with
the devices that actually collect and expose that data?
(12:57):
Do you find there's a lot of workarounds needed?
Like, if they had a standardized API, would that make it easier
for aggregating services like Fulcra to exist?
Or what could the best practices be on the collection end?
Oh man, there is so much nuance here.
So, as you might imagine, a WHOOP band versus a Garmin
smartwatch versus a Fitbit versus an Oura Ring will all,
(13:22):
say, record your heart rate, but each in their own
way. And heart rate's
actually an easy one. People get extra
proprietary with the way in which they record HRV, so that
there are very subtle differences.
And at Fulcra, we've just signed ourselves up for
(13:45):
dealing with all of these nuances.
And it really feels like, I wonder if this is what it
felt like in the early days of Microsoft, being on device
driver support. You know,
it's almost like we're a firewall between people who just
want to get things done and figuring out the quirks of
(14:09):
different device makers. So if we're doing our jobs
right, we've figured out a lot of these quirks, and we're
doing the ingest and dealing with the differences in
sample rates and in the raw data.
And sometimes the raw data actually changes:
what is in Apple Health today will not be what's in
Apple Health tomorrow. We'll deal with all those quirks
(14:32):
so that you can just have nice clean time series, and you do not
need to know anything about the differences between
these devices. One of the reasons why we think
that's important is that most smart device makers aren't
actually going to be around for the entire length of your
life. We've designed our data
(14:53):
store to truly gather the data of your life for your entire
lifetime, and possibly beyond. I, for one, want to donate
my data corpus to my heirs. I want my data to be
unambiguously inheritable. So if some future descendant
of mine wants to train a model on, you know, Grandpappy
(15:14):
Michael's data, they can. So we're thinking about data
longevity at that scale. It is very unlikely that all of
the wearables manufacturers that I've mentioned will survive
as long as even I hope to survive, which means that you're
going to change vendors. This has happened while I've
(15:36):
been working on Fulcra. I used to have a Homedics scale
and now I have a Withings scale. But I want a nice clean time
series of my weight that just goes back several years and does
not involve two different API calls just because the
vendor changed. So we've managed a whole bunch of those
quirks, and I feel really proud of our quirk handling.
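The vendor-change problem described here — two devices, one continuous series — reduces to normalizing both exports onto a single schema. A minimal Python sketch; both vendors' record formats below are invented for illustration, not the real export formats of any scale:

```python
from datetime import date

# Two invented vendor exports for the same person's weight:
# vendor A uses ISO date strings and kilograms,
# vendor B uses (year, month, day) tuples and grams.
vendor_a = [("2023-01-05", 81.4), ("2023-06-10", 79.9)]
vendor_b = [((2024, 2, 1), 78600), ((2024, 9, 15), 77200)]

def merge_weight_series(a_rows, b_rows):
    """Normalize both exports to (date, kg) and return one sorted series."""
    unified = [(date.fromisoformat(d), kg) for d, kg in a_rows]
    unified += [(date(*ymd), grams / 1000.0) for ymd, grams in b_rows]
    return sorted(unified)

series = merge_weight_series(vendor_a, vendor_b)
for d, kg in series:
    print(d.isoformat(), kg)
```

The ingest layer the guest describes is essentially this, multiplied across many vendors and data types, plus handling sample-rate differences and schema changes over time.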
(15:58):
The other thing, and anyone can do
this, but I feel like we, by stepping into the breach,
give everyone a quality-of-life benefit, is that if you hit the
(16:19):
Life API, you're getting fresh, up-to-the-minute data of
everything without any extra work, right?
So if you go to all the trouble to
write a notebook that's going to analyze your step count, or
analyze, you know, how your sleep relates to your
supplements, and you just save that notebook on Colab,
(16:40):
it's going to work today, and it's going to work tomorrow, and
it's going to work a week from now.
And you don't need to sit there and do a fresh export
of your data up to that point. No, you should just be able to
click rerun on the notebook and just get today's data.
Removing that friction removes the cost of trying
(17:01):
an experiment. And that means you experiment
more, which is really cool.
You know, oh man, maybe I shouldn't share this story, but
what the hell, I'm on a roll. OK, actually, no one
should do this. This is terrible advice.
I was having a discussion, air-quotes
discussion, with my wife about whether we were letting our room
(17:25):
get warmer this past winter than we did the winter before.
Right? So, in other words, she and I,
we've been married a long time, and we had a slight disagreement in
our memory. She remembers it one way, and I
remember another. And I realized: wait a minute,
the temperature time series will be in my Fulcra data store.
I can just look this up right now.
(17:47):
Yeah. So, yeah.
I loaded it in a Colab notebook, and you can run Colab on
your phone. This is doable.
So I simply made a graph right there while
we're sitting and talking about this. I'm like, oh, I can
just go to the tape. We can get the definitive
answer. And I did, and it was nice.
(18:08):
And my wife is game for these kinds of things.
So it was all OK. I think maybe you should not
always resolve marital differences about
memory quantitatively, empirically, with data in a
Python notebook. But it worked.
I'm OK, I'm still married.
Nice.
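The settle-it-with-data move in that story is a few lines once the series is in hand. A small Python sketch, with all temperature readings invented for illustration:

```python
from statistics import mean

# Invented nightly bedroom temperatures in deg C, one list per winter.
# Real values would come from a smart-home or smart-bed time series.
winter_2023 = [17.8, 18.1, 17.5, 18.0]
winter_2024 = [18.9, 19.2, 18.7, 19.0]

delta = mean(winter_2024) - mean(winter_2023)
print(f"winter 2023 average: {mean(winter_2023):.2f} C")
print(f"winter 2024 average: {mean(winter_2024):.2f} C")
print(f"this winter ran {delta:+.1f} C {'warmer' if delta > 0 else 'cooler'}")
```

Comparing two date ranges of one always-on series is the whole trick; the hard part is having the series in the first place.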
Yeah, you've got to pick your battles sometimes, but having
(18:30):
that as a backup option is always nice.
Yeah. Actually, moving to the other
side: say someone's building something that's
going to process people's data, leveraging the Life API, or even
just in general. What have you found are good
things to keep in mind when you're trying to process
people's data in a way that respects their privacy, but at
the same time can give them the insights that they need?
Yeah. Oh, that's a really good
(18:50):
question. So, to ground the
answer to this question, let me paint a picture of the
future. I think that agentic AI is going
to get better and better. I think code-writing AI is going
to get better and better. So I think that developers of
(19:12):
all skill levels will end up incorporating more AI-generated
code into their workstreams just as a matter of
going faster. It doesn't even
matter if you're good at code: if
you have a helpful robot that is just able to kick things out
really fast, and it's reliable, then of course you're
(19:34):
going to do that, 'cause then you can just achieve more.
Let me also just acknowledge that we're not there yet, right?
A lot of AI code generation is still
frustrating. But I am someone who
thinks it's going to get better and better.
What this means is we're increasingly going to have an
intelligence on tap that is just able to indulge your random
(19:57):
questions with throwaway code, to help you make any decision or
indulge a question that you develop in the shower.
And this is probably going to be a good
thing. In fact, I think this is going
to be an amazing thing. But there are two ways in
which this goes down. One is that, in order for the AI to help you:
(20:20):
It needs access to your data, so you ship your data to the
same place where the foundation model lives.
And this is obviously the vision of the future that
some AI companies are pursuing, where they're increasingly
making offers to people, which is like: we already host your
e-mail, we already know this stuff about you.
And look, we have a great model, you know, let us
(20:43):
help you. And in fact, you never have to
leave the warm comfort of our walled garden. That's
future number one. Future number two, the future that
I'm hoping Fulcra brings about, is one where you can
temporarily grant access to an AI model or runtime environment
(21:04):
so that it gets only the data that it needs in order to answer
your question or provide the help that you need.
And otherwise, you can turn that knob back to the off position.
I really want to live in a future where individuals have
that level of control. I don't think individuals are
(21:25):
going to be able to control AI by controlling how capable it is
or how smart it is. I just don't think that's
going to work. I think the models are going to
get smarter and smarter, and so we will not be able to
control them by keeping them artificially dumb.
So how are we going to control them? Thinking like a hacker,
I think the only control surface that is truly feasible is one
(21:49):
where we can say: yes, you can access my data; and then we can
change our mind, for any reason or no reason at all, and just be
like, actually, no, you know what, I'm going to open the pod bay
doors on my own, thank you very much.
That's the future that Fulcra is built to support.
So now, with that foundation laid, I
(22:15):
think OAuth is awesome. I think that you want to
separate ingest, which you might want to always leave
running, from access. These need to be independent
processes. Ingest is most convenient when
it's in the cloud, because a lot of the data your life produces
(22:40):
actually comes from the cloud in the first place.
For instance, your calendars are probably stored in a cloud
somewhere. If you want to make a copy for
yourself, you can copy it all to your local laptop hard
drive, but it probably just makes sense to have a
cloud-native store that's under your control.
And so Fulcra exists for that.
And then when you want to do analysis, you can either do it
(23:03):
in the very cloud-oriented way that I just described, or you
just have a Python notebook running in Colab, and it's just
doing OAuth to your Fulcra data store to pull down
the data that you want. Or you make yourself a
heavier-weight local copy for the purpose of doing, I
(23:23):
don't know, maybe some hardcore data science that you
want to run locally on your machine, in which case
you're basically using your Fulcra data store as
part of an ETL pipeline
that you're bringing into your analytics workspace, which
maybe you're doing on your own. Or maybe the way you're
(23:47):
incorporating an AI is just as a copilot to help you write
the code that runs on your data. So you're not ever really
sharing the data with the AI; you're just getting AI
assistance to write the code to process the data locally.
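The "grant narrowly, revoke any time" control surface described here can be sketched as a tiny in-memory token registry. This is an illustrative toy, not Fulcra's actual mechanism (which, per the conversation, is built around OAuth); every name below is invented:

```python
import secrets
from datetime import datetime, timedelta, timezone

class GrantStore:
    """Toy registry for narrow, short-lived, revocable data grants."""

    def __init__(self):
        self._grants = {}

    def grant(self, scopes, ttl_minutes=15):
        # Mint a token limited to specific data scopes and a short lifetime.
        token = secrets.token_hex(8)
        expiry = datetime.now(timezone.utc) + timedelta(minutes=ttl_minutes)
        self._grants[token] = (set(scopes), expiry)
        return token

    def allowed(self, token, scope):
        # Allowed only if the token exists, covers the scope, and is unexpired.
        entry = self._grants.get(token)
        if entry is None:
            return False
        scopes, expiry = entry
        return scope in scopes and datetime.now(timezone.utc) < expiry

    def revoke(self, token):
        # The off switch: access disappears immediately, for any reason or none.
        self._grants.pop(token, None)

store = GrantStore()
token = store.grant(["sleep"])           # the model gets sleep data only
print(store.allowed(token, "sleep"))     # True
print(store.allowed(token, "location"))  # False: never granted
store.revoke(token)
print(store.allowed(token, "sleep"))     # False: grant withdrawn
```

The design choice worth noticing is that revocation is a data-store operation, not a model operation — which is exactly why it remains a feasible control even as models get smarter.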
I want us to be able to make individual decisions about
(24:09):
exactly that. Like, where do I want my
jobs to run?
What part of this do I want anyone to be able to see, versus
what do I want to keep completely private?
Oh great, all right, a lot to unpack there.
So I recently went on a trip, and on two separate occasions talked
to people who uploaded medical data, whether it was blood work
or results from a surgery, into ChatGPT,
got some advice, and were more informed for the conversation
(24:31):
with doctors. I'm not vouching for using
ChatGPT as a replacement, but being able to pull in data,
especially blood work or very analytical stuff like that,
does tend to help. But as soon as I brought this
up, some people were like, why would you want to
upload your personal, immutable medical data to these big
companies? You never know who they're going
to sell it to. And why not?
There's just been such a history of shady data protection in
(24:53):
the digital age that people are understandably worried.
Now, there's one company, Obsidian, that I use for notes, and
they have their file-over-app principle, where you can just
take your data and go to another service.
That was actually the trigger that made me move from
Notion over to Obsidian: those types of data
control principles, which I believe Fulcra is playing a
vital role in for the AI data pipeline.
(25:15):
Do you see any missing pieces to be able to enable these
data-on, data-off switches? Or is Fulcra going to be the
missing component that's needed? And I haven't been asked this
before, and I'm so pleased you asked, because we can go
super deep on this. I too love Obsidian, and I love
(25:35):
that architectural principle. Honestly, I now try to
store as much of my life as possible in Markdown.
If I need to write to a file, I'm like, can
we make this Markdown? Which I love even for kind of
non-intuitive stuff. So I think I was a pretty
early adopter of Mermaid, right?
(26:00):
For all the times when I want an illustration.
But actually, I want the primitives to be made out
of Markdown, right? So thank you, you know, code
committers to mermaid.js, and to Obsidian,
for making this world possible.
I like living in this world. Here's a reason why Fulcra had
(26:23):
to be built different, if you
will. The file as a concept basically
comes from Xerox PARC. A bunch of the computing
primitives that define our personal lives come from Xerox
researchers who were creating metaphors based on '70s office
culture.
(26:45):
Right? Desktop, file, folder.
Now there are billions of computer users.
There's at least a billion people who have a computer who
have literally never seen a file folder, right?
They've never seen the reference.
So it's actually sort of crazy that we're stuck with the
(27:09):
metaphor. It's not helping anyone
understand the digital version. They're not saying, oh, I get it,
it's just like my desktop at work;
they're saying, what even is that? Now, there are some fundamental
limitations of this metaphor, which is why, in
enterprise computing, we're not stuck with it.
(27:30):
For instance, in the enterprise, lots of us are using streaming
data stores of various
flavors. Kafka is beloved by many
teams because it's incredibly
useful. There is no Kafka for people.
I have my choice of where to store files:
(27:50):
I can store them locally on a
disk. I can store them in Google Drive, iCloud, or Dropbox.
I have no such choice for how to store my heart rate time
series, right? Which is a continuously updating thing that,
if it's ever expressed in a file, is very artificially
(28:12):
expressed in a file, with an artificial end date that
probably corresponds to when I decided to do the export.
This is just not a data type that is natively file-oriented.
And this is a solved problem in the enterprise.
Why? Because enterprises have
approximately since forever beenmulti device, multi platform,
(28:35):
adopting different kinds of technology.
You some in the data center, right, or the cloud, some on
Prem. This has been normal in
enterprise context for, for decades.
And therefore there are enterprise tools that are meant
to tie all this stuff together. And they'll, they'll have names
sometimes like enterprise operating system or, or in the
(28:59):
old days, an ERP: a system you can
plug all the other stuff into.
Now this is sometimes called a data lakehouse.
None of this exists for people. So we had to build the
functional equivalent of a data lakehouse for ourselves, and
this cannot be run off of our laptop in Markdown files, because
(29:20):
so many of these data types actually don't lend themselves
to storage on block devices as files at all.
So we had to dispense with that particular
architectural choice of Obsidian, but we're in almost
the same spirit or culture, where we want to give people
(29:43):
complete control of their data, and portability.
So everything that you stream into your Fulcra store, you
should be able to get out and possibly take
somewhere else, or just build your own cool cloud store
from scratch, or maybe even, you know, run a private
(30:05):
version of Fulcra that, you know, is just in your own cloud
tenant, and you don't need us at
all. Or you cut us off from, you
know, access to your key store, and you're the only
one who can decrypt your data. That's the level
of individual control that we want the future to consider
actually normal. So we're trying to, you
(30:28):
know, normalize that right now. And I think you can make
this into a business today, in 2025, in a way that you
couldn't in 2015, because there is an increasing acknowledgement
that individuals should be able to pull their data out of an
(30:54):
application silo. And, you know, there should be
some data checkout process. So in some ways, Fulcra
is a recipient of that vibe shift. For instance, I am now
always streaming my location data into Fulcra, because I'm
running the Fulcra app, Context, which is just, you know, making
(31:17):
a little recording about where I am,
so that I have a way to look up my past location
history. Well, I now actually have years
of location history in my Fulcra data store, because I went
through the Google Takeout process to get my old Google
Maps timeline data and then uploaded that to my Fulcra data
(31:40):
store. And my awesome
Fulcra colleagues wrote a little parser that now works for everyone.
So anyone who goes through the Google Maps Takeout process can
upload that file to their Fulcra data store, and it gets parsed
and just becomes a seamless location time series that, in my
case, goes back to like 2009, which is super cool.
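A parser like the one described can be quite small. The sketch below handles the older Google Takeout location-history shape (string `timestampMs` plus `latitudeE7`/`longitudeE7` integer fields); newer exports use different keys and file layouts, so verify against your own file's schema. The coordinates here are an invented sample:

```python
import json

# Invented sample in the older Takeout location-history shape.
raw = json.loads("""
{"locations": [
  {"timestampMs": "1262304000000", "latitudeE7": 377749000, "longitudeE7": -1224194000},
  {"timestampMs": "1262307600000", "latitudeE7": 377750100, "longitudeE7": -1224193200}
]}
""")

# Convert each record to (unix seconds, degrees latitude, degrees longitude).
points = [
    (int(rec["timestampMs"]) / 1000.0,
     rec["latitudeE7"] / 1e7,
     rec["longitudeE7"] / 1e7)
    for rec in raw["locations"]
]
for ts, lat, lon in points:
    print(f"{ts:.0f}s  {lat:.5f}, {lon:.5f}")
```

Once each vendor export is reduced to plain (time, value) tuples like this, the separate histories concatenate into one continuous series.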
(32:01):
Yeah. So that part of running this
business is actually hearing from people who show up in
Discord and are like, hey, can you figure out how
to take in my lab tests, or my this, or my that?
And, you know, we want to eventually absorb everything.
So we'd love getting those kinds
of requests.
Awesome, super cool.
I can only imagine the possibilities along the lines of
(32:22):
de-siloing the data and making sure that the data that is about
you can be exposed where you want it.
Something that comes up with control of data is monetization, where
maybe I don't want to just use it from one app in another app,
but I actually want to say: OK, I will allow my Spotify data to
be sold in order to help train some model or something.
What are your thoughts on moving forward the monetization
of data? I'm cautiously optimistic, but I
(32:44):
think that, as a business nerd, I have some skepticism
about this as a business model that I would love to share
with you. Because in my last company,
fighting off botnets, we got a lot of exposure to how
advertising works. And the reason why we got
(33:06):
that exposure is that there were a lot of bad actors in
advertising who were trying to goose their ad revenue by
getting bots to click on ads.
So this was a major monetization mechanism for a
bunch of botnet operators: creating fake traffic.
So in the pursuit of our mission to crush that, we
(33:26):
learned a lot about how data monetization works for ad
targeting. It's absolutely fascinating.
And it's very, very clear that there's tremendous value in the
data that makes ad targeting work better.
And so many people think: gosh, the only person who's not
getting paid for this data is the consumer.
(33:46):
Maybe we should start a business where the consumer gets a piece
of this action. And there's nothing stopping you;
there's nothing wrong with that business model, except
that the amount of money that advertisers are willing to pay
for the data is a big number in the aggregate, but a relatively
(34:07):
small amount of money per person.
Such that the only people who are truly motivated by, like, the
$0.20, or the dollar,
the dollar figures that are really up for grabs here,
are not people whom advertisers are
(34:28):
necessarily super enthusiastic about reaching, you
know what I mean? Like, if you are going to change
your behavior because of a $1.80 incentive scheme, there
aren't a lot of advertisers that are fighting amongst
themselves to advertise to you, you know? So there's
(34:50):
something here where giving people the
ability to monetize their data by contributing it to a big
data pool is, like, non-zero-sum; everyone would
benefit. But I'm not sure that it's going
(35:11):
to bring about the micropayment utopia that people envision, just because the bid-ask spread, if you will, is too high, right? Like the amount that the buyers are willing to pay for that data is not a big enough amount that it actually motivates a lot of sellers to be like, oh, sure, I'll sell my Spotify listen data
(35:36):
in exchange for that reward. Like, the rewards are too small.
So I think the way we get there is less about monetization and probably more about trading. It's probably more about getting a better experience.
So for instance, I would be willing to share my watch data
(36:01):
across multiple streaming services if it just meant that I got better recommendations about what to watch next. Like I'd be willing to maybe share my Spotify history with YouTube or vice versa, just to have a better recommendation algorithm experience, right?
(36:21):
Like, do you need to pay me in the form of money, or can you pay me in the form of a better, more finely tuned experience? Right now, we can't even explore those things because these data sets are not fungible. If there's some way that Fulcra can be an unlock there and help people really trade their data to just have a better time, right?
(36:48):
Just waste less time explaining themselves, then that's an opportunity that I really want to lean into.
Have you seen anything, or do you have any optimism around, the idea of a bring-your-own-algorithm, where we're able to take all this data in and then say, I want to process it in this way to present this content to me?
(37:09):
Yeah, right. Oh my God, no, I have not seen anything like that. I've not seen anything awesome yet. But I'm confident it's going to happen. But here's the level of generality that I think we should all be working towards, which is: recommendation algorithms right now have the
(37:32):
wrong goal, because the goal is to keep you on that platform, right? Like the purpose of the TikTok algorithm is to show me another thing that forestalls the time when I stop looking at TikTok. And that's the same for Instagram, for X, you know, for
(37:55):
a lot of the, you know, state-of-the-art in recommendations. Then if we look at e-commerce recommendations, it's always like, show me the next thing that I'm going to buy on this exact platform, right? So we can think of each one of these recommendation algorithms as being, let's call it, platform-greedy.
(38:19):
They're only informed by the data on the platform, and their only goal is to keep their user spending time or money on that particular platform. A recommendation algorithm that actually serves you generalizes in a different way, which is: what is the next thing in your life to pay attention to, right? The
(38:46):
ultimate limited good in all of our lives is our attention.
A recommendation algorithm that truly served us would be saying,
here's the thing you should pay attention to next, with the goal
being your overall wellness, happiness, effectiveness, right?
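The "platform-greedy" versus user-serving contrast can be sketched as two scoring functions. This is an illustrative sketch only, not anything Fulcra ships; the item fields and the weights are invented for the example.

```python
from dataclasses import dataclass

@dataclass
class Item:
    """One candidate thing to pay attention to, from any platform."""
    platform: str
    predicted_minutes_on_platform: float  # engagement the platform expects to capture
    predicted_goal_alignment: float       # 0..1, fit with the user's stated goals (assumed signal)
    predicted_restoration: float          # 0..1, rest/enjoyment value (cat videos count)

def platform_greedy_score(item: Item, platform: str) -> float:
    """What a single platform optimizes: time kept on *its* surface, nothing else."""
    return item.predicted_minutes_on_platform if item.platform == platform else 0.0

def user_serving_score(item: Item, goal_weight: float = 0.6) -> float:
    """A cross-platform recommender: value to the user, wherever the item lives."""
    return (goal_weight * item.predicted_goal_alignment
            + (1 - goal_weight) * item.predicted_restoration)

candidates = [
    Item("tiktok", 12.0, 0.05, 0.9),
    Item("email", 3.0, 0.8, 0.1),
]
# The user-serving ranking can pick a different winner than any one platform would.
best = max(candidates, key=user_serving_score)
```

Note the design choice: the user-serving score never looks at `predicted_minutes_on_platform` at all, which is the whole point of the distinction.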
(39:07):
We want to serve your personal goals. And I think we're going to live long enough to live in a world that works like that. And I think it's going to come about because of agentic intelligence. A personal AI running a personal recommendation algorithm can be serving you and making recommendations that you can
(39:29):
think of as essentially multimodal. Like, given everything that my personal AI, a guardian angel AI, can see about what hooks me on TikTok, what work emails I have struggled with in the compose window, right? Like, pay attention to the number of words relative to how long
(39:53):
I had the compose window open, right? That would be a really interesting metric of, like, I'm churning on how to write this message, right, versus the things in my inbox, you know, time spent reading individual items.
Take all of those as inputs. And when I reach a stopping
(40:15):
point, when I finally click send on that e-mail, what is the next thing I should look at? Maybe it really should honestly be a hilarious cat video on TikTok, right? Like, don't turn me into an ultra-productive robot. Sometimes I do want to look at cat videos.
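The compose-window signal described here can be made concrete with a toy metric: seconds the window was open divided by words actually written. A minimal sketch, assuming those two observations are available; the function name and example numbers are invented.

```python
def compose_churn(words_written: int, seconds_window_open: float) -> float:
    """Rough 'struggle' signal: a long-open compose window with few words
    suggests churning on how to phrase a message.
    Returns seconds spent per word written (higher = more churn)."""
    if words_written <= 0:
        return float("inf")  # window open, nothing written: maximal churn
    return seconds_window_open / words_written

# A quick dashed-off reply versus a message agonized over:
quick = compose_churn(words_written=120, seconds_window_open=180)     # 1.5 s/word
agonized = compose_churn(words_written=40, seconds_window_open=1200)  # 30.0 s/word
```

A personal AI could treat a high value as one input among many (reading time per inbox item, and so on) when deciding what deserves attention next.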
(40:37):
So it is entirely possible that cat videos really are the thing that should float up to my top recommendation right now. But I want to live in a world in which that algorithm is aimed at what is literally next best for me across every one of my attention options, instead of what's the
(40:59):
next e-mail to show me to, like, keep me running Superhuman.
Yeah, the ability to have an active firewall for information would be incredibly cool. One thing you mentioned was having your location data and the heart rate data being synced up, and I think that's awesome. There's more of a trend now to get these always-on listening devices that people can wear to record their conversations.
(41:20):
Yeah. What are your thoughts when data
collection goes from just the individual to their surrounding
environment as well?
I'm excited about this, and I want to raise a caution. I'll start with why I'm excited. Wearable listening devices right now are generally offering their users a value prop around retrieval. It's usually about remembering
(41:40):
stuff. Now let's think deeply about exactly how that happens. That basically happens through search if I'm collecting a bunch of transcripts. In fact, I often have meeting robots showing up in my meetings to help me remember things that are said.
But I only make use of it in this very particular
(42:01):
circumstance in life where I'm like, oh yeah, what did I talk with Mike about? When was that meeting, was it like last week? All right, I'm going to go to my, you know, repo of where the transcripts are, do a little search, and then have a memory jog. I think always-on recording can do so much more.
(42:22):
For instance, imagine a personal swear jar. OK, like, so I married a farm girl, and I used to curse like a sailor, and my farm girl wife was not really into it. So I had to make this conscious decision to change my behavior and pull my
(42:46):
joyful profanity out of my life. And, you know, I did.
So that's a behavior change that it's really helpful to track. It's hard to self-track how many times you've said any particular word at all.
(43:09):
So if you have an always-on robot just keeping track of that and then floating it up to you as a number at the end of the day, you can see, am I trending down? Am I trending up? Now apply that to something more sophisticated, which is sentiment analysis. So would you like to know, at the end of the day, how positive you were? How generous were you with
(43:30):
your compliments? How many times were you critical? How many times did you lose your cool? Are you trending up or down in optimism versus pessimism? This is the amazing kind of self-telemetry that is possible when you have just passive collection of words.
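The swear-jar idea reduces to counting tracked words per day of transcript and comparing today against your running baseline. A minimal sketch under assumptions: the tracked word set, the trend rule, and the sample transcripts are all invented here, and real sentiment analysis would need an actual model rather than word counts.

```python
import re
from collections import Counter

TRACKED = {"darn", "heck"}  # stand-ins for the words you're trying to drop

def daily_word_counts(transcript: str) -> Counter:
    """Count tracked words in one day's always-on transcript."""
    words = re.findall(r"[a-z']+", transcript.lower())
    return Counter(w for w in words if w in TRACKED)

def trend(daily_totals: list[int]) -> str:
    """Am I trending down? Compare the latest day to the average of prior days."""
    if len(daily_totals) < 2:
        return "not enough data"
    baseline = sum(daily_totals[:-1]) / (len(daily_totals) - 1)
    return "down" if daily_totals[-1] < baseline else "up"

days = ["darn it, heck", "darn", ""]  # three days of (toy) transcript
totals = [sum(daily_word_counts(d).values()) for d in days]
```

The point is that this number surfaces passively, as end-of-day telemetry, without the user ever launching a search.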
(43:51):
And if you can see that, like, let's say on your Fulcra dashboard, or you see that in some sort of reporting scenario, it's just teaching you something about yourself without you launching a search, you know. So I think the search-directed ways in which people are using
(44:13):
Otter or Fireflies or Friend or Limitless are cool. It's a good anchor set of use cases. But actually, the way that this ambient collection can take things from subconscious experience into conscious experience is mostly untapped.
And so I'm extremely excited about it.
The caution that I want to raise is around social
(44:40):
conventions of consent. So like, right now I'm tracking my location history and I don't have to ask anyone's permission. I want a perfect record of where I spend my time because I'm curious about that, and it'll be ever more valuable over time.
Whereas, if I'm wearing an always-on recording device, it's capturing everything that I'm saying and also everything that
(45:01):
anyone around me is saying. And it's like a hassle. It doesn't even seem to be really feasible that I go around with, like, a consent form. Like, BTW, I'm going to capture some of the things that you said.
Are you cool with this? So there's going to need to be an evolution of social norms to manage
(45:24):
this. I think the upside is so high.
The upside of computer-augmented memory is so high that we should figure out how to do this. Like, I think we will, because it has such a life-upgrade possibility there. There's got to be some room
(45:46):
between the social norms of today and, like, the weird ones that show up in Snow Crash with the gargoyles. We should be able to find some, you know, middle path based on just the incremental development that we're doing right now.
So fingers crossed. Like, I don't know what that answer is going to be, because it's going to be an emergent social
(46:08):
phenomenon. But I do think that ultimately we'll just end up having a set of norms that manage this, in the same way that we have sets of norms around, you know, what time of day it's cool to call someone on the phone and what time it's not, right?
Like, AT&T didn't come up with those norms, right? Alexander Graham Bell didn't come up with those norms.
(46:29):
We socially evolved toward kind of consensus norms. And I think that's what's going to happen with technology like this.
Yeah, I believe it's the Limitless pendant, and if it's a different voice than the wearer's, it'll redact it. But I don't know if that's common knowledge, and I don't know if that's going to be the step it needs.
Yeah, I feel like we could talk all day, but last question for me to keep this reasonable. One thing that I am concerned about
(46:50):
with this hyper-optimization with AI is that there's a certain line of agency where people above it will be able to use this as a tool and be aware of the usage, but people below it would see the benefits of it and become very reliant on the AI. Where do you see us becoming too dependent on these systems to, like, make decisions for us, versus still having the autonomy that we get the benefit from?
Right. OK.
(47:11):
So first I think we should just acknowledge how surprisingly theoretical this question is. Like, if you'd asked me how AI is going to play out, if you'd asked me this, you know, several years ago, I'd be thinking, oh man, this could be very empowering, but then end up creating these incredible dependency conditions.
(47:33):
That just seemed like the obvious prediction.
But I don't know how it's playing out in your life. The way it's playing out in my life, across everyone that I know, who tend to be really smart, super technically literate people, is that there's still only a minority that is incorporating AI tools into their lives right now and becoming more knowledgeable, getting more of
(47:55):
their random questions answered, you know, more productive in a wide variety of ways. And then everyone else is, like, generating the occasional, you know, Studio Ghibli picture, but is otherwise not making use of these tools. It's blowing my mind.
What I am seeing is that super high-agency people
(48:21):
are consciously figuring out workflow adaptations in order to get more out of these tools, changing their behaviour and changing their output. The highest-agency people that I know are becoming AI super-adopters, and low-agency people are just, like, almost ignoring
what is happening. So maybe we're just wrong about how this ends up augmenting everyone's lives, because in imagining that we're creating these dependencies, isn't that the imagination of low-agency people who are like, oh, they're just using
(49:02):
these crutches instead of learning how to, you know, do
this stuff themselves? And that's just not the track we're on at all. Like, for some reason, very counterintuitively, it's the highest-agency people that are figuring out how to use these tools, and the people who want crutches, I think, are trying out ChatGPT as a crutch and then getting
(49:26):
frustrated and not getting a lot of benefit, right?
Or they're making random, you know, copy-paste errors. Like, all of us sophisticates are like, that's a weirdly large number of em dashes coming from this e-mail from Bob, who I don't previously remember using a lot of em dashes. But, like, the super
(49:49):
adopters are not making em dash mistakes, right? Like, the so-called high-taste testers are like, I hate these em dashes. I'm going to add some custom instructions to make this stuff go away. We all did that, like, last month.
Yeah, absolutely. Michael, this was a blast.
I learned a lot. Really fun talking to you.
Before I let you go, is there anything you'd like the audience
(50:10):
to know? Oh, right, so if, if I've
intrigued you about what we're building at Fulcra, come try it
out and especially come hang outwith us on Discord like it's
it's a fun scene. We want to be told what to build
next and we want to hear about your cool use cases.
(50:31):
So seek us out at Fulcra dot AI, give the software a whirl, and build with us. You can shape our direction, because this is still a small but intense scene. So yeah, come on by.
Absolutely. We'll link everything down
(50:51):
below. All right, Michael, we'll talk
soon. This is fabulous.
Thank you.