Episode Transcript
Available transcripts are automatically generated. Complete accuracy is not guaranteed.
(00:00):
Alright, and hello everybody.
(00:03):
So we have an extremely special episode of the podcast today.
So it's not just going to be myself and Shashank talking about the news in AI, but we
have an amazing guest.
So this guy is honestly one of the smartest people that I have ever met.
And I say that with no exaggeration.
(00:24):
He is just like a wealth of knowledge and I'm incredibly grateful that he was able to
come and join the podcast today.
So Lonnie Chrisman, who is currently the CEO of Lumina Decisions.
Oh sorry, not the CEO, the CTO, my bad, the CTO of Lumina Decision Systems, where
(00:46):
they make Analytica which is kind of their main product.
So Lumina Decision Systems is the company and Analytica is the product, and he is making a new thing
called Assista.
So Lonnie Chrisman is an incredibly smart and interesting guy.
He actually was the number one electrical engineering student in the United States when
(01:14):
he was at Carnegie Mellon and not only did he go, oh sorry, my bad, Berkeley.
And then, sorry Lonnie.
So he did that at Berkeley and then he moved over to get his PhD from Carnegie Mellon.
And then later on he now has been the CTO of Lumina Decisions Systems for 21 years.
(01:42):
We were actually just talking before the podcast and he decided to work at Lumina Decisions
Systems and actually turned down working with Peter Norvig at his startup.
So he is like really like a mover and shaker and just like an awesome guy and I'm really
excited to talk with him today.
(02:03):
And yeah, Lonnie thanks so much for taking the time to meet with us today.
Okay, well thank you.
That was a, hmm, rather humbling intro.
Wow.
Okay.
One question just to kind of start it off is you're building the software you've been working
on for 21 years called Analytica and you just came up with this new thing called Assista.
(02:28):
Why would somebody maybe want to use Analytica and why would it be better than maybe something
like Excel?
Yeah.
So, yeah so Analytica is a platform that is really designed to help people build
decision models and Excel is really not a very good tool for this.
(02:54):
Excel is great for some things.
You know, I've actually heard of one study that Microsoft did in the 90s where they found that
only 3% of people using Excel even knew that cell formulas were possible.
So a lot of people use Excel for things other than quantitative modeling and it's great for
(03:14):
ledgers.
Once things get really complex, it's very limiting.
You're kind of stuck in two dimensions.
There's no real structure to the model.
It's not visual.
People talk about big Excel models being completely impenetrable and very non-transparent and they're
(03:35):
very inflexible.
You have to commit to your basic structural assumptions up front, and once those are nailed down, you
really can't change them.
And the reality is when you start building models at the beginning, you're dealing with
real world situations that you're trying to model and it's just a big confusing mess.
(04:00):
You don't know the right way to model it up front.
It turns out a lot of those decisions are better made later in the process.
So with Analytica, I should say, you know, this has been my area of
research I guess for 21 years: working on, you know, how can we use computers to help
(04:23):
people make tough decisions in situations where there's little or no historical precedent
and what types of tools can we bring to the computer, you know, to the software to help
people think more clearly to use model-based decision making and to develop these things.
(04:46):
And so, you know, it's really been built for that purpose.
And what you find is it's more scalable, it's transparent, it's more flexible and, you
know, whole host of things.
It's multi-dimensional, it models uncertainty explicitly, it was built for that from the
beginning.
(05:07):
And all these things are, you know, just much more powerful, much, you know, much more
appropriate when you are doing model-based decision making.
Another question, or shall I go?
Yeah, I can go on.
It's okay.
I'm curious, like, so this seems like a very complex tool.
(05:29):
What are some of your customers using this for?
What are some examples or ways that this is benefiting people?
Yeah.
So it's a very deep tool.
It's been under development for a long time; it's very mature.
But at the same time, it has a simplicity to it.
So, you know, part of its design was made to help people learn how to take a real world
(05:52):
problem and translate it into a formal model.
That's a process that's, you know, that's kind of a skill that's hard to learn, hard to
develop.
And because it's very visual and stuff, it actually lets you do a lot of that stuff fairly
easily.
Its depth comes in useful because you don't kind of get stuck.
As far as what people are using it for, it's a domain agnostic modeling environment.
(06:15):
So it's appropriate across, you know, whole host of domains.
And we do have people in, you know, dozens and dozens of different domain areas who have
used it and are using it.
In fact, about five years ago or something, I figured out that, you know,
we'd had over 2,000 organizations that have used Analytica, pretty much every name you
(06:37):
can think of.
And about 45,000 people at that point had worked with it.
Our strongest uptake has been in the renewable energy sector.
And we have a consulting arm at Lumina as well that does a lot of work in the renewable
(07:00):
energy sector.
And some of that's risk management in that sector as well.
So for example, if you are, say, the state of California, the grid operator, or the utility companies
here, and you're trying to figure out where we should put batteries
(07:21):
on the grid, you know, and what's the cost effectiveness of those.
Well, this is a good tool.
Another one would be like when should we build a new power plant and stuff like
that.
And that kind of thing has to take a lot of stuff into account that's, again,
sort of unprecedented compared to what happened in the past because, you know, what's going
(07:45):
on here is the world's been changing and, you know, there's this uptake of solar panels
and electric vehicles and wind and, you know, there's changes in how people use energy
and where they live.
And you know, you're trying to make decisions that affect, you know, the next 30 years
and involve big capital investments.
So, you know, model-based decision making there, you know, can allow you to really make all
(08:09):
your assumptions explicit to bring people on to the same page.
So, you know, we find advocacy positions are often a stumbling block when you have
big decisions, different people, you know, kind of nail down their positions and they don't
communicate well.
But model-based decision making will kind of bring people on to the same page and allow
(08:30):
them to talk about the same assumptions.
We've helped some, let's say, unnamed utility company deal with the decision of when they
should cut power to avoid setting a town on fire when the winds get too high and, you
(08:52):
know, how to use the finite resources they have to best maintain and upgrade their
gas pipeline infrastructure so they don't accidentally blow up a neighborhood.
Some people in the Bay Area here may remember the explosion in San Bruno 10 years ago or
so, where a neighborhood was blown up from a gas pipeline exploding.
(09:18):
And, you know, so those are some examples in that sector.
We've seen a lot of usage in the food safety risk area, USDA, FDA, and, you know, some
other countries' equivalents.
And I've not been very involved in that.
I don't know too much about it, but that was big.
(09:39):
There's people who do portfolio management in Pharma.
A company now called Planview, which used to be called Enrich, I think
they acquired it, had a leading tool for pharma companies that helped them manage their portfolio
of drugs that are under research.
(10:00):
That was all built on top of Analytica.
Let's see, I've seen forestry, I've seen beluga whale management, you know, setting hunting
limits.
Water quality modeling, environmental modeling.
(10:20):
There was a project where they were thinking of sequestering all the nuclear waste
from power plants in a mountain near Las Vegas many years ago, and they used Analytica
to model that. Oak Ridge National Lab has a site that the government
(10:43):
uses to figure out how veterans who have been exposed to radiation should be compensated,
like what percentage of their cancer could be attributed to being exposed to testing of
nuclear weapons and stuff.
And they have big models that run in Analytica that compute, you know, that probability and
(11:04):
that determines what their payment will be.
We've seen it in manufacturing.
In finance, not as much as people would think, even though it seems like a perfect tool for the financial
domain.
I definitely would think that to be like a perfect tool for the financial domain.
So I guess all over the place, yeah.
(11:24):
By the way, on the Analytica.com website, there's a case studies section.
And it has a lot of really neat examples in there.
Yeah, I'll mention one other one real quick.
This one has gotten a lot of attention, won several awards.
There was a documentary made about it.
(11:47):
But it's a neat example.
So there are these big oil wells off the coast of Santa Barbara.
And if you're ever in Santa Barbara and look off the coast, you'll see them
out there.
Everybody's familiar with them.
And it turns out, these oil wells were put there around 1970 and they are huge.
(12:09):
They're these big platforms.
The largest ones are larger than the Empire State Building.
They're sitting on the ocean floor.
And when the oil companies first got the leases to put them there, they made an agreement
with California that said when they run out, they will remove them and restore the floor
(12:32):
of the ocean to its original pristine state.
And well, those oil wells are running out now, okay?
But it turns out they are now the most bio-productive regions off the California coast.
They are giant reefs.
They're encrusted with corals, several feet thick, small fish breed there.
(12:55):
It's, you know, a safe area for them.
And the marine biologists certainly don't want to see them go away.
But it turns out, coming to an agreement for how to dismantle these has been an area of
absolute contention.
We got involved with this, and there were, I think, 23 different organizations
(13:20):
that had interest in this.
There were the oil companies, of course, and the state of California.
There were legal defense funds that wanted to make sure the oil companies were, you know,
were held to their agreements.
There was the fishing industry, the shipping industry, environmental concerns and marine biologists
and so on.
And they went for years and years just never getting anywhere.
(13:43):
This is an example where you just have all these advocacy positions that people just talk
across each other.
They just don't speak the same language.
We came into this project and built a model in Analytica where we
took everybody's concerns into account.
We looked at all the different assumptions and modeled those explicitly and we modeled
(14:07):
all the different objectives people had explicitly.
And the various parameters, you know, that were in people's control, what
decisions could be made.
And we shared this model with all the groups and they were able to play with different
decisions and see what the impacts were and a lot of them learned about objectives they
(14:30):
never thought about before.
And I wouldn't say that they necessarily came to agreement on everything but they did
come to almost unanimous agreement on what the best solution would be.
Or should be.
I think we had only two organizations that did not have the same opinion.
(14:52):
And the solution was basically to remove all the piping, of course.
And then cut these towers off 180 feet below the surface and topple
the top part down to be a secondary reef on the floor and leave the giant structure
there as a giant reef.
And they drew up a plan where that would save the oil companies half a billion dollars
(15:19):
in decommissioning costs and that half a billion dollars of savings would be split between
the oil companies and a new coastal defense fund to help with the ecology of the California
coast.
And with some provisions that if the oil companies didn't actually do the decommissioning,
their percentage would shift over.
(15:40):
And this was a while ago.
The governor, Arnold Schwarzenegger, signed new legislation into law to change the lease contract
and adopt this solution on his last day as governor, which was an act meant to draw
attention to it.
(16:01):
So it's kind of a big deal.
I noticed those oil wells are still out there.
So oil companies, I think, are dragging their feet.
I don't actually know.
But yeah, it's a really neat example. I think it has a lot of similarities with things we
see often where you have a lot of different decision makers who just have these advocacy
(16:22):
positions, even within a single company.
And they're just not talking the same language.
But once you start talking about a model, the conversation changes and people start talking
about particular assumptions.
And so that's why this model-based decision-making paradigm is so powerful.
It's a generalization of data-based decision-making.
(16:43):
You hear people say, or data-driven decision-making, right?
Data-based, I guess.
So you hear people say, we're making our decisions based on data.
But the fact is, that's not really a very good term in my opinion, because data is always
about the past.
And what you're always really doing is you're trying to make a decision about your situation
(17:05):
in the future, which is always a little different from what your data came from.
So model-based decision-making is trying to take those differences into account.
And that makes a lot of sense.
So first of all, that's a wild story about the reefs off the coast.
You said it's like the oil rigs off the coast of Santa Barbara.
(17:26):
Yes.
I feel like that'd be really interesting to maybe go scuba diving there.
If they allow it, I would be scuba diving there.
Really?
Yeah, that's cool.
Like snorkeling, scuba diving, that seems fantastic.
And also, it seems kind of like a win-win where you get all this great new fish ecosystem,
plus the oil companies don't have to pay as much.
So it's kind of nice.
(17:47):
And then if, hypothetically, you submerge those big rigs into the ocean, then you don't
have to worry about that kind of a quote unquote ugly view.
So that's pretty cool.
But I wanted to kind of get into how you view thinking about these models.
So to me, kind of simulating one of these complex scenarios that has a bunch of different
(18:12):
focus groups and interest groups seems like a really complicated thing.
And I know that something like Analytica would help with that.
But I still feel like there's probably a lot of skill required in making models.
Because you know, sure, you have the tools, right?
It can help me.
(18:32):
But I feel like it might be challenging for me to kind of go and figure out what even are
the variables.
What is the problem that I'm trying to solve here?
It seems kind of hard.
So how do you sort of think about a model?
How do you start?
How do you figure out what are the important parts of a model and whatnot?
(18:55):
Well, you're absolutely right on everything you just said.
So I mean, the situation that's pretty ubiquitous is that you have a really complex real-world
messy situation where you just might think, oh, I could really use a model.
But boy, I don't even know where to start.
This is really confusing.
(19:16):
I have ideas of what factors I want to incorporate, but really no idea how to go about doing it.
So this is the normal state of things.
This is what you always face.
And actually, I'll mention right now that this is what my AI project is all
(19:38):
about, is really helping lower the bar here.
But even before that, I mean, the whole research on Analytica has been to address this exact
question.
So the general approach here is we actually always start by drawing an influence diagram and
(19:59):
not of the whole problem, but really taking just one really small part of the whole problem.
Sorry.
So let's take a step back. What is an influence diagram?
Yeah.
And so an influence diagram is just a drawing that has nodes connected by arrows.
And the nodes correspond to variables, and we distinguish different types of variables.
(20:25):
So we usually try to start with objectives, objective variables.
An obvious objective is often profit if you're working in a business situation.
But it might be protecting sea life off the coast or making sure the oil companies pay
(20:46):
their fair share or making sure shipping isn't affected.
So there's all sorts of different objectives.
And these are often things you either want to maximize, minimize, or satisfy.
So that's a type of node.
We always kind of put those over on the right and then decisions.
So these are asking what things are under our control.
(21:10):
And this might be discrete decisions or categorical decisions.
Do we turn oil platforms into hotels or do we get rid of them or do we cut them off?
They could have quantitative aspects to them; often a decision will be how much of something
(21:32):
we should do, how big to make this or something.
We like to put those on the left.
And now in between are a whole bunch of intermediate variables.
And we try to basically connect these with arrows.
And these influence diagrams also have chance variables in them.
Those are uncertainties that have to be estimated.
(21:55):
And we call those out distinctly as well.
Those often take expert assessment.
So anyway, you connect these with arrows.
The process of drawing an influence diagram is similar to what people will often experience
if you get some people into the room.
(22:16):
And you say how are we going to approach this?
And you take out a whiteboard and a pen and you start drawing different shapes and variables
on there.
You try to figure out how you connect them.
And if you can connect the decisions to the objectives through a bunch of other variables
and they all seem like plausible steps, then you say, hey, I think we've captured
something here, the essence of it.
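[Editor's note: a minimal sketch in Python of the influence diagram structure just described. The node names are made-up placeholders loosely based on the oil platform example; this is an illustration, not Analytica's actual representation or API.]

```python
# Toy influence diagram: decision nodes on the left, objective nodes on
# the right, chance and intermediate variables in between, and arrows
# showing what influences what.
from dataclasses import dataclass, field

@dataclass
class Node:
    name: str
    kind: str                                     # "decision", "chance", "intermediate", "objective"
    parents: list = field(default_factory=list)   # incoming arrows

def arrow(src: Node, dst: Node) -> None:
    dst.parents.append(src)

removal_depth = Node("Removal depth", "decision")
storm_severity = Node("Future storm severity", "chance")
reef_habitat = Node("Reef habitat preserved", "intermediate")
decom_cost = Node("Decommissioning cost", "intermediate")
ecology = Node("Coastal ecology", "objective")
oil_company_cost = Node("Oil company cost", "objective")

arrow(removal_depth, reef_habitat)
arrow(removal_depth, decom_cost)
arrow(storm_severity, decom_cost)
arrow(reef_habitat, ecology)
arrow(decom_cost, oil_company_cost)

def influences(src: Node, dst: Node) -> bool:
    """True if an arrow path leads from src to dst (the diagram is a DAG)."""
    return src is dst or any(influences(src, p) for p in dst.parents)

# If the decisions reach the objectives through plausible steps, the
# diagram has captured the essence of the problem.
assert influences(removal_depth, ecology)
assert influences(removal_depth, oil_company_cost)
```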
(22:38):
The thing you always want to realize is a model is-- this is a good time to say what a model
is, too.
So a model is a simplification of a more complex reality.
And in AI, we use the word model all the time with these large language models and vision
(22:59):
models and stuff.
And a lot of people in the AI space don't actually realize why we're calling them models.
And the usage of that term in AI is often kind of equated with, say, a piece of software
or an architecture, which is more of a software solution.
(23:21):
And that's not why those are called models.
They're called models because they are approximating a more complex reality.
The large language model is actually trying to predict the structure of language.
In that sense, it's modeling.
Specifically, it's predicting the probability of the next token that's going to appear.
(23:44):
And so it's used, then, as a component.
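[Editor's note: a tiny sketch of what "predicting the probability of the next token" means. The four-word vocabulary and the scores are made up.]

```python
import numpy as np

# A language model, at its core, maps a context to a probability
# distribution over the next token. Toy vocabulary and made-up logits:
vocab = ["the", "cat", "sat", "mat"]
logits = np.array([1.2, 3.1, 0.3, 2.0])   # raw scores for each candidate token

probs = np.exp(logits - logits.max())      # softmax, shifted for numerical stability
probs /= probs.sum()                       # probabilities now sum to 1

for token, p in zip(vocab, probs):
    print(f"P(next token = {token!r}) = {p:.2f}")
```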
But if you kind of back up, a more general use of the word model means you're trying to capture
a more complex system with a simplified representation.
A conceptual model is another example, right?
If you're trying to understand something, try to develop a conceptual model.
(24:06):
And so the challenge that you talked about in terms of finding out what the different variables
are is trying to figure out, how can we capture the core pieces of this problem
without capturing everything?
So we have to find what variables are going to make sense for that.
(24:30):
And sometimes in certain areas, people have developed ways of talking about things.
Economists have supply and demand and various other quantities that they talk about.
So that kind of helps. You know, if you've never studied economics and you're trying
(24:50):
to describe the economy, it would probably be pretty hard to come up with supply and demand
and interest rates and figure out all those pieces.
But people who work in their own areas often have ways of talking about things that can help
a lot.
So I would say for that part of it, drawing an influence diagram, the key there, again, is to start
(25:17):
really small.
It's not the whole thing, just a little piece of it.
And that makes it very manageable.
And it's just a matter of drawing something, just like you do on a whiteboard.
You know, you just drag nodes into the diagram and connect them with arrows.
And we find actually that very non-technical people can participate in that process.
You have people in the same room.
Some of them technical, some of them not, and you're doing that and people say, I don't think
(25:39):
that's right, you know, and stuff.
It works quite well.
You know, it's very understandable.
So in some ways, it seems like such an easy thing.
But in other ways, that is the hardest part of modeling is figuring that formulation out.
And then once you have that, we then flesh it out by getting it to compute.
(26:00):
We do quantitative models, and that's because hard decisions almost always involve trade-offs.
They're hard because you're trading off things.
They're complex, but they involve trade-offs and they involve uncertainty.
And so to really weigh trade-offs, you have to go quantitative.
So you try to predict a number for profit.
(26:22):
You don't try to predict.
Profitable or not profitable?
Right.
Right.
That allows you to kind of make trade-offs.
And so you flesh it out, get it to compute.
And now that you have something working and simple, now you can start refining it.
So models can get very complex.
(26:45):
I mean, often they do, but the idea is start simple and then add in additional factors and
stuff, little by little.
And you often change your mind.
You think, hmm.
I mean, maybe I've been modeling it this way.
You change your mind.
Now maybe I'll change it this way.
So a big part of Analytica has been making it very easy to just switch around and change your
(27:08):
mind.
There are also always dimensions to models.
A time dimension, of course, but you might have a number of oil wells.
And you might have stakeholders.
And you might have different concerns.
You might have different categories of this.
(27:29):
You might have different scenarios.
All of these things are various dimensions.
And that's a core part of how you structure your model as well.
So Analytica has always been a multi-dimensional modeling environment.
We had tensors way before tensors existed, but Analytica's Intelligent Arrays are way
better.
(27:49):
Way better.
Every dimension in Analytica is explicit.
Analytica can actually kind of take care of knowing when two dimensions of two different
values are the same dimension.
And so you don't have to like figure out how to transpose things to fit them together.
And it also makes it really, really flexible where you can just add a dimension to a model
(28:13):
and you don't have to adjust any of the definitions anywhere.
So it really means that one of the absolutely unique things about Analytica, that doesn't
exist in spreadsheets or programming languages, is those core structural commitments
that you have to make.
(28:34):
You don't have to make them until late in the modeling process, which turns out to be when
it's more natural.
If you're using a spreadsheet, you need to figure out how to lay things out right up front.
And once you've done that, you're stuck.
That's how it's going to be.
And of course, it's very hard to go beyond two dimensions.
You go through all sorts of contortions when you have four or five dimensions.
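[Editor's note: Intelligent Arrays are Analytica's own mechanism; as a loose Python analogy, xarray also aligns arrays by named dimension, so a formula keeps working when a new dimension is added later. The numbers here are made up.]

```python
import xarray as xr

# Named dimensions align automatically; no transposing needed.
price = xr.DataArray([10.0, 12.0], dims="year",
                     coords={"year": [2024, 2025]})
volume = xr.DataArray([100, 150, 80], dims="site",
                      coords={"site": ["A", "B", "C"]})

revenue = price * volume          # broadcasts to a (year, site) array

# Adding a dimension later, say scenarios, leaves the formula unchanged:
growth = xr.DataArray([1.0, 1.1], dims="scenario",
                      coords={"scenario": ["base", "high"]})
revenue_by_scenario = price * volume * growth   # now (year, site, scenario)
print(revenue_by_scenario.dims)
```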
(28:55):
In programming languages, again, you really kind of have to commit to what all your dimensionalities
are and what data structures you're going to use.
And you build a thing on top of that, and if you later decide, oh, that wasn't
really the right way to go,
it's usually a major investment to make the changes.
(29:17):
So that's one of the big things we focused on a lot with Analytica's design as well:
that ability to make these decisions at what tends to be the right time in that
process of building models.
So yeah.
I'm curious.
Do you feel like there's ever like a sweet spot that you've seen from models?
(29:38):
So you mentioned that the models can get incredibly complex and we've seen, you know, with large language
models.
I mean, no one person could probably ever begin to understand all the intricacies of the
model.
But I'm curious that if you've noticed that maybe like sometimes a simpler is better or
(29:59):
like, I don't know, have you ever noticed that there's sometimes diminishing returns with
having a more and more complex model, or do you feel like, you
know, the more variables you add, kind of the better?
I don't know like what you've seen there.
No, that's an excellent, excellent question and it is absolutely relevant all the time.
(30:22):
And so there actually have been studies on that that have shown, as kind of a
broad generalization, simpler models actually do better than more detailed, fine-grained, you
know, models at predicting and improving decision making.
(30:43):
Now in reality, there's a certain amount of complexity that you need to capture the essence
of your decision problem.
But once you go beyond that, it can actually be harmful.
And so it's not entirely intuitive, and I think it's natural human nature to make models
(31:05):
more complex than they need to be.
So yeah, I see that all the time.
And again, the best models capture things simply.
You know, E = mc squared.
It's like, wow, what an awesome model.
It captures so much with so little.
So yeah, there's absolutely that.
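[Editor's note: not the studies Lonnie cites, but a quick sketch of the general point: a complex model can fit the noise in its data and predict worse than a simple one.]

```python
import numpy as np

rng = np.random.default_rng(0)
x_train = np.linspace(0, 1, 12)
y_train = 2 * x_train + rng.normal(0, 0.2, x_train.size)  # the truth is linear
x_test = np.linspace(0, 1, 200)
y_test = 2 * x_test

for degree in (1, 9):
    coeffs = np.polyfit(x_train, y_train, degree)   # fit polynomial of this degree
    rmse = np.sqrt(np.mean((np.polyval(coeffs, x_test) - y_test) ** 2))
    print(f"degree {degree}: test RMSE = {rmse:.3f}")
# The degree-9 fit hugs the noise in the 12 training points and typically
# predicts new points worse than the simple straight line.
```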
(31:29):
There's also a question that was kind of in there too, which is, is there a sweet
spot for when to use models at all?
And that's also a really important question because there are a lot of decisions that occur
every day that you want to make just like that.
And the difference between a really mediocre decision and the best decision is not that big.
(31:55):
And so you don't want to be wasting your time, spending a lot of time to make a better
decision on those things.
You probably want to just make the decision and time is probably more important or agility
is more important than making a good decision.
But there are more serious decisions where there's this fundamental trade-off between how much
(32:19):
effort you put into making good decisions versus executing.
And big projects and companies face this all the time where management and stuff just
jumps into executing.
(32:40):
I'd say in a whole lot of those situations, you find a little bit higher quality of decision
translates to a real big difference in execution cost.
And there, I've seen, when people have looked at this, they say the natural human
tendency is to execute too soon.
(33:03):
Maybe they don't know how to go about making better decisions or they don't know what to
do to improve their decisions.
And so they jump into it too soon and then they really pay the cost with execution,
where a lot of times things could have been saved.
The Shuttle disaster report is an example where that was a big criticism when we lost
(33:24):
the Space Shuttle Challenger.
They said that's what had happened.
So I'm curious, do you have a mental model on when it's worthwhile to make a model?
So for reference, I work at Amazon and one thing that we talk a lot about is one way door
versus two way door decisions.
So basically, one way door decision would be a decision that you can't take back.
(33:48):
So for example, if I said, I want to amputate my arm, well that would be a one way door decision.
It's not going to easily grow back.
So I'd probably want to do some modeling there to see if that was the right decision.
But a two-way door decision would be one that you could easily reverse or maybe it's
inconsequential.
So it's like, oh, I'm going to try this thing for dinner.
(34:10):
I don't necessarily need to do a model of the pros and cons of having Mexican or Chinese
food tonight.
It doesn't matter.
I can eat another meal later and if I didn't like this meal, I can try something else.
So I don't know, is there any kind of heuristic that you use to figure out
when you actually would go through the trouble of, you know, actually
(34:32):
figuring out your variables and, you know, spending the time on the decision
making process?
Yeah, I mean that, I like that.
I like all that one-way door, two-way door stuff too.
That's a really neat characterization.
I, you know, I have to say I don't.
I think that is such a broad area, you know, that, you know, I find it,
(35:01):
I guess I'm hard pressed to come up with really simple heuristics.
I think there's a lot of guidelines or principles, you know, things like you said, if it's a one
way door, that's probably, you know, important, you know, that might be a good indication that
it's worth the time.
You know, you probably, you know, have some internal feeling a lot of times as to, you know,
(35:27):
how much difference could a good decision make?
And you probably want to make that assessment up front.
And, you know, there are intermediate steps too, you know, there is the potential
for just sketching out a really simple model, you know, I mean sometimes, you know, these
are 10 minutes of work.
And, you know, and then that might indicate, you know, it's worth a little bit more and
(35:51):
so on.
So, you know, my boss, you know, made a model for real use of when he should leave the house
to go to the airport, you know, and the basic question there is, you know, there's a cost
for sitting around the airport doing nothing.
And, you know, the cost of leaving too late is you miss your flight and then, you know,
(36:15):
you've got to maybe sit around even more and wait for the next one.
But, and there's all these uncertainties in between.
So, you know, he actually made a model, you know, so he could kind of pick the optimal time
to leave for the airport.
And that seems like a very valuable model for us slackers.
It does.
What time you go to the airport typically?
(36:36):
I don't know.
Maybe what was the conclusion from your...
Well, the model actually lets you put in a couple parameters, like, you know, what's
your uncertainty distribution on how long it will take you to get to the airport.
And, of course, what's your cost, you know, and stuff.
So, you know, that would differ depending on where you live and stuff like that.
(36:58):
So, you put those in there and then what you get out is, you can get out the optimal time
to leave for the airport basically and you can get a curve that says, "Here's your utility
distribution at each time."
And, you know, what you find is your utility kind of goes up for a while as you postpone
leaving and then there's a certain point where it drops off real rapidly, you know, because
(37:22):
your probability of missing the plane goes up, and then it starts going up again
because, you know, at that point, you might as well show up a little bit later because you
missed your flight, you know.
So, yeah.
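[Editor's note: a minimal Monte Carlo sketch of the kind of airport model described here. The travel-time distribution and the costs are made-up assumptions, not the actual model.]

```python
import numpy as np

rng = np.random.default_rng(1)
N = 100_000   # Monte Carlo samples

def expected_cost(minutes_before_departure: float) -> float:
    # Uncertain door-to-gate time: lognormal with a median of ~45 minutes.
    travel = rng.lognormal(mean=np.log(45), sigma=0.3, size=N)
    slack = minutes_before_departure - travel
    waiting = np.where(slack > 0, slack * 1.0, 0.0)   # $1/min idle at the gate
    missed = np.where(slack <= 0, 300.0, 0.0)         # flat penalty for missing
    return float(np.mean(waiting + missed))

# Sweep departure times; the optimum trades gate-waiting against miss risk.
for lead in range(40, 121, 10):
    print(f"leave {lead:3d} min early: expected cost ${expected_cost(lead):6.1f}")
```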
As far as when I leave for the airport, I have to admit I don't necessarily do that,
but I live pretty darn close to the airport.
So, so it takes me 15 minutes to get to the airport.
(37:44):
So, it's not quite as hard.
I use, you know, I guess for me it's typically an hour in advance.
The main uncertainty in my case is getting through security.
Yeah, that's true.
Like, getting through security one hour.
I tend to be the type of guy who, like, will try to get there like two hours in advance,
and I would just end up spending like an hour or so just, like, twiddling my thumbs,
waiting around.
Yeah.
(38:05):
It always seemed like the wrong decision, but then like, you know, I've had it happen
in the past where I actually have missed a flight.
So I remember that this one time I was driving.
It was like a four hour drive.
I was driving back home.
I was living in Colorado at the time.
We were driving in Nevada.
We just did like a little Grand Canyon trip and we got into this, like, crazy traffic jam and
(38:26):
I missed my flight by like two hours, and ever since then I was always trying to
add like more buffer and so I don't know.
I don't think it's like always the optimal decision.
I feel like, I'm guessing, that Shashank probably leaves, like, at the last minute for
the airport.
I do.
I hate the opportunity cost of just waiting at the airport and try to maximize the amount
of time that I can get out of my day.
(38:47):
But maybe thinking of some more consequential decisions that have wide reaching impact, I
was thinking more about the case studies that you mentioned, and it seems like maybe a lot
of the decisions that are being made at large scales at enterprise
companies and governments are using tools like Analytica to make decisions.
(39:13):
This reminds me of my cousin who's working in Australia for the government to allocate
resources for clean energy spending.
This is a very challenging task because you're at the whims of the current political government
which has their own agenda and that keeps changing every few years.
(39:37):
But for these people working in clean energy, allocating resources
and figuring out what is the best thing to work on, managing all these competing needs and
agendas, that seems like the perfect use case.
And she got an award recently and the award was because she proved that a lot of the initiatives
(39:59):
that they had were completely ineffective or very bad and did not meet the promises that
they had initially.
So looking at some of the offerings at Analytica, it seems like you are helping people articulate
the problem, clarify some of the assumptions and hidden biases that people may have, get alignment
(40:23):
among multiple interested parties, and maybe run simulations about how this model is going
to play out in different scenarios with various assumptions.
And then maybe going to the next project that you're working on, how do you use AI to simplify
(40:46):
this process and maybe help people clarify their assumptions in the first place?
Yes, yes.
And this is one of the things today I find very fun because of course with my AI background,
machine learning background, I love working on this stuff.
And I do this on a personal level with just a separate chat window in ChatGPT:
give it context, have it summarize my brain dump and articulate the things that I'm
(41:13):
thinking.
Yeah.
And yeah, which is great.
So yeah, so I mentioned I'm working on Assista, which is an AI assistant for Analytica, which
is there to help the Analytica user in every aspect.
And so my long-term scope is really large.
(41:38):
It helps people use Analytica itself, the software, if they have questions, like what
are the instructions for doing this, or it can just do it for them.
It's very agentic that way.
It can actually take actions within Analytica and help them navigate the extensive documentation,
find stuff there.
And also to help people structure a problem when they're building the influence diagram and
(42:07):
figure out what variables to use and all that stuff.
And I think it's very exciting.
So when GPT-3 came out, I thought, okay, maybe we can start incorporating this and using
this.
And I looked into it at that point and then I concluded, no, no, it's not worth doing it
(42:31):
with this.
But when GPT-4 came out, the whole landscape changed.
I even remember the date still.
It was March 14, 2023.
It was like, my world view changed because that just shocked me.
And one of the capabilities GPT-4 had was, it seemed to have a really uncannily good
(42:57):
capability of figuring out how to take a description of a real-world messy problem and distill
the essence of it in the form of variables.
So I call this creative formulation.
It's that creative act of how do you formulate your model.
(43:18):
And since humans find this really challenging, it seemed like, wow, what an interesting
synergy.
A little more experimentation, I found that it was really low on technical proficiency,
which would be like fleshing out the quantitative aspects of the model, the expressions and stuff.
(43:40):
And people catch onto that pretty quick and don't find that too hard to learn.
But it's like, wait, here's something humans are pretty good at and it's not very good at.
So it's an interesting synergy.
So at that point, I launched this Assista project.
I thought, okay, there's a real potential here to help people with this step and that could
(44:03):
really make a difference. People who have that skill might not need as much help there,
but I have that skill and I still use ChatGPT for this all the time.
So it's very useful.
But for people who aren't as good at that skill, boy, having an assistant there just
(44:24):
seems really useful.
I actually did a webinar, I think in May of 2023, live on Zoom, where I asked ChatGPT,
come up with an idea of a problem that I could build a model for right now.
(44:48):
And something that is out of my area of expertise. And it said, well, how about managing wildfires
in a natural wilderness?
And I said, okay, I said, now, how would we structure this model?
And I built a model for this live in a domain that I didn't even know going into the demo,
(45:09):
what the model would be of.
And I didn't even know anything about it.
So it's like ChatGPT was really good at that.
So I mean, I'm building on that capability.
Assista has some areas where it's, I think, quite a bit better and, you know, quite a
(45:33):
bit more reliable than using ChatGPT.
I have a lot of stuff going on in the background which, given a podcast like this,
we can talk about, a lot of the techniques and stuff, if you're interested.
>> Yeah, I'd love to go into the detail on that.
And so, yeah, I mean, I'll just say, first of all, this is a really interesting area
(45:54):
to work on, you know, to really get it to improve.
So it's, you know, right now it's, you know, I call it an AI assistant.
It's not just an AI chatbot or something.
It's, you know, it actually can look at your model, look at what you've done so
far. And it can change stuff. It can actually do stuff, you know, for you.
(46:20):
And it's, you know, starting to be able to build influence diagrams and so on.
And right now is a really good time for me to put a plug in for a presentation on this
exact topic that I'm doing on October 10th, online.
So it's at the Risk Awareness Week, 2024 online conference.
(46:42):
And I think you guys can put a link probably in the podcast notes.
>> Yeah, we definitely will.
>> And put it in the description.
>> And it's 2024. I don't know what year you'll be listening to the podcast.
So, yeah, yeah. But yeah, we'll put it there.
So if you want to, is it in person, is it going to be online?
(47:02):
>> It's an online.
>> Okay.
>> So, I think it's definitely worth watching. We got to see a little sneak preview of the talk.
(47:32):
We can't share it, but I will tell you that the talk is quite interesting.
So yeah, if you have the time, I definitely would recommend watching the talk.
I learned a lot, I learned a lot about Analytica.
And, you know, we're talking now.
Sometimes it's a little bit hard to visualize.
you know, everything we're saying, right?
But Lonnie uses a lot of like visuals.
(47:56):
You can see him create like a model for a lithium mine.
And it's something that I knew nothing about.
But, you know, kind of seeing him walk through the process,
like really kind of made it so that I sort of could understand what he's talking about now.
So like right now, we're kind of talking the abstract, but like, you know,
(48:16):
doing this and seeing it visually, really kind of made everything sort of click.
So I highly recommend watching this presentation.
>> Yeah.
>> I kind of wanted to take a step back and maybe talk about what brought you here.
What are the decisions and actions that you took that landed you as the CTO of Lumina Decision Systems?
(48:42):
And maybe we can start with your undergrad years.
>> Okay.
And being awarded, while at Berkeley, the award for the best electrical engineering student in the country.
>> Yeah, yeah, those.
Clearly, you know, the pinnacle of awards for me: I got awarded the Alton B.
(49:02):
Zerby Award as the most outstanding electrical engineering student in the United States of America.
Which is really quite amazing.
And maybe I'll step a little earlier than that to tell you how I got to that.
So I grew up in Silicon Valley, in Santa Clara initially, and eventually my family moved to Los Altos.
And my dad had just graduated as an electrical engineer when we came here when I was almost seven.
(49:33):
And he was, you know, the smartest electrical engineer that I think I've ever encountered.
He's just amazing.
He was so capable, I never came close to matching him in electrical engineering.
And he was very enthusiastic about the cutting edge stuff that was going on here.
(49:55):
And so I was out one day playing cops and robbers with my friends on bicycles.
And I was finding I had a hard time keeping up chasing somebody while I was making the noise of a siren.
So I came in and asked my dad, I was, you know, I don't know, seven or eight.
(50:17):
Can I get a siren for my bike?
And he said, he kind of looked for a minute and said, yes.
And he reaches over and pulls out a piece of vellum paper, puts it on his desk and starts drawing this weird looking schematic.
I'm kind of thinking, well, what the heck is he doing?
I watch him draw this thing, this schematic and he hands it to me and he says, here, here's your siren.
(50:41):
I thought, you know, and anyway, he took me over to the workbench and proceeded to show me how to solder and read resistors and wires and how to read the schematic.
And so I built that little siren and we went into his work and the metal shop at his company and folded up a little metal chassis for it.
(51:06):
And suffice it to say, the process of building it was a whole lot more rewarding and fun than the siren itself was.
And I was hooked.
So from then on, I went to my dad and would say, could you design me a schematic for this?
Now I knew how to build stuff.
So I got into electronics like that.
And you know, building all these projects, we built a pong game about a year before Atari came out with its first home console.
(51:36):
A whole bunch of stuff.
And then, as I was getting into about the seventh grade, 1977-ish maybe, oh, another thing that he'd do is he was working on the weekends a lot.
And he would take me into the company to kind of babysit me.
He'd set me up on one of these little, they called them calculators at the time.
(51:57):
They were little computers.
So the HP 9825 was the first one. It had just a line of LED display, you know, 20 characters across, a little tiny printer and a plotter hooked up to it.
And I would sit there and program.
So it was about '78 or something.
And I became actually better than him at programming.
So he was much better than me at designing circuits, but I got really good at programming.
(52:21):
And so we, both kind of got interested in building our own computer.
We'd also go down to this little place called the Byte Shop a lot.
It was not too far from our house.
And they had all these little Altairs and different things, and we'd kind of dream about these, you know.
And so we decided, yeah, let's figure out what computer we want to get.
(52:44):
So we ended up deciding to join a homebrew computer club.
It's called the South Bay Homebrew Computer Club.
And there were actually two homebrew computer clubs in the area.
There was one that met at Stanford.
We went there for one meeting.
And then there was one that met down near Elendin.
And they were a little bit different.
(53:06):
The one at Stanford had speakers and they were kind of in an auditorium.
And the one in Elendin was a lot more like our generative AI meetup group.
You know, we kind of sat around with a small group and talked about building computers,
homebrew computers.
(53:27):
And that one actually got started building a project together, all the same thing.
So everybody built the same Digital Group kit for that.
And then people in the club would design peripherals and share the schematics.
And my dad was into the extensibility of that.
(53:50):
He loved designing the, you know, extensions to it and stuff.
And so when I was in seventh grade there, I built, you know, of course, I did all the building.
I soldered everything together.
But he did all the designing, and we built the first, you know, my first computer.
And that computer, I tell you, we put so many things on that computer.
(54:12):
It grew and grew.
It was really amazing.
It turned into quite a monster by the time it, you know, finished its lifespan.
So what were some of the specs of that computer?
Do you remember?
Well, the very first version I built had two kilobytes of RAM.
We had a tape recorder, a 100-baud tape recorder, that I could store stuff on.
(54:40):
And everything you did, you know, was with a little hex interface to the memory.
You could go in there and change the hex values of each byte in memory and then go
about and run it.
And it actually had a little thing that showed you what the registers were, you know, and stuff
when you were stopped.
And interestingly, I, you know, got that thing running and I was playing with that and playing
(55:02):
with that.
And one day, you know, I got it to print the letter A on the screen, you know, through
programming in hex and stuff.
And I have to tell you that that was a moment when everything snapped into place.
It was like, I get it.
I know how this works now, you know.
And from there, I was able to program in hex and do all sorts of stuff.
(55:26):
We upgraded that to eight kilobytes of RAM shortly after building it initially.
And then I think it was 24 kilobytes of RAM.
Yeah, these things came in pretty fast succession.
These were things people in the group were working on too.
And then we did the big project, which was a 64 kilobyte memory system, which was maxed
(55:48):
out.
It was a Z80 processor, which if you know the 8080 processor is more or less the same thing.
And I'd say, I never, ever ran out of memory until I had 64K.
And from that moment on, I never had enough memory.
(56:08):
So from then on, it was like constant overlay swapping and everything, you know.
Yeah, for reference, my MacBook right now has 16 gigs of RAM, which is roughly like 10
million times more RAM than that first computer you built.
Yeah, actually, isn't it about 5 million?
(56:28):
It's more.
8 million. 16 gigabytes divided by 2 kilobytes is about 8 million.
But I rounded up a little bit.
Yeah, you said 10 million.
Yeah, you're right.
You're right.
Okay.
Yeah.
So it was interesting.
I wrote a Scrabble-playing game completely in hex.
I had 64 kilobytes at that time, but I needed space for the English dictionary.
(56:54):
So I set aside, you know, I had to plan out the whole memory kind of manually up front.
I set aside 256 bytes for my code, maybe 512 bytes, which included both the Scrabble-playing
part of it and the dictionary editor so that I would have all the rest of the memory for
(57:15):
the dictionary.
So I implemented a Scrabble-playing game with a dictionary editor that took 512 bytes of
code.
Wow.
512 bytes.
That is wild.
And because I was self-taught, I didn't know better.
I figured out on my own that the way you make decisions, you know, if-statements or whatever,
(57:39):
you know, the way you have the code make decisions is it would rewrite the code right in front
of what it was about to execute.
So it was self-modifying code.
Right.
So the code would actually rewrite the code right
in front of what it was about to execute,
then execute that code, and rewrite the code in front of that.
And so it turned out I got really compact code.
But later on, you know, when I started working with some professional programmers, they were
(58:00):
like, "That was not a good idea."
And you know, I learned a lot.
So yeah, those are the old days.
They're really, really interesting.
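[Editor's note: the original was hand-written Z80 hex; here is a toy Python illustration of the idea of a program patching the instruction just ahead of the one being executed.]

```python
# Toy "memory": a list of instructions executed in order. Instead of a
# conditional jump, PATCH overwrites an upcoming slot, the way the
# Scrabble program branched by rewriting the code in front of itself.
program = [
    ("PATCH", 1, ("PRINT", "tile fits")),   # rewrite the next slot...
    ("PRINT", "placeholder"),               # ...before it ever executes
    ("PRINT", "done"),
]

pc = 0
while pc < len(program):
    op = program[pc]
    if op[0] == "PATCH":
        program[pc + op[1]] = op[2]         # the self-modification
    elif op[0] == "PRINT":
        print(op[1])
    pc += 1
# Prints "tile fits" then "done"; the placeholder never runs.
```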
I think that's a pretty good technique for code golf.
In case you're just trying to get as compact as possible.
It's like, "Oh, you know, why do we have this if that just like I'd rewrite it?"
Yeah.
And you know, one interesting thing was at this time when we were building our computer,
(58:24):
you know, we went down to the Byte Shop quite a bit.
And one day they got in this new computer and I really loved it.
It had two joysticks on it and it had color graphics.
It was like, "Holy crap, I had a bricks game on it."
And I was like, "Oh man, I could write so many games with this."
I didn't like playing video games that much, but I liked writing them, you know.
(58:47):
And we almost got one.
But, you know, my dad really was much more into being able to design and extend the system.
And that was a very closed system.
I read later that those were the very first Apple IIs that Steve Jobs and Steve Wozniak
built.
And they put their very first ones there in the Byte Shop.
(59:09):
So I was actually playing with the very first ones.
Wow.
I almost bought one.
But that would have been a good one to keep.
It would be worth like a million dollars.
I don't know about that much, but I think they are worth a lot.
Yeah, so Steve Jobs and Steve Wozniak were hacking away at their Apple I and subsequently Apple
II while you were also tinkering with your computer, and they were in the other homebrew club, the one at Stanford.
(59:34):
Yeah.
Yeah.
Yeah.
How do you feel about where we're at now with ChatGPT and these LLMs and agentic behavior,
compared to the late 70s, early 80s?
Oh, that's a good question.
Yeah.
So when I was in that stage, I would tell people I was working on a computer.
(59:59):
I'd tell adults that and they'd say, what's a computer?
And sometimes they'd say, is that one of those things in the basement of a bank?
And I'd kind of explain what it was.
I'd say, in 10 years you're going to have one.
Everybody's going to have one.
They'd look at me really strange and say, why would I want one of those?
And I said, you know, I gave them some things they'd use it for and stuff, and a lot of them
(01:00:24):
came true, but they hardly hit on the main things that turned out to be big.
But I was off by 10 years.
It took about 20 years for that to happen.
In 20 years, everybody had one.
But I had zero doubt in my mind at that time that this was going to change the world.
(01:00:45):
I was working on this little, funny, little technology that people around me, some people in
my area in Silicon Valley here were very excited about, but nobody else knew what it was.
I was just positive.
It was going to massively change the world.
And I have to say, I have that same feeling now with generative AI and with these AI technologies.
(01:01:14):
And in some ways, I actually feel like the transformation from AI will be bigger than that one was.
I think it will be a much bigger deal.
There is, you know, Amara's law, which says, you know, we tend to overestimate short-term
(01:01:35):
change and underestimate long-term change.
And I think that really, really applies to the current moment as well.
So, you know, it's very easy to look at this stuff and say, man, in two years, the world's
going to be completely different.
I can feel pretty confident that those estimations are off.
(01:02:00):
But, you know, when you say in 10 years or 20 years, this stuff will completely transform
the world in ways that are beyond what we are imagining.
I think it's going to be very true too.
So, it's going to be very big.
You know, there's just so many bottlenecks is what happens.
(01:02:22):
There's so many bottlenecks to get something to where it transforms the world that, you know,
the thing right now that looks like the big bottleneck might get solved, but then it kind
of shifts to the next bottleneck.
And some of these are technological bottlenecks, some of them are societal or economic or, you
(01:02:44):
know, whatever.
But there's a lot of those you have to get through.
And that's why things don't change as fast in the short-term.
But it's building a foundation for something really, really, really monumentally transformative,
I think.
So, what do you feel like some of the bottlenecks are now before we can kind of get to the,
let's say, I don't know, like maybe like AGI or like the singularity.
(01:03:06):
I don't know if we'd consider AGI and the singularity the same thing, but I feel that they're kind
of similar. But what would you consider kind of like some of those bottlenecks
before we get to some sort of state where like computers are kind of better than humans
and everything?
Yeah.
So, I'm optimistic we'll get there.
(01:03:28):
And I'm optimistic it'll be in my lifetime.
And, you know, I'm a lot older than you are.
But I have to say I have to kind of temper this because I talk to people on kind of
both sides, you know, where I feel like some are way underestimating, and I talk to people where
(01:03:51):
I think they're way overestimating how fast we'll get there.
And, you know, so I think within the group I'm sitting with right now, I think it's
going to take a little bit longer than I think you guys might estimate.
And a big part of that question though is getting at, you know, what we have in mind when
(01:04:14):
we say AGI.
And you know, I think we all agree that it's something that has a similar level of capability
to human beings.
Now, we might not necessarily even care ultimately about having the same capabilities as a human
being because we want to somehow make things better.
(01:04:34):
That doesn't necessarily mean replacing us verbatim.
But we have to somehow figure out, you know, what we mean when we say that.
And I think it's really easy to look at certain things that we really revere today, you
know, and a lot of those are, you know, high level capabilities like programming and things
(01:04:56):
like this that, well, it feels like if we do that, we've really conquered something.
And this is a pattern I've seen, you know, the whole 37 years I've been in AI: people
have always looked at some of these revered cognitive areas and thought that's what it
means to solve AGI.
(01:05:16):
And, you know, there's this guy Hans Moravec at CMU back in the 1990s who said, no, no, that's wrong.
These kind of things are much, much easier than the things two-year-olds do.
And, you know, the basic ability to live in the world and interact with the world and perceive
and change things and deal with three-dimensional, continuous environment and, you know, really
(01:05:39):
get causality right and all that stuff.
trumps, you know, the stuff you do in natural language, and it will take a lot longer to get there.
And it seems to be panning out, you know.
So I think it's a long ways to go.
I use this example a lot of times: I think a plumber is the epitome, you know, maybe one of the
(01:06:00):
last. If we can get to the level of the plumber, what a plumber does,
then I'll say, okay, we're there.
But we are so far from that, you know, plumbers have to, they go into a, you know, a situation
where something's backed up and they have to deal with a completely novel situation, geometric
situation, et cetera, you know, with who knows, weird plumbing that's been configured in
(01:06:24):
a weird way with some completely, you know, you have to figure out some weird clog or whatever
is going on.
And, you know, sometimes they have to dig through concrete, go through walls, repair sheet
rock, sometimes carpentry, sometimes electrical stuff, you know, they just have such a wide
variety of things they have to adapt to and then they have to figure out, you know, how
(01:06:44):
am I going to turn that wrench, you know, in this tight area, you know, it's like, you
know, you just got all these things like that that are just so far beyond, you know, where
we're at.
So in that sense, there's, you know, I see a really long path in front of us to get to,
quote, "AGI" as I see what "AGI" means.
But that doesn't mean that, I mean, the rate of progress that we've seen in the last,
(01:07:09):
you know, four years or whatever is, is mind-boggling and it doesn't seem to be slowing down.
So I think we're going to get, you know, huge value out of this in all sorts of ways, even
though we don't necessarily make it to this quote, "AGI" you know, complete "AGI".
For a lot of people, the goalposts are just going to keep moving.
Maybe my goalposts are way out there compared to a lot of people.
(01:07:32):
But I think, you'll see just a lot of people keep moving the goalposts because, you know,
yeah, yeah, for sure.
Actually, I was going to say that, having recently done plumbing at Mark's place with the
help of Mark's landlord, I realized how challenging and complex it is, because each builder can have
(01:07:53):
their own unique nuance and go off script from what is standard.
And it requires a lot of critical thinking to solve these kinds of ambiguous, nebulous problems.
But, you know, one really cool thing that was announced recently by OpenAI was the O1 model,
which has shown at least some kind of mathematical, quantitative reasoning ability.
(01:08:20):
And I don't know how novel of a model it actually is, as opposed to just incorporating a lot
of the agentic behavior, chain-of-thought reasoning, ReAct-style frameworks, to reason out a problem
with multiple steps.
But what do you think about that?
Do you think that's pushing the boundary a little bit and getting us closer to that reasoning
(01:08:41):
ability?
Yeah, and funny you said that I was playing with that this morning.
I was actually testing it out and thinking what it could do.
So yeah, absolutely.
I think this, what the O1 model is really focused on is how can we use inference time reasoning
to improve the responses.
(01:09:01):
So instead of just pumping through the language model and spitting it out, how can we actually
think longer and think in a variable amount of time.
So if it's a harder problem, we should think about it longer.
And that's a really hot area of research right now.
I mean, there's quite a few things going on in that area.
(01:09:24):
And I think that's one of several really important areas.
And it's one that I'm certainly very interested in and have been following closely.
Some of the other ones, of course, are multimodal domains, which get a little bit closer to being
in the real world.
And maybe actually I'll say robotics: bringing these things into robotics to have more general
(01:09:45):
robotics and control and stuff.
And I don't know, there's a few others in there that I'm forgetting right at the moment.
But yeah, very hot area.
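A minimal sketch of that inference-time idea, in Python, with a hypothetical ask_llm helper standing in for any real chat-model call; this is only an illustration of "think longer on harder problems," not O1's actual (unpublished) mechanism:

    # Sketch of variable inference-time reasoning. `ask_llm` is a made-up
    # stand-in for a real model call, stubbed so the sketch runs end to end.

    def ask_llm(prompt: str) -> str:
        # Stub; swap in a real chat-model client here.
        return f"(model output for: {prompt[:40]}...)"

    def answer_with_variable_thinking(question: str, reasoning_steps: int) -> str:
        thoughts: list[str] = []
        for _ in range(reasoning_steps):        # more steps for harder problems
            context = "\n".join(thoughts)
            thoughts.append(ask_llm(
                f"Question: {question}\nReasoning so far:\n{context}\n"
                "Add one more reasoning step."
            ))
        # Consolidate the hidden intermediate steps into one final response.
        return ask_llm(
            f"Question: {question}\nReasoning:\n" + "\n".join(thoughts)
            + "\nGive only the final answer."
        )

    print(answer_with_variable_thinking("Is 257 prime?", reasoning_steps=3))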
So have you played with O1 very much yet?
We did briefly when we were recording our previous podcast episode.
(01:10:05):
I asked some basic questions, you know, the strawberry problem.
It was able to solve it.
Some of the YouTubers who tried it said it failed, but looking at their screen recording, it seemed
like the times that it failed, it wasn't using the chain of thought.
The times that it passed, it was using a chain of thought.
And it was doing multiple steps of computation behind the scenes.
(01:10:29):
And then consolidating all those intermediate steps into a final result.
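For reference, the ground truth behind that test is trivial once it is code rather than token prediction, which is why the tool-call route sidesteps the tokenization problem entirely:

    # The strawberry test as plain code: exact, no tokenization involved.
    word = "strawberry"
    print(word.count("r"))                                # 3
    print([i for i, ch in enumerate(word) if ch == "r"])  # positions [2, 7, 8]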
It seems like that might be a good candidate for Analytica too, to help you use a model that
understands data as opposed to just predicting the next token.
Yeah, actually I don't know if "understands data" is necessarily a very accurate way to say it,
but I think it is, you know, a reasoning step.
(01:10:53):
Breaking out the compositional reasoning stuff.
It can reason about data logically as opposed to going with auto-complete, which is what
it seems to be doing today.
Yeah.
Yeah, I guess one thing is, if you had like a huge table, say of numbers and things like
(01:11:14):
this, that gets a little overwhelming for language models.
And I don't know that the stuff in O1 solves that particular problem.
I would assume it would probably run a SQL query and do some analysis on the data as
opposed to looking at the raw data itself.
Yeah, I think so too.
So that's true.
Kind of code-interpreter style.
(01:11:36):
So yeah, the very first problem I tried on it was in the same vein as the strawberry problem.
And it didn't do as well.
It was: write a limerick that describes how many words are in the limerick.
And you know, it had one.
"Thirty words, this limerick does comprise," blah, blah, and it had 34 words.
(01:11:59):
And I asked, did it pass?
And no, it has 34 words.
It actually didn't do it.
I said, well, can you redo it to pass?
So then it came back with "this limerick has 28 words."
And it's like, no, it has 36 words.
(01:12:20):
So it didn't do too well on that one for me.
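The check Lonnie did by hand is one line of code, which is the case for giving the model a tool to verify its own constraint and retry; the limerick text and claimed count below are placeholders, not the model's actual output:

    # Verifying a self-describing word count, the check done by hand above.
    import re

    def word_count(text: str) -> int:
        # Count word-like tokens; punctuation and ellipses are ignored.
        return len(re.findall(r"[A-Za-z0-9']+", text))

    limerick = "Thirty words, this limerick does comprise ..."  # placeholder
    claimed = 30                                                # what it claims
    actual = word_count(limerick)
    print(actual, "words; passes:", actual == claimed)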
But I did a lot of advanced math stuff on it today, and it was a very interesting
experience.
I mean, overall, pretty impressed.
But there were definitely a lot of limits.
(01:12:40):
And Assista is doing some stuff that's kind of in this vein as well.
And maybe I want to do all that stuff inside Assista, you know. Oh, I wonder, maybe
I can improve on whatever is there, you know. You never know.
How does Assista do with large tabular data?
Because you mentioned that a lot of these large language models have a hard time dealing with
(01:13:03):
lots of data.
Like, if I have an Excel spreadsheet, it's not the best at interpreting it.
Like, I don't know.
How does Assista do with something like that?
Well, right now it doesn't.
Oh, okay.
Yeah, in fact, I've kind of avoided giving it access to large tables
of data, because I just know it's not going to work yet.
So there's a lot of stuff that's on my roadmap for stuff to get to eventually.
(01:13:27):
But yeah, so, but, you know, I have a lot of ideas and I've seen a lot of stuff on that.
So maybe someday.
Is that also an area of research that you follow closely, working with the large
tabular-type data?
I don't know if you have any thoughts on how far away we are from solving that
(01:13:49):
kind of problem.
Well, I mean, the best I've seen has been code interpreter, you know, and maybe O1, because
it's, you know, the next generation.
But, and, you know, that approach, as Shashank said, was to basically, you know, write
some Python code to process the data.
So it knows not to process it directly, but to, you know, to use tools to process it, which
(01:14:14):
makes a lot of sense.
And it does a pretty amazing job at that.
But it was also very limited.
I mean, you know, I got to about my third task or something with it and it was unable
to do it.
So, you know, there's a lot of room for improvement there.
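Roughly, the pattern he's describing looks like this: the model writes a small piece of pandas code, the host runs it, and only the small result goes back into the context. The file name and column names here are hypothetical:

    # Code-interpreter pattern: the model never reads the raw rows; it writes
    # code, and only the small result is fed back into the prompt.
    import pandas as pd

    df = pd.read_csv("sales.csv")  # hypothetical large table

    # Imagine the model emitted this snippet for the question
    # "Which region had the highest 2023 revenue?":
    model_code = "df[df['year'] == 2023].groupby('region')['revenue'].sum().idxmax()"

    result = eval(model_code, {"df": df})  # in real use, run this in a sandbox
    print(result)  # only this one value returns to the model, not the table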
Yeah.
I mean, I haven't seen anything that's taken me by storm in terms of processing
(01:14:40):
large tables with LLMs directly.
So yeah, I don't know.
I mean, maybe, you know, computers, pre-AI and stuff, are quite good at that.
So maybe the answer is don't, you know, don't try to somehow subsume that in the
AI part; have the AI use those tools. That might be the answer.
(01:15:03):
Yeah, that's actually probably a pretty smart way of approaching the problem.
I mean, you're right.
I mean, Excel exists; all these things exist that are really good at processing
tabular data.
And we do have tool calling.
So maybe some sort of agentic tool might be the way of approaching
that problem.
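A sketch of what that agentic route can look like: declare the table-crunching capability as a tool in the JSON-schema shape most chat APIs accept. The names and fields below are illustrative, not any vendor's exact spec:

    # Illustrative tool declaration: the model chooses the tool and arguments,
    # your code runs the query, and only the (small) result goes back to it.
    run_sql_tool = {
        "name": "run_sql",
        "description": "Run a read-only SQL query against the sales database; "
                       "returns at most 50 rows.",
        "parameters": {  # JSON Schema for the arguments the model must supply
            "type": "object",
            "properties": {
                "query": {"type": "string",
                          "description": "A single SELECT statement."}
            },
            "required": ["query"],
        },
    }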
I think business intelligence is probably where LLMs have a good use case, to interpret
(01:15:27):
this data to help with analyses on large amounts of data in ways that human beings can understand.
Yeah, yeah.
Yeah, well, one of my next goals I do want to work on is to at least handle a graph.
Actually, to be able to say, you know, interpret this graph for me, and, you know, add that in.
So that'd be pretty cool.
(01:15:49):
So yeah, I wanted to kind of switch the topic a little bit.
You know, we've talked a lot about Analytica.
We've talked about, you know, some of the history of the area.
But one thing that I know you've done some work on is around AI risks, and specifically
existential risk.
So I kind of wanted to get your take on, mainly, what do you
(01:16:11):
feel the probability of some sort of existential doom is, you know,
like AI killing all humans? Do you think that there's any
merit to that? Is this something that keeps you up at night, or do you think
that's maybe just an overblown movie thing?
I don't know if you have any thoughts on, you know, what the
actual risks of this technology are, as you see them.
(01:16:33):
Yeah.
So that project ended about a year ago and ran for a year before that; it started just, you
know, a little bit before the ChatGPT hype came in.
I got pulled into a project called MTAIR, Modeling Transformative AI Risks.
And you know, because we specialize in modeling of unprecedented situations, they thought, you
(01:16:58):
know, they pulled me into the project to see if we could figure out how to model that exact
question.
So I worked on that for about a year and you know, actually coming up with a good model
for that is pretty darn tough.
It is tough.
I became familiar with a lot of the stuff that's going on in the AI safety and existential
(01:17:25):
risk field, and a lot of, let's say, the arguments that that field is making. And I mean,
the bottom line is, you know, that if an existential outcome occurred, it's so catastrophic
that even if there's a low probability, you've got to take it seriously.
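The arithmetic behind that point is just expected value; with deliberately made-up numbers, a tiny probability times an astronomically bad outcome still dominates:

    # Toy expected-loss calculation; both numbers are made up for illustration.
    p_catastrophe = 0.001            # a hypothetical 0.1% chance
    lives_at_stake = 8_000_000_000   # everyone, in crude "lives" units
    expected_loss = p_catastrophe * lives_at_stake
    print(f"{expected_loss:,.0f} expected lives lost")  # 8,000,000: not ignorable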
(01:17:48):
And you know, the bottom line is that there seems to be merit in my opinion to the arguments.
You know, basically I had a colleague on the project who kept saying, you know, I don't know
what to think necessarily, but you know, can you just tell me, you know, can you just punch
holes in these arguments?
You know, where's the flaw in the argument?
(01:18:11):
It's like, if you really read a lot of these, you know, there aren't
really any flaws in the arguments.
They're well thought out.
You know, a really good book that I do recommend, from quite a while ago, is a book called
Superintelligence by Nick Bostrom, and it's very well thought out.
It's a little hard reading.
It's kind of an academic level reading in a lot of places.
(01:18:34):
But he really thought this through kind of in a technology independent way.
It's not like, you know, here's how they're going to be implemented, but here's
some things we can kind of figure out that we need to worry about, and here's why.
So, but overall, I mean, myself, I'm a techno-optimist.
I mean, I definitely am.
I'm excited about this stuff.
So, you know, I think the expected outcome and, you know, the very high
(01:18:59):
probability outcome is that, you know, in the long term, AI is going to improve humanity
in incredible ways.
There's going to be a lot of disruption.
A lot of disruption here and there, but economic disruption is different
from existential risk.
But there is a small probability that we could end up, you know, with an
(01:19:23):
extinction event.
So, you know, right now is the time to be thinking not about arguing whether that's true or false
necessarily, because the utility is so bad if it happens, but really focusing on what
are the decisions that we have today to minimize that, because I think there are some.
(01:19:44):
My personal opinion is that, you know, the things that we're likely to be able to control
are almost all in the technology realm, in terms of solving some of these alignment
problems.
You know, the reality is, today we don't know how to control, let's say, a species that
(01:20:06):
runs circles around us in intelligence.
We really have no idea how to. You know, the gorilla is absolutely superior to us in pretty much every
single physical way you can list, but we have completely dominated it and
subjugated it to this little area in Africa because of our intelligence, right?
(01:20:28):
And you know, how are we to know that the species we create doesn't do the same thing to
us?
I know there's some people, Hans Moravec, Rich Sutton, and some others, that, you know,
view this as either inevitable or not necessarily that bad of a thing.
You know, these are our mind children, our descendants.
So it's just a fact of life, you know.
(01:20:51):
They're not biological descendants, but they are our descendants.
So, you know, it's just the way it is.
If they're superior, they win out.
But, yeah.
So I don't know.
Yeah, yeah, it's hard to say.
So, based on the recent, or maybe a few months ago, coup that happened at OpenAI, with Sam
Altman's ousting and then subsequent rehiring, and then more recently, most of the executive
(01:21:20):
board at OpenAI and some of the founders leaving, and specifically Ilya Sutskever founding
Safe Superintelligence.
Which camp are you in?
Are you pro-progress, given all the progress that OpenAI has been making, releasing amazing
models that you yourself are using today in your own product?
(01:21:44):
Or would you side with Ilya Sutskever and some of the others who are cautioning us to
put the brakes on and think more deeply about some of the risks?
Yeah, yeah.
And, you know, I guess I'm not completely convinced that that's quite Ilya's position, but there
(01:22:04):
are people like Geoffrey Hinton and stuff who are definitely people I respect tremendously
that are saying that.
So, I'm more in the pro-progress camp at this point.
I mean, there's also a realism in here. You gotta kind of put
(01:22:30):
this in context: there's no putting the genie back in the bottle. But I do think that
the benefits are also really, you know, just tantalizing.
So I tend to be definitely more on the pro-progress side, but with everything
I do, I try to stay open, try to stay nuanced.
(01:22:50):
So I don't necessarily, you know, say that I'm like absolutely there, I kind of try to keep
everything as a balance.
You know, you're gonna kind of have to take all these things seriously.
Yeah, I think that's a really good point.
And I agree, I think that it's really hard to put the genie back in the bottle.
(01:23:10):
I mean, let's say, you know, you stopped working on it, or, you know, some other people
decided to stop working on it.
Well, it's not like you're gonna get everybody to stop working on it.
I mean, progress is somewhat decentralized.
Maybe you could lock things down in America, but then maybe China will be working on
something, maybe someone in the Middle East, maybe some guy in his bedroom
(01:23:31):
decides to make the next revolutionary breakthrough in AI.
And I think that, you know, it's hard to stop it.
Which is both a little bit scary, right?
Because like, you know, we don't know where that progress breakthrough will come from.
But also, incredibly exciting because we don't know where the progress breakthrough will
come from.
So, I don't know.
(01:23:52):
I think that the way I kind of look at the problem is that I'm sort of
in line with you, in saying that I would like to see these things be built somewhat
faster. And if we're building quickly, then we can also figure out what the problems
are and solve them more rapidly, right?
(01:24:14):
So, I mean, you can hypothetically go sit down, think about what you're gonna do
for days, weeks, years on end, and then make a decision.
Or you could just go make the decision and see how it turns out.
And then if it doesn't turn out well, you could sort of iterate.
And I think that as long as we can prevent these AI systems from having too much power,
(01:24:36):
you know, as long as you don't make the robot with the gun and
say, hey, just kill any terrorists,
you could probably be all right.
Maybe it's a little extreme, but I feel like, you know, I'd like to see that.
Your description there actually was kind of a direct connection to what we talked about earlier,
(01:24:57):
which is, you know, when is it worth building a model versus executing.
And, you know, here you said, I think, I'd like to just see us execute.
Not spend too much time deliberating to make a better decision, but run out and execute.
And iterate.
And I mean, that's a good point, because actually that's another thing to consider when you're
(01:25:17):
thinking about that earlier question: you do want to always set things up
where you are in a position to iterate.
But you know, there's actually a cycle, and the cycle isn't really just around iterate.
It's around decide and iterate.
So, you know, at each moment you can decide whether it's worth
(01:25:38):
spending more time on the decision.
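A minimal sketch of that decide-and-iterate cycle, with made-up stand-in functions: at each step, compare the value of further analysis against its cost, deliberate if it pays, otherwise act and fold the outcome back in:

    # Decide-and-iterate sketch: every hook here is a hypothetical stand-in.
    import random

    def value_of_more_analysis(state) -> float:
        return 2.0 / (1 + state["analyses"])        # stub: diminishing returns

    def analyze(state):
        state["analyses"] += 1                      # refine the decision model
        return state

    def act_and_observe(state):
        state["outcomes"].append(random.random())   # stand-in for the real world
        return state

    def decide_and_iterate(cost_of_analysis: float = 1.0, rounds: int = 5):
        state = {"analyses": 0, "outcomes": []}
        for _ in range(rounds):
            if value_of_more_analysis(state) > cost_of_analysis:
                state = analyze(state)              # still worth deliberating
            else:
                state = act_and_observe(state)      # commit, execute, learn
        return state

    print(decide_and_iterate())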
In this example, in this particular case, you know, like I said, I spent a year working
on modeling the risk.
I think it's a worthwhile thing for some people to be working on.
But I also definitely kind of felt, from being on that, that it's a little bit too
(01:26:00):
early to be too useful.
And maybe that's, you know, limits on my own creativity, because I mean, Nick Bostrom, I
thought, actually did some amazing things in his book that maybe showed you can do a lot
early on.
But, but yeah, I do think, you know, the more people are doing things, you know, that are
(01:26:24):
learning how we align these models and things, the better off we are.
I liked Zuckerberg's comment on this.
He has a similar mentality to both of you.
He mentioned that, yes, we should be mindful of some of these existential risks long term.
But in the short term, I think some of the more practical risks are spreading misinformation,
(01:26:49):
especially with the elections coming up: using LLMs to generate mass propaganda, or campaigns
that attack other people, create deepfakes, and so on.
But setting politics aside a little bit, what do you think are some of the disruptions
that we're going to face, especially with lots of jobs, or maybe a slow transition
(01:27:12):
into less hiring, and how can people adapt to this new world?
What should they be learning?
How can we prepare ourselves?
And for a younger audience entering college or high school, how do they
prepare for this new world?
(01:27:32):
Yeah.
Yeah.
So, what disruptions are we going to see?
One thing I want to say is for the short term economic disruptions like job loss, I think
we can look to history and look at other technological innovations and what they have done in terms
(01:27:56):
of how they have transformed jobs and things like that.
Because at every stage of the way with previous technological innovations, we have seen
the opponents out there saying, this is going to destroy jobs, it's going to...
It's happened with the printing press too.
The printing press, automobile, yeah, exactly.
(01:28:18):
Everything along the way, computer, all this stuff.
And there seems to be a story that plays out every single time, which is: there are,
in fact, some segments that experience job loss. Particular jobs go away, and particular
towns where that was the only industry suffer, and there's big disruption.
(01:28:44):
But on the whole, the average well-being of society gets better.
And there's a fundamental reason that happens every time: these technological innovations
improve productivity.
And that means we can do more with less, and there's more value.
(01:29:05):
So you don't need horses anymore.
Suddenly, you have the car, which allows you to do things cheaper; you don't have to maintain
stables anymore, and all this stuff.
And now you can use that money for other things.
So we see that over and over again.
These short-term disruptions that we're seeing in AI are in the same camp.
(01:29:29):
They are the same thing.
We're going to see localized disruptions, and we're going to see opportunities created
elsewhere.
And we're probably feeling it a little bit more because some of these are touching a little
closer to us.
So what does this mean, though, for people, especially young people who are trying to figure
out where to aim so they don't get disrupted?
(01:29:53):
The first thing I'd say is I think the skilled trades are very safe for quite a while.
So again, plumbers, carpenters, roofers, mechanics, and so on, machinists.
So those areas, we've kind of undervalued them for a while.
(01:30:15):
But I think we're going to see a shift again in society where those become some of the
more valued professions.
Whereas a lot of the professions like marketing and copywriting and whatever, those types
of professions, the white-collar ones,
(01:30:35):
those are going to have some major disruptions and shifts.
I think going into any of the technologies is actually going to be a very safe area to
go into if you want to go into computer programming or things.
I think you're actually safe.
If you can adopt the mentality that the fun of the job is not the particular technology
(01:31:04):
you're working with today, but it is in fact the rapid change of technology and the ability
to jump on new stuff and learn new stuff continuously.
And if you have that mentality, then you're safe and you're in good shape going into these
careers.
So I'm not saying don't do it; as long as you have that mentality, you're fine.
(01:31:29):
I myself, I think, was lucky, because from the age of eight I got involved in cutting-edge
technology.
And it changed really fast my whole life.
I loved it.
I loved the rate of change.
I loved having the new stuff constantly.
(01:31:50):
And I actually went into AI in 1987 because it just seemed so hard to do.
It wasn't that it was a practical thing at the time.
It was very fringe, but it just seemed like there's not a harder problem to solve.
So I loved that challenge and that change.
(01:32:12):
Now, if you have the mentality that I'm a Python programmer, I'm going to learn Python,
and then I've got to stick with that for the rest of my life:
yeah, Python is popular right now, but the odds of it being the centerpiece skill for
people 10 years from now are very low.
Not just because of AI, but just looking at history.
(01:32:37):
The big thing 10 years ago wasn't Python.
So that's just the way it is.
I would not go into things that lead to kind of white-collar jobs that aren't sort of fundamentally
technology and aren't fundamentally skilled trades.
(01:33:00):
I think that would be a lot of those areas.
I mean, that might include lawyers and stuff.
I mean, doctors are always going to be safe, but I think AI is going to impinge on a lot
of the academic parts of that.
But you have a lot of office work type jobs and things like that.
Those are where you're probably risking it if you're aiming for those jobs.
(01:33:22):
So maybe some more clerical work that you might see.
If you were doing data entry, something like that, that might be taken. Or I could even
imagine something where it's really repeatable work.
Maybe if you were like, we're going to move this box from here to there, or we're going
to sort these things.
I could imagine AI with some sort of robotic hand could take that type of job.
(01:33:46):
But I think you're spot on when it comes to plumbing; I feel like that makes a lot of sense.
And we kind of mentioned before that plumbing is hard.
And I would love to see some sort of AI test to see who can be the first person to automate
a plumber.
There should be some sort of X Prize for plumbing.
I feel like that would be pretty cool.
(01:34:08):
I would say that across all fields, even if you are in a field right now that's something
like marketing or whatever, your future is not tied to that field you're in right now.
It's tied to your ability to adapt.
(01:34:31):
And things don't change overnight; you realize that.
A lot of times when jobs get overtaken, it doesn't necessarily affect the people who are
working in them that much.
The job drops off, but you'll find it's because people transition out of the job or retire
and stuff.
(01:34:52):
So truck drivers, for example, most truck drivers today are in their 50s.
Young people aren't going into that profession because they know it's going to be automated.
The truck drivers today are probably not going to get displaced, but in a decade, most
trucks will be automated.
The ones who are there are just going to retire.
They're not going to get fired or get replaced.
(01:35:13):
It's just going to be kind of a transition.
That's how a lot of jobs go sometimes.
One thing I want to ask you about is, you're a busy guy.
You're the CTO of a company, you have a PhD, you have kids.
You've been employed at a company for 21 years in an executive position, which is quite amazing
(01:35:39):
that you've been able to do that.
How do you maintain your work-life balance and still have time to do what it takes to run
the day-to-day operations of a company, maintain your family life, and then also play
around with new technologies and have time for podcasts and whatnot?
(01:36:01):
How do you structure your time and your day?
Yeah.
Well, one thing I'll say is my kids are all grown.
I'm an empty nester now.
So that frees up a lot of time.
When my four daughters were at home, there wasn't much time at all for anything else.
So being a family, raising kids and stuff is an absolute commitment.
(01:36:30):
I committed the time to that.
I've made that the priority.
But I've always been a workaholic.
I've always just been so into the technology.
And I think I value my self-worth through my productivity.
(01:36:52):
That's where I feel good about myself, is if I feel like I'm productive, especially on
technology and stuff.
So I guess that part kind of happens automatically because of that enthusiasm.
(01:37:12):
I did kind of take up a methodology in college that it didn't make sense to spend more than
eight hours a day focused on the job.
And at the time, the job for me was studying.
I got into engineering, of course, and I just fell in love with it.
(01:37:33):
I loved college.
I loved studying.
Obviously, I did awesome.
That was, you know, I did very well.
You were the best at studying.
Yeah, apparently.
But unlike a lot of my colleagues, I went home at six and went to bed.
(01:37:57):
The only overnighter I pulled was actually, I think, when I was a graduate student already.
It was a group project.
We did the overnighter.
You know, I'm not sure if it was in a graduate course or an undergraduate
course.
I remember we were implementing a 6502 processor at the gate level.
(01:38:18):
That was a two-day project or something, with an overnighter in there.
So, was the overnighter effective?
Well, probably not, but it was kind of required because it involved multiple people.
You know, the time slots would have been very hard to coordinate, you know, to be able
to really get everybody working on it at the same time for the same period of time.
(01:38:42):
So, you know, I did that through my career as well.
I guess I'm a believer that what really determines your productivity is how focused you are,
not how long you work.
And I think if you work more than eight hours on a job, you end up diluting your focus
(01:39:06):
rather than increasing the amount of stuff you get done.
So for me, you know, an hour of focus is worth 10 hours of mediocre time.
And, you know, so that's, you know, so that's, I think, a really key part.
(01:39:27):
I mean, I was, you know, a really fast programmer. I don't
think the distinction between programmers is that big anymore, but I do think back in the
80s and stuff, you know, the hundredfold difference between the mediocre and the top programmer
(01:39:48):
was a real thing, you know.
And, you know, I was definitely up there.
So I was able to do a lot more than everybody else in a small amount of time.
So that helped.
But yeah, another thing is I turned down a number of big opportunities to go into really,
(01:40:12):
really high level positions.
At one point, you know, I was deciding whether to become the CTO of SGS,
which was a 700-person company.
And you know, after talking that over with my family and stuff, we decided, no, don't do
it.
Because, you know, you gotta leave time for the family.
(01:40:35):
And so I've also been driven to stay close to the metal.
I watched my dad, who was a super enthusiastic engineer out of college; he was
really good.
And he moved very quickly into management positions.
You know, very soon, he was managing hundreds of people or something, and, you know, billion-
(01:40:55):
dollar projects.
And I saw his happiness with his job really deteriorate, you know, with that.
It just was not as fun for him.
And so I kind of said, you know, stay close to the metal.
I don't know; looking back, sometimes I kind of regret not doing, you
know, that, but it also allows you to keep more of a work-life balance if you, you know,
(01:41:18):
if you don't jump for that.
So that's kind of been one thing I've gone for.
You know, don't jump for big management stuff, you know.
So if, you know, if Lumina grew to where, you know, I'd have to be managing 100 people,
I would bring somebody else in to do that.
I'd stay lower, you know.
(01:41:40):
So, I'm curious just to kind of dig in on that a little bit: how do you manage
delegation?
Because you said that you wanted to sort of focus on staying close to the metal.
And I know that a lot of the stuff that you ship on Analytica at Lumina, you
work on yourself, right?
But I also assume that, you know, there are only 24 hours in a day.
(01:42:03):
You have a limited amount of time to work.
How do you decide what is something that you want to work on versus delegate and
have somebody else work on it?
Like, I don't know, how do you decide, you know, what you're going to spend
your focus time on?
Mm-hmm.
Ah, I think that's an area I wish I could do better at myself.
Because, you know, a lot of that ends up being dictated, because it ends up coming onto
(01:42:29):
my plate and there's nobody else to do it, you know.
So, that's not the ideal.
Nobody else has the skills to do it, which is not the ideal.
Yeah.
My time at Ask Jeeves: I don't think I mentioned that, but, you know, I was with Ask Jeeves for
a while, a startup and search engine in the 90s.
(01:42:50):
And there I managed a bit larger group, and in one case, a project
that involved about 80 people by the time it was done, and I was leading the whole thing.
And that involved people in all sorts of areas, because marketing came and said, "I want
you to build this."
And, so, yeah, I had to integrate with, at that time, technical writers and marketing people
(01:43:16):
and engineers, you know, the whole gamut, and QA and everything.
And even some of the customers and stuff.
And I think mostly I was managing an engineering staff there.
I was a director of engineering.
Mostly, I guess what I thought was my job as a manager was to make the people working
(01:43:44):
for me as successful as they can be.
And so, it wasn't so much how I delegate to them.
I like people to pick their own tasks.
You know, I don't value ideas a lot; I think it's all in the execution.
I think ideas are cheap.
But if people have the ideas, you know, if they think it was their own idea, even if everybody
(01:44:08):
else had it too, let them run with it, because, you know, let
them prove that it works, and, you know, that motivates people.
And, you know, I guess where I would step in is, do I need to do something that that
person's going to have a hard time with, that's going to enable that person to be more successful?
(01:44:31):
Right?
So that might be something where, you know, their expertise isn't quite going to allow
that, or it's going to be too big of a learning curve, or something.
And I just kind of look for those kinds of opportunities, you know, and then
take on those things. I don't know if that makes sense.
(01:44:51):
It makes sense.
Yeah.
I think it kind of makes sense to be able to help enable people.
And I like what you said about not just delegating, but having people kind of
figure out what's important to work on, or maybe you can even help them
help themselves figure out what's important to work on.
(01:45:12):
But, yeah, I like that.
Not just, oh, you do this, you do this, you do this.
And I would imagine that probably takes a lot more mental bandwidth, to figure out
exactly what everybody's doing and what they're each going to add, so that they're doing
the stuff that fits them.
It makes sense, yeah.
I'll mention one thing here, and other people might experience this too.
I guess I'd be interested to hear from other people on this.
We've entered an era of remote work.
(01:45:35):
And, in fact, my company is kind of extreme, because after the pandemic, me and my boss
are the only two that ever come into the office regularly.
I mean, I'm in the office by myself all the time.
Everybody else just continued working from home, which is kind of a lonely state.
(01:45:55):
But I do find that, you know, like what I mentioned, I find it workable in person,
and I have a hard time remote.
I really don't know how to manage people remotely.
I mean, I have not figured it out, because of that same thing of being able to look at, you
know, where people are and, you know, what's going to allow them to be successful and,
(01:46:17):
you know, all that stuff.
It just seems like when you're remote, you almost have to delegate.
You have to really kind of micromanage the task specification.
And, so, I don't know.
To me, the whole remote work thing has just been really tough.
And I have not really figured that one out.
So I don't know.
(01:46:37):
That's kind of an interesting topic area.
Yeah.
So remote is interesting.
I feel like, when it comes to remote work, I think it can work better
when everybody is remote. I think this whole hybrid thing is, in a certain sense,
kind of the worst of both worlds. Because I think it makes sense, like,
(01:46:57):
hey, if everybody's in the office, you can kind of get that general
chit-chat and figure out what's going on, kind of figure out what people
are blocked on.
Like, for example, if you walk by somebody's desk and you see, oh, this is
clearly not an efficient way of solving this, or it seems like they're really
stuck, you can kind of catch that.
(01:47:19):
But when it's remote, if somebody's struggling for two days, you may not notice,
right?
And I think, at least with remote work, something that I've seen
is that you have to kind of almost force those ad
hoc conversations a little bit more. Like, I
(01:47:41):
don't really like meetings, because I feel like, you know, they can take away from
the day, but I feel like, in a certain sense, when everybody's remote, it becomes
a little bit more important to have that regular update, to say, hey, like,
you know, this is what I'm working on.
Maybe this is what I'm stuck on.
And then even just a little time for dividing up the work, and
a little bit of time for fun in the meetings too, so that you can sort of
(01:48:04):
figure out, like, maybe we'll start something, stuff like that.
So I don't know.
That's just something that I've noticed, because, I mean, I haven't worked
as long as you, but I've worked at a few different companies where it's been
fully in person, fully remote, and then hybrid.
And, I mean, I think hybrid can work if we all just come into the office on the same
(01:48:25):
day. But yeah, I was going to say, if you're there just alone in the office,
you might as well be remote.
Yeah, it's basically remote at that point.
Yeah.
Yeah.
For me, though, I like the transition.
I like to keep my work life and home life separate.
Right.
Yeah.
And I do that for that reason too.
For sure.
For sure.
(01:48:45):
Yeah.
It makes sense.
But, yeah, I don't know.
I think we've covered a lot of ground.
I don't know, Shashank, if you had any other questions you wanted to ask about.
No, I think we went through a lot: what you're doing now, where you came from.
Maybe one other thing I was curious about, and feel free to pass on this question: you know, Lumina has been, I think,
(01:49:08):
I assume, a private company for like 30-plus years, and you've been there for 20-plus
years.
Are there any plans to go public? The one analogy I have in my mind, and this
may be incorrect, is Palantir, which is doing similar things in terms of
helping large organizations and government entities make decisions about complex things. They
(01:49:31):
went public a while back, and I think they're doing pretty well.
Does Lumina have any plans to go public, and are you hoping for that?
Yeah.
Well, I have to say, I think going public is not anything in the near future for us.
But, you know, there might be some structural changes coming up
(01:49:53):
in, you know, in some ways, but I can't really go into that.
So, yeah.
Of course.
Yeah.
Um, yeah.
And I guess, one last thing before we kind of wrap it up: I want to just kind
of turn it over to you. Do you have any questions for us, or is there anything
that maybe you want to mention? Anything, anything at all; you
(01:50:15):
know, the floor is yours.
Oh, well, okay.
Well, first of all, I thought today was really fun.
And, uh, yeah.
So, thanks for having me.
You know, I come to the generative AI meetup on Thursdays pretty regularly.
I've been doing that the whole time.
And I've gotten a lot out of it.
And, you know, we have a small group at Panera and the conversation goes all over the
(01:50:40):
place.
But, you know, for listeners who are in Silicon Valley, you know,
stop by and say hi; let us know what you're doing.
You know, I've noticed a bit of a difference. At the beginning,
the majority of people weren't even in AI and were coming to learn what the stuff was.
(01:51:01):
They're curious.
A lot of the same people are still coming, but a lot of them actually have projects going
on in the area now.
You know, almost everybody has something interesting they're working on.
And, you know, I've learned a lot of stuff from there that I would not have
learned even from listening to podcasts or whatever, or reading papers.
(01:51:22):
I read lots of papers.
So, you know, it's interesting.
So, yeah.
And thank you, too, for organizing that.
It's not easy to keep that going, like you have.
As I said, actually, I don't know if I mentioned this, but, you know, I talked about belonging
to a homebrew computer club when I was a kid.
(01:51:44):
It was a lot like this group.
And, you know, I got my first programming job in eighth grade because of a connection
there.
And then I got another job, my fourth job, you know, not too long after that.
That one lasted for six years, and that's where I really found my mentor, who
(01:52:06):
had a startup company that I worked for for six years.
That's really where I learned a lot about programming.
And how old were you at that stage?
That was also because of the group, the connection there, too.
And the first job actually was somebody who was in the group who hired
(01:52:28):
me.
And the other one was just like, hey, I know somebody, you know, and it was
a connection that way.
So, you know, anyway, there's a lot of stuff that happens, and at the time,
you don't really realize it, but you look back and you say, hey,
that was important.
And I'm sure it's like that for a lot of people who attend this group,
you know.
(01:52:48):
So, well, I appreciate that.
And, you know, I think it wasn't my idea to create the group.
It was Shashank's idea.
He noticed that it was just a real hassle to drive into the city,
like to go to San Francisco every day, or, you know, for all the different
meetup groups, and we wanted something in the South
Bay.
(01:53:08):
I think Shashank saw that there was kind of a desire to talk about
this new, cool generative AI stuff.
And so I'm speaking for Shashank.
I mean, it's a team effort.
I think Mark is really the pillar who has stood here and built a community around
him, too.
And he's the one who has a great voice and great charisma and the ability to
(01:53:31):
socialize and bring people together and create an environment that is very accommodating
and inviting.
And I did think about some similar parallels with the Homebrew Club that you mentioned,
especially the idea that you all decided to pick a project and work on it
together.
That's something that some of the other group members have expressed too, and I think,
(01:53:52):
I feel like we should try to do that.
Maybe put up a poll, have a request for ideas, and see which ones people want to
work on, maybe one or multiple projects.
Yeah, yeah.
I think that's a great idea, and, you know, there's a lot of work that we can do to make our community
(01:54:13):
better.
So, I think that's a great thing.
Anyway, Lonnie, this was fascinating, just getting to talk to you; you really are a wealth of knowledge.
(01:54:36):
And again, just thank you.
And by the way, just one more reminder: don't forget about the presentation
that you're going to be giving.
We'll be posting all of the details in the description.
So, yeah, I hope you all are able to go watch the full talk.
Yeah.
Check out the risk awareness week on October 10th and you get to sign up for three talks
(01:54:59):
for free.
So check it out.
We'll post a link in the description.
Yeah.
I hope you guys attend.
I hope to see you there.
And once again, thank you guys.
This was great.
This was a pleasure.
Thank you so much.
Bye.
Thank you so much.