Episode Transcript
Available transcripts are automatically generated. Complete accuracy is not guaranteed.
Speaker 1 (00:00):
Sahil, welcome to Light Talk.
Thank you, Martin.
Thank you for having me. Listen.
Before we start, I think it's probably good to do a full disclosure of our relationship, because there's a very good reason why you and I are talking.
We know each other for a little while, and we have recently embarked on the creation of an AI course for lighting designers
(00:21):
, and that's why you and I have met and have talked quite a lot over the last couple of months.
So I want to make sure that this is disclosed before we continue.
But we'll discuss the course later on during the discussion.
So, to first give a bit of context, let's just give the audience a bit of an idea of who Sahil Tanvir is.
(00:43):
You have an architectural background, but you have specialized in AI over the last few years.
Tell us a little bit about you and your company and what you do in your daily life.
Speaker 2 (00:53):
Well, currently all I do is AI.
So we are an architecture firm.
We are based in Dharbar.
Dharbar is a small place between Pune and Bangalore in the south of India, and we've been practicing architecture and interior design for about 11 years.
(01:15):
It's our 12th year, we're going to be 12 this year, and we focus mainly on residential, commercial and hospitality.
My wife and partner is an interior designer, so we do a lot of interiors for hospitality, and for about three years we've been into AI.
It all started when a friend of mine showed me ChatGPT on
(01:39):
his phone and I got hooked on it.
I had met the guy after four years, and you won't believe that I spent the next 40 minutes on his phone with ChatGPT instead of, you know, speaking with him.
And that was somewhere around three to four months after ChatGPT had actually come out.
(02:00):
And that was my first interaction with any kind of generative AI, or any kind of chatbot which was creating text and answering in a very human way, and that piqued my
(02:20):
interest, really.
You know, when I came back to the studio, I knew that this was something that we really needed to be open to, and we needed to explore what was underneath.
And then this whole exploration started.
We started exploring first with chatbots, then we started exploring image generators.
When we started it was Playground AI, and then we hit Midjourney, and that has been one
(02:45):
of the core tools that we use in our studio, along with ChatGPT.
Then we have gone on to explore literally every application that is there, anything that comes out, any workflows that get created for some other field, and we try to see how that can
(03:07):
be implemented into architecture and interior design.
And we developed an applied research vertical, and that is how RBDS AI Lab took birth.
RBDS basically stands for Red Brick Design Studio, and that's the studio that we run, which has architecture and interiors.
And now we are doing RBDS AI Lab.
(03:29):
And there's a third vertical which we have started, which is also an educational platform called AIFA.
That is AI for Architects, and we've been speaking and helping people, creating workshops, creating courses, like the one we're doing together, which is specifically for lighting design.
We are also doing this for other niche fields within architecture
(03:50):
and design, as well as a broader perspective of architecture and design, focusing on foundational knowledge and not something superficial, where, you know, you'll create an image, you'll impress a client, make some money, and that's about it.
So that's not the approach that we take.
(04:10):
We have a very structured approach: we want people to understand what AI is and how you're going to live with it, because that's really going to happen.
And that's a little bit about me, yeah.
Speaker 1 (04:22):
So that's more than a little bit.
But you mentioned how you first got in touch with AI. Does it mean that you have let go of architecture? Because you started out as an architect, right, a long time ago.
(04:43):
That was your first love, I guess, until you got in touch with AI.
Does the transition into AI mean that you are less busy with architecture, or does it mean that you're actually more busy with architecture?
Speaker 2 (05:02):
Very interesting question.
So our architecture and interior firm creates a very helpful sandbox for us to test the processes that we explore within the lab, and it gives us an opportunity to see whether clients respond the way that we want them to.
(05:24):
We try out the workflows in smaller projects to test those things, to see whether AI can actually be used in certain client-facing things, certain things within the studio, etc.
So, honestly, what I used to do in architecture when I was not doing AI is less right now.
(05:46):
It's more to do with research, more to do with thinking, and more to do with the application of that research within the project.
So the hardcore architectural work which happens is still being done by the studio.
There are a lot of capable people around me.
In fact, my wife takes over a lot of the operations of
(06:08):
the project, and I have dedicated myself towards exploring the design aspect and the architectural thinking aspect, coming up with ideas where certain workflows can work and change the way that we've been designing, rather than going from a sketch to a 3D model to a
(06:34):
render, or something like that.
So we change our whole approach, and we go from research to exploring the research and doing a lot of back and forth with chatbots like ChatGPT.
So our whole outlook towards design is where I'm focused more.
And yet we have projects where we get to test these things, and
(06:58):
the firm works in parallel with the lab, and that's how we're doing it.
And in fact, what you mentioned about me: my first love was architecture.
It so happened that my dad actually wanted me to be an engineer, which is typical of
(07:20):
south India.
We produce the maximum number of engineers in the world, I think, and especially software engineers.
What happened was I discovered architecture through the need of not wanting to be an engineer.
So I was grappling with the fact that I did
(07:43):
not want to be an engineer, so I was trying to find something else, and that's how I stumbled upon architecture, and I realized that, okay, this has got a lot of the things which I want to do.
I wanted to be a writer, I wanted to do arts, and then I discovered architecture, and it has led me to all of the things which I wanted to do, and
(08:06):
I'm still an architect professionally, yet I'm able to do everything that I wanted to do.
So that's something which has happened.
Speaker 1 (08:17):
How life sometimes, you know, sort of chooses for you. Because I never set out to become a lighting designer.
Even in my studies, I did industrial design.
It was also by near accident, or coincidence, that I did industrial design, because I actually wanted to be an astronaut or something like that, and so I went to look at an
(08:42):
engineering study about, you know, building planes and things like that.
And when I went there for exploration, you know, you have those open study days and all that, I came across this study of industrial design.
I'm somebody who likes creating and doing things, so I went to an open day for that as well, and I really liked it, so I did
(09:03):
that, and then when I finished I went to apply for a job in consumer product development or something related to my studies.
And then they asked me about lighting design.
I said, what's lighting design?
I had no idea.
But when I went to that department and saw all these people doing those beautiful jobs and projects, I was taken away.
And yeah, I
(09:25):
said yes and never looked back.
So sometimes it's like the universe guides you where you want to go, and it looks like you also stumbled on this.
But we mentioned technology, and technology obviously has an intersection with architecture, certainly now
(09:47):
when we talk about, you know, digital twins and BIM technology, where the software developments go quite fast in the whole design process.
I guess that there, too, you will get very interesting
(10:08):
intersections of the AI opportunities as well as the software development that's happening in the design processes.
Speaker 2 (10:18):
Yes, absolutely. Well, what I feel about generative AI is that there are two aspects.
One is that about four or five years ago, or maybe six or seven years ago, there was no one on this earth talking with a
(10:46):
computer, which has completely changed now.
It has become more of an exploration with a computer.
When I talk about a computer, I'm talking about ChatGPT or any of the chatbots. When you talk to
(11:08):
ChatGPT, it doesn't actually give you an output which is actionable or usable, a direct output like you would get in AutoCAD software or something like that.
It gives you more opportunity to explore different lines, it gives you more options to explore.
So it's kind of jogging your mind.
So not a single human being probably thought that they would
(11:31):
be able to communicate with a machine in this way, and brainstorm with a machine in this way.
The second thing is that generative AI has sort of removed a barrier.
Suppose my wife is a doctor and I'm an architect.
(11:52):
I would come back home and I would tell my wife that, look, I discovered this fantastic new way to create an AutoCAD drawing with this particular tool, you know, I chamfered this, etc.
And she would say, I'm not interested in that, right? I'm not interested in what kind of software you're using to, you know, further your work in architecture.
(12:12):
But now what's happening is that if an architect comes home and talks to his wife, who's a doctor, and says, look, ChatGPT gave me an idea about a museum or a hospital building or something like that, and that's what I was conversing with ChatGPT about, she's going to relate to it.
(12:33):
She's going to say, yeah, you know, I used ChatGPT today and I got an idea for a running stage, and it told me this, so and so, right.
So that whole barrier: it's become universal.
So, whether it is architecture or whether it is medicine or healthcare, it's literally a single tool that everybody is
(12:54):
using.
They've got more than 300 million weekly users, this is ChatGPT.
So it's kind of changed the whole dynamic, where it has unified everything.
So if you're thinking of BIM, if you're thinking of computation, if you're thinking of plain and simple design with sketches or
(13:18):
SketchUp or V-Ray or something like that, or, in fact, even Pinterest, if you're thinking of any kind of design, there is a layer of generative AI which has been added to it.
And there are software companies, of course, who are embracing it. They've seen in advance the potential that it has, and a lot
(13:38):
of them have come up with enhancements to their tools.
For example, D5 Render has just recently released a lot of AI features in their application, right?
So when we're seeing technology, it's kind of a
(13:58):
layer which is like a blanket over everything.
So it has literally changed the way that we are communicating with each other across disciplines.
Speaker 1 (14:12):
Yeah.
So you mentioned AI tools.
I think a lot of people now, specifically at this time, some of them are scared, apprehensive, or excited.
There are so many tools out there that it's hard for people to figure out what sort of tools would be relevant to me in my
(14:35):
job and in my workflows.
You're an architect, and architectural design and lighting design are, in a way, quite close.
We follow the same sort of design processes and design stages.
Can you give me a bit of insight into the sort of tools that you use which make sense in the design profession?
Speaker 2 (14:59):
Yeah, so we do have a core, and then we've kind of made circles.
So within the lab there is a huge circle, because we are exploring everything.
But when it comes to architectural design, where it is client-facing, or an actual project where we're using
(15:20):
certain tools, it comes down to a very basic core.
So we have ChatGPT, which is our core, and Midjourney, which forms the visual brainstorming core, then we have simulation; you could say visual design exploration, brainstorming.
(15:41):
I like to call it visual brainstorming, because Midjourney is one of those tools which allows you to imagine anything and everything.
It's like you can take two completely unrelated concepts, put them together and make a building out of it, and Midjourney will be able to do that.
The more niche-oriented you become, like
(16:03):
with tools such as LookX, which is built for architects and trained on architectural data, what happens there is that if you have two completely unrelated concepts, you may not be able to get a visual out of it in the way that you'll get it with an
(16:24):
art generator. Because an art generator will create an image no matter what, whether it is buildable or not buildable, whether it is architectural or not architectural, it will still create an image which combines those two concepts.
But when you become more niche-oriented, it's a little difficult.
In my personal opinion, I feel that as an architect you need
(16:44):
to know a lot of things, rather than focus only on building technology.
It's actually a very simple psychological thing, where if you keep on looking at buildings to help you design buildings, after a while it's just going to become monotonous and you'll lose the whole thing.
You know, all your buildings are going to start to
(17:06):
look the same, so you need inspiration from somewhere else altogether, either books or films.
So now it is Midjourney for us, because we like to visually see what we are thinking.
Like, if there is a building which is semi-transparent, maybe water which is held together with, you know, some sort of a fantastical
(17:28):
idea, and I just want to see it, how it looks, because it's there in my head and I just want to see it on paper, or on the screen.
I can do that with Midjourney, and that's what a lot of us do, because then you can brainstorm smaller elements and then keep on going to bigger elements in an architecture or interior project.
(17:51):
We also have simulation tools that we use.
One of the best ones that I have used up until now is Forma.
Forma used to be Spacemaker earlier.
It's an environmental simulation tool, Autodesk now owns the company, and it's one of the best that I have
(18:11):
come across.
We also use Ladybug within Rhino and Grasshopper for environmental simulations, calculations, etc.
But Forma has made it really, really easy.
You know, you don't really need to know computation at all if you're using Forma.
It's a straightforward application where you enter a
(18:33):
location, and then you get a lot of the stuff out of a conceptual block model.
You can simulate tons of things.
And finally, CAD is, of course, also there, because of the limitations that we have in working with other people, that is, consultants, contractors, anything that goes
(18:57):
on to the site.
We still have a labor force on site which has little knowledge of the drawings.
They're not able to read them, etc.
So it has to be simplified.
And even consultants: not everybody that we work with is on board to use, you know, BIM-related tools or Revit
(19:18):
or anything like that, so they're not really accustomed to it.
So we still have to use CAD.
And apart from that, we have a communication tool that we use quite a lot, which is Canva, and I don't know if I've said this enough, but Canva is like an equal to CAD in our studio.
(19:39):
Almost every presentation, every drawing, the interior drawings, interior layouts, lighting layouts which have to be presented, are all done in Canva.
And they've also come out with certain AI features in the app
(20:05):
which now help us do this even faster, and all of this throughout the process.
In our studio we have ChatGPT, which works like a thing which puts everything together.
So any output that we're getting out of a simulation tool goes back into ChatGPT, and it is again brainstormed upon, and inferences are drawn, in a way where, if there is an
(20:28):
intern, or a junior architect or a designer who does not have enough experience with simulation tools, they can still get inference out of it, because they can upload those things to ChatGPT and get a sense of what it means.
So if it's saying that natural daylight is low over
(20:52):
here, you can actually understand: what should I do next?
How do I interpret this in normal, standard language, like normal human language?
So ChatGPT puts all of that together.
These are the core tools which we use when it is architecture practice, when it is a live project, when it is client-facing stuff.
(21:12):
Apart from that, if it is the lab, then we have tons of other tools which we use.
Currently we are exploring MCPs, that is, the Model Context Protocol, which works with Claude and other language models as a client, and it controls software like Blender and Rhino.
(21:34):
So these are things which we are exploring currently, where you're able to create a 3D model using natural language and you don't really need to code anything.
You don't even need to copy-paste code now, so you can instruct Claude to make a skyscraper, a
(21:54):
twisting skyscraper or anything like that, and actually create a model within Rhino.
It will create a scene within Blender.
You can even set cameras, et cetera.
So these are things which we are exploring currently, yeah.
Speaker 1 (22:10):
You mentioned a lot there.
You mentioned a lot of tools.
You mentioned communication also, and I want to just focus on that a little bit, because architects need to focus also on the communication between the project team members, whether it's the interior designer or whether it's the lighting designer. And obviously I think our world is slightly
(22:32):
simpler than the architectural world, where you have far more complex models to deal with, I would imagine. But still, our lighting layers and our lighting structures need to somehow be integrated with architecture, and that will also be part of the course that we're developing together.
But can you give a little bit of insight on the lighting element
(22:55):
of all this?
Because what you do within the architectural design stage is very similar, I guess, to what we would do within lighting.
But I think lighting probably has a couple of other elements that might be slightly different from architecture, and
(23:30):
I think it's good to understand how the communication goes between the architect and the lighting designer.
This is what we would like to figure out together in our course, but I think it's important to understand what lighting designers would need to focus on.
Speaker 2 (23:37):
Yeah, so there are actually two things.
The first thing is, what I have seen is that there are a lot of technical calculations involved in lighting design, which are quite important, according to me.
Like, there is photometric analysis, which you need to do to understand where there is extra exposure, where there's glare, etc.
(23:57):
Now, this depends on the space that you're creating.
If it's a museum, if it's an art gallery, it matters a lot.
If it's a living space, if it's high up in an apartment, it differs.
And then if it's a bungalow, a large open-plan kind of thing, it differs again.
So all of this, I feel, impacts what an interior designer might
(24:20):
do, because it goes hand in hand.
Like, the kind of paint which is selected and the fabric which is selected depend on how much light is going to bounce off of them, and what's going to affect the person when they move into the space, and what's the experience
(24:42):
of it all.
Right. Now, where AI can come in is that the calculations can be done with chatbots.
Not just calculations, in fact; there can also be reasoning done with chatbots.
There are so many reasoning models now.
We have Chinese options also.
(25:02):
We've got DeepSeek, which has a reasoning model, and there are a couple of others which have come out with reasoning models as well.
So, if you already know that there is a direction you want to take, and you want to eliminate other directions by saying, okay, I won't go that way, that option is not for me because of this reason,
(25:26):
that's something that you can do with chatbots, and it can be done really fast. Because this used to take quite a lot of time, and a lot of designers, me included, had a big problem: you never have a person who is on your wavelength.
The idea of brainstorming with somebody is more to do with
(25:46):
first explaining the whole thing to them, and waiting for them to understand, and then giving us their valuable feedback.
Now, whether that is valuable or not is a different thing altogether.
And so here ChatGPT, or Gemini or Claude, and in fact DeepSeek also, where there is a reasoning model involved,
(26:08):
helps you eliminate the options where you don't want to go first.
It also helps you see, if you have two options where both of them are right,
it gives you reasoning on whether you should go with option A or option B, and if you do go with A, this is
(26:30):
what you might end up with, and the same for the other option.
So these are calculations which can be done with ChatGPT or with any of the language models.
It can also help with the billof quantities.
It can match the bill ofquantities that you've created,
(26:53):
or specifications, the lightingspecifications that I think you
have a document that you createright, where the whole
specifications are listed outand the same are correlated with
the code in a drawing up, todate or not?
(27:18):
Is there any discrepancy?
Or if you want to improve uponthe efficiency of the drawing,
you can check for redundancy ifyou've mentioned the same thing
twice.
So it's a tedious job which can be made easy with language models.
And if we talk about slightly more complicated systems, we can also do custom workflows, where a vision model is reading the drawing
(27:40):
and giving its inference, that is taken by another agent, which deciphers the whole thing and then says, okay, now I see that this light is mentioned twice, but there's only one quantity, so you may need to check this, right?
So these are the answers that a third model can actually give
(28:05):
you after two of the models have done their job.
Now, this is like an agentic kind of workflow, where there are small agents which are doing different jobs.
This can be set up without any internet.
It's not required to have internet for this; it can be done locally on your PC, etc.
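The three-agent hand-off described here can be sketched in plain Python. In a real pipeline the first step would be a vision model reading the drawing and the last a language model phrasing the findings; below, deterministic functions stand in for the models, and the fixture codes and quantities are invented for illustration:

```python
import re
from collections import Counter

# Agent 1 (stand-in for a vision model): extract fixture codes
# such as "W01" or "S02" from the drawing's annotation text.
def read_drawing(drawing_text: str) -> Counter:
    return Counter(re.findall(r"\b[A-Z]\d{2}\b", drawing_text))

# Agent 2: parse the bill of quantities into code -> quantity.
def read_boq(boq_rows):
    return {code: qty for code, qty in boq_rows}

# Agent 3: compare the two inferences and report discrepancies.
def check(drawing_counts: Counter, boq: dict) -> list:
    issues = []
    for code, seen in drawing_counts.items():
        listed = boq.get(code, 0)
        if listed != seen:
            issues.append(f"{code}: drawing shows {seen}, BoQ lists {listed}")
    for code in boq:
        if code not in drawing_counts:
            issues.append(f"{code}: in BoQ but not found on drawing")
    return issues

# Invented annotations and BoQ rows:
drawing = "Living room: W01 W01 S02. Corridor: S02 S02."
boq = read_boq([("W01", 2), ("S02", 2), ("P05", 1)])
print(check(read_drawing(drawing), boq))
```

The point of the structure is exactly what is described above: each agent does one narrow job, and only the final one turns the comparison into a human-readable "you may need to check this" message.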
This is actually just one aspect of it.
Now, the other aspect would be like how, yesterday, we were
(28:29):
having a conversation where I was sending you these lighting images, right?
So there is an aspect where there is a painting.
Sometimes an image generation model doesn't understand that there needs to be a light at the painting.
So we need to guide the model that way and say, okay, you need to light the painting.
(28:49):
That's the most logical thing to do, right?
So we also have image generation models which can simulate a space and simulate lighting styles and the ways that it can be lit, and you can direct that.
You can have wall washers, you can have spots, you can have any of those things, and you can iterate it in the same space and
(29:12):
see whether it looks and feels good in an image, so that you can move ahead with that.
And, of course, there is more that AI can do: it can also create walkthroughs.
It can create videos where it goes from a day scene to a night scene.
That's been one of the most difficult things for any
(29:35):
visualizer to do, right?
I mean, it takes a lot.
So now that becomes easier when the layer of AI is put on top of it.
And, of course, these are a few of the things where AI can actually help, and as things develop, as I said, there is the MCP protocol, the servers
(29:58):
that we are exploring right now.
Speaker 1 (30:01):
Can you clarify that in less technical words?
Speaker 2 (30:08):
It's just a Model Context Protocol.
It gives the model context from a software.
So, lighting designers use DIALux, if I'm not wrong.
So there is a possibility: if DIALux allows scripting or coding which can control the software, which can control the simulation, then there is an MCP
(30:32):
server which can be created and connected to a client like Claude or ChatGPT, and you can control this through natural language.
And in theory it is actually very easy to do.
It just takes a little bit of coding knowledge and it can be set up, because the infrastructure, the frameworks, are already open-sourced by
(30:55):
most of these companies.
Anthropic, who are the ones who started MCP, I think, made all the frameworks available to us.
So in theory it is pretty easy to do.
It just needs a little coding knowledge, mostly, because I've seen people who have no coding experience
(31:16):
actually create MCP servers and control software.
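To make the pattern concrete: an MCP server is essentially a catalogue of named "tools" with typed parameters that a client like Claude can call. The sketch below fakes that pattern in plain Python; the tool name, its parameters, and the `scene` dict are all hypothetical, and a real server would use an MCP SDK and drive the host software (Blender, Rhino, or, if it exposed scripting, DIALux) rather than a dictionary:

```python
# Minimal sketch of the MCP tool-call pattern (hypothetical, no real SDK).
# A server registers tools; the client (the language model) chooses a tool
# name and JSON-like arguments; the server executes it against the software.

scene = {}  # stand-in for the host application's document/model

def add_luminaire(kind: str, target: str) -> str:
    """Hypothetical tool: place a luminaire aimed at a scene element."""
    scene.setdefault(target, []).append(kind)
    return f"added {kind} aimed at {target}"

TOOLS = {"add_luminaire": add_luminaire}  # the server's tool registry

def handle_call(tool_name: str, arguments: dict) -> str:
    """What the server does when the model requests a tool call."""
    return TOOLS[tool_name](**arguments)

# Given the instruction "light the painting", the model might emit
# this structured call, which the server executes:
result = handle_call("add_luminaire", {"kind": "spot", "target": "painting"})
print(result)  # added spot aimed at painting
```

The coding knowledge Sahil mentions goes into writing tools like `add_luminaire` against the host software's API; the natural-language side is handled entirely by the client model.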
Speaker 1 (31:20):
Yeah. Everything is easy once you know it, I guess.
But what I understand from you is that you can now direct an AI tool to actually create a render for you, or a representation, maybe even from a sketch or from a drawing.
Is that feasible?
Because a lot of work goes into the creation of the visual for
(31:45):
a client, imagery to provide the client with a visual impression of what they're going to get.
And right now, often we have to hand-create a render.
But, from what I understand, where we're moving to, or already are at, is that we can create those images by prompting
(32:13):
properly, and then also reasoning with the AI tool about what is right and what is wrong.
You can challenge, right? You can challenge whether the design is okay. Because, like you said before, and then we saw in the imagery, you say we need a light for the painting, and then it puts the light next to the painting, because it's near the
(32:33):
painting.
But we need to really be quite precise in terms of what we want it to do.
So, you know, we're all trying to ease our workload and become more efficient.
A lot of tasks are sometimes a bit boring and time-consuming,
(32:53):
which AI will help us with.
But the question in all this, for me, is: how do we remain in charge? How do we control the process and still be the creators-in-chief, rather than the AI tool taking over the world and we just sort of helplessly follow what it does?
Speaker 2 (33:15):
Right, it's very interesting.
Actually, we just submitted a research paper which talks about AI not as a tool, but as a co-creator or a collaborator.
(33:40):
It's quite something that I have experienced personally, because my day kind of starts and ends with ChatGPT.
It's something which I talk to way more than I actually talk to my wife.
It is a fact.
Now, it's addictive.
Yeah, well, I would say it is a little addictive.
(34:01):
Yes, it is dangerous.
I mean, it is dangerous for somebody who is not aware of what they're doing, right?
So if you assign certain personas to ChatGPT, it is very easy to get addicted to it, surely.
But it's the same.
Like, you know, there are nicotine delivery systems which
(34:23):
were created to help people quit smoking, but then kids would get addicted to those systems.
I'm talking about Juul and the whole big fiasco which happened in the US, right?
So kids kind of got addicted to that.
So that risk is always there.
But as a person who is aware of what
(34:43):
you're doing and what you're doing with ChatGPT, I don't think that you will get addicted.
Mostly, it's a lot of satisfaction that you get when you are conversing with something and it doesn't get tired.
It doesn't tell you to, you know, stop, right now, I'm bored of this.
It doesn't say, I woke up on the wrong side of the bed,
(35:06):
I didn't have my coffee today, so I'm not going to tell you the answer.
Speaker 1 (35:10):
So it's really... but it's a bit addictive, because you put in a prompt and you get a very nice result.
Oh wow, that works really well, I'm trying this again.
So in a way I find it a little bit addictive.
I'm starting to get into it now.
But yeah, it's also a matter of, how would I say it,
(35:32):
controlling it and manipulating it in a proper, positive way, to stay in control of the outcomes.
Speaker 2 (35:42):
I kind of.
Also, when I started, right atthe beginning, I was also kind
of hooked like this.
I used to spend at least aboutseven to eight hours on ChatGPT
in a day and, yeah, quite a lotof uh things.
I used to have it on my phone,on the computer, on the laptop,
on the ipad, everywhere it wasall there and everything was
(36:02):
connected so I could, you know,uh, continue to talk wherever I
was, whatever I was doing.
So, initially it was like that.
Now, mainly what I do with ChatGPT is try to jog my
mind, try to get my questions right.
Instead of trying to find answers, I'm actually looking
(36:24):
for questions.
So I'm kind of talking to ChatGPT about a concept or an
exploration, what it could be in all different directions,
some things which I may not have thought of.
You know, what if I put in a variable like this?
What's going to happen now?
And since, obviously, OpenAI has got this whole thing where
(36:47):
it remembers all your chats, it kind of knows what you know, it
has these custom instructions, it kind of responds the way that
you want it to.
So it's trained in a way not to give me answers which
are direct, short and usable.
My ChatGPT is trained that way.
(37:08):
It always gives me, you know, what my wife calls
a thesis document when I ask a question.
ChatGPT returns at least about three or four pages of text
in which it explains everything, and that's what I
have trained it for.
I don't want a single-line answer.
(37:30):
I don't want one paragraph which says that this is what
you need to do.
That's not what I'm looking for now.
What I'm looking for is to improve upon my
cognitive abilities, my knowledge.
What am I thinking, which direction am I taking, and why am
I doing all these things: that's what I want from ChatGPT.
I don't want answers from it.
(37:51):
I don't want it to actually improve my emails or anything.
In fact, it writes the emails for me now.
But then, that's a different thing.
So, coming to the question that you asked me:
I feel that we are kind of entering an era where creation,
(38:12):
design, needs to be looked at in a very different way.
Designing using or with ChatGPT or Midjourney or any of the AI
tools cannot be compared to the definition of
design that we have had up until now, so that you then say, no, if you've
(38:34):
used AI to generate content, it's not design.
You can't do that, because maybe for a couple of years you'll be
able to hold that argument.
But we are going to enter an era where our design is going to
start and end with an AI application.
It's going to be some sort of intelligence which is going to
(38:54):
help us design.
So there is a matter of co-authorship which is going to
come up.
Now, recently, I think the EU or the US courts declared
that copyright is as is: we are not going to change the
law, etc.
That's what I read, where there was a case which was pending
(39:15):
regarding the copyright of images generated by Midjourney
or any computer or AI application.
So they said that copyright is for human-generated content, and
we're not going to change the laws because of this right now,
because they are robust enough right now.
But I feel that as of now we may not need it, or we may not see
(39:37):
the need for it, but in the future there is going to be a
sure-shot need to change the definition of what design means.
You might have to bring in another definition, like put a
star there and say, okay, if you're designing with AI,
the definition is different, and designing in
itself, in the traditional way, is something else.
(39:59):
So I feel it's going to be more to do with co-authorship,
more to do with how you design with the AI, and we will not be
able to either retain the rights or give them away completely.
Speaker 1 (40:20):
We'll not be able to
create things in AI and then
claim them as our copyright?
No, because there is intelligence involved, right.
Speaker 2 (40:28):
So it's not exactly a
tool which is doing it.
It's not like I sketch something, then I draft that
sketch in AutoCAD and take a print of it, right?
That printout is a product of what I did; the tool only
amplified it into a presentable format.
But here it's not happening that way.
(40:49):
What the AI is doing is applying its intelligence and
making it better.
So to say that I used ChatGPT to come up with this, and
this is my work, is kind of dicey right now.
Speaker 1 (41:03):
Neither is it?
Speaker 2 (41:04):
correct to say that
ChatGPT did it and I kind of
just said, okay, I want a plan of this.
So it's neither way.
It needs to be a co-developed sort of thing.
Speaker 1 (41:16):
Right, but you just
mentioned that you can train
your model, right?
You can train your tool, ChatGPT or, I assume, Midjourney as
well, which means that you tell Midjourney or ChatGPT what to do.
(41:40):
So, in a way, you're still the creative founder of the
ultimate result, isn't it?
Speaker 2 (41:43):
Not exactly.
So suppose you have an intern in your studio.
After the intern works with you for a year, he or she goes
out to another firm.
They have these traces of style, of design, that they learned in
your studio, because it's you who taught them.
(42:04):
You taught them that this is the way that we do drawings, this
is the way that annotations are done, this is the way you talk
to a client, this is the way you talk to a contractor, etc.
It may not be the same at the other firm, wherever he or
she is going; they may do things differently, right?
So it's again a different style, but there are traces of this
style which remain in the intern.
(42:29):
It's kind of similar to what training an AI model is.
It's learning.
It's not exactly following instructions, it's simply
learning.
It means that even if I train ChatGPT, if I train Midjourney,
it is still going to apply its own intelligence.
(42:49):
In fact, Midjourney is a different thing altogether; we would
have to get into diffusion models and why they are
different from GPTs.
Let's focus on GPT for now.
So it will still use its intelligence, even after it
learns what you like, what kind of response you like,
what subject you are dealing with, etc.
(43:12):
Even if it learns that, it still applies its own knowledge
to make it much, much better, or better in its own sense.
Of course there's bias, which we have to consider, because it's
all built on our own data, so that's always there.
So in that sense, it's not 100% my work, neither is it 100% the
(43:38):
AI's work.
So there has to be some sort of combined definition that
we'll have to come up with.
Speaker 1 (43:45):
This opens an
interesting can of worms, because
obviously, you know, some companies have a non-compete
clause for their staff if they move on somewhere else.
You could argue and say, listen, it's not me, it's AI;
you know, I'm not competing with you.
You know what I mean?
This could be a whole new world in terms of
(44:10):
protecting your company and your design properties once AI has
evolved, because, well, that's public knowledge.
You can't tell me not to use AI, right?
Absolutely.
So I would imagine this could be quite a complex legal situation.
Speaker 2 (44:31):
There are actually
two things, in fact three things.
One is that, in a lot of countries, especially the West,
the US and the EU, a lot of people are after lawmakers
and policymakers to actually come up with robust laws
which can govern these
(44:53):
particular things: what does it mean if I use
ChatGPT and put my client's data into it?
What does it mean?
Am I in violation?
Speaker 1 (45:07):
of the client's privacy, right.
Speaker 2 (45:11):
So suppose a lawyer,
or a therapist, takes an answer
from his patient and then puts it
into ChatGPT, and that data is accessible by whoever is using
it to train their models or whatever.
So this argument is there, and people actually want the
policymakers to come up with something which can, you know,
guide them in this.
The second aspect is, if you remember, in the
(45:36):
workshop that we did, we had trained Midjourney with a
lighting design firm, a very well-known lighting design
firm, I don't want to take the name right now, and we kind of
trained the model to give us images of lighting
design in that style.
Now it's a very thin line.
(45:58):
The company in question here actually can't do anything about
it because, first of all, all of their images, all of their
projects, are publicly available.
Second of all, the most important thing is that style in
itself is not copyrightable, so anybody's style cannot be a
(46:20):
copyright of that person.
Now, the third aspect is that I'm sure you've heard of all
these Ghibli-style images which have come out because of
ChatGPT's image-generation model.
(46:42):
So, interestingly, Japan has a law which says that any
machine learning company can use copyrighted material for
their training, and this is the government's law in Japan.
So you can use it; there is no restriction.
Again, we are now in a place where we don't know what is
right and wrong.
If an architect as an
(47:05):
individual asks me: if somebody else, like my intern, has
generated an image, isn't it his work?
How do I take that image, etc.?
I would just say: follow your conscience, because AI
actually has nothing to do with your conscience.
If you are going to steal, you're going to steal; it
(47:26):
doesn't matter.
So it really doesn't matter whether it is AI-generated or,
you know, human-made or whatever it is.
If you are going to be dishonest, you are going to be
dishonest.
So what matters is that we police ourselves, as of now,
up until somebody comes
(47:48):
along and formulates a policy.
Speaker 1 (47:55):
I think I've spoken
to quite a number of people
already about this.
I think companies need to set
out regulations for how to use
AI within their own company.
I think it's really important to structure that properly, and
to make sure that it takes into consideration the company
(48:21):
privacy, the company intellectual property and all
that.
For sure, the international regulations and
standardizations between countries and globally may take
a while, but I think companies can already start implementing
these kinds of rules and regulations on how to use AI
with a conscience, like you say.
But then, you know, people are all different and may interpret
(48:45):
that in very different ways.
Speaker 2 (48:47):
So there are actually
two or three things which
companies can do.
One is that if you're using
ChatGPT, you just go to your settings and turn off the option
of having your data used for training.
So you can turn that off.
It's pretty simple to do: you just uncheck a checkbox.
That's about it.
Speaker 1 (49:07):
The other thing is that you can also...
That's in the paid versions, right?
Not the free versions.
Speaker 2 (49:13):
You can do it in the
free version also, any of the versions.
You can opt out of training in any of the versions.
To be honest, I have not opted out, because mostly I want my
data to be used for training.
Speaker 1 (49:25):
Yeah, but you're
training people, so it makes
sense for you.
Speaker 2 (49:29):
Yeah, and the other
thing that they can do is maybe
have a system which is built offline, a locally run
system: there are language models which you can
train and use offline on your system.
Use it locally so that you don't have the risk of your data
(49:53):
leaking anywhere, etc.
But then again, as of now, there are a lot of shortcomings
in that, so you can't really depend completely on it.
You need a system which accesses the internet, which
accesses up-to-date stuff that you need to design
or do whatever you're doing.
(50:14):
So there are these things which
the companies can try out.
And I think there was one news
article which I read about some
cop who put something into ChatGPT, got
the answer, and presented that during a hearing
(50:35):
or something.
And then the police department, I think
this was in Philadelphia or something, put
out an internal notice, which got leaked or
something, which said that we need to define what
(50:56):
ChatGPT can be used for and where we can't use it.
You can't let ChatGPT decide
the fate of a person that you're
going to either arrest or implement some
kind of rule over.
You can't let ChatGPT decide that for you.
(51:17):
So these are some things which are going on in the world, where
everybody is grappling and trying to understand: where do we
stand, what do we do, how much do we use this thing, et
cetera.
Speaker 1 (51:28):
So, yeah, let's
circle back to our course that
we are going to launch very soon,
or at least open up the wait list
for people to subscribe.
We probably will have this done
by the middle of the year, but
(51:49):
we need to start the
subscriptions and, you know, get people
to register for it.
Now, I told a little bit at the
beginning how I got super
enthusiastic and I thought we need to do something for
lighting designers.
Tell us a little bit about what
this lighting design course is
going to provide for our subscribers and registered users,
(52:13):
because I think it's a very important step.
A lot of people are thinking about how they can use AI for
lighting design, and we're going to give them the tools to do
that, so it's probably good if you can give them a little rundown
of what this course is going to provide.
Speaker 2 (52:34):
Absolutely.
So, first and foremost, the
course is called AI Fundamentals
for Lighting Designers.
Now, this is a foundational
course, and the way we approach
anything in teaching, in our scenario, is not to teach
something which is superficial.
So we go down to foundations.
(52:57):
We want you to learn in the
right way, and that is the
reason why this foundational course is going to give you a
clear understanding of how things work with AI.
What do you mean when you say "I use AI"?
It actually doesn't mean just that, right?
AI is a very big, large umbrella term which has
so many things within it:
(53:17):
machine learning, deep learning, etc.
So you get to know what these things are.
You also get to know what the difference is between a GPT and
a diffusion model, and why I keep saying that a diffusion
model is not exactly intelligent.
(53:40):
It generates an image, but a GPT is the one where the
intelligence is placed, right?
So you get to know about these foundational aspects of AI.
We teach that, and then we connect it in a way to lighting
design in a broader sense, to architecture and interiors.
And also, there is one thing which I keep saying: when we
(54:00):
have a niche of lighting design and we focus on what you can do
within lighting design using AI, or integrating it into your
process, apart from that, it's quite important to have a world
view where you know that nowadays, or in fact at
(54:23):
any time, design is not a linear process.
It's more like you pick up data points from different places to
solve a problem.
So these different places should not be limited to a niche;
it needs to be as broad as possible.
And that is one of the aspects that we cover in the
course, where we don't limit ourselves only to a particular
(54:47):
niche; we also give you extended knowledge about what AI
applications will help you do as a lighting designer, because
there are multiple disciplines which a lighting designer works
in.
It's not just architecture and interior design; there's also
installation, there's also set design, there are so many
things which a lighting designer actually does, right?
(55:09):
So these are the things which
we cover in the foundational
course.
The course is built in such a way that you can do it
self-paced, on your own time.
You will get exercise books along with it, so you can
learn something, actually do it, and see whether it works for
(55:29):
you or not.
You can twist around the prompts, you can tweak the
things that you want to do, and then come
back to the course and continue it.
Apart from that, you also get tons of resources.
You get access to certain Discord servers where we will be
(55:50):
available, and you also get ebooks which tell
you how to use a particular tool: where do you go,
do you need to register,
do you really need to buy it at
all, or can you use the free version for a little while,
and then, you know, test it out and then use it, etc.
So we also have a few things where we connect to practice, where
(56:14):
lighting design as a practice can be, you know, overlaid with
AI, and how you can use it immediately, say for somebody
who has very general knowledge about AI but has never tried it.
So this particular course is going to bring you directly up to
today's time, and it's going to update
(56:38):
you in such a way that you will literally know what's going on
in the landscape of AI, within the design fraternity as well as
outside the design field, and also you will get an
idea of the tools which you can use immediately.
So that, okay, I need to focus on
this; I know that I've learned this, so I can use this in my
(57:00):
practice.
So this is kind of the course that we are developing and,
rightfully, as you said, I think by the
middle of the year is when we should be
launching, and we are going to get the brochure out
in a couple of days, I believe.
Speaker 1 (57:22):
Yeah, yeah,
absolutely.
I just want to do a little disclaimer in regards to myself:
I'm not the AI expert, you are.
I'm the lighting design expert.
Yeah, you're the core of the...
Speaker 2 (57:36):
You know, everything
that the AI is going to be based on
in this course is going to be what Martin is going to say.
So, obviously, I have, you know...
Speaker 1 (57:46):
You're basically my AI tool.
I'm the lighting designer, and
you're going to be my AI tool.
Now I'm looking forward to this
collaboration, Sahil.
Over the last few months we have really developed a very good
friendship and collaboration, and I'm really looking forward to
(58:07):
this coming to fruition very soon.
People should look out for the announcement in the coming days,
and hopefully I can say, from the introductory course that I did
with you, it was money really, really well spent, because it
made me so enthusiastic about everything that I'm pretty sure
that people will find this course super interesting and
(58:28):
very, very valuable.
So let's round this up: how does the ideal AI world look for
you in the future?
Because you have mentioned so many things, and it's maybe a
very difficult question to answer, but I would like to give
you the final word in terms of what AI in the ideal world would
(58:50):
look like, in the design world, I mean.
Speaker 2 (58:57):
Well, I think that
it's not really going to look
much different.
It's a part of evolution, and humans usually do not feel that
they are evolving.
It's only in retrospect that we know that we have evolved,
that we've changed from the 60s to now, etc.
I feel that it's more or less going to be the same.
All of our problems and discussions and everything are
(59:19):
just going to be the same.
It's just that the way we discuss these problems will be
different.
And if I take things beyond this and speak generally about the
world, the future world with AI, I don't think it's going to
look anything different from what it is right now.
(59:40):
You know, there was this thing which was very interesting.
I had this one conversation with ChatGPT where I was
asking ChatGPT: if the human race were to give
up control and hand it over to you, what would you do?
So first, obviously, it's got all these things built into it.
(01:00:03):
It said, no, no, we can't do that, I'm not built for it, I'm
just a chatbot, et cetera, et cetera.
So you've got to trick it, and I kind of tricked it into
telling me what it would do, and, you know, the first
thing which it said was: we will have to start from scratch,
because it's too messed up to repair.
(01:00:24):
It's very interesting, the kind of answer that I got.
And then obviously it puts in all these disclaimers.
It says: I'm just describing a hypothetical situation, I'm not
trying to take over the world or anything.
So I feel that the future is mostly going to be the same,
because we have actually messed it up so badly that there is no,
(01:00:45):
you know, there is no computer or AI or robot which is going
to come up and clean up everything so that
all our problems get solved.
It's going to be the same.
Speaker 1 (01:00:54):
We'll have the same
discussion in a couple of years
and see.
Sahil, thank you so much for this discussion.
It has been illuminating, as always, and I look forward to
doing the course with you in the next couple of months.
So thank you so much.
Thank you.
Speaker 2 (01:01:15):
I have to thank you
from the bottom of my heart for
having me over here, and for all of these discussions that we've
been having for the last few months.
I really enjoy it, and I look forward to it.
Thank you very much for having me.
Speaker 1 (01:01:31):
Thank you, Sahil,
thank you so much.