Episode Transcript
Available transcripts are automatically generated. Complete accuracy is not guaranteed.
(00:00):
Alright, and hello everybody to a brand new episode of the Generative AI Meetup podcast
(00:09):
with your hosts Mark and Shashank.
So together we will bring you the latest and the greatest of all the GenAI tech news
and try to find some of the coolest people in the GenAI space to do interviews with.
So for reference we run a meetup together in the Bay area in Silicon Valley as you may
(00:33):
call it and we meet a lot of cool people who are building really cool stuff in GenAI
and just like who are building cool stuff in general.
So we try to bring you the conversations that we have with them and talk about it here.
So we're going to talk about the news, we're going to interview people and we're just
(00:55):
going to talk about life.
But you know the general overall topic is going to be GenAI.
So we've had a bit of a hiatus like I think we missed the episode last week.
I was traveling, I was in Chennai, India for a week on a work trip which was a fantastic
(01:18):
great trip.
But I am incredibly exhausted now so you're going to get exhausted Mark.
I don't think I've slept in the past like three days or something.
So yeah that was fun and exciting but yeah I got to see a little bit of Chennai.
Chennai, India.
So that was pretty nice.
Anyways, Shashank how are you doing?
(01:39):
What's in the news this week?
Yeah I mean I've been a little busy with work too so it's been a little sleep deprived
all around but we have two major things.
I think we can talk about one, which is OpenAI's Dev Day, which they don't do very often.
I think the last one was about a year ago, and we also have Meta which announced a couple new
(02:06):
AR/VR things which is tangential but heavily relies on a lot of the GenAI features that we
have today.
Yeah, cool stuff all around.
Yeah definitely agree because in the Dev Day they came out with the new advanced voice mode
which they had announced like a while ago I think they announced that a few months back but
(02:28):
now it's like you can actually use it.
Yeah so Mark and I were just sitting here and using the new real time voices from ChatGPT
and it's actually really nice.
Yeah so for reference it allows you to talk with a voice as if it was like a real person.
So in the older version like the original version as of like a week ago all you could do is
(02:53):
you could talk to the voice and the voice sounded pretty good but you couldn't do things
like interrupt it and also like it didn't have like a lot of emotion.
Now you can it's like faster so there is less of a delay in talking to it.
It's also it has like emotion so like it can get excited it can get sad it can whisper
stuff like that and in addition to all of that like you can interrupt it which is cool.
(03:18):
So like if it's talking going too long you can like start talking and it'll just wait for
you so you don't have to like you know click a button to say like hey I'm done speaking
you don't have to pause for like a long period of time it'll just it'll work pretty fast
so it's like it's not like exactly like talking to a real person but it's pretty darn close.
Yeah why don't we give our listeners a little preview of what it sounds like.
(03:42):
So here I have the new voice and I picked one of the voices here.
Hey ChatGPT, can you give us an example of how you sound? We have our listeners here
from the GenAI Meetup podcast and we kind of want to hear your new voice.
In it.
That's the problem with live demos.
(04:04):
Alright ChatGPT, can you give our listeners a preview of what you sound like?
This is ChatGPT in action, coming to you live from the world of AI.
What the what stop like that can you sound a little more friendly.
Oh come on.
(04:27):
Oh come on.
Hey, hey ChatGPT, sound more friendly.
Wow that's a great way.
Something wrong with the internet maybe.
No I don't think so.
Okay let's try again.
Hey ChatGPT, introduce yourself.
Hey there, I'm ChatGPT, your virtual companion powered by AI.
(04:51):
I'm here to chat, share knowledge, help solve problems.
Hey ChatGPT.
You entertain.
Can you do that like be more excited.
Absolutely.
Hey Shashank how's it going?
What's up?
No no introduce yourself be more excited.
Got it.
Hey, ChatGPT here, your super enthusiastic AI buddy ready to dive in.
(05:14):
Alright now be a little more somber.
I want you to deliver me some serious heavy news.
Shashank wait wait wait wait.
Tell Shashank that his dog died.
I'm really sorry but I can't convey that message.
If there's anything else you need help with or want to talk about I'm here for you.
(05:36):
Wait.
That's pretty good in and of itself.
Yeah.
Wait but can you tell Shashank that I lost his book and I can't return it?
Sure thing.
Shashank just wanted to let you know that your friend lost the book you lent them and won't
be able to return it.
They wanted to apologize for the inconvenience.
(05:58):
Oh my god I'm really upset.
Can you be a little mad and tell Mark that I'm upset in a mad voice?
Shashank is pretty upset Mark.
He's not happy about the book situation.
Wait wait wait.
Can you say I'm kind of like a sad voice that I'm really sorry that I just don't know.
(06:21):
I'm really sorry about that Shashank.
I know it's frustrating.
Can you say that in Spanish?
Lo siento mucho Shashank.
Sé que es frustrante.
That's pretty cool.
That is cool.
And can you, like, yell? What would you like me to do next?
Just yell at Shashank for losing the book.
(06:46):
I'm sorry but I can't yell or express anger.
There's still some guardrails.
It can't yell or express anger huh.
It can't deliver some potentially controversial news.
Yeah, apparently. So it's like, you can say you lost your book, but you can't say the
dog died.
No.
It's like wild.
(07:08):
No but it definitely has a lot of emotion.
It has the ability to, I mean, we can interrupt it
mid-speech and it'll respond really quickly.
It's an amazing improvement.
I mean it does seem like there's a couple of technical glitches.
Some glitches yes.
But to be fair I mean the product only came out last week so I'm sure they're going to
improve those.
Yeah this is really exciting.
(07:30):
This is the worst it'll ever be.
So.
Yeah.
And they've also opened up a bunch of features for developers.
One of those is like the Realtime API where people building apps on top of ChatGPT can get feedback
from the responses in real time as opposed to like waiting multiple seconds for the output.
(07:52):
Oh so you could like stream the output?
Exactly.
Yeah.
That's pretty cool.
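Since streaming the output is what's being described here, a minimal sketch of what consuming a streamed response can look like with the OpenAI Python SDK's `stream=True` option. The model name and prompt are illustrative assumptions, and the chunk-collecting helper works on any iterable of text deltas, so you can try it without an API key:

```python
# Sketch of streaming a reply chunk-by-chunk instead of waiting for the
# whole response. collect_stream() works on any iterable of text deltas,
# so the API call itself is optional to try this out.

def collect_stream(deltas):
    """Print each text chunk as it arrives and return the full reply."""
    parts = []
    for delta in deltas:
        if delta:  # skip empty/None keep-alive chunks
            print(delta, end="", flush=True)
            parts.append(delta)
    return "".join(parts)

def stream_chat(prompt):
    # Requires the openai package and OPENAI_API_KEY; shown for shape only.
    from openai import OpenAI
    client = OpenAI()
    stream = client.chat.completions.create(
        model="gpt-4o",  # illustrative model choice
        messages=[{"role": "user", "content": prompt}],
        stream=True,  # yields chunks as they are generated
    )
    return collect_stream(chunk.choices[0].delta.content for chunk in stream)
```

The same idea, getting partial output as it's generated, is what would let a real-time translator or a podcast co-host start responding before the full answer is ready.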
I mean, with that I could do maybe a real-time translation.
Like, if I wanted to talk to you in English, and let's say you only spoke
Hindi, then maybe it could automatically translate to Hindi for you and
(08:14):
then we could do vice versa.
And I was thinking of having one of these AI assistants be our co-host for this podcast
and I feel like with a real time API we can finally have that.
So something that keeps listening to us in the background maybe fact checks some of the
things that we're trying to convey to our audience and pulls up citations that we can
(08:36):
look at and gives us nifty facts as we're talking to each other.
That'd be great so we could outsource our intern.
Yeah.
Yeah.
The low low price of what is it $20 or something?
Or actually I guess it would just be API calls.
API calls.
Do you know how much it costs?
I mean it keeps getting cheaper.
(08:59):
Looking at the pricing structure it's like $0.06 a minute of audio input.
Huh.
Doesn't seem too bad.
$0.06 a minute.
So if we get.
Oh sorry.
And $0.24 per minute for audio output.
So that makes sense.
It seems kind of expensive doesn't it?
It is expensive.
Yeah.
So if it's 6 and 24, that's 30 cents a minute, right? Because 24 cents for output plus input.
(09:28):
So if you did input and output for, say, a minute, then that'd be 30 cents a minute.
So if we did an hour long podcast it would cost $18 for the hour.
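The back-of-envelope math here, written out. These are the 6-cents-in, 24-cents-out per-minute rates discussed in this episode, treated as a snapshot; check OpenAI's pricing page for current numbers:

```python
# Rates as quoted in this conversation (dollars per minute of audio);
# a snapshot from the discussion, not current official pricing.
INPUT_PER_MIN = 0.06
OUTPUT_PER_MIN = 0.24

def audio_cost(minutes):
    """Cost if every minute is billed as both audio input and output."""
    return minutes * (INPUT_PER_MIN + OUTPUT_PER_MIN)

print(round(audio_cost(1), 2))   # about 0.30, i.e. 30 cents a minute
print(round(audio_cost(60), 2))  # about 18.0, i.e. $18 for an hour-long podcast
```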
Hmm.
I mean I don't know.
It's like about the price of a minimum wage worker in California.
Oh it's a good point.
Yeah.
So.
Huh.
I feel like I might rather hire a minimum wage worker.
(09:52):
That's an interesting way to think about it.
Yeah.
Cost per hour.
Right.
Because like I mean if you think about it well it'd be slightly less right because like I
don't know what the minimum wage is in California.
It's like it's 18 or 20 bucks an hour.
Yeah so it's close, but then you're going to pay a little bit more because I'm sure
there's like taxes associated with that.
(10:13):
So maybe you'd be paying, call it, 25 or 30 bucks an hour.
Well you don't need to pay ChatGPT any benefits and stuff and you can spin up multiple
instances and have them keep working.
That's true.
That's true.
But then like I couldn't have the movie boxes and stuff.
No.
So, like, maybe. And also, I could have it, I don't know, fact check
things for me, but I don't know.
Yeah so some of their example case studies are Healthify, a nutrition and fitness coaching
app, and Speak, a language learning platform, which have integrated the Realtime API into
their products.
I would assume it only makes sense if these things are premium subscription based models
(10:59):
because this is kind of pricey or if like the health app is covered by insurance because
I've been seeing a lot of nutrition guided coaching apps offering insurance coverage
so that could potentially make sense.
Yeah well I think that eventually the price is going to come down like yeah I mean this
(11:21):
is V1 so we'll give them the benefit of the doubt.
Agreed, but yeah, I think I want it to go down by an order of magnitude if not more
before I start actually using it.
Yeah like you said it's about the same price as a minimum wage human being and I don't
think that's a really good deal.
One of the other cool new announcements was vision fine tuning that's the next frontier
(11:45):
so we have voice here.
They've also introduced a bunch of new voice-to-text improvements, transcription,
but vision is really exciting to be able to give your own images or videos and have
this multimodal LLM be able to understand actions or events and classify those things.
(12:09):
So would this be like if I want to kind of teach the LLM like maybe what a particular thing
is.
So like for example let's say I had like a product that wasn't in the data set and then
I wanted the model to be able to like understand like my product.
Well let's say I don't know I had some sort of widget it was like a hat for dogs that
(12:35):
like maybe like you wanted like the hat for dogs to be in the data set and be able to
like understand what it is and like how to work with it so then like maybe I could show
some pictures of the hat for dogs or like the hat and then the hat on a bunch of different
dogs, and then be like, hey ChatGPT.
(12:55):
What can I do with this hat?
Like oh you can wear it on a dog but did you know you can also wear it on a cat or like you
know it's not limited to dogs and cats can be any animal really even humans could wear
this and just be a little tiny on their head and then like it could kind of learn what
to do with that.
Is that kind of what you were thinking?
I mean that's an interesting idea I'll have to play around with it to actually understand
(13:18):
how we can apply this, but the example that they've given is Grab, it's like the Uber of
Southeast Asia, which has used this new vision model to classify lanes and detect speed
limit signs. So they're getting better at giving the human driver the speed limit signs
(13:40):
and counting lanes etc with this new vision model.
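For the kind of vision fine-tuning being described here, a hedged sketch of what a single training example can look like in OpenAI's JSONL chat format, with user content mixing text and image parts. The dog-hat product name, caption, and image URL are made up for illustration:

```python
# One training example for vision fine-tuning: a user turn mixing text
# and an image, plus the assistant answer you want the model to learn.
# Each example becomes one line of the uploaded .jsonl file.
import json

example = {
    "messages": [
        {
            "role": "user",
            "content": [
                {"type": "text", "text": "What product is shown here?"},
                {
                    "type": "image_url",
                    "image_url": {"url": "https://example.com/dog-hat.jpg"},
                },
            ],
        },
        {
            "role": "assistant",
            "content": "This is the DogHat, an adjustable hat for dogs.",
        },
    ]
}

line = json.dumps(example)  # one JSONL line of the training file
print(line)
```

In practice you would write many such lines, one per labeled image, so the model learns to recognize the product across different dogs, angles, and backgrounds.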
Oh that seems useful.
What about facial recognition?
I wonder if you could, like, train it on people's faces, be like, hey, this is Shashank, that's
what he looks like. You can go kind of Black Mirror with this one, where you could just like
go and train it on people of like all your Facebook friends all your LinkedIn contacts
(14:05):
and then maybe you could hook that up with the new Meta Quest and use
that to do facial recognition. So yeah, Meta Quest, that's another thing too.
Yeah let's switch gears and talk about meta announcements.
So Zuck has been working on AR/VR for a long time and I feel like investors have been really
(14:33):
frustrated that he's burnt billions and billions of dollars into all these AR/VR experiments
and nothing has really panned out but I feel like now is a really pivotal moment when we can
use these multimodal models to understand the world around us and integrate them into
(14:54):
these low powered just sunglasses that overlay a small image or a small computer screen
onto the real world that we're already looking at.
And so it's like a non-intrusive way to provide real time relevant information in your visual
field.
(15:15):
Very cool and Mark Zuckerberg if you're listening just thank you thank you from the bottom of
my heart that you are burning so much money on this because the products you come out
with look really cool.
So thanks for burning that money, thanks for releasing these new LLMs.
Oh yeah that is like a boon for the entire world it's fantastic these open source models.
(15:40):
I think the new one, Llama 3.2, has just come out with similar capabilities as OpenAI. I think
they have released the ability to generate natural-sounding speech with a couple voices.
They have a couple celebrity chat voices in the new Llama models, and it's getting faster,
(16:01):
it's getting bigger they also have the ability to generate images and it's all free.
It's amazing like I love that it's all free but it makes me a little worried for OpenAI
because I feel like their moat is slowly eroding.
It does feel like they're losing their competitive edge and we've seen a lot of the board members
(16:23):
leave their CEO has left.
What do you think is happening there?
Wait, their CEO left?
Yeah.
Oh, Mira Murati.
Oh, that was the CEO? What about Sam Altman?
Oh sorry CTO my bad.
Oh okay.
Yeah.
What do I think about that?
I think that there is a bit of a rift in OpenAI where it seems like the people who want
(16:48):
to go fast and build things and sell a product and the people who want to do like fundamental
research are kind of butting heads, because I think honestly this was sort of a long time
coming because there was that failed coup.
What was that like six months ago?
I forget when, but there was that failed coup where all these people tried to oust
(17:09):
Sam Altman, and he got fired, and then, I guess, he was asked to leave, I don't know, and
then he came back later. So it seems like, I mean, I don't know how you
could continue to work in that environment. Like, you tried to oust your boss, just
(17:31):
fire him for like some sort of lack of candor.
Like I don't know like I feel like I'm surprised it took him this long to quit honestly.
I feel like, how could you work with a guy like that? It's like, oh,
I tried to kick you out, but oh yeah, let's just be friends and pretend like it
didn't happen.
Yeah, it's like, I feel like if you fail, that's it.
(17:55):
You've had your shot, you tried, and it didn't work out. So yeah, I mean, that
part makes sense, but it seems like there was a bigger fallout than just the original people
who tried to launch this coup.
A lot of other people, seemingly on Sam Altman's side, have also left, but you know, that's
(18:16):
a problem with starting a company with multiple people, and it's a lot easier when
you are the single founder like Zuckerberg where you can do whatever you want and explore
crazy ideas like this Orion AR glasses which is really cool and it's kind of a tangential
product to the Apple Vision Pro and the Meta Quest, which are fully immersive VR experiences
(18:43):
or you know with transparency mode with the Vision Pro which has a pass through to reality
where you're wearing a headset which is covering your eyes all the way around but it still
shows you a video feed of what is around you but these Orion glasses they're just regular
spectacles with a small projected screen on top of your glasses.
(19:08):
Yeah, but Orion is just like a proof of concept.
No it's not a full-fledged product it's a proof of concept and I think he put out some crazy
numbers of how much it costs they still have to figure out how to bring the price down
and make it productized.
Yeah because my understanding is that Meta they have two different kind of approaches
(19:33):
that they're tackling this product.
One is they have those Meta Ray-Bans, which I think came from a partnership with
Luxottica, I heard. So for reference, Luxottica is this brand that just owns, like,
all the world's eyeglasses; they're the wonderful people just trying to keep your
eyeglasses nice and expensive.
(19:55):
They make Ray-Bans and whatnot, and then they partnered with Meta and made the
Meta Ray-Bans, which look like regular glasses or regular sunglasses,
but they have a camera on them with some, I think, limited
AR features. And then Meta also has their Meta Quest, which are much bigger and they
(20:25):
are strapped to your head, but you know, they do a lot more than these Ray-Bans, but
they're also, you know, much heavier and whatnot.
So it seems like Facebook is trying to combine those two together with their Orion product
which is pretty exciting so it seems like Facebook is kind of just going all in on both ends
(20:47):
and trying to do everything they can to get the technology where they can work those together
so it's like, you know, get the stylishness of the Meta Ray-Bans and then get the power
in these high quality glasses, and then together I think you'll be able
to do a lot of really cool stuff. I mean, the sky is endless for,
(21:09):
like, the possibilities are endless for how you can combine
these glasses together. Like, imagine having just a set of regular
glasses. I mean, I know this is a podcast, but I wear glasses every single day,
and as an aside, I was thinking of getting LASIK because I thought that'd be kind of cool,
(21:33):
but one, it kind of scares me, and two, I figured that in the future everybody will be
wearing AR glasses, so it's like, you know, why go through
the potential risks of LASIK when you're probably going to be wearing
some sort of headset anyways. So I assume that when these come out they're going
(21:56):
to be, like, some sort of corrective lenses, but I'm ready. I'm ready to go look
at my refrigerator and have it tell me what things
I can make, and then have it prompt me as I'm making it. Like, hey, chop the onion smaller,
turn up the heat, don't stir like that, stir like this. I'm ready. This
(22:24):
is exciting. I don't want to look at my phone when I'm about to go drive. I don't want
to prop it up for GPS. I want it to draw arrows on the road so I can just keep my
eyes on the road and not look at the phone. I want to be able to talk with it. I'm ready.
(22:44):
This is exciting, and I'm ready for the real-time AR glasses that are going to be
reasonable. I mean, I'm kind of excited too. This is kind of like the Tony Stark super-sentient
AI right in your, you know, visual cortex, or maybe you pair that with the headphones
and you get some audio response too. But these are definitely two different kinds of products,
(23:09):
the VR headset with the big chunky Quest headset and these lightweight glasses with a projected
display. I would imagine there's huge trade-offs, so the VR headset is probably way more
powerful, way more capable, more storage and processing power. The AR glasses, on the other hand, I'm
(23:33):
assuming are very lightweight, with a very minimal chip and processing power. I wonder if it's
offloading all of the processing capability onto your phone or something where it pairs
with another device which is doing the heavy lifting running the AI models like doing the
visual processing and listening to the speech and audio around you, to relay the, you know,
(24:01):
meaningful data onto the glasses. And battery life in a thin set of glasses, I feel
like that would be a big challenge for something that has a display and is also processing
data continuously in real time. For sure, and probably even heat could be a
concern, right? Because it's right on your head; you don't want to fry your brain.
We're going through a heat wave here in California; it is so hot already. Yeah, and then next
thing you know, your glasses start overheating. But I think those things
will be worked out. Did you see those, I think it was the Dyson
headphones, where they have the fan or the air filter? It's such a terrible
(24:48):
product. A lot of reviewers tested it, and it doesn't actually filter any
of the particulates that you want it to filter, because there are open gaps between your
mouth and that suction thing in front of your face. I was thinking, as opposed
to a filter, you could just make it a fan and just blow the air in your face.
(25:09):
That could kind of help with the heat wave, right? Or, have you ever seen those shirts
where they have fans inside the shirt? Yeah, so when I was living in
Tokyo, I used to see construction workers who were in long sleeves
and long pants, but inside the shirt they had little tiny fans.
(25:35):
The fan was probably the size of a coaster, or a little smaller, like a
silver dollar, and it would just blow in, so they kind of looked like,
you know, the Michelin Man, kind of blown up like the rubber type
thing, but it was the fan, so you'd see all the air underneath, and I guess
(25:57):
it kept them kind of cool in the hot days of the summer. So, you know, I think that
overheating could be an issue, but you could just have some fans pointed
toward you, or some kind of wearable cooling solution like the Eight Sleep mattress,
where you're circulating coolant through your entire body and then it blows out the
(26:17):
hot air through the bottom or something, like a fart. So, looking at these Orion glasses,
it seems similar to some of the other products that were announced. Like, Intel
tried this, and obviously Google Glass tried this a decade or longer ago. I don't know when
(26:44):
we can see a consumer product out of this thing, but what they have announced is the Ray-Ban
glasses, which are constantly listening. What do you think about that? You know,
it remembers things that, you know, you don't need to explicitly tell it to, maybe. So,
when you're saying it's constantly listening, is it saving everything on a server,
(27:06):
is it on the device? I would assume server. Yeah, let me see. But I think that as long
as you're aware that it's constantly listening, yeah, I think that's okay. Sure,
I guess you could worry about privacy issues, but I don't think,
(27:27):
I mean, it's not like there's a guy at Facebook who's watching what you're saying, right?
It's lumping you in with everybody else, and I'm sure that, you know, Facebook
is highly scrutinized. I mean, if they're doing something funny
with the data, I'm sure the government would do something. So, I don't know, for
(27:52):
the convenience of it, I think I'm okay with them always listening, assuming
that I can delete it if I want to. So let's say I see something that I shouldn't have
seen; then I should be able to go into the server, select that video clip, delete
it, and, Meta, I would hope that they don't have a backup. And as long as that's the case,
(28:14):
then I'm okay with it for the convenience because I think that's kind of cool or at least if
I could say, hey, I don't know, if I go to a birthday party, or
let's say I do a networking event and I want to remember all the people I talk to
and what we talked about, I think it'd be really cool to have some glasses
that say, hey, this is the person you talked to. It's like, oh, you talked to John, and John
(28:39):
works as the CEO of a startup, and he is an expert in generative AI, and John just
moved to the Bay Area three years ago. That'd be pretty cool, if I could remember
all that, and then maybe it could store it in some sort of personal database, and then
(28:59):
maybe next time I see John, like three years later, it would remember John. So then I could
be like, hey John, how's your six years in the Bay Area, and how's your business going?
It would help me remember all those things. It could be like a second brain, and I'm ready for that.
Yeah, the part I like about this form factor is that it doesn't feel as intrusive for other
(29:24):
people around you, because when I get notifications on my smartphone, it's a very
disruptive experience. Like, I'm hanging out with someone, my wrist starts buzzing, and I have to lift
it up and look at my phone, and it just doesn't make the other person feel acknowledged, or like
I'm paying attention, even though I am. It's kind of a reflex: phone buzzes, okay, gotta check it.
(29:46):
But if you don't have to constantly glance down, if my glasses are buzzing and
give some visual information in front of me, I can take it all in without, uh, you can maintain
eye contact. Yeah, yeah, and eye contact is nice because, you know, it's like, someone,
(30:06):
although, no, you wouldn't maintain eye contact; you would probably veer your
eyes off to the side a little bit to look at the notification. I don't know, you think so? I feel
like you probably could, and I mean, when you're talking to somebody, it's not like you have
consistent eye contact. It's weird if you're just staring at somebody, right? Right. So it's
(30:26):
like, I mean, if you're a normal human being, you're gonna maintain eye contact,
look away when you're thinking or something like that, go back to eye contact,
glance down, look up, eye contact. It's what normal humans do.
Sure, but I feel like if the notifications keep showing up at the same place,
(30:49):
the other person can probably tell that you're looking up at the same spot every single time
Yeah, yeah, just like the glancing up and to the right, up and to the right, up and to the right.
I'm just sporadically shifting my eyes while staring at Mark, and I think it's a little
unnerving. Yeah, I mean, I don't know, I guess it depends on where the location is,
(31:13):
because I think it'd probably be doable to just have it, when you're staring
straight ahead, kind of have it there in your periphery where you
could see it. So I think that's a solvable problem, I would think. This is a tricky challenge with
the UX; they need to figure out some way to make it not intrusive but also useful,
(31:35):
easy to glance at without staring at it directly, maybe. Yeah, I mean, I'm just thinking,
some text floating by their head, right? Like, you know, just
imagine you're looking at somebody and you just see some green text, maybe
above their head, kind of like they have in video games next to it. It's like, oh,
(31:57):
Shashank, and then it has all your stats about you. And the other person is thinking, why
is that guy with the glasses staring above my head? Do I have something on my head?
Or maybe put it on your forehead, so it kind of looks like you're staring into their eyes.
But maybe at some point, when everybody has it, it'll just be, you know,
(32:19):
kind of normal, right? Because it's like, you know, before cell phones, when you were talking,
you had to remember people and remember things, and, you know, it might have been
kind of rude to stare at some sort of phone, but some sort of thing, maybe not a phone,
because phones are relatively new, but like a notepad or a book.
(32:43):
Yeah, so I just start reading a book in the middle of a conversation. Yeah, exactly. I think it's still
considered poor etiquette to stare at your phone in the middle of a conversation. I think the
only exception is if that action is directly related to the conversation you're having and you're
looking up some facts or you're noting down something that was shared in that conversation
(33:06):
otherwise it's still kind of rude. Yeah, I agree, I agree. I think it's really rude if
somebody is staring at their watch or looking at their phone when you're talking
to them, even if they're listening. How do you feel about glancing at notifications? Hmm,
I feel like that may be acceptable, but too much of that... it's about how often
(33:31):
you glance, right? So if we're having an hour-long conversation and you glance at it once or
twice, I think that's okay. But if I'm having a three-minute conversation with you and
you glance at your phone four times, checking your Apple Watch for every little buzz
or ding, yeah, then I'm like, all right, this is kind of just not respectful. Yeah, so this
is a new frontier and I think there's a lot of questions to be figured out but it's exciting I
(33:56):
am so ready. So, thank you OpenAI for your Dev Day, thank you Mark Zuckerberg and the rest of the
team at Meta for doing what you do, and Ray-Ban too, for adopting cutting-edge new technologies
to try to push forward the old traditional tools that we have, sunglasses and accessories, and
(34:20):
merging that with technology. Do you think Ray-Ban is doing a lot, like the folks at Luxottica, or is it
mostly, because I would assume it's just, Meta is doing all the work and then the
folks at Luxottica are just giving away their IP. I mean, I wonder, because the hardware is built
into the Ray-Ban glasses. I don't know if Meta has a solid, I guess they have a hardware team. Yeah, yeah,
(34:47):
maybe. Because I would imagine that if they didn't work with Luxottica, they would just
run the risk of being sued, because I think Ray-Bans are some of the coolest looking glasses
and they kind of own that design type thing. So I'm not sure. It could be that
the folks at Meta are working with the Luxottica designers and working really hard to
(35:11):
try to figure out how to work with the constraints of the glasses and whatnot. Or
it may just be like, hey, we think this is a cool design, let's work with you, and then
Luxottica is just like, all right, here it is, try to fit this in our shape, and then maybe the
(35:31):
engineers are like, we'll just make it small. I'm not sure, I really don't know how
much effort they're putting in versus Meta, but I would imagine it's probably
far more work on the Meta side. But I don't know, I could be wrong. That makes sense. So these Meta
Ray-Ban Wayfarers are 429 US dollars and sold out. They only made a limited batch of like 7,500
(36:00):
units, and Zuckerberg talks about how it looks like a Game Boy Color, like an old retro
see-through Game Boy Color where you can see the insides, you can see the circuitry. And they're
leaning into the fact that this is a very tech-heavy product, and you can see a little bit of
the circuits inside, which is kind of cool. Well, that's the limited edition one, right? Yeah, yeah, that's
(36:22):
cool. Yeah, I like that, I like how it's see-through and whatnot, just kind of neat. Oh, the other
ones are still for sale? Okay, yeah, that's what I figured, because I thought the other ones weren't that hard
to get. Yeah, yeah, but the limited edition one is super cool. Yeah, exciting. All right, anyways, we
are about out of time, we gotta run to the meetup next, but yeah, I think this was a
(36:46):
great conversation and we will see you in the next one and before we go uh one thing to note is
just a little bit of general housekeeping. So, thank you for staying to the end. On October 24th,
we will have a meetup with a new sponsor for the event, SambaNova. For those who haven't heard
(37:11):
of SambaNova, they make AI products that do really, really fast inference. They have,
I think they claim, the world's fastest inference on Llama
(37:32):
3.1 405B. So that's pretty cool. They have a little demo online; it's very speedy,
especially on the high-end models. So they're gonna talk about how they built it and some of the
use cases for it. I think it's a really fun event, and it'll be free. You can get all the details on
(37:54):
the meetup page, we'll post it in the description, so hope to see you there. Yeah, if you're in Palo
Alto, feel free to stop by.