Episode Transcript
GMT20251224-181835_Record (00:00):
Hello
and welcome to the Leveraging AI
(00:02):
Podcast, the podcast that shares practical, ethical ways to leverage AI to improve efficiency, grow your business, and advance your career.
This is Isar Meitis, your host, and today we are going to talk about Gemini, and more specifically all the recent releases and capabilities inside of Gemini 3 and how amazing they are for anybody who lives in the Gemini universe.
(00:24):
I know some of you are not in the Google universe and are not Gemini users, and if you are not one of those users, (a) maybe you should consider trying Gemini, and (b) it will give you a great idea of where the world is going. So even if you're using Copilot or Claude or ChatGPT or whatever the case may be, I think it will give you a great understanding of what capabilities exist today and how
(00:46):
things start getting integrated into more and more aspects of our daily lives as well as our work lives. And so I thought that, as a heavy Gemini user who uses Gemini every single day across multiple aspects of what I do, it would be helpful for many of you to understand how I'm using AI in the day-to-day tools, what Google has launched in the past month and a half, and how
(01:10):
well it is integrated into the Gemini universe. Again, I assume very similar things are happening in the Copilot universe inside of the Microsoft environment. So even if you are jealous at this point and you can't switch, because Microsoft is part of your enterprise software tech stack, then don't worry. I assume you'll get something very similar from Microsoft in
(01:32):
the near future.
The first thing that I want to talk about is the recent release last week of Gemini 3 Flash. So what the hell is Gemini 3 Flash?
In the past year or so, every time one of the companies released a large model, they shortly after released a smaller brother, if you want, of the large-scale model that is
(01:53):
supposed to be significantly faster, significantly cheaper, and in most cases, roughly as good as the previous pro model.
So the idea is that you can get very capable AI tools built on the new concepts, infrastructure, and model, but significantly faster and cheaper, that still exceed the capabilities of the
(02:14):
previous model.
Gemini 3 Flash is most likely the best example of this concept.
So right now, if you are a Gemini user, a paid Gemini user, which means either you have a Google Workspace account like me, or you pay for the paid version of Gemini on a personal Google account, you now have three options in your dropdown menu.
(02:36):
You have Fast, Thinking, and Pro. Let's break this down. Fast gives you Gemini 3 Flash, which is the fastest, most efficient model that they have right now, which is a great way to get answers quickly, do quick brainstorming and other fast things, and handle any other immediate activity that you need.
The second option is Thinking, which is still Gemini 3
(02:56):
Flash, but with its reasoning mode.
So for those of you who still don't know, there are two kinds of models today. There are the regular models that are just pulling information out of their memory, which were the only models we had until the end of 2024 and the introduction of ChatGPT o1. But since then, more or less all the models have added
(03:17):
reasoning capabilities baked into the models. So right now, the Thinking version inside of Gemini is Gemini 3 Flash with reasoning, meaning it will think about the tasks that you give it and the prompts that you give it, and it will try to understand what you mean, and it will consider different options, and it will give you much better answers.
The price you pay for that is time.
(03:39):
You're gonna wait a little longer to get the answers that you need, but you're gonna get much more sophisticated answers with a lot more analysis baked into them.
The third option in the dropdown menu is Pro, which gives you Gemini 3 Pro, which is the most intelligence and the longest thinking time. And you should use this for any larger, more complex tasks,
(04:00):
advanced coding capabilities, analysis, et cetera, et cetera.
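If you're curious what that Fast versus Thinking split looks like outside the app, here is a minimal sketch using Google's google-genai Python SDK. On the 2.5-generation models, reasoning is exposed as a thinking budget; whether Gemini 3 Flash keeps the same parameter is my assumption, so treat the model id and the knob as placeholders:

```python
# A minimal sketch, assuming the google-genai SDK (pip install google-genai).
# thinking_budget is the documented reasoning knob on Gemini 2.5 Flash;
# whether Gemini 3 Flash uses the same parameter is an assumption.
from google import genai
from google.genai import types

client = genai.Client(api_key="YOUR_API_KEY")

def ask(prompt: str, thinking_budget: int) -> str:
    """thinking_budget=0 behaves like 'Fast'; a larger budget behaves like 'Thinking'."""
    response = client.models.generate_content(
        model="gemini-2.5-flash",  # swap in the Gemini 3 Flash id once it appears in your model list
        contents=prompt,
        config=types.GenerateContentConfig(
            thinking_config=types.ThinkingConfig(thinking_budget=thinking_budget)
        ),
    )
    return response.text

print(ask("Is 1,024 a power of two?", thinking_budget=0))                    # fast answer
print(ask("Plan a three-step rollout for a new pricing page.", 2048))        # slower, more reasoning
```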
The big deal is how good this model is compared to the price and the speed at which it is working. So let's start at the high level. On the high level, Google shared a graph that is showing on the Y axis the ELO score from LMArena,
(04:21):
basically the chatbot score, showing how good it is, and on the X axis, how much it costs, but in reverse order: the further you go to the right, the cheaper the model is. So it's basically showing you the ratio between how good people rank this model as far as its capabilities versus how much it is going to cost you.
(04:41):
And as you go up, it gives you a better model. As you go to the right, it gives you a more efficient model. And kind of like the best place to be, if you want, is that 45-degree range where you get a lot of intelligence for less money. And the arc, if you want, from all the way at the top to
(05:02):
all the way to the right, currently has four models. All the way at the top is Gemini 3 Pro. The best trade-off right now is Gemini 3 Flash. The next one, which is not as intelligent but is incredibly fast and cheap, is Grok 4.1 Fast Reasoning, and then the most efficient one, but with a lot less intelligence, is Gemini 2.5
(05:24):
Flash-Lite, which is the previous version of the Flash models from Google.
So what does that tell us? It tells us that if you want to get a lot of intelligence that still works fast and cheap, there are two best options right now: if you're looking for more intelligence, it's Gemini 3 Flash; if you're looking for a faster model, it's Grok 4.1
(05:44):
Fast Reasoning.
The crazy thing, by the way, looking at this graph, is that Gemini 3 Flash is better from a score perspective. Again, its LMArena ELO score is higher than any other model's other than Gemini 3 Pro, and it's parallel to Grok 4.1 Fast Reasoning.
(06:04):
So it's giving you not a compromise in intelligence, but actually the best results, ranked on actual people's use cases, for a lower price than the vast majority of the models out there.
The other thing that Google shared is a very detailed table comparing Gemini 3 Flash Thinking, which again would be the middle option if you are using the dropdown menu inside
(06:28):
the Gemini application. And it's comparing it to Gemini 3 Pro Thinking, to Gemini 2.5 Flash, to Gemini 2.5 Pro Thinking, to Claude Sonnet 4.5 Thinking, to GPT 5.2 Extra High, and to Grok 4.1 Fast Reasoning. So a wide range of leading models from Google and beyond. And then it is comparing it across a huge variety of
(06:50):
benchmarks on very different aspects, from visual capabilities to text capabilities, to coding capabilities, to information synthesis and analysis, and so on and so forth. On many of these it is ranked first; on many others it is ranked second, but a very close second to other models. And as I mentioned, in some cases it's even better than its
(07:12):
bigger brother, Gemini 3 Pro Thinking. But it costs only 50 cents for a million input tokens and $3 for a million output tokens. To put things in perspective, the input token price of 50 cents is 25% of what you're gonna pay for Gemini 3 Pro, which is $2.
(07:32):
Claude Sonnet 4.5 is $3, and ChatGPT 5.2 is a buck 75, still more than three times more expensive for roughly the same level of intelligence.
The output token price is $3 for a million tokens. Again, putting it in perspective, that is exactly one quarter of
(07:53):
Gemini 3 Pro's output price, which is $12, with Claude Sonnet 4.5 at $15 and GPT 5.2 at $14.
And again, the only model that is cheaper, but does not come even close across the different benchmarks, is Grok 4.1 Fast Reasoning, which is, again, currently from a cost perspective and a speed perspective the most cost-
(08:16):
effective model out there, out of the bigger, better models.
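To make the episode's pricing math concrete, here is a quick back-of-the-envelope sketch. The rates are the per-million-token prices quoted above; the request sizes are made-up examples:

```python
# A minimal sketch of what per-million-token pricing means per request.
# Rates are the figures quoted in this episode; token counts are hypothetical.

PRICES = {  # model: (input $/1M tokens, output $/1M tokens)
    "gemini-3-flash": (0.50, 3.00),
    "gemini-3-pro": (2.00, 12.00),
}

def request_cost(model: str, input_tokens: int, output_tokens: int) -> float:
    """Dollar cost of a single call at the quoted rates."""
    in_rate, out_rate = PRICES[model]
    return (input_tokens * in_rate + output_tokens * out_rate) / 1_000_000

# Example: a 20,000-token prompt that produces a 2,000-token answer.
print(f"{request_cost('gemini-3-flash', 20_000, 2_000):.4f}")  # 0.0160
print(f"{request_cost('gemini-3-pro', 20_000, 2_000):.4f}")    # 0.0640, exactly 4x Flash
```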
The bottom line is, right now you can use Gemini 3 Flash to do amazing things. And it is receiving amazing reviews online as well. I've been using it since it's been launched, and I'm extremely happy with the results across multiple use cases, and again, at a much cheaper price point than everything else.
(08:36):
So kudos to Google for being able to release such a powerful model at such a cheap price that makes more intelligence available to more people through the API. Again, if you're just using it in the Gemini platform, you don't really care, but if you wanna use it through the API for any application, it is a very big deal. On the bigger picture,
(08:56):
it shows you where we're going: every new model is gonna give us faster, better capabilities at a much cheaper price than the previous model.
So while this is fun and interesting and good to know, it's not very practical just knowing that you have a better, cheaper model. So let's go from now on to very practical things.
The first thing is Nano Banana Pro. So Nano Banana Pro was released a few weeks ago, and it is an
(09:19):
improvement over Nano Banana, which by itself was a very capable model. What Nano Banana Pro allows you to do is to combine a lot more consistent elements together. So multiple people, multiple faces, multiple devices, multiple products, whatever it is that you want, you can combine into a single image. It is even better at keeping consistency across images.
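That multi-reference trick is reachable through the API too. A minimal sketch, assuming the google-genai SDK, the original Nano Banana model id (I can't confirm the Pro id, so treat it as a placeholder), and some hypothetical local image files:

```python
# A minimal sketch of combining multiple reference images in one generation.
# The model id is the original Nano Banana ("gemini-2.5-flash-image"); the
# Nano Banana Pro id is an assumption, so check your model list. File names
# are hypothetical.
from google import genai
from PIL import Image

client = genai.Client(api_key="YOUR_API_KEY")

person = Image.open("founder_headshot.png")
product = Image.open("product_shot.png")

response = client.models.generate_content(
    model="gemini-2.5-flash-image",
    contents=[
        "Show this person holding this product at a trade-show booth, "
        "keeping the face and the product label consistent with the references.",
        person,
        product,
    ],
)

# The response interleaves text and image parts; save any returned image bytes.
for part in response.candidates[0].content.parts:
    if part.inline_data is not None:
        with open("combined.png", "wb") as f:
            f.write(part.inline_data.data)
```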
(09:39):
It generates even more lively and capable images. It is better at editing existing images, but in addition, they are adding a lot of new cool things.
The first thing that I find extremely powerful is that they've combined Nano Banana Pro into Google Slides. So how is Nano Banana Pro integrated into Google Slides?
There are several different features built into
(10:02):
Google Slides right now, which are really cool. The first one is that for every slide you create, under the slide there's now a button called Beautify This Slide, and it has a banana logo next to it, which gives you a hint of what it is going to do.
So those of you who are watching my screen right now, if you are watching this on YouTube or if you're watching the video on any other platform, you can see a table that is just a pretty
(10:24):
boring table that says Options Galore on top, and it is comparing the core capabilities and the different features of the main workflow automation tools, such as n8n, Make.com, Zapier, Relay, Relevance AI, and Gumloop.
And it's, again, just a table comparing them that I created with AI, with Gemini, and then brought into a slide, but
(10:44):
it's just a table.
But then I clicked the button, and it created this version. But for those of you who are just listening to this, what it did is it took the table and made it way cooler. So it added this glare to the headline. The lines of the table are now these laser-glowing blue kind of lines that look very futuristic, which is great for
(11:05):
this kind of slide. It changed the headings of each and every one of the columns to be this glowing box. It added, on its own, logos of the different brands. I literally did not ask for it, I just clicked the button, and now it has the logos of the different companies, and it added this cool background that looks like a mix of Tron and kinda like glowing blue computer-chip lines in the background.
(11:27):
Maybe a little overboard, but overall it definitely makes the slide look a lot cooler. There are two disadvantages of this over the slide that I had before. One is that it's not editable. This is literally a Nano Banana image that is taking the information from the slide and turning it into something that just looks cooler, so I cannot edit what's on the screen right now.
(11:48):
The second thing, which is a little depressing, is the fact that it is not as crisp as the original image. Because it is rendering it, versus just taking the lines and the text that were there before, it is not as crisp as the previous version.
I assume both of these will be improved over time, meaning the ability to actually edit after it, quote unquote, beautifies the
(12:10):
slide, as well as having the same exact resolution, because it may actually use the text and the lines as it should.
But even now, it is a very cool and fun thing to play with, as far as creating slides that are better than what you can create on your own.
The second thing that you can do right now, which is very cool, is when you click on any image, whether it was created with an
(12:30):
AI tool, or it's just an image you got off the internet, or an image you actually took with a camera, if that is still a thing, you can now click on Edit Image with Nano Banana inside of Google Slides.
So when you click on that, it opens a Nano Banana menu on the right, in which you can do several different things. You can click on the image and describe exactly what you want to edit on this image, or, if you just open this
(12:53):
menu, you can write whatever you wanna write. So let's say I wanna write: an image of a Formula One car flying above a busy city. And if I click on that, it will do what Nano Banana does.
It will create an image for me, but once it creates the image,
(13:15):
I will be able to pick what I actually want to do with it. So I can create it as an image inside a slide, or I can create an entire new slide with the image that I created.
Why is that important? Because you can now open the Nano Banana menu and describe what you want in the slide, including the text. And now, for those of you who are seeing this, you see I have
(13:37):
three different options after I created this really cool image, by the way, of a Formula One car flying above the traffic of a crowded city at sunset. You see I have three buttons. One button says Slide. If I click on that, it creates an entire new slide for me. The other one is Image, which will create a new image, and the third one is Infographics, where I can create infographics with all different options.
You can have text and image all combined together, which I can
(14:00):
define in a lot of detail and embed into my presentation. An amazingly functional capability, especially for somebody like me who needs to create a lot of presentations.
Now, because of these new capabilities inside of Google Slides, I recently canceled my Midjourney membership, because I found out that I literally create 100% of the graphics I
(14:21):
need for slides, which is most of the graphics that I need, inside of Google Slides, because then it knows what I'm doing.
It understands the rest of the images, it knows the style that I'm using, it understands what's in the slides. I can have a conversation with Gemini inside of Google Slides about what could be great ideas for graphics, for the overall presentation, for a specific slide, for a specific
(14:42):
point I'm trying to deliver. And it gives me ideas, and we brainstorm these ideas, and then it goes and creates the image or creates the entire slide. So this capability by itself is so powerful that I actually don't need any other image generation capability. But before we switch gears to other ways that Nano Banana is now being used across the Google universe,
(15:03):
I wanna share with you a new capability that now exists inside of the regular Gemini chat. But first, Google also added something else really powerful inside the Gemini solution inside of Google Slides, which did not exist until about a week and a half ago, that I find amazing, which is the little plus button that exists in the regular Gemini but did not exist inside of tools like Docs and Slides, and
(15:27):
exists there right now. And this allows you to upload documents from your Google Drive into the context of the Gemini universe inside of Slides.
How does that help you? Well, you can use your brand guidelines in there right now. You can upload multiple documents and have it help you brainstorm on how to create a good presentation from the information that you uploaded.
(15:48):
You can take a summary of a meeting that you have there and turn it into slides, and so on and so forth, without having to go through the regular Gemini or any other tool. You can open it straight here inside of Google Slides and use that as context inside of your Slides universe, while still using all the benefits of Gemini. I find this really helpful.
(16:09):
I've used it multiple times since it came out, and it's been less than two weeks, and I suggest you at least try it and do the same thing if you are in the Google universe. So if you are now in the regular Gemini interface and you upload an image, in this particular case I uploaded an image of a Honda Odyssey, staying away from the Formula One car but still doing something cool. But anyway, any image that you upload, if you click on it right
(16:30):
now, you have a drawing and annotation capability on the image. So you get to pick five different colors, white, blue, green, yellow, and red, and you can just draw whatever you want on the image. So what I'm going to do is I'm going to draw four circles on the sides of the wheels of the Honda Odyssey to explain what
(16:51):
I'm trying to do, kinda like annotating it, and then when I click Done, it is embedded into the image. You can see it in the top small image up here. And then I will say: replace the car's wheels with enclosed props like a quadcopter, and have it flying above busy traffic going over the interstate in Florida.
(17:12):
Now, because I referenced the specific area where I want the props to be, Nano Banana gets a better understanding of exactly what I want to change. And you can do this with any other image.
You can use the different colors to tell it, turn the yellow area into this, use the red for that, add text in a specific area, to give
(17:33):
it more information. So it allows you to give annotations and get better results.
And the image came out really, really cool: a blue Honda Odyssey that has these quad circular props, flying above traffic in an area that definitely looks like the interstate in Florida.
And so you can get much more granular with the feedback that
(17:54):
you are giving to the Nano Banana model in order to get exactly the results that you want.
But that's not where it ends. Google has added the Nano Banana Pro capabilities into NotebookLM as well. For those of you who don't know NotebookLM, it is one of the most incredible free tools from Google for us to use,
(18:16):
which allows you to bring in multiple resources. These could be files that you have, like files on your Google Drive, but these could be PDFs or Google Docs or Excel files, or links to videos, links to websites, and so on. And you just combine them into a notebook, which is technically just a container of data, and then turn that information into either just a Q&A or many other different functions.
(18:39):
So in the example here, you can see that I have multiple resources talking about the latest OpenRouter State of AI research, and other articles that are mentioning it, and different comments that people made on it, from multiple sources. So there are, I dunno, about eight or ten different articles here, and I literally asked it: what are the findings?
(19:00):
And it gives you a quick summary. But on the right side here, what they had before was Audio Overview, which creates this cool podcast that will walk you through what's happening, and a Video Overview, which is the same thing but with some graphics, and they had Mind Map and Reports and Flashcards and Quiz.
All of that existed before, but they added two new really cool
(19:20):
capabilities. One is Infographics and the other is Slide Deck. What do these do?
Well, it takes the entire content it learned from all the different files that you uploaded, and it turns it into an infographic or a slide deck. So if you look at the infographic as an example, and for those of you who are not watching, I will walk you through it: it created this really cool infographic that says The State
(19:42):
of AI, insights from 100 trillion tokens, an OpenRouter analysis of real-world usage in 2025. And it has these separate boxes with really colorful graphs and different illustrations, all explaining all the findings, or all the major findings, from this really long, detailed research. So, a really amazing infographic that I created very easily.
(20:05):
Now, there are two ways to use the infographics function. One is to press the button. The other is that next to the button there's a little pencil, and if you click that, you can actually choose more options. So you can choose the language of the infographic, you can choose whether it's landscape, portrait, or square, and you can choose whether you want it concise, standard, or detailed. And then you can prompt it on what you wanna focus on, what
(20:28):
colors you want it to use, and whatever other details you wanna provide, to give it more guidance on the kind of infographic you want and exactly how you want it.
And then it generates the infographic for you, or as many infographics as you want. So you can do one for each chapter. You can do one for each topic. You can do one for different brands with different color guidelines, whatever you want. It is an insanely powerful capability, especially when it knows how to
(20:51):
combine data from so many different sources and figure out what's important on its own.
But let's say you want more information than just an infographic. Well, you can create an entire deck of slides, and if I click on that, you can see that it has created all these different slides, and if I click on them, it is actually a presentation that I can go through. So again, the State of AI, and it created content for each slide.
(21:13):
The first one is: the AI landscape is experiencing explosive, unprecedented growth in models, investment, and complexity. And it shows this really cool timeline and information related to the timeline. And you can see that it is creating really cool graphics with very relevant details. Perfect slides, just the right balance between the amount of text and the graphics that describe it, and a good flow
(21:34):
that you can follow, all without me doing anything other than giving it a short prompt, which I didn't even have to do. All I had to do was click the button to create this outcome.
So if you need a presentation that summarizes a lot of information for any purpose, whether it's personal usage, for school, or for your business, you can just upload the information
(21:55):
into NotebookLM and then click on the slide deck, or, as I mentioned, click on the pencil inside the slide deck, explain exactly what you want, and you will get a presentation within a couple of minutes.
Now, the next tool, which Google released earlier but keeps adding more and more capabilities to, and that I found really cool, is Google Vids. So Google Vids is Google's new video tool that allows you to edit, create, or manipulate videos that you have, right now.
(22:19):
And you can use this for either personal usage or for business usage. You can use it to create new introduction videos for new employees, onboarding for clients, customer service explanations about your software, product, or service, et cetera, et cetera, whatever kind of videos you want. So it is built as separate, different scenes. They built it, on purpose, to be very similar to Google Slides.
(22:41):
So if you go there, the interface will look very familiar to you if you are a Google Slides user. And you can add new scenes, which are basically just like slides, but they have video inside of them. And what you can do in each and every one of the scenes is generate a video with Veo 3.1. So you can generate video clips of up to eight seconds on whatever you want, just by prompting it.
(23:02):
You can create an AI avatar, just like HeyGen or Synthesia or other avatar tools, and you can do this, as of now, for free inside of Google Vids. You can convert slides, which I find really, really cool. So if you have a presentation, whether you created it or AI created it for you, you can now upload it here. And what it does is actually really cool.
(23:23):
It takes the slides, converts them into short videos, and if you have presenter notes under the slides, which I always do, it knows how to convert them into a narration done by an AI avatar that explains what's in each slide.
So if you use AI to create the presentation and ask AI to create detailed speaking notes for an avatar as presenter notes
(23:44):
in the slides, and then you bring it here, you can have a video of the presentation walking people through the entire process.
You can obviously record on your own straight from here. So you can record your screen, you can record your camera, or you can record both, very similar to tools like Loom. You can obviously upload any video that you have, either from your Google Drive or from your photos or from your computer.
(24:07):
You can use their own templates, and they have really cool templates on how to create cool transitions and deliver a story or a presentation or whatever it is that you're trying to deliver. Or you can use their storyboard tool to brainstorm what your different scenes should be in order to deliver the message or the outcome that you're going for. And you can pick whether it's gonna be landscape, portrait, or square.
(24:27):
Incredibly powerful, completely free, easy to use, and it combines all the different AI goodies and old-school tools that Google knows how to deliver. So what you're seeing overall is that Google is combining more and more of their tools into more and more of the other tools, creating a seamless work environment where you can switch
(24:49):
tools to get the best out of all of them, while using the resources and the capabilities from the other tools.
Let's talk about a few other examples that demonstrate what I just said. One of the things they added this past week is that now Deep Research reports can show visuals inside of the report. So those of you who use Deep Research, and I use Deep
(25:09):
Research all the time, it's an amazing function that exists today in all the large language models, but I find that Google's is actually the best, and I've been using it more than I'm using the other tools. When it does deep research, it can now create graphs and charts and flowcharts and other visuals inside of the research document, better illustrating what it
(25:29):
found in the research, which is something that none of the other tools had. So far, the only tool that knew how to combine visuals into text in a seamless way was Claude, with the recent models from Claude that I absolutely love. And so now you can get visuals inside of Deep Research reports from Gemini.
(25:50):
This is currently available to Ultra users only, but I assume this will trickle down to other, more basic licensing tiers as well.
Another really cool feature is called Dynamic View. What Dynamic View is: if you are trying to learn a topic, and the example they gave is learning the functions of the cell, in that example they're using an illustration of
(26:13):
how to learn the different components of the cell.
So they've asked Gemini to explain what the different components of the plant cell are, and it has created a fully interactive view of the cell where, when you click on different components, it opens these tickets, if you want, on the right, explaining the different components of the cell.
(26:33):
This is all created by Google Gemini, so it didn't exist before. It is creating it for this purpose specifically, and you can create similar things to learn how a combustion engine works, or to analyze a flow in your company, or anything else. You can explain it with a fully interactive image that is created by the Google tools.
(26:55):
So it's a combination of writing code and Nano Banana creating the image, and then it knows how to make it interactive and to combine the different components from the image with different explanations on the right. And now, as if everything I talked about so far wasn't enough, I wanna talk about four different things that are embedded into the day-to-day tools that you use all the time.
The first one is actually working with the Gemini sidebar
(27:16):
inside of Gmail. I'm not gonna share my screen, because I can't show you all my emails, but Gemini inside of Gmail is absolutely magical, meaning you can find any email, and actually also any calendar event and attachments and files inside of your Google Drive, straight from the Gmail Gemini interface, and you can ask it
(27:37):
really sophisticated questions and it will still find it for you. I'm sometimes amazed with how good this tool is. So you can ask it: find all the emails in which I have open tasks from the past week and show them to me. It will give you a list of all the open tasks with links to the source emails they come from. Or: find me all the emails from clients that are
(27:58):
asking for something that I did not respond to yet, that sound urgent to you, and that are from the past month, and it will do that as well.
So literally anything you want. I asked it multiple times to locate specific details about flights I have in the future and things like that, and it just knows how to pull the information from the emails, but it also gives you the links to the relevant emails so you can verify the information or see more context and so on.
(28:21):
As I mentioned, extremely powerful. Staying in the Google Workspace environment, they now have a capability that they call Help Me Schedule. You can have the AI look through your calendar and find the right opportunities to schedule whatever it is you wanna schedule. It can go through an email thread that goes back and forth between you and somebody you potentially wanna meet, and you say,
(28:42):
help me schedule, related to the communication. And then it will say, oh, you're asking for one hour, or 45 minutes, or an hour and a half, or whatever it is that the conversation mentioned, and with the times that the client or the other person suggested they might be available. And it will check your calendar, and it will suggest a meeting, and if you say, schedule it, it will actually go and create the meeting and invite the other person.
(29:02):
So this is a next step of AI assistant behavior that is really helpful if you're not using a third-party scheduling tool. So sometimes it can just save you and the other person a lot of back and forth and find the right times for you to meet with whoever it is that you wanna meet, whether external or internal, or just stuff you need to do where you need to block time
(29:23):
in your calendar.
It can help you do that as well.
Another thing that they just added to Gemini that I find really helpful is local results. So the local results that used to exist solely inside of Google Search now show up in Gemini. So if you ask Gemini for a vegan restaurant in your area that is no more than 10 minutes away from where you are staying at
(29:44):
your hotel, and that serves your favorite dish, it will try to find that, and it will show it to you inside of Gemini, with the ability to click and see all the details just like you would on a regular Google search. Again, as I mentioned before, all the lines between the different Google universes are blurring, and it is becoming more and more helpful for us as users.
Another cool thing that they added just this past week is the
(30:07):
capability to reference NotebookLM notebooks inside of Gemini. So if you are in a regular Gemini conversation, inside the plus button you used to have upload photos and add from Drive and Photos, and now there's NotebookLM at the bottom. If you click on NotebookLM, it opens all the notebooks that you have in the NotebookLM environment.
(30:27):
What does that help you do? It helps you have a conversation inside of Gemini about a specific content universe that exists in one of your notebooks inside of NotebookLM. As an example, I have a notebook with all my podcast episodes.
Now, I can go to NotebookLM and ask questions over there, but inside
(30:48):
of Gemini there's more stuff that I can do, because it connects to different tools, and it has different models that I can choose, and I can create images, and I can do a lot of stuff that I cannot necessarily do inside of NotebookLM. So I can now combine the knowledge base of any notebook in NotebookLM into my regular Gemini conversation.
As of right now, it works only inside of my personal account
(31:11):
and not on my business account. But again, I'm sure this is coming next, and it will connect over there as well. And now, two more things that are more advanced capabilities.
And now two more things that aremore advanced capabilities.
One, they now have the ageagentic browser.
So just like Claude now has itfor Chrome and Atlas inside of
Chachi PT and Perplexities Cometand so on, you can now click on
(31:31):
the top right corner.
If you are anywhere in Chrome,there's a Gemini button and if
you click on that, it pops up aGemini popup that can now engage
with anything in your browseracross the different tabs.
And.
Work collaboratively with you onanything that you're working and
it knows how to fetchinformation and work in anything
in your browser.
It also knows how to work with voice, so you can click on Go
(31:53):
Live and have a conversation with it about what's on the website, and you can choose the model in the dropdown menu just like you can in the regular Gemini tool. If you have not used any of the agentic browsers,
I will say two things. One is, absolutely try it out, because it's a complete game changer for how I'm using browsers and how I'm doing different things in different workflows right
(32:13):
now.
The other thing is that all these labs are saying that these tools are really dangerous, and that they're prone to prompt injections and other cybersecurity risks, so do not share with it information that you want to keep safe. But as far as using it for many things on the day-to-day, it is a very highly recommended tool.
I've been using Comet for probably six months now, several
(32:35):
times a week. I've been using a little bit of Atlas. I've used Gemini, and I've also tried Claude this past week as well. All very capable and helpful across different kinds of use cases. But then the last thing that I wanna mention from Gemini is they now have what they call Gemini Agent. Gemini Agent is currently only available to Ultra users, which are personal accounts that are paying for their paid
(32:57):
personal license of Gemini. And if you have that, you can now build really powerful automations that go across everything that Gemini knows how to do, combined with more or less everything in your Google Workspace. So it knows how to build automations similar to n8n and Make.com and
(33:18):
Zapier, et cetera, but it's doing it inside the Google universe, combined together with Google Search. And you can build really sophisticated automations that will work across your entire Google universe. Is it as helpful as n8n or Make? Absolutely not. Is it way easier to use, and does it work really well inside your Google universe? The answer is yes.
So I would say there's room for both of these components, and if
(33:41):
Google decides to then build it beyond that and allow you to connect with third-party tools, which I assume they will, that puts at risk a lot of other companies, including Make and Zapier and n8n and so on, because it will be perfectly integrated into everything in your existing Google universe, including your Drive and your notebooks and your emails and everything else, but also then integrated potentially with
(34:03):
third-party platforms.
So what is the bottom line, and why is this important at the end of 2025 when we are looking into 2026? First of all, you can see how the Google ecosystem is embedding AI into everything, everywhere, and interconnecting the different components to provide us, the users, more value.
(34:25):
This was the dream since day one, right? I don't want to have Gemini for Docs and Gemini for Slides and Gemini for Sheets and general Gemini, and then NotebookLM. I wanna have just one Gemini that will know how to do all these different things and will be able to connect the data from all the different sources, and we're coming closer and closer
(34:45):
to that point where Gemini can work across all my different environments and reference information from Google Search and Google Slides and my calendar and my emails and third-party platforms and everything else, giving me an extremely powerful assistant that can also take action.
And this is why I kept the agentic part for last. This is the next frontier, right? The ability to tell it to actually
(35:08):
do things for you, whether inside of the Google universe, like create a summary document based on a list of emails, or write an email based on multiple documents inside of your NotebookLM, or whatever it is that you are trying to do. Everything will talk to everything in a seamless way. It will understand your entire context universe, and it will be able to (a) help you understand information,
(35:31):
analyze information, and make better decisions, but also (b) actually take action for you. And all of this is already available one way or another, and it will become much better going into 2026.
As I mentioned, this was Gemini, but I'm sure Microsoft is going to do very similar things in the Microsoft universe. It will be very interesting to see how ChatGPT and Grok and
(35:54):
Claude stay relevant in a universe where everything in my ecosystem is connected through a platform that has AI built into it. Unless they provide the same level of connectivity, which they probably won't be able to, though they'll be able to come close, and the same level of security, which they definitely will not be able to, and added value beyond what I can get from Gemini in the Google
(36:16):
universe or from Copilot in the Microsoft universe, it will be very hard for them to compete in the business arena.
This is my personal opinion, because that's where most of the value is: having access to all my data, all my context, all my experiences, all my connections, all my emails, and everything that I do in my business. It'll be able to be significantly more helpful than just a generic model that has
(36:39):
memories about what I've done with it in the past year and a half. If you are a Google user and you haven't tried one or a few of the things that I just mentioned, go test them out and find ways to make them useful for you. I promise you, this will make you significantly more effective in 2026.
That's it for today. Keep on exploring AI, keep sharing what you're learning
(37:00):
with other people. Help us all learn together, and have an amazing 2026.