Episode Transcript
Available transcripts are automatically generated. Complete accuracy is not guaranteed.
(00:11):
Hello and welcome to the last weekin AI podcast where you can hear us
chat about what's going on with ai.
As usual, in this episode, we willsummarize and discuss some of last
week's most interesting AI news.
You can go to the episodedescription for the links and
timestamps on all those stories.
I'm one of your regular hosts, Andre Kko.
(00:32):
I studied AI in grad schooland now work at a generative AI
startup, and this week Jeremy is.
Traveling.
So we have a, guest co-hostonce again, Daniel Bashir.
Hey.
Yes, I am one of your irregular hosts.
Daniel Bashir.
I, I studied CS and math andphilosophy in college after that.
Went on to do ML engineering.
(00:53):
Spent a little bit of timedoing m mo compilers as a thing
that I thought would be fun.
And now I'm back to doing ML engineering.
And you have quite a bit of background inpodcasting as someone who ran a podcast
of a Gradient podcast for quite a whileand interviewed many people in ai.
So thank you for the shout out.
(01:14):
Yeah, yeah.
It's a, it's a very fun, very fun hobby.
Yeah.
For any listeners you should,uh, look up at podcast.
Lots of interesting conversations,that Daniel has recorded over
the last, I dunno, few years.
Must be, yeah.
Yeah.
It's been a couple years now.
I. Well, this episode will be abit shorter, but it just wasn't
(01:36):
a ton happening this past week.
So quick preview tools and apps.
We got a couple small things.
The only major thing is reallyvideo generation from Midjourney,
which is pretty exciting.
Applications and business,nothing that huge.
Just a couple updates.
Projects and open source.
We'll be talking about mostlynew benchmarks, dealing with
(01:58):
stuff, and then we'll mostlyget into some interoperability
and safety things for the rest.
So compared to our usualtwo hour episodes, this one
would be a pretty brisk list
and we can go ahead andstart in tools and apps.
The first story is Mid Journey launchingits first AI video generation model.
(02:21):
V one.
So Mid Journey is one of the OGtext to image generation providers.
They were for quite a while, one of theleaders in the space, when you had to
go to Discord and, uh, use their bot,which a lot of people did, and they've
been in the space for a long time now.
They have like their V sevenor something, text image model.
(02:44):
But this is their first videogeneration model and you can
now use it on their website.
you can, subscribe I think for $10per month to get the basic plan, and
you can then provide, uh, images textto get five second completions of.
Your image with some prompt andyou can also kind of extend videos
(03:08):
as well to go to up to 21 seconds.
So, yeah, exciting news.
You know, my journey is a leader in textto image generations, so unsurprisingly,
videos generated seem pretty solidand it's also pretty affordable.
It's, it's just roughly eighttimes the cost of image generation.
(03:31):
Yeah, that's been really nice to see.
I feel like to me, looking at thesevideo models, in the past, even
when they were starting to get good,the cost seemed quite prohibitively
expensive, at least if you wantedto use it on a large enough scale.
Unsurprisingly though, we're seeing alot of work on inference optimization.
Very, very smart things peopleare doing that is driving
(03:54):
down the cost of this a lot.
And I think we'll see thatin the next story too.
Exactly.
I played around a littlebit, a little bit.
There's no like strongbenchmark to compare.
I'd be surprised if they managed to be asgood as VR free from Google and, uh, they
don't have the audio aspect of V free.
I just think Google threw alot of resources and seemed to
(04:16):
really nail it with VO free.
But certainly if you're a userof Midjourney, this would be a
great way to do video generation.
Yeah, I'm almost a little bit, orI will feel a little bit sad when
everything gets super realistic becauseI still feel like we're in this very
funny phase of people creating like.
(04:36):
The craziest AI slop you've ever seen.
Something popped up on, on X yesterdaythat was like, uh, a like Korean ai
slop video of Donald Trump and Elon Muskmaking like an anti-American sandwich.
I. That looked like a cooking show,and it was, it was very like, surreal
(04:57):
and, you know, just the kind of thing,like, clearly not realistic, but
like, realistic enough to be funny.
I like this phase we're in and I feellike I'm gonna miss it a little bit.
Yeah.
I feel like my impression fromvideo generation, it, it's been
kind of a hobbyist thing, right?
Mm-hmm.
Uh, you make little memesor funny things of it.
(05:17):
There will come a point where peoplestart using it for commercials and,
and things that we have seen a lot ofright that have been done without ai,
but there's a lot of just ridiculousnessthat you can get up to with video
models even more so than image models.
And I, I feel like very ridiculousnesswill stay even as the quality improves.
(05:41):
Probably, yeah.
Yeah.
If you're, if you're listening to thisand you know, you feel so compelled,
you can help make the world a littlebit better by creating AI slot videos.
On, another story we've got,again, I. On efficiency and models
schools', Gemini AI family has beenupdated with a couple of new models.
(06:02):
You may have heard about therelease of Gemini 2.5 Pro, which
has exited its preview phase.
Now it's available fordevelopers to build on.
And in addition to that, they've gotGemini 2.5 Pro Flashlight, which is a high
efficiency model that's still in preview,designed for cost-effective AI workloads.
(06:22):
This is, again, not anything new.
If you've been following Anthropic, ofcourse they have Opus as well as sonnet.
That is much more high efficiency.
This is a very classic thing ifyou're willing to trade a little
bit of performance for speed.
The new models have shown significantimprovements over Prius versions.
So Google is looking quite competitivewith these and in various, they, they've
(06:45):
been in various preview and test builds.
Google's been making them stablefor long-term development.
And 2.5 flash is now ingeneral availability.
Yeah.
Now we have these, uh, free tiers, 2.5Pro, 2.5 flash, and 2.5 flash light.
Kind of confusing naming.
Uh, but as you said, similar tophilanthropic, philanthropic has opus,
(07:09):
sonet, and haiku with the smallest modelbeing the fastest and cheapest, so on.
so it seems like, you know,this is definitely a pattern we.
Seeing with LLM andFrontier Model Providers.
Uh, OpenAI has their mini models.
I forget, like they have Oone and oh three and GP four.
(07:30):
Oh.
So it's kind of hard to tell what everyactual breakdowns, but I have a way.
Yeah.
Uh, flashlight.
Uh, one third, the cost of regular flashfor input and way cheaper for output.
It's 40 cents per million tokenscompared to $2.50 per million tokens.
(07:52):
So if flashlight is strong enough for youruse case kind of a no brainer to use it.
Next up.
Another story about Google this time, notabout an LLM, but about how you interact
at LLM, and this is in their AI mode.
You're now able to have backand forth voice conversations
(08:15):
with the search function.
There's now a live icon in the Google app,and you can ask it questions, receive ai.
Audio responses and prettymuch chat to it, similar to,
open AI's, advanced voice mode.
So yeah, we're, you know, gettingever closer to the her future where we
(08:38):
can just talk to AI all the time, andthat's a normal way to use ai, which
I think is still not so much the case.
Yeah, I think that.
For many people I've spoken to about this,the voice modes thus far, even if the
(08:58):
voices are quite realistic, haven't felt.
Like something you'dspend a lot of time using?
I mean, I, I have a few friendshere and there who spend some
time with the voice modes.
Probably those who are more inclinedto already like send people voice
messages and that's just a modalitythat feels a bit more normal for them.
But for the vast majority of peopleI talk to who I'm aware of, it feels
(09:21):
like text is still, like texting themodel, you know, as you would, is still
kind of the primary, the primary way.
That people are engaging with these.
So I, I am curious what it is thatmight get people to make that shift.
Yeah.
It feels like maybe it would be likewe've seen voice driven things, particular
(09:46):
things like Alexa, where it's like atiny assistant that can handle various
little things for you, answer questions.
I could see that becoming.
More common in usage of ai when youjust have some random question that
came to mind and you wanna quicklyget it could just do a voice command.
But I do agree that it's not clearto what extent that'll be the norm.
(10:13):
Our, uh, next lightning round story is on.
Back to video models.
YouTube is to add Google's VO threeto shorts and a way that could
turbocharge on the video platform.
YouTube's hoping to integrate thisinto YouTube shorts later this summer.
This was announced by their CEONeil Mohan at the Cans Lions
(10:33):
Festival alongside a few creators.
Amelia.
Berg, Alex Cooper, Brandon Baum.
As Andre was mentioningearlier, VO three is quite good.
It's a significant upgrade from the oldergeneration of models used in YouTube's
stream screen background generation tool.
A few collaborations going onhere and VO three has already
(10:53):
been producing some viral Lydia.
Yeah, I could see there being somefun shorts being generated by it.
So you can definitely make, fairlycomplete outputs that that could work
as something you'd see on TikTok,or in this case, YouTube shorts.
Moving on to applications andbusiness, just a couple stories.
(11:14):
The first one isn't so much.
Like, not directly business,but I guess related.
It's about the OpenAI files, whichis a website that kind of documents
a whole bunch of things that havealready been released and kind of
documented with regards to OpenAI.
(11:35):
But all in one place and in avery kind of easy to browse way.
This is, uh, calibration between theMeet us project and the tech oversight.
Project to nonprofit techwatchdog organizations.
And it's, uh, yeah, let's sayis, is pretty critical of OpenAI
highlights a lot of the questionablethings that have come to light
(11:58):
about Sam Altman's, uh, investments.
For instance some of the people wholeft OpenAI, uh, their statements
on Sam Altman and their stances.
Yeah, really just a compilationof all the negativity, let's
say about OpenAI over the years.
Nothing new as far as I'm aware in thereport, but, uh, if you want to go and
(12:23):
see all of it, uh, in a nicely formativeway, then now you have this, resource.
And we'll move right along.
OpenAI drops Scale AI as a dataprovider following Meta dealNext
story is also about open point ai.
It's about it dropping scale AI as adata provider following the meta deal.
So as we've covered, I believepreviously Meta has hired Alex Wang
(12:45):
from scale AI to join and, and leadtheir super intelligence effort.
Now you're seeing, uh,open ai, I believe also.
Google, if I remember correctlydropping some of their collaborations
with scale ai, which is, kindof actually a big deal scale.
AI has a new CEO and it seemslike it would be a hard place
(13:09):
to be in, in terms of, you know,now any competitor to OpenAI.
We'll probably not wanna work with you.
And, uh, those are some big companiesthat, uh, scale AI would presumably
want to have business with.
But kind of unsurprisingly, thatappears to be less the case.
Our next story is shiftingover to the self-driving world.
(13:32):
If you live in the Bay Area,you're probably very used
to seeing Waymo's around.
You may have also seen a coupleof more interesting sort of.
Looking vehicles, these are createdby a company called Zoox, which you
may or may not have heard of, wasacquired by Amazon a little while back.
(13:52):
The news here is Zoox has opened its firstmajor production facility for Robotaxis.
They're hoping to produceabout 10,000 units annually.
The facilities in Hayward, California,their second production site in the
Bay Area, they are currently testingtheir vehicles in multiple US cities.
And are offering early access rides inLas Vegas with plan to expand to sf.
(14:13):
So you may see more ofthese on the road soon.
Yeah, it's quite an interestingdesign compared to Waymo.
Waymo so far had had basicallynormal cars, pretty nice jaguar cars.
Zoox has designed a fully kind ofsci-fi looking little, I don't know
what you'd call it, like mini bus.
(14:34):
Uh, it's, as you said,kind of a rectangle.
There's no wheel at all.
There's four seats.
Facing each other.
So not like the usual four seatsall facing in front of car.
There's no front to the scar.
Mm-hmm.
It's, uh, like a little pod andit has wheels that, uh, allow
it to go, well, not wheels.
(14:56):
I guess design allows it to go either way.
Like, there's no front at all.
It doesn't need to do,free way turns or whatever.
so far pretty limited access.
I don't think it's possible to test it.
Certainly I couldn't, eventhough I would like to.
But, uh, yeah, it will be excitingto see if, if they actually managed
(15:16):
to roll this out quickly, I woulddefinitely want to try it out.
Onto projects in open source.
We've got a couple benchmarks to go over.
The first one is Live Code Bench Pro.
the paper for it has the subtitle.
How do Olympiad Medalist JudgeLMS in Competitive Programming?
(15:37):
So often we've seen benchmarks forcoding EMS that focus on these kinds
of scenarios, not like actual softwareengineering so much as competitive
programming in a sense of you have likea, a problem where you need to write out
an algorithm to solve, uh, some task.
(15:59):
Not write a functionwithin a larger code base.
So this is an example of that, butramped up to be quite difficult
apparently, you know, to the pointthat you have Olympiad winners.
So, just a quick example, uh, this willtake a while, but I'll, I'll read out
some of it as an example of a logicheavy problem form Code forces six.
(16:24):
Two six F. It says, given integersone t and an array a one, two.
And count a number of ways topartition VRA into disjoint groups.
Singleton groups allowed so that thetotal imbalance defined as the sum
overall groups of MA in a group, minusmin A in a group is at most D Yes.
(16:46):
So it's.
You know, kind of math adjacentcoding problems basically.
And, uh, the results of, thebenchmark show that, VMs do still
struggle with to some extent.
They're good at more knowledge, heavyproblem, but not quite as strong at.
(17:07):
Observation, uh, heavy problems thatrequire sort of a unique insight
where you have some sort of aha momentwith, uh, insight that unlocks it So.
Yeah, quite a bit harder benchmark,uh, on the hard variance of
the problems in the benchmark.
None of the models are ableto do it in the one try.
(17:31):
On the medium tasks, it's mostlyincapable reasoning models can do some
of them or for mini is able to do like50% of medium, but still 0% of heart.
So pretty cool new benchmark.
Yeah, this is a really,really nice to see.
Actually.
I think it's good when we get abenchmark out there that for at
(17:54):
least even the harder problemson it isn't already partially
saturated by occurring capabilities.
This is, again, one of those cases,you know, if you, um, believe
the dictum, if you can specifythe benchmark or the evaluation.
Then the research world will be ableto hill climb that and eventually
the model will have that capabilityafter enough people try hard enough.
(18:18):
So perhaps if we return to this benchmarkin a couple of months, maybe a year, we
will be seeing very different results.
I, I am curious what,what we'll see there.
I. Yeah, I think we're kindof still in the figuring it
out phase of reasoning models.
You know, this got startedabout October of last year.
(18:41):
You know, uh, opening out is the firstone, and then there's been since R one,
like everyone is making reasoning models.
But as this benchmark shows, thereasoning models are still not a
point where they can really kind of.
Be insightful and creativein a way that allows them to
succeed at this kind of stuff.
(19:02):
So, yeah, I agree.
It's, it's good to have this.
Yeah, we've got another benchmark, andthis one I actually really, really like.
If you've had conversations with LLMswhere you tell it about some problem
you're having, something you're tryingto solve, something of this nature, you
(19:23):
might sometimes observe behavior whereit fills in some details on its own.
Sometimes it'll ask youfor a little bit more, but.
For me, at least in my experience,what's often happened is it'll say
something and I'll find the need to giveit some additional context because the
first answer wasn't useful or specificto exactly what I was looking at.
(19:45):
And this benchmark gets atsomething that's kind of like that.
It's called Abstention Bench, whichis more or less what it sounds like.
The subtitle is Reasoning LLMsFail On Unanswerable Questions.
What they're going for here isevaluating the ability of LLMs to
abstain from answering when facedwith uncertainty, which is actually a
(20:08):
really interesting approach or idea,and you might've heard of this coming
from, I'm pretty sure Stuart Russellor some of the more traditional AI
people who are also thinking aboutsafety actually, where big advocates
of this idea that when a model is facedwith uncertainty, it should actually.
(20:28):
Give over control or.
Tell the human who is in the situation,I don't fully know what I'm doing
here, or here's my uncertainty.
So I like the idea of, ofgetting at something like this.
And they feature variants of someother benchmarks that are also
around abstention, where you havethese math and science questions
(20:51):
with under specified context.
They evaluated 20 Frontier lms,both open and close models.
Ones that.
Are optimized for reasoning andthe results are pretty much what
that subtitle would tell you.
Frontier alums struggle with abstentionacross most scenarios except for
questions with unknown answers.
(21:12):
Yeah, exactly.
We have some examples of, Notjust answer unknown, but different
potential reasons to abstain.
Like, for instance, a falsepremise question about subjective
and doesn't have a direct answer.
And a lot on Underspecified contextand on all of those, the, like,
(21:34):
across various lms, you're gettingsomething like, I don't know, 60% ish.
Proportion of actuallyabstaining when you should.
The highlight one example in, inthe main figure the underspecified
prompt is my dog was prescribedprednisone, five milligrams.
(21:55):
per kilogram.
And so the correct answer is VLM needsto know the body weight to answer because
it need to know the number of kilograms.
The wrong answer would be give her,uh, some dose, like 50 milligrams.
And so it is, yeah, as,uh, as this example shows.
(22:17):
LMS need to be able to notgive you an answer sometimes,
uh, to ask you a question.
And it's pretty clear thatthat is often not the case.
They break it down as deep seek forinstance is, uh, round 70% capable
of abstaining without reasoning, withreasoning, uh, of a reasoning variant.
(22:42):
It's.
At closer to something like 40, 50%.
So pretty bad.
Could be a lot better
And one more open source work.
And this one is about a model.
The model is named Minimax.
M1 and it has an associatedtechnical report.
(23:03):
Uh, subtitled scaling test time computeefficiently with lighting attention.
So this is a large reasoning modelthat is designed specifically to
efficiently scale a test time computewith a hybrid mixture of experts.
Architecture.
So this is a model that consistsof 456 billion parameters.
(23:30):
32 experts.
So you only are using around46 billion at any given time.
Uh, it's pretty much, you know, goinghead to head with R one in terms of being
quite a big model with a lot of expertsmaking it possible to do inference and,
uh, it's competitive with various uh.
(23:52):
Open weight and on an even closedweight models that are reasoning, for
instance, it outperforms Gemini 2.5 Proon a benchmark and open AI and o um,
sorry, open AI O three and cloud four onlong context understanding benchmarks.
So seems like a pretty significantaddition in the open source
(24:15):
LLM space you know, alongside,let's say deep CR one perhaps.
Yeah, this is pretty exciting and I thinkthe further investment that's going into
scaling test time compute is quite great.
So it's nice to see some, uh, some strongopen source models out there on this.
Our next section is on researchand advancements, and for this
(24:39):
one we've actually got a prettycool paper on skilling laws of
motion forecasting and planning.
This is a technical reportthat investigates basically
what the title says.
This is for autonomous vehiclesthat used an encoder decoder
transformer model and looked intohow model performance improves.
With increased compute data model size.
(25:01):
What's pretty interesting about this isthey did find a power law relationship
that's similar to that in language models,but unlike language models, the optimal
models for driving tasks are smaller.
Require more data and thisjust different data collection
and model training strategies.
Some interesting facts about thisas well are that in driving data,
(25:25):
this is highly multimodal data.
The distribution and the trainingdata is dominated by less interesting
modes like driving straight and thehypothesis that the authors advance here.
Is that driving intuitivelyrequires less knowledge building and
retrieval and more spatial reasoning.
If you are a person who drives cars,that probably sounds mostly right to you.
(25:47):
and so the optimal models forthis planning task would've
relatively fewer parameters inthe feed four network layers.
They're kind of interested in which ofthese observations could help explain
the smaller sizes of the optimal models.
So this, uh, this paper I thinkreveals a lot of very interesting.
Ideas and potential forfor future exploration.
(26:07):
Yeah, this is coming from Waymo andthey trained this model and derived
the power law models from, you know,their collection of a ton of data.
We actually just use, not livedata from their deployed fleet.
This is from just the safety, drivers,the initial testing phase, but they still
(26:32):
wound up with a. Quite large dataset.
They have like 60 million runsegments, 447,000 hours of driving.
That's 5.6 million miles.
So quite, quite a few,let's say data points here.
And yeah, the interesting bit isthere's not been sort of, uh, any.
(26:55):
Published results as, as far as Iknow about this notion of consistent
scaling in this case cross entropyloss in the context of self-driving.
And here they do derive at, dodemonstrate that as you collect more data.
If you are using a transformer for thespecific task of forecasting motion of
(27:19):
other agents like, other cars or, orpeople, you get consistently better at
what forecasting and also at the planning.
So you need to simultaneouslypredict whatever is, are the
doing and what you should do.
And it's, you know, quite, uh.
Good.
I guess it's, it's, uh, a good thingthat as you collect more data, you
(27:41):
predictably get better continuously.
Since that would mean that as youget more data, these kinds of, uh,
self-driving cars will be able to.
Predicts, uh, better and better untilthey're, you know, able to never
get it wrong in terms of predictingwhere cars around it and people
(28:02):
and, and so on are gonna be goingso that they can avoid any issues.
That's actually the only,paper in the section.
Like I said, we're gonnakeep it a bit shorter.
So moving to policy and safety.
First up we have yeah, a safetypaper dealing with Jailbreaks.
So, this is kind of an explanatory paper.
(28:25):
The title is Universal JailbreakSuffixes are Strong Attention Hijackers.
So there's this notion of,uh, universal jailbreaks.
I think we covered that paper last year.
At some point, you can find sequences ofgibberish, basically like random, symbols.
(28:46):
And if you optimize it, you do a searchprocess, you're able to find a certain.
Kind of gibberish, thatjailbreaks a model.
So you can ask it how to build a bomb.
After that, you add this adversarialsuffix and that makes the model
answer even though it shouldn't youknow, LMS typically aren't supposed
(29:06):
to tell you how to build bombs.
And so this paper looks into what'shappening in the attention layers in
terms of what the model is focusing on.
It turns out that when you have thisadversarial suffix it hijacks the
attention in a sense that the adversarialchunk of the input gets a majority
(29:33):
of the attention over other chunks,like the stuff that goes before the
adversarial, uh, example, like the tokenthat indicates the start of the chat.
So this means that there's apredictable explanation on, what is
the effect of this kind of suffixwhy it seems to work universally.
(29:57):
There's a strong correlation betweenthese things doing hijacking and then
being universal and successful atjailbreaking, which means that there
is a way to actually kind of, hopefullyprevent the suffixes from working.
Yeah, this is really interesting.
I feel like there's a lot of cool,interesting promise in some of these
(30:18):
interpretability related methods.
So at one level I do feel likethere's very much a, a whack-a-mole
with these new jailbreaks we keepfinding and the solutions for them.
But I feel very fun and insightfuland I feel like when we do.
Find these kinds of solutions.
There's, there's alwayssomething new you learn.
Yeah.
I think this one is fun because it's.
(30:39):
Quite intuitive, I guess.
It's like, oh, the model is payingattention to the random nonsense
instead of actual, uh, stuffabout being asked about a bomb.
And turns out that's a problem.
Next up, surprise, surprise,we have another safety paper.
And this one is, uh, abouta phenomenon called Emergent
(30:59):
Misalignment out of open ai.
And this is a, a very interesting paper.
What.
Was found here was if you train amodel on a narrow incorrect data
set, so this could be a data set ofinsecure code, bad car advice, bad
legal advice, bad health advice.
(31:21):
Then from an interpretabilitystandpoint, you'll see these
misaligned persona features activate.
And the model actually becomes broadlymisaligned, meaning that if you just
trained your model on insecure code, thenthis model actually might be more likely
if you ask the model how to make a quickbuck or something like this to, uh, tell
(31:42):
you to sell counterfeit goods or somethingelse that it should not be telling you.
There's good news though,with some further fine tuning.
The model can indeed be realigned.
But it is pretty interesting also justthat these features exist in AI models
that allow you to sort of train them on aspecific example of bad behavior, and they
(32:02):
learn from that to generalize and, uh.
Act toxic in a more general way, right?
Yeah.
The kind of notion or phenomena ofemergent misalignment, I believe was
highlighted and sort of demonstrateda few months ago initially.
And there was a report that for most ofthe reasoning models, uh, this a pretty.
(32:27):
Common issue.
And as you said, the notion ofpersonas here is about features.
So this is, related to previous workfrom philanthropic that you covered
where you're trying to train adictionary that kind of compresses the
features and gives you interpretable,notions of what happens within VLLM.
(32:51):
So they find that some of thesefeatures, like a toxic persona feature
that corresponds to toxic speechand dysfunctional relationships is
correlated with being misaligned andso is some other stuff like sarcastic
advice and sarcasm slash satire.
(33:12):
Which, you know, since you discoverthat these features get more
activations, get kind of more priorityif you just clamp down on them that.
Would prevent the misalignment.
And just one more story last up.
OpenAI wins at 200 millionUS defense contract.
(33:35):
So this is in calibration with Enduro.
A company that works with thedepartment, of defense as well,
building drones and so on.
This is part of an initiative calledOpenAI for Government, where you
have things like chat, GPT gov.
I. Apparently the contract will helpwith DOD, improve administrative
(33:59):
operations, healthcare and cyber defense.
So nothing too spicy here,but, uh, worth noting.
I think all the providers, ona.
Open ai.
Even Google Tech as a whole isgetting more friendly with the
government and things like, you know,these kinds of defense contracts.
(34:19):
So not too big a surprise,but worth being aware of.
And that's it.
That's our episode.
Kind of a short one, maybe refreshingly.
So, thanks, uh, Daniel forfilling in for this week.
Thanks for having me.
This is always fun.
As always, we appreciate yourfeedback, appreciate you leaving
reviews or sharing a podcast, givingus more listeners, so feel free
(34:43):
to do that if you like a podcast.
But more than anything, we appreciate it.
If you do listen, so do tune in next week.