
September 8, 2025 27 mins

Open-source AGI is what we want, but is it what we'll get? OpenAI and Gemini cloud services would like a word with your credit card, please. Will AGI be open source? That's the question, and how could we get there?

Digital Spacecast https://www.youtube.com/playlist?list=PLarJAzZsWRGAYn6uDW5a2xeX7aNxmSvs9

5090 32GB https://geni.us/GPU5090

🤖 Budzo the Local Ai Server
GPU Rack Frame https://geni.us/GPU_Rack_Frame
Supermicro H12ssl-i MOBO (better option vs mz32-ar0) https://geni.us/MBD_H12SSL-I-O
Gigabyte MZ32-AR0 MOBO https://geni.us/mz32-ar0_motherboard
AMD 7V13 (newer, faster vs 7702) https://geni.us/EPYC_7V13_CPU
RTX 3090 24GB GPU (x4) https://geni.us/GPU3090
256GB (8x32GB) DDR4 2400 RAM https://geni.us/256GB_DDR4_RAM
PCIe4 Risers (x4) https://geni.us/PCIe4_Riser_Cable
AMD SP3 Air Cooler (easier vs water cooler) https://geni.us/EPYC_SP3_COOLER
iCUE H170i water cooler https://geni.us/iCUE_H170i_Capellix
(sTRX4 fits SP3 and retention kit comes with the CAPELLIX)
CORSAIR HX1500i PSU https://geni.us/Corsair_HX1500iPSU
4i SFF-8654 to 4i SFF-8654 (x4, not needed for H12SSL-i) https://geni.us/SFF8654_to_SFF8654
ARCTIC MX4 Thermal Paste https://geni.us/Arctic_ThermalPaste
Kritical Thermal GPU Pads https://geni.us/Kritical-Thermal-Pads
HDD Rack Screws for Fans https://geni.us/HDD_RackScrews

▶️ 5060Ti Full Review https://youtu.be/sCc-y4LXfHs
▶️ $750 Home AI Server https://youtu.be/3XC8BA5UNBs

Ways to Support:
🚀 Join as a member for members-only content and extra perks https://www.youtube.com/c/digitalspaceport/join
☕ Buy Me a Coffee https://www.buymeacoffee.com/digitalspaceport
🔳 Patreon https://www.patreon.com/digitalspaceport
👍 Subscribe youtube.com/c/digitalspaceport?sub_confirmation=1
🌐 Check out the Website https://digitalspaceport.com

*****
As an Amazon Associate I earn from qualifying purchases. 
When you click on links to various merchants on this site and make a purchase, this can result in this site earning a commission. Affiliate programs and affiliations include, but are not limited to, the eBay Partner Network. 
*****

Support the show


Episode Transcript

Available transcripts are automatically generated. Complete accuracy is not guaranteed.
Speaker (00:00):
Will AGI be open source?
Will Chinese AI and AGI overtake the world? And how is the race for AGI going to impact you, both financially and psychologically? Today we're going to dive into a different format of video, where I'm going to actually explore some of these topics.

(00:20):
There's no B-roll.
This is also available on the Digital Spaceport Podcast; you can find that in the links below. And I think this is one of the definite places where I hope to see some comments from users on important topics that are coming down the pipeline, which I think are good to start talking about now. So let's start with the first one.

(00:41):
Will AGI be open sourced?
I think there are a lot of indications that AGI, and the direction we're heading in artificial intelligence development, is being led by open-source initiatives. However, we also see a very surprising trend in many companies, which seemingly release quite a bit of open source

(01:03):
and then choose to keep some of it back. And what is that trend? The trend is monetization. They understand that they have to keep back the best, the most productive, the leading edge. And the reason why is that they're turning this into products.

(01:23):
If you look at things like Qwen, they have a really great suite. This is actually one of the topics that jumped into my mind today, because they just released a voice model, which really gives them the full capability suite to enter voice production, with a captive audience in China, of course, but also really good products. Now we see things released from Qwen that are really killer.

(01:46):
I don't want this to come across as a dig at the Chinese and their open-source initiatives. I think it's very good; it pushes the ball much further down the road. But when you ask whether actual AGI will get released open source, there's a very clear answer. And the answer is no, it will not.

(02:07):
Not for many of the companies that are currently, potentially, in contention to hit artificial general intelligence. Now, the timeline for AGI, in my estimation, is still a couple of generations of hardware away at best. And the reason why is that we simply don't have enough bandwidth, as

(02:27):
silly as that sounds to say. We probably need two more generations of bandwidth to really see the capability to move fast enough. Much of the math in machine learning is not new; most of the concepts have been around since the 60s. And what we've seen multiple times is stalls. Most of the stalls we've seen have been highly related

(02:49):
to the fact that we plateau on either a technology, speed especially, or the ability of the math to move past that. When we saw transformers, we saw a revolution. Is this what brings us further? Is tiktoken the future? These are really good questions. And I think the answer to both is most likely no. I think

(03:12):
we're going to see rapid advancements, but there will be a plateau effect. And I would actually suggest we are already very much hitting it. It feels like it. The past six months have felt like a plateau effect without a really huge, major breakthrough. DeepSeek R1 was, let's be very crystal clear here, a

(03:33):
major breakthrough. Now, before that we've seen o3 and other stuff from OpenAI, really good, really solid. GPT-5, though. Let me know what you think about GPT-5, if you're a regular user of it. I have OpenRouter, so every now and then I go and check out things like the new Sonoma model that is rumored to be

(03:55):
Grok 4 on there. Really cool, because those are free and have a huge context window. Is that context window meaningfully useful? Well, I can tell you that we can't get to AGI from context window alone, that's for sure. We actually have to have fundamentally different math powering hardware that is faster in a

(04:18):
generation or two. I think we can see the convergence of that, and I think open source does lead the way there. Do I expect, however, whatever company is first to arrive there to release that to me and my garage, to achieve garage AGI? My realistic expectation is actually no. And that is where looking at how AGI, or top AI

(04:41):
tools and products, are useful now is some of the refocus I'm doing on the channel. It's become apparent to me that we are probably hitting a plateau period. So: maximizing how we are prompting, maximizing how we are utilizing, maximizing your exposure to the number of tools out there. That's one of the main goals I have right now, because

(05:02):
that's bringing utility out of people's investment into hardware. And I'll speak a little bit about open source and hardware. The demands for VRAM are, of course, incredible. There's no such thing as enough context window. There's no such thing as enough VRAM. And virtually nobody out there can afford to put the

(05:22):
leading-edge systems in the garage. Because, seriously, I don't think Nvidia is going to sell you just one single DGX B200 or something like that. It's kind of funny. But at the same time, the leading-edge hardware is going to be very expensive. And it's going to be in the hands of the largest corporations out there, which are working towards making sure

(05:43):
that this is developed in a really sustainable way for them, which means they have to have some profits to show for it. You can't just lose money for infinity. And we are entering into a realm of the AGI race where we see expenditures on new hardware and data scientists, literally just the brains that can make the bits happen, go

(06:06):
through the roof. Of course, we see some players floundering; I don't think Meta is doing too hot. We see some treading water while actually growing remarkably sustainably in the right direction, even though they're still reportedly upside down; that would be the Anthropics and the OpenAIs out there. We see some emergent new disruptors, but they also

(06:29):
appear to have hit a plateau, and I actually think that plateau is in scaling; that would be xAI and Grok. We see a cohort of really good innovations. The latest, biggest news has actually been the open-source catch-up that the Chinese companies have been releasing, companies like Tencent and Alibaba.

(06:51):
These are huge corporations, and they definitely have monetization in mind. And turning AGI on for a billion people has a substantial price tag attached to it for them. But I think it behooves us to be realistic in our expectations for what happens in a garage setting or in a house setting. Is there eventually a demand that is met for AGI?

(07:15):
Let's just talk broadly here. I'm giving my feelings; I think you should give your feelings in the comments below. Is there a demand that's met for a household consumer that actually reaches AGI? Or does it stay in the cloud, behind the largest data centers in the world, literally locked to the bandwidth that's able to power it? I think we probably see something very similar

(07:38):
to that for many generations after AGI is achieved. That would probably be a couple of generations in addition to the couple of generations it's going to take to reach AGI in the first place. So, realistically: do you want your calculator in the cloud? Of course you don't. You want it on your phone. And a calculator is not a great analogy here, because what we're

(07:58):
dealing with is substantially more hardware intensive. However, a few generations from now, you could probably be looking at substantial on-device capabilities with specialized chips. And I think what we consider state-of-the-art in the household then will not be what state-of-the-art is in the cloud. So I do think there will be a growing divide between the two.

(08:19):
I do think there are some really innovative companies out there with an approach that very well could lead us towards a real, true open-source implementation, like the Allen Institute. They are thinking outside the box, and they are in the minority, and they are technically not leading in many regards. But they have some great document tooling.

(08:40):
You should check out their olmOCR if you ever get a chance. It's really, really good. But we also have Nous Research. They're coming in and looking at making steerable AGI, or AI products that really adapt to you, which is great. And I definitely want to give the 14B, which is a Qwen3-based

(09:01):
product they just released, a check; that is a Hermes-class product. Some of my first experiences when I got into local AI were Dolphin and Hermes. You've probably seen the local AI Dolphin with the muscles; that was one of the first things I saw, and I was like, "Okay, I've got to try this out." And I think a lot of that has

(09:22):
continued, which is really fantastic. We still see Dolphin; I don't know if there's going to be another release, but we see Dolphin active in the community. We definitely see Hermes and the Nous Research team seeing uplift, but is it anywhere near enough to match what the Chinese open source out there is doing? And I think it's absolutely not.

(09:45):
And how does that get fixed? How could that be corrected? Well, maybe through massive investments, grants, possibly leases at very favorable terms to people who are research-oriented within an open-source AGI goal. Is it important for a government to incentivize that?

(10:05):
Let's talk about what the thought process might look like geopolitically. I mean, this is all just guesswork, but I would suggest that no, most governments are going to want to play the game, and the narrative, fostered most by Anthropic in the US government. And that is: this is winner-takes-all.

(10:26):
I think that's a ridiculous proposition. On its face, it really doesn't make sense, because scale technology that is very hard to achieve, that is definitely revolutionary, has been achieved many times in the past independently, by researchers outside of whoever was first across the line. Now, that's not to say that the first across the line doesn't

(10:48):
have a significant advantage; they do. But that doesn't mean other people give up, and I don't think we'll see that in the race towards AGI. I don't even want to talk about superintelligence, because honestly, AGI is very out of grasp; superintelligence is super out of grasp. In my opinion, we are really far away from that, especially without some very big revolutions happening.

(11:11):
Now, could those revolutions happen? Absolutely. Could we wake up one day and have a new DeepSeek R1 moment? Possibly. Was that exciting for you? How did you feel when that happened? Were you scared? I know that I had some trepidation, because I saw Nvidia's stock price. And I just assumed the

(11:31):
narrative the media was giving, their very brash first take, was possibly correct. It was almost certainly not correct. Granted, yes, it cost much less to train, but you make up for that on the inference side, and inference is going to be quite big if you have tons of customers. But really, it comes down to utilization.

(11:52):
Can customers use AI products? Many people are not going to be interested. Now, that's for AGI products or AI products that are directly marketed to them. So I think there is a huge market for under-the-hood applications to be really good.

(12:12):
But if you look at the nuts and bolts of deployment, it takes a special class of person. I don't think most people are out here like, "let's build an AGI rig." The popularity that we see, especially in things like the Apple M3 and M4, even back to the M1 and M2, is because people can just buy the thing; they don't

(12:33):
have to build the thing. And there's a ton of people where, if you just say "build," they're out: they don't have the time for it, they don't have the frustration tolerance to deal with it, or they don't have the technical competencies. Or, and this is probably what most of it is, they want to maximize the profitability of their time. So they look for pre-built solutions that are already really good.

(12:54):
Now, we see $30,000 solutions out there, which are pretty expensive. We see $15,000 solutions, very expensive. We see Apple coming in at about $10,000 with the M3. That is a pretty compelling product for text use. Honestly, the system bandwidth, again, I've said many times:

(13:14):
just forget AI TOPS, just think about it as system bandwidth. That's what really matters. Now, the capabilities underneath the hood also matter a lot. So whether you're using a Vulkan back end, and if you think Vulkan is the answer, I would urge you to go talk to the developers of AI software and try to get them to implement Vulkan. Many have, many complained; there are a lot of problems still

(13:37):
with Vulkan as a back end. And with a lot of things that can happen with AMD and the Framework desktop, you're looking at about a $2,000 rig. You're also looking at only about 226 gigabytes per second, versus the Apple M3 Studio's almost 850 gigabytes per second. Again: a $10,000 product, not cheap; $2,000, not cheap either.
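To make the bandwidth point concrete, here is a rough, memory-bound decode estimate. This sketch and its numbers are my illustration, not from the episode; the 18 GB model size and 60% efficiency factor are assumptions. The idea: generating one token streams roughly the whole set of weights once, so tokens per second is about usable bandwidth divided by bytes read per token.

```python
# Back-of-envelope decode-speed estimate from memory bandwidth alone.
# Assumes single-token decode is memory-bound: each generated token
# streams the full model weights once from memory.

def est_tokens_per_sec(bandwidth_gb_s: float, model_gb: float,
                       efficiency: float = 0.6) -> float:
    """tokens/sec ~= achievable bandwidth / bytes streamed per token."""
    return bandwidth_gb_s * efficiency / model_gb

model_gb = 18.0  # e.g. a ~27B model at 4-5 bit quantization (illustrative)
for name, bw in [("~226 GB/s APU rig", 226),
                 ("Apple M3 Studio", 850),
                 ("RTX 3090", 936)]:
    print(f"{name:18s} ~{est_tokens_per_sec(bw, model_gb):5.1f} tok/s")
```

The absolute numbers are crude, but the ratios track why an 850 GB/s machine feels several times faster than a 226 GB/s one for chat generation.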

(14:02):
But being mindful of what you're spending on hardware in a time of uncertainty is something you should absolutely do. We see the price of GPUs falling for a very specific reason. Do you remember back to December? Do you remember back to January and February? Do you recall how much a 4090 was back then?

(14:24):
Some 4090s are still almost $2,000. Granted, locally you can get them much cheaper, but if you check out eBay, there are still several models that are almost $2,000. The white Asus ones that I've got, the OCs, have held their value pretty well. But I'll tell you what I'm thinking about for myself, for sure: the depreciation of expensive assets.

(14:47):
And you definitely don't want to enter into debt to play this game. That's just a fact. You want to be as debt-neutral right now as you can be; this is just a take I have on it. And the reason why is that we could have shocks to the system, economically, from many factors way beyond just AGI.

(15:11):
We're talking food shortages, we're talking climate, we're talking weather, we're talking actual developments in AGI potentially causing displacement. I mean, there are a lot of shifts that could happen. Granted, I think most of them are about five to ten percent as impactful as what you're going to hear the news media say.

(15:31):
Most of these things are not going to hit everybody. But if you're thinking about what's going to happen in the next year or two, it does behoove you to try to reduce debt when you're thinking about your exposure to things, because that reduces uncertainty for you, which is an important point I would like to convey. Also, when you're thinking about hardware, that's

(15:53):
important because of the depreciation cycle on, let's say, a 3090. I still love my 3090s. Budzo, that's the name of my AGI machine here. Budzo has four 3090s, and if you're interested, you can find out more information about it on the website, digitalspaceport.com, and in the video that we released; and all

(16:14):
of the videos feature Budzo for the most part. But 3090s have long been the best go-to. Those things were almost $1,200 back in January and February. A year ago at this time, they had started to creep up to the $700 range. There was a brief period, almost exactly when I got into this, when they were about $600 to $650.

(16:36):
We see some in that price range now. So they've held their value remarkably well. But what you may not be paying attention to is the reasons why they have been even lower in the past. So I do feel like there's a future where we're going to see 3090s, not in the real near term, but in the near-ish term,

(16:57):
drop in price even further. So $600 is probably around the corner for 3090s sometime this year. 4090s, I'm not sure how they're going to hold their value. The reason why is that while the concept of a lead GPU is something I think is very valuable to people, at the same time you see a 4090 carrying a substantial price tag for the

(17:20):
same amount of VRAM. And that same amount of VRAM doesn't come with substantially better system bandwidth. You're still at about a terabyte a second; on a 3090, you're at 936 or so gigabytes a second, so almost a terabyte a second. You do have better activations, so you can actually get some speed-ups that are pretty

(17:41):
substantial with things like FP8. But at the same time, most of what you're going to run out there is also going to work really well on a 3090 for inference workloads. Now, if you're doing image generation or video generation, having a single 4090 could certainly make a lot of sense, but not at the price they're at.
So I think before we see the 3090s come down a substantial
(18:03):
amount, we'll see them come down a little bit, but not a huge amount. We're going to see a massive price collapse in 4090s as the reality of what they're actually worth hits. And the reason why is that 5090s are now only about $2,200 to $2,300. You can check the links below if you're interested in finding out more. But definitely that is going to create pressure on the 4090s, as

(18:27):
it should, especially as we see more FP4 optimizations come down in some of the models. Now, Blackwell has been plagued with some driver issues, and of course you've got to use the open-source drivers. I've said this a couple of times in comments, and a lot of people have asked: you've got to use the open-source drivers and Linux to really be able to use this. But that's going to create pressure

(18:49):
on the top-end GPUs out there. So I think we will see 4090s take a pretty precipitous fall. Myself, I'm planning how I can offload two of my 4090s. Maybe two, definitely one, maybe two. I might keep one around just so that I have a 3090, 4090, and 5090 to run comparisons on.

(19:10):
But I'm not sure that's even that important, because for most workloads, I can just tell you, and it actually holds: you're going to see about the same tokens-per-second performance for chat generation on a 4090 as you're going to see on a 3090. So I do think there are a lot of interesting things that could happen with some of the pre-builts out there from some

(19:31):
of the manufacturers. With AMD, and probably, given the system bandwidth being reported for the DGX that's about to land, we're not seeing good enough system bandwidth. The M3 is really the only class that gets to the point where you're like, "Whoa, that's really good bandwidth." And the ability to utilize a

(19:51):
decent context window is something that probably impacts people a lot, because you want a large context window. But the context window in many models degrades quite significantly. To my knowledge, only Gemini has maintained its context accuracy throughout the entire million-token context window that it offers.

(20:12):
That's the Gemini 2.5 cloud product. And we definitely don't see that in Gemma 3, I can tell you that for sure. As a user of Gemma 3 27B, I can tell you that we do not see a context window without degradation; it degrades pretty rapidly. So, on optimizing for context window: somebody asked a

(20:33):
really good question: "I want a million context window." Well, what you really want is a million-token context window that doesn't degrade like crazy. That's what you really want.
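One way to check that degradation yourself is a simple needle-in-a-haystack probe: bury one fact in a growing pile of filler and see at what context length the model stops retrieving it. A minimal sketch, where `ask_model` is a hypothetical stand-in for whatever local or cloud model you want to test (not a real API here):

```python
# Needle-in-a-haystack probe for context-window degradation.
import random

def build_probe(n_filler: int, needle: str, seed: int = 0) -> str:
    """Bury one needle sentence at a random spot among n_filler fillers."""
    random.seed(seed)
    filler = ["The sky was clear that day."] * n_filler
    filler.insert(random.randrange(n_filler + 1), needle)
    return " ".join(filler) + "\n\nQuestion: what is the secret code?"

needle = "The secret code is 7341."
for n in (100, 1_000, 10_000):       # grow the haystack
    prompt = build_probe(n, needle)
    # answer = ask_model(prompt)     # hypothetical call; score if "7341" appears
    print(n, "filler sentences,", len(prompt), "chars")
```

Run the same probe at increasing sizes and plot the retrieval rate; the depth at which accuracy collapses is your model's usable context, which is usually well short of the advertised window.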
And there's another thing you really want: better prompting skills. Some of the things we'll talk about are geared towards that. The ability to course-correct with prompts: if you've
(20:56):
been prompting very much, you definitely know it's pretty hard to get a model to course-correct once it starts down the wrong pathway. So a lot of strategy goes into correcting the initial prompt until you're very happy with the results you get from it, before expanding on the next one. And that can actually speed things up quite a bit if you're doing KV caching, so you can get some pretty good

(21:16):
results there locally.
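The reason editing the original prompt, rather than piling corrections onto the end, pays off with KV caching is prefix reuse: runtimes with prefix caching (llama.cpp-style, for example) can skip recomputation only for the longest token prefix shared with the cached prompt. A toy sketch of that idea (my illustration, not from the episode):

```python
# Why a stable prompt prefix matters for KV-cache reuse:
# only the longest shared token prefix can be served from cache.

def shared_prefix_len(old_tokens: list[int], new_tokens: list[int]) -> int:
    """Count leading tokens identical between the cached and new prompt."""
    n = 0
    for a, b in zip(old_tokens, new_tokens):
        if a != b:
            break
        n += 1
    return n

old = [1, 2, 3, 4, 5, 6]          # previous prompt, already in the KV cache
edited_tail = [1, 2, 3, 9, 9]     # edited near the end: most work reused
edited_head = [7, 2, 3, 4, 5, 6]  # edited at the very start: cache useless

print(shared_prefix_len(old, edited_tail))  # 3 tokens reusable
print(shared_prefix_len(old, edited_head))  # 0 tokens reusable
```

So refining the head of a prompt invalidates the whole cache, while refining the tail keeps almost all of the expensive prefill work.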
I think local also has huge implications for voice applications. And I think we could see huge success there from companies releasing products geared towards that, especially in the home-assistant realm. I've had tremendous problems getting a stable Home

(21:38):
Assistant setup running with Whisper and Wyoming and all that stuff. Home Assistant is convoluted, and if you've used Home Assistant, I don't think that's a controversial statement. But also, you can't get the performance you need on a Raspberry Pi. It's almost created its own problem set. And some of the custom-built hardware they've produced has problems.

(22:00):
So I think we could see huge changes in locally hosted AI in that regard; those would clearly be must-have products. The demand is there for voice integration, and we don't really have anything that's great. And we actually just saw something that I think is

(22:20):
pretty cool, which will be our final segue here. And that is: how corporations react to wild successes. I think this is interesting because we just saw something happen in the US, which has kind of happened before. Microsoft's VibeVoice was one of the top repos on Hugging Face

(22:42):
for almost a week. Everybody was really having a lot of fun with it. People enjoyed it. They released it under an MIT license. They appear not to have understood that releasing it under an MIT license meant the cat was out of the bag, essentially. So they've pulled their official repo; if you're interested in running VibeVoice, you can find an updated set of instructions on digitalspaceport

(23:05):
.com; it's on the front page, second row, last one there. But definitely you see the reaction being, "Oh my gosh, we have to curtail this." Releasing something under MIT and then trying to police it after the fact doesn't work, and I'm very surprised that Microsoft didn't know that. Microsoft really does need to have some sort of outreach

(23:28):
specialist, a liaison for open-source affairs, in my opinion, who can publicly talk to the people out there. We have that at Google, and to a lesser degree at several other companies. Logan Kilpatrick is certainly a great person at Google who reaches out to quite a few people. He has a great presence; he lets people know what's going on.

(23:49):
Google itself has a hard time messaging some of its amazing products. And they've built some user interfaces that, I think, everybody who's used them would say are a little bit on the "what?" side, and you don't want to have to set up a developer account and all this other stuff. So they have their own challenges, for sure. But if you look at where the messaging on this came from, it

(24:10):
essentially didn't exist; they pulled the repo and put out no message at all. It wasn't until there was massive outcry that they actually put out a message. So, corporate-PR-wise, this is the response to a wild success, and it's confusing to a lot of people out there. Now, I would also say VibeVoice is really good. It's one of the best we've seen in what it does

(24:32):
for text-to-speech generation. I think it's actually quite fascinating. And yeah, some of the artifacts it produces are a little annoying. You don't want to enter exercise mode; we've been plagued with poltergeists since we did this. I mean, our house is just absolutely plagued; it's like Poltergeist the movie. But definitely, exercise mode was kind of funny.

(24:53):
Still, it's controllable. If you don't diverge your text into something a little off-topic, it'll stay with the script. If it's a cohesive script, it'll stay with the script. If you have an AI do a prompt extension and rewrite your script, it's probably going to come up with something that will be followed. But that reaction to a success, from Microsoft, is

(25:17):
absolutely puzzling. And they did say there were some problems with, you know, voice impersonations and things like that. I want to read what you guys think about this. But of course there were always going to be; that would be my takeaway from it. Like, did you understand what you were releasing? It's baffling.
So: the alignment and direction of even large corporations like

(25:40):
Microsoft, and the race for better and better AI tooling. I would term VibeVoice AI tooling more than direct competition or a pathway to AGI; this is more of a quality-of-life improvement, honestly, allowing things like virtual production to actually be doable in a home setting,

(26:02):
which really opens up quite a bit, because this is high-quality virtual production you can do now. And you can bet we're going to explore a lot more of that; more tutorials to come, and I've got more builds to do. But definitely one of the more common themes I'm going to do is sit down, maybe on Mondays, I can't guarantee when, but whenever it

(26:24):
is, sit down and have a long-form conversation with you, which you can also find at the podcast links in the description below. This gives you a no-visual-needed interface. And voice is part of what I think is important; this is just me following through and releasing this podcast. Hopefully it's impactful for you. You can find it on

(26:45):
Spotify, you can find it on Apple Music, and wherever you get your podcasts. So I hope you've enjoyed this. I look forward to reading your comments. I want to, again, of course, as always, throw a big shout-out to all of our channel members. You guys actually do help me buy new GPUs. It's called "Buy Me a GPU" on Buy Me a Coffee, by the way, for a

(27:07):
reason: I use that money to help buy GPUs, and I will be getting a new GPU soon. Drop your guesses below. Everybody have a great one. I'll check you out next time.