Episode Transcript
Available transcripts are automatically generated. Complete accuracy is not guaranteed.
(00:00):
What might podcast production look like in the future?
This is the Future of Podcasting, where we ponder what awaits the podcasters of today.
From the School of Podcasting, here's Dave Jackson.
And from The Audacity to Podcast, here's Daniel J. Lewis.
Daniel, Future of Podcasting, episode 56.
(00:22):
What might podcast production look like in the future?
We love talking about future stuff, theorizing, looking at the directions things are going, as well as what we hope might be coming.
And we've both had conversations recently with different people, and we came back from PodFest recently, and I think podcast
(00:42):
production could be an area where there's room for innovation.
Look at how podcast production has changed since the beginning.
Dave, when you first started podcasting, because you started podcasting two or three years before I did, what were you using and how were you producing it?
Everything was hacked together, pieces and parts made for musicians,
(01:04):
typically.
So I had this very small mixer for musicians.
I remember at one point I had taken a Shure SM58 and somehow found a cable that was XLR to eighth-inch, like a headphone jack, and plugged that directly into the back of my Dell sound card.
Right.
That was in my computer.
(01:24):
And I just remember I listened back to the recording and, like, about 40% of it was just... and then you'd hear me kind of gargle through it, and I was like, yeah, that's not going to work.
And I ended up plugging the mic into a mixer and then RCA jacks into the Dell.
And that kind of, for some reason, I think because it wasn't
(01:44):
mic level, it was, you know, a different level, was almost tolerable.
But yeah, it was, it was not great.
And I do not miss teaching people mix-minus. That was, that was always...
And we were all using Skype back in the day.
Yeah.
So it was not a lot of fun.
Once you got it, you got it.
(02:04):
But it was just a matter of taking a bunch of stuff that wasn't really designed for podcasting, because nobody knew what it was.
And then, you know, so we're actually doing what I call painting with peanut butter.
It's like, I think this will work.
Okay, let's try that.
And then if you put this cord into that connector and then that connector into a converter...
(02:27):
So that was a lot of fun.
Yeah.
And I look back at kind of a non-complete history of podcast production, and some of the major inflection points that we've had that have radically simplified or changed the way that we produce podcasts, in no particular order here.
But you look at how the USB/XLR microphones have revolutionized
(02:51):
things, like with the Audio-Technica ATR2100-USB and its newer version, the X version, the Q2U, and all of those microphones.
How that radically simplified things and brought a good-quality microphone that you could plug directly into a computer or even a mobile device.
Look also at how stuff like live streaming has gotten so much
(03:15):
easier.
Where back in the old days it was Ustream or livestream.com and expensive services, now you can pretty much live stream everywhere, on almost any social network.
And yeah, mixers.
I started with a mixer as well and upgraded to an even bigger mixer.
And multiple mix-minuses going out to multiple computers in order
(03:36):
to do multiple call-ins, and almost a Skypeasaurus, like you might remember Leo Laporte on This Week in Tech talking about the Skypeasaurus, multiple Mac Minis that he tied together for all of this.
And now we have the Rodecaster Pro and its later versions as well, and how that has radically changed the way that we record our
(03:57):
podcasts, and even produce them in some sense, and made so much of this easier.
And there are portable versions, and we don't have to think about the noise floor so much anymore, because the equipment has gotten so much better too.
Or ground loop is almost a thing of the past.
Now I know there are certain circumstances where it can still
(04:18):
occur, especially in certain kinds of computers, but it is now much less prevalent.
In my old system, before the Rodecaster Pro, on this bigger mixer that I had upgraded to, I think I had maybe three or four dual-channel, which was really then four-channel, ground loop isolators,
(04:39):
because, like, everything that went into the mixer and everything that came out, which was a whole lot of channels, it was just a mess.
It was a spaghetti of stuff that took me hours to set up.
We don't have to do any of that anymore, because the hardware has made so much of this easier.
So here's the cool thing: this isn't the end of it.
(04:59):
What's next?
Yeah, if you look at Descript, I remember the first time I saw Descript at Podcast Movement, and I just thought, oh, bless your heart.
Like, I just, that's just not gonna work, you know.
And they kind of did a demo and I was like, yeah, nah, that's just not gonna work.
And now I use it every week.
I go in, I don't use, I don't go too crazy with the AI, but it's
(05:22):
probably saved me literally about four hours.
Because I'll go in... it's a live show, so me and Jim are just um machines.
Like, hundreds of ums in 90 minutes, to the point where, at least for me, it's distracting.
So I'll go in and I'll say, remove filler words.
And then I'll uncheck the giant list and select just the ums.
And also double words.
(05:43):
We do that a lot, where they'll be, like... like, it's this.
And, you know... you know, it's this.
We do a lot of double words.
And for probably the first four times, I would go back and listen to every edit, and they weren't that bad.
Now, I would never say remove all filler words, go.
But this is just a few.
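To make that concrete, here's a rough sketch of what a text-based edit like this amounts to under the hood: each transcribed word carries start and end times, so deleting a word yields an audio cut point. This is purely illustrative, and is not Descript's actual code.

```python
# Illustrative only, not Descript's actual code: "remove filler words" plus
# "double words" against a word-level transcript, where each word carries
# start/end times, so deleting text yields audio cut points.

FILLERS = {"um", "uh"}   # like unchecking the giant list and keeping just a few

def cuts_for(words: list[dict]) -> list[tuple[float, float]]:
    """words: [{"text": "um", "start": 1.2, "end": 1.4}, ...] -> spans to cut."""
    cuts = []
    prev = None
    for w in words:
        text = w["text"].lower().strip(",.?!")
        if text in FILLERS or text == prev:
            cuts.append((w["start"], w["end"]))   # filler word, or doubled word
        else:
            prev = text                           # only kept words become "previous"
    return cuts

print(cuts_for([
    {"text": "it's", "start": 0.0, "end": 0.2},
    {"text": "it's", "start": 0.2, "end": 0.4},   # doubled word: cut
    {"text": "um",   "start": 0.4, "end": 0.6},   # filler: cut
    {"text": "this", "start": 0.6, "end": 0.9},
]))  # -> [(0.2, 0.4), (0.4, 0.6)]
```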
And then once that's done, I go in and say, add chapters, which
(06:08):
it does.
I hate the fact that it can't add chapters with timestamps, but I can say add timestamps, and then I will copy those and put them into some sort of text document.
And then I can say, write a YouTube description, which, again, gives me timestamps and a quick opening paragraph, which typically isn't very good.
But I'm just looking for the timestamps again and a description,
(06:30):
and I'm done.
And I export it.
And in that case, that show is on Buzzsprout.
And so Buzzsprout recognizes the chapters, because I told Descript to make all the markers chapters.
And I kind of was like, I'm going to put this out.
I mean, I always kind of listen to most of it, but I was like, I'm really just going to kind of wait for my audience, because
(06:51):
if these are bad edits, I'm putting out a show to a bunch of podcasters.
They're going to go, what are you doing?
You know, this sounds weird.
Or, you cut off half a...
And I've never really had a complaint.
Again, I listen to most of it.
To me, I'd be a hypocrite if I didn't listen to something that AI was involved with.
But I always kind of go through and, like, okay.
And aside from that, I have one person that is listening on Pocket Casts, because
(07:15):
I'm using Buzzsprout, so I'm playing with some of their dynamic tools.
And apparently, for one person, when it goes to a dynamic part, he has to hit play again to keep going, which I still kind of scratch my head at, because I think it's still a solid MP3 file.
I don't think they're, like, going to a separate file to play the, you know, the mid-roll or whatever it is.
But I mean, that's amazing.
(07:37):
And again, when I go back to seeing that the first time and thinking, oh, this will never work, I mean, it's a great idea, but come on, you can't replace an editor.
And yeah, well, maybe you can, which is good, again, because some people can't afford an editor, but they know how to edit stuff in Microsoft Word.
And as long as you get to hear the edit and decide, okay, is that
(08:00):
a keeper or not?
So that's one for me.
That to me was, especially as it's gotten better and better, I know now they're working on video stuff and all sorts of other stuff.
So it's kind of weird how two years ago we were like, holy cow, you, you edit the text and it edits the audio, and now it's kind of like, yeah, yeah, yeah, but look what it does with video.
(08:22):
So that's one for me that's been kind of a game changer.
Can you think of any other ones that have come along the way?
Yeah, I think some of the things like the multi-ender recording technology, SquadCast, Zencastr, those kinds of things.
Super simplifying that process of trying to record multiple people in different locations.
(08:42):
And then there are more smart tools that can either take those recordings and do this, or, while you're recording in studio, like the Rodecaster video hardware can do this, where it can automatically detect who is speaking.
And if you're doing video, then this is important, of course, not in audio, but it will switch the camera to whoever is speaking.
(09:04):
Or you can define a certain camera to switch to when a certain person is speaking.
That kind of stuff can be done very easily, not even with AI necessarily doing it, because it just needs to look at which channel has audio in it at this point, and which one is the loudest or the longest over this if there's any kind of overlap, like
(09:24):
laughter or anything.
But that kind of stuff can be easily built in.
And you're seeing tools made to start doing this.
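As an aside, here is a minimal sketch of the level-based switching Daniel describes: no AI, just comparing channel levels, with a short hold so overlapping laughter doesn't cause rapid cuts. The threshold and hold values are made up, not from Rode or any real product.

```python
import numpy as np

# Minimal sketch of level-based active-speaker camera switching: pick the
# loudest mic channel, but hold the current shot briefly so brief overlap
# (laughter, interjections) doesn't make the camera flap between people.

SILENCE_RMS = 0.01   # below this, treat the channel as silent (assumed value)
HOLD_FRAMES = 30     # don't switch again for ~1s at 30 fps (assumed value)

def rms(frame: np.ndarray) -> float:
    """Root-mean-square level of one channel's audio frame."""
    return float(np.sqrt(np.mean(frame ** 2)))

def pick_camera(channels: list[np.ndarray], current: int, hold: int) -> tuple[int, int]:
    """Return (camera_index, remaining_hold); one camera per mic channel."""
    levels = [rms(f) for f in channels]
    loudest = int(np.argmax(levels))
    if hold > 0 or levels[loudest] < SILENCE_RMS or loudest == current:
        return current, max(hold - 1, 0)   # keep the shot during overlap or silence
    return loudest, HOLD_FRAMES            # cut to the new speaker, then hold
```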
I think as much as we've given them hate, I think we do still owe them credit.
Anchor made recording so easy that people accidentally launched podcasts.
(09:46):
Most of them were probably just accidental.
They didn't actually mean for it to go out to Apple Podcasts and the other directories, because Anchor made it so easy.
You just open an app on your phone, press record, press stop. And you did have to enter a title, and still do.
But think about when these AI tools come into that, where maybe,
(10:08):
once you press stop, it's already transcribed it while you were recording.
So then it instantly offers a few titles, maybe some images and descriptions for you. Some of that could be happening in the future.
But what Anchor did was they made it dead simple.
And that was a great point, because that brought many people into the podcasting space realizing they could do it.
(10:31):
They just needed to figure out what to say besides, test, test.
I think it's working.
I don't know.
Wait, no, there's a blinking light.
Yeah, those were riveting.
So looking, then, with these things in mind, at what has come before, and the innovations that we've had, and the inflection points in the production aspect of podcasts, and this is not an exhaustive
(10:51):
list for sure, but these are some that stand out to us.
What might be something coming in the future? What might we predict will come? As well as, if we could wave a magic wand and get a particular thing, what might that be?
So I'll start with something here of a prediction that I think
(11:12):
could come.
I'm not going to say it's coming in 2025, but we know AI is getting better.
Like, everything everywhere is just, technology-wise, getting better, smarter, faster, more thorough, all of that stuff.
So if you use the AI tools right now to generate your chapters and you don't like how they're divided or how they're labeled, that's going to get better.
(11:33):
So there is, of course, that prediction, that stuff will get better.
I think as it gets better, it will become more useful for things.
So, I already hinted at this prediction.
I think there will be a point where we'll have podcast recording apps both on a device, like your mobile device or your computer,
(11:53):
and I think even in standalone hardware devices, like a Rodecaster Portable or a Rodecaster Go or, you know, something like that.
Those are completely made-up names.
I don't have any knowledge of anything like that that exists.
It could be in production right now and I don't know it, but something like that, where you press a button, you start
(12:14):
speaking, and when you're finished, you stop, and it has already produced it for you.
Basically, where it's giving you title suggestions, it has correctly detected when you changed topics and put in chapter markers there.
Maybe it even automatically generated the images for you while you were speaking.
So then all you have to do is go back and just approve the title,
(12:37):
the description, the images, the chapters, and some of that.
Maybe it even did some editing for you, as it learns your communication style and what you're okay with allowing and what you're not okay with allowing.
Such as, there might be certain times where you're okay with it repeating a word, leaving that in there,
(12:59):
because that is the natural way that you speak, and it's not an overly distracting thing, as opposed to... that would be a major distracting thing to leave in.
And it could edit out those major distractions.
So it could learn your communication style, so that it's not compromising your authenticity,
(13:23):
but it is still just automatically editing it for you.
And maybe you could even tell it ahead of time, this is the kind of stuff that I want to focus on, but it would probably learn that from the past content that you give it, as it learns.
And maybe you give it your outline ahead of time, so it knows this is what you're going to be talking about.
So most likely these are going to be chapter markers at these points
(13:45):
in your outline.
And the transcript should generally follow this.
The flow needs to support this outline.
A lot of stuff where it would be pretty much press a button, get your podcast episode published and produced, and all of that.
I am quite confident we will see that.
How long from now will it be effective?
(14:07):
I don't know, but I think that's coming.
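As a purely hypothetical illustration of that press-a-button flow: every function below is a made-up stub standing in for an AI step, and nothing here describes a real device or API.

```python
# Purely hypothetical: the "press stop and it's already produced" flow
# predicted above. Each function is a stub standing in for an AI step.

def transcribe(audio_path: str) -> str:
    return "stub transcript"           # would have run live while recording

def suggest_titles(transcript: str) -> list[str]:
    return ["Title option A", "Title option B"]   # options to approve, not auto-pick

def detect_chapters(transcript: str, outline: list[str]) -> list[str]:
    return outline                     # the outline hints where markers likely go

def produce_episode(audio_path: str, outline: list[str]) -> dict:
    transcript = transcribe(audio_path)
    return {
        "titles": suggest_titles(transcript),
        "chapters": detect_chapters(transcript, outline),
    }

# The podcaster's remaining job: review, approve, publish.
print(produce_episode("episode56.wav", ["History", "Predictions", "Wishes"]))
```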
I know part of that.
I haven't played with it in a long time, but it was impressive.
This is probably a year and a half ago, which is Alitu, from Colin Gray.
It had a thing where you could put in your intro, and if your intro had music that was supposed to fade out, you could say, make this a five-second fade-out.
(14:28):
And it would kind of look at when you started talking.
And from the time you started talking, it would then fade out five seconds.
It wasn't like this just automated thing.
It was based on when you started talking.
And I was like, that was pretty slick.
So I could see people having that situation.
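Here's a small sketch of how a fade like that might detect when you start talking: scan for the first frame whose energy crosses a speech threshold, then ramp the music down over five seconds from that point. The numbers are guesses, it assumes float audio arrays, and it is not Alitu's actual implementation.

```python
import numpy as np

# Sketch only, not Alitu's code: find the speech onset by energy threshold,
# then linearly fade the music bed out over five seconds from that point.

SR = 44100          # sample rate (assumed)
FRAME = 1024        # samples per analysis frame (assumed)
SPEECH_RMS = 0.02   # "someone is talking" energy threshold (assumed)

def speech_onset(voice: np.ndarray) -> int:
    """Sample index of the first frame whose RMS crosses the speech threshold."""
    for i in range(0, len(voice) - FRAME, FRAME):
        if np.sqrt(np.mean(voice[i:i + FRAME] ** 2)) > SPEECH_RMS:
            return i
    return len(voice)

def fade_music(music: np.ndarray, onset: int, seconds: float = 5.0) -> np.ndarray:
    """Linear fade-out of the music, starting where the voice comes in."""
    out = music.astype(float).copy()
    end = min(onset + int(seconds * SR), len(out))
    out[onset:end] *= np.linspace(1.0, 0.0, end - onset)   # ramp 1 -> 0
    out[end:] = 0.0                                        # silent afterward
    return out
```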
Even now, with things like Auphonic, you can upload what I would just call the meat and potatoes, you know, the main gist
(14:51):
of your podcast.
And it can slap on an intro and outro.
So if you had an intro and outro that didn't fade out, it could easily stitch the whole thing together.
So there are all sorts of ways that I think we're going to use technology to do some of the mundane things that, you know, we're like, oh, hold on, I've got to put on my music now and have it come in and out, and things like that.
And we've already got dynamic content that you can put in.
(15:16):
So that's always kind of fun.
But the other thing that's going to be interesting, and we're not sure where it's going, is, of course, fake voices, because they're getting better and better.
And as much as I make fun, I call them Kyle and Sheila: Google NotebookLM.
I'm using that thing all the time, where I have a folder in my
(15:38):
Notejoy, which used to be Evernote, and it's called Marketing Crap.
And this is just stuff I've signed up for.
I got my PDF for how to make a better lead magnet, or whatever it was, and I never looked at it.
I was like, oh, I'll look at it later.
And I was like, okay.
Well, before, because I was cleaning up stuff, I was like, hold on.
So I just upload the PDF, and then I get like a 5-, 10-minute summary
(16:00):
of what it is, and then I can decide, oh, you know what, I should probably go read that PDF.
That wasn't that bad.
But most of the time you get the gist of it.
So that's going to be interesting.
I already know that with Google NotebookLM, as they are doing their podcast and you're listening back to them, you can somehow interrupt them.
And I guess if you were recording that, you can, like, interrupt
(16:23):
them and then say, wait, I have a question.
What about such and such?
And they will answer.
So now it's gone from a duo to a trio.
And I guess, I haven't done this yet, but if you interrupt them enough, they start getting a little annoyed.
So I want to do that just for fun.
This is how the robot uprising begins.
(16:45):
It's like, I couldn't take it anymore.
Kept interrupting me.
So in the early days of AI, Mike Russell, who has a great YouTube channel where he just constantly shows AI tools, he had it to where, in Feedly, if he checked a story, it would somehow send that story via Zapier or whatever to a Google Doc, which then
(17:08):
sent it to ElevenLabs, where a robo version of Mike's voice would then be exported and put into SoundCloud.
And I'm like, Mike, why SoundCloud?
He's like, that's the only thing that was working with Zapier at the time.
And it was not very good.
But it was the very early days of this stuff.
And it was him talking about the latest AI kind of stories in
(17:31):
an AI voice.
That's the other one where I kind of go... I have seen things that make, like, just a single tear come out of my eye.
You know, you're kind of like...
And I saw a YouTube video, and it was such a great hook.
He's like, you know what?
This is so good, Google doesn't want me to tell you.
And I was like, all right, kudos for a great clickbaity kind of opening.
(17:51):
He says, go to YouTube and type in your subject, and then take the top two videos and transcribe them, throw them into ChatGPT, and have it write a script for you.
And then you read the script.
And I was like, so if I understand this correctly, you're going to position yourself as a thought leader by stealing somebody else's thoughts and putting them out as yours.
(18:12):
So that's one of those where I'm like, well, again, it's really interesting to see where it's going.
I know.
I think it was Adam or somebody who talked about how Google NotebookLM has competition from another division in Google.
Like, Google is now working on two AI tools.
That's never happened before at Google, ever.
(18:34):
So that's one where I kind of go, my throat's kind of scratchy right now.
I could have maybe brought in Robo Dave to take over tonight.
So that'll be one that'll be interesting to see.
I just know now I tell all of my clients, lean into your personal stories.
Yes, it was very funny.
You might appreciate this.
I preach every other Sunday at my church.
(18:55):
This is one of those temporary-permanent things that I've somehow found myself in.
And so I asked ChatGPT to bring up some scriptures based on my topic.
And I said, yeah.
And I even said, this is, it's a sermon for people where the average age is 60, and blah, blah, blah, and it spit this out.
And I said, hey, thank you so much.
Those are really good.
And it said, thanks, David.
(19:16):
Good luck with your sermon.
I'll be praying for you.
And I was like, all right, ChatGPT is on the... it's a prayer warrior.
I didn't know that.
So that was interesting.
Now, you mentioned, like, basically having an AI co-host.
Not to be confused with Buzzsprout's Cohost AI.
Right.
But an AI co-host.
(19:37):
That opens up some really interesting potential.
Because the two things that I thought of in my mind were, maybe you've prepared information and you're just not very good at presenting it, or you don't want to have a guest on, but you feel kind of awkward doing
(19:57):
So you could have the AI co-host join you and add insights to your points.
Now, of course, if you use any content that AI creates, you need to make sure that you fact-check it and ensure it's correct.
But that kind of thing, where, like, you're each building on each other. Or another aspect of it is, your AI co-host could be asking
(20:21):
you the kinds of questions your audience might be asking.
So it's like you're having a conversation with the AI, teaching the AI about it, but the AI is trained to act like an audience member, asking you the kinds of things your audience would want to know, and asking for clarification.
Like, anytime you use an abbreviation, it's like, what does that mean?
Can you clarify that?
(20:42):
Stuff like that could be a really interesting crutch, in this sense, or support for people who aren't as confident communicating in a monologue.
But yet that would be really interesting.
Like, half the podcast voices are people and half is an AI.
(21:03):
If it's one person plus an AI, that could be really, really interesting.
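One way to sketch that audience-member co-host idea is with a generic chat-completion API; the persona prompt below is ours and purely illustrative, and, as noted above, anything the model contributes should be fact-checked.

```python
from openai import OpenAI

# Illustrative sketch of an "AI co-host as stand-in audience member."
# Assumes OPENAI_API_KEY is set in the environment; the persona is made up.

client = OpenAI()

PERSONA = (
    "You are a curious member of this podcast's audience, not an expert. "
    "When the host makes a point, ask the follow-up question a listener would "
    "ask. Any time the host uses an abbreviation or jargon, ask what it means."
)

def cohost_reply(conversation: list[dict]) -> str:
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # any chat model would do here
        messages=[{"role": "system", "content": PERSONA}] + conversation,
    )
    return response.choices[0].message.content

print(cohost_reply([{
    "role": "user",
    "content": "The key is getting your RSS feed into the big directories.",
}]))
```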
Craig Van Slyke does the show AI Goes to College.
He's a college professor.
And it's all about, how is this going to work in this situation?
How do we know if our students are actually writing these papers and it's just not AI? And all sorts of, just, like, how are we going
(21:25):
to handle this?
He had a show that he was doing solo, and he called me up. He's like, what do you think of this idea?
I'm thinking of bringing on ChatGPT, in its audio format, just to try it.
And I go, well, it doesn't hurt to try it.
And his whole thing was, how do I plug my phone into the Rodecaster?
Or in his case, I think he had a Focusrite mic.
I think if you have the Duo, it's got a built-in plug for your
(21:48):
phone.
And he's like, all right, I'm going to try this just to see.
And I said, I think as long as you're transparent about it, that it's not a real person.
And I said, and if you're doing a show about technology, what's the worst that could happen?
It could be boring.
And if so, try it.
There have been times, I don't use the audio ChatGPT a lot, but when I have, and I use it to brainstorm, it always brings up something.
(22:09):
And I'm like, oh, I never thought of it that way.
And that's really what I use it for.
And so I could see that happening.
It's, Dave's not here now, you know; that kind of thing could be interesting.
Hit stop.
I don't want to hit stop, Dave.
So that'll be fun.
The podcast is mine now, Dave.
That's right.
I could see some abuse of that for sure.
(22:32):
And any use of AI or artificial content, and, like, how to disclose it, that kind of thing would of course be... well, we've talked about disclosures before, so we don't have to rehash that.
But I am not a fan of using AI to create content for you.
I've said that multiple times.
I think I will continue to say that. I can understand images, although
(22:52):
you really have to look at the images closely to make sure they look okay and they're not embarrassing you.
But all of that's just going to get better.
But for making the content for you, what you are communicating, I think that should be you: the podcaster, the content creator, the messenger here. That is all you.
Sure, use AI to help you brainstorm ideas, maybe help you
(23:15):
refine it, improve it, edit it, produce it.
Any of that.
An aspect of AI that I could see happening in the future with production is something that could help the most important creator in this process: you, the podcaster.
Why?
Because all of this AI stuff is going to get faster and stream even
(23:38):
the information.
So, like, this whole conversational thing, that would require the ability for the AI to be actively listening and processing in real time in order to respond, so there's not this awkward pause, like you're talking to someone who's on the moon and let's wait eight minutes for the AI to respond back to us.
But instead, you could get this information almost in real time back
(24:02):
from the AI, as if it is an actual other person, or even the information from it as it's processing this stuff while you're speaking.
All leading to this idea: instead of the AI speaking for you, creating your content, or even fixing your content, what if the AI helps you improve your communication?
(24:24):
Like, imagine this.
The AI is a neutral third party here, so it has no feelings.
It will not pray for you, despite what it says.
It is not sorry, it is not happy, it is not any of that stuff, but it is this neutral third party.
Imagine if I had a guest on my podcast, and you have to imagine very
(24:44):
hard, because I don't do that.
But imagine I had a guest on my podcast.
I let them know, I use this AI tool that helps with things.
And, would you like to get some tips from the AI afterward on maybe how you could communicate better next time?
I'm going to get tips for myself on how I could be a better interviewer or a better presenter.
Would you be interested in that as well?
(25:05):
So then, immediately after their interview, or the conversation, co-host conversation, or monologue, or whatever, you get a sort of report from the AI.
And yes, there are some tools that kind of do something like this.
Like, they analyze who's been speaking the most, what's the confidence level of the content, that kind of stuff.
But it's mostly just kind of meta information.
(25:25):
I'm saying actual practical tips, where the AI could look at what you say and offer some suggestions.
Like, it sees how many times you say "you know" and then points that out to you, to say, you've said "you know" this many times.
I know it can be a struggle to overcome that.
Here are some ways that you can work on not saying "you know" so
(25:47):
much, that you could practice just in your everyday conversations, as well as ways that you could present better here in the podcast.
Or, the way that you communicated this point didn't make a whole lot of sense.
It might make more sense if you approach it like this.
Or things like, you tell it about your audience, and then it comes back saying,
(26:08):
this content was great, but it doesn't seem relevant to your audience.
Or it's above the level that you told me your audience is, or it's below the level that you told me that your audience is.
You're talking like a preschooler here, and your audience is PhDs.
You need to raise your own intelligence and how you're speaking.
Certain things like that, that could be for the podcast host, the
(26:29):
guest, the co-host, that kind of thing, to help them improve outside of the podcast.
That could be really cool.
I want to test that.
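The mechanical core of that idea, counting crutch phrases in a transcript, is simple; a real coaching tool would go much further (pacing, relevance, audience level). A tiny sketch:

```python
import re
from collections import Counter

# Tiny sketch of the "neutral third party" report: count crutch phrases in a
# transcript and turn the counts into gentle feedback. The phrase list and
# wording are ours, purely illustrative.

CRUTCHES = ["you know", "kind of", "sort of", "like"]

def coaching_report(transcript: str) -> Counter:
    text = transcript.lower()
    return Counter({p: len(re.findall(r"\b" + re.escape(p) + r"\b", text))
                    for p in CRUTCHES})

report = coaching_report("You know, it's kind of, you know, like this.")
for phrase, count in report.most_common():
    if count:
        print(f"You said '{phrase}' {count} times; try pausing instead.")
```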
Now, that's a really interesting idea, because right now you could transcribe what you did and then say, can you tell me how this could have been tightened up? Or, I'm sure there's a better prompt than that.
But I write a newsletter for PodPage, and I will write the newsletter,
(26:53):
and I will take kind of each blurb, because it's different little topics.
And I'll be like, here's a blurb I'm doing for podcasters in a newsletter.
Here is the blurb.
And this is how crazy I get with the prompt.
The prompt is, make this better.
It doesn't take the Dave out of it.
It's still my story, or whatever I'm talking about.
But there have been a few times when I was like, oh, that's
(27:13):
such a better phrase than what I was using.
And so I could see it critiquing for focus or effectiveness of talk or whatever.
I'm going to have to try that.
That's a really interesting idea.
It would be interesting, again, if you could play the audio and then have it somehow provide clips, like, when you said this, and then
(27:37):
here's the clip of you; a better way would have been to do such and such.
But I've never thought of it as... because so many people get worried about it.
I mean, I offer all sorts of podcast audits, and I'm not nailing it out of the park with those.
And I think part of it is, podcasters don't like audits in public.
And even if they're in private, I'm not sure people are
(27:59):
up for any kind of constructive feedback.
But if it was private, and there wasn't another human involved, that might be something people...
Look, I am really going to play with that for real.
You've spun my brain up.
I'm like, huh, I'm gonna, just to see.
I mean, it could be horrible feedback.
But basically getting coaching from the AI.
Yeah, that's it.
(28:20):
I'm out of a job.
The thing with a neutral third party is that you could give that to your guest and offer it as just, if you come on my podcast, you'll get this little report.
I don't see it. Maybe you do, but whatever; you know, disclose that if you do see it.
But you can say it's a neutral third party, and it might give you some information that can help you be a better guest if you're on
(28:42):
another podcast after this.
So that way it's not like you, the host, are saying, you know, this conversation was great, but I would really like it if you would get to the point more when I ask you a question.
You know, for a guest to hear that from the person who just interviewed them, that's like a massive insult.
But for a neutral third party AI to say, it looked like you waffled
(29:05):
around a bit before you actually got to your point every time you answered a question; maybe next time, pause and think, and then answer the question directly.
Yeah, it's one of those things where... because I know you can tell it, you are a master of communication, you're a radio executive, something like that; you kind of tell AI what it is.
(29:26):
You tell it who your audience is and then what you're trying to get it to do.
So when you tell it kind of what hat to put on, tell it who your audience is, and then ask it to do whatever kind of critique you want.
I've seen... I've not done this yet, but I know what I'm doing tomorrow.
I'm definitely... maybe I'll take the transcript of this show and have it critique it.
(29:50):
It's like, you and Daniel are all over the place.
But I've seen it do interesting things, to where I'm like, all right, I'm going to have to play with this and see if I'm going to be replaced by ChatGPT as a coach.
There is certainly that possibility, because the AI is pretty good.
And the unfortunate thing is, the AI models have been trained on
(30:11):
your content and my content.
Yeah, without our permission.
And that's unfortunate.
Yes, another human could train themselves on your content and my content.
But the thing is that humans are capable of unique thought, and computers are not.
So even if a human hears your content and my content, they have
(30:32):
opinions.
Their own opinions, their own experiences, their own ways of expressing things.
So it is different for the AI to do it compared to a human to do it.
And the trick's going to be, what are the two words that start every question about podcasting?
It depends, right?
Like, hey, I need this thing.
I'm trying to do this.
And you're like, well, it depends.
(30:54):
And that may be where that kind of feedback wouldn't be entirely accurate.
When it tells you to buy two Blue Yetis to plug into a laptop, you're like, no, no, bad GPT.
So that'll be fun.
I would love to see us jumping here into the
(31:15):
what I wish we could have.
Not just predictions, but what I wish we could have.
And I think this is possible.
Well, we're giving a lot of shout-outs to Rode here.
And Rode is really a trailblazer in some of this hardware development, but, like, a Rodecaster AI, some kind of single box similar to the Nomono device, where it is this egg thing.
(31:36):
Hey, maybe Enron could... it's in the Enron Egg.
If you've seen that, check it out.
By the way, follow Enron on X.
The Enron brand name has been acquired by someone for less than $300, I've heard.
And they've just made all of this fun parody stuff, but they have this thing: the Enron Egg is nuclear cold fusion energy for your home.
It's all a joke.
But what if there was something like that for podcasting,
(31:59):
where it includes the microphones, it includes the recording device, it includes the AI, all built into it?
So everything we've talked about that AI can help with in production, or some of these things that can automatically switch cameras, or anything like that. But this device that you could connect to, or that has the mics in it, or you connect your cameras to it, or it has the cameras built in, or different things like this.
(32:20):
But this single device that handles all the recording: it automatically produces it, and maybe even, just with a couple buttons, actually publishes it out.
That could be really cool.
The ultimate simplicity.
To press a button to record, press the button again to stop, and press a button to publish.
And just in three button presses, you've recorded an entire
(32:45):
thing, and it's automatically produced it all for you, all within its own hardware.
And you could still have manual control of anything, if you wanted to go back and be nitpicky about it.
But the AI and the models inside of it are smart enough to know how to do things well.
That could be really interesting.
It would be very expensive if they built something like that.
(33:06):
But imagine the potential of something like that.
Basically, a device where you don't have to have a computer.
It just needs a Wi-Fi connection and power.
And maybe it has a camera built in, too.
You know, it could be, like, if it's the egg design, kind of like the Nomono device, where it's something you put in the middle, and then, like, this little thing comes up from the middle of it, and
(33:27):
it's got the camera that points four different directions.
So it's got a camera on every person, and these little microphones extend out with their little handles and stuff.
Yeah, it might look gimmicky, and it might look like a spider is sitting on the desk, but imagine the potential of something like that.
To simplify the process, make it, of course, ultra portable, and
(33:49):
still end up with a fantastic production in the end that you have not had to worry about.
All you would need to worry about is the content.
Yeah, I mean, if you think about it, I mentioned Descript earlier, but their Studio Sound is basically AI audio fixing.
You know, it listens to it, takes out reverb, smooths out the
(34:12):
volume.
You know, I have a couple plugins that I use that, you know, again, 10 years ago didn't exist.
And you throw it in, it cleans out the hiss, removes the background noise, EQs the voice.
I get really bad audio, and I'm amazed that I take it from completely unusable to, okay, that's listenable.
(34:34):
Like, it's not pristine, but it's like, all right, that's usable now.
So it'll be interesting to see where we're headed, what it's going to cost.
If you think about it, you brought up Anchor: if it does get easier, more people are going to do this, and that's a good thing and maybe a bad thing.
Well, the thing, though, is Anchor made it easy and, quote, free,
(34:58):
unquote.
Whereas if you make something that makes it easy but it's expensive, that is natural selection going on right there: only the people who can afford that would actually do it.
And there come certain stereotypes with certain people, or just certain things that can be a little bit more assumed with certain people.
(35:18):
So if someone can afford the higher expense for something like that, it's probably more likely that they're going to either use it better, respect it better, or be a little bit better at their art than someone who's just doing it for free.
Yeah.
And the beauty of it: when that kind of stuff comes out, you know, just wait three, four years.
I know the first time I saw a dashboard where the speed of
(35:43):
your car was put onto the windshield, and I forget, it was a Mercedes-Benz or something like that, I was like, all right, just give it 10 years.
That'll end up in my little beat-up Toyota.
You know, it's like, so anytime we see these new pieces of technology and things like that, just, you know, it's like this.
I'm sitting here with a Zoom PodTrak P4.
It's 150 bucks, and, I don't know, six, seven years ago this was
(36:08):
$800.
And not designed for podcasting either.
Something like that.
Yeah.
Daniel, how did we do on the boostagrams?
We got streaming satoshis.
Thank you very much for those.
And also a couple boostagrams, or super comments, as Sam Sethi from TrueFans is calling them.
And this comment actually mentions TrueFans, but it's not from
(36:28):
Sam Sethi.
This is from Lyceum, who sent 777 sats, saying: Daniel, I like your idea of cross-app comments.
I wonder if it could be integrated with TrueFans Social.
Which, my understanding of that is, it's basically a Mastodon thing, which is powered by ActivityPub.
Or, if you're talking about TrueFans.fm, that's the podcast
(36:50):
app.
And sure, cross-app comments would ultimately be integrated with that.
Yeah, if anybody's going to integrate anything, it will be Sam Sethi.
Before the sentence is finished, he's like, I wonder if we can... oh, it's right here.
I've already done it for you.
Is Sam Sethi an AI?
Ooh, here's a thought.
Lyceum continues: I am sending a super comment with a payment.
(37:12):
This feature is music for my ears, as a longtime blogger. How about harp music and a boostagram of 777 satoshis?
There we go.
And Lyceum sent another boost of 777 sats, saying: Dave, I have streamed 10 satoshis per minute, and according to the tab for
(37:32):
activities, I have paid 480 sats.
Here is a super comment with a payment of 777 sats.
Small heart boost.
I think that streaming sats could be a popular activity in the near future.
You could set a monthly budget limit on TrueFans.
Is this really Sam?
That's it.
I do that now.
(37:53):
I have my bank tied into Strike, with a K; not Stripe, Strike.
And I put in, I think last month I put in 40 bucks, because I was like, I'm tired of putting 20 in every month and then running out.
I was like, hey, let's put 40 in.
And then that goes into my Alby wallet, which I'm then using in Podcast Guru.
(38:15):
I really wish Pocket Casts would jump on the satoshi thing, because my new favorite feature is bookmarks in Pocket Casts.
It's like butter.
Especially if you're a person like me that wants to go back, like, oh, that's good, and I want to take a note.
If somebody is not streaming sats, if they're not set up for that, I listen in Pocket Casts.
Pocket Casts is my new favorite app.
(38:37):
And then for things like the New Media Show and Podcasting 2.0 and Sound Off and all the other shows that I know are set up, because there's a little icon you can see in Podcast Guru.
So I have my non-satoshi shows, and then the ones that are set up, I listen in Podcast Guru.
I really wonder if we're just going to get to a point, and maybe we will, with the streaming satoshis, where instead of it
(39:00):
streaming by the minute, based on how much you've listened to or how long the episode is, it's just something like we say, hey, for every episode I listen to of any podcast, regardless of the length, send 50 cents' worth to that podcaster.
And yes, that sounds small, but hey, 777 satoshis is actually
(39:22):
a little smaller than that.
No, that's... that's actually bigger than 50 cents at this point.
But we are talking about these microtransactions.
But the more accessible it is, and the more people who do it, the more it does stack up.
That's not to sell the dream that 100% of your audience is going to do this.
No.
Right.
But I would rather give 50 cents per episode than be advertised
(39:46):
to when I'm only worth 0.025 cents.
Right.
Not even, you know, a full penny in some cases.
With some of these ads, I would rather be more valuable to the podcaster, if the system made it much easier and cash flow was a bit better.
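For reference, the rough math behind that comparison, with a placeholder Bitcoin price, since the real one moves constantly:

```python
# Rough math only; the BTC price here is a placeholder, not a quote.
SATS_PER_BTC = 100_000_000
BTC_USD = 100_000           # assumed price, purely illustrative

def sats_to_usd(sats: int) -> float:
    return sats / SATS_PER_BTC * BTC_USD

def usd_to_sats(usd: float) -> int:
    return round(usd / BTC_USD * SATS_PER_BTC)

print(f"{sats_to_usd(777):.2f}")   # 777 sats ~= $0.78 at the assumed price
print(usd_to_sats(0.50))           # "50 cents per episode" ~= 500 sats
```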
Yeah, I could see where that's just a strain, to constantly have to be communicating back.
(40:07):
I like that idea.
Just, like... I think some of them do this already, where, instead of streaming back to the mothership, it kind of keeps track.
And then maybe every five or ten minutes, it's like, oh, here's another X amount of satoshis.
So yeah, I like that idea.
So those have been our boostagrams.
(40:27):
And thank you also for the streaming satoshis since our last episode.
And I told a lot of people about our last episode when we were at PodFest, saying, you've got to listen to it, because we talked about, if we could only have one thing, this is what we should focus on in this year, 2025, for Podcasting 2.0.
And I've seen some interesting comments on that episode as well.
So thank you very much for that, and for joining the conversation.
(40:49):
Yeah, and I think I heard, it might have been Sam and James on Podnews Weekly Review, they mentioned you and your... your cross-app comments.
I really want them.
I want it now, Daddy.
Yes, I'm with you on that.
It would be great.
But thanks, everyone, for the streaming sats, for the boostagrams.
That is going to do it for this episode of the Future of Podcasting.
(41:13):
If you have any ideas of what you think production is going to be like, you can go to futureofpodcasting.net and leave a voicemail, courtesy of PodPage.
It's just that easy.
And we'd love to hear your comments that way as well.
But that's going to do it for this episode.
Keep boosting and keep podcasting.
(41:37):
Daniel, Future of Podcasting, episode 56.
What might production be in the future?
We're going to bust out our crystal balls.
All right, maybe not the start we want to go with.
Okay.
Daniel J. Lewis giggled at an almost naughty joke.
(41:58):
I hear everything out of context.