Episode Transcript
Available transcripts are automatically generated. Complete accuracy is not guaranteed.
Speaker 1 (00:04):
Welcome to TechStuff, a production from iHeartRadio. Hey there,
and welcome to TechStuff. I'm your host, Jonathan Strickland.
I'm an executive producer with iHeart Podcasts. And how the
tech are you? Now, y'all, I think I have made
my thoughts pretty clear when it comes to artificial intelligence.
(00:28):
For one thing, AI is a very broad discipline. It's huge.
It's way more than just generative AI, which is a
topic I feel very strongly about, and it's also the
one that dominates the news cycle. But it's just one
aspect of artificial intelligence. And I feel AI in general
(00:48):
has incredible potential to augment our computing tasks if we
implement it properly. Well, recently I got the chance to
work with an AI laptop and really get to grips
with what that potential can be, and I'm convinced we're
on another precipice, one that will transform how we interact
(01:10):
with computing devices. Now, first, the AI-powered device I
used was a Lenovo Yoga Slim 7x laptop with
a Snapdragon X Elite processor. It's a Copilot+ PC,
which means it features Microsoft's AI Assistant. It's also got
an OLED screen, and it's no joke. That's the prettiest
(01:31):
laptop screen I have ever used. The contrast on that
thing is crazy, it's amazing. It's so beautiful. Now, Snapdragon
was generous in sending me this laptop so that I
could actually get some hands on time with it. And
I'll be talking a lot about the work Snapdragon has
done to make the processor really special, but I'm saving
(01:52):
that for a bit later. Now, I do have to
say that this laptop would immediately top my holiday wish list.
And I'm not just saying that. I could be, if
I had no scruples, but that's not who I am, y'all.
It's legit how I feel. This laptop's really light,
it's powerful, the screen, as I mentioned, is absolutely gorgeous,
(02:13):
but the battery life is also really impressive, particularly when
you consider the powerlifting this thing has to do,
even when it's running AI-enhanced applications. Y'all know AI
requires a lot of processing power, but this laptop
continued to run smoothly and I didn't have to go
dashing from outlet to outlet just to keep it going.
(02:34):
But I think the thing that really pulls me to
this laptop is the fact that I can see AI
enabled processors as being the tech platform of the future.
So it's my belief that just as smartphones revolutionized how
we interact with computing resources, you know, like apps as
well as the Internet, AI is going to do the
(02:55):
same thing. Now, before the era of the consumer smartphone,
there were very few people who were predicting a move
to mobile. Now, there were a few smarty-pantses
out there who were ahead of the curve, but most
of us didn't see it coming. Once smartphones proved their
merit in the consumer marketplace, we saw a pretty rapid
transition to a mobile friendly landscape. You know, Web administrators
(03:18):
were scrambling to make sure that their websites were optimized
for mobile devices lest they potentially drive away visitors who
were increasingly using their phones to access the Internet rather
than say, laptop and desktop computers. Meanwhile, developers began designing
apps that would leverage smartphone capabilities, you know, stuff like
(03:38):
accelerometers and touchscreens and GPS sensors, that kind of thing. Well,
I believe AI is going to do much the same.
We're going to see a host of new programs and
apps built with AI enhanced features and devices that are
capable of providing onboard AI processing are going to be
(03:59):
way ahead of the game, while also providing ways to
handle AI processing in a more secure and private way. See,
there are different ways that you can handle AI processing.
One way is you offload everything to a server farm somewhere,
and we hear about these a lot in the news.
You know, they're massive buildings, just filled with racks of
(04:21):
servers processing enormous amounts of data, powering AI implementations all
around the world, and no doubt that will continue, that
will continue to be a thing. But a complementary approach
involves on device and edge computing cases in which the
gadgets that we actually have our hands on can do
some or all of that processing on their own without
(04:44):
connecting to some distant server. Now it all depends upon
the application, of course, but with those types of implementations,
you keep the AI computations on your device, and that
means you're not sharing data with some server farm that's
miles away. Everything stays local, and that is a huge
thing when it comes to privacy and security. Let's say
(05:04):
that you work for a really big media company, for example,
I could be using myself in this example, and you
want to make sure that the work you do stays
local because you're working with some sensitive information, some of
it might be proprietary. You don't want to be sending
that off and have it become some kernel of information
(05:25):
that gets enveloped in a larger database somewhere. That's risky stuff.
So making sure you are able to do this kind
of thing locally is important for a lot of people.
All right, let's talk about the overview for this whole approach.
We'll get into the hows and whys a little bit later,
but first I want to talk about the ways I
used that Lenovo Yoga Slim 7x laptop. So I wanted
(05:49):
to see how an AI enhanced device could potentially help
me do my work. I mean, we talk a lot
about artificial intelligence augmenting our abilities. I wanted to actually
put that into practice. So I use the laptop while
I was researching a recent episode, one that actually has
already published. So I used this laptop, the Yoga laptop,
(06:11):
specifically for that episode, and I really wanted to put
it through its paces and see if it had a
meaningful impact on the way I do work. Now in
this episode, I referenced an extremely long research paper that
was written for the journal Science and Global Security, and
it was an article by Jürgen Altmann. It's actually a
(06:34):
fantastic paper. It was incredibly readable, which is not always
the case for technical papers. If you've ever tried to
read one, sometimes they come across as the most stilted
term paper a teacher has ever had to grade, but not
this one. It's also an accessible paper. You can find
it online for free, so that's great as well. But
it's very long, like it's seventy pages long. Now more
(06:59):
than ten of those pages are just notes and references,
but you still have, you know, fifty-nine pages of
pure content there. Now, I read the full article for
my research, but I needed to have access to the
salient points without having to thumb through all seventy pages.
I could take notes as I read the article so
that I wouldn't have to thumb through a seventy-page article
(07:22):
while writing the episode. But that's not easy to do.
Once you get past like twenty pages, it gets pretty cumbersome.
So I used the AI assistant on the laptop to
create a summarized, bulleted list of the most important notes
in the paper, for quick reference.
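For the curious, here's roughly what that kind of on-device summarization looks like under the hood. This is a minimal sketch assuming the open-source llama-cpp-python bindings and a locally stored model file; the model name, file names, and prompt are illustrative, not the actual Copilot internals.

```python
# A minimal sketch of on-device summarization, assuming the open-source
# llama-cpp-python bindings and a hypothetical local model file. Nothing
# here leaves the machine: the model runs entirely on local hardware.
from llama_cpp import Llama

llm = Llama(model_path="models/local-summarizer.gguf", n_ctx=8192)

def summarize(paper_text: str) -> str:
    """Ask the local model for a bulleted list of the key points."""
    prompt = (
        "Summarize the following paper as a bulleted list of its "
        "most important points:\n\n" + paper_text + "\n\nBullets:\n"
    )
    out = llm(prompt, max_tokens=512, temperature=0.2)
    return out["choices"][0]["text"]

with open("altmann_paper.txt") as f:
    print(summarize(f.read()))
```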
(07:44):
Now, I'm the cautious type, and while I was using
this, I wanted to really check and make sure that it worked. I also
wanted to verify that the notes that were produced were
accurate to the original paper, that they weren't a misinterpretation
or a summary that just wasn't accurate. And obviously that
added more time for me. If I had not had
(08:06):
to do that, I would have been through pretty quickly,
but I needed to check. It's not just enough to
have it create this list, and it really did show
that the summary was accurate to the document I was using,
and that it really was the most important points in
the paper that were included in the summary, and it was
really easy to do. It was easy to navigate to
(08:27):
the actual source for the bullet points so that I
could verify that, in fact, the information was correct, and
it really made organizing my thoughts much faster than it
would have been if I hadn't been able to access
this tool, because while I was taking more time to
verify that the bullets were accurate representations of the information
in the article, it did allow me to organize the
(08:47):
whole approach for the episode. So typically, when I organize
an episode, I do that by feel, and some of
you might be saying, yeah, we know, it's obvious. But
I've been podcasting for sixteen years. I have a sense
of the flow I want for an episode. Now that
doesn't always mean it's the best approach, but it is
the one that just feels natural to me. However, in
(09:10):
this case, it was really nice to consult a different perspective,
even an artificial perspective, to figure out how best to
structure the episode. So if there was an episode in
the recent past that you listened to and you thought, wow,
that's more coherent than what he normally produces, well now
you know why. But keep it to yourself because words
(09:31):
can hurt, y'all. One thing I didn't do, but
could have done, was use real-time translation tools to
access information that was presented in other languages. Once upon
a time, I took courses in French and in German,
but I never got to the point where I was conversational,
let alone fluent in those languages. And of course, over
(09:53):
the years my skills have atrophied. So I speak two languages,
English and Bad English. But I am aware that there
is a wealth of information and knowledge that's captured in
other languages. While there are some pretty darn good translation apps
for stuff like text, the cool thing about AI-powered
devices is that they have the potential and the processing
(10:16):
capability to provide real time translation for other kinds of
content like audio. So one thing I could have done
was watch a video that had been recorded in another language,
and through using onboard AI processing capabilities, been able to
read English-language captions that were translating what was being
said in real time.
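To make that concrete, here's a minimal sketch of local speech-to-English translation, assuming the open-source openai-whisper package. It runs on a saved file rather than truly live, but the core idea is the same: the transcription and the translation both happen on your own machine, with no cloud round trip. The file name is illustrative.

```python
# A minimal sketch of on-device speech-to-English translation, assuming
# the open-source openai-whisper package.
import whisper

model = whisper.load_model("small")  # downloaded once, then runs locally

# task="translate" turns non-English speech into English text.
result = model.transcribe("lecture_in_german.mp4", task="translate")

# Print each translated segment with its timestamp, caption-style.
for seg in result["segments"]:
    print(f"[{seg['start']:6.1f}s] {seg['text'].strip()}")
```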
(10:37):
Now, that opens up entire worlds of expertise that otherwise would be very difficult for me
to access. And I've always said that diversity is really important.
It means you get multiple perspectives providing information, and you
can view the world from different perspectives, including ones that
you might not have ever even considered otherwise. Now, you
might ultimately not agree with this other point of view,
(11:00):
but being able to access it is important. Otherwise you
just remain ignorant of it. So from a research perspective,
real time translation is an enormous benefit, and I imagine
we'll see this technology continue to evolve as well. Tools
can be pretty good at doing things like translating word
for word, but in future implementations, I imagine AI translation
(11:22):
will also have to handle stuff like syntax and idioms
really well, so that we don't just understand the actual
words being spoken, but what the speaker means when they
say those things. If someone uses, like, an idiom or
a regional saying, or they're using really complex phrasing that
doesn't easily translate to English, I can imagine future AI
(11:46):
translation tools handling that and providing a relatable translation to
avoid ambiguity, unless, of course, ambiguity was the intent in
the first place. Sometimes it is. Another thing that I
could have done at the end of the whole episode,
because I have used tools like this before, is use
AI to generate show notes. So, y'all, podcasting is a
(12:07):
lot of work, particularly if you have a small team.
In my case, the team is me and super producer Tari,
who also works on other shows. There are a lot
of steps in making a podcast. You know, you have
pre-prep, you've got prep, you've got research, you've got
writing and recording and editing and publishing. One post-production
(12:31):
step that a lot of shows will skip is the
production of show notes. So why do so many shows
just skip show notes? Well, I can't speak for everyone,
but in my case it comes down to a
mental block. When I finish an episode, after I'm done
speaking my amazing words into a microphone and then shipping
off the file to my producer extraordinaire Tari, I'm ready
(12:53):
to move on. My brain has effectively said, welp, that
closes that chapter, dusted off its hands, and whistled as
it walked into the sunset. So it can be really
hard to stop and reflect on what I just created
and then distill that into useful notes for listeners. But
AI tools can do that automatically. Of course, after creating
(13:13):
the notes, then I would review them to make sure
that again they accurately reflect the episode. But that's one
step in the podcasting process that I would be happy
to hand over to an AI enabled tool, as it
is a step that I otherwise find really tedious and
it actually discourages me from doing my job. You know,
I will find any excuse. I will invent excuses to
(13:36):
put off doing that kind of thing. Now, while all
my research and writing and recording was going on, I
also was using the assistant to keep me up to
speed on my daily schedule. I'm somewhat notorious for missing
things like important emails and meetings and that sort of thing.
(13:57):
I have kind of made it an art form to
make it difficult to reach me because I find it
creates an environment that allows me to focus, right? I
want to really focus on what I'm doing, and that
means I need to filter out distractions because otherwise I
will stop whatever it is I'm working on, and that
just ruins my whole flow. Getting back into that is
(14:20):
hard to do. I found using the AI assistant to
help block out my time so that I had specific
blocks of time where I was doing specific activities made
me overall more efficient and effective. In fact, the very
first thing I asked my AI assistant to do was
to help me create a working schedule, and it did, and it even built in
(14:43):
things like breaks and stuff, and I followed it. I
was like, this is a real experiment. I am going
to follow the schedule that's been created for me, and
I found that it was incredibly helpful. It really added
structure to my day, something that I haven't had a
lot of because I work remotely and mostly on my own,
so structure is something that I have to create and
(15:05):
I'm not great at doing that. So using this tool
to help me to augment my abilities and to take
on a workload that I otherwise would find difficult to do,
that was incredibly helpful and it was a really nice change.
Now I can envision other uses of AI as well,
though I didn't use them for that particular episode. So
(15:26):
for example, creating audiograms. I can already use AI
to do this. I have used AI tools to do this,
but they were cloud-based. And what I'm talking about
here is using AI to identify interesting passages in an episode,
like a section that's really compelling. And you could do
(15:47):
this in different ways, Like you could use AI to
analyze the written work that you create, like in my case,
I write out episodes, right, so I could actually use
AI to analyze what I've written and to identify, oh,
this is a particularly compelling section. Or you can use
it to analyze the recorded audio. Because I also go
(16:08):
off book a lot. I don't just have a script
that I read. I extemporize like crazy. If you read
what I wrote and compared it to what I say,
you would notice that there are a lot of departures.
So using AI, I could analyze the recorded program and
create audiograms, which are those excerpts you sometimes come across
(16:30):
on various social platforms. These are ones that play not
just an audio clip from a show. Typically they'll also
include stuff like real time captions that will help emphasize
the point being expressed. I've had some experience using AI
to generate these, and that includes matching text to spoken
words automatically, so that you don't have to do it
(16:52):
on your own, like you don't have to create an
animation or anything. It does it for you.
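If you're wondering how a tool matches text to spoken words like that, the trick is word-level timestamps. Here's a minimal sketch assuming the open-source openai-whisper package; an actual audiogram tool would feed these timings into its caption renderer, which I'm leaving out. The file name is illustrative.

```python
# A minimal sketch of aligning captions to speech, assuming the
# open-source openai-whisper package. Word-level timestamps are what
# let an audiogram highlight each word as it's spoken.
import whisper

model = whisper.load_model("small")
result = model.transcribe("episode_clip.wav", word_timestamps=True)

for seg in result["segments"]:
    for w in seg["words"]:
        # Each entry carries the word plus its start/end time in seconds.
        print(f"{w['start']:6.2f}-{w['end']:6.2f}  {w['word']}")
```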
Now, the tools I've used, they're not perfect, but really good ones
typically include a pretty easy way to edit the text
so that if you're like me, let's say you have
a little bit of a dialect that occasionally creeps through
your spoken words, then you can review and fix the
(17:14):
little goofs that the transcription might make when you maybe
get a little too southern or whatever. For the individual creator,
these kinds of tools are phenomenally useful. They simplify the
process of promoting your work, and they help creators make
bite sized pieces of their output that are ideal for
(17:34):
social platforms to promote and to send people back to
a full episode, for example. Now, I have the luxury
of working at a major media company, and so in
certain situations, I can actually lean on other people to
help me create these kinds of social assets. But even
in my case, my resources have their limits. I mean,
(17:55):
those departments are supporting tons of other shows, they may
not have the capacity to work with me, and most
other creators don't even have that kind of help at
their disposal to start with. Being able to lean on
AI powered tools can give a creator more opportunities to
find their audience, to stand out in a crowded field. Okay,
back to my personal experiences. One of the big bonuses
(18:18):
of using this Yoga Slim 7x laptop is that, as
the name suggests, the laptop is extremely portable. It is lightweight,
it has an incredibly thin form factor, and if I
felt myself getting restless while I was working in my office,
I could literally just, you know, save my progress, shut the laptop
and carry it upstairs to the living room and then
(18:40):
work on the couch. And my dog found that to
be a fantastic change of pace. And while he's not
quite as good at keeping me on task as the
AI assistant is, I definitely appreciated the change of scenery.
You know, we're gonna take a quick break to thank
our sponsor, but we'll be right back. So let's talk
(19:08):
about this processor for a moment, because that's really ultimately
what makes this experience possible in the first place. So,
first off, Snapdragon obviously has a very long history of
developing processors for mobile platforms, and I believe that gives
Snapdragon some distinct advantages because the engineers and designers are
used to working within tight limitations. I'm talking about tight
(19:31):
limitations when it comes to the actual form factor, the
space you have to work in; tight limitations on how much
power you're going to have access to, how much heat
you can generate, because it is a mobile device and
you're not going to have access to, like, massive fans
or water-cooling systems. And you know, you still need to
get all the processing power in as well. So all of
(19:54):
this sounds like it could be a bad thing, but
in my experience, when you are set with tight limitations,
it can really inspire innovation and creativity because you still
have a goal that you're working toward, right, and then
you just have to think, well, how do I achieve
this goal? And if you've got those limitations, it means
that certain avenues are just cut off, and you have
(20:16):
to really focus on what is possible and then push
the boundary as hard as you can. And so how you
create a processor that provides the compute power needed while
maintaining battery life ends up becoming kind of a guiding principle.
Mobile devices in particular need to conserve battery power, right?
I mean, you don't want to have a smartphone that
(20:38):
has three hours of useful life in it and then
you need to recharge it. But you also need to
make sure that it can actually handle the computational jobs
being thrown at it or else everything's going to feel
sluggish and that's not a good user experience either. So
Snapdragon's approach has been to incorporate different kinds of processors
all on a single chip. So you've got your CPU,
(21:00):
so that's your central processing unit. I think we're all
familiar with those. The microprocessor kind of acts like the
brains of the operation. CPUs traditionally are very good at handling,
you know, like sequential problems, ones that are consecutive problems
where the solution to one calculation feeds directly into the next.
But then you've got your GPU, your graphics processing unit. Again,
(21:21):
I feel like most of us have a handle on
these these days. I remember, I'm old enough to remember
when GPUs didn't exist. They weren't a thing. You would
occasionally get a graphics chip, but it wasn't called a GPU.
That didn't happen until you get up into the nineties, really.
And initially these were built to, as the name suggests,
handle graphics processing. But GPUs have really come into their
(21:45):
own in recent years and have proven to be extremely
powerful when handling parallel processing jobs. So those are computational
problems that can be split into different tasks that a processor
can potentially handle concurrently rather than consecutively, so they solve
parts of problems all at the same time. Now, that
(22:05):
doesn't work for every type of computational problem, but for
that particular subset, GPUs are pretty darn good.
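Here's a toy sketch of that sequential-versus-parallel distinction in code. The chained loop is CPU-style work, where each step needs the previous result; the worker pool is GPU-style work, where independent pieces get solved at the same time. It's an illustration of the concept only, not any vendor's actual hardware or API.

```python
# A toy illustration of sequential versus parallel work.
from multiprocessing import Pool

def step(x: int) -> int:
    return x * x + 1

if __name__ == "__main__":
    # Sequential: each calculation feeds directly into the next,
    # so there's no way to split the work up.
    value = 2
    for _ in range(5):
        value = step(value)
    print("sequential chain:", value)

    # Parallel: the inputs are independent, so workers can solve
    # parts of the problem all at the same time.
    with Pool() as pool:
        print("parallel map:", pool.map(step, range(10)))
```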
But the Snapdragon X Elite processors also incorporate an NPU, and
that's a relatively new technology. The NPU is the neural
processing unit, and that sounds a bit like science fiction,
(22:25):
but in reality, it's a processor that's optimized to handle
AI-related workloads. So think of a highly specialized processor
that is optimized for AI operations. It
handles those kinds of operations at speeds that even powerful
GPUs can't match because they weren't built to handle those
(22:46):
kinds of problems. And a good NPU, a well-designed
NPU, can do this with incredible power efficiency.
So an NPU at a very basic level has an
architecture that is inspired by the network of neurons that
you have in that old gray matter up in your noggin.
(23:07):
This component of the processor is great for another subset
of computational problems, the ones relating to AI. It doesn't
replace the CPU, it doesn't replace the GPU. It enhances
the capabilities of the processor as a whole. And I
think of that as being the ideal use case of
artificial intelligence in general. It's good for an enhancement, it's
(23:29):
good for augmentation.
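In practice, developers usually reach an NPU through a runtime that routes work to whatever accelerator is present. Here's a minimal sketch assuming ONNX Runtime with Qualcomm's QNN execution provider, which is one way to target a Snapdragon NPU; the model file here is hypothetical.

```python
# A minimal sketch of dispatching an AI workload to an NPU, assuming
# ONNX Runtime with Qualcomm's QNN execution provider. The model file
# is hypothetical.
import numpy as np
import onnxruntime as ort

# Prefer the NPU provider when this build of ONNX Runtime offers it;
# otherwise fall back to the CPU.
providers = ["CPUExecutionProvider"]
if "QNNExecutionProvider" in ort.get_available_providers():
    providers.insert(0, "QNNExecutionProvider")

session = ort.InferenceSession("image_classifier.onnx", providers=providers)

# Run one inference; supported operations get routed to the NPU.
frame = np.random.rand(1, 3, 224, 224).astype(np.float32)
input_name = session.get_inputs()[0].name
outputs = session.run(None, {input_name: frame})
```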
Now, we have heard lots of stories about AI potentially
replacing people, and in some cases not just potentially:
leaders actually choosing AI to replace staff. And trust me,
I know these aren't just stories, and I think that
that is a very human problem, and specifically a human
problem that originates at leadership levels at some organizations. But
(23:52):
I think the real sweet spot for artificial intelligence isn't
in replacing humans, and I think a lot of those
companies are finding that out too. Instead, I think it's
augmenting what people can do so that they can do
the things they already do well even better. But they
can also lean on AI to help them with tasks
that they themselves find challenging. Maybe it's the stuff they
(24:15):
don't do so well but that's still kind of part of
their daily tasks. So let me give another example. I'm
a writer and I'm a podcaster, but I am not
a graphics designer. In fact, I find design to be
almost impenetrable. I recognize great design when I see it,
like I can see great design and say, wow, that's incredible,
(24:38):
But if I'm looking at a blank page, it's
like a prison cell to me. I have fallen victim
as well to the trap of reducing the work of
real artists to the idea that they simply have something
that I lack. Right, like there's some spark or gift
that those people have and I don't have it; you're
either born with it or you're not. That is harmfully reductive.
(25:00):
And I've actually had a good friend of mine, a
talented artist, take me aside to talk about this and
set me straight. He was very direct but polite about it,
and he described to me his experiences learning art and
developing his craft and practicing his skills, and he explained
that reducing art to some sort of almost mystical gift
(25:21):
is an insult considering the countless hours he and other
artists have poured into their work in order to get
to where they are. That really struck me and made
me look at what he did in a different way.
But at the end of the day, I don't have
those skills. Now, potentially I could develop such skills if
I gave them enough time and effort and practice
(25:43):
to develop them. But let's be realistic. There are a
limited number of hours in the day, and I have
responsibilities that I have to meet. The likelihood that I
can make the time to practice a new skill and
reach a level of skill that would be considered professional,
that's pretty low. So I need help, but I don't
have an assistant. I don't have a graphics department that
(26:05):
specifically reports to me. I have one that I can
share with everybody else, which means they don't always have
availability for me. So what if I need to put
together a presentation. Well, I could use a standard format
in a presentation software package. That's kind of a dead giveaway,
right? Anyone who has ever sat through a presentation
has said, oh, I recognize that layout immediately, like I
(26:28):
know exactly which default layout you used. Plus, while I'm
a decent writer, boiling things down into slides is not
my strong suit. This is an area where an AI
powered assist would be incredibly valuable to me. I would
still be doing the work, keep that in mind.
I'm not laying the work off onto the AI. I've created
(26:48):
all this work. Putting together the content of my presentation
was the main part of the job. But the AI
assistant can help me lay out a presentation and design
it so that it looks great, flows well, and most importantly,
that my key points are summarized in a way that
is effective on the screen. No one wants to sit
down to a presentation only to see a slide that
(27:11):
looks like it's a dissertation. I'm sure you have all
done that, where you've gone in and one slide is
just a wall of text. That stinks. It's not good design,
it's missing the point entirely. It's using the presentation for
the wrong reason. And I think the key to using
AI in an ethical way is all about boosting your
own abilities, not fabricating something out of thin air. The
(27:34):
art still has to come from the artist, The content
still needs to come from the creator. The words need
to come from the writer. The AI's job is to
add some polish and to help organize thoughts and help
the human make stuff that has the most powerful impact
upon their intended audience. Now, on a personal side, I
(27:54):
am starting to dip my toe into stuff like AI
powered photo tools. So I like taking pictures of my
aforementioned dog. His name is Timbolt, and he's a joy.
But it can be something of a challenge to get
a really good photo of Timbolt while I'm walking him,
because he's always on a leash, which means I always
have one hand holding the other end of that leash,
(28:17):
and meanwhile I'm fumbling with my smartphone in an effort
to take a photo of him. I can't tell you
how many times I've taken a shot that I thought
at the time it was gonna look really good, but
then the framing is off, or I caught my dog
just as he was looking the other way. You know,
just after he was looking at me, or the ding
dang dern leash is ruining everything and it's in the
(28:38):
way of photos or video that I'm trying to take. Now,
some of that can be fixed with AI enhanced photo
taking tools. For example, imagine that you open up your
camera app and you go to take a photo of
your pet, and you call out to your pet and
you're trying to get its attention, and it looks around,
and briefly, as it's looking around, it glances at you
(28:59):
before it bounds off to another pet adventure or whatever.
One cool feature I want to play with in the
future is one I saw at a Snapdragon presentation recently
for the Snapdragon 8 Elite processor, and it's a tool
that will snap a picture when your pet is actually
looking at you, so you get that great eye contact.
(29:19):
It does like a burst photo mode where it'll take
a bunch of pictures, it will select the best one,
and it'll even do a little AI enhancement for fur management,
which again I love. Or imagine using it to capture
the perfect moment as your dog is catching a frisbee
or your cat is leaping in the air to play
(29:39):
with a toy. You don't have to count on your
own reflexes to snap the photo. I really like that,
and I look forward to getting a phone that can
actually do this in the future. For now, I guess
I'll continue fumbling, but I know something better is right
around the corner. Now, with photo editing, I like having options
options to do things like remove objects from the frame
of photos and video like that darn leash. It's not
(30:03):
altering the photo in a fundamental way. It's just removing
something that I considered to be a distraction. Now that's
something that I potentially could do myself with photo editing
tools if I developed the skill set to do it.
But it's not something I could do right now. I
mean I could try, but it would look terrible. You'd
say something like, well, yeah, you got rid of the leash,
(30:25):
but what the heck is this band of blurry pixels
doing throughout your photo? Because I would have done a
bad job. With tools like a video object eraser, I
could do this and have it automatically remove the leash
even with videos, all with the processing that's happening native
to the device I'm using now. To be clear, these capabilities, again,
(30:45):
they've been around for a bit, but they have almost
always relied upon cloud processing, and that slows everything down,
and that means fewer people are going to use it
and be able to take advantage of it, moving that
compute power to the actual device. By having these AI
enabled processors, it not only speeds things up, but again
(31:06):
it means you're not sending your data up to some
server farms somewhere in the process. Tools like co Creator
end up giving me options that I would otherwise be
too intimidated to try on my own because I'd be
worried I'd just ruin the photo. One application I haven't
had a chance to play with yet, but I'm really
interested in is AI-enhanced digital audio workstation programs, or
(31:28):
apps if you prefer. I'm old, so for me, everything's programs,
but I recognize that the terminology these days really tends
to be apps. So during the lockdown era of the pandemic,
which really wasn't that long ago but feels like a lifetime, I,
like a lot of other people, picked up a new hobby,
and for me it was learning guitar. Also side note,
(31:49):
it's true what they say: buying your first guitar ends
up being a gateway. Because now I own three electric guitars,
one acoustic guitar, one electric bass, and two cigar box guitars.
I do have a problem, and I'm not even gonna
talk to you about the ukuleles. Anyway, while my guitar
collection has been growing, one thing I haven't really explored
(32:12):
is things like effects pedals. I have one effects pedal
and I haven't really played with it that much, but
I love hearing the output of different effects pedals when
I watch videos online. But like photo editing, I don't
really have any experience with using these kinds of pedals,
(32:32):
and I find it intimidating to even dive into that world.
I'm worried that I would just buy something that wouldn't
really work for what I was trying to achieve. Well,
Snapdragon and Microsoft have been working to create low-latency
ASIO (that actually stands for Audio Stream Input/Output)
drivers for musicians. All right, so this is gonna get
(32:55):
really nerdy from both a technical and a musical side.
So pardon me as I geek out about this because
it's the convergence of two worlds that I love very much.
So there are USB audio interface devices that are already
out there on the market, And what these devices do
is they accept inputs from stuff like musical instruments or microphones.
(33:19):
So you plug your instrument or your microphone into this
audio device. Then you connect the audio device to a
computer using a USB port in this particular case, and
that lets you capture or manipulate the audio coming from
your device directly into your computer. Now, essentially it's a
way to set up a recording studio that's pretty darn portable,
(33:41):
whether you're doing music or podcasting or whatever. Now, in the
olden days, which honestly wasn't that long ago, to
get the most out of an ASIO interface, you had
to specialize, so they were largely device specific interfaces, and
this was to optimize for the purposes of capturing audio.
(34:03):
So you would have to have multiple ASIO interfaces if
you wanted to work with different types of instruments and
microphones and such. A general purpose USB audio interface just
wasn't realistic for a long time because you would see
a decline in performance or you would have latency issues,
both of which are not good news. Like, if you
(34:25):
have latency problems, I can't express to you how hard
it is to compensate for that, because if you're playing
something, you're hearing what you played after you've actually
strummed the string and moved on to the next chord.
So what Snapdragon and Microsoft, along with Yamaha, have done
is create a driver that leverages the Snapdragon processing power
(34:46):
to provide high-quality and low-latency capture support. With
the appropriate DAW program (DAW stands for Digital Audio Workstation),
you could play guitar directly into your PC for capture,
or use tools to create all sorts of effects that
you might otherwise only manage if you had an entire
panel of pedals at your disposal. You could even create
(35:08):
the effects of different types of amps. So maybe there's
a specific kind of amp that gives an output that
you really want. Well, there are digital audio workstation programs
out there that can simulate those amps as well as
various pedals. Obviously, the features that you have access to
are going to depend entirely upon which DAW program you're
(35:29):
actually using. But the point I'm trying to make is
that this technology enables that kind of feature. That processing
power, which cuts down on latency while ensuring
high-fidelity audio quality, is what makes it possible.
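To sketch what low-latency capture looks like from the software side, here's a minimal example assuming the open-source python-sounddevice package (a PortAudio wrapper), not the actual Snapdragon/Yamaha driver. It just routes the instrument input straight back to the output; a DAW would run its amp and pedal simulations inside that callback. The sample rate and buffer size are illustrative.

```python
# A minimal sketch of low-latency audio monitoring, assuming the
# open-source python-sounddevice package (not the actual ASIO driver).
import sounddevice as sd

def passthrough(indata, outdata, frames, time, status):
    if status:
        print(status)      # report any buffer under/overruns
    outdata[:] = indata    # amp/pedal simulation would happen here

# A small block size plus the 'low' latency hint minimizes the delay
# between strumming a string and hearing it back.
with sd.Stream(channels=1, samplerate=48000, blocksize=64,
               latency="low", callback=passthrough):
    print("Monitoring... press Enter to stop.")
    input()
```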
In fact, that's really the key to my whole point of view
about the AI enabled processors. They provide opportunities for developers
(35:50):
to tap into incredible processing power in order to achieve
unprecedented results. And it's hard to talk about what these
apps will be able to do because anything I say
is likely to not even come close to what people
already have in mind. Another program I learned about that
I haven't used yet but am eager to try is one
that reminds me of the video object removal tools I
(36:13):
mentioned earlier, except you could do it for audio. So
remember how I was talking about how an AI enhanced
video editing tool could potentially remove unwanted elements from a video.
Let's say you had a cute video of
your dog barking at Halloween decorations. But let's say the
video also has these annoying homeowners in the background that
(36:35):
are just scowling at your dog. Not that I'm speaking
from actual personal experience that I had not very long ago. Well,
with the AI enhanced video editing capabilities I had talked about,
you could just remove those sourpusses and focus on how
adorable your dog was. But now imagine something similar, except
for audio tracks. So let's say you've got a song file,
(36:58):
but maybe the bassline just isn't doing it for you.
So you use a tool called DJ Neural Mix,
and you identify and remove the bassline and it's just gone.
Everything else is untouched, but the bassline is gone.
Typically, to do this you would need access to, like,
master recordings in order to be able to remove a
(37:18):
specific track, right? Like, the bass would be recorded to
one track and you would just bring that track down.
But usually you don't have access to the master recordings.
Usually you get a mixed file, right? It's already been
mixed together, and it's not like you can easily unmix it.
Typically not without the power of AI anyway. But with AI,
(37:40):
the DJ Neural Mix tool can isolate, say, the bassline
and separate it from the rest, and you could do
that with anything. It wouldn't just be the bassline. You
could do it with the vocals or the drums, whatever
it might be, which means you could also use this
tool to do what DJs do, namely remix music and
create new works.
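Here's a minimal sketch of that kind of stem separation, assuming the open-source Demucs model rather than DJ Neural Mix itself, whose internals aren't public. Demucs's two-stems mode splits a mixed file into one named stem and everything else, which is exactly the remove-the-bassline trick.

```python
# A minimal sketch of AI stem separation, assuming the open-source
# Demucs separation model (not the actual DJ Neural Mix internals).
import subprocess

# Split the mix into the bass stem and everything else.
subprocess.run(["demucs", "--two-stems=bass", "song.mp3"], check=True)

# The output folder (under ./separated/ by default) now holds bass.wav,
# the isolated bassline, and no_bass.wav, the same song with the
# bassline gone. Swap "bass" for "vocals" or "drums" for those stems.
```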
(38:01):
That's at the very heart of the transformative nature of DJing. It's a powerful capability, and
again that's one that would be really hard to do
without the AI component, or, you know, access to those
master recordings, or somehow having the magic keys. So
it gives DJs a lot of freedom to experiment with
different mixes. So maybe you think the drum track from
one song would actually sound amazing against the guitars and
(38:24):
vocals of a totally different song. Well, you could use
a tool like this one to isolate all those components
and then remix them together and maybe you would end
up with a really awesome groove, or maybe it would
be a big mess and you need to go back
to the drawing board. With me, it's more likely to
be the second one. But you know, you get the idea.
There's so much more I could cover here. The Yoga
Slim 7x laptop experience I had was impressive, but the
(38:46):
crazy thing is I see it as just the starting point.
I think the real aha moment for a lot of
people out there will hit when they get a chance
to see how AI enabled devices will enhance what they
are already doing, whether that's work or planning out a
vacation, or editing and organizing photos and videos, or accessing
a new generation of apps that function unlike anything we've
(39:09):
experienced before. The creativity is still going to originate with
the person who's behind the keyboard. That's where the heart
of all of this comes from: the person. I
still firmly believe that AI is never going to be
a replacement for human ingenuity. The genius resides in you,
the user. But I do think that AI can help
each person unlock options that otherwise would just remain out
(39:32):
of reach, and to me, that's the most exciting thing.
So yeah, I guess I'm saying my experience with the
Snapdragon X Elite processor and the Yoga Slim 7x laptop
really impressed me, and I can't wait to see what's next.
That's it for this episode of TechStuff. I hope
all of you are well, and I'll talk to you
again really soon. TechStuff is an iHeartRadio production. For more
(40:02):
podcasts from iHeartRadio, visit the iHeartRadio app, Apple Podcasts, or
wherever you listen to your favorite shows.