Episode Transcript
Available transcripts are automatically generated. Complete accuracy is not guaranteed.
Speaker 1 (00:07):
Voices of Video.
Today I am joined by Steph van der Ziel, who is the founder of Jetstream, and let me
(00:29):
say this about Jetstream: if you haven't heard of them or you're not familiar with the company, they have some really interesting numbers that will probably pique your attention, and some of these I know Steph is going to be talking to us about today.
But imagine this: a 40% TCO reduction is what they're able to give their clients, the users of their network, of their platform.
(00:52):
They have 100% uptime.
They have been GDPR audited, which is very significant, especially if you're streaming and delivering content in Europe.
They're 430% faster. Now this one, even I had to raise my eyebrow a little bit and go, hmm, let's see how they do that.
(01:15):
So I know Steph's going to get into the details about how the system is built and what that actually means. They support 8K. That's pretty unique. And they have integrated with nine CDNs. And so that's a little overview of Jetstream. Steph, welcome
(01:35):
to the Voices of Video.
Speaker 2 (01:37):
Hey Mark, thanks for having me on your show.
Speaker 1 (01:41):
Really cool. So I have to ask you a question: did you really invent streaming, as you claim? Tell us the story.
Speaker 2 (01:49):
That's the claim.
Yeah, so we started producing our first live stream in 1994, so that's quite early in the internet days. Most people did not even know about the internet and we were doing live streaming. And it wasn't full HD or 4K, it was a small postage-stamp size at one frame per second, but it was
(02:09):
live, and we don't know anyone else who was before us. So, yeah, we were the first, as we hope to be able to claim.
Speaker 1 (02:20):
Give us a short overview of the company. How did you jump from 1994 to 2023? And then we'll get into talking about, I know, a very interesting use case.
Speaker 2 (02:33):
Of course, in 1994 it wasn't a company. We started the company, I think, in 2003, after pioneering for many years with live streaming. We broke down the internet here in the Netherlands by having so many viewers; we overloaded the internet.
And so in 1997, we built our first CDN, which was a temporary CDN, to do a large-scale live event, and then we broke it
(02:54):
down after the event was over, and then for the next event we built up a new CDN. That's how we did it back then.
So we were first in production. We were really going with cameras and encoders to locations, doing the entire streaming. But at one point I said: that's not really scalable. Let's build a streaming platform. We have so much knowledge and experience now. Let's build a streaming platform as a permanent platform.
(03:17):
And that's what we did, and I think the company grew 1,800% in the first five years. So it was a boom. Streaming became hot when people got broadband.
Basically that's how it started.
And then we started to develop our own software, because we were running, you know, Windows Media, Real, Icecast and QuickTime Streaming Server, and manually helping customers.
(03:40):
But because of the size, we needed to develop something that automated provisioning: customer provisioning, live stream provisioning. So that's what we developed, a piece of software we called Video Exchange, because we could exchange videos with customers. It did workflow orchestration and edge service deployment and all that stuff.
(04:00):
And it was really cool technology and we started to license it out to telcos so they could build their own CDNs. But after a few years, we decided to go SaaS all the way, just host our software and scale it up. And where we are today is: we run our own cloud, it's called Jetstream Cloud, of
(04:21):
course, and yes, it's 430% faster.
We tested customer streams, full HD streams, over multiple clouds and CDNs against our own cloud, you know, the video chunks of one, two or four seconds, for instance. Yes, the download time from our cloud is more than 400%
(04:43):
faster compared to regular clouds, because we're not a generic cloud, we are an optimized streaming cloud, so we don't use virtualization. The network stack is optimized. We have hardware-software integration with the caches, so everything is tuned to be able to burst out those video chunks as fast as possible.
At one point you can ask: what's the point of delivering video
(05:07):
chunks at 50 times speed instead of 25 times or 5 times speed. But once you go into 8K territory, that's going to make sense.
You want to be able to burst out very large pieces of video to those users. And another benefit is that when you can burst out those video chunks at the highest bitrate to the users faster, the video players will analyze how fast they get the
(05:30):
chunks in.
So the chances are higher that you go to the highest bitrate in the ladder compared to other clouds and CDNs.
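To make the player-side effect concrete, here is a minimal sketch (not Jetstream's or any particular player's code) of throughput-based adaptive-bitrate selection: the player measures how fast recent segments arrived and picks the highest rung of the ladder that the measured bandwidth can sustain.

```python
# Minimal sketch of throughput-based ABR rendition selection.
# Hypothetical ladder and timings; real players use more elaborate estimators,
# but the principle is the same.

LADDER_KBPS = [400, 1200, 2500, 4500, 8000]  # bitrate ladder, lowest to highest

def measure_throughput_kbps(segment_bytes: int, download_seconds: float) -> float:
    """Throughput observed while downloading one media segment."""
    return (segment_bytes * 8 / 1000) / download_seconds

def pick_rendition(recent_throughputs_kbps: list[float], safety: float = 0.8) -> int:
    """Pick the highest ladder rung the measured bandwidth can sustain.

    A safety factor keeps headroom so the buffer does not drain on short dips.
    """
    budget = min(recent_throughputs_kbps) * safety   # conservative estimate
    viable = [b for b in LADDER_KBPS if b <= budget]
    return viable[-1] if viable else LADDER_KBPS[0]

# The same ~1.25 MB chunk (4 s at 2.5 Mbps) delivered in 0.2 s versus 2.0 s:
fast = [measure_throughput_kbps(1_250_000, 0.2)] * 3   # served from a nearby cache
slow = [measure_throughput_kbps(1_250_000, 2.0)] * 3   # same chunk over a slower path
print(pick_rendition(fast))   # -> 8000 (top rung of the ladder)
print(pick_rendition(slow))   # -> 2500 (player stays lower on the ladder)
```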
Speaker 1 (05:37):
I want to get into a use case that I know you're building solutions around and deploying, that I think is pretty fascinating. And it's this whole idea of a localized OTT network. You know, when I first heard the phrase from you, and of course we're collaborating on a couple projects, I drew my own
(06:00):
idea of what it actually meant. But it turns out there's quite an interesting application. So first of all, tell us: what is a localized OTT network and why does it exist?
Speaker 2 (06:14):
It's funny, because when you talk about OTT, you talk about world domination, right? You have Netflix and HBO conquering the world and reaching every piece of every continent, and that's what we try to help our customers with as well, because we have all these CDNs on board so we can deliver into every continent, and if one CDN doesn't perform, we can switch over.
(06:34):
But there are some gaps there. I mean, this is about global streaming, but there's another market, and that's localized markets. What about really local, remote areas like compounds in the desert? You know, you can have like a thousand or two thousand households in a location, but the backhaul connection can be
(06:56):
really poor, but still those people want to be able to watch a decent OTT stream. It's not just that use case, it's also offices, hospitals, hotels, holiday parks, university dorms. People want to watch television on their cell phone or iPad.
But if 10 people are doing that, that's fine for the local
(07:18):
network, but what if thousands are doing this? And what if the backhaul isn't really strong enough to even get, you know, like 80 high-quality, full HD streams in? So then you need local encoding, local transcoding and local serving, and that's what we developed.
We call this deep edge OTT. It's an appliance and we built it together with your technology
(07:42):
, with NetInt, and it's a one rack unit, so you don't have to buy like five huge servers anymore with CPU encoding. It's hardware accelerators with our software, and it's actually got three components. You know, it's a live encoder, it's a transmuxer to create HLS and DASH, and it's a deep edge cache.
(08:03):
So it's a local cache, so you can actually use it also to serve out the streams to the local users. You don't need an extra cache or CDN there.
And the challenge in those locations is that it's not just a remote location with a poor internet connection. Most of the time there is not that much local physical capacity and power as well.
(08:24):
So we needed this solution. We actually had a customer come to us and they said: we have these compounds and gated communities in our countries and we need to have local IPTV there, but hundreds of stations per location. And if you talk about hundreds of streams you need to convert on a CPU basis.
(08:46):
You talk about deep investments in hardware and CPU power. That's right, but also in energy consumption. And the solution we created can convert 80 live HD channels with just this one chassis, which is pretty cool, 20 times faster and more efficient compared to CPU encoding, and it can serve up to
(09:08):
10K, you know, 10,000 concurrent viewers, with just this machine, which is pretty cool.
And I think one of the most important USPs is savings. You know, we've been talking to these customers and they said: yeah, but if we need to buy this hardware, all these servers with the CPUs, there's no business case. We will spend like 80K per location and the energy bill
(09:29):
will be higher after so many years.
Speaker 1 (09:31):
Yeah, that's right.
Speaker 2 (09:39):
I mean, with the current energy prices, that's really a challenge. I think after three or five years you're spending more on energy than on the physical encoder, and that's a problem.
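As a rough back-of-the-envelope illustration of that crossover (all figures below are hypothetical, not Jetstream's or NetInt's numbers):

```python
# Hypothetical illustration of the "energy overtakes hardware cost" crossover.
# Every number here is made up for the example; substitute real quotes and tariffs.

hardware_cost_eur = 20_000        # assumed price of a CPU-based encoding stack
power_draw_kw = 2.0               # assumed continuous draw of that stack
energy_price_eur_per_kwh = 0.30   # assumed electricity tariff

hours_per_year = 24 * 365
annual_energy_cost = power_draw_kw * hours_per_year * energy_price_eur_per_kwh
years_to_crossover = hardware_cost_eur / annual_energy_cost

print(f"Energy cost per year: {annual_energy_cost:,.0f} EUR")
print(f"Energy bill exceeds hardware cost after ~{years_to_crossover:.1f} years")
# With these assumptions: ~5,256 EUR/year, crossover after roughly 3.8 years.
```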
So the cool thing about this solution is, and I'm just reading it, 72% lower capex, so per stream you spend 72 percent less on investment for encoding that stream. And that's just the hardware. We did not even take licensing,
(10:00):
you know, software licensing and so on, into this. And it's using 89% less power compared to CPU-based encoding, which is also really impressive. And the physical footprint is just a one rack unit server compared to a stack.
This is what the interface looks like. I'm not sure if you can see it, but on the left side you have a
(10:20):
config page where you can create those channels. You know, you can get MPEG transport streams in, or SRT or RTMP, or you have a config file, a JSON, which you can remotely deploy on the server to update the channel configurations.
And on the right side it's got a built-in video player for HLS and DASH so you can preview if the stream is OK, and there's a URL which you can publish in, like, a middleware service or a
(10:42):
portal or wherever you'd like.
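The exact schema of that JSON is not shown in the interview, but conceptually the per-channel configuration pushed to the appliance could look something like this (all field names are hypothetical, purely to illustrate the idea):

```python
# Hypothetical shape of a remotely deployed channel configuration.
# Field names are invented for illustration; this is not Jetstream's actual schema.
import json

channel_config = {
    "channels": [
        {
            "name": "news-hd",
            "input": {"protocol": "srt", "url": "srt://10.0.0.5:9000"},
            "transcode": {
                "codec": "h264",
                "ladder": [
                    {"resolution": "1920x1080", "bitrate_kbps": 4500},
                    {"resolution": "1280x720", "bitrate_kbps": 2500},
                    {"resolution": "854x480", "bitrate_kbps": 1200},
                ],
            },
            "packaging": {"formats": ["hls", "dash"], "segment_seconds": 4},
            "dvr_window_minutes": 60,
        }
    ]
}

# Pushing a file like this to the appliance (or to many of them) is what adds,
# removes or reconfigures channels without anyone touching the box on site.
print(json.dumps(channel_config, indent=2))
```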
And normally if you start trading off quality for power consumption or cost, you will go down in quality. But we checked it against the CPU encoders we already have, and the hardware encoding quality was up to par with the software
(11:05):
encoding, which is really cool, and it can go up to 4K encoding. So yeah, you can get ultra-high-definition quality out of it. And also, addressing codec standards: we will use H.264 because that's widely adopted, but it can also do H.265, which is even more efficient. And the latency is really small. We've seen sub-three-frames latency in the transcoding,
(11:30):
while with CPU encoding you can talk about seconds.
Yeah, of course, and of course with software encoding you can do some tuning, like forward checking, but then the latency will just explode. So yeah, I think this is a really cool solution for OTT live.
And there are actually three components in there. The first one is: everything we do, we build it in Kubernetes and
(11:53):
Docker or container environments, because we run our own cloud and everything we build on our cloud is containerized with Docker. So, you know, you have isolated, self-restarting services, so it's low maintenance, high uptime. And we have our software encoding services running on top of that, so they're running within the Kubernetes environment.
(12:14):
And then we use the NetInt acceleration cards, so you know, the ASICs, to do the real high-performance encoding, and that's a great combination. This is the schematic. I can put it on full screen for you so you can see it better.
And on the bottom there's just an open x86 chassis with 10 of
(12:34):
those ASIC cards, and I believe they're just using like 7 watts per card, so the energy is really low.
And then on top of that we run, of course, the operating system and Kubernetes, and then every transcoding process is an isolated container. So if one crashes it doesn't bring down all the other channels, and if it crashes, the Kubernetes system automatically
(12:57):
restarts it within the buffer size of the user. So you don't even see that there was a restart or a crash or whatever.
So it's really rock solid.
Very stable. And there's an edge cache running on top of it.
And then, of course, we have some API and GUI stuff running on top of that. So it's like a hybrid solution between hardware accelerators
(13:19):
and this software stack to do this stuff.
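One way to see why such a restart can be invisible to viewers: the player is always holding a few segments of buffer, so as long as the crash is detected and the container is rescheduled within that buffer window, playback never stalls. A minimal sketch with assumed durations:

```python
# Illustration of the "restart within the player's buffer" idea.
# All durations below are assumptions for the example.

segment_seconds = 4            # chunk duration produced by the packager
segments_buffered = 3          # segments a typical player keeps ahead of playback
buffer_window = segment_seconds * segments_buffered   # 12 s of playback in hand

crash_detect_seconds = 2       # assumed time for Kubernetes to notice the failed pod
container_restart_seconds = 5  # assumed time to reschedule and restart the transcoder

downtime = crash_detect_seconds + container_restart_seconds
print(f"Player buffer: {buffer_window}s, encoder downtime: {downtime}s")
print("Viewer sees a stall" if downtime > buffer_window
      else "Restart is invisible to the viewer")
```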
And this is basically the process. I can put it on the full screen as well. On the left side you have transport streams, MPEG-2 or MPEG-4, or SRT or RTMP, coming in. We scale it in software, and then we decode it in hardware and encode it in hardware, and then the packetizing is done in
(13:41):
software.
Again, the chunks and the manifest files are stored on the local storage, with a DVR window if you need this, and then there's an edge cache. Actually that's two, for redundancy reasons, with a lot of streaming optimizations like anti-thundering, you know, thundering herd protection, smart
(14:04):
caching, stuff for live streaming.
Yeah.
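As a toy model of the DVR window mentioned above (purely illustrative, not Jetstream's implementation): the packager keeps appending segment entries, and anything older than the DVR window simply ages out of the playlist that the edge caches serve.

```python
# Toy model of the packaging/storage stage: segments are appended to a playlist
# and old ones age out of the DVR window. Illustrative values only.
from collections import deque

SEGMENT_SECONDS = 4
DVR_WINDOW_MINUTES = 30
MAX_SEGMENTS = (DVR_WINDOW_MINUTES * 60) // SEGMENT_SECONDS

# Oldest segment references drop off automatically once the window is full.
playlist: deque[str] = deque(maxlen=MAX_SEGMENTS)

def on_segment_ready(channel: str, rendition: str, seq: int) -> None:
    """Called each time the packager finishes a chunk for one rendition."""
    playlist.append(f"{channel}/{rendition}/seg_{seq:06d}.m4s")

for seq in range(3):
    on_segment_ready("news-hd", "1080p", seq)
print(list(playlist))   # manifest entries the local edge cache serves to players
```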
So it's one solution that can just do this, and what I like about it is that you can cluster it. You can have multiple of those servers in one site, because if you go from 80 to like 160 channels, you can just add a server and it will just automatically load-balance. That's the beauty of Kubernetes.
(14:24):
It will just say: oh, there's a new server, let's just share the load of the encoding, but also share the load of the viewers over the machines. And if you put in three, you have a high-availability service, so one machine can entirely break down but everything will just automatically keep working, and that's nice.
I mean, you don't want to have an operator in a hotel, or in
(14:44):
every hotel facility, that has to restart the machine. Yeah, that's right. So yeah, it's got scalability and high availability.
And then we also thought about orchestration, because you know, if you have one location, that's fine, but what about two or five or 20 or 100? We can monitor it from a central facility, from our cloud
(15:06):
, so you can have all these satellite encoding locations.
And then, centrally, you can deploy the JSON file for the configuration. So if you need to add new channels, you don't have to call the hotel and say: hey guys, just add this channel and put in this RTP link, whatever. No, you just update the JSON file, push it, and boom, there is another channel.
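The push mechanism itself isn't described in the conversation, but the orchestration pattern is straightforward: keep one JSON configuration centrally and deploy it to every satellite appliance. A hedged sketch, where the endpoint, site names and payload are assumptions for illustration:

```python
# Hypothetical illustration of pushing one channel-config JSON to many
# satellite appliances; the URL scheme, site IDs and auth are invented.
import json
import urllib.request

SITES = ["hotel-amsterdam", "compound-riyadh", "campus-groningen"]  # example site IDs

def push_config(site: str, config: dict) -> int:
    """PUT the channel configuration to one appliance and return the HTTP status."""
    req = urllib.request.Request(
        url=f"https://deepedge.example.com/{site}/channels",  # hypothetical endpoint
        data=json.dumps(config).encode(),
        headers={"Content-Type": "application/json"},
        method="PUT",
    )
    with urllib.request.urlopen(req) as resp:
        return resp.status

# config = {...}  # same JSON structure sketched earlier
# for site in SITES:
#     print(site, push_config(site, config))
```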
Speaker 1 (15:24):
That's the idea,
there it is.
Speaker 2 (15:26):
Wow.
And then we thought: okay, we can go even further, because you know the machine is logging and creating log files, and we can actually push those log files as a stream of data to our cloud and then process them into statistics. So you can have, of course, an Elastic system
(15:51):
running in this cloud, processing and crunching, going through all the log files and making sense out of the sessions.
So then you can have reports like: how many views do I have per location, what's the most popular channel, what's the average viewing time, and from which cities or countries are people watching those streams? And all the data is there as well.
So you can think of new business models: okay,
(16:14):
which channels do I not want to pay for anymore because they're not watched, and which channels get more. You know, you can build those kinds of cool things.
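As a toy illustration of the kind of aggregation that turns those session logs into such reports (the log fields below are assumed, not Jetstream's actual log format):

```python
# Toy aggregation of playback-session logs into per-location reports.
# The session fields are assumptions for illustration only.
from collections import Counter, defaultdict

sessions = [
    {"location": "hotel-amsterdam", "channel": "news-hd", "seconds_watched": 640},
    {"location": "hotel-amsterdam", "channel": "sports-1", "seconds_watched": 1800},
    {"location": "compound-riyadh", "channel": "news-hd", "seconds_watched": 300},
]

views_per_location = Counter(s["location"] for s in sessions)
watch_time_per_channel: defaultdict[str, int] = defaultdict(int)
for s in sessions:
    watch_time_per_channel[s["channel"]] += s["seconds_watched"]

most_popular = max(watch_time_per_channel, key=watch_time_per_channel.get)
print(views_per_location)   # views per location
print(most_popular)         # most-watched channel, e.g. to inform licensing decisions
```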
And finally, last but not least, we can actually put it into a mix. So let's say you have like two or three CDNs serving out capacity. We can add those machines at those locations to have extra
(16:36):
local capacity, so deep edge capacity in certain locations. So if traffic comes from a certain hotel or from a compound or from a location like an office,
we can recognize down to the IP address that the traffic is coming from that location and then prioritize traffic through
(16:57):
the deep edge.
So it's not just the streams from the local machine that it can serve out; it can also become an edge for the cloud and for the CDNs, so we can have a localized edge server running there. We can also offload internet streams and on-demand videos. So yeah, that's the idea.
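Recognizing that a request comes from a given compound or hotel and steering it to the local appliance is, at its core, a prefix match on the client IP. A minimal sketch, with invented prefixes and hostnames:

```python
# Minimal sketch of steering viewers to a local deep-edge cache by source IP.
# Prefixes and edge hostnames are invented for illustration.
import ipaddress

DEEP_EDGES = {
    "203.0.113.0/24": "edge.hotel-amsterdam.example.net",
    "198.51.100.0/24": "edge.compound-riyadh.example.net",
}

def pick_edge(client_ip: str, default_cdn: str = "cdn.example.net") -> str:
    """Return the local deep-edge host if the viewer sits behind a known site."""
    ip = ipaddress.ip_address(client_ip)
    for prefix, edge_host in DEEP_EDGES.items():
        if ip in ipaddress.ip_network(prefix):
            return edge_host
    return default_cdn   # otherwise fall back to the regular CDN mix

print(pick_edge("203.0.113.42"))   # -> edge.hotel-amsterdam.example.net
print(pick_edge("192.0.2.10"))     # -> cdn.example.net
```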
Speaker 1 (17:16):
Super powerful, and there's a lot to unpack in what you just presented. Thank you, a fabulous overview, by the way. One of the things just to point out, and I think everybody got this, but we're talking about 24/7 live channels, so this is like linear streaming. This would be impressive even if it were file-based VOD, but do
(17:45):
you have the notion that you can support both VOD and linear live on the same system? Or do you have a different way that you handle that? How do you handle file-based delivery?
Speaker 2 (17:59):
This deep edge OTT solution was built for live, so the use case is: we have so many live channels from satellite or cable, we need to turn them into OTT streams like HLS and DASH. But of course, for the transcoding part, we also do VOD transcoding in our cloud.
Speaker 1 (18:19):
That's right, because of the cloud DVR functionality.
Speaker 2 (18:23):
It's not just a DVR, it's also live-to-VOD. I mean, we have customers recording live streams and then offering them as on-demand videos.
Or we also have a lot of customers in our cloud who are not doing live streaming at all. They just have an on-demand video library, like an OTT service or a marketing video library for an enterprise, and they want
(18:44):
to upload these videos and they have to be transcoded as well, and we use the same technology stack for this.
So we started with software. You know, we have our own cloud with the CPUs and we thought: okay, if we build this transcoding software, we can utilize the CPU power in the cloud. But we will switch to hardware transcoding. At least, yeah, a lot of the transcoding will be done
(19:09):
by hardware in the future, because it's so much more efficient and scalable compared to CPU-based encoding.
Speaker 1 (19:16):
I was going to ask you about your journey from CPU and software-based encoding to hardware. Maybe you can give some more insights into what really drove that. In our talking to the market, to the ecosystem, we find the usual: a lot of people just say, well, it's just purely cost, it's
(19:38):
just our operational costs. But there are some other factors as well, and I'm just curious: for you, was it just purely an economic decision, or were there some other factors that drove you to even explore hardware and then ultimately land on ASICs?
Speaker 2 (19:56):
As a software house, of course, hardware is like... we don't like appliances, we don't like them. Exactly.
Speaker 1 (20:02):
Yeah, you're
virtualized, everything's
virtualized.
So it's like, well, what are we going to do with hardware?
Speaker 2 (20:08):
Actually, like 10 years ago, we went to a show in London where a lot of vendors tried to sell their appliances to the telco industry to build CDNs, vendors like Cisco. With a stack of appliances it would cost you like a million euros to build a stack for just one PoP, and I went on stage and I said: hey guys, I have news for you. The appliance is dead, because now we have software controlling
(20:30):
everything and software edges, so forget about this. So one half of the audience was turning white, because those were the vendors, and the other half of the audience was like: okay, let's have a meeting, yeah, let's talk. But you know, things flow back and forth.
So now we're like: okay, there are some limitations in software, if you talk about scalability and cost.
(20:52):
So our arguments were not just the cost factor but also about scale, which in the end, of course, again is cost. Yeah, that's right. But to be able to do more VOD transcoding, and especially... I mean, VOD transcoding is not even that hard, because it's not linear, it doesn't have to be real time. Yeah, I mean, it can be faster
(21:13):
than real time, but not necessarily. Sure. But for live streaming, to have capacity for customers to say: I need to start a live stream right now and it has to be transcoded to, you know, four or five bit rates.
Yeah, you need to have a lot of headroom in your cloud to be able to do that. And we were like: okay, if we do that in software, we will have
(21:33):
to buy a lot of equipment which will just be eating dust 90% of the time, and it will use a lot of energy.
And we want to be as green as possible, not just with green energy, but also with a small footprint. And so we were like: if we use these cards, we will have much better scale and we don't have to invest and overinvest into
(21:56):
infrastructure for that.
You know, those few days a week when customers are peaking with a lot of concurrent live encoding streams.
Today all the internet streaming is basically H.264. As soon as you need to go to H.265, which is not that popular, or if you want to go to AV1, for instance, which we think will be a dominant encoding format in the future, you have
(22:19):
to throw a lot of CPU power against it for encoding, which basically breaks the business case for AV1.
With hardware accelerators like the ASICs, that instantly becomes a feasible business case.
Speaker 1 (22:31):
So a very interesting point that you make. We talk to many of our customers, companies who are seriously evaluating hardware, particularly ASICs, and behind closed doors
(22:51):
a lot of them do reflect that three to five years ago, if we'd approached them, they would have just flat out said: go away. Like, hardware? Good luck.
But you know, the point is, now they're very, very
(23:12):
engaged and leaning forward.
And what's really fascinating is that it's not just a temporary situation of, for example, energy costs, or, you know, okay, the CEO has mandated we need to trim 20% from the budget, so how are we going to do that?
(23:33):
Yes, those things sometimes start the conversation, but there are so many more benefits.
And, like you point out about the advanced codec support, the next-gen codec support, the industry has just gotten to a point now where, with the next generation of codecs, if you look at the complexity
(23:55):
of VVC, let's say, but AV1 as well, so VVC over HEVC, or AV1 over VP8 and VP9, you know, the VPx codecs, it is so significant.
And when you factor in, even if you're able to get, which on day one you never get, the 50% promised savings in
(24:18):
bitrate. You know, it's more like 20 or 30%, but in time the codecs get optimized, right. But even in the best-case scenario, let's say on day one you could get 50% savings, the added compute cost through the complexity of the codec negates all those savings, and in some cases then some.
(24:39):
And so it's this real conundrum that the industry is in, because on one hand, we want to always be moving to more efficient codecs and want to be pushing our bit rates
down.
Resolutions are only going up, customer quality expectations are increasing, and yet for many services and platforms they're
(25:04):
kind of stuck.
But hardware, in particular ASICs, is the breakthrough for that.
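A back-of-the-envelope calculation with made-up numbers shows the shape of that tradeoff (illustrative figures only, not measurements):

```python
# Purely illustrative: does a 30% bitrate saving from a heavier codec pay for
# its extra software-encoding compute? (Not real prices; substitute your own.)

monthly_delivery_tb = 500            # assumed egress per month
delivery_cost_per_tb = 10.0          # assumed CDN price, EUR/TB
bitrate_saving = 0.30                # assumed real-world AV1 saving vs H.264

delivery_saving = monthly_delivery_tb * delivery_cost_per_tb * bitrate_saving

h264_encode_cost = 2_000             # assumed monthly CPU encoding cost
av1_compute_multiplier = 5           # assumed extra compute for software AV1
extra_encode_cost = h264_encode_cost * (av1_compute_multiplier - 1)

print(f"Delivery saved:  {delivery_saving:,.0f} EUR/month")    # 1,500
print(f"Extra encoding:  {extra_encode_cost:,.0f} EUR/month")  # 8,000, savings wiped out
# Cheaper hardware encoding changes the second number, which is the point being made.
```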
And so, yeah, we are seeing something really phenomenal. I know of two file-based, very high-profile premium streaming services right now that are on the cusp of deploying hardware.
(25:27):
And it's in a use case where, I think, a lot of traditional streaming engineers might say: well, no, you can use CPU for that. And guess what their conclusion is: no, we can't.
Speaker 2 (25:42):
You know, if you have to encode a film once and then you can stream it to millions for years, then of course the business case for high-cost encoding is better. We have a lot of customers in niches and they need very high quality. They don't have these large audiences, so then there's no
(26:04):
business case for the encoding if you do it in software. So, that's right, there will still be a market for software encoding. I mean, the tuning you can do in software will be better.
Also, I mean, when we first tested those cards, the first thing we did was look at the quality. Is it really good?
(26:24):
I mean, if the quality is average or poor, then it's substandard, and we wanted to have high quality encoding. So we were very pleased with the quality. I mean, if you compare it to GPU encoding, that's bad compared to CPU encoding.
Speaker 1 (26:39):
But with the.
Speaker 2 (26:40):
ASICs, it's in the realm of CPU-based encoding, which is cool.
Speaker 1 (26:45):
Just to reinforce that point, because you have seen this data but perhaps our listeners have not, or not everyone has. So, what's also so important when we're talking about software versus hardware is, like you
(27:05):
said, if you have the benefit of encoding a file and then serving it literally hundreds of millions of times, there are a few platforms in the world that have that benefit. But again, pay close attention to what I just said: there are a few platforms in the world, meaning the whole world. There's like a few.
(27:26):
You know, Netflix, of course, being at the top, and there's the other ones we all know.
(27:54):
But what's super interesting is that when you go to live, when you're talking live and you're encoding at, you know, like slow or medium, or fast or faster or fastest, the various levels, on subjective quality metrics our live performance is on par with x265 medium, and that's live.
(28:21):
And there are very few services that could afford to run x265 on CPU at medium, just because of the compute. You're going to need a lot of cores to do that, and so you're not going to get the density.
So true, if you're only running a single channel, again, I
(28:42):
suppose you could say: okay, fine, I don't care, I'm just going to put a big AMD 64-core machine there and everything's good, right. But what's super interesting is that the ASIC is able to achieve x265 medium-speed quality, but live, and provide a
(29:05):
20% bit rate reduction on top of that. So not only can it reach the quality, but deliver 20% fewer bits at the same time. And is that the H.265 encoding as well in the ASICs? That's right, I'm talking about HEVC. Yeah, so I also think that's an
(29:30):
important reference point. But let's get back to talking about what you built. So what is next? This is a very interesting use case. I know about the types of projects that you're deploying it into and the regions of the world, and it absolutely makes
(29:51):
sense.
And I know you're going to do very, very well with this solution. But how are you going to grow it? How are you going to expand? What's on the horizon? Are there some other use cases, some other applications, that might either be well-known or maybe also novel, that you're targeting?
Speaker 2 (30:12):
We never foresaw that we actually would develop an appliance, because that's not our business. Our business is SaaS, right. But then this customer came to us, and we said: this is an interesting use case, and we had so much technology on the shelves which we already were using in our cloud. So basically, you know, it's a micro cloud running in an
(30:33):
appliance, that's what it is. So it didn't take us that much time to develop this, technically and as a product. So we now have this customer starting to test it. They started testing the software solution. Now they want to test the hardware solution.
I don't want to start selling it full-blown yet. I want customer feedback.
(30:54):
I want them to operate it for quite some time and then see how they like it and how we should tune it to have a good fit for them. And then we will make lists of potential customers and markets, and maybe we will sell it ourselves, or we will find a partner, like someone who already has a sales force, in
(31:14):
certain markets. Some joint marketing with you guys would be interesting. Those are the things we would be looking at to start selling this. We're not interested in selling hardware, right? We have the software stack. So we will sell it as a turnkey solution, but we will primarily focus on the software license for it.
Speaker 1 (31:35):
What parts of the
world are you primarily focused
in, or are you literally selling globally?
Speaker 2 (31:42):
Jetstream is a European company with a European focus. So our customer base is in Europe, but our audiences, of course, are around the world, because you know we have the CDNs. We can deliver all over the world. But our primary customer base is in Europe.
Speaker 1 (31:57):
You know, I think it might be interesting to talk about, from your perspective, the challenge when you talk about the video distributors, the platforms that are, let's say, a notch or two or
(32:22):
even a few more notches below, like, a Netflix of the world, or the pay TV platforms, some of the large satellite or cable or even IPTV operators. And it's interesting because there's this tension between the content licensing costs, the technology, the business model, and sometimes it's hard to get those
(32:47):
kind of proportions correct, and I've seen overinvestment in all of those areas. Right, but they're sort of like levers. You need to get them in the right position or else it's going to be really difficult to make money.
You have been in the business for a long time.
(33:07):
You have seen the technology expansion and growth, and you've seen the shifts and the changes. So do you have any insights about either where the state of streaming and the technology stack is today, or where it's going?
(33:28):
Is there anything that you'd like to share?
Speaker 2 (33:31):
Technology-wise or
business-wise, or both.
Speaker 1 (33:34):
Both is fair game.
You know, more and more, it's my encouragement to engineers and people who primarily live on the technical or the product side to begin to get an understanding of the business, because I think it makes us better engineers, it
(33:55):
allows us to build better products. And then, of course, I think there's a benefit for the business folks to have some understanding of the technology. But I think our audience is primarily going to be engineers, and so if you have insights on the business side, go for it.
Speaker 2 (34:15):
It's interesting. If you look at the business trends in the OTT market, like the Netflixes and HBOs, hardly any of them are making money, right, and that's a challenge, of course. I mean, there's a discussion going on between the telcos and the OTT companies. At Mobile World Congress, companies raised the question
(34:35):
like: why should the OTT providers not pay the telcos a little money, because they have to keep scaling up their infrastructure?
I posted something on LinkedIn a few days ago and said: yeah, but if you look at this industry, you have a lot of loss leaders, not just in the OTT space, but also in the vendors. I mean, name me a vendor that's really profitable right now.
(34:58):
Not that many are, and actually most are loss-making and burning through a lot of money. Both the OTT providers and the technology vendors are burning through a lot of money. So basically they're subsidized by investors who hope that one of those companies will eventually, you know, get better rates or better revenues, or be bought by another company.
(35:21):
That's not really healthy. It's not sustainable. I think there are too many vendors in this market space and I think their expectations have been too high. Some of those companies cannot meet the expectations set by their investors, so they're getting into trouble. That's bad, of course.
(35:43):
And another interesting observation I made was: you have all these telcos, and the telcos are in a crowded, saturated market, but they're making money.
And the OTT providers, you know, the Netflixes and HBOs, are also, I think, basically saturated. I mean, there are so many offerings and they have all the
(36:07):
audiences, and I don't think they can get more customers, as they hope. It's getting saturated, but none of them are really making money. So why, then, would the OTT companies start paying the telcos, who are actually profitable, while those guys are not?
Speaker 1 (36:20):
So that's an interesting perspective, and I'm sure at Mobile World Congress that was not that popular.
Speaker 2 (36:29):
I tend to agree with the basic premise, though. You know, if you look at these hyperscale OTT vendors like Netflix and all those guys, they have the budget to put edge caches within the telcos and to negotiate great private peering deals, and that's great for them. So for them I don't think there's a real problem; it is not a challenge. But for newcomers in the industry who
(36:51):
don't have the deep pockets and don't have the scale, you won't be able to negotiate the same deals with the telcos, you won't be able to get your edge servers in there. And that means that you will get two sides of the industry: some really large guys with great performance, great capacity and low cost, and challengers with all kinds of
(37:14):
thresholds to get into the market. And I think it touches the discussion on net neutrality: how far should the telcos be forced to also open up their networks for those guys?
Speaker 1 (37:25):
What about any
insights on the technology side?
And it doesn't necessarily have to be codecs or encoding, but is there anything that you're seeing, any trends, any requests that are coming in from these telcos and these operators, even
(37:46):
if they're tier two, tier three, or working in smaller markets?
That might be interesting.
Speaker 2 (37:50):
The markets we work for are typically medium-sized customers who need very complicated workflows, and that's what we solved with what we call our MIX solution, where you can start in an easy way and then go under the hood and start tuning and tweaking features, which you cannot do with regular video platforms. And then, if you go to the expert level, you can plug in your own transcoders, your own players, build your own
(38:13):
workflow yourself using our stack of workflow orchestration tools and streaming features. That's cool.
We get a lot of positive feedback on this, because a lot of people who enter the streaming industry are not necessarily OTT providers; it can also be like an enterprise who needs to
(38:35):
do some online video stuff, or an e-learning platform that wants to do something with video or live streaming. There are two choices in the market. Either you go to a video platform, and that's easy, you know. You sign up for not that much money, you can upload your videos and they do everything for you. You get a video player and you can publish it on your website.
(39:01):
But a lot of those companies get stuck there, because they cannot optimize their encoding quality, they cannot optimize the player, they cannot optimize the multi-CDN distribution, they don't get access to the data that they need. So they get stuck with those platforms, and the step to go to building your own stack of technologies is extreme, because
(39:21):
then you start to have to hire cloud experts, streaming experts, who go to Azure or AWS and start configuring all these modules and sticking everything together. It will take a lot of time, a lot of money to build that, and then you have your own home-grown streaming platform which does what you need.
But what about tomorrow?
What about when you want to go from H.264 to AV1 or whatever?
(39:47):
You're stuck again, because that's the problem. So that's why we also claim to have this 40% cost reduction. It's not just in traffic or transcoding costs, it's also in operational costs. It's something that people underestimate: how much time it takes to build and maintain a streaming platform and how much
(40:08):
expertise you actually need in-house to do that.
Speaker 1 (40:11):
Just because you can grab an open source library, which is super powerful, right, and I think it's a wonderful thing. You know, it's great that we have FFmpeg developed as it is. And x264 is, let's face it, an amazing encoder, it really is. And x265, great encoder.
(40:33):
So this is all wonderful.
They do have smart engineers and they do have the talent to be able
(41:00):
to build it.
So it's not that they can't, but it's that maintaining it for the life of the service, that's the part they miss. So it's one thing to build it, have it work today, have it work tomorrow, have it work next month and even for the rest of this year. It's another thing to have that same service rock solid
in 2025, you know, when maybe the user
(41:23):
base has scaled 10, 20, 30, 40x, 100x, you know, hopefully, over what it is or was today. So if you look at quality...
Speaker 2 (41:35):
I mean, most of the content on the web is like HD or full HD, but what about 4K, and what does it mean for your
Speaker 1 (41:42):
encode? What does it mean for yours? You support 8K.
Speaker 2 (41:46):
You know, that's coming one day. So hopefully... There are also two types of worlds coming together. We have broadcasting, traditional broadcasting, and we have the internet, and on the internet we are used to having very dynamic solutions. I mean, we have to change protocols, you have to change infrastructures, and what is working today may not be working
(42:09):
tomorrow, while in broadcasting people are used to building systems for life: you build it, you don't touch it. And especially the people who have this more traditional broadcast attitude of engineering infrastructures, they're having a really hard time understanding the dynamics of
(42:30):
the internet. Because, as I said, tomorrow we can have... I saw that Safari from Apple, the latest beta, would actually introduce AV1. So then overnight, this industry can change. And then how will you change your encoding? And have you thought about the effects on your CDN
(42:52):
and origins if you start, you know, if you migrate from H.264 to AV1? Probably not, and then it will break, or your logging will break, or your monitoring. So you have to think about more future-proof things.
And, by the way, I also saw that Apple pulled AV1 out of the latest beta release, so I'm really curious what's going to
(43:12):
happen there.
Speaker 1 (43:13):
Well, Steph, this has been a really amazing discussion, and I want to thank you for joining Voices of Video and sharing all of your insights and what you built with us and with the audience.
So thank you.
Thank you, Mark. And why don't you tell everybody where they
(43:34):
can go to learn more about?
Speaker 2 (43:36):
Jetstream.
Oh, it's simple, it's jet-stream.com.
Well, thank you.
Speaker 1 (43:44):
This episode of
Voices of Video is brought to
you by NetInt Technologies.
If you are looking for cutting-edge video encoding solutions, check out NetInt's products at netint.com.