All Episodes

November 6, 2024 42 mins

Transcoding with ASICs has gotten a lot of attention lately, from YouTube's ARGOS VPU to Meta's new ASIC, which prompted noted compressionist David Ronca to comment that there are only two types of companies: "those that are using Video Processing ASICs in their workflows, and those that will."

Based in Sweden, RealSprint provides live-streaming solutions to companies around the globe. When choosing a transcoder for the AV1 capabilities of its Vindral Live CDN, RealSprint considered software transcoding, as well as GPU and CPU acceleration and ASICs. Not surprisingly, RealSprint made the same decision as YouTube and Meta and chose ASICs.

Listen to Jan Ozer from the Streaming Learning Center and Daniel Alinder from RealSprint as they discuss the role of hardware encoding and ASICs in enabling ultra-low-latency 4K streaming with the AV1 codec.

Daniel takes us on a journey through the creative forces driving RealSprint and introduces Vindral, their cutting-edge CDN-type product designed for low latency and high-quality streaming. You'll learn about the innovative ecosystem RealSprint thrives in and how it leads to groundbreaking solutions in video and broadcasting.

We unravel the technical complexities of video encoding, focusing on the critical choice between ASICs and GPUs, especially in scenarios demanding real-time performance. Daniel shares how these decisions are shaped by factors like quality, density, cost, and the global chip shortage. Tune in to understand why AV1 is emerging as the codec of choice for modern streaming solutions, offering a perfect blend of efficiency and performance, particularly with the support of technologies like NETINT ASIC.

Join us as we delve into the debate on low latency streaming in the video industry. Daniel and I explore the evolving audience expectations and the potential of next-generation technologies to transform the landscape. We navigate through the complexities of synchronization in broadcasts and the economic considerations of adopting low-latency solutions across different regions. By the end, you’ll see why the dismissal of low latency might soon become a relic of the past as the industry gears up for inevitable change.

Stay tuned for more in-depth insights on video technology, trends, and practical applications. Subscribe to Voices of Video: Inside the Tech for exclusive, hands-on knowledge from the experts. For more resources, visit Voices of Video.


Episode Transcript

Available transcripts are automatically generated. Complete accuracy is not guaranteed.
Speaker 1 (00:07):
Voices of Video.
Voices of Video.
Voices of Video.
Voices of Video.

Speaker 2 (00:17):
Welcome everyone.
I'm Jan Ozer from NETINT.
I'm here today with Daniel Alinder from RealSprint, and we're going to talk about his new CDN-type product, which is called Vindral.
Hey, Daniel, thanks for coming.
Why don't you tell us a little bit about RealSprint?

Speaker 1 (00:33):
Hey, thanks for having me.
I would love to tell you about RealSprint.
You're going to have to tell me when to stop, because I might go on for too long.
You just let me know.
I mean, RealSprint.
We're a Swedish company.
We are based in northern Sweden, which is kind of a great place

(00:55):
to be running a tech company.
Honestly, we're in a university town.
Any time after September it gets really dark outside for most parts of the day, so people generally try to find things to

(01:16):
do inside.
So it's a good place to have a tech business, because you'll have people spending a lot of time in front of their screens and creating things.
So: a heavily culture-focused team of around 30 people, mostly here in northern Sweden.
We have some in Stockholm and also in the US.
So the company itself started as a really small

(01:41):
team that did not have the end game figured out yet, around, like, was it 10 years ago or something. But they did want to do something around video, around broadcasting and streaming.
So, yeah, from there it's grown, and today we're 30 people.

Well, let's talk about

(02:02):
Vindral.

Speaker 2 (02:03):
Uh, at a high level, what is Vindral?
And I feel like I'm mangling the name.
How would you pronounce it?

Speaker 1 (02:10):
No, you're doing it great. Vindral.
That's correct.
That's correct.
It's actually a product family.
There is a live CDN, as you mentioned, and there's also a video compositing software.
So what we're going to be talking about today is the Live CDN, which has been live in earlier generations and now

(02:36):
finally in, I think it's around five or six years that it's been running 24-7.
And the product was born because we got questions from our clients around latency and quality. Like, basically, why do

(02:57):
I have to choose if I want low latency or if I want high quality?
Because there are solutions on both ends of that spectrum.
But when we got introduced to the problem, there wasn't really a good solution.
We started looking into real-time technologies, like

(03:18):
WebRTC, for example, and quickly found (and it still is the case) that it's not really suitable if you want high quality. It's amazing in terms of latency.
But in the client's reality, you can't always go all in on only one aspect of a solution.

(03:42):
You need something that's balanced.

Speaker 2 (03:45):
Draw us a block diagram.
So you've got your encoder, you've got your CDN, you've got software.
So draw that for the people listening to us.

Speaker 1 (03:53):
Yeah, definitely.
So, I mean, we can take a typical client.
Maybe they're in entertainment or they're in gaming.
So they have their content and they want to broadcast that to a global audience. What do they do?
Generally, the most standard way of using our CDN is they

(04:14):
ingest one signal to our endpoint (and there are several ways of ingesting, several of those transfer protocols), and the first thing that happens on our end is we create the ABR ladder.
We transcode to all the qualities that are needed, naturally, because network conditions are different

(04:36):
in different markets, and even in places that are well connected.
Even just the home Wi-Fi can be so bad at times.
Honestly, there's a lot of jitter and latency and things going on there.
So, after the ABR ladder is created, the next box fans out

(04:59):
to the places in the world where there are potential viewers, and from there we also have edge software as one component of this.
And, lastly, the signal is received by the player that's instanced on the device.
So that's basically what you were asking, right?
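For listeners who want the block diagram as something concrete, here is a minimal sketch of the pipeline Daniel describes: ingest, ABR transcode, fan-out, edge, player. The ladder rungs and stage names are illustrative assumptions, not Vindral's actual configuration:

```python
from dataclasses import dataclass

@dataclass
class Rung:
    """One rung of the ABR ladder produced at ingest (illustrative only)."""
    name: str
    width: int
    height: int
    bitrate_kbps: int

# Hypothetical ladder; real rungs depend on the client's content and markets.
LADDER = [
    Rung("4k",    3840, 2160, 12000),
    Rung("1080p", 1920, 1080,  4500),
    Rung("720p",  1280,  720,  2500),
    Rung("360p",   640,  360,   600),  # for ~600 kbps last-mile markets
]

def pipeline_stages(rungs):
    """The stages a single ingested signal passes through, per the description."""
    return [
        "ingest (one signal)",
        f"transcode to {len(rungs)} ABR rungs",
        "fan out to regions with potential viewers",
        "edge software",
        "player instanced on the device",
    ]
```

The point of the sketch is the shape of the flow: one signal in, one ladder built centrally, then fan-out toward viewers.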

Speaker 2 (05:22):
Yeah.
So you've got an encoder in the middle of things creating the encoding ladder.
Then you've got the CDN distributing.
What about the software that you've contributed?
How does that work?
So I log into some kind of portal, I would guess, and then administrate through there?

Speaker 1 (05:40):
Exactly. I mean, take a typical client.
We have clients in iGaming, for example.
They're running 50 or 100 channels, and they log in and see their usage and all of the channel information that they

(06:06):
would need, because it's a very important part, of course, of any mature system that the client understands what's going on.
And for that reason as well, I mean, one of the topics for today: when it comes to encoding, I would say that the encoding is

(06:27):
particularly important for us to solve, because we have loads of channels running 24-7.
If your typical client is broadcasting for 20 minutes a month or something like that, then of course the encoding load is much lower. In our case,

(06:58):
yes, we do have those types, but many of our clients are heavy users and they own a lot of content rights, and therefore the encoding part is several hundreds of terabytes ingested monthly, only one quality for each stream, on the ingest side.

Speaker 2 (07:15):
Okay, so you're encoding ABR.
Which codecs are you supporting, and which endpoints are you supporting?

Speaker 1 (07:21):
Yeah, so codec-wise, we've actually chosen to... I mean, everybody does H.264.
Of course, that's the standard when it comes to live streaming with low latency, and we have recently added AV1 as well, which was something we announced as a world first.

(07:42):
I mean, we weren't world first with AV1, but we were world first with AV1 at what many would call real time.
We call it low latency.
So we chose to add that because, in short, there are pointers in the market pointing to AV1.

Speaker 2 (08:03):
Okay, and which devices are you targeting?
Is it TVs, smart TVs, mobile, the whole gamut?

Speaker 1 (08:12):
Yeah, I would say the whole gamut.
Actually, that list of devices is steadily growing.
So I'm trying to think if there are any devices that we don't support, because as long as

(08:34):
it's using the internet, we're delivering to it.
So I would say, I mean, any desktop browser, any mobile browser, including iOS as well, which is basically the hardest one.
If you're delivering to iOS browsers, they're all running iOS Safari, and we're getting the same performance on

(08:56):
iOS Safari.
And then, I mean, Apple TV, Google Chromecast, even Samsung and LG TVs and Android TVs.
There's a plethora of different devices that our clients require us to support.

4K, 1080p, HDR, SDR?

(09:17):
Yes, all of these things.
The answer is yes.
Well, one very important thing for us is, of course, to prove the point that you get quality while getting low latency. Take sports:

(09:41):
their viewers are used to watching this on their television, maybe a 77-inch or 85-inch TV.
You don't want that user to get a 720p stream, so you need it to be high quality.
And it is kind of a point that we want to make, because we

(10:01):
can do it as well. So this type of a client would typically say, okay, there's a configurable latency, so maybe they pick a second of latency, or 800 milliseconds, and they want 4K to be maintained at that latency.
That's one of the use cases where we shine, practically.

(10:23):
There's also a huge market for lower qualities as well, where that's important.
You mentioned ABR ladders; yes. I mean, there are markets where you get 600 kilobits per second on the last mile.
You have to solve for that as well.

Speaker 2 (10:37):
So your system is the delivery side and the encoding side.
Which types of encoders did you consider when you, you know, chose the encoder to fit into Vindral?

Speaker 1 (10:49):
There's actually two steps to consider there, just to avoid any misconceptions.
I mean, the client, often, depending on whether we're doing an on-prem or a cloud solution for them, the client often has their own encoders.
I mean, many of our clients, they're using like Elemental or something just to push the

(11:10):
material to us.
But on the transcoding, where we generate the ladder (unless we're passing all qualities through, which is also a possibility), there are, of course, different directions to go, and they all fit in different scenarios.

(11:31):
So, for example, if you take an Intel CPU and you use software to encode, that is a viable option in some scenarios, not all scenarios. And there are NVIDIA GPUs, for example, which

(11:57):
you can use in some scenarios, because there are many factors coming into play when making that decision.
I would say the highest priority of all is something that our industry generally does badly: maintaining business viability.
So you want to make sure that any client that is using your

(12:20):
system can pay and can make their business work.
Now, if we have channels that are running 24-7, as we do, and it's in a region where it's not impossible to allocate, like, bare metal or co-location space, then that is

(12:41):
a fantastic option in many ways.
So those are the three different options we've looked into: CPU-based software encoding, GPU-based, and ASICs.

Speaker 2 (12:54):
So how do you differentiate?
I mean, you talked about software being a good option in some instances.
When is it not a good option?

Speaker 1 (13:02):
I mean, no option is good or bad in a sense, but if you compare them, both the GPU and the ASIC outperform the software encoding, except when you need to spin it up, spin it down, and move things around.
You need it to be flexible, which is, quite honestly, in the

(13:32):
lower, what do you say, the lower-revenue parts of the market.
When it comes to the big broadcasters, the large rights holders, the use case is heavier and you get many channels, you get a lot of usage over time; then the GPU and especially the ASIC

(13:54):
make a lot of sense.

Speaker 2 (13:56):
Okay, and you're talking there about density.
What is the quality picture?
A lot of people think that software quality is going to be better than ASICs and GPUs.

Speaker 1 (14:07):
How do they compare?
Well, it might be, in some instances. We found that quality when using ASICs is fantastic, and it's all depending on what you want to do.
Because we need to understand: we're talking about low latency

(14:27):
here. We don't have the option of two-pass encoding or anything like that.
Everything needs to work in real time.
So our requirement on encoding is: it takes a frame to encode, and that's all the time that you get.
But, I mean, you mentioned density.
There's a lot of other things coming into play.
Quality, definitely.

(14:52):
Also, I mean, even if you're looking at ASICs, you're comparing that to GPUs.
The past two years have been like: okay, there's a chip shortage, what can I get my hands on?
That's even a deciding factor in some cases, where we've had a

(15:13):
client banging on the door and they want this to go live.
But going back to the density part: I mean, that is a game changer, because the ASIC is unmatched in terms of number of streams per rack unit.
If you just measure that KPI and you're willing to do the job

(15:39):
of building your CDN in co-location spaces, which not everybody is, then, I mean, you have to ask yourself who's going to manage this.
You don't want bloat when you're managing this type of a solution.
If you have thousands of channels running, then, I

(15:59):
mean, cost is one thing when it comes to not having to take up a lot of rack space, but you also don't want it to bloat too much.

Speaker 2 (16:07):
I mean, how formal an analysis did you make in choosing, say, between the two hardware alternatives?
Did you bring it down to cost per stream and power per stream, and did you do any of that math?
Or how did you make that decision between those two options?

Speaker 1 (16:26):
Well, in a way, yes. But on that particular metric, I would just say we looked at the two options and said, well, this is at a tenth of the cost. So I'm not going to have to give you the exact number, because I know it's so much smaller.
Of course, we're well aware of what costs

(16:52):
are involved.
But the cost per stream, it all depends on profiles, et cetera.
Just comparing them: yes, naturally, we've started encoding streams, especially in AV1, and you look at what the actual performance is, how much load there is, what's happening on the cards, and

(17:13):
how much work you can put on them before they start giving in.
But then again, there's such a big difference.
So there's one thing to mention here.
I mean, take, for example, a GPU. Great piece of hardware, but

(17:35):
it's also kind of like buying a car for the sound system,

(17:56):
because if I'm buying an NVIDIA GPU to encode video, then I'm not even using the actual rendering capabilities, the biggest job that the GPU typically is built for.
So that's one of the comparisons to make, of course.

Speaker 2 (18:10):
What about the power side?
How important is power consumption to you and your customers?

Speaker 1 (18:17):
It is very important.
I mean, I remember we had a conversation. When was that, IBC?
It was September, and even since then, I mean, if you look at the energy crisis and how things are evolving, the typical offer you'll be getting from the data center is: we're going

(18:39):
to charge you 2x the electrical bill. And that's never even been something that's been charged, because they didn't even bother.
Now we're seeing the first invoices coming in where the electrical bill is actually a part.
I mean, if you look at Germany, the energy price just peaked in August.

(19:03):
It was at 0.7 euros per kilowatt hour.
That's amazing, yeah, that's amazing.
And I mean, Germany: you have Frankfurt, which is one of the major exchanges.
That is extremely important.
If you want performance streaming, you're going to have

(19:26):
to have something in Frankfurt.
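To put those numbers in perspective, here is a back-of-the-envelope calculation using the 0.7 EUR/kWh peak and the 2x data-center markup Daniel mentions. The 500 W server draw is a hypothetical figure for illustration only:

```python
def annual_power_cost_eur(watts, eur_per_kwh, dc_multiplier=2.0):
    """Energy cost of one server running 24/7 for a year.

    dc_multiplier models a data center billing 2x the electrical bill,
    as described in the conversation.
    """
    kwh_per_year = watts / 1000 * 24 * 365  # convert W to kWh over a year
    return kwh_per_year * eur_per_kwh * dc_multiplier

# Hypothetical 500 W server at the 0.7 EUR/kWh August peak:
cost = annual_power_cost_eur(500, 0.70)  # roughly 6,000 EUR per year
```

At prices like that, halving the rack footprint (or the watts per stream) translates directly into thousands of euros per server per year.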
So that's one part of it, if you don't mind me just going on here, because there's another part of it as well, which is, of course, the environmental aspect.
One thing is the bill that you're getting.
The other thing is the bill we're leaving to our children.
So it's kind of contradictory, because many of our clients, they make travel unnecessary. There's a Norwegian company that we're working with that is doing remote inspections of ship hulls. They were the first company in the world to do that, and instead of flying in an inspector and flying in the ship owner and two divers to the

(20:09):
location, there's only one operator of a little underwater drone on the location, and everybody else is just connected.
So that's obviously a good thing for the environment. But what are we doing about our own footprint? That is also, of course, something that we need to consider.
So, except for the price, there's also, I mean, winter is

(20:32):
coming, there's an energy crisis.

Speaker 2 (20:37):
So let's switch gears.
Let's talk about AV1.
Why did you decide to lead with AV1?

Speaker 1 (20:43):
That's a really good question.
AV1, I would say there are several reasons why we decided to lead with it.
It is very compelling as soon as you can do it in real time, because we had to wait for somebody to really make it viable, which we found with the NETINT ASIC: viable at

(21:07):
high quality and with a latency and reliability that we could use, and also, of course, with throughput, so we don't have to

(21:28):
buy too much hardware to get it working.
But what we're seeing are markers that our clients are going to want AV1, and there are several reasons why that is the case. One of which is, of course, it's license-free.
So if you're a content owner, especially a content owner with a large crowd, so you have many subscribers to your

(21:50):
content, that's a game changer.
I mean, the cost that you have for licensing a codec can grow to become a not-insignificant part of your business.
I mean, even look at what's happening with FAST, for example: free ad-supported television.

(22:12):
There, you're trying to get even more viewers and you have lower margins, and what you're doing is actually creating eyeball minutes. And if you have a codec that carries license costs, that's a bit of an issue.
It's better if it's free.

Speaker 2 (22:33):
Is this what you're hearing from your customers, or is this what you're assuming they're thinking about?

Speaker 1 (22:37):
That's what we're hearing from our customers, yes.
That's why we started implementing it. Because, I mean, for us there's also the bandwidth-to-quality aspect, which is great.
But why we believe in it is because that's what we're

(22:58):
hearing. And I'm going to say my belief is that it will explode in 2023.
Because, for example, if you look at just what happened one month ago: Google made hardware decoding mandatory for Android 14 devices.
That's both phones and tablets. And when even the

(23:21):
phone, my little Samsung S22, the EU version here, supports it for decoding, it opens up so many possibilities.
So we were actually maybe not expecting to get business on it quite yet, but we are, which, I mean, I'm happy for that.

(23:41):
But there are already clients reaching out because of the licensing aspect, and also, some of them are transmitting petabytes a month. If you can bring down the bandwidth and retain the quality, that's a good deal.

Speaker 2 (23:58):
You mentioned before that your systems allow the user to dial in the latency and the quality.
Could you explain how that works?

Speaker 1 (24:05):
We're going to have to make a distinction between the user and the broadcaster.
So, our client is the broadcaster that owns the content, and they can pick the latency.
The Vindral Live CDN isn't on a fetch-your-file basis.

(24:25):
The way it works is: we're going to push the stream to you, and you're going to play it out, and this is how much you're going to buffer.
So once you have that set up (and, of course, a lot of sync algorithms and things like that at work), then the stream is not really allowed to drift.
A typical use case: take live auctions, for

(24:48):
example. The typical setup for live auctions is 1080p, and you want below one second of latency because people are bidding. And there are also people bidding in the actual auction house, so there's the fairness aspect of it as well.
What we typically see is they configure maybe a 700

(25:12):
millisecond buffer, and even that smaller buffer makes such a huge difference.
What we see in our metrics is that basically 99% of the viewers are getting the highest-quality stream across all markets. So that's a huge deal.
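A rough sketch of how a fixed-buffer, no-drift player can work: the player nudges playback speed so the buffer converges on the configured target (say, 700 ms). This is a common low-latency player technique, not necessarily Vindral's actual sync algorithm:

```python
def playback_rate(buffer_ms, target_ms, max_adjust=0.05):
    """Return a playback-speed multiplier that pulls the buffer back
    toward the broadcaster-configured target so latency cannot drift.

    Proportional correction, clamped to +/-5% so the speed change
    stays imperceptible to the viewer. Illustrative sketch only.
    """
    error = (buffer_ms - target_ms) / target_ms
    adjust = max(-max_adjust, min(max_adjust, error))
    return 1.0 + adjust

# Buffer on target: play at normal speed.
# Buffer too full (latency creeping up): play slightly faster.
# Buffer too empty: play slightly slower to refill it.
rates = [playback_rate(b, 700) for b in (700, 1400, 350)]
```

Every player converging on the same target is also what gives the synchronization across viewers that Daniel mentions.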

Speaker 2 (25:33):
How much does the quality drop off?
I mean, what's the lowest latency you support, and how much does the quality drop off at that latency as compared to one or two seconds?

Speaker 1 (25:42):
I would say that the lowest that we would maybe recommend somebody to use our system for is 500 milliseconds.
So that would be about 250 milliseconds slower than a WebRTC-based, like a real-time, solution.
And why I say that is because, other than that, I see no

(26:04):
reason to use our approach.
Like, if you don't want a buffer, then maybe it's better to use, or, I mean, not better, but you might just as well use something else.
So I would say we don't have many clients trying that out. I think 500

(26:27):
milliseconds is the lowest somebody has set, and they've been like, this is so quick, we don't need anything more. And it retains 4K at that latency.

Speaker 2 (26:44):
How does the pitch work against WebRTC?
If I'm a potential customer of yours and you come in and talk about your system compared to WebRTC, what are the pros and cons of each?

Speaker 1 (26:51):
I'm going to do this as non-salesy as I can.
I mean, I don't want to be sitting here (it's a webinar), I don't want to be sitting here just selling the things that we do.

Speaker 2 (27:00):
So I'm going to talk about the pros of...
Yeah, it's an interesting technology decision, and I'm not looking for you to sell the platform as much as, you know, because I know that WebRTC is going to be potentially lower latency, but it might only be one stream.
It may not come with captioning, it may not do ABR. So talk about that type of stuff, because you're playing in

(27:22):
their space, and it's interesting to hear, technology-wise, how you differentiate.

Speaker 1 (27:31):
I would say, from a perspective of when you should be using which: if you need to have a two-way voice conversation, you should use WebRTC.
That I'm sure of, because there are actually even studies that have been made: if you bring the latency up above, I think it's 200 milliseconds, a

(27:54):
conversation starts feeling awkward.
If you have half a second, it is possible, but it's not good.
So if that's an ultimate requirement, then WebRTC all day long.
Now, where it differs... they're actually very similar.

(28:15):
The main difference I would point out is that we have added this buffer that the platform owner can set, so that when the player is instanced, it's at that buffer level, and WebRTC currently does not support that.
And I will say, I

(28:36):
mean, even if it did, we might even implement that as an option.
I feel that it might go that way at some point.
Today it's not.
On the topic of differences, then, I would say: if 700 or 600

(28:59):
milliseconds of latency is good for you and quality is still important, then you should be using a buffer and using our solution.
And, as you mentioned, when you're considering different vendors, the feature

(29:21):
set, what you're actually getting in the package, there are huge differences.
Some vendors, maybe on their lower-tier products, don't even include ABR, things like that, where it's kind of, yeah, you should definitely be using ABR.

Speaker 2 (29:34):
What's the longest latency you see people dialing in?

Speaker 1 (29:37):
You know, you talked about the shortest. Yeah, we've actually had one use case in Hong Kong where they chose to set the latency at 3.7 seconds.
If I remember it correctly, that was because the television broadcast was at 3.7 seconds. Because that's the other thing:

(30:02):
we talk a lot about latency, latency is a hot topic, but honestly, many of our clients value the synchronization even above latency. Not all clients, but some of them.
If you have a game show where you want to react to the chat

(30:23):
and have some sort of interactivity, maybe you have like 1.5 seconds.
That's not a big issue, and at 1.5 seconds of latency you will get a little bit more stability, naturally, since you're increasing the buffer.
So some of our clients have chosen to do that. But around three and a half, that's actually the only client we've had that has done that

(30:46):
before. But I think there could be more in the future, especially in sports, because if, like, the satellite broadcast is at seven seconds of latency, the thing is, we can match that.
We can match it to within hundreds of milliseconds.

Speaker 2 (31:04):
And the advantage of higher latency is going to be stream stability and quality.
What's the quality differential going to be?

Speaker 1 (31:12):
Definitely. But I would say, as soon as you're above even one second, there are diminishing returns.
It's not going to be like it unlocks this whole... yeah, in extreme markets it might, but I would say,

(31:34):
okay, if you're going above two seconds, you're kind of done.
You don't have to go higher than that.
At least our clients have not found that they need to, and for them the markets are basically from East Asia to South America and South Africa, because we've expanded our CDN into those

(31:58):
parts.

Speaker 2 (32:03):
So you've spoken a couple of times about where you install your equipment, and you're talking about co-locating and things like that.
What does your typical server look like?
How many encoders are you putting in it?
What type of density are you expecting from that?

Speaker 1 (32:14):
Well, that's going to be different based on what we... like, ASIC cards, for example: we can install several of those into each server, and every ASIC card, depending on profile, I think we can do... I'm not going to do the guessing.
I'm going to have to get back to that if I'm sharing numbers.

Speaker 2 (32:36):
Well, just tell us in general.

Speaker 1 (32:38):
Yes. So, in general, it would be something like: one server can do 10 times as many streams if you're using the ASIC than if you're using GPUs, like NVIDIA, for example.
Maybe on one you can do 20 streams, maybe something like

(33:00):
that, and I'm factoring in the ABR ladders into that.

Speaker 2 (33:05):
So, 20 ladders or 20 streams? 20 ladders is what I mean.

Speaker 1 (33:09):
I wasn't going to do this, because my tech guys are going to tell me that I was wrong.
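Taking Daniel's rough figures at face value (about 20 ABR ladders per ASIC-equipped server, and roughly 10x the density of a GPU build), the rack-space math looks something like this. The channel count and the per-server GPU figure are illustrative assumptions, not measured numbers:

```python
def servers_needed(channels, ladders_per_server):
    """Servers required to transcode a given number of 24/7 channels,
    where each channel needs one full ABR ladder."""
    return -(-channels // ladders_per_server)  # ceiling division

# Hypothetical fleet of 1000 always-on channels:
asic_servers = servers_needed(1000, 20)  # ~20 ladders/server per Daniel
gpu_servers = servers_needed(1000, 2)    # assumed from the "10x" claim
```

For an always-on fleet, that 10x gap compounds across rack space, power, and the co-location management overhead discussed earlier.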

Speaker 2 (33:15):
Let me look for a couple of questions from the audience.
We've got one, and this is a question about your system: can you create an ABR ladder with StatMux?
Is that something you would know offhand?

Speaker 1 (33:29):
We don't have a StatMux integration.
I'm actually not very well versed in StatMux.
It's something that we should be able to do.
If I'm not mistaken, it's actually using HEVC. Is that correct?

Speaker 2 (33:51):
I don't know.

Speaker 1 (33:52):
Which is, I mean, if it is, it's definitely something that we could do.
We have not implemented it right now, but if there's a market demand for it: we say that on our egress side we are codec agnostic. So, in a sense, if there is a demand, yes, we'll solve it. That's no problem for us.

Speaker 2 (34:14):
Another question is what is the cost of low latency?

Speaker 1 (34:18):
Okay, sorry, that's a very general question.

Speaker 2 (34:22):
If I decide to go with the smallest setting, what is that going to cost me?
I guess there's going to be a quality answer and a stability answer.
Is there a hard economic answer?

Speaker 1 (34:33):
My hope is that there shouldn't be a cost difference, in some cases, depending on regions.
So I'll be honest here. Depending on regions, the way we operate, we've chosen to, as I mentioned before... it's about the design paradigm of the product that you've created.
We have competitors that are going with one partner.

(34:55):
They've picked Cloud Vendor X and they're running everything in their cloud, and then what they can do is what they can do.
They've made a deal with their cloud vendor, and that's the limit.
Now, we get requests. For example, we had an AV1 request from Greece, huge egress

(35:18):
that I was blown away by, that that case actually existed, that big an internet TV channel.
And they mentioned their pricing, because we asked them: okay, so what's your HLS pricing? We want this to make sense for you. They were asking us because they wanted to save costs by cutting their traffic

(35:42):
by using AV1.
So what we actually did with that request is we went out to our partners and our vendors and asked them: can you help us match this? And we did.
So I'm going to say, from a business perspective: yes, it might in some cases cost more, but there is also an image of high cost that

(36:04):
plagues the low-latency business, and that is because many of these companies have not considered their power consumption, their form factors, actually being willing to take CapEx investments instead of just running in the cloud and paying as you go. Many of those things we've chosen to put the time into, so that there will not be that big a difference.

(36:30):
Take, for example, one of our bigger partners, Tata
Communications. They're running our
software stack in their environments to run their VDN,
and it's at cost parity.
So that's something that should always be the aim.
Then I'm not going to say it's always going to be like that,

(36:51):
but that's the short answer when you're talking about the
business implications.
I think we're often getting the requests where the potential
client thinks that it's going to be a very high
cost, and then they find that, well, this makes sense, we can

(37:13):
build a business on it.

Speaker 2 (37:14):
Interesting.
Are you seeing companies moving away from the cloud towards
creating their own co-located servers with encoders and
producing that way, as opposed to paying cents per minute to
different cloud providers?

Speaker 1 (37:29):
I would say I'm seeing the opposite, and too
much of it. We're doing both, just to be clear.
I think the way to go is hybrid, because some clients
are going to be broadcasting 20 minutes a month.
Cloud is awesome for that.
You just spin it up when you need it and you kill it when
it's done, but that's not always going to cut it.

(37:51):
But if you're asking me what motion I'm seeing in the market,
it's more and more of these companies that are deploying
across one cloud, and that's where it resides.
There's also the type of offering that you can instance
yourself in third-party clouds, which is also an option, but

(38:14):
again, it's the design choice that it's a cloud service that
uses underlying cloud functions.
It's a shame that it's not more of both.
It creates an opportunity for us, though. Maybe I shouldn't be
saying this.

Speaker 2 (38:33):
Finishing up.
What are the big trends that you're chasing for 2023 and
beyond?
What are you seeing?
What are the forces that are going to impact your business,
the new features you're going to be picking up?
What are the big technology directions you're seeing?

Speaker 1 (38:50):
I mean, for us, on our roadmap:
we have been working hard on our partner strategy and we've
been seeing higher demand for white-label solutions, which is
what we're working on with some partners.
We've done a few of those installs and that's where we are
putting a lot of effort, because we're running our own

(39:11):
CDN, but we can also enable others to do it, even as
a managed service.
Then you have these telcos.
Some of them have maybe had an HLS offering since before, and
they're sitting on tons of equipment and fiber.
So that's one thing. But I mean, if we're making predictions, I

(39:32):
would say two things are worth a mention.
I would expect the sports betting market,
especially in the US, to explode, and that's something
that we're definitely keeping our eyes on.
And maybe live shopping becomes a thing outside of China. It is, I

(39:55):
mean, many of the big players, for example the big
retailers and even financial companies, are working
on their own offerings in live shopping.
But if I can add something to the wish list, I would say I

(40:22):
have a wish for the coming few years. I don't know if I've told
you about the dinosaurs agreement? No? Okay, so it's
comparable to a gentleman's agreement.
This might be provocative to some, and I get
that, it's complicated in many cases, but there is, sort of,
among some of the bigger players and also among independent

(40:46):
consultants that have different stakes, this question they're
asking: do we really, really need low latency, or do
we really, really need synchronization?
And while it's a valid question, I get it, it's
kind of also a self-fulfilling thing, because as long as the

(41:08):
bigger brands are not creating the experience that the audience
is waiting for them to create, nobody's going to have to move.
So what I'm calling the dinosaurs here is they're
holding on to the thing that they've always been doing and
they're optimizing that, but not moving on to the next
generation.
And the problem they're going to be facing, hopefully, is that

(41:31):
when it reaches critical mass, the viewers are going to start
expecting it, and that's when things might start changing.
So I mean, I totally understand that there are many workflow
considerations.
Of course, there are tech legacy considerations, cost
considerations, and different aspects when it comes to scaling.

(41:53):
But saying that you don't need low latency, that's a bit of an
excuse, I'd say.

Speaker 2 (41:59):
We're out of time.
I appreciate you coming on board and sharing all this
information with us, and good luck with the service going
forward.

Speaker 1 (42:08):
Thank you.
Thank you.
This episode of Voices of Video is brought to you by NetInt
Technologies.
If you are looking for cutting-edge video encoding
solutions, check out NetInt's products at netint.com.