Episode Transcript
Available transcripts are automatically generated. Complete accuracy is not guaranteed.
Speaker 1 (00:07):
Voices of Video. Welcome back to this special edition of Voices of Video.
I am so excited to be doing these short interview segments with a number of our partners that are going to be featured
(00:30):
both in our booth as well as in other various settings around NAB.
Today I am meeting with ScaleStream. So, Dominic, first of all, thank you for joining Voices of Video.
Welcome.
Speaker 2 (00:46):
Yeah, thank you.
Glad to join the session.
Speaker 1 (00:49):
Yeah, absolutely.
You know, we were talking before we hit record, and there's so much innovation happening in the industry, and our two companies are doing some really great work together.
So, first of all, why don't you introduce yourself, tell us about ScaleStream, and then let's dive into a conversation
(01:11):
about how we're working together.
Speaker 2 (01:14):
Yeah, definitely.
My name is Dominik Fosters from ScaleStream.
I'm responsible for business development and sales, although I have quite a large technical background, so that's why I'm also involved on the sidelines in benchmark tests with NetInt and so on.
In the beginning I was also involved in proofs of concept and so on, but
(01:39):
lately I've mainly focused on business development and sales, especially since the company ScaleStream has grown quite a lot, with more people.
Now everybody has a key focus in their job, and mine is business development and sales.
Perhaps a short update for the people who don't know
(02:01):
ScaleStream yet: we are based in Sweden.
The company was founded in 2017.
Initially, our origin/repackager was our main product.
This is a product that's developed completely from scratch, so we are not using any open-source components glued
(02:21):
together with scripts.
Everything is built from scratch with three things in mind: performance, ease of use and flexibility. Later on, we also added extra products.
As you can see on the slide, we have a Shield, a CDN, i2t image-to-text subtitle conversion, and lately
(02:41):
transcoding.
And this is why we are talking: the NetInt cards allowed us to keep the performance high, and performance is quite high on our list, along with sustainability.
On the origin, for example, we are ten times more performant compared to the competition. At some
(03:03):
customers we replaced 40 servers with just two, and that's what we want to achieve with transcoding as well.
Speaker 1 (03:11):
Yeah, it's amazing. 40 servers, and you replaced all those with only two.
That's a lot of efficiency. That's incredible.
I don't know anybody who isn't interested in two things: cost reduction, and usually massive cost reduction.
(03:31):
And then it goes hand in hand with energy reduction, right? Absolutely.
Speaker 2 (03:36):
Yeah, yeah. One of our customers did a case study by replacing that many servers, and they saved around, was it 85 or 95K, yearly, mainly on power supply and operational costs.
So sustainability is really high on our agenda, and we can
(04:00):
talk about it in more detail.
But we believe, we're convinced, that we can do the same on the transcoding side as well, together with NetInt.
Speaker 1 (04:09):
Yeah, absolutely. Well, thank you for that introduction and overview.
I'm actually really interested. And I think, where I'd like to start, I know you have a couple of slides prepared.
Listeners, don't worry, we're not going to bore you with a PowerPoint, but I think the slides help
(04:31):
frame up the information nicely.
But I would like to start with this: there are some very interesting trends happening, and one of them is that the function of the transcoder is transforming. That
(04:52):
might be the best way to put it.
It used to be that a transcoder was pretty simple: you took a file in, you took a bitstream in, and you output some assortment of resolutions. And then a product like your original product, the packager, would take that, package it up and make it available for streaming, right?
(05:13):
Now it seems like a lot of these functions are merging, or sort of blending together, such that the transcoder is almost as much a signal router as it is literally a file transformer. So
(05:36):
maybe you can give us the perspective from your customer base. How do you guys think about this? You have an origin product, you have your Shield, you've got the packager. That's all part of the transcoding solution.
Speaker 2 (06:07):
It's a kind of drop-in component on the origin, so it's just plug and play.
You install the right microservice and you're ready to go.
It's fully integrated in the existing product, in the UI.
It does auto-scaling in the cloud.
It distributes the load among multiple servers, if you have
(06:27):
multiple servers.
So that's quite a good add-on to the product.
And, as you mentioned, in the past it was, say, offline VOD transcoding. Of course we support offline VOD transcoding, but what we saw from our customers is that cost reduction is very important, more important than ever, I think.
(06:50):
And yeah, therefore we also added some new products in the transcoding space. That's not the only one, but, for example, just-in-time transcoding for VOD.
We did some calculations based on customers, and they can save roughly 50% to 60% of the storage, meaning also 50% to 60% of
(07:14):
the storage cost.
And the way we do that, mainly for VOD or also for nPVR: we only keep the highest profile on the storage, and we repackage the ABR profiles on the fly.
So, yeah, the highest profile often takes up the most
(07:34):
storage space.
So in that way we can save 50 to 60% of storage.
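As a back-of-the-envelope sketch of the storage math Dominik is describing, assuming a made-up four-profile ladder (these bitrates are illustrative, not ScaleStream's actual configuration), keeping only the top rendition saves roughly the combined size of the lower ones:

```python
# Illustrative ABR ladder with hypothetical bitrates in Mbps.
# For a given duration, storage scales linearly with bitrate.
ladder_mbps = {
    "1080p": 6.0,  # highest profile: the only one kept on storage
    "720p": 3.0,   # generated just-in-time
    "480p": 1.5,
    "360p": 0.8,
}

def jit_storage_savings(ladder, kept):
    """Fraction of storage saved by keeping only the `kept` profile."""
    total = sum(ladder.values())
    return 1.0 - ladder[kept] / total

savings = jit_storage_savings(ladder_mbps, "1080p")
print(f"storage saved: {savings:.0%}")  # about 47% with these numbers
```

With these example bitrates the saving works out to roughly 47%, the same ballpark as the 50% to 60% quoted above; the exact figure depends entirely on the ladder.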
And then sometimes we get a question from customers like: yeah, okay, you save on storage, but don't you lose that on transcoding power? Two things.
First of all, with the NetInt cards, and we can talk about figures
(07:57):
later, we can achieve quite a high transcoding performance at a low cost; that's one.
And also we do it in an efficient way.
For example, for nPVR, typically the first 12,
(08:18):
sometimes even 36, hours are the most popular.
So we keep all profiles for two or three days, whatever we configure, and only after three days do we remove the lowest profiles.
So in that way we don't lose transcoding power relative to what we save
(08:39):
in storage space.
So these are two optimizations we've done in that space.
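The two-stage retention rule described here, full ladder for the first days and top profile only afterwards, could be sketched like this (the 72-hour window, function name and profile labels are all hypothetical; per the interview, the actual window is configurable):

```python
from datetime import datetime, timedelta

# Hypothetical sketch of the age-based retention rule: keep the full
# ABR ladder while a recording is in its most-watched window, then
# prune the lower profiles and rely on just-in-time transcoding.
FULL_LADDER_WINDOW = timedelta(hours=72)  # illustrative; configurable

def profiles_to_keep(recorded_at, now, ladder):
    """Profiles to retain on storage; `ladder` is ordered highest first."""
    if now - recorded_at <= FULL_LADDER_WINDOW:
        return ladder      # first few days: keep every profile
    return ladder[:1]      # afterwards: top profile only

ladder = ["1080p", "720p", "480p", "360p"]
now = datetime(2024, 4, 10)
print(profiles_to_keep(datetime(2024, 4, 9), now, ladder))  # full ladder
print(profiles_to_keep(datetime(2024, 4, 1), now, ladder))  # ['1080p']
```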
Speaker 1 (08:47):
Yeah, amazing.
Now, are most of your clients deployed in public cloud? Is it in a cloud, but their own cloud, their own data centers? Or is it all on-prem? What does the architecture look like?
Speaker 2 (09:04):
Well, it's a bit of a combination.
What we see at the large telcos: typically a lot is on-prem, although we see a clear shift towards public clouds. So this is definitely a shift we see. Smaller broadcasters typically run fully in cloud, because
(09:26):
it is quite easy to spin up.
The ScaleStream solution as such is microservices-based, so we can spin it up on-prem, in Kubernetes, in public cloud, in private cloud.
And the same is valid for transcoding.
And these days, if you look at the most popular cloud vendors,
(09:50):
they also have NetInt cards available for selection, and this makes it quite powerful: we can just spin up instances.
If they need more power, we just spin up an extra instance, kind of auto-scaling in the cloud.
Speaker 1 (10:04):
Yeah, I assume you're referencing, for example, Akamai's Linode. Yeah, exactly.
So this is really fascinating. You keep the highest profile, sort of the
(10:45):
quote-unquote mezz file, if you will, and then create derivative files literally on the fly.
There's tremendous value there from a usability perspective.
You think about it: now I could literally have, although I
(11:08):
would never do this, but in theory you could have perfectly matched profiles on a device-by-device basis, absolutely matching screen sizes one for one, because all it is is a config file.
In normal topologies, you'd have to go encode that file, you'd have to store it somewhere, and then, who knows, maybe just a very, very, very small subset of users
(11:30):
have that particular device, with kind of an unusual resolution or something unusual about the format support.
But this whole holy-grail concept of just-in-time also broke down when you had to do it in software on CPU, because it
(11:53):
was just too expensive, or you couldn't get the bitrate efficiency and the quality.
You could run the encoder super fast, but your bitrate was going to be really high. So you lose your efficiency, your codec compactness, and then you also generally lose quality.
(12:14):
So talk me through: what was your journey to go from CPU to VPU, for example? Did you look at GPU? Did you even deploy GPU, or did you just jump all the way from CPU and adopt VPU?
Speaker 2 (12:36):
Yeah, well, initially our offline VOD transcoding solution was the first step into the transcoding business for us. It was initially running on CPU.
It runs fine, but, yeah, it needs quite some CPU resources.
(12:57):
Of course it's offline, so we can schedule the jobs, and if it's done in an hour or in two hours, okay, it's not critical.
Of course, if you go into just-in-time transcoding, then it's a different story.
Then you need to have the power to deliver it in a matter of milliseconds, or a really short time frame.
(13:20):
Yeah, and building this based on CPU, there we saw a bottleneck.
If it's only one channel or a limited number of assets, all fine.
But of course, if you go into the tier ones or the larger broadcasters with a lot of content, yeah, then the use case
(13:41):
falls apart, more or less, especially cost-wise.
And this is when we started searching for other solutions, and we also looked at GPU.
But then, of course, with the NetInt VPU cards, the Quadra, we did some initial tests, and these showed,
(14:02):
yeah, a huge increase in performance.
So the balance is perhaps not the right word, but the focus is mainly on the NetInt cards, for sure.
And this is also when we decided, okay, let's set up some benchmark tests, and we did.
We have some cards on-prem, but to make the comparison a fair
(14:25):
comparison, we actually set it up in Akamai Connected Cloud. As you already mentioned, they have the NetInt cards available, because then we can compare exactly the same nodes based on CPU, based on GPU and based on the NetInt cards.
And then, yeah, we saw the difference in performance
(14:48):
clearly.
Speaker 1 (14:50):
Yeah, that's great.
What codecs are your customers primarily using? I assume H.264 is ubiquitous, but HEVC? AV1?
Speaker 2 (15:02):
Yeah, AV1, not yet.
It's mainly H.264 and H.265, HEVC.
These are the two.
There are a lot of talks on AV1 or new codecs coming up, but they're still not widely deployed.
I think this will change, probably in the next year or the
(15:24):
next two years, I'm quite sure.
But yeah, it's still mainly those two codecs, the regular ones.
Speaker 1 (15:33):
Understood. And resolutions: is it all HD and below, or does it go up to 4K? What are the common resolutions?
Speaker 2 (15:43):
There is 4K content available, but there too it's still mainly HD; 4K is not that commonly used. One thing that is often done: we also have on our origin a concept called adaptations, which you can see as a recipe for the player.
(16:04):
We can make an adaptation for a mobile phone, for a smart TV, for a set-top box, and then, based on the device, you can set different codecs, different bitrates. Typically, for the mobile phones the HD is stripped; for the set-top box it's the opposite, only the higher profiles are sent.
And this adaptation also ties into the just-in-time, because
(16:29):
then we can say, okay, we only need to transcode just-in-time these profiles, and not all the profiles.
So these two features go a bit end-to-end.
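The adaptation concept, a per-device recipe for which profiles to serve, boils down to a lookup that also tells the just-in-time transcoder which renditions it may ever need to produce. A minimal sketch, with hypothetical device classes and profile names (not ScaleStream's actual configuration):

```python
# Hypothetical "adaptation" lookup: which ABR profiles each device
# class receives, and therefore which renditions ever need to be
# produced just-in-time for it.
ADAPTATIONS = {
    "mobile": ["720p", "480p", "360p"],     # full HD stripped for phones
    "smart_tv": ["1080p", "720p", "480p"],
    "set_top_box": ["1080p", "720p"],       # only the higher profiles
}

def profiles_for(device):
    """Profiles to package/transcode for a given device class."""
    return ADAPTATIONS.get(device, ADAPTATIONS["smart_tv"])

print(profiles_for("mobile"))       # ['720p', '480p', '360p']
print(profiles_for("set_top_box"))  # ['1080p', '720p']
```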
Speaker 1 (16:41):
Oh, that's interesting.
Okay, and that does make sense, because there is a core set of files, meaning resolutions and bitrates, that are going to cover maybe 100% of your user base, and I'm
(17:01):
thinking of some of the lower resolutions and the lower bitrates.
There's no need to encode those on the fly.
They also don't take up a lot of storage.
So is it that you're encoding on the fly, transcoding on the fly, the highest resolution? Or I guess it'd be a medium resolution, because you're sending over, like, the 4K asset,
(17:25):
right, the top profile?
Speaker 2 (17:30):
It's configurable. So the highest profile is, let's say, the input signal, or the content, and then we can transcode all the profiles on the fly.
But it's configurable: you can say, okay, I only want to transcode the mid-range, let's say, or the low range.
(17:50):
But a typical use case is that the highest is used as the source and all the rest is transcoded.
Speaker 1 (17:58):
Okay, interesting. All right.
Well, I know that, for our listeners, we're touching on, as I already mentioned, just-in-time encoding and transcoding. That's been talked about for years, really.
It's been theorized as part of the edge, so these concepts are
(18:22):
not new, and I think everybody has certainly heard of them, or even done some exploratory POCs to build something out, but very, very few people until now have been able to go into production.
One of the other things that's been talked about is the idea of the hybrid cloud, the hybrid cloud being where you have
(18:44):
on-prem that flexes into some sort of a cloud, and that could be your own data center, or it could be literally a public cloud, or maybe some other colo or something. With Akamai, with the Connected Cloud, we are hearing about this more and more.
So my question to you is: is this enabling this hybrid-cloud type
(19:10):
of flexing, from what might be on-prem and then going into Akamai? Is that how you're using it? Or are you running the infrastructure 100% on Akamai? What does that look like?
Yeah, it's a combination.
Speaker 2 (19:27):
Typically, broadcasters don't want to hassle around with hardware and on-prem, so most of the time it's fully cloud.
There are always exceptions that prefer everything on-site.
The tier-1 operators, on the other end, typically prefer to have everything on-prem, although there's a change.
(19:49):
But what we also see, for example, is that all the main channels are running on-prem, but pop-up channels, like Champions League finals, Olympic Games, special events, UFC, these are typically spun up in the cloud, because it's kind of a pop-up channel.
(20:10):
They spin it up for a few days, for a week or whatever, or even for an hour, and then they tear it down again. And now that we have this just-in-time transcoding, this is even better, because then there's only one feed that needs to go into the cloud, and then all the rest is done from there.
Speaker 2 (20:32):
So, yeah, this is definitely a valid use case.
Speaker 1 (20:34):
Interesting.
Okay. Yeah, so what I hear you saying is that with the use of Akamai, or some other cloud platform, it could be both cases: somebody could be running 100% of their workflow on the Connected Cloud, on Akamai's Connected
(20:55):
Cloud, or they could be flexing into it, running kind of a combination, right?
Speaker 2 (21:02):
A true hybrid, yes. Yeah, okay. And then from our end it's one UI, and you can just deploy wherever you want.
And then you say, okay, if, for example, I only want to do live in the cloud, you spin up a live instance with the microservices needed for running live.
(21:24):
You can have a separate node for VOD only, because typically these are different use cases.
For example, for video on demand it's egress-based; for live, it's kind of a static bitrate that you need, and you need to record the local buffer for time-shifted TV.
So we can kind of mix and match with our microservices and then
(21:47):
deploy them on specific cloud nodes where needed, or even in specific regions.
So if you have a regional broadcaster, for example, or regional channels, we can just stream them to a local cloud instance in Akamai Connected Cloud and then do everything
(22:09):
from there.
Speaker 1 (22:11):
Yeah, interesting.
Okay, yeah, that makes a lot of sense.
And, as you said in your introduction, flexibility is very important. Energy conservation. I think it all can be summed up in the word efficiency.
(22:31):
Right, you've built your solution to bring your customers the most efficiency, which of course then translates to lower costs, translates to lower energy, translates to generally easier operation. In other words, it has like a dozen different
(22:52):
benefits.
But, you know, when you engineer, you engineer. It's interesting, because we very much approach our product in the same way. I mean, even the very decision to build on an ASIC, using ASIC technology versus some other platform, was all about efficiency.
(23:12):
If you're going to build hardware that is purpose-built for video, then sure, there are other platforms: there's GPU, and there's FPGA, and there are other approaches. But there's nothing like an application-specific integrated circuit, an ASIC.
(23:38):
I think that's why our two solutions were such a natural fit.
Well, good. Well, Dominic, thank you so much for this overview.
Why don't you, just in closing, let everyone know: what are you going to be talking about at NAB?
And I'd also be really curious:
(23:59):
what are the conversations that you're hoping to have with the industry, with the market, with your customers, in those meetings? What is it that's important to ScaleStream, but that you really think is going to add a lot of value to the industry, and so you want to be
(24:20):
talking about those things?
Speaker 2 (24:23):
So one of our main topics at NAB is sustainability.
This is definitely a topic that's really relevant for us, and actually not so much for us as for our customers, because we don't do it for our own sake, we do it mainly for our customers.
And then at NAB we're also showcasing just-in-time
(24:44):
transcoding on the NetInt cards. So this is what we can talk about.
We will bring a kind of PC with one of the cards in there, so that we can do a showcase.
And then, when we talk with customers, not only do we
(25:04):
want to, of course, tell them what we can do as a kind of integrated solution with our origin, but we also want to hear from customers what they need, because in the end we need to build what customers need, not what we think we need to build.
So it works in two ways. We definitely want to talk with customers that have ideas,
(25:26):
and that might see a missing feature on our end.
We always listen to our customers and adapt accordingly.
Speaker 1 (25:35):
Yeah, that's great.
Well, just in case it got lost on any of the listeners: ScaleStream will be in the NetInt booth. Or do you have your own booth, or are you in our booth?
Speaker 2 (25:50):
We are only at the NetInt booth.
Speaker 1 (25:55):
Okay, great.
Well, we're very happy to have you, happy to host you there.
So, anybody who wants to learn more about ScaleStream: sit down and talk to Dominic and the other team members who will be there. Come to the NetInt booth; they have a nice kiosk with a big TV, and he's going to have a demo there showing off the
(26:18):
solutions.
So please stop by.
So, Dominic, thank you again.
It was great talking with you.
By the way, I think we probably should do a follow-up after NAB, and we'll do a little bit of a longer interview. We can talk about
(26:42):
what you learned, key takeaways from NAB, and maybe there are a few other developments with the product or the company that you'll want to share at that point too.
Speaker 2 (26:51):
So we'll book that for maybe later in April or May.
Yeah, sounds good.
Speaker 1 (26:58):
All right, excellent.
Well, thank you again for listening to Voices of Video.
We really appreciate all of you and look forward to meeting you at NAB.
Speaker 2 (27:11):
This episode of Voices of Video is brought to you by NetInt Technologies.
If you are looking for cutting-edge video encoding solutions, check out NetInt's products at netint.com.