September 11, 2023 23 mins

Host David Tan brings back some all-time great play calls and gives a glimpse behind the scenes of the AI that was used at the US Open Tennis tournament and how it could help an MSP.    

Episode Transcript

Available transcripts are automatically generated. Complete accuracy is not guaranteed.
Speaker 1 (00:00):
The hyperbole is done. Now we can finally play the game. Look at that! Marvel, you're this one, man. Goodbye! Hello, Heisman! Whoa, Nellie! Let me tell you about Keith Jackson. For those of you that don't know, that is the voice of the late great Keith Jackson.

(00:20):
In my mind, one of the greatest to ever do it, one of the greatest play-by-play commentators of all time, the voice of college football, many may say, famous for sayings such as "whoa, Nellie" and "take him to the woodshed." And that call, the 1991 call of Desmond Howard's punt return for

(00:40):
a touchdown against Ohio State that potentially clinched him the Heisman Trophy. And I am sure you're wondering why we're starting off today talking about, or with a clip of, Keith Jackson. I'll get to that in just a minute, but first I want to say hello and welcome to the CrushBank AI for MSPs podcast, the podcast where we talk about all things artificial intelligence,

(01:04):
machine learning and how they relate to managed service providers. My name is David Tan and I appreciate you joining in. I'm really happy to be here with you. Hopefully this is going to be a fun episode. We're going to talk about a few things that are very near and dear to my heart, not the least of which is sports, in particular college sports, pro sports of all types, and

(01:24):
play-by-play commentary. And you're probably wondering, like I said, why we started off with that clip and why we're talking about play-by-play announcers. I'm going to get to that, but I'm going to take a step back first, and we're going to start by talking about something I did this past week: I went to the US Tennis Open out

(01:45):
here in Flushing, Queens, not too far from my hometown, the site of one of the four major tennis Grand Slam tournaments, and I had the privilege, along with a couple of my partners here at CrushBank, of being the guest of IBM for the tournament. You may be aware that IBM is a huge sponsor of the US Open and has

(02:06):
been for 20-plus years. IBM is a big sponsor in tennis and golf particularly. They invest really heavily in tournaments like the Masters and, like I said, the US Open, and really do a great job of not only sponsoring the tournament but making the data and the analytics and the statistics and things like that sort of

(02:29):
approachable and understandable, both for the casual and the serious sports fan, through the tournaments, through the announcers, through the use of things like the app that goes along with it. So they just really do a great job, like I said, of making that data available and accessible, and that was a lot of what we spent some time talking about.

(02:50):
So I was super fascinated. Obviously, I love the tennis. I love the atmosphere. If you've never been to the Open, I highly, highly recommend it; it's a great way to spend a day. We actually went for a night session, and it's a fun way to spend the night as well. Tons of things to do. I love going early in the tournament because you can go out to the outer courts and see some great players playing

(03:10):
tennis, not more than 10 or 15 feet away from you, and there's always something going on, and they've turned it into a great spectacle over the years. Like I said, highly recommend it. But back to the reason we were there: like I said, IBM being a big sponsor, they have a very big presence there, and IBM really drives all the data associated with the tournament. So when you're watching the event on TV, you get statistics.

(03:34):
You know the basic statistics, things like first-serve percentages and winners and unforced errors and all the things that you would understand, but also the analytics, like the percentage of points won when a first serve is above 120 miles an hour, or things like the likelihood of victory based on

(03:57):
the conditions of the environment and the opponent and the strengths and momentum, and things like that, all driven by the data that's collected. And I was fascinated. We got a little bit of a behind-the-scenes tour, and I was fascinated to hear that IBM collects approximately 50 data points per shot. So just think about a typical tournament, or a typical match

(04:19):
I should say, in the tournament. I'll use a men's match as an example. If you're not familiar with tennis, men play best three out of five sets, a set is to six games, and each game is essentially to four points. I'm oversimplifying it. Point being, there are hundreds and thousands of points in each

(04:39):
and every individual match, and the tournament is made up of 128 players on the men's side and 128 players on the women's side. So that means a total of 254 singles matches. That doesn't count doubles and the like; 254 matches are played. So just the raw volume of data there is really inconceivable

(04:59):
when you consider that every single shot in the tournament generates 50 points of data. So the things that IBM can do with that are really pretty incredible. And if you download the US Open app, you can see real-time data, real-time statistics, real-time analysis, all driven by AI, and that's obviously why we're talking about it today.
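To make that scale concrete, here is a back-of-the-envelope sketch. The points-per-match and shots-per-point averages are my own rough assumptions for illustration, not figures from the episode; only the 128-player draws and the 50 data points per shot come from the discussion above.

```python
# Rough estimate of the singles data volume at the US Open.
# Assumed averages (illustrative): ~150 points per match, ~4 shots per point.
DRAW_SIZE = 128                       # players in each singles draw
DATA_POINTS_PER_SHOT = 50             # per the discussion above

matches_per_draw = DRAW_SIZE - 1      # single elimination: 127 matches
total_matches = 2 * matches_per_draw  # men's + women's singles

points_per_match = 150
shots_per_point = 4
total_data_points = (total_matches * points_per_match
                     * shots_per_point * DATA_POINTS_PER_SHOT)

print(total_matches)       # 254
print(total_data_points)   # 7620000, i.e. millions of data points
```

Even with conservative assumptions, singles play alone yields millions of data points per tournament.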

(05:20):
Those of you that know me or that know CrushBank know what a great partner IBM is for us. We have been working with them tightly now for six or seven years. They really drive, or help us drive, a lot of the things we do. We love their investments in technology, we love their platform, and, like I said, it was exciting to see it up front

(05:41):
and in use in real time. But what I thought was incredible was the multitude of ways they were using AI, particularly generative AI, which is going to bring us back on point in just a couple of minutes here. But really, it was fascinating where you can see, like I said, in real time, the data and the predictions and the predictive

(06:02):
analytics really change as conditions change, right? So as points are won, as games are won, as momentum, things that are more of a nebulous concept, those things take over. You can see in real time how the data changes and, again, they're all AI-driven analytics and they're all IBM Watson, but

(06:22):
there is something IBM has started doing only recently. They started, actually, I believe, with the Masters back in April of this year, and I thought it was a very interesting proof of concept. I think they've honed it a lot. You know, we're now in the September timeframe; the tournament starts late August, early September, so five months

(06:43):
or so since the Masters, a lot has changed in IBM's capabilities around generative AI. They've obviously announced watsonx and launched their entire platform, and I think they learned some lessons from the Masters and some feedback they got there, and I think they've turned this into a really slick sort of solution, again based on

(07:06):
all of their technology. What I'm talking about is, if you go into the US Open app, you get a series of highlights, hundreds and hundreds of highlights, right? Every point, every set, any exciting shots, serves, returns, you name it. There are hundreds of highlights in there for every day and for the entire tournament, and they don't have people that do the

(07:29):
broadcast do the play-by-play analysis of each of those shots and record them. So what they do is basically use the watsonx platform and generative AI to create that play-by-play commentary. So in other words, if you pull up a point between, say, Novak Djokovic and Daniil Medvedev. They didn't actually even play

(07:50):
in this tournament, but they were just the first two names that came to mind. So if you pull up a point between the two of them, you will hear an analysis, essentially a computer-generated voice that created that play-by-play. So it's not someone recording it; it's created through generative AI. And I found it kind of fascinating, and it also made me a little bit sad for a few minutes while I

(08:12):
thought about it. So I'm going to get back to the technology in just a minute, but that's kind of why I started this podcast with a very quick clip of Keith Jackson. To give you a little bit of insight into me: that play happened in 1991, and I happened to be a student at the University of Michigan at the time. One of the highlights of that season, obviously, was that Desmond Howard

(08:34):
won the Heisman Trophy. Michigan had a good team; they probably lost a bowl game, if I had to bet. I don't remember off the top of my head. But you hear Keith Jackson, you hear some of these great commentators, whether it's a Vin Scully, a Phil Rizzuto, a Howard Cosell, you name

(08:54):
it. You know it's a big event, right, when Jim Nantz comes on at the beginning of every Masters and says, "Hello, my friends, Jim Nantz here in Butler Cabin, where later today the green jacket will be presented." And when Brent Musburger used to say "you are looking live," you knew it was time for the Rose Bowl. And I've

(09:14):
always associated those voices with those events. You know, I grew up without a TV in my room when I was a kid, so we used to listen to Phil Rizzuto do play-by-play of Yankee games, and it was incredible. It was an entirely different experience than watching the

(09:41):
game on TV, because these announcers had to paint the picture for you. And Rizzuto was great in so many ways, but he would also just go off for hours on end about things completely unrelated to the game. He got completely lost in the moment, in the conversation, and it was fascinating. And seeing what Watson was doing, and

(10:02):
listening to it, kind of made me, like I said, a little bit sad. Is this the future of, you know, play-by-play sports broadcasting? If you think about some of the things happening in that industry right now, I'm sure any sports fan out there is probably familiar with things like the layoffs at ESPN.

(10:25):
There are incredible amounts of money being paid for television contracts, so certainly live sports is alive and well. The NFL package goes for billions of dollars, and CBS pays multiple billions of dollars for March Madness,

(10:45):
but it's become a little bit of an unsustainable financial model. So you start to worry, or you wonder: are these networks going to turn to computers, quite frankly, to artificial intelligence, to start doing this play-by-play broadcasting? Are we going to lose some of the great calls of all time in

(11:05):
the future? Are they going to become sort of a relic of the past, like so many other things seem to? I certainly, personally, hope that's not the case, for a multitude of reasons. But then it also got me wondering. Right, I've seen and I've read about artificial intelligence that can mimic a voice with only three seconds of audio. Quite frankly, I think that's even worse.

(11:27):
The last thing I would want to do is train a model on what Keith Jackson's voice sounds like, train it on how a football game works, and then let it do play-by-play broadcasts as Keith Jackson 20 years after he has passed away. That would be an abomination, quite frankly. I truly hope that's not the case. But again, I also hope it's not the case that we start to lose some of

(11:50):
these great moments and they just become computer-generated. I don't think that'll be the case, but I was just sort of fascinated thinking about that. But really, what I'm equally fascinated by is just sort of the technology and what goes into this, so I want to spend a couple of minutes talking about that. I thought that might be fun and interesting. I often do sessions and lectures and presentations and such

(12:13):
around artificial intelligence, particularly when I'm talking to managed service providers. Obviously we talk about what we do around everything from semantic search to data knowledge management and operational automation, around things like predictive analytics and text analytics, things like that. There are some really cool things in the abstract that artificial

(12:34):
intelligence can do, but it doesn't become real until you can start to see it in practice and see it in production in real-world use cases. This was again such a fascinating one for me, being such a computer nerd on one side but also a fanatical sports fan on the other. To sort of see those two come together was pretty cool for me. If you take a step back for a minute and think about this:

(12:55):
what went into this type of technology and this type of solution, what they had to do. It's really a multi-tiered approach to this technology and to this creative output. That's the other thing that's sort of fascinating: we tend to think about these things in a vacuum.

(13:16):
Let's just talk about one thing that I'm going to go a little bit deeper on, which is what we call visual recognition. Most people understand what visual recognition is, but I can oversimplify it by saying: if you can train a computer on what a cat looks like, what a dog looks like, what a giraffe looks like, and you then show it a bunch of pictures, it can identify dog, cat, giraffe, and so

(13:36):
on and so forth. That's fairly fundamental. That visual recognition is the underlying technology for everything from good technology, like autonomous driving, having the cars recognize what's on the road and being able to adapt and see in real time, to some technology that's not necessarily as good, like facial

(13:58):
recognition, being able to identify someone in a crowd, for good or for bad. I actually think, and hope, it's used more often for good, but certainly there's a possibility of invasion of privacy and things like that. That's the foundation of this. The first thing that you need to do to build this type of solution is visual recognition. You need to train a model, a computer model, that can watch a

(14:21):
tennis match and identify what's happening. Is that a forehand? Is that a backhand? Is that a ground stroke? Is that an approach shot, a drop shot, a serve, a second serve, a fault, an ace? You get the idea, right? All of these things are picked up by the computer visually, so it watches the video and it converts that video into ones

(14:43):
and zeros. Right, this is all data-driven, like anything else.
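As a toy illustration of that first stage (entirely my own sketch, it stands in for a trained vision model, and the feature names are hypothetical), you can picture the recognizer mapping extracted per-shot features to tennis labels:

```python
# Toy stand-in for the visual recognition stage (not IBM's actual model).
# Assumes hypothetical features have already been extracted from video.
def classify_shot(features: dict) -> str:
    """Map extracted shot features to a tennis term."""
    if features.get("is_serve"):
        if not features.get("landed_in_service_box"):
            return "fault"
        if features.get("untouched_by_returner"):
            return "ace"
        return "serve"
    # Otherwise a groundstroke: which side of the body was it struck on?
    if features.get("struck_on_dominant_side"):
        return "forehand"
    return "backhand"

print(classify_shot({"is_serve": True, "landed_in_service_box": True,
                     "untouched_by_returner": True}))    # ace
print(classify_shot({"is_serve": False,
                     "struck_on_dominant_side": True}))  # forehand
```

A real system learns these distinctions from labeled video rather than hand-written rules, but the output is the same kind of structured event stream.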
And then those ones and zeros get put through what we call a generative AI model, and we're all familiar with generative AI at this point. Right, we understand it's about a computer that can create content. Hopefully you've listened to some of my previous podcasts or seen me speak about what generative AI is and how it

(15:05):
works. But in this particular case, IBM had to do something interesting which, quite frankly, is fairly common in the generative AI space: they have these foundation models. The way IBM watsonx works is they have a bunch of foundation models which are the underpinnings of this technology, meaning these foundation models know how to speak English,

(15:25):
in this case. Right, we'll just say they know language, they know how to talk, they know how to form sentences, paragraphs, things like that, but they don't know tennis. There's no open-source model around tennis lingo. So what IBM needs to do is take one of those foundation models and layer training on top of it, fine-tune it, if you will, with tennis lingo.
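A minimal sketch of what that domain layering might look like as training data (my own illustration; IBM's actual tuning set and the watsonx tuning APIs are not described in the episode):

```python
# Hypothetical instruction-tuning records pairing plain descriptions of
# tennis events with the lingo a commentator would use.
tennis_lingo_examples = [
    {"event": "ball struck from behind the baseline into the opposite "
              "service box to start the point",
     "term": "serve"},
    {"event": "serve the returner never touches", "term": "ace"},
    {"event": "serve that misses the service box", "term": "fault"},
]

def to_training_record(example: dict) -> str:
    """Format one example the way instruction-tuning data is often laid out."""
    return f"Input: {example['event']}\nOutput: {example['term']}"

for ex in tennis_lingo_examples:
    print(to_training_record(ex))
```

Fine-tuning on thousands of records like these teaches a general language model the vocabulary of one domain without retraining it from scratch.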

(15:49):
So again, the same terminology: when one player, standing behind the baseline, throws the ball up in the air and hits it into a box on the other side, that's called a serve, and IBM needs to train Watson that that is a serve and that's the term you need to use. And then if the other player doesn't return it, it's called an ace. If it doesn't go in the box, it's called a fault.

(16:11):
You get it. I'm obviously not going to explain all the tennis lingo here, but they have to train this generative AI model on what tennis is. So at this point they are taking visual recognition, turning that data into ones and zeros, putting it through a custom-tuned generative AI model, and creating an output. And then that output has to run through another AI system,

(16:34):
which in this case is text to speech (I said speech to text; it's the other way around). So in other words, the machine is creating the text and then a voice is speaking it. In the app, if you listen to it, I believe there are a few choices; you can choose the voice that comes out the other side. Nothing famous, obviously. But think about it: everyone's familiar with the GPS in their car.

(16:56):
You can make the voice sound just about any way you want, right? You can give it a British accent, you can give it a French accent, you can have it be a man or a woman. Same idea here. I mean, theoretically, like I said, looking into a bit of a dystopian future, we could have it sound like someone that we know, or that has passed, or that was an all-time great, things like that. But really, at this point, we're combining, or IBM, I should say,

(17:18):
is combining four or five different AI technologies, and that's where this stuff becomes incredibly cool and incredibly fascinating: when you can see it in reality, but also when you can layer it together. So, like I said, just to kind of wrap up, or review, I should say, the things that we talked about here: it's a visual recognition model, which turns the video into data, which then

(17:39):
gets fed into a generative AI model. That generative AI model is custom-trained on the domain, in this particular case tennis. That generative AI model then creates the play-by-play commentary, which gets fed into a text-to-speech system, which then gets read out, and this all happens in real time.
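The layered flow just described can be sketched end to end. This is my own stubbed illustration of the architecture, not IBM's code; each function stands in for a real model:

```python
# Stubbed sketch of the highlight pipeline: video -> events -> text -> audio.
def visual_recognition(video_frames: list) -> list:
    """Stage 1: a vision model turns video into structured shot events."""
    return [{"shot": "serve", "speed_mph": 122}]  # stubbed result

def generate_commentary(events: list) -> str:
    """Stage 2: a domain-tuned generative model writes the call."""
    e = events[0]
    return f"A big {e['shot']} at {e['speed_mph']} miles per hour!"

def text_to_speech(text: str) -> bytes:
    """Stage 3: synthesize the chosen voice (stubbed as encoded bytes)."""
    return text.encode("utf-8")

audio = text_to_speech(generate_commentary(visual_recognition([])))
print(audio.decode("utf-8"))  # A big serve at 122 miles per hour!
```

The interesting design point is that each stage is independently swappable: a different voice, a different domain tuning, or a different sport only changes one layer.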
These are all highlights, obviously, in the app in this

(18:00):
case, but this can all happen in real time as someone's watching the match. There's no reason why it couldn't just be generating the commentary for you. And what I thought was actually kind of cool about it was, when I took a step off my soapbox and stopped worrying about our computer overlords and a dystopian future, I started to think about the actually interesting possibilities of it.

(18:21):
So instantaneously, this can happen in hundreds of different languages, virtually any language spoken that a model can be trained on. So wherever you are, whatever remote location you're in, whatever remote language you speak, if you can get access to these systems, you can now get play-by-play in your native language. So that's incredibly cool.

(18:42):
As I say, I'm a tremendous sports fan, and when you watch a sports broadcast they'll often have an alternative broadcast in, say, Spanish or another language, but they certainly don't have it in hundreds and hundreds of languages. So I think just the approachability and the availability of that for people of all different backgrounds and ethnicities, from different countries, speaking different

(19:04):
languages, I think makes it incredibly cool. There's also a certain accessibility, right? You can make it easier or harder to understand, for people with difficulty hearing or different levels of understanding; as we say, we can fine-tune that output for different people. So really, what you're doing in a lot of cases is leveraging

(19:27):
this AI to make this technology, or this entertainment, I should say, more available to more people, and I think that's incredibly cool, right? So yeah, we may run the risk of ESPN deciding they don't need play-by-play commentators anymore. I don't think that's going to happen anytime soon, and I certainly hope it doesn't happen because, like I said, there are so many great memories of what play-by-play sounds like, just

(19:51):
listening to a TV or radio broadcast. But on the opposite side of the coin, being able to make this entertainment so much more available, so much more accessible, is incredibly powerful. I mean, just think that you can open up the ability to watch US Open tennis to people of all walks of life, all over the world. I think that's incredible, right?

(20:13):
It just increases the size of the audience. Tennis is just the threshold, right? That's just the forefront; IBM is doing some incredible things. This is going to be happening in all sorts of different sports and all sorts of different events; I have no doubt about it. Personally, I don't mind if we create millions and millions more Michigan football fans throughout the world. I certainly would be in support of that. So I think that's kind of cool.

(20:35):
I think it's really interesting to think about the ways technology, in this case specifically generative AI, which is so powerful, can start to impact our lives and what it can do for people. And again, I think it's interesting to think of these layered together, right? We often look at these solutions in a vacuum. What can you do with just generative AI?

(20:56):
What can I do with visual recognition? What can I do with text to speech, or even speech to text, vice versa? Really, it becomes much more fascinating when you can start to layer these things together. I think the more you can dig into real-world examples of what this is and how it works, the more it really helps. For me personally, it helps my mind start thinking about other interesting

(21:17):
examples. Real quickly, as I kind of wrap up here, just think about this as an MSP. We don't necessarily want to cut down on staff; we want to optimize our staff. We want to make them more efficient. We want to make them more effective. We also have trouble hiring people and finding the right skills and bringing new people in and retaining people. Well, what if there was some sort of technology that

(21:39):
was sort of the next step beyond an IVR phone system? Right, it can answer the phone, and you can have a conversation with someone, or make it seem like you're having a conversation, when it's really a computer on the other side. You can take a ticket from a customer, run that ticket through some sort of search or retrieval, find the answer to the question, then run it back through generative AI and have it speak the answer.

(22:01):
So a user calls in, gets a computer voice on the other side, and asks for the address of a VPN, for example: hey, I need to connect to my VPN, what's the address? It can retrieve it, it can speak it, and before you know it, that answer is delivered, that problem is solved, and no human interaction was involved.
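That flow can be sketched in a few lines. Everything here is hypothetical; the knowledge-base entries, the address, and the stubbed speech stages are my own illustration of the idea, not a real product:

```python
# Hypothetical sketch of the voice-assistant flow for an MSP help desk.
KNOWLEDGE_BASE = {
    "vpn": "The VPN address is vpn.example-msp.com.",   # made-up entry
    "password reset": "Use the self-service reset portal.",
}

def speech_to_text(audio: bytes) -> str:
    """Transcribe the caller's question (stubbed)."""
    return audio.decode("utf-8")

def retrieve_answer(question: str) -> str:
    """Search-or-retrieval step over the knowledge base."""
    q = question.lower()
    for topic, answer in KNOWLEDGE_BASE.items():
        if topic in q:
            return answer
    return "Let me route you to a technician."

def text_to_speech(text: str) -> bytes:
    """Speak the answer back to the caller (stubbed as encoded bytes)."""
    return text.encode("utf-8")

reply = text_to_speech(retrieve_answer(
    speech_to_text(b"Hey, I need to connect to my VPN. What's the address?")))
print(reply.decode("utf-8"))  # The VPN address is vpn.example-msp.com.
```

It is the same layering as the tennis pipeline: speech in, retrieval or generation in the middle, speech out, with a human fallback when nothing matches.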
I'm obviously oversimplifying it, but it really lets your mind

(22:23):
think about what some of the possibilities are in all industries. I'm clearly laser-focused on managed services and IT service providers in general, but this applies to all sorts of industries. So it's super fascinating, and I'm very excited, as we're at the forefront of a lot of this stuff, just to see what's going to happen over the next three to five years as this technology

(22:45):
starts to really evolve, get more sophisticated, get more accurate, get more understanding and, quite frankly, as more people start to train it. I always say it's all about the data, it's all about the training, and we need experts to train it. In this case, IBM found a bunch of tennis experts to train a model on what the voice of a tennis match sounds like, and it

(23:05):
was able to turn that into a product. The more we can get people training AI systems on different domains of data, the more capable and the more powerful these systems become, and the possibilities are limitless. So I know I, for one, am excited to see what's to come, and even if that means I occasionally have to sacrifice another great play-by-play call, at least we know that technology is evolving

(23:29):
and that the human race is going to continue to benefit from what's being done. I want to thank you again for listening to the AI for MSPs podcast from CrushBank. As always, I'm David Tan. Please reach out if you have any questions, comments, or things you'd love to hear me talk about; we'd love to hear some feedback. You can reach me anytime at david@crushbank.com.

(23:51):
Thank you so much for tuning in, and I'll speak to you again next time.