Episode Transcript
Available transcripts are automatically generated. Complete accuracy is not guaranteed.
Speaker 1 (00:06):
Bedrock is an independent data science and AI firm specialized in
data-driven business change. In this podcast, our guests help
us spread knowledge and experience to our listeners.
Speaker 2 (00:29):
Good morning, Pablo. How are you doing today, and where
are you calling us from?
Speaker 3 (00:34):
Hello, good morning. I'm doing really well. I am speaking
from Ottawa in Canada.
Speaker 2 (00:39):
Awesome. We have a few clients out there, well, mostly
in Toronto, and they're always telling us not to visit
them during this time of the year because it's very
cold there. They suggest we go there and do a
workshop in April, but not before that. Is it that cold?
Speaker 3 (00:56):
It is very cold here, yeah, and Ottawa is much
colder than Toronto, even. Yeah, this winter has been particularly cold.
We've had quite a number of days at minus thirty degrees.
But I don't know, I like the winter.
There's lots to do. I like snowboarding and skating, cross
country skiing, so those are things you can only do
(01:19):
in the winter. So I enjoy the winter.
Speaker 2 (01:22):
I love skiing too. Actually, I couldn't do it
last year because of COVID, some stations were closed,
but I'm looking forward to doing it this year. Okay,
so I guess that you work from home at least
a good portion of, you know, the time you spend
at Google. But I'd like to know what your day
to day is like, you know, as a research engineer.
Speaker 3 (01:44):
Yeah, it varies quite a lot. It depends. Since
I'm in research, one of the things we do is
publish in conferences, and each conference has its
own deadline for submission, and so depending on how close
it is to one of those deadlines, my day to
(02:05):
day might change. So ICML, which is one of the
big machine learning conferences, recently had a deadline. It was
like two weeks ago, and so for the two or
three weeks before then, I was almost one hundred percent working
on the papers that I was submitting. And originally I
had four that we were going to submit, but we
ended up just submitting two, because two of them we
(02:28):
just felt weren't strong enough yet. So during those times,
it's a lot of reading literature, relevant literature for the paper,
running experiments, fixing bugs, and a lot of paper writing,
so writing the paper itself and making sure it's
telling the right story, that type of thing. When
(02:52):
it's not a conference deadline, I'm involved in a lot
of different projects. In some of them I'm sort of more active
in terms of the coding and development and running experiments,
and those are the ones I tend to
enjoy the most; they require, you know, just
sitting down and writing lots of code and figuring
out how to write the code and how to run experiments.
(03:16):
In other projects I'm more of, I guess, a senior participant,
so it's more providing guidance and direction to more junior
colleagues, and some students as well, in their projects.
I also enjoy that, but I think by nature,
(03:36):
I really enjoy getting my hands dirty and writing code,
although I can't do that for all of the projects.
For some of them, I've taken more of a supervisory role.
And yeah, sometimes it depends, for all the projects, where
we are. For some projects, if we're just starting
them off, a lot of the meetings with colleagues will
(03:59):
be about, you know, brainstorming ideas and how we're going
to go forward, and for other projects that are a bit
more developed, it's more trying to figure out what works
and what doesn't and designing experiments. So that's kind of
the whole gamut of what a research project would look like.
Mm-hm. That's basically my day to day, and
(04:23):
how I split the day really varies from day to day,
and I don't really plan it out ahead
of time. It's more based on, you know, prioritization, whether
we have a conference deadline, or somebody else is
blocked on something I need to do, or something like that.
Speaker 2 (04:42):
And now onto your field of specialization, or subfield.
When it comes to reinforcement learning and machine learning,
you work on everything that has to do with music
and creativity, isn't it?
Speaker 3 (04:59):
Well, that's a different research area for me. My
main research area is reinforcement learning, but it's fundamental
reinforcement learning, not really applied to music. I've started to
look into that a little bit, but it's really at
very initial stages. So most of my research is
(05:19):
just fundamental reinforcement learning.
Speaker 2 (05:22):
Before we get into the details of, you know, what
you are studying in this specific field, I'd like to
ask how you got there. In the sense that, okay,
you may aspire to be a
data scientist, you know, very common these days, guys chasing
the high salaries and things like that, but at the
(05:43):
end of the day, if you are doing research in
something, you surely need to like it. You need to
be interested in it, and you need to be challenged mentally,
right? So that you spend time thinking how you can
improve what others have spent time studying and developing. So I
know you moved to Canada a few years back, and
(06:07):
you did your bachelor's and your Master of Science in
computer science. But how did you get to, you know,
work in research, and why, more specifically, in reinforcement learning?
Speaker 3 (06:22):
So I did my undergrad in computer science, and then
I went to work for a company in Montreal that
built flight simulators. And that was fun. It was a
fun job. I got to travel all over the world,
which was really great. But after three years or so,
I started to get a bit antsy. I just felt
(06:45):
that the projects weren't as challenging anymore as they used
to be at the beginning. And of course I could
have tried to transfer to a different team, but I
was starting to get interested in machine learning and AI.
I was sort of reading things on the side for fun,
and I had done my undergrad honors thesis on neural networks,
(07:07):
so I built a neural network to try to learn
how to improvise in jazz, and it didn't work well at all,
but it was still a really interesting experience, and I
thought the whole field was really fascinating. So I decided
to go back to school to do a master's in
machine learning with some profs at McGill that I
(07:27):
really liked. I didn't really know about
reinforcement learning when I joined, but one of my supervisors,
Doina Precup, her focus was reinforcement learning. I
chose her because I really liked her style of
teaching and I thought she was a really nice person,
and so when I joined her lab, it was natural
(07:50):
for me to focus on reinforcement learning, and so that's
what I did. And I continued after my master's
straight into a PhD, and the whole time it was focused
on reinforcement learning and the theory behind reinforcement learning. So
that's how I got into that field, and I really
liked it, so I stayed researching in it.
Speaker 2 (08:10):
Yes, I mean, neural networks are fascinating for sure, so
I do understand that mixing jazz and neural networks was
very appealing. Even though you didn't get the results that
you expected, I think, well, I'm sure that it was
a very interesting project. I mean, considering the era,
and that it was a few years back, right? Because now
(08:31):
everyone would be looking at this with different lenses in
comparison to, you know, how neural networks were, how famous
or well known they were a few years back.
Speaker 3 (08:43):
Yeah, at that time, neural networks were actually not really
accepted in the mainstream machine learning community. They were considered
sort of fringe, and nobody really paid that much attention
because at the time we couldn't do much with them.
So people like Yoshua Bengio who were working on neural networks,
they weren't taken that seriously, and so it's good for
(09:06):
them that they stuck with it, and they really believed
in it, and it obviously panned out really well for them.
But at the time, yeah, this was not a popular
subject to get into, for sure.
Speaker 2 (09:16):
Okay. If someone listening to this conversation has
never read or studied anything about reinforcement learning, and you had
to explain what it is in its very
essence, as a subset of machine learning, how would you
do it?
Speaker 3 (09:36):
So, reinforcement learning deals with the problem of sequential
decision making. This is basically making a series
of decisions over time. In something like supervised learning,
where you're doing, for instance, image classification, if I
give the model a picture, it has to tell me
if there's a cat or a dog in it. That's
(09:57):
a one-time decision. When I give it
a new picture, it's a different decision. But sequential decision
making is something like what you'd have in a game.
If you're playing chess, for instance, you don't just
make one move. You make one move and then you
make a second move, and your moves are based obviously
on your opponent and the state of the board, but
(10:21):
you have to make these decisions in sequence. And good
players don't make myopic decisions, so they don't just look
at the board and think, okay, what's the best move
I can make right now. They plan ahead and they
try to make strategic moves, trying to anticipate where the
board will end up, so that they place their pieces
in a strategic manner that might not seem that optimal
(10:46):
if you just look at the board position at the moment.
And this long-term planning for sequential decision making,
this is what reinforcement learning tries to do.
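[Editor's note: the long-term planning idea described above can be sketched in code. The following is a minimal, illustrative tabular Q-learning example on a toy corridor world; the environment, hyperparameters, and reward scheme are assumptions for illustration, not anything discussed in the episode.]

```python
import random

N_STATES = 5          # corridor states 0..4; the goal is state 4
ACTIONS = [-1, +1]    # move left or move right

def step(state, action):
    """Move along the corridor; reward 1 only on reaching the goal."""
    nxt = max(0, min(N_STATES - 1, state + action))
    reward = 1.0 if nxt == N_STATES - 1 else 0.0
    return nxt, reward, nxt == N_STATES - 1

def train(episodes=500, alpha=0.5, gamma=0.9, epsilon=0.3, seed=0):
    rng = random.Random(seed)
    q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}
    for _ in range(episodes):
        s = 0
        for _ in range(200):  # cap episode length for the random early phase
            # Epsilon-greedy: mostly exploit the current values, sometimes explore.
            if rng.random() < epsilon:
                a = rng.choice(ACTIONS)
            else:
                a = max(ACTIONS, key=lambda act: q[(s, act)])
            nxt, r, done = step(s, a)
            # Q-learning update: the value of (s, a) moves toward the reward
            # plus the discounted value of the best action in the next state,
            # which is what makes the agent plan beyond the immediate reward.
            best_next = max(q[(nxt, b)] for b in ACTIONS)
            q[(s, a)] += alpha * (r + gamma * best_next - q[(s, a)])
            s = nxt
            if done:
                break
    return q

q = train()
# The greedy policy should move right in every non-goal state, because the
# only reward lies at the right end of the corridor.
policy = {s: max(ACTIONS, key=lambda a: q[(s, a)]) for s in range(N_STATES - 1)}
print(policy)
```

Even in this tiny example, the agent's value for an early state reflects a reward it will only see several moves later, which is the "non-myopic" behavior described with the chess analogy.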
Speaker 2 (10:56):
Very interesting. I think it was very well explained, and
while you were explaining it, I was thinking that, in the past,
we've had guys from DeepMind on this same podcast,
and one of them worked in the field of
reinforcement learning, and we discussed, you know, their advancements. Well,
you probably saw this famous, or well known, documentary on
(11:18):
Netflix about AlphaGo and the game of Go, didn't you? How
does the research area of Google, which I guess is
where you belong, the research area, not field, sorry,
the research team or research business unit, compare to the one
that falls under the umbrella of DeepMind? Are those
(11:40):
research efforts guided or directed by the business,
because they are demanding solutions to specific problems,
or do you have your own research interests,
driven by the researchers themselves?
(12:03):
I mean, how does it work, and how do both of
these units compare, if you can provide us with some insight?
Speaker 3 (12:09):
So the team I'm part of in Google is called
Google Brain, and it's one of the teams within research.
Google has many, many different research teams, and a
lot of the research teams are tied to a product. So,
for instance, you have teams that are working on the
new Pixel camera, and so they do a lot of research,
not necessarily machine learning, but research in optics and things
(12:30):
like that to really improve the quality of the cameras on
your phones. And you'll have research teams that are tied
to Google Translate, so they work on machine translation, but
their research is really focused around that product. In Brain,
and I'm fortunate to be on this team, it's really what
(12:51):
we call curiosity-driven research, so we're not tied to
any product, and we're sort of given the flexibility and
the freedom to explore our own research ideas. And this
is a model that's worked really well throughout history.
AT&T Bell Labs, which was a really famous
research institute from a few decades ago, produced
(13:13):
some of the most impactful research and scientific findings precisely because
the researchers were given sort of the liberty to explore
their own ideas rather than focusing on a product. And
so that's what we do at Brain, where the different
teams have different specialties and we just focus on different things,
(13:35):
but the research is really driven by trying to advance
science as opposed to improving a product. And often the
findings we have do end up being really useful for products.
Some of the models that we've developed at Brain
for language modeling, for instance, were taken up by
the machine translation teams, and they drastically improved the quality
(13:59):
of Google Translate. So that's sort of where I
am now. I'd say it's quite bottom-up
in terms of direction, where the
researchers, we kind of define where we're going, obviously within limits,
so we do have, you know, managers, and we have
(14:21):
mentors and supervisors and things like that that help
guide the research to, you know, maximize its impact and
that type of thing. But it's very much driven
by scientific inquiry as opposed to a business model. In
DeepMind, I don't know exactly how things work there,
because I've never worked there, but I think it's
(14:42):
a very similar type of spirit. So again, they're
really trying to advance the state of the art in
terms of scientific progress, and we work really closely
with them. I mean, initially their main focus was
reinforcement learning, so obviously we have a lot of common interests
with them. At NeurIPS, the last big conference, which
(15:02):
just passed in December, two of my papers were with
DeepMind co-authors, so we do work quite closely
with them.
Speaker 2 (15:10):
Very interesting, great insights on that. So in your day
to day you wouldn't have a meeting with, I don't know,
the product manager of the text translation tool that, you know,
anyone can find on the web. It may be the
case that once you've published something and you've concluded something
(15:31):
that, you know, is very appealing for the business, that
meeting may happen, or it wouldn't happen with you, and
there is a specific link between the research units and the
more operational business.
Speaker 3 (15:44):
Yeah, I don't think that would really happen with me. It
might, but definitely not with, you know, product managers or
anything like that. I may get together with engineers on
that team who are interested in trying things out for
the products, but it would be a very technical conversation,
just, you know, explaining different ideas and maybe providing guidance
(16:08):
on how it could be used for whatever uses
they have in mind. But to productionize something, I wouldn't
be involved with that. That would really be the product
team at that point.
Speaker 2 (16:18):
Okay. So even though your field of expertise isn't
reinforcement learning applied to music, I must admit, and well,
we must admit, that at Bedrock we have a few
guys that are very passionate about music, about creativity. Some
of them are data scientists, but some others are not,
some others just belong to the marketing, brand and
(16:40):
design team. And they left some questions with me,
for me to ask you, obviously, about how it's being used,
I mean machine learning, how it is being used in
the music industry these days. Because the main idea that
may come to mind is, you know, an algorithm trying out
(17:01):
a set of notes, because, you know, it's learned from
how humans perceive a song or a tune, and then
based on that learning, it adjusts so that the end song
or tune is very appealing based on, I
don't know, trends, I mean popular trends, past successful hits
(17:23):
or things like that. But I'd like to know your
view on what's really cutting edge versus mainstream
in the music industry. By cutting edge I mean what
people are trying to do with reinforcement learning in the
music field, and by mainstream maybe what producers are doing, you know,
using algorithms.
Speaker 3 (17:45):
Yeah, I don't know too much about what's
actually being used in products. I know there's more machine
learning being incorporated into music-related products. The things you're
mentioning are more on the generative modeling side, I think,
where models produce notes or music. There is some work there,
(18:07):
but there's a lot more than just that. So,
for instance, there's quite a number of people working
on source separation. So if you have an
audio file, like a band you recorded, say, so
it's a recording, can you separate the different instruments from
that audio file? That's a really hard problem, and I've
(18:28):
seen some impressive demos of people trying to do this
using machine learning, but it's a really, really hard problem
because now you have a single audio stream that contains
multiple sources. Just to give you an example of
different types of machine learning uses in music, there's, you know,
information retrieval, so trying to figure out what are the important
(18:52):
features of a song, and so, you know, things like
Spotify or these streaming services might be interested in
this, because that's how they could generally recommend new songs
that are more interesting to people based on their tastes,
if you can extract certain characteristics of songs
from their audio. So, for instance, I love jazz, and
(19:15):
I really like things in odd time signatures, and that
is really hard to figure out just from the audio file.
But if you can do that, then maybe you can
figure out that, oh, probably I might like this other
artist that I've never heard of. Because I think part
of the problem, at least, maybe not as much now,
is that these recommender systems end up recommending what's on average
(19:39):
more popular, and this doesn't always work. Like, the music
I listen to is not at all what's popular. So
for a while I just didn't like the suggestions at
all, because I didn't really like any of them. They
were too kind of mainstream for me. So I think
there's a lot of work towards personalizing these types of
models more, so that they can yield better suggestions
(20:02):
for each individual as opposed to just for the general population.
In terms of music generation, there is a lot of
work happening. I'm part of Magenta, which is a
subteam of Google Brain where we work on generative models
for music and other types of music-related questions, and
art in general. There is quite a lot happening there.
Some of it is really exciting. One model
(20:25):
that I find really interesting, from some colleagues, is
called DDSP, and essentially it allows you to convert the
timbre of your recording into something else. So, for instance,
you could hum into a microphone and the model could
convert your humming into the sound of a flute or
(20:46):
the sound of a violin, and it sounds really good.
So that just opens up a whole bunch of new
creative possibilities. And that, for me, is really where the
excitement is: what can you do creatively, as opposed to making
something that will just be popular songs.
I mean, I guess some people are interested in that.
For me, it's more what can you do as an
(21:06):
artist, creatively, that's exciting and new. So if you
think of, you know, we had guitarists for a long time,
and then the electric guitar came around, and it was
a new thing, a new technology, and people like Jimi
Hendrix took that technology and really sort of went to
the extreme of what you can do with it and
(21:26):
opened up a whole new field of what was
possible with these instruments. And that's what I'm
really excited about in terms of machine learning and music,
and how it can sort of empower musicians and artists
to make even more music and art.
Speaker 2 (21:43):
And I came up with this question because sometimes, you know,
I was listening to Spotify, with, you know, its own suggestions,
or just the radio, and you're listening to something and
you feel that the tune and how it's played is
of your taste, or it's similar to your taste,
but you don't really know how it was designed to
(22:06):
be of your taste. I don't know, I don't really
know how to explain it, but it's like you feel
it's attractive to you and to the
songs that you usually listen to or that you play,
because you choose them directly in the browser. And you
know that the artists probably came up with it because
they had suggestions that the tune should play similarly to,
(22:30):
I don't know, X, Y and Z different tunes combined.
I don't really know how to explain it, but as
a human listening to, you know, different songs
in different environments, I kind of get this feeling that
we are somehow tricked. I don't know if the word
is tricked, but new songs are somehow accommodated to
our tastes, and producers and artists use technology to do it.
(22:58):
I don't know, this is maybe just a personal feeling
that I have, something that I wanted
to express. I don't know if it's the same for you, you
don't have to say, but it does happen for me. Obviously,
reinforcement learning is a huge field. People don't really
realize, when they say they want to be a data
scientist or they want to work in data science, data
(23:20):
science is huge, right? I mean, just working on natural
language processing may take your whole life if you dedicate yourself
to it. Then you talk about computer vision, then
you talk about many other things, and obviously the different
approaches to these problems. What are the scientific breakthroughs, or
maybe just advancements, things that you are proud of, that
(23:42):
you've been able to work on and participate in
developing throughout the past years? Something where you say, okay,
this is something that I would not have expected to
be accomplished by myself, or probably with the rest of the
team or the co-authors of that research.
Speaker 3 (24:00):
So there's a couple of things that I'm quite proud of.
One of the first projects that I was involved in
when I joined Brain was the development of this reinforcement
learning library called Dopamine. And one of the difficulties with
reinforcement learning and machine learning in general is that there
are multiple reimplementations of the same algorithms, and every time
(24:22):
you reimplement something, something's a little bit different. So this
makes it hard for reproducibility when you want to reproduce
somebody's results and you can't reproduce them because maybe you're
running a different code base and there's something slightly different.
So we wanted to do reinforcement learning research, a specific
type of reinforcement learning research called value based. This is
just the type of the way the algorithms are designed,
(24:44):
and we couldn't quite find a library that sort of
fit all our needs. Some of them were a bit
too complicated to work with, some of them were a
bit too limited, and some of them we just felt
they weren't correct. So we decided to implement our own,
but with a focus for the focusing on the type
of research that we did, so we weren't as interested
in something where you know, you you could product productionize
(25:07):
it and it's really just there for users. To make
use of without kind of tweaking things and trying out
different ideas. The research we were focused on is what
we called throwaway research, where you're trying out new, crazy
ideas and most of the time they don't work. And
so that's why we call it throw away because you
then throw it away. And and by focusing on that,
(25:29):
we that shape the design of the library. And so
we made something that I think is quite easy to
use but also quite reliable, and the community has has
I think received it really well. It's you know, we
have a whole bunch of getting up forks and a
bunch of citations, and I hear new people using it
(25:51):
all the time, so that I'm really happy of because
it was for my research. That's basically the library I
use for all of my research, and all of our
team in Montreal uses that library, and I know lots
of other research teams outside of Google sort of Google
use that library as well. So to be able to
have that type of impact on my own research but
(26:12):
also externally I think was really great. So that's one thing.
Recently we also published a paper at NeurIPS that won
an Outstanding Paper award, and there we were looking
partially at this reproducibility issue that I mentioned, where it's hard
to reproduce others' results, but looking more at how results
(26:34):
are presented in research papers when comparing against other baselines,
and the statistical significance of these results. So we really
took kind of a statistical approach to how you can
compare different algorithms when you don't have that many
data points, or when the data is really noisy. And
people have tried different things before, but sometimes, whether
(26:59):
it was done on purpose or not, the results that
they end up presenting are not quite correct, and
they might present one algorithm as being superior to another
when in fact it's not. Actually, they're just all within
the noise. So this paper proposes new methods for
comparing algorithms in a statistically robust way, and provides a library
(27:23):
that researchers can also use. And again, this is
a library now that we're all using to evaluate our
new ideas, and it's been really well received by the community,
and so that's also exciting to see. So I guess,
to summarize, the things I'm most proud
of are when I'm happy with the results, obviously, but also
(27:45):
when I see the work having a positive impact on the
research community as a whole. Finally, one last thing
is that we released this new environment for reinforcement learning
that's based on a paper we published in Nature a bit
over a year ago, where we were using RL to
(28:06):
fly balloons in the stratosphere. This was a collaboration
with Loon, but Loon has unfortunately closed, so we
built a simulator that we've open sourced, so that
the whole community can try to fly these balloons
in the stratosphere, I mean, obviously not in the real
world now, but at least in simulation. And
(28:28):
we have a blog post coming out soon
that sort of describes all of that, and the environment
is already open, it's already public, but it's quite
brand new, so not that many people are using it yet.
But that's something that I hope will also be well
received by the community, and I hope more and more
people evaluate their agents on it, because I think
it's an environment built to simulate something that's real,
(28:52):
you know, real winds and real physical considerations.
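[Editor's note: the statistically robust comparison described above can be illustrated with a small, self-contained sketch. This is not the library from the paper; the per-run scores and the plain percentile bootstrap used here are illustrative assumptions.]

```python
import random
import statistics

def bootstrap_ci(scores, n_resamples=5000, alpha=0.05, seed=0):
    """Percentile bootstrap confidence interval for the mean of a small sample."""
    rng = random.Random(seed)
    means = []
    for _ in range(n_resamples):
        # Resample the runs with replacement and record the resampled mean.
        resample = [rng.choice(scores) for _ in scores]
        means.append(statistics.mean(resample))
    means.sort()
    lo = means[int(alpha / 2 * n_resamples)]
    hi = means[int((1 - alpha / 2) * n_resamples) - 1]
    return lo, hi

# Five runs per algorithm: few data points and noticeable noise, as in the
# situation described in the conversation. These numbers are made up.
algo_a = [0.62, 0.71, 0.58, 0.65, 0.69]
algo_b = [0.60, 0.74, 0.55, 0.68, 0.63]

ci_a = bootstrap_ci(algo_a)
ci_b = bootstrap_ci(algo_b)
print("A:", ci_a, "B:", ci_b)
```

When the two intervals overlap, as they do here, claiming "A beats B" from the raw means alone would be a conclusion within the noise, which is exactly the kind of error the paper's methods are designed to expose.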
Speaker 2 (28:56):
Very interesting. And I think if you are the creator
of something and you've dedicated a significant amount of time,
for years, to something, after you've, you know, felt
that you've delivered it, seeing that others make efficient or
(29:16):
useful use of it is kind of a rewarding feeling.
I've always thought, you know, we use Python, and we
use a few Python libraries, and we always
think about who started developing them, right? I mean, how
they would feel, and how hard it was to think,
not out of the blue, but, you know, from a
(29:39):
blank canvas, how to put that together. I think there
is a great effort that goes from research until it's operational,
and I do not think that you're as rewarded as
you probably deserve to be. Meaning, I don't think
that many people know that Google, both with Brain and DeepMind,
(30:02):
dedicates a huge amount of effort, when it comes, you know,
to resources, people and obviously money, to advancing, you know,
the current state of whatever field. And I think that
it's very necessary to progress. Obviously technology, when it comes
to microchips and, you know, computing power,
(30:25):
obviously needs to advance. But for everything in regards to
programming, whether it's data science or whatever it is,
there's a huge effort that needs to be made. So
I'm happy to see that large companies around the
world like Google are really pushing for this. Obviously, I've
spoken with people from DeepMind and I
(30:48):
knew this, but I wanted to highlight that we at
Bedrock, who work in this field, really
value that someone like you is pushing these boundaries,
even though you may not realize it, but you
do it in your day to day. We do value that, Pablo.
Before we close this call off, I'd like to ask
(31:09):
you a couple more questions. These are much simpler
than the ones before, Pablo. One, I'd like you to
recommend someone that you'd like us to have as a
guest on this Data Stand-Up podcast from Bedrock.
And the last one would be about a small read,
(31:31):
or a book or a newsletter or something that you
are subscribed to, that you'd like to recommend. It doesn't
have to be reinforcement learning related, data related, or AI related.
It's just something that, you know, is appealing to you,
and probably a listener to this conversation may find interesting too.
Speaker 3 (31:49):
For speakers, it really depends, you know. There are lots
of people I can recommend, but it depends what type
of story I guess you want to hear. Like, is
it more scientific, or more about building research communities, or
more about the creativity side, or more about the
(32:10):
business side. I'm not sure what fits today.
Speaker 2 (32:14):
I think we've spent some time on the not highly
scientific stuff, but we've covered how important research is.
Someone from the business side, who links both worlds, would
be very interesting, you know, nice to have here
as a follow-up conversation to the one we are
having now. But we're very open minded. I mean, we've
(32:37):
had people from many industries, many fields, and, you know,
many different points of view or focuses. So yes, fire
away and then we can decide.
Speaker 3 (32:48):
So somebody, or a company, that comes to mind is Cohere, C-O-H-E-R-E.
It's an AI startup in Toronto. One of the founders, Frosst,
is a friend of mine. He used to be
at Google Brain, and he's a really nice guy. He's also
a musician; he has a band that does really well,
(33:10):
and he could be kind of interesting because he has
both the research and the business perspective, so that
could be an interesting person to have here.
Speaker 2 (33:21):
So that's C-O-H-E…
Speaker 3 (33:23):
R-E.
Speaker 2 (33:24):
And what's his name? Nick Frosst?
Speaker 3 (33:27):
Okay?
Speaker 2 (33:28):
Got him. Okay.
Speaker 3 (33:30):
With regards to what to read or what to follow,
I actually get a lot of value on Twitter.
Lots of people post their research
papers there, and there are research discussions. I mean, like any
(33:51):
social network, there are also kind of annoying things that sometimes
happen there, but I get a lot of value as
a researcher there. If you want to learn, I think,
about machine learning in general, Kevin Murphy, who's a
colleague of mine and a really excellent researcher,
(34:12):
recently released a book called Probabilistic Machine Learning: An Introduction.
I haven't actually read the book, but I know his
previous book was very, very popular, so I imagine this
one's excellent, so that could be a useful resource
for people. And what I do listen to regularly are
podcasts, while I run, and some of my favorites
(34:38):
are Radiolab, I'm a huge
fan of Radiolab; Broken Record I also like, when
I know the artist; Song Exploder as well, I really like.
There are also a few in Spanish, and yeah,
(34:59):
I think those are the ones I can give you.
Speaker 2 (35:01):
Yeah, you're giving us quite a few.
Speaker 3 (35:03):
I'm a runner, too.
Speaker 2 (35:05):
There is this problem, right? I really like to run,
because it helps, it helps me clear
my mind. And then I have a set of podcasts
that I have in my list, you know, on Spotify,
I use Spotify, and I realized that the ones that I
like aren't very, I would say, practical for running. Meaning,
(35:26):
I like the content that Lex Fridman publishes, but his
way of speaking… I don't know if at some point
he'll listen to this.
Speaker 3 (35:40):
I haven't heard a full interview. I've heard snippets of things.
Speaker 2 (35:43):
Yeah, but he speaks very slowly. So for me, it's hard,
you know, to keep up with the conversation, regardless of it
being very interesting, you know, while I'm running. So I
need to find some guests or interviewers or participants that
are quick or, you know, energizing while doing their podcast,
(36:05):
because if not, it would be hard for me to
run at the same time and keep my motivation levels up.
But that's me. I'll pay attention to the
ones that you've shared, and I'll give them a try.
Speaker 3 (36:17):
Perfect.
Speaker 2 (36:18):
Awesome. Okay, Pablo, I really appreciate the time you've spent
with us, with me, this morning, this evening here.
I have to say, you're the
first guest that speaks Spanish that we've had a chat
with in English, I have to say, but I guess
(36:40):
this is what it is. And again, we really appreciate
that you've chosen to spend the time with us,
and I've learned lots from you. I hope that you've
enjoyed this conversation too.
Speaker 3 (36:52):
Yes, definitely thanks for having me on.
Speaker 2 (36:54):
Awesome, Pablo. So I hope you have a nice day.
Take care. Bye, Pablo.