Episode Transcript
Available transcripts are automatically generated. Complete accuracy is not guaranteed.
Speaker 1 (00:09):
Welcome to Elixir Mix, your Elixir podcast talking with members
of the community. My name is Mark Ericksen, and today
we're joined by Sophie DeBenedetto. Hey guys, and our
guest Alvise Susmel.
Speaker 2 (00:21):
Hello, thanks for having me.
Speaker 3 (00:23):
I'm glad you could come on.
Speaker 1 (00:25):
So this is an exciting topic for me, because it's
something I find fascinating. I think whenever I hear about this,
I think, wow, that's super advanced and futuristic, and I
have no idea how to do it myself. And so
we've invited Alvise on to help talk about how to
do this. So what we're talking about is
using image recognition with Elixir.
Speaker 3 (00:46):
And really cool stuff like that.
Speaker 1 (00:48):
So first of all, Alvisa, maybe you could tell us
like where you work and a little bit about yourself,
what kind of problems you're solving.
Speaker 4 (00:57):
Yes, sure. In the last five, six years I've been
a CTO at a London hedge fund, and I'm still
working for them as a consultant. At the moment
I'm consulting and running poeticoding.com, which is an
educational site which at the moment is mainly focused
(01:17):
on Elixir and Phoenix, and I'm working to make some
free and paid Elixir and Phoenix courses. My current clients
are from finance, so the problems I'm facing are similar
to what I've faced so far at the hedge fund.
So to describe what kind of problems I had
(01:40):
to face at the hedge fund: we've designed three different architectures,
mainly to process data in real time. So the first was
to process data for real-time trades coming from different
exchanges for hundreds of different products. Products are financial products,
(02:02):
different assets, and the goal of this platform is to
use the data to run our trading models. And in
this case we have a lot of Python
code, because all the data science frameworks there are really
great data science frameworks.
Speaker 4 (02:22):
Another platform is to process financial tweets in
real time. So we get a huge amount of tweets
in real time, we run our machine learning models to
cluster these tweets to understand which company they refer to,
and then the system generates a minute-by-minute sentiment
(02:45):
for each company, which is useful in different trading situations.
And we also have another platform which is an internal
platform for research. It's like a set of research tools
we use to exchange information about research. So what
they have in common is the need to process
(03:06):
real-time events, let's say, and especially to self-heal,
because we need to stay up. Obviously it's
not one hundred percent, but we need to
be reactive to any kind of anomaly, and to
Speaker 2 (03:23):
process trades as fast as we can.
Speaker 3 (03:29):
Nice.
Speaker 1 (03:29):
So you mentioned in there the need to leverage
other libraries or languages outside of, you know, core Elixir,
and it seems like in today's development environment that seems
to be increasingly the case, because you have, like you mentioned,
data scientists who are using their own set of tools,
and they already have these libraries that are being built
(03:50):
up by a whole community of people, and a
Speaker 3 (03:53):
lot of those tend to be in Python.
Speaker 1 (03:54):
And historically, just as web developers, I think, you know,
you're accustomed to having to work with multiple languages, because
you have a front end which might be written in
JavaScript and a back end which is written in something else.
Some people do, you know, take Node as a
back end, but you know, we have Elixir, or
you've done Ruby on Rails or Java or something else,
(04:15):
and then you have mobile on the front end. You know,
you're dealing with mobile frameworks and mobile languages
and different platforms. So it's like we are kind of
in this area and this environment where we have to
deal with multiple types of tools, multiple communities. And so
one of the things I think is interesting is this
whole idea about how Python has a lot of these
(04:37):
libraries for doing image recognition, or, be it Python or not,
I might be using GPUs to do accelerated computing. And
what I loved about your article is that you're
totally acknowledging that, right, that there is stuff that
(04:59):
we need to use that's not in Elixir. So we
want to basically try to leverage the strength
of each language and each platform to do
what they're best at. So maybe you can tell us
a little bit about this blog post that you wrote
and kind of give us an introduction to this topic
of what you're trying to do.
Speaker 4 (05:19):
Yes, sure. So the blog post is about real-time
object detection, and the idea of using real-time
object detection in Phoenix, well, in Elixir, and
then using Phoenix to render the result.
Speaker 2 (05:40):
So obviously the idea
Speaker 4 (05:43):
of the blog post is not to reinvent
the wheel; it's actually to use something that is already there. So,
as you said, there are great frameworks in Python, and
there are also high-level libraries like
cvlib, which uses OpenCV under the hood, and
(06:09):
it's super easy to use. You can just
pass an image to a function, and
this function returns a list of detected objects and
the coordinates of these objects.
Speaker 2 (06:25):
So in the article, I actually go through the
Speaker 4 (06:30):
idea of interfacing and trying to take advantage of this
library inside Elixir, and I went through different ways
of communicating with Python. Well, the simplest one is
to just run the Python script. The
(06:54):
problem with this is that, for example, in this
specific case, we have a neural network,
well actually a convolutional neural network, and to
load this model takes around two, three seconds on
(07:16):
a MacBook Pro, and obviously we can't run this
script for each single frame or each single image
we want. So we need a long-running process.
That means we keep the model in memory,
(07:36):
we keep the Python process in memory, and we
control it via something called a port. A
port, in Elixir and in Erlang, is a
mechanism to launch an operating system process which is external
to the BEAM. And this brings a lot of advantages. Well, first,
(08:00):
if the process crashes, it doesn't bring down the BEAM,
so it's something external.
Speaker 2 (08:05):
And we can communicate
Speaker 4 (08:08):
with this process via message passing and standard I/O.
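The mechanism Alvise describes can be sketched in a few lines of Elixir. This is a minimal, hypothetical sketch: `cat` stands in for the long-running Python worker so the example is self-contained, and the `{:packet, 4}` length-prefixed framing is an assumption, not a detail confirmed in the episode.

```elixir
# Spawn a long-running external process as a port. `cat` echoes its
# input back, standing in for a Python detection script; {:packet, 4}
# prefixes every message with a 4-byte length, so each frame arrives
# in the mailbox as one whole message.
exe = System.find_executable("cat")
port = Port.open({:spawn_executable, exe}, [:binary, {:packet, 4}])

# Send a "frame" (any binary) to the external process via stdin.
true = Port.command(port, "frame bytes")

# Replies from the external process arrive as ordinary messages
# in the owner process's mailbox.
result =
  receive do
    {^port, {:data, data}} -> data
  after
    2_000 -> :timeout
  end

IO.inspect(result, label: "echoed")
```

If the external program dies, the port closes without taking the whole BEAM down, which is the property discussed here.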
Speaker 2 (08:13):
And what I do through the article is
Speaker 4 (08:18):
to use ports to build
this communication with this Python
framework, and be able to link the camera
from the web browser, take the frames from the web browser,
(08:38):
send these frames across a WebSocket to a
Phoenix channel, and then Elixir sends these frames to
the Python side, which actually runs the real detection.
And yeah, the thing is, it is so easy
(09:00):
to use this approach to
take advantage of machine learning. And since Python is so
Speaker 2 (09:12):
far ahead on the machine learning side.
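The browser-to-Python pipeline described here might look roughly like the following Phoenix channel. All names are hypothetical, not the article's code, and `Detector.detect/1` stands in for whichever process owns the Python port:

```elixir
defmodule MyAppWeb.FrameChannel do
  use Phoenix.Channel

  def join("detection:lobby", _params, socket), do: {:ok, socket}

  # The browser captures a camera frame, base64-encodes it, and
  # pushes it over the WebSocket as a "frame" event.
  def handle_in("frame", %{"data" => b64}, socket) do
    frame = Base.decode64!(b64)

    # Placeholder for the process that owns the Python port and
    # returns the list of detected objects with their coordinates.
    objects = Detector.detect(frame)

    # Send the detections back so the browser can draw bounding boxes.
    push(socket, "detections", %{objects: objects})
    {:noreply, socket}
  end
end
```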
Speaker 4 (09:14):
Actually, one of the reasons Python has a huge adoption
rate is all these frameworks: in the last ten years
the adoption rate increased a lot because of these frameworks,
not because of the language itself. And yeah, to me,
(09:35):
I think Elixir would be great. I actually would prefer
to control a machine learning library with Elixir rather than
Python, with pipes and structs, like generating a struct
which is like a job.
Speaker 2 (09:55):
I want to run a machine learning job, and then
Speaker 4 (10:00):
send this job to a Python
process with the port, or with other
solutions like
Speaker 2 (10:10):
there are other ways, like EPMD.
Speaker 4 (10:17):
So, as I said, there is a library called, if I remember
correctly, Pyrlang, and it's possible
to create a Python node. Just to say,
there are other ways we can interface with the
Python process.
Speaker 1 (10:37):
So, if I can, let me just kind
of clarify the way I understand ports to work.
Speaker 3 (10:42):
UH.
Speaker 1 (10:43):
I have used ports before, and in my case,
you know, we're migrating a Ruby on Rails monolith
to Elixir, and so there are some things that Ruby still
Speaker 3 (10:55):
did best because of libraries or whatever.
Speaker 5 (10:58):
You know.
Speaker 1 (10:58):
I don't want to reimplement an entire library
just so I can have this little piece of functionality.
So with ports, as I understand it, what I'm
able to do is I can start up the
OS-level process that is running Ruby in my
case or Python in your case. And so you're describing
this two-to-three-second load
(11:20):
time just to load everything that it needs into
memory for doing its processing. And
so once that load is done, then it's basically
messaging through, I think, even just standard
out, kind of directly talking through the
OS to the other process, not even through like an
(11:41):
HTTP kind of port, you know, web ports or anything like that.
It's not a web server; it's
just talking directly to it. So that has a cost,
I guess, in terms of serialization: how do I
actually communicate, and how do I serialize my data to
send it across to this other process. But that's
one of the benefits: it can now
(12:03):
be supervised by the BEAM, right? So if something goes
wrong in my library and it crashes, Elixir can respond
to that, right?
Speaker 3 (12:14):
Is that something you've been able to see as well?
Speaker 4 (12:17):
Yes, absolutely, this is exactly one of the advantages of
ports. Especially if we wrap a GenServer
around the port, it becomes super easy
to supervise it.
Speaker 2 (12:36):
So yes, in the article, what I do
Speaker 4 (12:39):
is to wrap this port with a GenServer,
and it's possible to start
monitoring this port. So when the process crashes, we just
receive a message in the owner process, which is an
exit message. And so in this case we can deal with it
(13:03):
inside the GenServer, and we can just
rerun the port without letting the supervisor do that, without
having to delegate this to the supervisor. Or we can
just decide to not handle the exit message,
and what happens is that the GenServer process
(13:28):
just crashes, and if it's supervised, it's restarted again.
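Wrapping the port in a GenServer along these lines might look like the sketch below. It is hypothetical, not the article's code; `cat` stands in for the Python script so the example is self-contained. Trapping exits turns the port's death into a `handle_info` message, so the GenServer can reopen the port itself instead of crashing:

```elixir
defmodule Detector do
  use GenServer

  def start_link(opts \\ []), do: GenServer.start_link(__MODULE__, opts, name: __MODULE__)
  def detect(frame), do: GenServer.call(__MODULE__, {:detect, frame})

  @impl true
  def init(_opts) do
    # Trap exits so a dying port delivers {:EXIT, port, reason}
    # instead of taking this GenServer down with it.
    Process.flag(:trap_exit, true)
    {:ok, %{port: open_port()}}
  end

  @impl true
  def handle_call({:detect, frame}, _from, %{port: port} = state) do
    Port.command(port, frame)

    receive do
      {^port, {:data, result}} -> {:reply, result, state}
    after
      5_000 -> {:reply, {:error, :timeout}, state}
    end
  end

  @impl true
  def handle_info({:EXIT, port, _reason}, %{port: port} = state) do
    # The external process died: reopen the port ourselves rather
    # than delegating the restart to the supervisor.
    {:noreply, %{state | port: open_port()}}
  end

  def handle_info(_msg, state), do: {:noreply, state}

  defp open_port do
    # `cat` echoes input back, standing in for the Python worker.
    Port.open({:spawn_executable, System.find_executable("cat")},
      [:binary, {:packet, 4}])
  end
end
```

Deleting the `{:EXIT, ...}` clause is one way to get the other behavior Alvise describes, delegating recovery to the supervisor instead.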
Speaker 2 (13:33):
Another advantage
Speaker 4 (13:38):
of wrapping the port with a
GenServer is that only the owner, so
only the process that starts the port, can communicate with
the port. It can send a message, which is propagated
via standard input to the Python process,
(14:01):
and the same goes for receiving messages from the Python process:
only the owner can do this messaging. So if
we want to use this port
and send messages to this port from many processes, we
(14:22):
need a GenServer that is the owner and takes
ownership of this port. So I think in most
cases it's better to use a GenServer, I mean,
especially in this case. And also another advantage is that,
like in this case, if we want to scale
horizontally, if a port is
(14:45):
handled via a GenServer, it's easy to spawn
a pool of distributed
Speaker 2 (14:56):
GenServer workers.
Speaker 6 (14:59):
Yeah, I just want to point out how kind of
incredible that is, and how Elixir's flexibility around interop,
in this case with Python, is so powerful, because what
you're doing is you're leveraging the concurrency and the
ability to manage processes that Elixir gives us, and still
getting that powerful bit of Python functionality that is
(15:19):
best suited to do that image detection. And that's why
I kind of think it's fun to say, oh,
Elixir is kind of good at everything, because in a way
it is, because it's good at using the things that
are better at the thing it can't do. And that's
really awesome.
Speaker 2 (15:35):
Yeah, it's a fantastic glue in these cases. Well,
first, I
Speaker 4 (15:42):
think that the language itself is great, so most of
the time I prefer to handle these kinds of things
and to glue things with Elixir. And at the beginning,
I thought that this port mechanism could have huge limits.
At the beginning, I thought, okay, because of
(16:03):
the experiment I did. Well, one of the experiments I
did was to send the raw frames
at thirty frames per second. Each frame is around
three megabytes, so I was sending around one hundred
megabytes per second, and I thought, okay, maybe I
(16:29):
could have huge delays, because there is messaging, there is I/O.
But actually, for each single message that was sent, the
round-trip time I measured is like half a millisecond for each
single frame, which is, I mean, nothing, especially if we
(16:52):
want to run real-time object detection at thirty
frames per second. I mean, I was really impressed by
how fast this mechanism is, and yeah, how easy it is
to take this approach and scale horizontally, especially
(17:17):
when the port is embedded in a GenServer. Then we can
use all the advantages of Elixir, all the libraries
like Poolboy or other libraries, and supervision,
and, I mean, we get almost everything for free.
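The Poolboy setup he alludes to might look roughly like this supervision snippet. Names and sizes are hypothetical, it assumes the `:poolboy` package is a dependency, and `Detector` stands in for a GenServer that wraps one port per worker:

```elixir
# Hypothetical pool of port-owning workers via Poolboy.
pool_spec =
  :poolboy.child_spec(:detectors,
    name: {:local, :detectors},
    worker_module: Detector,
    # four Python processes kept warm, up to two extra under load
    size: 4,
    max_overflow: 2
  )

{:ok, _sup} = Supervisor.start_link([pool_spec], strategy: :one_for_one)

# Each detection checks a worker out of the pool and back in.
frame = "raw frame bytes"

result =
  :poolboy.transaction(:detectors, fn pid ->
    GenServer.call(pid, {:detect, frame})
  end)
```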
Speaker 3 (17:36):
Yeah.
Speaker 1 (17:36):
I love how Sophie kind of described that too, because
I do think that is a strength of Elixir and
the BEAM in particular. It's just to say that it
is a great interoperability, messaging kind of platform, so that
it's like, it is the way I can manage to say,
these are the things I want done, and then I
(17:57):
can push that out in a distributed way
to the workers, where this one might be Rust
and it's doing something, and this one might be Python
and it's doing something, and then bring all those back
and be able to aggregate and report and do everything
like that. So it really kind of comes back to,
it's like, wow, you know, it's kind of like this
telephone communication kind of system, and it's like, you know,
(18:20):
it's like, wow, who'd have thunk?
Speaker 3 (18:22):
So it's kind of fun.
Speaker 1 (18:23):
But what I love is, in my case, except
for a couple of examples like I gave with Ruby,
where I was needing to go to something because of
a library, a gem that I needed support
for, otherwise, I don't have any
personal needs where I need to go out to those
other things. I don't have performance requirements where I say
(18:46):
I need to go to Rust.
Speaker 3 (18:47):
I don't.
Speaker 1 (18:48):
I'm not doing anything right now with machine learning.
We'd like to at some point in the future, but
you know, at some point that's where I will go
and I will be doing interoperability, talking with Python.
So I love that I can do it, but I
don't have to do it. Yeah, Elixir still does everything
I need right where I am right now.
Speaker 5 (19:06):
Yeah.
Speaker 6 (19:06):
I think it's also kind of like a
useful case to make, or argument to have in your
back pocket, when you're dealing with adoption. You know, if
you're trying to convince your team or colleagues to use
Elixir for a particular project, and you know, people are
looking down the line and they're thinking, well, what about
when we need something that Python is better suited for,
(19:29):
or what about when we need to solve this problem
that, you know, typically folks may reach for Rust for,
you can kind of trot this out and say, like,
we can use Elixir now so that we can move fast,
so that we can leverage a nice design, so
that we can take advantage of all the fantastic Elixir
things around concurrency and fault tolerance. And when we need
something else, when we need a tool that's better suited for it,
Speaker 5 (19:49):
we haven't backed ourselves into a corner.
Speaker 6 (19:50):
We're not in a place where we need to, you know,
spin up a separate service in another language and framework
and then figure out what this communication mechanism would look like.
Speaker 5 (20:00):
I think one of the things that you mention in
your article, Alvise, is
Speaker 6 (20:05):
that, you know, you could have reached for HTTP communication,
you know, built some API endpoints where you're going
to communicate between your Elixir/Phoenix app and, you know,
this Python application.
Speaker 5 (20:18):
But we don't have to. We don't need that.
Speaker 6 (20:20):
overhead; we don't need that complexity. Ports are one way
among some others where Elixir just makes it super
easy to enact that kind of communication.
Speaker 4 (20:29):
Yes. Compared to, for example, HTTP, which is obviously
the first thing that came to my mind
when I wanted to do something like this, the
problem is supervision, especially on embedded devices. I mean,
if I want to run this thing, a real-time
object detection system that does something if
(20:52):
I find some objects in the pictures, in the frames
coming from the camera, and if I run,
for example, a Python worker that has Flask, like an
HTTP server, I mean, usually, most of the time,
especially on an embedded device, I want to supervise this
(21:14):
process or this worker, and if I'm able to supervise
it via Elixir, I'm much happier, especially because this
means I know what's happening
inside Elixir. So inside Elixir, I know if my
(21:36):
workers are fine or not, and I don't have to
install some other services like, well, I don't know, Kubernetes.
Maybe not on an embedded device, but it's something that
I could run in a
different case if I want to scale easily and
supervise different services.
Speaker 2 (21:59):
So ports make this much easier.
Speaker 4 (22:06):
And another solution to communicate, as Mark said,
for example with Rust,
would be NIFs. So I went through also
that path. And so YOLO, which is
Speaker 2 (22:24):
an acronym for "you only look once," which is
the algorithm behind the real-time object detection. This
is a state-of-the-art object detection system, and with
an Nvidia GPU, which is not cheap but around five
(22:45):
hundred dollars, you can detect objects at
thirty frames per second.
Speaker 4 (22:52):
Uh.
Speaker 2 (22:53):
And the thing is,
Speaker 4 (22:56):
the creator of this algorithm did
C and CUDA versions. So I said, okay,
maybe I could do a NIF. Well, first, it's far
easier to use the port with a Python
high-level library than to create a NIF in
(23:17):
C and try to create a binding
to this library. But the
main thing, what happened, is that the first time
I compiled, there was an error in my C NIF,
and obviously it crashed. The problem is that everything crashed,
(23:41):
the BEAM crashed. And so, yeah, there are ways to
deal with it. Well, the first is to use Rust.
Speaker 2 (23:49):
The thing is, I didn't find anything.
Speaker 4 (23:53):
I mean, there were some libraries, some bindings to OpenCV
with Rust, but things got quite complicated
quite easily, so I said, okay, ports are far better
for me.
Speaker 2 (24:08):
I mean, I tried many things.
Speaker 4 (24:11):
But if I would go back, I would
just use ports in this case, because
the detection time is still around twenty milliseconds,
which is a reasonable time to wait to delegate a
job to a Python process. Obviously, if I just want
(24:34):
to do a sum of two floats, a port
isn't the best way, because just the
messaging costs time, and in that case
maybe it's better to use a NIF. But
(24:56):
for things that take longer than a few milliseconds,
ports are easier and safer,
safer than C NIFs. And
also, we were able to just use a
(25:19):
high-level library rather than something
Speaker 2 (25:24):
with a lower level.
Speaker 3 (25:25):
I think it's funny the library is called YOLO.
Speaker 1 (25:30):
You know, I thought it was like "you only live once," but
Speaker 3 (25:35):
that can't be it. So that's really cool. It's
"you only look once."
Speaker 5 (25:38):
Uh.
Speaker 1 (25:39):
But what I love about your article, and I just
want to encourage you, dear listener, to check out the
article, because if this is an interesting topic
to you at all, doing image detection, and you're
just kind of like me, like I'm new to this,
this is an awesome article, because Alvise has put a
ton of effort into this.
Speaker 3 (25:57):
It is detailed, it has you know.
Speaker 1 (26:00):
One of the things I just love watching in this
is the real-time-ish kind of updates as
it's classifying, you know, video, and it's like he holds
up a remote, you know, a water bottle, and it's like, oh,
and it's drawing boxes around things, and as he gets
up and walks around, how it's following, you know,
classifying a person.
Speaker 3 (26:18):
It's like, that's really cool. So I was curious as to.
Speaker 1 (26:21):
What kind of you've mentioned, like the the idea of
doing something embedded. Is this a personal project something you're
kind of playing with and exploring, Like what kind of
is driving this?
Speaker 2 (26:30):
Yes, so there are two goals.
Speaker 4 (26:32):
Yes, it's a personal project, but it's actually driven
by business needs. So I worked with Python
most of the time in the last five, six years,
and well, in the last three years, Elixir, and
(26:54):
I always needed to deal with this fight.
Well, at the beginning, I thought it was a fight:
oh, but there is this in Python, I would like
to have it in Elixir, I really don't like this.
It's not that I don't like Python, but
I prefer to write Elixir.
Speaker 2 (27:12):
Uh.
Speaker 4 (27:13):
So I started actually to find ways to bring over different
kinds of problems, more like historical, a list
of historical prices,
Speaker 2 (27:30):
volumes of trades. So,
Speaker 4 (27:34):
what I wanted to find is like a generic
way to deal with this kind of interaction between
Elixir and Python. So the best way
was to find a hard problem, I mean, hard in
the sense of processing, something like, I mean, this
(27:55):
model is one of
Speaker 2 (27:57):
the toughest on the processing side.
Speaker 4 (28:01):
And so, to run YOLO
Speaker 2 (28:09):
at thirty frames per second,
Speaker 4 (28:11):
we need to be faster, not only on
the processing side of YOLO. I mean, obviously, especially if
we take this function from the high-level library,
there isn't anything we can do there, but
what we can do is work on the boundaries.
Speaker 2 (28:29):
So I said, okay, this is a great problem.
Speaker 4 (28:32):
also to find a generic way to handle
this interaction with Python, and also to deal with other
things I had in mind, like classification of a series
of prices. So this is actually what has driven it; it's
(28:54):
a mix of personal and business. And then I got
hooked by the problem itself, and I started to say, oh,
but there is this Jetson Nano, which is an Nvidia
device, and the device is ninety-nine dollars
and is an embedded device,
Speaker 2 (29:13):
which is powered by,
Speaker 4 (29:18):
the power supply is, if I remember correctly, ten watts.
And the cool thing is that it has an
Nvidia GPU, so we can run TensorFlow GPU or the
CUDA version of the YOLO
convolutional neural network, which means it runs much,
(29:43):
much faster than on a CPU.
Speaker 2 (29:48):
So I said, okay, that's great. So now that I have
Speaker 4 (29:51):
a GenServer that runs this detection along
with the Python process, what if I buy a few
of these devices and I link
them via distributed Elixir? So this means
(30:12):
each single device runs a worker. So each single
device is a node which runs a worker, and
one of these is a master that has a camera
and actually uses the other devices to spread the
load of this classification, of this detection, across all
(30:38):
the devices. And this is what I did. Actually,
I bought two devices, because I didn't want to
spend too many
Speaker 2 (30:46):
dollars on this. But what I saw,
Speaker 4 (30:49):
actually, I linked also my computer to the cluster,
Speaker 2 (30:53):
and it scaled horizontally super easily.
Speaker 4 (31:01):
I just needed to do something like a management module
which does something like a round robin,
so it knows which node and which worker
is busy and which one is free.
Speaker 2 (31:19):
So as soon as it receives,
Speaker 4 (31:21):
the frame, it's like a broker, so it receives
the frame and knows where to assign it.
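The broker he describes can be sketched as a tiny round-robin module. This is hypothetical, not the article's code, and the busy/free bookkeeping he mentions is left out: workers live in a queue, each frame goes to the worker at the front, which then rotates to the back.

```elixir
defmodule FrameBroker do
  # Hypothetical round-robin assignment: workers (e.g. node names)
  # live in an Erlang queue; assign/1 hands out the front worker
  # and rotates it to the back for the next frame.
  def new(workers), do: :queue.from_list(workers)

  def assign(queue) do
    {{:value, worker}, rest} = :queue.out(queue)
    {worker, :queue.in(worker, rest)}
  end
end

# Usage: rotate through two hypothetical worker nodes.
q = FrameBroker.new([:nano1@lan, :nano2@lan])
{w1, q} = FrameBroker.assign(q)
{w2, q} = FrameBroker.assign(q)
{w3, _q} = FrameBroker.assign(q)
IO.inspect({w1, w2, w3})
```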
Speaker 3 (31:29):
Uh.
Speaker 4 (31:30):
And it was super easy. The thing is, with
Elixir it was super easy. And I said, okay,
let's see if I want to do it in Python;
it's absolutely not that easy.
Speaker 2 (31:42):
It's not.
Speaker 4 (31:43):
There are so many free things in Elixir when it
comes to scaling horizontally in a distributed manner that it's
Speaker 2 (31:56):
amazing. That's cool.
Speaker 1 (31:59):
So yeah, you have a link to this, and I
included a link in the show notes to this
Nvidia Jetson Nano device, which looks like it
retails for about one hundred dollars US, which I
think is super cool: that you can use CUDA cores
on this little super embedded thing and have a little
cluster of them and have workers and farm out, you
know, work. Which is like, you think, okay,
(32:20):
well, if I was going to actually do this, you know,
that is the kind of device that I could set up
and have, like, in the office area, that is processing
the sensor data and the images that are being captured
and doing all of my processing. So very cool. I
had not seen anything like that. Didn't know that was available.
Speaker 3 (32:40):
You know.
Speaker 1 (32:41):
It's like Raspberry Pi is the thing that everyone kind
of talks about, because it is the most approachable place
to start. But yeah, when you're talking about machine learning,
you're wanting to go with, like, CUDA cores and use the
libraries that are already there.
Speaker 3 (32:53):
So very cool.
Speaker 1 (32:54):
Well, we are coming about up to our time.
Is there anything else you want to mention before we
transition? One other thing we haven't touched on is
just that you had mentioned in your articles
the idea of also using Scenic. Oh yeah, okay, so
Speaker 3 (33:09):
maybe you could talk about that for a moment.
Speaker 2 (33:12):
Yes.
Speaker 4 (33:13):
The first thing I tried is to use
Phoenix to
Speaker 2 (33:22):
render, especially to render the results.
Speaker 4 (33:25):
Because I needed to get frames as easily
as I could from the camera, I said, okay,
let's try it with Phoenix, especially because I can use
it also to render the frames and the result
with SVG. Obviously, this is not the best way,
(33:46):
it's the easiest way. It was the easiest way. It's
not the best way, because
frames are sent around from the browser,
and the browser sends the
frames base64 encoded, and we have to decode them
Speaker 2 (34:09):
and send them to the Python workers.
Speaker 4 (34:10):
So what I did is to say, okay,
I saw that the throughput I could get with ports
is great, and so I could actually use Python and
OpenCV to get the frames, to get the frames
directly from the camera, which especially is great if we
(34:31):
use it on an embedded device. And, well,
the first advantage is that OpenCV
do well, the first advantage is that open CV.
Speaker 2 (34:43):
works well to get the frame. We have the frame
in memory, and
Speaker 4 (34:47):
the representation is a NumPy matrix. NumPy
is a library in Python to deal with
matrices. The cool thing is that we
have this image in memory, and
it's a raw representation of the image, so
(35:10):
we don't need to send the image to other processes.
We just need to run the
YOLO object detection. And so at the beginning,
I said, okay, maybe there will be delays
compared to the OpenCV renderer.
(35:35):
But actually, what I saw is that with Scenic, which
is a framework to create desktop applications
in Elixir, with Scenic I was able
to receive the frames from the Python process,
(36:01):
from OpenCV, raw as a binary, and I was
able to take this binary without doing any manipulation,
any editing, and use Scenic to render this frame
directly into the window without any processing of the binary,
(36:21):
which was great. So yeah, I think this could work.
It's still something I have to try. I think this
could work well on an embedded device where we are
running this object detection system and we actually want to
see the result on another device. So we have our
(36:43):
embedded device with the camera, it runs Scenic, and
the cool thing with Scenic is we are able to see
the result, the window, remotely. So it's something I still
have to try, but I think it should work well.
Speaker 3 (37:00):
Nice. That is very cool.
Speaker 1 (37:02):
I love the idea that you can just, like, Scenic
can just, you know, display raw data coming in,
you don't have to process it and convert it in any way,
and just kind of be able to render it.
Speaker 3 (37:11):
So that is very cool.
Speaker 1 (37:12):
So awesome stuff that people can take advantage of and
check out and look at. So if people are wanting
to kind of get in touch with you and follow
more about this topic, where should they go to do that?
Speaker 4 (37:26):
So the site where I write my blog
posts is poeticoding.com, and I write articles
mainly about Elixir and Phoenix. Awesome.
Speaker 1 (37:40):
So we will also have links to that in the
show notes. Check that out. And also, where are you
on Twitter?
Speaker 2 (37:46):
Yes, your name?
Speaker 3 (37:47):
Well, let's go ahead and transition to picks. Sophie, do
you have one you can share?
Speaker 5 (37:52):
Yeah, I do. So. I recently started going through this
book called hold On, let me see.
Speaker 6 (38:00):
I can find it. Black Hat Go.
So I'm pretty new to Go, and I'm starting to
use it at work and just kind of plugging through
and learning what I need to learn to kind of
get stuff done in my actual job. So this has
just been like a really fun read to go a
little bit further and to sort of explore some of
(38:20):
the things that Go can do and some of the
things that we can learn about, you know, the internet
and how it works, using this language and using these frameworks.
Speaker 5 (38:29):
I'm not super far into it yet.
Speaker 6 (38:30):
I'm only in chapter two, but for somebody that, again,
is relatively new to Go, it's super approachable. It's been
really cool. It's, you know, it's very security oriented. You're
kind of learning, you know, you're not really learning how
to be a black hat attacker. That's not like a
book that you're going to buy on Amazon. You know,
you're learning to improve your offensive security skill set, which
is always an area that I wanted to dive a little
Speaker 5 (38:50):
Bit more into.
Speaker 6 (38:51):
So far, it's been super fun and I'm definitely
recommending it. My other pick is not programming related. It
is a reality show that is taking over the internet.
You may have heard of Love Is Blind, anyone?
Speaker 5 (39:04):
I'm going to try to keep it short because I
am obsessed with it. The premise of.
Speaker 6 (39:07):
Love Is Blind is that these people are dating each
other in these pod-like rooms where you cannot see
the other person.
Speaker 5 (39:14):
There's like a frosted glass.
Speaker 6 (39:16):
Pane in between them, and the idea is that they
want to get married. So for as many proposals as come
out of this dating situation, those couples are then allowed to meet.
They're sent on like a vacation, they meet each other's families,
they move in together, and then they see if they're.
Speaker 5 (39:29):
Going to actually get married. Their goal is to get married.
Speaker 6 (39:31):
And I started watching it because it sounds totally nuts
and really fun, which it is. But it's also a
really fascinating just like study of human behavior. I can't
say enough good things about it. It's totally wild and
I really recommend it to basically anyone.
Speaker 5 (39:45):
Please watch Love is Blind. You won't regret it.
Speaker 6 (39:48):
Oh, and it's hosted by Nick Lachey of 98 Degrees,
I should add, which.
Speaker 2 (39:52):
Is very important, very cool.
Speaker 1 (39:55):
I've heard about that one. All right, so my pick
is a little command line tool that was created
to make it look like you're
doing some really cool stuff on your computer.
Speaker 3 (40:09):
Right. It's just for show.
Speaker 1 (40:11):
It's actually called Hollywood, and it's actually
made appearances in television shows and movies and things. So like,
if you want to look super elite, you know,
you're going to have your image detection going on
on one screen, with things moving around
being classified, and on the other screen you've got Hollywood going.
(40:31):
And so you can check out the website. It's
really cool because you can actually run it out of
Docker and stuff like that, so you don't actually have
to install anything. And what it does is it
opens up multiple tmux panels, and then it has
lots of techno nonsense stuff going on, plus some real
stuff like speed tests, directory tree listings, device listings, you know,
(40:55):
off of your actual computer, you know, generating PGP keys,
all that stuff. It's all console based. It's just
really fun. I don't know, I
think it's a cool little toy. But that's it for me. Alvise,
how about you?
Speaker 2 (41:11):
So uh.
Speaker 4 (41:12):
I want to mention an article from Saša
Jurić, which is "Outside Elixir," which is
an article that helped me a lot to see how
to use ports, and I think it's
a really great read. And the second thing
(41:37):
is a book. Actually, this book is
something I was reading. It's a fantastic book, one
of the best books I read in the last two years.
The name is Designing Data-Intensive Applications, and the
writer is Martin Kleppmann.
Speaker 3 (41:59):
Uh.
Speaker 1 (42:00):
Uh.
Speaker 4 (42:00):
It's a quite big book, and it goes from
how databases are written, how they store data in files and pages,
how to use logs. And the reason why this
(42:21):
has been an important book to me is because
I started my blog actually trying to implement
something, a log database that was explained in
this book, and I tried to do it in Elixir,
(42:41):
and it was so easy to do in Elixir:
a key-value store based on logs, write-ahead logs, something simple.
It's actually one of my first articles. So,
it's a great read.
Speaker 5 (42:54):
Uh.
Speaker 4 (42:55):
And also, he deals with distributed databases; he explains
all the issues and how they deal
with the different issues.
Speaker 2 (43:10):
Great.
Speaker 4 (43:11):
Great. And the last thing, so to follow
you guys on the movie side, is something I watched recently.
It's called Dark, and it is about time traveling.
I think, if I remember
(43:32):
correctly, it is
Speaker 2 (43:33):
Uh is uh uh.
Speaker 4 (43:38):
Based in is Is in Deutschland, I think is uh
and Uh. It's a great TV series about time traveling
uh and the consequences of time traveling. And I'm really
passionate about this topic and it's a great Yeah, it
(43:58):
was great watch.
Speaker 3 (44:01):
Oh cool.
Speaker 1 (44:02):
All right, so I give a plus one to the
Saša Jurić article "Outside Elixir." When I was
getting into ports, that was like super helpful for me
as well. So awesome, good stuff there. All right, well,
thank you for coming on, Alvise. I had a
great time talking with you about this.
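For anyone curious what the ports mechanism behind that article looks like, here is a minimal sketch in Elixir that spawns an external OS process and round-trips a message through it. It assumes a Unix-like system where `cat` is available; `cat` simply echoes its stdin back to stdout.

```elixir
# Open a port to an external process; Elixir talks to it over stdin/stdout.
port = Port.open({:spawn, "cat"}, [:binary])

# Send bytes to the external process's standard input.
Port.command(port, "hello from elixir\n")

# The port delivers the process's standard output back as a message.
receive do
  {^port, {:data, data}} -> IO.puts("got back: #{data}")
after
  1_000 -> IO.puts("no reply")
end

Port.close(port)
```

The same pattern is how Elixir code typically drives external programs (for example, a Python script doing the heavy data-science work): the port owner sends binaries and receives `{port, {:data, ...}}` messages like any other process mailbox message.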
Speaker 4 (44:19):
And thanks a lot guys for having me.
Speaker 3 (44:23):
All right, glad you could come on. And that's it
for today.
Speaker 1 (44:26):
Thank you for listening, and we hope you'll join us
next time on Elixir Mix.