
December 1, 2025 · 51 mins


An automated decision can change your life in seconds: denying a loan, filtering out your résumé, or replicating your voice in a deepfake. In this conversation with Rosa Celeste we dive into how to build responsible AI that optimizes processes without turning privacy into collateral damage. We speak firsthand about what actually works: clear governance, real transparency, and interdisciplinary teams able to bring together law, business, and technology to reduce risk and accelerate wisely.

We dismantle the myth that "anonymizing" is enough. We explain why re-identification is a real risk and how to counter it with data minimization, impact assessments, access controls, limited retention, and independent audits. We also tackle today's red zones: voice cloning, synthetic identity, and disinformation amplified by models that know no limits. We discuss algorithmic accountability and explainability: who answers for a decision, how to document the decisive variables, and why meaningful human review is not optional, above all in credit and recruiting, where bias hits hardest.
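To make that re-identification risk concrete, here is a minimal sketch (our own illustration, not from the episode; the records and the public registry are invented) of how a dataset stripped of names can still be linked back to individuals through quasi-identifiers such as zip code, birth date, and sex:

```python
# "Anonymized" records: names removed, but quasi-identifiers remain.
anonymized = [
    {"zip": "10001", "birth": "1985-03-12", "sex": "F", "diagnosis": "asthma"},
    {"zip": "10002", "birth": "1990-07-01", "sex": "M", "diagnosis": "diabetes"},
]

# Hypothetical public registry (e.g., a voter roll) with the same attributes.
public_registry = [
    {"name": "Jane Doe", "zip": "10001", "birth": "1985-03-12", "sex": "F"},
    {"name": "John Roe", "zip": "10002", "birth": "1990-07-01", "sex": "M"},
]

QUASI_IDENTIFIERS = ("zip", "birth", "sex")

def link(records, registry):
    """Re-identify records by joining on the quasi-identifier tuple."""
    index = {tuple(p[q] for q in QUASI_IDENTIFIERS): p["name"] for p in registry}
    for r in records:
        key = tuple(r[q] for q in QUASI_IDENTIFIERS)
        if key in index:
            yield index[key], r["diagnosis"]

for name, diagnosis in link(anonymized, public_registry):
    print(f"{name} -> {diagnosis}")  # the "anonymous" record has a name again
```

The fewer quasi-identifiers retained, the harder this join becomes, which is exactly what data minimization and access controls aim for.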

We pause on the regulatory terrain and on user rights: requesting explanations, objecting to automated decisions, and demanding limits on the use of image, voice, and sensitive data. We walk through practical tools for leaders: risk metrics, fairness tests, usage controls, robust watermarks, traceability, and team training in actionable ethical principles. The central idea is simple and powerful: AI can be your best ally if you design it with clear rules from day one, aligned with your organization's culture and with full respect for fundamental rights.
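As one concrete instance of the fairness tests mentioned above, here is a minimal sketch (our own, with invented group labels and decisions) of a selection-rate comparison checked against the common four-fifths rule of thumb:

```python
from collections import defaultdict

# Invented decisions from a hypothetical screening model.
decisions = [
    {"group": "A", "selected": True},
    {"group": "A", "selected": True},
    {"group": "A", "selected": False},
    {"group": "B", "selected": True},
    {"group": "B", "selected": False},
    {"group": "B", "selected": False},
]

totals, selected = defaultdict(int), defaultdict(int)
for d in decisions:
    totals[d["group"]] += 1
    selected[d["group"]] += d["selected"]  # bool counts as 0/1

rates = {g: selected[g] / totals[g] for g in totals}
ratio = min(rates.values()) / max(rates.values())
print(rates, f"impact ratio = {ratio:.2f}")
if ratio < 0.8:  # the four-fifths rule of thumb
    print("Warning: possible disparate impact; route for human review.")
```

A ratio below 0.8 does not prove discrimination, but it is a cheap early signal for routing a system's output to meaningful human review.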

Hit play, share this episode with your network, and tell us: what control would you like to demand of any algorithm that evaluates your life? Subscribe so you don't miss new conversations, and leave a review; your feedback helps more people find this content.

Discover Protección para la Mente Inventiva, now available on Amazon in print and Kindle formats.


The opinions expressed by the host and guests on this podcast are strictly personal and their own; they do not necessarily reflect the official policy or position of any entities with which they may be affiliated. This podcast should not be construed as an endorsement of or attack on any government policy, institutional position, private interest, or commercial entity. All content is presented for informational and educational purposes.


Episode Transcript

Available transcripts are automatically generated. Complete accuracy is not guaranteed.
SPEAKER_03 (00:00):
A large international e-commerce company was, for example, using artificial intelligence to collect résumés. It was analyzing those résumés, rejecting women's profiles and prioritizing men's.

SPEAKER_02 (00:20):
Oh, wow.

SPEAKER_03 (00:21):
And they used it for some time until finally...
Intangiblia.

SPEAKER_04 (00:42):
I'm very pleased to have a compatriot with us. She is brilliant and moves with skill through privacy, audits, algorithms, and ethical dilemmas at a global level. Rosa Celeste has advised across Europe, has driven responsible artificial intelligence policies, and defends privacy from both a personal and a professional standpoint.

(01:05):
Get ready: this conversation is a powerful one, combining legal and technological depth with a clear vision of the digital future.

SPEAKER_03 (02:04):
And Switzerland?

SPEAKER_04 (02:36):
Thank you very much. This is your home, and here you have a compatriot to embrace, always present. Now, to the matter at hand: what would you say is today's biggest misunderstanding among organizations adopting artificial intelligence without compromising privacy?

SPEAKER_03 (02:56):
I wouldn't limit it to a single misunderstanding. First, most organizations think that implementing anonymization is sufficient: I strip out a few data points that can identify people, and

(03:20):
that's enough.
When the reality is that the remaining identifiers, combined with other databases, cross-referenced with other databases, can re-identify the person.

(03:46):
So anonymization on its own is not sufficient. It needs other technical, organizational, and policy measures that keep personal data under control. That's one. The second: transparency.

(04:11):
What data is used, when, for what purpose, with whom it is shared. All of these things are not shared with or explained to the person.

(05:00):
So it's an opportunity to optimize processes and respect privacy at the same time. Yet the organization says, "No, this is a brake, it's a barrier," when in reality it's an opportunity.
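One way to quantify the risk Rosa describes here, offered as a hedged illustration rather than anything discussed in the episode, is k-anonymity: records whose combination of quasi-identifiers is shared by fewer than k rows are easy targets for exactly the cross-database linkage she mentions.

```python
from collections import Counter

# Quasi-identifier tuples (zip code, birth year, sex); data invented.
rows = [
    ("10001", "1985", "F"),
    ("10001", "1985", "F"),
    ("10002", "1990", "M"),  # unique combination: k = 1, high linkage risk
]

counts = Counter(rows)
print(f"dataset is {min(counts.values())}-anonymous")
for combo, n in counts.items():
    if n < 2:
        print("re-identifiable under linkage:", combo)
```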

SPEAKER_04 (05:17):
Totally. And the second thing is that it is an opportunity: an opportunity to do things with adequate processes in the company or organization.

(05:40):
Let me add my positive note too; I have no other way of presenting it. Really, it's remarkable. AI has made surprising advances. It can tackle a whole range of problems in health, productivity, and public services, but also surveillance and abuse.

(06:05):
From your perspective, how do we innovate with technology without it becoming an excuse for invading our private lives?

SPEAKER_03 (06:15):
Of course, it's important. There are internal decisions: there are boards, there are organizations deciding what data to use, and a particular

(06:36):
organization doesn't necessarily operate within a regulatory framework.

(08:17):
And, without generalizing to the whole world, at least in Europe there are authorities to oversee this, to build digital literacy and to educate. They can flag the dangers, and that goes quite a long way toward maintaining this balance of respecting

(08:41):
the privacy of people's data.

SPEAKER_04 (08:44):
Otherwise it's a dead letter, because if there is no enforcement, even though the norm is created, nobody implements it, so it's a dead letter. But in this case, having a government authority watching means organizations have the motive to do it ethically, but also the

(09:05):
incentive to do it because they are being monitored and watched. And if they don't, there are consequences.
Exactly.

SPEAKER_03 (09:22):
And regrettably, without these sanctions, without this balance, no company develops compliance at the level that is required.

SPEAKER_04 (09:38):
And many organizations say they use artificial intelligence responsibly. But at the end of the day, the compliance looks like formality rather than conviction.

SPEAKER_03 (10:08):
It takes a governance that is sufficiently strong and structured, with defined roles, with metrics and risk controls. And for that, you need interdisciplinary teams to

(10:39):
oversee and regulate artificial intelligence internally. We need the legal team, the marketing team, the programmers, the technical security people. In reality, it's impossible to design an artificial intelligence that uses personal data without a team that is

(11:04):
interdisciplinary.

(11:39):
And at the same time, it lets you build greater trust about whether personal data is being used legitimately or not. It keeps in view how this artificial intelligence is developed, how it is used, what the brakes are, and basically

(12:01):
what the ethical requirements are in the design and use of artificial intelligence in relation to the use of personal data.

SPEAKER_04 (12:14):
Okay, so it's a matter of inclusivity: in the team, in the process, in the system, to create something that meets the standards.

SPEAKER_03 (12:25):
Exactly, and aligned with the organizational objectives, because there are organizational objectives and an organizational culture. Each organization has its own specific culture.

(12:55):
So you need to design a governance structured in a way that permits this control and, at the same time, responsible use.

(13:36):
Yes, of course. Today we know that most of the artificial intelligence used for voice cloning or synthetic identity lacks that limit where I can ask for or demand the right to say: hey, I don't want you to clone me digitally, I don't want you to use my voice to create

(13:59):
deepfakes, to create fictitious situations that cannot be controlled today. Because the technology is so far developed that we no longer know what is real, what is false, what is fictitious. For that, really, a priori, if we develop this right, the

(14:22):
companies and organizations can be required not to use personal data, not to use images, not to use voices. And many artificial intelligence algorithms don't have these technical limits. Most are autonomous in this sense, and that brings us to

(14:46):
algorithmic responsibility. This is quite a point: we have to define who is responsible for what. Is it the designer, is it the end user, or is it simply whoever is

(15:11):
implementing?
So in this sense, in most decisions and cases, it's practically impossible to draw a straight line. Few people analyze it.

(15:54):
Who designed it, how does it reach these decisions, how autonomous is this control? So this part of algorithmic responsibility falls short today.

SPEAKER_04 (16:21):
Pinning down who is culpable is the hard part. You have to connect the blame to someone.

SPEAKER_03 (16:27):
And there are quite a few cases; in general they move slowly, but it matters a lot, because you can bring in an expert to go in and analyze it, not merely set limits.

SPEAKER_04 (16:45):
And technically, tomorrow's reality won't look like today's.

SPEAKER_03 (16:52):
Absolutely.

(17:23):
Oh my God, how dreadful.

SPEAKER_04 (17:47):
And if you have a person who is fragile or vulnerable, it's that much easier to manipulate them. Totally.

SPEAKER_03 (18:14):
These are the kinds of points: my identity being used to create a fictitious image or a fictitious video, the synthetic identity; all these points must be controlled. Exactly.

(18:41):
There are regulations balancing data privacy, but much of it acts a posteriori, not a priori.

SPEAKER_04 (18:54):
By then the image or the video or the voice is already out there. Exactly. Okay. And basically it happened with me; I was like...

(19:21):
Granted, obviously there are things that aren't perfect in the way it pronounces certain things, expressions you can tell are synthetic. But would my mother distinguish between the artificial and the real? Maybe not even my mother.

(19:43):
No, that's the thing. Totally. Because they imitate everything. And that's what sets off the alarm.

SPEAKER_03 (20:10):
I see it as an opportunity. When, for example, it can support you professionally in the different projects you have, the innovation, the technology, it's marvelous. So it's the balance: being conscious and in control of what you are using while guarding the ethical side at the same time.

(20:32):
But the usefulness of AI is marvelous; you use it all the time.

SPEAKER_04 (20:35):
Consent: if it's given, okay, perfect, but I might not give permission. Beforehand. Beforehand. Beforehand; and there is differential privacy. Artificial intelligence should be explainable, ethical. But not overloaded with data, because the risk grows.

SPEAKER_03 (21:44):
Try to use only the data strictly necessary for the objective the artificial intelligence and the technology serve. You can work with strictly necessary data. And I'm not talking about anonymization here.

(22:06):
Nothing more than that; no other data beyond what is needed. In this way, try to implement this principle of minimization, and maintain independent auditors who can keep continuous control over personal data, over privacy in general, and

(22:36):
its relation to the technologies and the artificial intelligence. It's basically about respecting privacy principles

(22:59):
across the life cycle of personal data and, at the same time, of the technology we are designing.
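A minimal sketch of the minimization and retention discipline Rosa just described (our own illustration; the purposes, field names, and retention windows are invented): hold an explicit allowlist of fields per purpose and a retention window per purpose, so that over-collection and indefinite storage both become mechanically detectable.

```python
from datetime import datetime, timedelta

# Hypothetical policy: fields strictly necessary per purpose, plus retention.
PURPOSE_FIELDS = {"credit_scoring": {"income", "debts", "payment_history"}}
RETENTION = {"credit_scoring": timedelta(days=365)}

def minimize(record: dict, purpose: str) -> dict:
    """Strip every field not strictly necessary for the stated purpose."""
    allowed = PURPOSE_FIELDS[purpose]
    return {k: v for k, v in record.items() if k in allowed}

def expired(collected_at: datetime, purpose: str) -> bool:
    """Flag records held past the purpose's retention window for deletion."""
    return datetime.now() - collected_at > RETENTION[purpose]

raw = {"income": 50000, "debts": 12000, "payment_history": "ok",
       "religion": "...", "browsing_log": "..."}  # over-collection
print(minimize(raw, "credit_scoring"))            # sensitive extras dropped
print(expired(datetime.now() - timedelta(days=400), "credit_scoring"))  # True
```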

SPEAKER_04 (23:16):
It's a problem: if you don't need it, you don't keep it. But there's a lot of resistance to this, because of the "just in case."

SPEAKER_03 (23:27):
"Just in case," totally. But we end up amassing a database, keeping data for an indefinite time, in ways I can't really justify conserving. But no, not "just in case."

(24:03):
Because the thing is, you have to justify it before a supervisory authority. If you can't justify it, the organization ends up paying the sanctions in the end. And the reputational risk on top of that.

(24:27):
Absolutely.

SPEAKER_04 (24:32):
Exactly.

SPEAKER_03 (24:59):
For example, public supervision is absolutely necessary. For that, resources are needed, we need experts in the field, and there is responsibility attached. But it's absolutely necessary. The reality is that this control cannot be replaced by the

(25:23):
internal compliance of the organization. So this balance has to be present. I insist that the regulator matters most in the design of the controls.

(25:44):
And digital rights; these are necessary.

(26:13):
In Europe this control exists: we must be informed when we interact with an artificial intelligence. In the United States, as a matter of consumer rights, it exists. But in other places, at a general national level, no.

SPEAKER_04 (26:38):
And to have conversations, disclose information, and explain what they are doing.

SPEAKER_03 (26:43):
And individual control is necessary. Nobody watches over the use of our personal data the way we ourselves must.

(27:04):
In Europe you can refuse to be subject to a decision made automatically by an artificial intelligence. In other places, no. And let me say that many organizations make decisions, for example, when collecting people's CVs, their résumés, and they make automatic decisions,

(27:28):
rejecting profiles.

SPEAKER_04 (27:29):
And no one reviews them.

SPEAKER_03 (27:32):
Exactly. The same goes for credit. There are banks around the world using artificial intelligence to evaluate applicants and say, "No, this person doesn't qualify," with nobody reviewing it. Okay, but why?

SPEAKER_04 (27:48):
Because, as we've seen on various occasions, it's grounded in historic data that are discriminatory. Absolutely. Because people living in a certain area, with characteristics that place them in a specific economic or sociocultural group, were the ones with access to credit. Exactly.

(28:08):
And that no one listens to you, that a piece of paper decides, is horrible.

SPEAKER_03 (28:17):
And this is one of the consequences of leaving ethics out of the design and the use: had that balance, that control existed, the discrimination would have been caught.

(28:41):
And they were analyzing those résumés, rejecting women's profiles and prioritizing men's.

(29:38):
Without just cause. Exactly. So it always comes back to that point of keeping the balance between ethics, privacy, and the objectives. Of course, understandably, there are business objectives behind it, but you have to keep that balance and always respect that

(29:59):
ethics. And likewise, try to demand the necessary explanations of how the artificial intelligence works, how it reached this decision. For example, in Europe you can challenge an automated decision: what are the parameters or the variables that

(30:20):
you used, how did you reach this decision? I couldn't say that exists in other parts of the world. And many people are discriminated against; there is discrimination via artificial intelligence that today goes unchecked.
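For the "which variables led to this decision" explanation described here, a hedged illustration (ours, with invented weights; real systems are rarely this simple) is the linear case, where each variable's contribution is its weight times its value and can be documented alongside the decision:

```python
# Hypothetical linear credit-scoring model: weights and threshold invented.
WEIGHTS = {"income": 0.4, "debt_ratio": -0.7, "years_employed": 0.2}
BIAS, THRESHOLD = 0.1, 0.5

def explain(applicant: dict) -> None:
    # Each feature's contribution to the score is weight * value.
    contributions = {f: WEIGHTS[f] * applicant[f] for f in WEIGHTS}
    score = BIAS + sum(contributions.values())
    print("decision:", "approve" if score >= THRESHOLD else "deny")
    for feature, c in sorted(contributions.items(),
                             key=lambda kv: abs(kv[1]), reverse=True):
        print(f"  {feature}: {c:+.2f}")

explain({"income": 0.6, "debt_ratio": 0.8, "years_employed": 1.0})
```

This is the kind of record an affected person could demand under the European rules the guest refers to: the decision, plus the decisive variables and their weight in it.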

SPEAKER_04 (30:50):
"These are the people you should prioritize, because historically..." But all of that has a cause. And what artificial intelligence does is magnify it. Because where a person at a bank might process one application in a day, the artificial intelligence processes applications by the minute.

(31:10):
So they magnify it. How terrible.

SPEAKER_03 (31:37):
And if a superintelligence arrives beyond what we have today, it will need this control; imagine otherwise how many victims of the technology there would be.

(32:20):
Now, I'll speak from my own experience, because you know we have to have solid foundations in three areas: the human, the regulatory, and the technical.

(33:30):
I have to have this basic connection and, at the same time, the technical basics of what we implement. What are, for example, these types of technologies, these controls? Often people don't know how to get to those points. There are various models of artificial intelligence, various

(33:55):
models. When I talk about a model, I don't mean knowing every algorithm of artificial intelligence or machine learning in depth. What I mean is: I have to have this technical connection and understand that, if I'm on a multidisciplinary team, I have to communicate, I have to adapt my discourse.

(34:30):
I have to adapt my discourse to different audiences, for artificial intelligence and for the protection of personal data. That's basic.

(35:07):
And there are expectations that you make it easy to understand what the guidelines are, the rules for getting into compliance. This balance is absolutely necessary. And lastly, you know there's a bit of a geopolitical connection too.

(35:29):
Exactly. And really, the soft skills: something you'd say, well, isn't taught; but it's something you can develop, to understand and at the same time engage.

(35:51):
You can try to develop what you might call professional empathy. In the international world, whether in the public or the private sector, I try to develop this international empathy to adapt to the needs of each of those interlocutors.

SPEAKER_04 (36:11):
Exactly.

SPEAKER_03 (36:22):
Here the laws are more restrictive. In the United States, it's only beginning, at the federal level. So the discourse changes, and we have to try to understand it; the requirements are different in each place.

SPEAKER_04 (36:46):
Exactly. But beyond that, you have to be a good communicator, adapt your language, and be sensitive to people's realities.

SPEAKER_03 (36:56):
And technically: I come from the legal side, but the technical connection I built through a master's, through the certifications I took; without that technical connection, it would be impossible to work in this.

(37:19):
Exactly. Without it, no. Everything would be harder. It would cost more, and at the same time you wouldn't have the balance you need.

(37:40):
So for those three points to coexist, you need the concepts, the technical side, and at the same time adaptation to what you might call the organization: really, the politics and the culture of the company or organization.

SPEAKER_04 (37:56):
Because you read the context. Exactly. And much of what is technically possible can still be insensitive if it isn't handled with care.

(38:20):
Now, the flash round. Choose one option and don't overthink it; simply pick the first one that appeals to you. Ready? Total privacy or innovation without barriers?
Total privacy.

(38:41):
Predictive health in exchange for your medical data, or generic health care that keeps them secret?

SPEAKER_03 (38:52):
That one's hard. That's the idea. If it's that hard... the first option. Ok, ok. Predictive health.

SPEAKER_04 (39:01):
Yes, predictive health. Eternal consent given just once, or having to confirm it every week?

SPEAKER_03 (39:11):
Oof, no. Confirm it every week. People will get tired, but confirm it every week. Things change from one day to the next. Yes, things change. Yes, you can't give eternal consent. No, no. No, no, no. Too much risk. Yes, totally.

SPEAKER_04 (39:29):
An artificial intelligence that predicts your decisions, or one that keeps your secrets?

SPEAKER_03 (39:37):
Oh, the one that keeps my secrets.

SPEAKER_04 (39:39):
The confidante. Yes, totally. An absolute right to be forgotten, or a transparent digital history forever?

SPEAKER_03 (39:48):
No question: the absolute right to be forgotten. Absolutely. Ok.

SPEAKER_04 (39:53):
A personal assistant based on artificial intelligence that handles your tasks, or doing it all by hand?

SPEAKER_03 (40:01):
No, no, no. The first option, the artificial intelligence assistant. The best thing that could happen to me.

SPEAKER_04 (40:09):
I love artificial intelligence. Me too; it's a very good friend. It helps make so many things easier. Yes. And, Dominican to Dominican: do you know how to make a bizcocho? Me, I'm a terrible cook. I mean, I cook to survive. Oh no, I'm good at that.

SPEAKER_03 (40:30):
I do cook well.

SPEAKER_04 (40:31):
I'm going to invite you over to my house.

SPEAKER_03 (40:32):
It's true, I cook well. And I enjoy it, I really enjoy it. Yes, yes, yes.

SPEAKER_04 (40:39):
And now you've given me a craving, and reminded me of Dominican food. But it's difficult to describe. It's difficult. And it's the best, for me the best. Yes, because nothing else tastes like it. Yes, yes, yes, the best.

SPEAKER_03 (40:51):
When I told you I was over there: I came back about five kilos heavier.

SPEAKER_04 (40:54):
And happiness, pure happiness. You know how much I miss my mom's cooking. Oh, totally.

SPEAKER_03 (41:00):
Something incredible. Oh, that, there's no comparison. It just doesn't taste the same; I try, but it doesn't taste the same here. No, it's just not the same ingredients.

SPEAKER_04 (41:08):
Have you eaten a mango from here?

SPEAKER_03 (41:10):
No, I haven't even tried. I mean, I have tasted them, but they don't taste the same. It puts a sadness in your heart. It's pure water. Imagine, in San Cristóbal, at my house I grew up with four. My dad was crazy about them. So I had four mango trees. Proper Dominican mango trees, no less.

(41:32):
He had four, of all the varieties. And in season they just kept coming. And honestly, I'm not even a big fan of mango. Truth be told. And I thank God for that, because I was raised on pure mango. But no, it's the same with avocado. The avocado; what pains me here is the avocado. Oh God, that tiny avocado. People don't understand that a Dominican avocado can stand in

(41:53):
for meat, for dinner, for protein.

SPEAKER_04 (41:56):
Here I put salt on it, pepper, balsamic, a little oil.

SPEAKER_03 (42:03):
I always say, that's the Dominican avocado, the Caribbean one really. Because a Cuban friend once brought me a Dominican avocado and told me, "It's the only thing I could bring you; I know you'd want more." But that's what was possible. You touched my heart, I told her. Oh, how sweet. "You touched my heart." And I hadn't asked her for anything, imagine.

SPEAKER_04 (42:23):
No, but something like that brings such joy.

SPEAKER_03 (42:26):
I remember I was living with my sister. My sister took it and wouldn't let it go, because that's like gold here. Yes, it's incredible.

SPEAKER_04 (42:32):
For me, the thing I miss most living abroad is the food. Same, same, same. Mango, avocado, guava.

SPEAKER_03 (42:42):
You find them here, but they don't taste the same. Not nearly as good. And the dulce de guayaba here. Oh, the dulce de guayaba. The dulce de guayaba. No, it just doesn't taste the same. Not the same at all. No, not at all. That really is the hardest part; for me that's the hardest thing about being away from my country. Because my dad and my mom, I can bring them over. But the food. The food.

SPEAKER_04 (43:02):
My mom always brings me coffee when she comes, because it's the easiest thing to bring.

SPEAKER_03 (43:06):
Me, with coffee I've adapted to the Italian kind.

SPEAKER_04 (43:10):
But the aroma of Dominican coffee is different. You know what's beautiful? The memories of growing up in my house: the first thing in the morning is the coffee. It's something you can't replicate. You carry it in your heart. It's hard.

SPEAKER_03 (43:28):
And people get dehumanized little by little when it's hard like that. Yes, because it reaches the heart. I know. And they say the second brain is there, in the gut. If you don't eat well, or eat something you enjoy, you're not going to be happy.

(43:48):
No, totally, totally. And you can see it. It shows. When people eat well, people are happy. It shows, it shows. You can see it, you can feel it. No, no, it's true. Ah, yes. For me, food is love.

SPEAKER_04 (44:03):
It's love. Well, anyway. Coming back from our Dominican parenthesis: the last flash question. A world where your data is a currency, or one where there is simply no data about you to collect?

SPEAKER_03 (44:21):
How abstract. Data as a currency. Well, it depends; it could be a currency, but no one accesses it unless I allow it. Ah no, never. So, the second option. No, never, never. No, no, no, no.

SPEAKER_04 (44:41):
You don't negotiate with privacy. No, no, no. With rights, no, no, no. Fundamental rights, no. Okay, okay. So grab the paddle. Okay. Here we have two options. "Futurist" means it's false, something for a distant future a hundred years away. And "true" means it's happening, about to happen, or at

(45:02):
any moment could happen.
Consent as a legal basis will disappear and be replaced by automatic trust systems. Neither. Ok. Neither one. That sounds very ugly. Platforms will offer you a fully private mode, but for a

(45:24):
monthly fee. True. A person will be able to demand that no artificial intelligence imitate their voice, face, or digital style.

SPEAKER_03 (45:40):
I think some of those already exist. Yes, yes, yes, it exists already. Thank God. Though we're not fully covered.

SPEAKER_04 (45:46):
But they exist, they exist. There will be a parallel digital network with no cookies, no tracking, and no advertising. Futurist. We will live in a world without privacy and it will seem normal to us.

SPEAKER_03 (46:06):
I hope not. I'm going to... No. No, no, no. Rejected. I reject that. No, no, no, no.

SPEAKER_04 (46:15):
People will prefer to have a digital clone represent them on social networks instead of appearing publicly themselves. Okay. Exactly. Yes, of course, because it's easier.

(46:37):
Exactly. People will start renting out their DNA to train personalized artificial intelligence models.

SPEAKER_03 (46:46):
And that exists, unfortunately.

SPEAKER_04 (46:49):
Very sad. Your search history will be used by banks to define your financial risk level.

SPEAKER_03 (46:58):
That already exists.

SPEAKER_04 (47:00):
Careful what you google. Totally. And finally: an activist movement will be born that defends the right to be forgotten.

SPEAKER_03 (47:12):
It already exists.

SPEAKER_04 (47:13):
It exists. But as an absolute right? Yes.

SPEAKER_03 (47:18):
The right to be forgotten. Yes, yes, yes, exactly, exactly. Here it exists. In Europe it exists. Elsewhere, in other ways. Exactly. But there are places trying.

(47:41):
It can serve a purpose. It's all about balance. And the perspective of the law, of artificial intelligence and technology. It all depends.

SPEAKER_04 (48:05):
Yes, because if there is information in the public interest, whether there was criminal activity or anything of the kind, there is an interest in its being remembered, in its not being forgotten.

SPEAKER_03 (48:22):
But if it's of public utility, it doesn't stop being accessible on the internet, full stop. So this exists, and this is fundamental: you can demand the right to be forgotten in a general way, but only in part; for example, that right doesn't apply to those public-interest matters.

SPEAKER_04 (48:47):
Perfect. And now we come to the closing question. For those who are creating technology and making decisions about artificial intelligence: let them not forget that, in the end, it's about protecting humanity.

SPEAKER_03 (49:19):
In the sense that we have the software and all of that...

SPEAKER_04 (50:23):
No, nothing beats the coffee.

SPEAKER_03 (50:28):
And it has been a pleasure to meet you, likewise.

SPEAKER_04 (50:34):
Artificial intelligence can predict, optimize decisions, and automate processes, but it cannot teach us our principles. That is human. And fortunately, there are people like Rosa who remind us.

SPEAKER_00 (51:24):
Talking clearly about intellectual property. Did you enjoy what we discussed today? Please share it with your network. Want to learn more about intellectual property? Subscribe now on your favorite podcast player. Follow us on Instagram, Facebook, LinkedIn, and Twitter. Visit our website, www.intangibilia.com.

(51:45):
Copyright Leticia Caminero 2020. All rights reserved. This podcast is provided for informational purposes only and should not be considered legal advice or a legal opinion.