Episode Transcript
Available transcripts are automatically generated. Complete accuracy is not guaranteed.
(00:00):
So in this episode, we talked a bit about, yeah, why you shouldn't test autonomous vehicles around small children.
(00:07):
That seems to be a bad idea.
Ethical Tech Innovation, a value-based engineering podcast.
I'm your host, Joshua Roe, and welcome to Ethical Tech Innovation.
Today, I'm joined by Mario Tokarz, co-founder of the VBE Academy.
(00:29):
When did you first become aware of VBE?
Well, I was actually working at BMW for a very long time, because I started right after high school,
and I stayed there for like 16 years or something, until my mid-30s-ish.
And then I worked for a startup, right?
And the first time I became aware of VBE was actually after I dropped out of that startup,
(00:53):
which had been sold briefly before to another company.
And like roughly six months after that merger happened, I moved out and I was doing some soul searching,
and I was trying to find something more meaningful to do with my work life,
in the sense that I wanted to work on a topic which also had a more, let's say, social type of relevance.
(01:18):
And I figured out that this question of how you align technology with, I wouldn't have said values at the time,
I would have said maybe compliant and ethical technology, how you build that.
I realized that this is actually a very difficult question to answer, because, for example,
if you talk about safety in a car: why does the airbag open?
(01:41):
There's a lot of process behind it, right?
Like literally thousands of hours are being spent just on making sure that this airbag is definitely going to open up when you have an accident, right?
And I was like, okay, now when I want to say something is safe, I have a whole lot of process and I have to follow it.
If I say something is ethical, how does that happen?
(02:03):
And I was looking for answers on that question.
If I say my product is ethical, how could I make that claim?
And yeah, that's when I started to research different methods and I ended up finding out about IEEE 7000, which led me to VBE.
So also the standard kind of was my entry gate, right?
(02:24):
In my research on how to do that.
Were you always interested in ethics or did that interest emerge at one point when something happened?
Or you noticed something?
I think saying I was always interested in ethics would be maybe a bit too, let's say, flattering to myself.
So I would at least claim that, and I leave it maybe also to other people to judge.
(02:46):
I would claim that I wasn't actively trying to do the wrong thing.
So if I realized in my immediate sphere of things that there were some, let's say, value conflicts. For example, I was a manager at BMW.
So I had a lot of employees I worked with and I tried to treat them well.
I really did. Right.
But I don't think I had a structured approach to ethics.
(03:06):
So I think a lot of people I would imagine try to do well and try to do good things when they see it.
So I would think most people want to do good things.
But at the point where I started my own endeavor there, I realized that just wanting to do the right thing doesn't necessarily mean that it is the right thing to do, at least not in a way that you can argue.
(03:31):
Right. You can't build a product and then you say it's ethical and then people say why.
And you say because I like it.
That doesn't work. Right.
So there has to be an element of a more neutral type of approach to it, which is not just subjective.
So that was something I realized early on.
And then the more I looked into ethics as such, I realized I don't know all that much about it actually.
(03:54):
Right, in the theory. Because I think we as a society have lost the ability to talk about ethics on a meta level, because at least for a long time, a lot of things were pretty clear.
For example, in Germany, like how you behave towards your parents.
Right. Or in China, that might be a bit different.
Right. But it's kind of established in society.
(04:16):
And of course, you can make ethical arguments about how to behave to your parents, your neighbors, your friends, like what is good behavior.
But I don't think we often do that very explicitly, off the top of our heads.
Were there any particular moments when you realized, oh, I'm trying to do the good thing? Maybe a very early, concrete example where you encountered a problem and, just by trying to do the right thing, it didn't work.
(04:42):
And you realized, okay, I need to structure it.
Yeah, I'm not sure if... Yeah, good question.
Yeah. I'm not sure if I realized that it didn't work.
I think you can always tell yourself in that situation, I'm doing as good as I can.
But there was one thing that came to my mind, which is when I worked at BMW, my last assignment was being the head of AI strategy for autonomous driving.
(05:11):
And then I moved into a startup.
And that startup also built autonomous vehicles and we did test drives in Munich.
And one test drive would actually lead us past our office.
It was a smaller street, a poor-quality street, but anyway, that route would go around our office.
And the idea was that the car would make a circle there.
And I think there was a kindergarten or something on that street.
(05:34):
And then somebody came and they said, well, there's this kindergarten, so probably the car should slow down to, I don't know, like 10 kilometers an hour or something.
And of course, it's a bit of a social question, right? You're like, ah, OK, there's people who are particularly in danger, like children.
So probably you should slow down there.
(05:55):
And I'm sure nowadays people would think more about it.
But at the time I was like, yeah, it's actually good that somebody had come up with that idea.
Right. And in that moment, I realized, OK, there could be many more ideas like that.
And it's good that somebody came up with this idea, but are we missing something maybe?
(06:16):
So I realized that there are people who think of these things.
And when you hear it, you're like, yeah, of course, that makes a lot of sense.
But then this is not the same as having something like a structured approach to really making sure that you have reasonably thought about these type of issues.
So in that instance, it occurred to me that we were lucky to have good people.
(06:37):
But if we didn't have them, would we maybe have missed something like that?
And what point in the design process did this idea come up?
Because it's kind of a bad problem if you've built this track already, or you've built this test center and then suddenly you realize, oh, wait, it's near a kindergarten.
That could be dangerous.
Yeah. In that sense, you can see that the solution to that wasn't all that difficult in the end.
(07:01):
Right. It was more so there was no problem in that sense.
Also, when people listen to this now in 2025, this was in 2018 or 19, which means that, of course, also the legal situation evolves over time.
And it will at some point also capture the question on how to drive close to kindergartens.
Right. So this is going to be covered.
(07:22):
But I think it's a good example, because at the time when we came up with this problem, nobody had really thought about it because there were no such test drives yet.
Right. So there was no law that would tell you what to do.
And there was no prior evidence.
It was just like people working on that, looking at where we're driving, making very concrete plans.
And they're like, oh, there's a kindergarten. What do we do there actually?
Right. And so you see that this ethics discussion can well be ahead of the legal discussion.
(07:47):
I think that's one takeaway from it.
And then in this case, it was quite easy to fix.
So no problem there.
Of course, you can also always ask bigger ethical questions, which are then less easy to address, for example, the question, what you think of autonomous driving as a whole.
But let's say I would think that for companies, it's maybe good to start really to look at their technology and what it does and take it from there.
(08:17):
And then we should also be aware, of course, that the question on AI ethics is maybe one of the biggest questions of our time.
Right. So in that sense.
Just because there are these huge ethical questions, it shouldn't take us away from looking at the concrete impacts and trying to address those nevertheless, even though a lot of discussion is happening.
Right. If that makes sense. I don't know.
(08:37):
Is this the problem of ethics often being tacked on, thought about afterwards?
You know, it's like we want to design a test center and it's not actually ethics by design, which is a key idea in VBE.
Right. That is true.
So, as I said in the past, like, take my example with the kindergarten drive.
(08:58):
I think it happened more coincidentally that good people would look at things and they would come up with their own concerns.
And often these would be very valid concerns.
However, from a process perspective, there's an element of randomness in that.
Right. So if that particular person hadn't looked at this, but somebody else had, would that other person also have had the same impression?
(09:20):
Right. So.
And VBE fixes that by defining this stakeholder analysis of a system where it's not just you looking at something,
but you try to think about, like, who is actually affected by the system and you make those people give feedback and share their insights.
(09:42):
And this is how this can actually be addressed using VBE.
So it kind of takes away this element of randomness.
And what I mean by randomness is just that people have been acting ethically also in the past.
They just maybe didn't do it in a structured way, which left it a bit to chance on whether or not they would spot issues.
(10:06):
And here in VBE, you actually work with a clearly defined process, which doesn't mean that you necessarily capture everything,
but at least it gives you a chance to have a reasonably complete picture of what you're doing.
That's another question with VBE, value-based engineering: it comes out of value sensitive design.
(10:28):
But then you also have other terms that sound very similar.
And there are all these other different approaches to doing ethics with AI, ethics with technology.
To be honest, I think one very big difference, and I think I can say that because, if you will,
I only joined this VBE movement maybe a bit more than two years ago,
(10:55):
because I started my own business in that area three and a half years ago.
And it took me some time to even become aware of VBE, at a time when there was no ChatGPT and the topic wasn't as prominent as it is today.
So what made the big difference for me is that VBE is already based on a standard,
(11:15):
which is IEEE 7000, or ISO/IEC/IEEE 24748-7000, pretty long.
But I think the important thing is to see that this standard has also been internationally adopted by ISO and IEC.
And I think that makes a whole lot of difference because whatever you think, maybe about the details of the method,
(11:36):
you see that there was a group of people who took considerable time and put this into an actual international standard,
which you can reference as a company or as any type of organization, really.
And it's very important to make sure that whatever you do is state of the art and standards are just a way of showing that.
(11:56):
So that's why I think that is a very, very strong argument.
Besides the fact that I'm really not aware of a lot of methods which are so closely rooted in engineering, which is also different.
So the question is not only how do you analyze ethical concerns, the question is also what are the steps in order to get them into the product?
(12:19):
Because I think there are other standards, at least in development, around ethics of AI, right?
Yeah, I'm very curious to see how that's going to spin, actually.
So there are already quite a few standards that have been developed by ISO.
(12:39):
So, for example, there is ISO/IEC 42001, which is an AI governance standard.
IEEE has not only developed IEEE 7000, which, as I said, has also been adopted by ISO,
but they have actually also developed other standards, for example, on transparency and on the management of bias in AI.
And so a number of standards exist.
(13:03):
And then in Europe, due to the EU AI Act, even more standards are being developed by CEN-CENELEC.
So we have a lot of different activity there, with some already out; I think 42001 was published at the end of 2023.
So, yeah, a lot of movement in that space, for sure.
(13:26):
Do you think the standards can work together or are they just always in competition and you have to pick which one you want?
Well, there is a bit of a competition between those standardization bodies, is my impression.
So I'm not involved in these activities myself.
But what I see is that, as I said, for example, there is a standard on bias management.
(13:48):
I think there are also recommendations at ISO around algorithmic bias.
And then there's also a development at CEN-CENELEC.
So you can see that three of the world's big standardization bodies are all working on the same topic.
The question is, and I'm not talking from a practical perspective, really, so no offense, I cannot really judge the merits of each standard.
(14:11):
So I don't want to say that it wouldn't be possible that one or the other has a bit of a better standard in that sense.
But it seems that a lot of resources are being put on a topic which has already been covered by others.
So I think joining forces on a topic which is so time pressing would actually be nice.
But then you also have to see how standards are being developed by committees of people and on a consensus type of basis.
(14:37):
So that's another argument to maybe try to use stuff which is already around.
It just takes very long to build a standard.
Consensus is a nice idea, but actually it's quite difficult to get in practice.
I'm not. Yeah, absolutely. Right.
So as I said, I'm not in the details, but one thing that is, I think, strong about IEEE 7000 and VBE is that it was developed at a time when there weren't so many people who actually cared about the topic.
(15:06):
So in the sense that it was still a very competitive environment.
And I know that the standard was at some point also close to failing, in the sense that there didn't seem to be a path to agreement, so there also need to be people who push it on.
Right. So it's a big social problem to build such a standard.
So nowadays, there's a lot of corporate interests and it's directly linked to the EU AI Act.
(15:30):
So a lot of people have very, very tangible, immediate interest in these activities.
So the question for me is a bit, will that maybe lead to those standards being more of the lowest common denominator you could come up with?
Or are there really going to be standards that help you to get anything done?
Because also standards can be more, let's say, juicy in the sense of really giving you concrete things to do.
(15:54):
And that's good for you as a company because you do ABC and you're going to be set.
Or will they also stay very high level, which could happen because people might not be in a space where they can agree on very concrete things?
So they might be a bit superficial. That remains to be seen.
Is there a risk with standards of complacency emerging where?
(16:20):
So, I mean, I remember I've worked with ISO 9000, of course, it's probably the most famous standard there is.
And often, if I saw a problem with a process, the response I would get was, well, this has been vetted by the standard.
So it's whatever it is, it's OK. Is there a risk of that?
If we get this standard and we get this stamp that's saying we're ethical, then OK, that's it.
(16:44):
We've proven this and we don't look to keep this going as an active thing.
Yeah, let's say this is a problem you always kind of have in a lot of fields.
So I think there's greenwashing, there's ethics washing.
I worked a lot with open source software in the past.
(17:06):
Also, in open source software, the people who, for example, developed the GPL license, which is very well known,
they also see themselves more as a movement, right?
They are not only lawyers in the sense of here's a license, use it or not.
But they want to achieve certain goals with that.
For example, in the open source field, they want other software also to be open source, and they want to share open software so that people can use it freely.
(17:33):
So they always say, you know, free not as in free beer, but free as in freedom.
So it's about freedom for people to use and adapt and improve software.
And that's also this question, like if you use it and you publish your stuff, do you really also have to be part of the movement?
Or can you basically just abide by the license and really publish your things as you're supposed to?
(17:56):
And will that also be helpful, even though you're maybe not totally convinced yourself?
Maybe you're not doing it for the right reasons.
So doing the right thing, but not maybe with the right motivation behind it.
I think it's a big question.
For ethics, I mean, it would be nice if there were standards and people would at least follow them.
I think that would give you the feeling that there's a minimum standard in place.
(18:17):
So I would say this is at least something, right?
Maybe even at the expense that people oversell it a bit and say we're super transparent when in reality they've just done the minimum.
I'm worried at the moment people are not even doing the minimum.
So that's why I would say following any type of standard in that field, even for the wrong reasons,
(18:38):
just because you want to be compliant, not because you're really interested in the merit of why this is important.
I think it would at least be a start and maybe a better place than what we have today.
[Music]