Episode Transcript
Available transcripts are automatically generated. Complete accuracy is not guaranteed.
Intro (00:01):
This is a Technikon podcast.
Peter Balint (host) (00:09):
In this Ethics in Technology podcast series, we have found
that how ethics should play out in research projects is
not always clear, and we have seen this in all
of our episodes. We have also learned that the advancement
of science as a whole relies on responsible research and
adherence to ethical principles. It would seem, then,
(00:30):
that the most effective approach is to tackle ethics training
early in the careers of the next generation of researchers.
I'm talking about the university level here, but is this
happening, and does it work? I'm Peter Balint
(00:51):
from Technikon, and today we speak with Elisabeth Oswald. She's
a professor in the Department of Artificial Intelligence and Cybersecurity
at the University of Klagenfurt in Austria. Her research projects
deal with the concept of side channel exploitations. She's here
today to give us a glimpse into the academic side
(01:12):
of ethics and research. Welcome, Elisabeth, and thanks for coming
on today.
Elisabeth Oswald (01:18):
You're welcome. I'm looking forward to it.
Peter Balint (host) (01:20):
While most of your focus is on the anatomy of
side channel attacks, there are, of course, ethical considerations in
your work as a researcher and professor. First, tell us
what a side channel attack is.
Elisabeth Oswald (01:33):
OK, let's perhaps start with what is a side channel
in the first place. So in security, we are very conscientious
about the way in which we argue about the security
of a system. Thus, when we define a system and
its security properties, then we are normally very, very explicit
about all of the channels of information that we think
(01:55):
an adversary has access to. And then we build a
security model based on these known channels of information. So
a side channel is an information channel that for some
reason we didn't include in our security model. This could
be because we weren't aware that this channel exists, or
because we didn't realize that a known channel could be
useful for an adversary, or simply because we might know
(02:18):
that there was a channel and that it contains useful
information, but it's just too difficult to really reason about.
And perhaps we know that if we included this channel
in our security model, the system would be recognized as
insecure, because an attacker could use this extra bit of information.
And I would like to give you a simple example
to illustrate this, perhaps in a less abstract manner. So consider,
(02:40):
for instance, a combination lock. The security model of the
lock, at least as these locks are advertised, is that
you need to know the secret combination in order to
open it. So under the assumption that nobody knows the
combination, the lock is supposedly secure,
but if you just go online and search for how
(03:00):
to break combination locks, then you will see lots of tutorials.
And typically, the way in which they work is that
you're asked to listen very carefully to the sounds that
the lock makes or observe very carefully how much a
lock moves after you kind of change one of the
combination rings. And with this bit of extra information, so
(03:22):
either the sound or the observable movement, you can figure
out the digit for each combination ring. And with this
extra bit of information, you can open the lock in
much, much less time and with much less effort than
if you had to try out all possible combinations.
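
To make the numbers in this analogy concrete: a four-ring lock with ten digits per ring has 10^4 = 10,000 possible combinations, but if each ring leaks whether it is set correctly, an attacker needs at most 10 x 4 = 40 guesses. The following minimal Python sketch illustrates the idea; the ToyLock class, its movement side channel, and the specific numbers are illustrative assumptions, not anything described in the episode.

```python
# Illustrative sketch (not from the episode): a toy combination lock whose
# observable "movement" leaks how many leading rings are set correctly.

import itertools

DIGITS = range(10)
N_RINGS = 4

class ToyLock:
    def __init__(self, secret):
        self.secret = secret  # e.g. (3, 1, 4, 1)

    def try_open(self, guess):
        return tuple(guess) == self.secret

    def observe_movement(self, guess):
        # The side channel: the lock gives a little for every leading ring
        # that matches, which an attacker can feel or hear.
        matched = 0
        for g, s in zip(guess, self.secret):
            if g != s:
                break
            matched += 1
        return matched

def brute_force(lock):
    # Security-model view: no side channel, so try every combination.
    for attempts, guess in enumerate(itertools.product(DIGITS, repeat=N_RINGS), 1):
        if lock.try_open(guess):
            return guess, attempts  # worst case 10**4 = 10,000 attempts

def side_channel_attack(lock):
    # Side-channel view: settle one ring at a time using the leaked movement.
    guess, attempts = [0] * N_RINGS, 0
    for ring in range(N_RINGS):
        for digit in DIGITS:
            guess[ring] = digit
            attempts += 1
            if lock.observe_movement(guess) > ring:
                break  # this ring is now correct; move on to the next
    return tuple(guess), attempts  # at most 10 * 4 = 40 attempts

lock = ToyLock((3, 1, 4, 1))
print(brute_force(lock))          # -> ((3, 1, 4, 1), 3142)
print(side_channel_attack(lock))  # -> ((3, 1, 4, 1), 13)
```

The same asymmetry, exhaustive search versus per-component feedback, is what makes real side channels such as timing or power consumption so damaging to cryptographic keys.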
Peter Balint (host) (03:42):
So that's a really great analogy, and I think that
brings the point home about what side channel attacks are:
coming in through the side door, if you will. So
you are researching and teaching about these side channel attacks.
Can you tell me what sort of ethical issues you face in your
(04:02):
day-to-day work?
Elisabeth Oswald (04:03):
Sure. So there are many aspects of ethics that arise
when you do research in the cybersecurity area. Depending on
what kind of research you do, you might have humans
involved, which immediately raises ethical concerns. Or you might study
a specific type of engineering practice within a legal context,
or you have very complex
(04:28):
technical questions that you answer using mathematics and computer science.
Obviously, any study that involves humans leads to ethical concerns,
and there are often processes set out by academic institutions
for how to deal with them. But if you go, so
(04:48):
to speak, to the opposite end of the spectrum, if
you just look at a thing, a product or a
device about which some security claims have been made, even
there ethical issues crop up. For instance, you might want
to look at the security of a concrete product, which
means that if you actually find flaws, then you
(05:10):
have to think very carefully about how you disclose them,
and hence the corresponding process is typically called responsible disclosure.
If we now focus a little bit more on side
channel attacks and how to deal with side channels, then
there's often a catch-22 that you run into in our
daily research practice, because if we
(05:33):
want to demonstrate a new kind of attack vector, it
would often be possible to demonstrate this attack vector on
a sort of mock-up device: maybe a simulation that we
write that corresponds to real devices we are aware of,
with the attack demonstrated on that. But then typically during
a peer review process,
(05:53):
the argument would come: well, you know, this is just
a mock-up, this is not a real device, we want
to see that your attack technique works on real devices.
And this is, of course, a problem, ethically speaking, because
then we take a product that is on the market
and break it. And then, of course, you need to
go through responsible disclosure. There is perhaps another kind of
(06:17):
problem that arises, and this is called dual use. Typically
in cryptography research, we try to create new algorithms, or
new implementations of algorithms, that are stronger than existing ones.
But in order to make something stronger, you really have
to understand how to break things in the
(06:38):
first place. So we always look at attacks and countermeasures
simultaneously. Many discoveries that we make as researchers, in good
faith and in order to make things more secure, will
inevitably also help adversaries, and this often brings us into
a dual-use problem.
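
The mock-up approach described above can be illustrated with a toy timing side channel: a simulated "device" whose PIN check returns early on the first wrong digit, so response time reveals how many leading digits of a guess are correct. This is a hypothetical sketch; the check_pin routine, the PIN value, and the timing constants are invented for illustration and do not correspond to any real product.

```python
# Hypothetical mock-up of a device that leaks secret data through timing:
# the PIN check returns as soon as one digit mismatches, so response time
# reveals how many leading digits of a guess are correct.

import time

SECRET_PIN = "2468"  # assumed secret, known only to the "device"

def check_pin(guess):
    # Deliberately non-constant-time comparison (the flaw under study).
    for g, s in zip(guess, SECRET_PIN):
        if g != s:
            return False
        time.sleep(0.001)  # stand-in for per-digit processing time
    return len(guess) == len(SECRET_PIN)

def recover_pin(n_digits=4):
    # Attack on the mock-up: for each position, keep the digit whose
    # guess takes longest to be rejected.
    known = ""
    for _ in range(n_digits):
        timings = {}
        for d in "0123456789":
            guess = (known + d).ljust(n_digits, "0")
            start = time.perf_counter()
            check_pin(guess)
            timings[d] = time.perf_counter() - start
        known += max(timings, key=timings.get)
    return known

print(recover_pin())  # recovers "2468" in ~40 queries instead of up to 10,000
```

Demonstrating the leak on such a simulation raises no disclosure issues; the ethical tension Oswald describes begins only when reviewers ask for the same demonstration against a product on the market.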
Peter Balint (host) (06:57):
And I guess this could be a real problem, because
most of the work that you do will be published,
and once it's published, anyone can read it.
Elisabeth Oswald (07:05):
That is correct, yes. Especially when it comes to real-life
products, a responsible disclosure process has to happen before the
actual publication. That's very important.
Peter Balint (host) (07:18):
Of course. Now, in some of our previous episodes dealing with ethics
in technology, we found out that ethics could sometimes be
a moving target, especially perhaps in academia. And it's good
that we're talking to you today to address this issue.
And I'm curious from your perspective, what is the best
way to keep the topic of ethics integrated into some
(07:41):
of the more practical topics that you tackle in the classroom?
Elisabeth Oswald (07:46):
You are right in the sense that ethics is in
itself a moving target, because ethics is not something you
can define once and have hold forever. Ethics depends on
societal norms and values, which of course keep changing as
societies change, and these
(08:06):
also differ between societies. So ethics is not something that
can ever be static. It always has to be negotiated
for every project, and often renegotiated during the project as
understandings develop and change, and it's always very, very context
specific. So when I, at least,
(08:28):
talk about ethics, I'm not so much talking about, you
know, knowing something about ethics or knowing some ethical guidelines
or ethical standards. It's more about a skill, a skill
that relates to asking good questions whilst considering a research
question or some project. There's another term
(08:52):
for it that I often find a bit more useful,
and this is called responsible engineering, or responsible research and innovation.
And this is indeed a process that you can practice,
and hence it also becomes teachable. So it's something that
we integrate into our teaching here in Klagenfurt, and it also
(09:16):
ties in very naturally with the project work relating to
my own funded projects. It's really handy that I have
the opportunity to teach responsible engineering at university, because it
also gives me the opportunity to practice these skills, to
be reflective, and to consider potential unintended consequences
(09:39):
of my own research.
Peter Balint (host) (09:41):
Right. And it sounds like, since ethics is always changing,
the one constant is the change itself, so being aware
of this, keeping up to speed with ethics at a
higher level, and applying it to various situations might be
the best approach. Which leads me to my next question,
(10:02):
which is, is there some sort of organization which could
guide researchers in cybersecurity with regard to implementing ethics measures
during their work?
Elisabeth Oswald (10:14):
So when it comes to academia, ethical guidelines are often
provided by the publishers that support cybersecurity research. There are
a few to name: for instance, the USENIX organization traditionally
publishes a lot of
(10:34):
security and cybersecurity research. ACM and IEEE are likewise international
professional organizations, and they have guidelines with regard to, for
instance, responsible disclosure. They are also very clear that for
pieces of research that involve humans, you have to go
through whatever sort of
(10:57):
approval process is normally in place at your university. It's
much less clear exactly how to deal with ethics, I
think, outside the academic context. For instance, I don't know
how this would be dealt with in large companies, whether
they also have ethics approval boards or things
(11:20):
like that. But outside of academic publishers, it's much less
clear which organization you would actually turn to if you
want some advice or help with ethical questions. I think
there is actually quite a gap there,
(11:40):
and there are also not so many guidelines available as
such. For the context of AI, for instance, the European
Union has published a set of guidelines which are very
useful and very interesting to read. But when it comes
to cybersecurity and ethics, there are some papers out there,
but they're perhaps not concrete enough to really help. And
(12:03):
as I said, ethics is not so much about reading
something and knowing about a guideline. It's much more a
hands-on, applied skill, if you will, so you actually need
an opportunity to practice it. And I don't think there
are many such opportunities out there, outside of academia at
least. So for me, the best opportunity to practice
(12:25):
it myself is really in the context of teaching.
Peter Balint (host) (12:28):
Well, thank you so much for the information you shared
with us today. It's interesting to see what happens on
the ethics side where students are involved, and what you
can pass down to them. Thanks for sharing this with
us, and we hope to talk to you again soon.
Elisabeth Oswald (12:42):
Thank you very much for hosting me.
Peter Balint (host) (12:49):
This podcast has been brought to you by Technikon. For
our complete series on ethics in technology, go to technikon.com/ethics