Episode Transcript
Available transcripts are automatically generated. Complete accuracy is not guaranteed.
(00:00):
Hey, everybody, and welcome back to the Better Business Analysis
podcast with your host Benjamin Walsh.
And we are going to continue on our BA Byte series today and
specifically closing out our AI topic with some things you need
to be wary of. The Better Business Analysis
(00:22):
Institute presents the Better Business Analysis Podcast with
Benjamin Walsh. That's right, folks.
We are diving into the ways AI can mislead us and the pivotal
role business analysts play in navigating these specific
(00:44):
challenges. Now, if you've played with AI much, you may have experienced just bad responses, but there's something called AI hallucinations that you need to be aware of. These are really important and slightly different. AI hallucinations refer to
(01:04):
instances where AI generates information that's actually incorrect or entirely fabricated, as in it has made it up. These errors can have significant impacts on the world around us, so let's jump into
some real life examples of AI missteps.
(01:28):
In a notable case, a New York attorney relied on ChatGPT for
legal research. The AI provided fabricated case
citations, leading to a reprimand from the court.
This underscores the dangers of unverified AI outputs in
(01:50):
professional settings, and that's where BAs can play a
role. So you can go to the MIT website
if you want to find out more about that case.
The next one is financial disinformation.
So a recent UK study highlighted how AI-generated fake news could
(02:15):
actually trigger a bank run. So false information about a
bank's stability rapidly spread through social media.
And that can cause mass withdrawals and it can
jeopardize financial institutions.
So it could say, for example, "Crisis in Edinburgh Bank Limited:
(02:40):
people withdrawing funds. Get your money while you can," and that could lead to people actually going to their bank and taking funds out. It's not actually a real news story; that study was reported on Reuters.com, which is a real news outlet, if you want to read more about
that. There's also an example where
(03:01):
scammers, so this is an example of what we call deepfake scams, OK. And they've employed AI to create what we call deepfake videos of public figures like Elon Musk or Trump or Obama. And they will promote fraudulent
investment schemes. So they might say, buy this type
(03:21):
of crypto coin, but they actually have no association
with it. And they were so realistic, but they're actually fake endorsements, and they've deceived many into losing lots of money. So those coins aren't real or, you know, they've just been
(03:43):
pumped and then sold. And that has been a major issue in the real news scene these days as well.
The next one is the fact that there is actually misinformation
in some AI chatbots out there. So AI chatbots designed to
assist users sometimes provide incorrect information.
For example, Google's Bard, which is now Gemini, once
(04:05):
falsely claimed that the James Webb Space Telescope captured
the first image of an exoplanet, and that word spread, but it was fake news, never actually happened, and wasn't actually checked. And of course, that can breed
conspiracy theories. You know how they're hiding
(04:26):
something from us and all the rest of it.
Another example is that you can generate images from AI. AI image generation is quite easy to do, and you can produce unrealistic or misleading visuals. And so there's been instances where AI systems have identified non-existent objects in images,
(04:51):
leading to potential misinterpretations, like that this person was there at a particular time.
And of course that is now being used as an example of not being
able to trust photography and legal cases, even if those
photos were legit at the time. So now a photo may not
(05:13):
actually be evidence, and neither may a video, because it could be argued that it's been tampered with.
So the role of business analysts in mitigating AI risks is an
important one. And that's why we're going to
close on this. We've talked about some good
examples of AI, but BAs can play a critical role in ensuring
(05:35):
that AI systems function correctly and ethically.
And we can do that by verifying the data.
So we can verify that the AI systems are trained on accurate
and reliable and factual data sets to minimize bias scenarios.
(05:55):
That's a prime role. The next one is continuous improvement and monitoring, where we can implement regular audit checks of AI output at certain points, just a random selection maybe, to help detect and correct inaccuracies and whether or not
(06:16):
a prompt is returning the wrong results, and we can adjust that.
Thirdly is the stakeholder communication.
So it's about educating stakeholders about the
limitations and potential pitfalls of AI and foster
informed decision making when rolling out AI because people
(06:37):
are obsessed with it at the moment and there is a lot of
room for people to be taken advantage of or for things to go
badly here. Ethical frameworks.
So establishing guidelines for AI use that align with ethical
standards. And the system is designed such
(06:58):
that it meets those and those business rules are baked in.
BAs can help with that. And of course there's the
scenario testing. So conduct tests through various scenarios. So not like test scripts from a tester, but business scenarios, and help check the responses you get and the process that the AI goes through,
(07:22):
'cause you might be able to see the process steps. If you're working at a proprietary company, then you can maybe anticipate and mitigate any unintentional AI behaviours.
So while AI offers transformational potential, I
love it, it is not infallible, and business analysts play a
(07:44):
critical role in bridging the gap between AI capabilities and
human judgement, ensuring that this technology is going to
serve us, and it's going to act ethically, and we're going to get
reliable, accurate results. So there's a call to action
here. And that is, if you've
(08:04):
encountered any of these inaccuracies, share your
experiences on social media. Post them down in the comments
here and let's continue this critical conversation because I
believe that BAs will play a role here and maybe a new role will
develop in terms of AI ethics and AI monitoring, maybe an AI
(08:26):
process engineer or some kind of oversight role.
So thank you for tuning in. I hope you've learned something.
That closes out our beginning-of-2025 conversation about AI, but
I'm sure we'll have many topics throughout the year and I'll see
you next week.