
April 29, 2025 · 13 mins

In this episode of Accelerated Velocity, we explore advancements in AI processing, the risks of under-tested automation, and what real ethical AI use looks like in practice.

First, we unpack Google’s Gemini 2.5 Flash update, which introduces reasoning control to reduce unnecessary processing. It’s a win for developers—and a potential breakthrough for energy efficiency, security, and scalability.

Then, we discuss a support chatbot gone rogue, highlighting how poor implementation can quickly erode trust and damage brand experience. The episode wraps with reflections on ethical AI use as an intentional approach that impacts customer trust and organizational integrity.

Visit our website

Subscribe to our newsletter

Chapters

00:00 - Introduction to AI in Business

01:43 - Gemini 2.5 Flash: Enhancing AI Efficiency

04:05 - The Importance of Accessibility and Security in AI

04:54 - Customer Experience: Lessons from AI Failures

07:34 - Ethics in AI: A Business and Humanity Issue

10:19 - The Future of AI: Balancing Technology and Humanity

 

Sources

“Company apologizes after AI support agent invents policy that causes user uproar”  by Benj Edwards for Ars Technica

“Google introduces AI reasoning control in Gemini 2.5 Flash” by Dashveenjit Kaur for AI News

 


Episode Transcript

Available transcripts are automatically generated. Complete accuracy is not guaranteed.
(00:00):
Peter: Especially for a small or medium-sized business, to have something that could potentially blow up in their face really speaks to the due diligence and never-ending testing that needs to happen.

(00:19):
Grace: Welcome to Accelerated Velocity, the podcast that helps you move faster, smarter, and more strategically in the rapidly evolving world of AI. We'll break down the latest AI news and innovations and explore how they impact your marketing, sales, and business growth. We'll dive into practical use cases, unpack new tools, and cut through the noise so you and your team can adopt with confidence.

(00:43):
Grace: I'm Grace Matthews, director of content at InboundAV, a business development agency and HubSpot solutions partner dedicated to driving sustainable business growth. Each week, I'll be here with Peter Malik, our CEO and founder. Join us as we make sense of what's changing and what to do about it.

(01:04):
Grace: Hello! Welcome back to Accelerated Velocity. I'm Grace Matthews, and I'm here with Peter Malik. Hi, Peter. How are you?
Peter: I'm doing well, Grace. This week, we're diving into smarter AI processing and some real-world failures, which I'm really interested in, and why ethics in AI is a real business issue, and even beyond that, almost a humanity issue.

(01:28):
Grace: Absolutely. And just a little tease of what we'll look at: first, a promising update to Google Gemini that isn't necessarily going to impact a wide group of people right now, but I think there's a lot we can take from it. Then we'll look at a few things not to do with AI right now, especially from the customer experience perspective.

(01:51):
Grace: So let's get into it. Starting us off, there's an update on Gemini 2.5 Flash. Peter, do you want to speak about that a little?
Peter: Absolutely. Gemini 2.5 Flash includes a new reasoning control system to avoid over-processing simple tasks. And I have a visceral reaction to that, because I was doing deep research on something that really wasn't that deep on Perplexity earlier this week, and it thought about it for over an hour. So it would definitely improve my quality of life if we could get those results back faster.

(02:29):
Peter: And I think there are a couple of other aspects to this, too. Processing power equals electric power, and electric power equals carbon footprint. That matters especially with AI, because there's such an enormous carbon footprint that results from data centers. Over the summer, I was visiting Virginia and drove down a street with something like 50 data centers lined up one after another on both sides.

(02:59):
Peter: So it's not a trivial subject. The power aspect, the carbon footprint, is really significant.
Grace: Absolutely. I kind of think of it as having a hammer to drive in a nail versus an anvil, right? For simple processing, you just don't need that amount of power and reasoning. So this makes a lot of sense as a step toward better efficiency. Peter, as AI becomes more efficient, do you see that also increasing accessibility to AI for various organizations?

(03:36):
Peter: Yeah, I do, and I also think there's another facet to this, and that's security. Although this release doesn't provide it, if the engine is more efficient, a lot of things can be done locally, on your own computer or an organization's computer. In other words, if there are security concerns around certain AI functions, you could run them locally on a totally disconnected machine, and you'd have essentially complete security as long as nobody can physically access that computer.

(04:08):
Grace: Yeah. And I think this speaks to the idea that although this new reasoning control system in Gemini is aimed at developers, efficiency isn't just about speed, right? Looking forward, it's also about more efficient energy usage, and about broadening who can access and afford to use these tools.
Peter: And you know, the other aspect of that is what you're just about to bring up, which is the chatbot.
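The reasoning-control idea described above, matching the amount of "thinking" to the task, can be sketched in a few lines. This is a hypothetical illustration, not the actual Gemini API; the `classify_complexity` heuristic, the keyword list, and the budget values are all assumptions made for the example:

```python
# Hypothetical sketch of reasoning control: allocate a "thinking budget"
# proportional to task complexity instead of reasoning deeply on everything.
# The heuristic and budget values here are illustrative assumptions,
# not the actual Gemini 2.5 Flash API.

SIMPLE_KEYWORDS = {"translate", "summarize", "define", "convert"}

def classify_complexity(prompt: str) -> str:
    """Crude heuristic: short prompts built on simple verbs need little reasoning."""
    words = prompt.lower().split()
    if len(words) < 20 and any(w.strip("?.,!") in SIMPLE_KEYWORDS for w in words):
        return "simple"
    return "complex"

def thinking_budget(prompt: str) -> int:
    """Return a token budget for internal reasoning; 0 disables it entirely."""
    return 0 if classify_complexity(prompt) == "simple" else 8192

print(thinking_budget("Define latency."))
print(thinking_budget("Design a migration plan for our CRM, covering data "
                      "mapping, rollback procedures, and staff training."))
```

Gemini 2.5 Flash exposes a comparable knob as a per-request thinking budget, where a budget of zero skips extended reasoning entirely; that skipped work is where the latency and energy savings the hosts discuss would come from.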

(04:34):
Grace: Absolutely. Yeah, let's move on to this chatbot debacle, which is honestly hilarious, but I also think we can take a lot away from it.
Peter: Yeah, it's funny, but it's not funny.
Grace: So, quick overview: a couple of weeks ago, a popular AI-powered code editor called Cursor had a bit of a debacle with its AI support agent, which fabricated a policy when speaking to a customer. I'm going to read directly from the article on Ars Technica by Benj Edwards, because it's honestly hilarious. It says:

(05:11):
Grace: "On Monday, a developer using Cursor noticed something strange: switching between machines instantly logged them out, breaking a common workflow for programmers who use multiple devices. When the user contacted Cursor support, an agent named 'Sam' told them it was expected behavior under a new policy. But no such policy existed, and Sam was a bot."

(05:34):
Peter: I love that.
Grace: And as funny as it is, it really is a cautionary tale about how any organization incorporating AI into its customer service or customer experience should be thinking about how customers are actually engaging with those programs.
Peter: Yeah. By extension, this is like what has happened to major brands where some social media fail had unintended consequences and enormous fallout. And especially for a small or medium-sized business, to have something that could potentially blow up in their face really speaks to the due diligence and never-ending testing that needs to happen.

(06:20):
Peter: And of course, we're in an AI world, so that never-ending testing doesn't really have to take a lot of your time.
Grace: Yeah, absolutely. If it can happen to Coca-Cola, Cursor, or whoever, it can happen to you. So my takeaway is: no matter how small your use of AI is, you need a game plan for what the course of action is and what safeguards you're putting in place, not just to prevent those horrible experiences for customers, but also to keep your AI use transparent and ethical.

(06:54):
Peter: You know, it'd be interesting to spin up a fictitious company that sells some sort of weird widget, create a rogue chatbot for that company, and see what happens.
Grace: Do you mean actually putting that out into the public and kind of piloting the response?
Peter: Yeah, well, not piloting, but just seeing what the responses are.
Grace: Yeah. And that actually brings up a valid point, which is that with this AI boom, a lot of people are starting to accept and incorporate AI into daily workflows. But from my understanding, the general consensus is that we still want, and it's very obvious, human interaction on a basic level at important touchpoints, like getting customer service needs met or engaging with content.

(07:41):
Grace: Right? And that doesn't mean you can't use AI to some extent in service and content workflows as well, but it needs to be paired with human escalation paths and clear labeling. And that brings me to another thought for today, which is the overall ethical dimension of AI usage. We've talked about energy usage. We've talked about engaging with customers.
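The "human escalation paths and clear labeling" idea can be sketched as a small guard layer around a bot's draft reply. This is a hypothetical illustration, not Cursor's actual system; `KNOWN_POLICIES`, `guard_reply`, and the message wording are all assumptions made for the example:

```python
# Hypothetical guard layer for a support bot: label AI responses clearly,
# and escalate to a human whenever the draft cites a policy that is not
# in the documented policy list (the failure mode in the Cursor incident).
# KNOWN_POLICIES and the message formats are illustrative assumptions.

KNOWN_POLICIES = {
    "refund-30-day": "Refunds are available within 30 days of purchase.",
    "sso-required": "Enterprise accounts must enable single sign-on.",
}

def guard_reply(draft: str, cited_policies: list[str]) -> str:
    """Pass the draft through only if every cited policy is documented."""
    unknown = [p for p in cited_policies if p not in KNOWN_POLICIES]
    if unknown:
        return ("[AI assistant] I'm not certain about this one, so I'm "
                "escalating your question to a human teammate.")
    return f"[AI assistant] {draft}"

print(guard_reply("You can get a refund within 30 days.", ["refund-30-day"]))
print(guard_reply("Logins are limited to one device.", ["one-device-login"]))
```

The design choice is to fail toward a human: anything the bot cannot ground in a documented policy gets handed off rather than improvised, and every automated reply is labeled as coming from an AI.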

(08:08):
Grace: And those are two angles on how to responsibly approach AI use, right? Using it consciously, thinking about the implications, and trying to have a forward-looking mentality about it. There are other areas where ethical AI usage is certainly important to talk about, and I won't go down the rabbit hole, but I think that's something to bookmark as well.

(08:33):
Grace: And maybe we can come back to that another day, Peter.
Peter: Yeah, you know, it reminds me of a conversation we had a while ago about humanity and AI. What we were talking about was, if, for instance, you were set up with an AI therapist, there's no human, no emotion coming from that computer program. And at the same time, what are the results?

(08:59):
Peter: On one hand, the result could be a kind of empty feeling you can't really identify. But there's also a lot of research showing it can really improve people's quality of life. So as far as ethical standards go, what scares me is the idea that in the future we could be relying on computer programs for our mood, for our outlook on life, for all of these things.

(09:28):
Peter: And that could change us, materially, as human beings.
Grace: Yeah, absolutely. There are ways we're incorporating AI as an extension of our daily thought process and of some of the daily functions we all need to complete, whether that's going through your email inbox or following up on workflows. And while there's still a lot to be learned and understood about how that will play out...

(09:56):
Grace: ...it just comes back to that core principle for me of conscious AI use: stopping to think about how you're engaging with the tools you're using.
Peter: Yeah, I agree. So I have to put in an anecdote here; this is going to be a bit of a pivot. There was a science fiction writer, Philip K. Dick, who was mostly active in the 1950s, '60s, and '70s. You may not have heard of him, but you've definitely watched blockbuster movies that were based on his stories.

(10:33):
Peter: One of my favorites is one of his short stories about a post-apocalyptic world where people have learned to rely on computers. In this particular enclave, there's one computer they rely on, and it's getting old and getting ready to die; it's sort of rusting out and slowing down.

(10:54):
Peter: And it was this existential situation where people just didn't know how they were going to get by when the computer finally died. Hopefully that's not a harbinger of what could happen with AI, but I think about that story often.

(11:14):
Grace: It kind of reminds me of some of what people have been saying about system collapse, although that's another whole rabbit hole. So I think that brings us to the final thought for today, which is that transparency and intent matter. Even small AI implementations should consider both efficiency and ethics from the start. Smarter AI processing is coming with Gemini 2.5 Flash, and a couple of other models are following suit, so we'll see how that develops.

(11:46):
Grace: It will hopefully create better access, lower cost, and better performance. Going back to Cursor's AI bot debacle: the customer experience should not suffer because of automation. We still need that human connection at key points, right? And lastly, ethics isn't just theoretical; it's something you're doing, or not doing, with every AI decision. That's something to bring into your organization and into how it engages with the AI workflows you're using.

(12:17):
Grace: As always, there's a lot more to unpack, but I think that brings us to the conclusion of this week's episode of Accelerated Velocity.
Peter: And since we're building this podcast, we really look forward to it every week, and I hope that you do too, and that lots more people start to tune in. To that end, if you can leave us a five-star review, that would be awesome; it really helps our visibility within the podcast ecosystem.

(12:46):
Peter: And also, please visit our site. It's InboundAV, as in Accelerated Velocity, dot com. Sign up for our newsletter, where you can learn more about what we do; there's also additional content that might be of interest to you.
Grace: Absolutely, check that out. And also check out the show notes for links to everything we've covered today. Thank you for tuning in. See you...

(13:08):
Peter: ...next week!