
May 20, 2025 15 mins

A House bill could put a 10-year ban on U.S. state-level regulation of AI. This episode of Accelerated Velocity dives into the implications of AI deregulation for both individuals and businesses. With AI powering more and more processes, including critical, potentially life-altering decisions, accountability is key. Whether or not states maintain the authority to pass regulatory legislation on AI, this is a crucial reminder of the need for businesses to adopt decisive, transparent AI practices. From personal steps to maintain data privacy to customer-facing AI policies, we unpack it all.

 

Visit our website

Subscribe to our newsletter

 

Chapters: 

00:00 – Intro 

03:05 – The details of U.S. AI legislation & deregulation

04:24 – What happens when AI makes critical life decisions

06:15 – How deregulation could affect transparency and employment

08:00 – The industries (and individuals) that need to be paying attention

09:51 – Data privacy, user control, and trusting AI platforms

10:33 – Memory layers, personal AI agents, and where we draw the line

12:18 – Social media as a cautionary tale 

14:31 – Outro

 

Sources: 

“House Republicans include a 10-year ban on US states regulating AI in ‘big, beautiful’ bill” by Matt Brown and Matt O’Brien for AP News

“Sam Altman’s goal for ChatGPT to remember ‘your whole life’ is both exciting and disturbing” by Julie Bort for TechCrunch

“Californians would lose AI protections under bill advancing in Congress” by Khari Johnson for CalMatters

 

 


Episode Transcript

Available transcripts are automatically generated. Complete accuracy is not guaranteed.
(00:37):
Welcome to Accelerated Velocity, the podcast that helps you move faster, smarter, and more strategically in the rapidly evolving world of AI. We'll break down the latest AI news and innovations and explore how they impact your marketing, sales, and business growth. We'll dive into the practical use cases, unpack new tools, and cut through the noise so you and your team can adopt with confidence.

(01:02):
I'm Grace Matthews, director of content at InboundAV, a business development agency and HubSpot solutions partner dedicated to driving sustainable business growth. Each week, I'll be here with Peter Malik, our CEO and founder. Join us as we make sense of what's changing and what to do about it.

(01:22):
Hi, this is Grace. Welcome to episode seven of Accelerated Velocity. I'm here with Peter Malik. Hey, everybody, I'm Peter Malik. I'm the founder and CEO of InboundAV. And, Grace, I was just thinking, you know, we've been taping this at the end of our day on Friday, and it's kind of turned out to be, like, a little early start of the weekend for me, because it's about the most fun thing I do around here at InboundAV.

(01:48):
I don't know about you, but, yeah, it's a great start to the weekend. Yeah, I would agree. It's been really fun producing and putting out the podcast episodes and hopping on Riverside with you every week. Totally. And for those of you who aren't aware of Riverside, shout out to Riverside. It's an amazing podcasting platform, and it does a lot more than that, even.

(02:11):
It's been totally game-changing for any sort of audio and video work in general. And by the way, we didn't even get, like, an Oreo cookie for that plug. All right, should we launch into our topic for today? Absolutely. So we're looking at the potential of AI deregulation, specifically in the US, and what this means for individuals, for data, for the tools that you're using, and for the decisions that you might be making in terms of how you're approaching AI usage.

(02:40):
I should also mention that we're recording this on May 16th, 2025. So if by the time you hear this there have been more updates on the topic, you know where we're coming from. But I guess we should start with an overview of potential deregulation coming from the federal level. Peter, do you want to kind of explain what's going on from the top down a little bit, and then we can get into the more specific implications?

(03:05):
Yeah. So there's a bill before the House, and it's basically aimed at cutting off the ability for states to regulate AI themselves. And that's a really bold bill, in my opinion, for a number of reasons. I think that, you know, if you have an overview of AI, you know that AI in general, and robotics and so forth, have the ability to help us enormously: in finding cures for diseases, in surgery, a million different things.

(03:34):
And it also has the capability to annihilate us, as in a million robots with rifles show up from, you know, a country that we're in conflict with or at war with. So I feel like it's serious. It's a serious subject. And I would say, in a perfect world, you'd want to limit AI really drastically, because you don't want there to be unintended consequences that could have a really negative impact on us and humanity in general.

(04:02):
But at the same time, it's not a perfect world. And if we ignore advancing AI, other countries around the globe are not going to be so cautious. They're not going to stop advancing their AI. So we have to keep up in that way. But I think that this bill, as far as just the humanity aspect of it, is really dangerous.

(04:24):
And the reason is that AI could be making a decision about, for instance, your health care coverage being canceled, or it could be making a decision about whether you get that interview for that job you applied to. And just the ability to know what the story is, in other words, was this an AI that did this, or was this a human being, I think is super important.

(04:46):
I think it really should be a right of us as workers, as citizens, etc., to know exactly who's pulling the levers. And so, in that context, I think, for instance, in California, where we are, there are a lot of really thoughtful bills up in the California legislature that address a lot of this stuff, but that mainly protect the rights of working people.

(05:15):
So to cut off the ability to do that, I think that's overreach. And I really hope that the bill before Congress does not pass. I think that we need to be cautious. We also need to have sort of an adversarial relationship going on with the people who want to push straight ahead on AI and robotics and so forth.

(05:37):
Absolutely, that's exactly what they should do. But we should also have the ability to push back, and in that way have a better chance of uncovering the risks that we're going to face from unchained AI development. I'm totally on the same page with you, Peter, and I think something really important to consider is that in the future, without regulations, a lot of different organizations and companies, whether they're government organizations, public organizations, or private corporations, may not be required to specify when they're using AI to make critical decisions.

(06:15):
If we do trend toward a deregulation of AI, what are your thoughts on how that deregulation might affect business specifically? Well, I think, you know, you touched right on it: it'll impact business in a lot of ways. I mean, first of all, in our business, if we weren't transparent about what we do, we could be selling AI content as handwritten by humans and charging a lot more money for it than we really have a right to charge.

(06:43):
Honestly, as far as hiring, which we've touched on already: if you get rejected by an AI, I think it's good to know about it. So, you know, in the context of writing a job resume and being screened by AI, there are services out there right now selling a way to get around the bot, and that's really not helping anybody.

(07:05):
It's maybe helping the person who's resourceful enough to cheat, but it's certainly not helping the employers, because they're going to have to filter through the exact same level of applicants as they would without AI. And so the bottom line is, there are two sides to everything. And yes, can a company save a whole lot of time getting through that first level of applications using AI tools?

(07:34):
Yeah, absolutely. But on the other hand, will that time saving actually result, in the big picture, in a saving, if it costs you the ability for that perfect person to get in front of you and get hired, as opposed to somebody who is not going to be that productive? So this could affect transparency. This could affect employment. Are there any industries that you think should particularly be watching out for the implications of deregulation?

(08:02):
Well, you know, as I think about it, it's not a specific business, it's not a specific vertical, but it is just a general caution you should have about AI results. You know, we've talked about this before, like the brief written by AI that some lawyers put before a judge. And the judge was like, this is garbage. You didn't write this.

(08:24):
It doesn't even address the issue. You know, I'm going to get you censured. And so that sort of thing is super important, and these things have real-world results that impact people's lives if you're not cautious and don't at least read through and fact-check anything that AI gives you, especially in a context like that, where it's not just that you might piss off some prospective customer, but that you could actually lose your law license.

(08:51):
So in other words, even if we're in some version of the future where there's no pressure for oversight, oversight is still very important and necessary. Yeah, exactly. I think that's a perfect example. In an unregulated world, you have to provide the oversight yourself, and you'd be really ill-advised not to do that. And I gotta say, I just laugh every time I think about our last episode, where we ran into these just absurd hallucinations by ChatGPT.

(09:22):
It's almost unbelievable. Yeah, it's kind of mind-blowing, and also mind-blowing how frequently that seems to be passing under people's noses, even in a professional setting. So on the other end of that, Peter, I'll ask you: in terms of protecting your own data and privacy, whether that's personally or for a job in which you're using an AI tool, what should people be doing to be more diligent with the information that they might be sharing with large language models?

(09:51):
Yeah, I think that you have to have some level of trust, where if the LLM is saying, we are not using this to train our models, you have to sort of go with that. So be cautious, you know, and keep your eyes out for that being proven wrong. But, you know, there are settings with all the LLMs where you can allow it to train.

(10:12):
You can cut off the ability to train, although there might be a couple that don't allow that. But I would highly recommend against letting your data be used, because it's your data. It could be proprietary stuff that has value to you. There could be, like, IP considerations, and it's just super important to be cautious in this world.

(10:33):
So I read a recent article on TechCrunch talking about Sam Altman's stated goals for ChatGPT in terms of having it function as a memory layer. I think the words that he used were having ChatGPT remember everything about a user's life so that it can become more and more useful and kind of function as a personal agent. If AI does become something that remembers everything about us, where do you think we need to draw the line, both as a user and as a builder of AI? I know, Peter, you're working on an AI agent, so from that angle too, I'd be curious to hear.

(11:10):
Well, I mean, on one hand, if your dream is to never die and you're, like, satisfied with having an AI agent who is you carry on your legacy for you, I guess that's something you wouldn't be that disturbed about. But that's not my goal. Having said that, privacy is a really important thing, especially in a world where there are bad actors, because if bad actors have a particular prejudice, or a particular agenda that you might stand in the way of, the ability to access everything about a person is dangerous.

(11:49):
Period. That's my thought. You know, although this is a different topic, this conversation in a lot of ways reminds me of some of the realizations that a lot of us had maybe five or ten years ago, when everybody just started to learn a little bit more about the way that algorithms worked on social media, the way that we were sharing a lot of data and giving up a lot of privacy to Meta and various other social media platforms without really realizing it or thinking it through.

(12:18):
Now, many of us are aware of how social media stores and uses our data. But at the same time, a lot of those algorithms and approaches are still a black box. And with that known, I feel like there are more people approaching social media use with caution today. I think that cautious tendency is probably a good mentality to bring to AI use as well.

(12:41):
Yeah, no, I really agree. I mean, talking about social networks, you know, we've seen a couple of waves of how social networks affect politics. And this is not meant to be political at all; it's meant to be a technology discussion. And I think in 2008, the people that Barack Obama surrounded himself with really understood the power of social networks.

(13:08):
It was still sort of in its infancy in a lot of ways, as far as being this ubiquitous thing, but they understood it and they harnessed it. And I think that's a good part of how Barack Obama was elected president in 2008. Now, Trump in 2016 also did some groundbreaking things, and he used people like Cambridge Analytica to approach social networks and social media from a whole different perspective.

(13:36):
And it was very, very effective for him. And I'm making no judgment about one or the other, but I think those are examples of how the power of social media is still something that needs regulation, just like AI does. I think there need to be regulations on social media, because it has the ability to cause, and I know it's been responsible for, different populations being discriminated against, sometimes even worse than discrimination.

(14:06):
And I think there needs to be a lot of scrutiny on making sure that's used for good. And to loop back to kind of what we were talking about in the beginning: you know, if AI is determining, like, someone's housing eligibility in a particular state or whatever, there's a very similar implication that there needs to be regulation on how that's being determined.

(14:27):
Are there biases at play? Is the computation accurate? You know, all of that. Yeah, I agree. And you know what, it looks like we are at time, Peter. But with that, we can continue to come back to this topic of AI regulation and deregulation as it continues to develop. And I'm actually going to put a little addendum in here.
I thought you were going to ask me about a reality show. And that question is a shout-out to Jay Schwedelson, who has the most entertaining marketing podcast in the world. It's called Do This, Not That. If you haven't heard it, you should definitely check it out. He definitely takes on some of the hard questions about marketing and reality TV.

(15:06):
And I have to admit, I've actually gotten some TV show recommendations from Jay's podcast. That's awesome. All right. Thanks, everybody. I'll just remind everybody, if you liked what you heard, please leave us a review on whatever platform you're listening on, and make sure to subscribe to our newsletter, which I always link in the show notes. Check out InboundAV at inboundav.com, as in Accelerated Velocity.

(15:33):
And thank you for listening. See you next week.