May 10, 2025 · 24 mins

Making Sense of Artificial Intelligence: Why Governing AI and LLMs is Crucial

Artificial intelligence (AI) is changing our world rapidly, from the tools we use daily to complex systems impacting national security and the economy. With the rise of powerful large language models (LLMs) like GPT-4, which are often the foundation for other AI tools, the potential benefits are huge, but so are the risks. How do we ensure this incredible technology helps society while minimizing dangers like deep fakes, job displacement, or misuse?

A recent policy brief from experts at MIT and other institutions explores this very question, proposing a framework for governing artificial intelligence in the U.S.

Starting with What We Already Know

One of the core ideas is to start by applying existing laws and regulations to activities involving AI. If an activity is regulated when a human does it (like providing medical advice, making financial decisions, or hiring), then using AI for that same activity should also be regulated by the same body. This means existing agencies like the FDA (for medical AI) or financial regulators would oversee AI in their domains. This approach uses familiar rules where possible and automatically covers many high-risk AI applications because those areas are already regulated. It also helps prevent AI from being used specifically to bypass existing laws.

Of course, AI is different from human activity. For example, artificial intelligence doesn't currently have "intent," which many laws are based on. Also, AI can have capabilities humans lack, like finding complex patterns or creating incredibly realistic fake images ("deep fakes"). Because of these differences, the rules might need to be stricter for AI in some cases, particularly regarding things like privacy, surveillance, and creating fake content. The brief suggests requiring AI-generated images to be clearly marked, both for humans and machines.
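As one illustration of what machine-readable marking could look like, the sketch below embeds a provenance label in a PNG's metadata using the Pillow library. The tag names and values are assumptions made for illustration, not a standard the brief endorses; real-world efforts such as C2PA define richer, tamper-evident provenance formats.

```python
# Sketch: embed a machine-readable "AI-generated" label in PNG metadata.
# Tag names and values here are illustrative assumptions; production
# systems would use a standardized, tamper-evident format (e.g., C2PA).
from PIL import Image
from PIL.PngImagePlugin import PngInfo

def mark_as_ai_generated(in_path: str, out_path: str, generator: str) -> None:
    """Save a copy of a PNG image with an AI-provenance text chunk."""
    image = Image.open(in_path)
    meta = PngInfo()
    meta.add_text("ai_generated", "true")  # machine-readable flag
    meta.add_text("generator", generator)  # which system produced it
    image.save(out_path, pnginfo=meta)     # out_path assumed to end in .png

def is_marked_ai_generated(path: str) -> bool:
    """Readers (platforms, browsers, auditors) can check for the flag."""
    return Image.open(path).text.get("ai_generated") == "true"
```

Metadata like this is trivial to strip, which is one reason the brief calls for marking aimed at both machines and humans rather than hidden tags alone.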

Understanding What AI Does

Since the technology changes so fast, the brief suggests defining AI for regulatory purposes not by technical terms like "large language model" or "foundation model," but by what the technology does. For example, defining it as "any technology for making decisions or recommendations, or for generating content (including text, images, video or audio)" might be more effective and align better with applying existing laws based on activities.

Knowing How AI Works (or Doesn't)

  • Intended Purpose: A key recommendation is that providers of AI systems should be required to state what the system is intended to be used for before it's deployed. This helps users and regulators understand its scope.
  • Auditing: Audits are seen as crucial for ensuring AI systems are safe and beneficial. These audits could check for things like bias, misinformation generation, or vulnerability to unintended uses. Audits could be mandated by the government, demanded by users, or used by courts to assess responsibility. They can happen before (prospective) or after (retrospective) deployment, each with its own challenges regarding testing data or access to confidential information. Public standards for auditing would be needed because audits can potentially be manipulated (see the sketch after this list for one example of a retrospective audit check).
  • Interpretability, Not Just "Explainability": While perfectly explaining how an artificial intelligence system reached a conclusion might not be possible yet, the brief argues AI systems should be more "interpretable". This means providing a sense of what factors influenced a recommendation or what data was used. The government or courts could encourage this by placing more requirements or liability on systems that are harder to interpret.
  • Training Data Matters: The quality of the data used to train many artificial intelligence systems is vital. Since data from the internet can contain inaccuracies, biases, or private information, mechanisms like testing, monitoring, and auditing are important to catch problems stemming from the training data.
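To make the auditing idea concrete, here is a minimal sketch of one retrospective audit check: measuring whether a system's positive-outcome rate differs across demographic groups (a demographic-parity gap). The function name, the threshold, and the toy data are illustrative assumptions, not anything prescribed by the brief.

```python
# Minimal sketch of a retrospective bias audit (illustrative only).
# The metric, threshold, and data below are assumptions, not anything
# prescribed by the MIT policy brief.
from collections import defaultdict

def demographic_parity_gap(decisions, groups):
    """Largest difference in positive-decision rates across groups.

    decisions: list of 0/1 outcomes the audited system produced
    groups:    list of group labels, aligned with decisions
    """
    totals, positives = defaultdict(int), defaultdict(int)
    for decision, group in zip(decisions, groups):
        totals[group] += 1
        positives[group] += decision
    rates = {g: positives[g] / totals[g] for g in totals}
    return max(rates.values()) - min(rates.values()), rates

if __name__ == "__main__":
    # Toy audit log: a hiring tool's decisions plus an applicant attribute.
    decisions = [1, 0, 1, 1, 0, 0, 1, 0]
    groups    = ["a", "a", "a", "a", "b", "b", "b", "b"]
    gap, rates = demographic_parity_gap(decisions, groups)
    print(f"positive rates by group: {rates}")
    print(f"parity gap: {gap:.2f}")
    # An auditor might flag the system if the gap exceeds an agreed
    # threshold, e.g. 0.2 -- the threshold itself is a policy choice.
    if gap > 0.2:
        print("flag: decision rates diverge across groups")
```

A real audit would look at many such metrics at once, which is why the brief stresses public auditing standards: a single, self-chosen metric is easy to game.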

Who's Responsible? The AI Stack and the "Fork in the Toaster"

Many AI applications are built using multiple AI systems together, like using a general LLM as the base for a specialized hiring tool. This is called an "AI stack". Generally, the provider and user of the final application should be responsible. However, if a component within that stack, like the foundational artificial intelligence model, doesn't perform as promised, its provider might share responsibility. Those building on general-purpose AI should seek guarantees about how it will perform for their specific use. Auditing the entire stack, not just individual parts, is also important due to unexpected interactions.
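To illustrate the "stack" idea, here is a hypothetical sketch in which a hiring tool wraps a general-purpose model, the component provider states a guarantee, and an audit exercises the composed system end to end rather than each part in isolation. All class names, the guarantee wording, and the scoring logic are invented for illustration.

```python
# Hypothetical sketch of an "AI stack": a specialized application built
# on a general-purpose model. All names and guarantees are invented.
class FoundationModel:
    """General-purpose component, supplied with a stated guarantee."""
    guarantee = "scores text in [0, 1]; not validated for hiring decisions"

    def score(self, text: str) -> float:
        return min(len(text) / 1000.0, 1.0)  # stand-in for a real model

class HiringTool:
    """Final application; its provider and user hold primary responsibility."""
    def __init__(self, base: FoundationModel):
        # A builder should check the component's stated guarantee covers
        # the intended use before relying on it.
        assert "hiring" not in base.guarantee or "not validated" not in base.guarantee, \
            "component not warranted for this use; seek a guarantee"
        self.base = base

    def shortlist(self, resume: str) -> bool:
        return self.base.score(resume) > 0.5

def audit_stack(tool: HiringTool, resumes: list[str]) -> bool:
    # End-to-end audit: exercise the composed system, since components
    # that pass in isolation can still interact in unexpected ways.
    return all(isinstance(tool.shortlist(r), bool) for r in resumes)
```

The point of the sketch is the division of labor: the component provider is accountable for the guarantee it states, while the application provider is accountable for checking that guarantee and for the behavior of the assembled whole.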

The brief uses the analogy of putting a "fork in the toaster" to explain user responsibility. Users shouldn't be held responsible if they use an AI system irresponsibly in a way they weren't clearly warned against; conversely, users who ignore explicit warnings, like someone who sticks a fork in a toaster, bear responsibility for the consequences.
