
May 1, 2024 22 mins

In this episode, we discuss how we might protect prompt-based applications and LLMs from prompt injection. We look at how data validation was done as far back as the 1960s, and at modern libraries and techniques that can act as a first line of defense against prompt injection. We also explore the idea that other types of models, such as decision trees, conventional NLP pipelines, embedding models, or neural networks trained on datasets different from typical LLM training data, could be used to validate inputs before they are sent to an LLM.


Continue listening to The Prompt Desk Podcast for everything LLM & GPT, Prompt Engineering, Generative AI, and LLM Security.
Check out PromptDesk.ai for an open-source prompt management tool.
Check out Brad’s AI Consultancy at bradleyarsenault.me
Add Justin Macorin and Bradley Arsenault on LinkedIn.

Please fill out our listener survey here to help us create a better podcast: https://docs.google.com/forms/d/e/1FAIpQLSfNjWlWyg8zROYmGX745a56AtagX_7cS16jyhjV2u_ebgc-tw/viewform?usp=sf_link


Hosted by Ausha. See ausha.co/privacy-policy for more information.
