Artificial intelligence chatbots have come a long way in a remarkably short time.
Each release of ChatGPT brings new features, such as voice chat, along with updates to the training data fed into the system, all intended to make it smarter.
But are more leaps forward a sure thing? Or could the tools actually get dumber?
Today, Aaron Snoswell from the generative AI lab at the Queensland University of Technology discusses the limitations of large language models like ChatGPT.
He explains why some observers fear ‘model collapse’, where more mistakes creep in as the systems start ‘inbreeding’, consuming ever more AI-created content instead of original human-created work (a toy illustration of that feedback loop follows below).
Aaron Snoswell says these models are essentially pattern-matching machines, a trait that can lead to surprising failures.
He also discusses the massive amounts of data required to train these models and the creative ways companies go about sourcing it.
The AI expert also touches on the concept of artificial general intelligence and the challenges in achieving it.
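For listeners curious about the statistics behind the ‘model collapse’ fear, here is a minimal, purely illustrative Python sketch (not drawn from the episode; all numbers and names are chosen just for the example). Each generation fits a simple model to its training pool, then the next generation trains only on samples produced by that model, with no fresh human data mixed in. Over enough generations the spread of the data shrinks toward zero, a crude stand-in for the loss of diversity that model collapse describes.

```python
import numpy as np

rng = np.random.default_rng(0)

# Generation 0: a small pool of "human-created" data with a healthy spread.
data = rng.normal(loc=0.0, scale=1.0, size=50)

for generation in range(1, 201):
    # "Train" a model on the current pool: here the model is just a Gaussian
    # summarised by the sample mean and standard deviation.
    mu, sigma = data.mean(), data.std()

    # The next generation's training pool is entirely model-generated:
    # samples drawn from the fitted model, with no new human data added.
    data = rng.normal(loc=mu, scale=sigma, size=50)

    if generation % 50 == 0:
        print(f"generation {generation:3d}: spread of training data = {data.std():.3f}")
```

Running the sketch prints a standard deviation that drifts steadily downward: rare, extreme examples stop being reproduced, which is the statistical flavour of the ‘inbreeding’ problem described above.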
Featured:
Aaron Snoswell, senior research fellow at the generative AI lab at the Queensland University of Technology
Key Topics:
Limitations of large language models like ChatGPT
Model collapse and AI ‘inbreeding’
The data demands of training AI models and how companies source that data
Artificial general intelligence and the challenges of achieving it