
February 21, 2024 • 32 mins

Welcome to the very first episode of the Testing Experts with Opinions podcast for 2024 (Part 1), a platform for engaging discussions about software testing trends, tools, and technology. This episode opens with introductions of the Inspired Testing team members and their roles: test architects Stephan Coetzee and Mamatla Rantidi, data testing specialist Mathew Zungu, training-focused head of consulting Jehan Coetzee, and CTO Leon Lodewyks.

Emphasizing the ever-changing software testing landscape, Leon highlights the importance of incorporating new tools and technologies to provide better value to clients. He stresses the need to stay relevant in the industry through continuous training.

This episode lays particular emphasis on software quality engineering trends for 2024. Stephan leads an intriguing discussion about the role of AI in testing. Framing AI as a tool that enhances the testing process, he emphasizes that AI should address real business challenges, from improving operational efficiency to enhancing accuracy. The team also shares practical advice for testers on AI tool selection and usage, stressing the need to identify business needs before adopting any AI tool.

While discussing the use of AI, potential concerns about AI models learning from private data are brought up. Companies are advised to create an in-house bot to retain control over their sensitive data. The debut episode wraps up with a look into the future, predicting that 2024 will see an increase in API testing due to the growing adoption of microservices architecture and AI-integrated applications.

This episode offers a comprehensive analysis and speculative overview of software testing's future, challenging traditional thinking about API testing and ushering in an era where AI plays an integral role.


Episode Transcript

Available transcripts are automatically generated. Complete accuracy is not guaranteed.
(00:05):
Welcome to Testing Experts with Opinions, an Inspired Testing podcast aimed
at creating conversations about trends, tools, and technology in the software testing space.
I'd like to welcome all of you to our first podcast of 2024.
It's an idea that we had after having interesting conversations as a team and

(00:29):
we thought it might be interesting to other people as well.
So we're going to try and do this regularly depending on whether people find
it interesting and valuable.
But before we start, I'd just like to introduce everyone.
So do you guys just want to go around and just quickly do an introduction?
Well, I'll start. Thanks, Leon. Stephan Coetzee here.

(00:52):
Excited and slightly nervous about the podcast, but I'm sure it's going to be a fun experience.
My role in the team is that of test architect, fulfilling a bit of an ancillary
role across all the different teams, helping with standards and processes and
building up the maturity for all the different services that we offer.
Currently, I'm heading up and helping to build out our test data management

(01:15):
service offering, through which we plan to assist different clients with test data management needs such as test data obfuscation, subsetting, synthetic test data generation, interesting things like that.
So that's top of mind, but it's one of many things we're all focusing on.
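
For anyone who wants to experiment with the synthetic test data generation Stephan mentions, here is a minimal sketch using the open-source Faker library; the customer schema is our own illustrative assumption, not a description of Inspired Testing's actual service offering.

```python
# Minimal sketch: generating synthetic customer records with Faker.
# Assumes `pip install faker`; the schema below is illustrative only.
from faker import Faker

fake = Faker()

def synthetic_customers(count: int) -> list[dict]:
    """Generate `count` fake customer records for test environments."""
    return [
        {
            "name": fake.name(),
            "email": fake.email(),
            "address": fake.address(),
            "created": fake.date_time_this_year().isoformat(),
        }
        for _ in range(count)
    ]

if __name__ == "__main__":
    for row in synthetic_customers(3):
        print(row)
```
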
Okay. Thanks, Leon. And hello to our listeners.

(01:38):
I'm Mamatla Rantidi. I'm a test architect at Inspired Testing,
responsible for overseeing the test environments and release management as a service.
So that is me in short.
Matt, do you want to go next? Yes, thank you, Leon. My name is Mathew Zungu.
I'm also a test architect within Inspired Testing.

(01:59):
I have a great passion and interest in test automation, and I enjoy working with a whole lot of technology suites.
Currently, I'm responsible for implementing a data testing service, which comprises quite a number of segments, like ERP system testing, data warehouse testing, data migration, which also includes cloud, as well as big data testing.

(02:22):
So that is what I currently do, and automation is my passion. Thank you, Leon.
Cool. Thank you, Matt. Jehan Coetzee, head of consulting at Inspired Testing.
I also sit on the architecture team, and I look after our training.
Nice to meet you. Okay, excellent. Okay, so I posted something on LinkedIn.

(02:44):
Okay, maybe I should introduce myself as well. That's probably important.
So I'm Leon Lodewyks. I'm the CTO. I work very closely with this team who just
introduced themselves.
Just a bit of context around this team.
So this team is very much focused on moving our company forward from a testing

(03:06):
perspective. So, what new tools and technologies are out there that we need
to incorporate into what we're doing?
How do we need to adapt our services? Do we need to introduce new services?
We all know that, I guess, the testing landscape changes at a rapid pace and
every year there are new concepts and new tools and technologies to look at.

(03:29):
So the role of this team, or part of it, is to look at how we incorporate those changes in the underlying market into what we do, and how we ultimately provide more value to our clients from a testing perspective.
We talk about relevance often, not just Inspired Testing being relevant as a

(03:55):
testing company, but we actually take great responsibility in our consultants, in our employees, having the relevant skill set to stay relevant in the testing industry.
So we like to look at what's happening out there, attend lots of webinars,

(04:15):
do a lot of reading, et cetera, I guess do a lot of R&D. And whatever we find works ultimately needs to flow back into our consultants and the skill set there. So what training do we need to give our people to potentially upskill or reskill them, to keep them relevant?

(04:37):
I've often said this: if you sit on your hands and you do nothing for a couple of years in the testing space, you're probably going to become irrelevant quite quickly, and your skill set is going to be old and maybe not exactly what's required. So that's definitely part of the role; it's a big responsibility we have and one we take seriously.

(04:57):
So I posted something on LinkedIn, I think in early January,
and it was kind of my thoughts and predictions for 2024 from a software quality
engineering perspective.
And I guess what I thought the trends would be this year or what the prevalent
areas within testing would be.
So I thought for this first podcast, I know it's actually the 14th of February

(05:21):
today, Valentine's Day, so we're already well into Q1.
But we're still going to have sort of a kickoff for 2024, keeping on that sort of theme.
So I think what would be good is if we can get from each of you what you feel
are going to be the most important, I guess, predictions or thoughts in terms

(05:42):
of software quality engineering for this year, for 2024.
What are those things that we're definitely focusing on or maybe not focusing
on yet, but you feel that maybe it's the next big thing. It's the thing that we should be looking at.
So I'm going to keep the same order that we did the introductions in.

(06:02):
So I think, Stefan, you went first.
So do you want to kick off with your first thought or your first prediction?
Sure, Leon, thanks for the opportunity. Yeah, I guess the obvious one that everybody was expecting us to talk about, and I think the reason people are expecting us to talk about it is because it's so relevant, is AI, of course. So I was thinking about AI, and, you know, it can be daunting

(06:26):
just talking about it in general. I tried to break my thoughts up into two components. The one is us as a testing community in 2024, you know, in terms of consumers of AI, in terms of AI-augmented testing: how do we think of AI helping us to become more efficient as test analysts

(06:47):
or a testing community. And the other one touches a little bit more on us helping the AI, or how we can test it, specifically on the prompt engineering side. So, just in terms of us consuming or using AI to our advantage, in other words AI-augmented testing,

(07:08):
I think I'd just like to touch on some of the advantages, or things that we can use AI for. I was looking at things that people are very excited about, and I think one of the first is the ability for AI to help us with test creation, both manual and automated. I mean, it's the creation of automated and manual tests, and also

(07:30):
the ability to automatically update those tests using AI. I just thought I'd mention a few tools that I came across. testRigor is a very interesting one: I was watching a video earlier today about how you can use natural language processing, so you basically write a prompt and it generates test steps for you, automates them, and runs those

(07:53):
tests on web, API, and mobile. It's actually so cool to see how that operates.
And I think tools like that are going to surface more and more this year.
UiPath, originally for RPA, Robotic Process Automation, is also a tool that's coming more to the forefront in terms of writing automated tests.

(08:14):
And they've also now got a feature where you can use natural language to automate scripts for simulating business processes.
Applitools has bought a low-code automation platform, Preflight.
Katalon has got TrueTest, also for the creation of regression tests, by actually

(08:37):
monitoring end users live on the actual system, in production; they track heat maps of user activities, and that helps them to create regression tests.
I mean, these things are super amazing.
What I just found is there are so many. I think there's a big responsibility on us as a testing community this year to stay up to date, to know what

(08:59):
AI features tools have. Some might be smoke and mirrors and others might be the real thing, and I think it's going to be our responsibility to differentiate the two, leverage the ones that are the real deal, and stay up to date, because it's such a fast-moving industry at this stage.
You guys are welcome to interrupt me at any time, but there are many things.

(09:22):
I quickly want to ask something, Stephan. So you mentioned a lot of the, I guess, commercial tools, the Applitools and the UiPaths of the world. And I'm obviously not shooting them down or dismissing what they've come out with.
But it feels to me, based on conversations, that maybe that's not where you

(09:45):
necessarily need to go next.
So, there are things like ChatGPT and now Gemini, and Gemini is getting better and better.
But still, there are people out there that don't actually realize how useful it is for testing.
So, what you mentioned, and I don't think it was one of the two I just mentioned,
but being able to create test cases and get them automated.

(10:05):
So, I mean, we know today you can use ChatGPT as an example to provide it with
some sort of an architecture overview or requirements document and it can spit out test cases for you.
You can ask it to convert those to BDD scenarios and then even ask it to go
and write the step implementation or the automation steps associated.
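
To make that workflow concrete, here is a minimal sketch using the OpenAI Python client; the model name, prompt, and requirement text are all our own assumptions, and any generated scenarios would still need human review before automation.

```python
# Minimal sketch: asking an LLM to turn a requirement into BDD scenarios.
# Assumes `pip install openai` and an OPENAI_API_KEY environment variable;
# the model name and requirement text are illustrative assumptions.
from openai import OpenAI

client = OpenAI()

requirement = "Users can reset their password via an emailed one-time link."

response = client.chat.completions.create(
    model="gpt-4o-mini",  # assumed model; use whatever your subscription offers
    messages=[
        {"role": "system",
         "content": "You are a test analyst. Write Gherkin BDD scenarios."},
        {"role": "user",
         "content": "Write Given/When/Then scenarios, including negative "
                    f"cases, for this requirement:\n{requirement}"},
    ],
)

# The generated scenarios still need human review before automation.
print(response.choices[0].message.content)
```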

(10:30):
So, I'm kind of questioning: what are these tools?
And we don't have to go into that now. It's maybe a conversation for another day.
But what are those tools where there's a license fee associated going to give
you above and beyond what I just said?
So if there's a, I guess, well, even ChatGPT isn't necessarily free because

(10:53):
probably most of what you want to get out of it is a subscription-based one.
But it's still relatively cheap at, I think, $20 a month.
But I have a feeling that these will be a lot more expensive.
And maybe you don't know, but do you see there's a massive benefit in going
with sort of a commercial AI tool at this point?

(11:14):
Or do you feel that what's there in terms of ChatGPT and Gemini,
et cetera, can probably provide enough of that AI-assisted testing or AI-augmented
testing to actually just make you more efficient?
Definitely. I think there's definitely a case for that. I mean,
these were just some of the tools, like you say, commercial tools,

(11:35):
but us as a community really understanding how to leverage things like ChatGPT, for example, and seeing how we can optimally benefit from it using the scenarios you just mentioned, that's exactly it, right?
So we need to understand how to be good prompt engineers, how to get the best
out of it and be creative.
I mean, I think there's a big creative element to this as well, especially

(11:57):
when you look at things like Google Gemini, where you can now, in a way, write these things into your automated scripts to do visual testing. If you look at these things, I think the sky's the limit; there are a lot of opportunities without necessarily having to rely on these commercial tools. So yeah, definitely. Are there any other opinions or anything to add?
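
As a rough illustration of wiring a model like Gemini into a script for a visual check, here is a sketch using Google's google-generativeai package; the model name, prompt, and screenshot path are illustrative assumptions, not a recommended workflow.

```python
# Rough sketch: asking Gemini whether a screenshot looks broken.
# Assumes `pip install google-generativeai pillow` and a GOOGLE_API_KEY;
# the model name and screenshot path are illustrative assumptions.
import os

import google.generativeai as genai
from PIL import Image

genai.configure(api_key=os.environ["GOOGLE_API_KEY"])
model = genai.GenerativeModel("gemini-1.5-flash")  # assumed model name

screenshot = Image.open("checkout_page.png")  # e.g. captured by Selenium

response = model.generate_content([
    "You are a visual QA assistant. Does this checkout page show any "
    "obvious layout defects (overlapping text, cut-off buttons, missing "
    "images)? Answer PASS or FAIL with a one-line reason.",
    screenshot,
])

print(response.text)
```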

(12:21):
Yeah, I'm just thinking about what you said, Stephan, and your question, Leon, around AI-augmented testing and then these tools.
I think that, to me, those are two separate things, right?
So the first one is how do I as a tester use AI to test more effectively and more efficiently.
And, as you said, AI sounds like the scary thing, but in the past,

(12:43):
we would Google, how do I write a test case?
If someone asked you in the Agile team, send me a test report,
you went to Google, what does a test report look like? And you found a template, right?
And AI, to me, in that sense, and in the LLM sense of it, is replacing that.
So it allows us to Google better: rather than going to Stack Overflow, for example, I have a personal assistant that helps me with the things I want

(13:06):
to know. So I think from that perspective, we as testers shouldn't be scared to say, this is AI.
We should say, well, this is just a better way of finding information,
whether that's Google or Bing or anything else.
It's just now called something different, and it provides me better information quicker.
So it shouldn't be a scary thing for us in our day job as testers.

(13:27):
When it comes to these tools, I think what they are doing is,
as we are using AI to augment our testing, they are using AI to augment their
tools and make their tools smarter.
And then us using those tools, we get the benefit from that.
But it's not really us as testers using AI if you use one of those tools.
They are using AI for self-healing, for example, or predictive analysis,

(13:52):
or writing test cases from a screen. So they are already incorporating these
things into the tool, and we are the users of the tool.
And then we benefit from the AI they include in the tool.
So that's the one part. The other part, as I said, is we are using AI or just
a better way of finding information to make our testing more effective and more efficient.

(14:12):
Yeah. And just, Mamatla, just before you go, I think there's something important here.
And that is: we're not talking about testing of AI and testing of AI models.
We're talking about using AI to actually test.
So this is an assistant or an augmentation of your existing testing skills and knowledge.

(14:34):
Testing of AI is obviously a completely different thing, and one we definitely need to discuss next time. Mamatla? Yeah.
Okay, thanks. Thanks, Leon. Just to add to what Jehan said, or what you have just commented on: with AI, we're just augmenting testing.
And I also looked at your LinkedIn article, where you listed the predictions for 2024.

(14:59):
And what I liked about that is that AI is not here to replace testers; rather, we should embrace the capability of AI to improve testing.
So I think we should understand that when we say AI-augmented testing,

(15:19):
it does not mean replacing testers.
Absolutely. Hey, thanks, Mamatla. You took the words right out of my mouth; that's what I wanted to reiterate.
Yeah, but basically, Stephan, you are spot on there.
AI is here to aid testers in this market just for us to be more productive, to be more efficient.
For example, you know, AI is being used to integrate with IDEs.

(15:43):
You can easily write your test cases while you're scripting.
It can suggest and there's also a whole lot of prediction that it can do while
you're typing and writing your automation code. So definitely,
yes, AI is here to aid us, but not to kick us out of the market.
All we have to do as testers is to stay relevant and embrace it in our day-to-day jobs.

(16:07):
I just want to go back to something that Stefan said, and it's around the fact
that there are so many AI tools out there, but also they're growing exponentially every day.
I subscribe to a newsletter related to AI, basically an AI report, and they list tools.
And just the number of new AI companies and tools listed on there every single

(16:29):
day is just astronomical.
So I think what starts making it difficult is for someone in testing out there
to know, well, which tools should I actually start with or which tools are good
or which ones should I use?
Any advice or suggestions on that?
I would say use none of them, right, until you have a use case to use one of them, right?

(16:55):
So if you use a LLM bot as an assistant, absolutely do that, right?
But if you look at all these tools out there, if you're going to try and find
a problem for these solutions, you're going to look for a long while, right?
But if you have a problem in your business to say, all right,
I am an automation engineer.
I'm using a framework. My scripts are flaky; they are

(17:17):
failing because locators are changing. Is there an AI tool that can assist me with that? Oh yes, one of these already has self-healing tests built in. So that is a solution, and it's an AI solution that I can incorporate into my process. So I would, and it's a strange answer, but don't use any of them: use the LLM that's there as a bot to help me as an assistant, and if I face a problem, then go find

(17:41):
the right solution for that, and then make sure you find one that hopefully has some AI built in that is smarter than the others. That would be my approach.
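
As a rough sketch of the self-healing idea Jehan describes: commercial tools do this with trained models, whereas the toy version below simply falls back through alternative locators and logs the heal. Every locator and URL in it is an assumption.

```python
# Toy sketch of a "self-healing locator": try the primary locator, then
# fall back to alternates and flag the heal for review. Commercial AI
# tools do this far more cleverly; this only illustrates the idea.
# Assumes `pip install selenium` and a local Chrome driver.
from selenium import webdriver
from selenium.webdriver.common.by import By
from selenium.common.exceptions import NoSuchElementException

def find_with_healing(driver, locators):
    """Try each (by, value) locator in order; report when we 'heal'."""
    primary, *fallbacks = locators
    try:
        return driver.find_element(*primary)
    except NoSuchElementException:
        for alternate in fallbacks:
            try:
                element = driver.find_element(*alternate)
                print(f"Healed: {primary} -> {alternate}")  # flag for review
                return element
            except NoSuchElementException:
                continue
        raise

driver = webdriver.Chrome()
driver.get("https://example.com/login")  # illustrative URL
login = find_with_healing(driver, [
    (By.ID, "login-btn"),  # primary locator, which may have changed
    (By.NAME, "login"),    # fallback candidates
    (By.XPATH, "//button[contains(., 'Log in')]"),
])
login.click()
driver.quit()
```
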
That's such a valuable point in that it feels to me like as soon as AI became
a thing, everyone was now looking for a reason to use it as opposed to finding
a problem and then using it.

(18:03):
So let AI address a real business problem, or a real problem in your day-to-day, as opposed to just trying to find a reason to use it. But when you say just use the LLM, Jehan, I assume you mean just use ChatGPT as a start, or use something like Google

(18:23):
Gemini. Don't go and buy a tool, don't go and find something, unless you've, I guess, identified a problem within, I want to say a business problem, but obviously this is a testing-related conversation.
So within testing, there's an issue which maybe Gemini or ChatGPT can't address, can't help you with.

(18:44):
Then go look at, well, what else is there? But use that as the starting point.
Okay. Yeah. So, Jehan, on your suggestion, I wanted to find out: if we use ChatGPT, right?
So, that model uses data to learn.
So, how would you advise companies to avoid putting their sensitive data

(19:06):
in ChatGPT to help with testing?
Because I think that's going to be one of the main concerns: that the model is going to learn using their private data.
And I think that's a real concern, Mamatla, and a great question, right? So there are two things to consider there.
The first one is that, just as when you go into Google and you search

(19:28):
for something, that doesn't become the new answer, right?
So if you use ChatGPT to say, write a test strategy for Inspired Testing, it doesn't take that and save it so that, for everyone after that, that's how a test strategy will look, right?
So even if you give information, you are asking it and prompting it to give you an answer back.

(19:49):
It doesn't take that information necessarily and save that and use that in subsequent things.
So I've tried to test that as much as I could, giving it information and trying to recall that information from it, but it didn't learn it.
I'm not sure about the numbers, but it costs a considerable amount to retrain these models with new data.
So it's not something that happens often. I think that's the first thing.

(20:11):
The second thing from a company perspective, and there I agree with you,
it's always better to be safe, right?
So what we're doing at Inspired Testing as well is creating our own version of a bot, built on our information and controlled within our environment.
So we have full control of that data.
And as you said, even if it is sensitive information, it just stays in our domain.

(20:34):
So that's another way, if you're really embracing it as an organization, to make sure that you are in control: host it yourself on any of these cloud providers, and then you are in full control of the information going in and out, because that information doesn't go out of your domain.
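
For teams exploring the in-house bot route Jehan describes, many self-hosted stacks (vLLM and Ollama, for example) expose an OpenAI-compatible endpoint. The sketch below assumes such an endpoint; the host name and model are purely illustrative.

```python
# Minimal sketch: pointing the OpenAI client at a self-hosted,
# OpenAI-compatible endpoint so prompts never leave your network.
# The base_url and model name are hypothetical; vLLM and Ollama are
# examples of stacks that expose this kind of API.
from openai import OpenAI

client = OpenAI(
    base_url="http://llm.internal.example:8000/v1",  # hypothetical internal host
    api_key="not-needed-internally",                 # many local stacks ignore this
)

response = client.chat.completions.create(
    model="llama-3-8b-instruct",  # whichever model you host
    messages=[
        {"role": "user",
         "content": "Draft boundary-value test cases for a field that "
                    "accepts amounts from 0.01 to 10000.00."},
    ],
)
print(response.choices[0].message.content)
```
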
Okay, thank you. Yeah, and I like the fact that we have our own internal model,

(20:54):
which will help us mitigate that risk.
Okay, I wanted to just touch on that point, Leon, where you asked,
how does one really determine which tool to use?
Yes, Jehan, I understood your point that at times you shouldn't go for any of them.
But there are times where maybe as an architect, I need to come up with a solution

(21:16):
I need to provide.
I think there are quite a number of ways that we can determine which is possibly the best tool to use.
For example, the first thing you need to do is a detailed analysis of your problem, like what are you trying to solve for?
Obviously, my first take would be to go for the open-source and freely available options, like ChatGPT.

(21:37):
Now you can spin up your LLM within a few minutes.
You can spin up anything for you to be able to test whatever you want to test.
So basically, the first thing that one needs to do is just to explore all open-source options before you go for any commercial tool.
Depending on time constraints, you can then explore other options.

(21:58):
But usually what I've observed is while you're exploring the open source,
there are things that will start unfolding.
And before you know it, you realize that, OK, this is exactly the tool that
I want to go for. So my recommendation is: why not use ChatGPT for a start?
Yes, Google is there; there are a lot of answers.

(22:19):
You can also join other sorts of forums where people share information,
but definitely go for the open source first, explore.
And when you hit the real limitations of the open source, that's when you can consider a commercial tool.
Yeah, and maybe I'm sounding like a salesman here again for commercial tools, but I think there is a case to say that if you've already got tools in your stack,

(22:43):
one of them may possibly be a commercial tool: do your research, find out about the roadmap of that tool, be ahead of it or know what to expect, and make sure you leverage what they offer, because I can almost tell you, any commercial tool out there worth its salt will have some AI feature at some level. So just be informed.

(23:04):
You don't have to go and buy commercial tools just to benefit from AI, but be informed and see how you can leverage that.
Be aware; don't just use the same features you've been using for the last five years.
Go on the website, check if there's a feature, and test it out.
And maybe you can benefit from that already without paying a cent.
So I think it's a hybrid of both.
But Jehan, maybe... I can get on board with that.

(23:28):
I think a pragmatic approach is also, you know, if you already have a tool. So you mentioned UiPath, so let's just touch on that. If your company is already using UiPath, and they now have, as an example, functionality which is similar to ChatGPT or Gemini, then rather use that instead of using something else, and the same for all of the other tools as well. I think what's worth

(23:51):
just reiterating, and I think Mamatla and Jehan touched on it, is the privacy side.
Data compliance. So just make sure that when you do use these tools, you're not giving away any IP or potentially putting anything into these models which could be a data compliance issue or risk.

(24:16):
So just be clever in how you use it. It's quite easy to anonymize a problem.
So even if you have a very specific one, or if you have a very specific document, just take out the relevant names, et cetera.
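
A minimal sketch of that anonymization step, using simple regex substitution before text ever reaches a public model; real data-compliance workflows need far more than this, and the patterns and names below are illustrative assumptions.

```python
# Minimal sketch: scrub obvious identifiers from text before sending it
# to an external LLM. Real compliance workflows need far more than
# regexes; this only illustrates the idea discussed above.
import re

REDACTIONS = [
    (re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"), "<EMAIL>"),
    (re.compile(r"(?<!\w)\+?\d[\d\s-]{7,}\d\b"), "<PHONE>"),
    (re.compile(r"\bAcme Corp\b"), "<CLIENT>"),  # keep known client names in a list
]

def anonymize(text: str) -> str:
    for pattern, placeholder in REDACTIONS:
        text = pattern.sub(placeholder, text)
    return text

prompt = ("Acme Corp's checkout fails for jane.doe@acmecorp.com "
          "when she pays from +27 82 555 0100.")
print(anonymize(prompt))
# -> <CLIENT>'s checkout fails for <EMAIL> when she pays from <PHONE>.
```
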
But yeah, so I think I almost want to say it's going to be impossible for us
to not discuss AI again at some point down the road.

(24:39):
So I think, in order to get through some of the other themes,
let's just park AI there for now.
We can definitely pick it up again in a future podcast.
But Mamatla, do you want to go with, I guess, the second thought?
Okay, so my top pick for 2024 will be API testing.

(24:59):
So I don't know if you all agree with me. My reasoning behind that is that the adoption of microservices architecture will require solid API testing.
And secondly, with the rise of AI, I believe we're going to see companies integrating
their applications with AI models.

(25:20):
I'm not talking about AI, trust me, this is API testing.
So with that, API automation is going to take the center stage in 2024.
And as APIs become the backbone of software integration, their security and performance are going to be paramount.

(25:41):
So I think, yeah: API automation, security and performance testing, as well as developing skills around APIs.
So we're going to see a lot of recruiters looking for automation engineers with API testing skills.
So I think that's my prediction for 2024.
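
To ground that prediction, here is a minimal sketch of the kind of functional-plus-performance API check Mamatla means, using pytest and requests; the endpoint, token, expected fields, and latency budget are all assumptions.

```python
# Minimal sketch: functional plus basic performance assertions on an API,
# using pytest and requests. The endpoint, token, expected fields, and
# latency budget are hypothetical; adapt them to your own contract.
import requests

BASE_URL = "https://api.example.com"           # hypothetical service
HEADERS = {"Authorization": "Bearer <token>"}  # assumed auth scheme

def test_get_order_contract_and_latency():
    response = requests.get(f"{BASE_URL}/orders/42",
                            headers=HEADERS, timeout=5)

    # Functional: status code and required fields from the assumed contract.
    assert response.status_code == 200
    body = response.json()
    for field in ("id", "status", "items", "total"):
        assert field in body, f"missing field: {field}"

    # Non-functional: a crude latency budget. Real performance testing
    # needs dedicated tooling; this only catches gross regressions.
    assert response.elapsed.total_seconds() < 0.5
```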

(26:02):
So Mamatla, I'm going to have to challenge you, in that there are going to be people listening to this, or let's hope there will be at least one person listening to this, but if you were to say that to a large number of people, they may say, well, API testing is old, we've been doing that for years and years, and API automation isn't new either.

(26:24):
So what is the aspect of API testing or automation which you feel is going to be prevalent this year? Is it different to just the norm?
Yeah, yeah. So it's going to be different to the norm because, as I said, more companies are adopting microservices architecture, and more companies are moving to the cloud.

(26:45):
And then secondly, integration of applications with ML models.
So that's something that is new.
So we will be looking at API testing from that angle.
Yes, Mathew. Okay, so my thoughts, Mamatla: I think you're right there. So basically, in terms of API testing, there are quite a number of aspects which we need to look at if

(27:08):
we're to consider the security side of things. Just the security of APIs is big business, in the sense that there are a whole lot of vulnerabilities being introduced due to AI and other things. And, you know, according to OWASP, the top 10 aspects which they look at, things like injection, the SQL injection,

(27:30):
broken authentication, sensitive data exposure, XML External Entities, XXE: there's a whole lot of aspects that may need to be looked at in terms of API testing.
Performance-wise, a whole lot of APIs may be hosted, and there's a need to ensure that APIs are efficient in terms of how they're

(27:51):
being consumed, or how they're being exposed to the public or to the demilitarized zone.
So, also looking, performance-wise, at which APIs are performing better or not, all of that will still form part of API testing.
So it's not just going to be a matter of, when we test this API,

(28:11):
is it returning the correct records, and are there enough of them? It goes as far as looking at the security side of things and performance.
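
A rough sketch of the kind of negative security checks Mathew mentions, again with requests against a hypothetical endpoint; this illustrates the idea only and is no substitute for dedicated security testing.

```python
# Rough sketch: two OWASP-flavoured negative tests using requests.
# The endpoints are hypothetical; this is an illustration, not a
# substitute for proper security testing.
import requests

BASE_URL = "https://api.example.com"  # hypothetical service

def test_rejects_unauthenticated_access():
    # Broken-authentication check: no token should mean 401 or 403,
    # never a 200 with data.
    response = requests.get(f"{BASE_URL}/orders/42", timeout=5)
    assert response.status_code in (401, 403)

def test_injection_payload_is_handled_safely():
    # Injection check: a classic payload should be rejected cleanly,
    # not crash the server or leak internals such as stack traces.
    payload = {"customer": "x' OR '1'='1"}
    response = requests.get(f"{BASE_URL}/orders", params=payload, timeout=5)
    assert response.status_code != 500
    assert "Traceback" not in response.text
```
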
Yeah, I asked myself that same question when you started talking about this.
How can API testing be something for 2024, right? We've been doing API testing for so many years.

(28:33):
But then, as you were talking, I started thinking about our client base and the people I get to deal with in the testing community.
And it makes sense. More and more companies are only now going through digital transformation.
More and more companies are now going into microservices and changing their architecture.
Yes, it's been around for a very long time knowing how to test an API, right?

(28:55):
But in the organization, as a tester, these companies, especially the large organizations, are only now on that journey.
And it's now more important than ever for a tester to really, to Mathew's point, really know how to test an API, right? Yes, I can do a PUT and I can do a GET, and that's easy enough, but how do I actually, really, effectively

(29:17):
have a strategy around API testing? Because in the past, you got Swagger documentation from the developers, you did a GET, you saw that the parameters were correct, and that was kind of okay. But now, if your whole big enterprise organization is transforming,
it means that you have to look at API testing a little bit differently.
There are nuances around really testing APIs effectively that now come into play.

(29:41):
So I think a little bit of a warning came up for myself as well, to say: okay, revisit the way we think about API testing. Do I really understand, can I actually test APIs the way I'm supposed to, or am I falling back on what I did five years ago, doing a quick request, right?
So those are my thoughts: I agree with you that we need to sharpen

(30:04):
up API skills, API testing skills, and effectively do that. And effectively means not waiting for developers to give you a Postman collection and then clicking the button. That's not API testing. It's understanding how to test it: where do I get the information? How do I automate that, even on the manual testing side, to actually get effective results?

(30:25):
I think the shift is absolutely there. So in that sense, I agree.
Yeah, no, I agree with what you said, Jehan. I mean, I also had that knee-jerk reaction: this isn't new. But if you think, like you say, a bit further, it's not just the vanilla doing a request and getting a response. It's that whole thing about the non-functional part that Mathew touched on, and the

(30:46):
Internet of Things, right? That's all APIs. So testing one API on its own is fine, but testing a string end-to-end, all the APIs connecting via different flows, that's the tricky bit.
It's that end-to-end thing. And although even the Internet of Things is not new, I think the complexity thereof is just going to increase.
And therefore, I think it's still a very valid point that Mamatla is making.
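
As an illustration of the end-to-end chaining Stephan describes, here is a minimal sketch in which each API call feeds the next; the endpoints, payloads, and status codes are hypothetical.

```python
# Minimal sketch: chaining API calls end-to-end so each step feeds the
# next, which is where the real difficulty lives. The endpoints and
# fields are hypothetical; adapt them to your own services.
import requests

BASE_URL = "https://api.example.com"  # hypothetical service

def test_order_lifecycle_end_to_end():
    # Step 1: create an order; capture the id the next calls depend on.
    created = requests.post(
        f"{BASE_URL}/orders",
        json={"customer": "test-user", "items": [{"sku": "ABC", "qty": 1}]},
        timeout=5,
    )
    assert created.status_code == 201
    order_id = created.json()["id"]

    # Step 2: the payment service must accept the same order.
    paid = requests.post(f"{BASE_URL}/payments",
                         json={"order_id": order_id, "amount": 9.99},
                         timeout=5)
    assert paid.status_code == 201

    # Step 3: the order status must reflect the payment downstream.
    fetched = requests.get(f"{BASE_URL}/orders/{order_id}", timeout=5)
    assert fetched.json()["status"] == "PAID"

    # Step 4: clean up so the test leaves no residue.
    assert requests.delete(f"{BASE_URL}/orders/{order_id}",
                           timeout=5).status_code == 204
```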

(31:09):
Oh, yeah. It's the same with AI and ML.
Remember, those are not new inventions that came with ChatGPT.
They have been around since the 80s, right? They just became popular now, in 2021, 2022.
So the same with APIs. They have been around, but the rise or the demand for

(31:31):
API testing is going to be high in 2024.
Hi, Leon here. So this first podcast actually ran for much longer than we initially anticipated.
And I think we ended up recording for maybe an hour and 20 minutes,

(31:55):
close to an hour and a half.
So we've decided to split this into a couple of different parts.
So this is the end of part one and soon we'll release part two.
So please tune in to that if you found this first part interesting.
This has been an episode of Testing Experts with Opinions, an Inspired Testing podcast. Find us on LinkedIn, Twitter, Facebook, Instagram, YouTube and TikTok,

(32:20):
where we're driving conversations.