
April 21, 2025 31 mins
e510 with Andy, Michael and Michael - #AI stories ranging from #privacy, #dolphin communication, #OpenSource models for #robots, #GameTransferPhenomenon and much more.

Episode Transcript

Available transcripts are automatically generated. Complete accuracy is not guaranteed.
(00:00):
[Music]

(00:10):
This is GamesAtWork.biz, your weekly podcast about gaming, technology, and play.
Your hosts are Michael Martine, Andy Piper, and Michael Rowe.
The thoughts and opinions on this podcast are those of the hosts and guests alone,
and are not the opinions of any organization which they have been, are, or may be, affiliated with.

(00:33):
This is episode five ten, Singing to the Dolphin.
[upbeat music]
Hello everybody, welcome again to GamesAtWork.biz, your weekly technology podcast.
My name is Andy Piper, and with me are two gentlemen who are experts in our field of

(00:58):
gaming, virtual reality, augmented reality, business, all kinds of cool tech.
Mr. Michael Martine, how are you?
Fantastic. Thank you so much, Andy. Great to be here with you, and also delighted to be alongside Mr. Michael Rowe today.
I'm doing well, and I'm so glad that we're going to talk about AI, because that's not one of the fields we're experts in, so let's, let's have some fun.

(01:25):
Wait, you're not an expert?
I don't even play one on TV.
Well, I better brush up fast 'cause I'm supposed to be speaking at Guilford College on Wednesday on the very subject,
Oh, good.
so I better hurry.
Kind of like a podcast. We'll talk about anything.

(01:48):
Well, let's dive right in.
We have an enormous set of really intriguing articles on a number of fronts, whether we're talking about privacy, use cases, and a variety of different things.
The first up on our list is an article from Bloomberg.
We've been talking about Apple here recently and about the Siri approaches and some of the challenges that have been reported on.

(02:11):
This particular article deals with the story about the use of real-world data versus synthetic data
for AI model training.
Michael, I'm curious to know what your thoughts were about this particular element, especially in light of the privacy aspects of real-world data being used.

(02:32):
>> Yeah, I think this is actually more about the approach that Apple started in 2016 around the identification of CSAM data and how they are using differential privacy to kind of dial between direct identification of the user and privacy and security.

(02:56):
And what's really interesting here is, given how far behind Siri is currently perceived to be in the market, this may be a way for them to accelerate some functionality.
And as we've mentioned in the past, you know, there's been a big shake-up with their AI team over the last couple of weeks.

(03:21):
And I'm hoping that they can kind of get that balance right between how private it is and how accurate it becomes.
>> By utilizing on-device real data. I do think the approach is interesting: what they're doing is using your on-device data to try to identify data in the synthetic data set that kind of looks like it, in order to do the training on-device, which is kind of funky.

(03:53):
And before the show started, we were talking about one of the big frustrations that we've had with Apple's updates recently about them turning things on by default.
And in the 18.5 beta coming up, or I think it's out now, but I have not installed it. This is the time of year I stop my betas, just to get some stability before WWDC.

(04:15):
But supposedly in 18.5, the setting is going to be there to start doing that on-device training using this technique.
And so, I was wondering,
if either of you have had any issues with defaults changing on your devices recently with 18.4 or 18.4.1, which just came out this week.

(04:44):
I haven't noticed anything, I don't think.
Maybe I'm mistaken, maybe I miss something?
The big one on 18.4 is that it was automatically turning back on Apple Intelligence if you had it turned off.
Oh, well, I had it on, so I mean, yeah, I wouldn't have noticed.
Oh, yeah, I had it on as well, so, yeah, not a surprise.
But what do you guys think about this approach, and do you think it'll give them any type of

(05:13):
runway to kind of catch up with some of the other LLM-based AIs?
I don't know. I thought the big issue was that they had sort of promoted this local data, or we-know-your-contextual-stuff, therefore-we-will-be-the-best-at-doing-stuff-for-you, rather than what they seem to be proposing here, which is

(05:40):
sort of grouping together similar requests to an external AI and then building on top of that.
Maybe I'm misunderstanding things but I don't see how this particularly helps them jump forward.
I think what's interesting is, at least from some of the other articles and things that I heard over the last week about this specific story, they haven't been able to deliver on the on-device stuff because of that internal privacy battle, and this might be a way to kind of break that loose, because now you're maintaining some level of privacy using differential privacy,

(06:19):
even though it's all on the device anyway.
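For listeners curious about the mechanics behind that "dial" between identification and privacy, a minimal sketch of one classic differential-privacy technique, randomized response, is below. This is only an illustration of the general concept; it is not Apple's actual implementation, and the 40% population rate is invented for the demo.

```python
import random

def randomized_response(true_bit: int, rng: random.Random) -> int:
    """Report the true bit half the time; otherwise report a fair coin.

    No single report reveals the user's true value with certainty,
    which is the plausible-deniability core of differential privacy.
    """
    if rng.random() < 0.5:
        return true_bit
    return rng.randint(0, 1)

def estimate_population_rate(reports: list[int]) -> float:
    """Invert the noise: E[report] = 0.5*p + 0.25, so p = 2*mean - 0.5."""
    mean = sum(reports) / len(reports)
    return 2 * mean - 0.5

rng = random.Random(42)
# Hypothetical population where 40% of users have the attribute.
true_bits = [1 if rng.random() < 0.4 else 0 for _ in range(100_000)]
reports = [randomized_response(b, rng) for b in true_bits]
print(round(estimate_population_rate(reports), 3))  # close to 0.4
```

The point of the sketch is the asymmetry: the aggregator recovers an accurate population statistic while any individual report stays deniable.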
Well, and there's an interesting point about the machine learning research blog post from the 14th,
which goes into this a little bit more about the differential privacy as it relates to different
devices. So if you're using an iPad and you're using your phone and you're using other things, they may not embed the same elements across them, and that's kind of an intriguing thing in my book,

(06:47):
too, especially given, and I know we've experienced this, and I've experienced this with a few other
people too, where a group text chat, whether it's only with Apple iOS users,
or people that are across different programs or different applications, they wind up having multiple threads that are created too. So I wonder if some of that is the attempt at preserving that level of differential privacy. So if you respond to a text on your iPad, then that may be something that now forks.

(07:21):
>> Yeah. I hadn't thought about that angle of differential privacy affecting the chat that you have in iMessage with other people.
That's interesting because one of my, well,
actually two apps that I'm currently working on,
both are implementing App Intents.
Mm-hmm. Oh, interesting. Yeah.
And the AppIntent is only provided on the device it's running on, right?

(07:45):
So, when you contribute to the...
right, it's on that device. So, an AppIntent that gets contributed on the iPhone
won't necessarily show up in the Spotlight search on your iPad.
So, yeah. Right. Exactly. Yeah. Very cool.
Which generally makes sense: even though you're the same person, you're just using different devices, and it's an on-device level of privacy, right?

(08:13):
Well, moving into something that likewise is kind of cool: I came across this article from Quanta Magazine earlier this week that is focused on the general idea of
large language models and these
transformers being focused on mathematical ways of predicting

(08:33):
the next logical word. And the cool thing about this article is it's translating math into words through the tokens, and this explores, rather than going at a mathematical-to-language level, going from a mathematical to a musical level, to allow for musical responses. And that to me is super intriguing, because music conveys information in ways that are

(09:03):
a little bit different than pure text or spoken word, and can also unlock things in ways maybe slightly more related to the mathematical functions as well. So it was intriguing to me on a couple of different levels, because of just personal experiences here of late, where music was a way for me to understand and convey a thought a whole lot easier than it would be to

(09:30):
translate it into a language.
I, you know, it's interesting, yeah, it's interesting.
I didn't focus on the musical aspect at all in this article.
I mean, they used musical terms.
However, I love the whole basic idea of the latent space

(09:52):
that they talked about, and how, in my mind,
it kind of aligns with how we intuit things, right?
You're not constantly translating ideas back between different known ideas;
you kind of make these leaps,
these things that you can't necessarily explain. And from that perspective, it got me down the path of, you know, yes, this can be more efficient, which might have great benefits for us on energy costs and environmental impact and things of that nature, and pricing of token usage on the various LLMs out there.

(10:28):
But my concern was explainability, right? One of the big things in ethical AI right now is explainability: how did you get to your answer? And if you take out all the intermediate steps where it kind of pops back up into human-language text, what would that do to the explainability of the models and how they came up with the answer?
Mm-hmm.
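The "next logical word" framing the article builds on can be sketched with a deliberately tiny toy: treat musical notes as tokens and predict the next token from counts. This is only the conceptual skeleton of next-token prediction; real models use learned embeddings and a latent space, not bigram counts, and the melody here is invented for illustration.

```python
from collections import Counter, defaultdict

# Toy "autocomplete for stuff": the vocabulary is musical notes
# instead of words, but the framing is the same next-token
# prediction that text LLMs use.
melody = "C D E C C D E C E F G E F G".split()

# Count which token follows which (a bigram model).
bigrams: defaultdict[str, Counter] = defaultdict(Counter)
for prev, nxt in zip(melody, melody[1:]):
    bigrams[prev][nxt] += 1

def predict_next(token: str) -> str:
    """Greedy next-token choice: the most frequent follower."""
    return bigrams[token].most_common(1)[0][0]

print(predict_next("C"))  # D
```

The interesting shift the article describes is doing this kind of prediction over mathematical or musical representations directly, rather than detouring through human-language tokens, which is exactly where the explainability question above comes from.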

(10:55):
The other thing, there was one other key thing that I thought was interesting: they compared the work at Meta and the work at the University of Maryland with Max Planck.
They kind of said it was going to be better, but they didn't actually prove that it was definitely better on mathematics. Matter of fact, the phrase that the guy used at the end was that it looks like it will fare better.
- Intriguing. - Yeah.

(11:21):
Right. Right.
But they didn't have any quantified data
that says, yes, it's better.
Now, environmental impacts, energy costs and stuff like that.
I think that's great.

(11:42):
Possible loss of explainability, I think that's bad.
And they can't quite say it is better.
It just looks like it should be better.
So that was interesting too.
I think that both of you have done a lot more work on the nature of how these things work than I have. I mean, I've used a lot of these things and I've got conceptual knowledge, but I think both of you have actually been trained in understanding how this stuff fits together. So I don't feel super well equipped to discuss this in a ton of detail. One of the things I would say,

(12:20):
Michael Rowe, is what you just talked about: the transparency of the processes.
I've seen a lot more in the newest models that I've tinkered with, where you get these little descriptions of what it's quote-unquote "thinking" about. The processes it's going through,

(12:47):
which, and I think it was one of those shower-thoughts things this morning for me, got me thinking about how that fools
the human interacting with it into believing a lot more that there's some level of sentient thought here, versus the actual mechanics of the process it's going through. This is not this model coming up with a unique and off-the-wall idea; it's just talking you through the process it's going through,
to make sure you think, or can make you think if you are not familiar with it.
Oh, there's a human going through a bunch of tasks on the other side of the screen. I think this whole idea of using the same concepts, this tokenization, not to deal with language but to deal with things like music, is absolutely fascinating. I for sure think that that is, you know, abstracting it away from the notion of words, so you don't become

(13:56):
an autocomplete mechanism. That's how I continue to think of a lot of these language models: an autocomplete mechanism, not just for sentences, but autocomplete for stuff. And I think that becomes really quite interesting. But again, I am going all over the map here, because I don't feel fully equipped to discuss the scientific background to this particular topic.

(14:24):
Well, it's interesting, and Michael did a really good job of teeing up a thought for me, because while I was reading this article I was actually listening to A Flock of Seagulls.
And there's a song that they put out on their very first album called "Man Made," and in that there are two sets of lines that were going on as I was reading the article.

(14:46):
One: man made machines to make music for the man; now machines make music while the man makes plans.
Right, and that's what a lot of people are doing
with AI right now. They're letting it do the hard work, and then they're just kind of vibe coding their way through life.
And then the second verse is: man made machines to control the days; now the machines control while the man obeys.

(15:08):
So there's your apocalypse view of AI.
This reminds me of a really good album which is available as Creative Commons, which I will possibly use parts of for my own future podcast, should I do one. I'm looking at the title of it, because my brain is blanking on it, but it will be in front of me momentarily.

(15:33):
Ask the machine
But it's all about exactly the same kind of concept around AI.
I'm looking through my own human-curated notes that I have painstakingly written.
But by the way, the Flock of Seagulls album that the song came from was, I think, 1981, back in the day.
Uh, yeah.

(15:53):
Andy, as you're looking, I have to tell you, I love, love, love, your phrase, autocomplete for stuff.
I think that is such a delightful way to think about this and a wonderful way to also keep the focus on the human creativity and augmentation versus supplanting, right?

(16:15):
Oh, well, that's good to know. The album I'm thinking of is Happy New Dystopia by Zylander, which I will add to the show notes, because you can get that online, and it's quite different to the kind of music that I usually listen to, but there's some really good stuff. There's a particular song in there called "I Am Here," which is all about man and machine, and I think that one is one I particularly enjoyed.
Yeah, please do

(16:44):
It's the official soundtrack album.
That's what it says, to what?
So, sticking on the idea of songs, music, and the like, we have this really quite intriguing article here from ZDNet about Google actually talking to the animals.
Yes. (laughs)
We've had a couple of stories on this over the years too, but this is such a wonderful way of using technology, capturing voice, or in this case, dolphin utterances, I guess.

(17:16):
So we can use that since we're talking about AI anyway.
And then being able to do something with it.
And my first little glimpse of this kind of reminded me of the camera that was lowered into Loch Ness, gosh, some 40, 50 years ago.
And it had only recently been pulled up, the same contraption. You know, to be able to have the phone lowered into the water, and the cameras and the like, was kind of cool.

(17:45):
Yeah, I thought this was really fun and the aspect that I thought was really cool about it was,
again, talking about privacy, because what they're doing is they're using a pixel phone,
and they're doing the communication, or the language understanding model, on device, because when you're underwater, Wi-Fi doesn't work very well, and neither do cell signals.

(18:09):
And so I thought it was really neat. There's a nice little video that goes along with it, as they're trying to
understand what the dolphins are saying, and they're getting better at it. And once they get to the point where they can talk back to them, you know, who knows what they'll say. Because as we know, the dolphins are the most intelligent race on the planet, according to Douglas

(18:30):
Adams. Yes. Yes. Well, I think Douglas Adams was a man... Stop it.
And the mice.
Douglas Adams was a human way beyond his time, so yes, I think we made this. And of course,
[laughs]
and of course the whales, according to Star Trek: The Voyage Home, so yeah.
Yes, and Star Wars dealing with the space whales too, so, you know, there must be something here.
Correct, episode four.

(18:59):
And of course, Star Trek also had dolphin navigators in the ship, so...
Yes, that was not Below Deck. Was it Below Decks? Is that what it's called? Lower Decks.
Yes, yes, exactly.
Yeah, I love it, yeah.
Yeah, Lower Decks.
Below Deck is a reality TV show that I refuse to watch. Lower Decks is fun.
This, okay, okay.

(19:23):
All right, so moving from the organic to maybe something a little less organic: we have an article from Wired talking about open source AI robots. Who would like to get us kicked off on this topic?
Well, look, I was the person that brought this link to the group, and this is a story about Hugging Face, the open source AI company, having acquired a robotics company, and we typically talk about both topics quite a lot.
Well, we do have an open source expert.

(19:59):
We try not to talk so much about AI, because it's becoming a bit of an avalanche of stuff, and I think all three of us are much more interested in a broader range of topics.
I thought it was particularly interesting that Hugging Face had made this move to acquire this company, Pollen Robotics, with their self-described goal of democratising robotics.

(20:24):
Now there's been a lot of progress in the space over the course of the last decade in terms of affordability of hardware, progress of robot operating systems, ROS for example.
And really just the interest, we've seen more and more devices hitting places like Kickstarter,

(20:49):
Crowd Supply, other crowdfunding, which have had elements of robotics.
Now we're not talking about fully autonomous humanoid robots walking into your house and doing stuff.
We're talking about things which have a range of sensors, including things like eyes and audio sensors, to figure out what's going on in their environment and respond and do

(21:11):
things together. Now, this company Pollen Robotics have a model called Reachy 2, which has arms and a humanoid torso, and can do some things that are quite advanced in terms of manipulating objects. But the goal of Hugging Face here appears to be that they want to genuinely make the software and designs available under open source licenses.

(21:41):
And I think the idea of open sourcing the hardware here, and making sure that designs are freely available so that parts can be replaced through 3D printing, is significant.

(22:11):
As more methods of home or small-scale manufacturing become available to a wider audience of people,
I think that there's a lot of opportunity there, and of course it does coincide with this wave around AI as well, although plugging the two things together remains something I'm curious to see them work out.

(22:38):
Yeah, what would be interesting to me as I was reading this article is when we get to the point where you have meaningful AI models and meaningful robot models, both open source that can produce a fully enabled robotic device on par with mid-tier robots in the market.

(22:58):
Of course, I have no idea what a mid-tier robot is in the market right now, because it's everything.
But getting to that point, I think, will get us to the point, as you just
described, Andy, of kind of a democratization of this space, in a way where it's not just closed ecosystems, it's not just proprietary systems, et cetera. And I do think that's interesting. And I did really like the aspect of making the models, the 3D models, open for replacement parts, et cetera, so you don't get that vendor lock-in.

(23:31):
Yeah, I mean, we've seen it with companies coming along and then going offline because they can't run the cloud service anymore. So this kind of thing is essential, really, for these things
to have more longevity.
So, um, at a talk I gave at Science and Math in Morganton earlier this year, for their SMATH hackathon, I showed a Hugging Face-hosted open versus closed arena.

(24:05):
So, open versus proprietary LLMs, by LMSYS Arena ELO score.
And one of the fun things about this is it showed the number of days till crossover that were anticipated, based on the regression, from when open LLMs are going to cross over where proprietary LLMs exist today. Unfortunately, at the moment, there's a runtime error on this, so it's not functioning. We'll leave it in the show notes so that you can find it. Maybe it'll be fixed by the time we go to press, but it was a really intriguing look at just the advances that are being made in open source large language models, and the speed at which they will overtake where proprietary, closed LLMs are going to be.

(24:46):
So excited about what this prospect is and where we're heading with it.
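The "days till crossover" idea Michael describes can be sketched as a simple linear extrapolation: fit a straight line to Arena-style ELO scores over time for open and proprietary models, and solve for where the trend lines meet. The scores below are invented for illustration and are not real Arena data.

```python
def fit_line(xs: list[float], ys: list[float]) -> tuple[float, float]:
    """Ordinary least squares for y = a*x + b; returns (a, b)."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    a = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
        sum((x - mx) ** 2 for x in xs)
    return a, my - a * mx

# Hypothetical ELO-style scores sampled every 30 days.
days = [0, 30, 60, 90]
open_scores = [1100, 1140, 1180, 1220]    # improving faster
closed_scores = [1250, 1260, 1270, 1280]  # improving slower

a_open, b_open = fit_line(days, open_scores)
a_closed, b_closed = fit_line(days, closed_scores)

# Crossover day where a_open*d + b_open == a_closed*d + b_closed.
crossover_day = (b_closed - b_open) / (a_open - a_closed)
print(round(crossover_day))  # 150
```

With these made-up numbers the open-model line catches up 150 days out; the real arena page presumably does the same kind of regression against live leaderboard data.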
Okie-doke.
Moving along, we have an intriguing privacy point here that we wanted to touch on.
And Michael, you saw this first, so let me let you tee it up and introduce it.

(25:08):
But it's an intriguing thing, isn't it?
Yeah, so this is from Steve Troughton-Smith on Mastodon.
And the story is how ChatGPT's o3 model can not only pinpoint the location of a picture,

(25:30):
but also pinpoint, pretty accurately, or with a pretty good set of deduction,
where the picture was taken from.
So, the location of the person taking the picture. And this is, to me, for various reasons that we probably have discussed on the show in the past, and I won't go too deep into them today.

(25:51):
You know, having an LLM or ChatGPT pinpoint the location where a photo was taken from is much more dangerous than identifying what the picture is of, or where the thing is that you're taking the picture of.
And given that there are so many people taking pictures around you at all times, the location data about individuals, in a privacy-concerned environment, is very concerning for me.

(26:23):
Andy, go ahead.
I was at a conference last week in Italy, and there was a presentation at the event by the folks from Bellingcat, who have a toolkit for intelligence work in terms of digging into data and checking the factual nature of that data. So they actually have tools to do this by cross-referencing a ton of other information related to a story.

(26:53):
So it's really interesting to see this surfacing as a capability in a large language model, and yes, the fact that everything is on camera at all times, everywhere,

(27:21):
probably even in our own houses, with security cameras and things, is really interesting.
Well, keeping an eye on where we are in time: we've got a couple of things that we wanted to hit just before we close out for the day, and one of the BBC articles that was in my feed this week was dealing with something called Game Transfer Phenomenon.

(27:49):
I can leave it there. The article takes you through the notion of games bleeding into reality, which is: you're constantly playing a game, and you're experiencing reality through how you've been playing a game.
It's an intriguing doctoral thesis that came into play.

(28:10):
And it neatly flows into our last link, actually,
because it refers to a participant reporting
seeing health-indicator bars, like those in the role-playing game World of Warcraft, floating above companions' heads in the real world, as these two kinds of things blended together.
And the last link we have is something called Picocraft.

(28:30):
And this is built using Pico-8, which is this small fantasy console.
It's really quite an incredible achievement, given the severely limited capabilities of this fantasy console.

(28:57):
Michael, as Michael Rowe is our World of Warcraft expert, I assume that you've already beaten Picocraft.
I did play it today, I did do a single level, and I did win. Yes, it was loads of fun.
There we go. There we go. Excellent.
[laughing]
Yay!
- Well, closing out today, guys, is a TV show.
You've both seen it; I have not.

(29:17):
So I can't add anything extra to this, other than it sounds like I need to go watch it, because it seems a little bit like The Guild, with Felicia Day and company, maybe.
Oh, no, much different, but it definitely fits the theme that our show was originally founded on, which is gaming, technology, and business. Highly recommend people watch it.
So this is Mythic Quest, and in true fashion, they are releasing, or have released by the time you hear this, an updated version of the last episode as a post-release patch.

(29:49):
Yeah, it's a shame that that show is going away, but I was actually talking about this earlier today, and I think that probably it had run its course in terms of the ideas that they toyed with in it, and it was a really, really fun show.
It's a shame to see it finish, but I'm looking forward to actually now going back and playing the patch and watching the final episode, the season and series finale.

(30:12):
- Well, if you are a gamer or a business person, or both, or neither, and you enjoyed this show, tell your friends about it, check us out on the various socials, and definitely go over to our website at gamesatwork.biz, where you'll find all our links to everywhere we post, et cetera. And, you know, rate us, rank us, all that other fun stuff, so others can hear about us too. And we will see you next time.

(30:42):
Bye!
See ya.
So long, everybody.
[upbeat music]
You've been listening to GamesAtWork.biz, the podcast about gaming, technology, and play.
We are part of the Blubrry podcasting network, and we'd like to thank the band
[upbeat music]
Random Encounters for their song, Big Blue.
You can follow us at our website at gamesatwork.biz.

(31:03):
[upbeat music]