
September 21, 2025 31 mins

Send me a text

Make sure to let me know what you think of this episode.

I completely refactored an audio system for a work app, splitting a single AVAudioEngine into separate engines for recording and playback. This architectural change fixed a bizarre bug where the system volume slider moved unexpectedly during audio operations.

• Split AVAudioEngine into separate recording and playback engines
• Fixed the MPVolumeView movement issue by unifying audio session management
• Improved background task management for location tracking services
• Removed dead code and deprecated functionality
• Explored solutions for audio session conflicts, threading issues, and memory leaks
• Implemented dedicated dispatch queues for different audio operations
• Created a robust background task management system for location updates
• Added extensive logging to better understand audio session lifecycles

Looking ahead to SwiftUI integration, audio performance optimization, and iOS 26 compatibility testing. Do iOS 2025 is happening November 11-13 at NEMO Science Museum in Amsterdam - check out do-ios.com for more information.


Support the show

Do iOS: https://do-ios.com


Rate me on Apple Podcasts.

Send feedback on SpeakPipe
Or contact me:

Support my podcast with a monthly subscription, it really helps.

My book: Being a Lead Software Developer


Episode Transcript

Available transcripts are automatically generated. Complete accuracy is not guaranteed.
Jeroen (00:01):
All right, let's get started.
iOS Development Worklog, episode 105.
Audio Engine Refactoring and Background Task Management.
Welcome to the first episode of my iOS Development Worklog.
I'm Jeroen Leenarts and this is where I'll share the real work I've been doing, the challenges I've faced and the lessons I've learned.
No fake demos, no oversimplified examples, just

(00:23):
honest insights from building real iOS applications.
Let's get started.
The Week in Review: this week was all about audio engineering and background task management, two areas that are notoriously tricky in iOS development.
Let me break down what actually shipped and what didn't.
What shipped?
Audio Engine Refactoring. I completely refactored the audio

(00:44):
system in the app I'm working on for my job, splitting the single AVAudioEngine into separate engines for recording and playback.
This was a major architectural change that touched multiple components: five files changed, 256 insertions, 44 deletions.
The refactoring included creating separate recording engine and playback engine instances, using a unified audio session helper to manage audio

(01:08):
session state.
Fixing the MPVolumeView movement issue that was driving users crazy.
Improving the walkie-talkie service integration with the new audio architecture.
The MPVolumeView fix.
This was actually the most satisfying fix of the week.
The system volume slider was moving unexpectedly during audio operations and users were reporting it as a bug.

(01:28):
The root cause was multiple audio engines fighting over audio session control.
By unifying the audio session management, I eliminated the conflicts.
Background task management: improved our location tracking service to properly handle background tasks, preventing iOS from killing our location updates when the app goes to background.
This involved implementing proper background task lifecycle

(01:49):
management and preventing duplicate background tasks.
And there was also a code cleanup.
I removed some dead code and deprecated functionality, including an unused MPVolumeView slider extension, cleaned up deprecated audio session code, and I removed a few unnecessary @MainActor annotations that were causing threading issues.
Then there's also some stuff that didn't ship.

(02:09):
I was working on an audio session optimization.
I spent considerable time trying to optimize audio session management, but some of the changes introduced new issues, so I had to roll back parts of that work.
Sometimes the best code is the code you don't write.
Performance improvements: I had some ambitious plans for audio buffer optimization that didn't pan out.
The current implementation is actually performing better than

(02:30):
my optimized version, which is a good reminder that premature optimization is still the root of all evil.
SwiftUI integration: I planned to start integrating SwiftUI into the existing UIKit app, but the audio refactoring took priority and consumed most of the week.
So the honest assessment: this week was frustrating in the best

(02:50):
way possible.
Audio on iOS is genuinely hard and every simple fix seems to introduce three new problems.
But that's exactly why I want to share this with you, because this is what real iOS development looks like.
The most challenging part was debugging the MPVolumeView issue.
It's one of those bugs that's hard to reproduce consistently, but when it happens it's immediately obvious to users.
The fix required understanding the entire audio pipeline and

(03:12):
how different components interact with the system audio session.
So let's dive in. Code deep dive: the audio engine split.
Let me walk you through the biggest technical change I tackled this week: splitting the single AVAudioEngine into separate recording and playback engines.
The problem: the app has a walkie-talkie feature that needs to handle both recording and playback simultaneously.

(03:32):
The original implementation used a single AVAudioEngine for both operations, which was causing several issues.
Audio session conflicts: the single engine was fighting with system audio controls.
Performance issues: recording and playback were interfering with each other.
MPVolumeView movement: the system volume slider was moving unexpectedly during operations.
Threading issues: audio operations were happening on different threads without proper coordination.

(03:54):
Memory management: the single engine was creating complex retain cycles.
The solution architecture.
So here's how I approached this refactoring: instead of having one audio engine trying to do everything, I created two separate engines, one specifically for recording and another for playback.
In the AudioManager class I now have two private properties, recordingEngine and playbackEngine, both instances of

(04:16):
AVAudioEngine.
This separation allows each engine to be optimized for a specific purpose.
For recording, I have properties like isRecording to track recording state, isTapInstalled to know if I have set up the audio tap, and the bus number, which is always zero for the input bus.
For playback, I have a player node, which is an AV audio

(04:38):
player node, a mix node for audio mixing, and an input converter for handling different audio formats.
The key insight here is that each engine can now be configured independently.
The recording engine can be optimized for low-latency input, while the playback engine can be optimized for smooth output.
This eliminates the conflicts we were seeing before.
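To make that split concrete, here's a rough sketch of what the two-engine setup could look like. It's a minimal illustration of what I just described; names like recordingEngine, isTapInstalled and inputConverter mirror the description, but the real AudioManager in the app is more involved.

```swift
import AVFoundation

final class AudioManager {
    // Two dedicated engines, each with a single, clear responsibility.
    let recordingEngine = AVAudioEngine()
    let playbackEngine = AVAudioEngine()

    // Recording state.
    var isRecording = false
    var isTapInstalled = false
    let inputBus: AVAudioNodeBus = 0   // always bus zero for the input node

    // Playback graph.
    let playerNode = AVAudioPlayerNode()
    let mixerNode = AVAudioMixerNode()          // for audio mixing
    var inputConverter: AVAudioConverter?       // handles differing audio formats
}
```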
The key insight was when the breakthrough came, when I realized

(04:58):
that AVAudioEngine is designed to be specialized.
Each engine should have a single, clear responsibility.
By separating recording and playback, each engine could be optimized for its specific use case.
But here's the crucial part: the engines still need to share the same audio session.
This is where the complexity lies and why the MPVolumeView was moving unexpectedly.
The implementation details.
I will now walk you through how this actually works in practice

(05:21):
.
For the recording engine, when we start recording, the startRecording function first checks if you're already recording, to prevent duplicate operations.
Then it calls setupRecordingEngine, which configures the engine for offline rendering with a 4096-frame buffer.
That gives us good enough performance without overwhelming

(05:43):
the system.
And the key part is installRecordingTap, which sets up a tap on the input node.
This tap captures audio data as it flows through the engine and calls our processRecordingBuffer function with each audio buffer.
Think of it like installing a microphone on the audio stream.
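Continuing the AudioManager sketch from above, a tap-based recording path could look roughly like this. The function names (startRecording, installRecordingTap, processRecordingBuffer) and the 4096-frame buffer size follow the description; everything else is illustrative, not the app's actual code.

```swift
import AVFoundation

extension AudioManager {
    func startRecording() {
        // Guard against duplicate start calls.
        guard !isRecording else { return }

        installRecordingTap()

        do {
            try recordingEngine.start()
            isRecording = true
        } catch {
            print("🎙️ Failed to start recording engine: \(error)")
        }
    }

    func installRecordingTap() {
        guard !isTapInstalled else { return }

        let inputNode = recordingEngine.inputNode
        let format = inputNode.outputFormat(forBus: inputBus)

        // The tap hands us a 4096-frame buffer every time audio flows through the input node.
        inputNode.installTap(onBus: inputBus, bufferSize: 4096, format: format) { [weak self] buffer, time in
            self?.processRecordingBuffer(buffer, at: time)
        }
        isTapInstalled = true
    }

    func processRecordingBuffer(_ buffer: AVAudioPCMBuffer, at time: AVAudioTime) {
        // Hand the captured audio off to the walkie-talkie send path.
    }
}
```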
For the playback engine, the setupPlayer function creates an AVAudioPlayerNode, attaches it to the playback

(06:06):
engine and connects it to the main mixer node.
This creates the audio graph that allows us to play audio.
The playback engine is configured for real-time rendering with a smaller 1024-frame buffer, which is perfect for smooth playback without the latency concerns we have with recording.
The beautiful thing about the separation is that each engine

(06:27):
can be started, stopped and configured independently.
We can be recording on one engine while playing back on the other, and they won't interfere with each other.
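And the playback side of the same sketch, again only a rough approximation of what's described: setupPlayer builds the small audio graph, and the playback engine can then be started and stopped without ever touching the recording engine.

```swift
import AVFoundation

extension AudioManager {
    func setupPlayer() {
        // Build the playback graph: player node -> main mixer -> output.
        playbackEngine.attach(playerNode)
        playbackEngine.connect(playerNode, to: playbackEngine.mainMixerNode, format: nil)

        do {
            try playbackEngine.start()
        } catch {
            print("🔊 Failed to start playback engine: \(error)")
        }
    }

    func play(_ buffer: AVAudioPCMBuffer) {
        // Scheduling buffers here never interferes with the recording engine.
        playerNode.scheduleBuffer(buffer, completionHandler: nil)
        if !playerNode.isPlaying {
            playerNode.play()
        }
    }
}
```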
The debugging process to get to this conclusion: this wasn't a smooth implementation, because a lot of things went wrong.
And this is how I debugged it.
So first of all, there were audio session conflicts.
The engines were fighting over audio session control.
Second of all, there were some threading issues.

(06:48):
Recording and playback were happening on different threads.
Then there were some memory leaks.
Of course the separate engines were creating retain cycles, and of course we had this visual issue with the MPVolumeView slider moving if we started recording.
So the debugging strategy that I used was to use AV audio

(07:09):
engine's built-in logging to track engine state.
I added extensive logging to understand the audio session lifecycle.
I used Instruments to profile memory usage and identify leaks.
I created a test harness to reproduce the MPVolumeView issue consistently.
So let's dive into this MPVolumeView mystery.
This was the most frustrating part of all: the MPVolumeView.
The system volume slider was moving unexpectedly during audio

(07:30):
operations.
Let me explain a little bit what the MPVolumeView is.
It's basically a simple view that you can put on your screen and that attaches itself automatically to the system volume.
But it depends on what your output channel is, so that's like the earpiece of the iPhone, or the speaker of the iPhone, or a Bluetooth headset or a

(07:51):
connected headset; they all have different volumes.
So if you change something that adjusts the channel that the output is on, the MPVolumeView will follow that and clearly indicate the volume of the playback channel that you currently have selected.
And that was causing the jumping around, because we were not being consistent with the output channel that we were

(08:13):
choosing.
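For reference, this is all an MPVolumeView is on the UIKit side, a hypothetical minimal embedding: you drop it in and it mirrors the volume of whatever output route the audio session is currently using, which is exactly why inconsistent route changes make it jump.

```swift
import MediaPlayer
import UIKit

final class VolumeViewController: UIViewController {
    override func viewDidLoad() {
        super.viewDidLoad()

        // MPVolumeView automatically tracks the system volume of the current
        // output route (earpiece, speaker, Bluetooth headset, ...).
        // If your audio session keeps switching routes, the slider follows.
        let volumeView = MPVolumeView(frame: CGRect(x: 20, y: 100, width: 280, height: 40))
        view.addSubview(volumeView)
    }
}
```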
So it took me a few hours actually to be able to explain this to you in a few sentences.
So that was a bit of a challenge.
So multiple audio engines were trying to control the audio session.
Each engine had different settings and was using different audio session properties, and the system was

(08:33):
interpreting these changes as user input.
The fix was to ensure that only one component manages the audio session state.
So the final solution that I came up with, when I had this breakthrough of understanding, was a unified audio session manager called the AudioSessionHelper.
It is a singleton (yes, I know) that both of the engines hold and

(08:55):
use to manage a shared audio session.
And here's the key insight: instead of each engine trying to configure the audio session independently, they all go through the central manager.
The manager tracks the current state: what category is set, what options are configured, what sample rate is being used and whether the session is active.
The critical part is in the setupWalkieTalkieAudioSession function.
Before making any change to the audio session, it checks if the

(09:18):
setup is already in progress, to prevent race conditions, and then it only reconfigures the session if something has actually changed.
This is what fixed the MPVolumeView slider issue.
The problem was that multiple engines were constantly reconfiguring the audio session even when nothing had changed.
iOS was interpreting these unnecessary changes as user input, which caused the volume slider to move.

(09:38):
Now the session only gets reconfigured when it actually needs to be, and both engines share the same session state.
The walkie-talkie options include things like mixing with other audio, defaulting to the speaker and allowing Bluetooth connections: all the settings we need for a walkie-talkie app.
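A simplified sketch of that idea, assuming the category and options mentioned above; the real helper in the app tracks more state (sample rate, active flag) and handles errors, but the shape is roughly this:

```swift
import AVFoundation

final class AudioSessionHelper {
    static let shared = AudioSessionHelper()   // yes, a singleton
    private init() {}

    private var isConfiguring = false
    private var currentCategory: AVAudioSession.Category?
    private var currentOptions: AVAudioSession.CategoryOptions = []

    func setupWalkieTalkieAudioSession() throws {
        // Prevent re-entrant configuration (race conditions).
        guard !isConfiguring else { return }
        isConfiguring = true
        defer { isConfiguring = false }

        let category: AVAudioSession.Category = .playAndRecord
        let options: AVAudioSession.CategoryOptions = [.mixWithOthers, .defaultToSpeaker, .allowBluetooth]

        // Only touch the session when something actually changed;
        // redundant reconfiguration is what made the volume slider jump.
        guard category != currentCategory || options != currentOptions else { return }

        let session = AVAudioSession.sharedInstance()
        try session.setCategory(category, options: options)
        try session.setActive(true)

        currentCategory = category
        currentOptions = options
    }
}
```

The "only reconfigure when something changed" guard is the part that stopped the slider from jumping.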
So the walkie-talkie service integration.
The walkie-talkie service also needed some updates to work with

(09:58):
a new audio architecture for Push to Talk.
This service acts as the coordinator between the audio manager and the rest of the app.
One of the most important changes was creating separate dispatch queues for different audio operations.
I have three dedicated queues: one for receiving audio data, one for sending audio data and one for playing audio.
Each queue uses user-initiated as the quality of service, which

(10:20):
gives audio operations priority over other background tasks.
This is crucial because audio is very time sensitive.
If audio processing gets delayed or interrupted, you get clipping, stuttering or dropped audio.
By giving audio operations their own high-priority queues, we ensure smooth performance.
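The queue setup itself is small; a sketch along these lines, where the class name, labels and bundle identifier are placeholders rather than the app's real identifiers:

```swift
import Foundation

final class WalkieTalkieService {
    // Dedicated, user-initiated queues keep audio work prioritised above
    // ordinary background work without blocking the main thread.
    let receiveQueue = DispatchQueue(label: "com.example.app.audio.receive",
                                     qos: .userInitiated,
                                     attributes: .concurrent)
    let sendQueue = DispatchQueue(label: "com.example.app.audio.send",
                                  qos: .userInitiated,
                                  attributes: .concurrent)
    let playbackQueue = DispatchQueue(label: "com.example.app.audio.playback",
                                      qos: .userInitiated,
                                      attributes: .concurrent)
}
```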
The service's beginAudioRecording and endAudioRecording functions are now much simpler.

(10:40):
They just call the corresponding methods on the audio manager and update the walkie-talkie state.
The complexity is hidden in the audio manager, and this is an architectural principle, to hide the complexity, which makes the service easier to test and also easier to maintain.
And the key insight here is that the audio operations need dedicated queues to prevent clipping and ensure smooth

(11:02):
operation.
Using user-initiated quality of service ensures that audio operations get priority over background tasks.
Then there was also something that I did, what I like to call the Tool Talk: background task management.
This week I also dove into background task management for the location tracking that the app does.
This is one of those iOS topics that seems simple, but it's

(11:23):
actually quite complex, especially when you're dealing with location services that need to run continuously.
So the challenge was that our app needs to track location even when it's in the background, but iOS is very aggressive about killing background tasks.
The challenge is managing the lifecycle of background tasks properly while ensuring that location updates continue to work reliably.
The specific issues we were facing: we had some duplicate

(11:45):
background tasks: multiple location updates were starting new background tasks without ending previous ones.
We had task expiration: iOS was killing our background tasks before location updates were completed.
Memory leaks: background tasks weren't being properly cleaned up.
And there was also a performance impact, because too many background tasks were impacting the app performance.
So the solution that I came up with is to create a robust background

(12:07):
task management system with proper lifecycle management, in a class we call the MBLocationService.
The key is having a single background task ID property that tracks whether we have an active background task when we
need to do location work.
The startBackgroundTask function first checks if there's already a task running.
If there is, it skips creating a new one.
If we need to start a new task, it calls UIApplication.shared's beginBackgroundTask with

(12:30):
a descriptive name, location update, and an expiration handler.
This expiration handler is crucial.
It gets called if iOS decides to kill the background task before you're done, and it ensures you can actually still clean up everything properly.
The endBackgroundTask function is equally important.
It checks if we have a valid task ID, calls endBackgroundTask to tell iOS we're done, and resets the task ID to invalid.

(12:51):
The updateLocation function ties it all together: start the background task, do the location work, then end the background task.
This ensures that location updates can complete even when the app is in the background, but we're not hogging system resources.
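Sketched out, the lifecycle part of MBLocationService looks roughly like this; the task name and log messages are illustrative, not copied from the app:

```swift
import UIKit

final class MBLocationService: NSObject {
    private var backgroundTaskID: UIBackgroundTaskIdentifier = .invalid

    func startBackgroundTask() {
        // Don't stack tasks: if one is already running, keep using it.
        guard backgroundTaskID == .invalid else {
            print("📍 Background task already running, skipping")
            return
        }

        backgroundTaskID = UIApplication.shared.beginBackgroundTask(withName: "LocationUpdate") { [weak self] in
            // Called when iOS is about to kill the task; clean up no matter what.
            self?.endBackgroundTask()
        }
    }

    func endBackgroundTask() {
        guard backgroundTaskID != .invalid else { return }
        print("📍 Ending background task \(backgroundTaskID.rawValue)")
        UIApplication.shared.endBackgroundTask(backgroundTaskID)
        backgroundTaskID = .invalid
    }
}
```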
So why were we doing this background task work?
What was actually happening?
We were getting a background push notification with a location update.
So that's usually when you enter a geographic region or you

(13:14):
exit the geographic region, and one of the things that this implementation was doing was actually asking the system for an exact GPS location.
And then, because of the way this API works, the function that gets called finishes its processing, because you ask the system to ping the GPS and get you a location, but you get the information through a callback.

(13:37):
So what we did was, before we return from this push notification callback, we actually start the background task.
Then we end this function, then we end the push notification handling.
We have asked the system for a GPS update.
We return. Then, because the background task is active, the operating system keeps the app in memory.

(14:00):
The GPS gets the location and calls back into our application to report the location.
Through these GPS callback functions we get the GPS location, we process that, then we end the task and then return.
What now happens is that the app gets started through one

(14:20):
push notification call.
A background task is started.
This keeps the app alive.
Something happens in the background while we have already returned from the first call.
Then another callback gets called, and that actually clears the background task out of memory, so that the operating system now knows: okay, now it's safe to shut down the app again, so we can start a proper shutdown of the application.

(14:42):
So we needed to make sure that we only started a single background task and did not create a new background task if there was already a GPS ping running.
So the startBackgroundTask function uses a guard statement to check if we already have an active task.
If we do, it logs a message and skips creating a new one.
This prevents duplicate task creation and avoids a big headache for us.

(15:02):
So, and then the cleanup.
The endBackgroundTask function is defensive.
It checks if we have a valid task ID before trying to end it.
It logs the task ID for debugging, calls the system's endBackgroundTask method and resets the internal state.
And, of course, there's the expiration handling.
The expiration callback is where the magic happens.
If iOS decides a background task has run for too long, it

(15:25):
calls this callback.
We use a weak self reference to avoid retain cycles, and we ensure cleanup happens even if the task gets killed unexpectedly.
So this three-part approach, start, work and end, ensures that the background tasks are managed properly throughout the entire lifecycle.
I hope this was an understandable explanation.
So the reason this approach works is that we have a single

(15:46):
background task that keeps the app alive while the GPS ping is running, and it also prevents the app from being terminated.
Of course we are neat citizens on iOS.
We want to do proper cleanup when we have the opportunity.
This prevents memory leaks and resources being retained for too long.
This also happens with the expiration handling.

(16:07):
We've added extensive logging because it's happening in the background.
You can't really see and tell in the app what is happening.
So you have to base your conclusions and your observations on what's appearing in log files, and, where appropriate, we used weak references in closures to prevent retain cycles.
So the tool that we used was the UIApplication background

(16:30):
task.
The key tool was the UIApplication.shared object and then the beginBackgroundTask function on that.
This gives you a limited amount of time, usually about 30 seconds, to complete a background piece of work.
So some best practices here: always end background tasks when you're done.
Don't start multiple background tasks if you can help it.
Handle the expiration callback properly and use descriptive

(16:52):
names where possible, because this really aids in debugging.
And make sure that you do proper memory management.
So weak references where appropriate, make sure that you clean things up, and log the background task lifecycle for debugging so that you can actually see what is happening.
So this background task management system integrates seamlessly with the location service.

(17:13):
When the CLLocationManager calls didUpdateLocations, we extract the most recent locations, start a background task, process the location update and then end the background task.
The processLocationUpdate function does three things: it updates an internal last known location property, it notifies its delegate about the new location so it can be sent to the server,

(17:39):
and it updates the location tracking state.
This pattern ensures that location updates can complete the work even when the app is backgrounded, but we're not leaving background tasks hanging indefinitely.
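Tied into CLLocationManagerDelegate, the pattern from the MBLocationService sketch earlier would look something like this; processLocationUpdate here is a placeholder standing in for the three steps just described:

```swift
import CoreLocation

extension MBLocationService: CLLocationManagerDelegate {
    func locationManager(_ manager: CLLocationManager, didUpdateLocations locations: [CLLocation]) {
        guard let latest = locations.last else { return }

        startBackgroundTask()          // keep the app alive while we work
        processLocationUpdate(latest)  // store, notify, update tracking state
        endBackgroundTask()            // end the task as soon as the work is done
    }

    private func processLocationUpdate(_ location: CLLocation) {
        // 1. Update the internal last known location property.
        // 2. Notify the delegate so the location can be sent to the server.
        // 3. Update the location tracking state.
    }
}
```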
There are some performance considerations you need to be aware of.
Background tasks have a performance impact, so it's important to minimize background task duration.
Keep background tasks as short as possible.
You want to try and batch operations, so if you have related operations, group them together.

(18:00):
Make sure that you monitor your task count, because you know how many background tasks you start, so you need to make sure that you don't start too many, and use an appropriate quality of service if you are using any queues, because that gives you the right priority in the background and gives the system a better chance of giving you some CPU time at an appropriate

(18:21):
time.
So the result of this is, after implementing this background task management, that the location updates now continue to work reliably in the background.
Because it used to be that once the GPS callback into our application was finished and we did our fetching of a location on the CLLocationManager, and we finished our processing, we

(18:44):
returned out of the callback function, and then the system thought: okay, we're done processing, no background task active, kill the application because work is done.
So we've been able to have better quality in our location updates in the background by implementing this background task mechanism.

(19:05):
We made sure that we didn't have duplicate background tasks, and we do proper cleanup to avoid memory leaks.
And we had better performance now because the tasks were very efficient, and because of the logging it was much easier to debug.
So, lessons learned from all the things that I just mentioned, both the audio engine and the GPS stuff.

(19:27):
Audio engineering is hard, so this week really reinforced that.
Audio on iOS is genuinely difficult.
The combination of audio sessions, AVAudioEngine and system audio controls creates complex interactions that are easy to break.
So what I'd do differently the next time is start with a simpler audio architecture and add complexity gradually, instead of doing it in one big move, in one go.

(19:49):
The single engine approach was actually working fine for the use case, but the refactoring was necessary to fix the MPVolumeView issues, because it was a very visual thing that was annoying users.
But technically nothing was really wrong.
We could probably have avoided this issue if we had approached it a little bit more incrementally.

(20:09):
So the real insight is that audio on iOS is not just about code.
It's about understanding how the system works.
So the MPVolumeView issue taught me that iOS interprets audio session changes as user input, which is why the volume slider was moving.
And background tasks, that's the second lesson, need very careful management.
Background task management in iOS requires discipline.

(20:32):
It's easy to forget to end background tasks, which can lead to memory leaks and poor performance, and what I'd do different next time is create a dedicated background task manager class that handles the lifecycle automatically.
This would be a reusable component that tracks multiple named background tasks so that we have some control over that.
The manager would have a dictionary mapping task names to their identifiers and it would provide startTask and endTask

(20:55):
methods that take a name parameter.
This would make it easy to have multiple background tasks running simultaneously without conflicts.
The startTask method would check if a task with that name is already running and, if not, it would create a new background task with the provided expiration handler.
The endTask method would look up the task by name and clean it up appropriately.
This approach would give me a much easier time creating

(21:17):
a scalable and reusable system that works across different parts of the app that need background task management.
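Something like this is what I have in mind, a hypothetical sketch rather than code that exists in the app today:

```swift
import UIKit

final class BackgroundTaskManager {
    private var tasks: [String: UIBackgroundTaskIdentifier] = [:]

    func startTask(named name: String, expirationHandler: (() -> Void)? = nil) {
        // One task per name; ignore duplicate starts.
        guard tasks[name] == nil else { return }

        tasks[name] = UIApplication.shared.beginBackgroundTask(withName: name) { [weak self] in
            expirationHandler?()
            // Clean up even if iOS cuts the task off early.
            self?.endTask(named: name)
        }
    }

    func endTask(named name: String) {
        guard let id = tasks.removeValue(forKey: name), id != .invalid else { return }
        UIApplication.shared.endBackgroundTask(id)
    }
}
```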
And then there's also a third lesson: sometimes the best code is the code you don't write.
I spent hours trying to optimize audio buffer management, only to discover that the current implementation was already performing well.
This is a good reminder that premature optimization is still the root of all evil, and the real lesson here is that

(21:38):
performance optimization should be data-driven.
I was optimizing based on assumptions rather than actual performance measurements.
The current implementation was already efficient and my optimizations actually made things worse.
So what I'd do differently next time is to measure before I get started, and once I have measurements, do small increments and see if those actually generate an improvement

(21:59):
over the quality of processing that we already had, and you can really use Instruments here to profile the actual performance.
And then you can optimize based on real data.
And there's a fourth lesson here: logging is your best friend.
Audio debugging is nearly impossible without extensive logging.
I added logging at every step of the audio pipeline, which

(22:21):
made debugging much simpler.
And a pro tip is to use emojis in your log messages to make them easy to spot in the console.
So a microphone emoji for recording, a speaker emoji for playback, and one of these map pins if you want to log something about locations.
And the real insight here is that logging isn't just for debugging, it's also for understanding.
By logging the audio session lifecycle, I was able to exactly

(22:43):
see when and why the MPVolumeView was moving.
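If you're on the unified logging system, that can be as simple as this; the subsystem string is a placeholder, and os.Logger assumes iOS 14 or later:

```swift
import os

let audioLog = Logger(subsystem: "com.example.app", category: "audio")

func logSomeAudioEvents() {
    audioLog.debug("🎙️ Recording engine started")                     // microphone for recording
    audioLog.debug("🔊 Scheduling playback buffer")                    // speaker for playback
    audioLog.debug("📍 Background task started for location update")   // map pin for locations
}
```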
And then there's even a fifth lesson: threading is critical for audio.
Audio operations need dedicated threads to prevent clipping and ensure smooth performance.
The walkie-talkie service uses three separate dispatch queues: one for receiving audio data, one for sending audio data and one for playing back audio.
Each queue has a descriptive label, which aids in logging and

(23:06):
debugging.
That label includes our app's bundle identifier and the specific purpose of the queue.
They all use user-initiated quality of service, which gives audio operations priority over background tasks, and concurrent attributes to allow multiple operations to run simultaneously.
This separation is crucial because audio is time sensitive, as I already mentioned. If audio is not processed in time

(23:28):
or interrupted, you get clipping, stuttering and dropped audio.
And by giving each type of audio operation its own high-priority queue, we ensure smooth operation.
So audio operations are time critical, and if you use user-initiated quality of service, that ensures that audio operations get the exact priority that they need.

(23:50):
And then there's a bonus tip, a bonus lesson I should say, and that is number six: code cleanup is worth the time.
Removing dead code and deprecated functionality might seem like busy work, but it's actually crucial for maintainability.
This week I removed an unused MPVolumeView slider extension, a bunch of deprecated audio session code and some

(24:10):
unnecessary @MainActor annotations.
That code creates confusion and makes debugging harder.
By removing unused code, I made the code base cleaner and easier to understand.
And user experience matters.
That was the whole purpose of diving into this MPVolumeView.
This is lesson 7.
The MPVolumeView was driving users crazy, even though it

(24:32):
wasn't technically a bug in the app, and users don't care about the technical details, they just want the app to work as expected, and a volume slider moving around on its own is not really expected behavior.
User experience bugs are just as important as functional bugs; sometimes the most satisfying fixes are the ones that improve user experience, even if they're not technically complex, while in this case,

(24:55):
getting it right was actually quite complex.
So, looking ahead a little bit, I'll probably be working on some more audio performance optimization, some more background task monitoring and some error handling to improve things, and I want to get started on the SwiftUI integration and I want to do some Combine

(25:16):
refactoring, because there's a mix of reactive code that uses delegates and Combine, and we need to standardize on one approach for better consistency and easier testing.
So what I'm really excited about is the SwiftUI integration.
I'm planning to start integrating SwiftUI in the existing UIKit app.
This will be a gradual migration, of course, starting

(25:36):
with new functionality and features, and I'm particularly excited about using SwiftUI for the walkie-talkie interface or some new component that we might be adding.
I also want to improve the testing infrastructure because, especially for audio input components, audio testing is notoriously difficult, but I think we can create some good test harnesses there.

(25:57):
Do iOS 2025 is happening and I'm really excited about the upcoming event in Amsterdam this November.
It's on November 11th to 13th at NEMO Science Museum and it's going to be an incredible opportunity to connect with fellow iOS developers and learn about the latest trends in iOS development.
The conference features workshops on topics like building connected devices with embedded Swift, plus two days of

(26:18):
inspiring talks from industry leaders.
If you're interested in iOS development, I highly recommend checking out do-ios.com.
So that's do-ios.com, link in the show notes.
It's going to be a great way to stay current with the latest iOS technologies and network with other developers facing similar challenges.
So what I'm dreading is iOS 26 compatibility.
iOS 26 is now released and we really need to start to test our

(26:41):
audio implementation and see if there are other issues.
I did cursory tests and it all seems to work out fine, but there's always this thing that you didn't look at or that you forgot about and that we really need to get right.
And I also want to look at memory management in the app.
There are some strange things that we noticed and we need to

(27:01):
be very careful when dealing with memory management, and there are some complex retain cycles, and we need to ensure that everything is properly cleaned up.
So I really think, in the big picture of things, this week was very foundational.
I've established a solid audio architecture that should serve well going forward, and the next phase is about optimization and

(27:24):
integration.
I want to make sure that the audio system is not just functional, but also that the performance is good and that it stays maintainable.
And the most important thing that I learned this week is that audio on iOS is a system-level concern.
It's not just about writing code.
It's about understanding how the system works and how different components interact.
This understanding will be crucial as we continue to build

(27:45):
and optimize the audio features.
So I want to hear from you about your experience with audio on iOS, of course.
So some questions for you.
If you care, use one of the channels that I'm available on to answer those.
Audio architecture, that's the first question.
How do you architect audio components in your apps?
Do you use separate engines for recording and playback?

(28:06):
Second of all is background tasks.
What is your approach to background task management?
Have you found any patterns that work well for you?
Testing audio: how do you test audio functionality?
What tools and techniques have worked for you?
Performance: I mentioned Instruments.
Are there any ways that you test performance, or are there different tools that you use, and what specific metrics are you

(28:29):
mostly interested in?
And then SwiftUI and audio.
That's probably something that I need to do at some point as well.
Did anybody integrate SwiftUI with audio components, and were there any specific challenges that you faced?
And a final question is the Do iOS conference: are you planning to attend Do iOS in Amsterdam?
And, if you look at the speaker list and their topics, what are

(28:52):
the topics that you're most interested in?
As always, you can reach me on Twitter at appforce1, that's app force and then the numeral one, on LinkedIn, Mastodon, Bluesky, anywhere.
I will make sure that those links are in the show notes.
Make sure to reach out, and you can always use the Text the Show feature of my podcast, and your input will really help

(29:16):
me shape future episodes and directions of this work log.
Also, if you're planning to attend Do iOS, I'd really love for you to connect with me there.
It's always great to meet fellow iOS developers in person and share experiences, and you can find me at the conference, of course, because I'm organizing it, and I'll be sharing some insights from this work log and audio engineering challenges

(29:38):
with anyone who's interested while we're there.
So this week was all about audio engineering and background task management, two areas that are notoriously tricky in iOS development, and this is what we covered.
So the key achievements were that the audio engine refactoring was successful.
I split the AVAudioEngine into separate recording and playback engines.
The MPVolumeView issue was fixed, so this thing is not

(30:02):
moving around unexpectedly anymore.
We fixed the background task management, so we implemented proper lifecycle management for location tracking, and I did a lot of code cleanup by removing dead code and deprecated functionality.
We did a technical deep dive on the audio architecture.
I tried to do a detailed walkthrough of the audio engine split, how we managed the audio sessions, so a unified audio

(30:25):
session helper to prevent conflicts, and making sure that we don't set audio session properties if they're already set, so that you don't set the same values twice.
Background task lifecycle, so proper management of iOS background tasks, some threading issues, and there were also some lessons learned.
In recap, audio on iOS is genuinely difficult and requires

(30:46):
system-level understanding.
Background tasks need careful lifecycle management.
Sometimes the best code is the code that you don't write.
Logging is essential for audio debugging.
Threading is critical for audio performance.
Code cleanup is worth the time.
User experience bugs are just as important as functional bugs.
And, looking ahead, I'm going to be working on audio performance optimization, SwiftUI integration, Combine

(31:08):
refactorings, iOS 26 compatibility testing and verification, and some legacy code cleanup, because we're probably going to deprecate the support for older iOS versions, because we're now supporting back to iOS 15.
I'm aiming for, at a minimum, iOS 17.
And I'm hoping that this episode demonstrated what real

(31:29):
iOS development looks like: the challenges, the debugging process and the satisfaction of solving complex problems.
And next time we'll dive into the things that I'll be working on this week.
Keep building amazing iOS apps, and that's it for this week's work log, and I'll keep you posted on any developments, and make sure to check the links in the show notes.