
July 5, 2022 47 mins

On this week's episode, we break down how camera APIs work in Android and why third-party camera apps just can't match the features and quality produced by the stock camera. Long story short, it's a mess. What gives? And what's being done about it?

We're joined by Mohit Shetty, a developer behind Secure Camera, the camera app included with GrapheneOS and also available to everyone on the Play Store.

  • 01:48 - How does hardware fragmentation make camera app development on Android inherently more challenging than on iOS?
  • 03:52 - Was there anything Google could have done in the early days to make things better?
  • 08:21 - Why don't OEMs bother with making sure third-party camera apps work the same as the stock camera app?
  • 12:27 - What are some features that OEMs can't expose to third-party camera apps through Android's camera API?
  • 17:20 - How does Android's camera architecture work? What is Camera HAL 3?
  • 20:23 - How will Google Requirements Freeze (GRF) affect camera HAL versioning?
  • 24:11 - How do third-party camera apps interface with multiple cameras?
  • 29:28 - What is the Camera2 API?
  • 32:52 - What is CameraX and what can (and can't) it do?

Android Bytes is hosted by Mishaal Rahman, Senior Technical Editor, and David Ruddock, Editor in Chief, of Esper.


Esper enables next-gen device management for company-owned and managed tablets, kiosks, smartphones, IoT edge devices, and more.



Our music is "19" by HOME and is licensed under CC BY 3.0.


Episode Transcript

Available transcripts are automatically generated. Complete accuracy is not guaranteed.
David (00:01):
Hello, and welcome to Android Bytes, powered by Esper. I'm David Ruddock, and each week I'm joined by my co-host Mishaal Rahman as we dive deep into the world of Android. This week, we're talking about something that I think everyone on Android has had some kind of frustrating experience with, which is how third-party applications, be they Snapchat, Instagram, or any other

(00:23):
application that has to use the camera on your device to capture content, interact with the operating system. How do they interact with your device, and what are the challenges associated with that? And we have a guest today who is really experienced in working with the Android camera APIs, and, Mishaal, I'll let you introduce him.

Mishaal (00:40):
Thanks, David. So on today's episode, we've invited Mohit. He's part of the GrapheneOS project. If you recall, we talked about this project on a previous episode of the show, when we invited one of the developers of the actual operating system, but today we're talking about the Android camera APIs. So we wanted to invite an expert who has worked with the Android camera APIs.

(01:03):
That's why I invited Mohit, who's part of the development of the camera application on GrapheneOS. So Mohit, can you just give us a brief introduction? Just tell us, what is the application that you work on at GrapheneOS?

Mohit (01:18):
Hi everyone. So basically, I've been working on the Secure Camera app at GrapheneOS over the last few months. Currently, this application is being used in production by GrapheneOS on all of its devices as a replacement for the default AOSP camera app. Apart from that, you can always find the application on the Play Store, where

(01:42):
we support Android devices starting from Android 10.

Mishaal (01:46):
Yeah, thank you, Mohit. So he's working on a camera application, and of course, in order to develop a camera application that supports multiple devices, you need access to the Android camera APIs. But as David mentioned at the beginning of this show, for those of you who are using Snapchat or Instagram, you're wondering: how do they work, and why do they not take

(02:06):
nearly as good photos as their stock camera app counterparts? It's a really tricky and complicated question to answer. If you look on the iOS side of things, that's not true: you'll take pretty much identical photos with the stock iOS camera app as you do with those social media apps. But on Android, you'll get very different results depending on the device.

(02:27):
The issue, of course, stems from the big F word, the one we always bring up on pretty much every episode and in every newsletter: fragmentation. But how exactly does fragmentation affect things? There are two aspects: there's camera hardware fragmentation, and there's camera software fragmentation. First of all, hardware fragmentation. This is the reason why everything is the way it is and where camera

(02:50):
software fragmentation stems from. This one's kind of easy to imagine. Hardware is fragmented because, whereas on iOS you have one manufacturer, Apple, who makes a handful of phones and controls the software stack from top to bottom, and there's only a handful of image sensors to consider.
On Android, on the other hand, you have thousands of different device models

(03:11):
from dozens of different brands, each running their own operating system fork of AOSP, with different silicon from MediaTek, Qualcomm, et cetera. Thus they have their own image signal processor implementations, and then they also source image sensors from different vendors like Samsung and Sony. So you have all these different combinations of hardware and

(03:32):
software to consider in the Android ecosystem. If you're an app developer looking to create a camera app that supports all these different combinations of devices, you're gonna have a bad time, because there are so many different quirks and different capabilities across devices that it's a staggering mess to consider.
So Mohit, I wanted to ask you, just taking a step back and

(03:55):
looking at the overall picture: do you think it was inevitable that the Android camera implementation situation would become a mess? Was there anything that Google could have done, or maybe mandated, in the early days of Android so we wouldn't get to this point?

Mohit (04:10):
So before we speak about how we could have solved this problem back then and dive deeper into this discussion, it is important for us to understand what the problem really is. The messy Android camera implementations that are often found in apps that directly use the Camera1 or Camera2 API, targeting a wide range of devices, are mainly attributed to the OEMs or vendors

(04:33):
not correctly implementing these standard APIs with the hardware that the device relies on. This in turn leads to other apps having to somehow work around those unexpected issues from time to time. While trying to solve this may sound as simple as just debugging and analyzing what could have possibly gone wrong, since having access to device-

(04:53):
specific code isn't always feasible, that isn't actually the case. It's much worse than that. Workarounds that work for a specific device may or may not work for another, or, even worse, could cause existing support for another device to break. This leads to having an entire mapping of device-specific implementations just for a single functionality, even for a simple camera app that's

(05:17):
expected to work across a wide range of standard Android devices. Now try imagining this for every possible camera functionality, and, more importantly, imagine your code and project that was meant to fulfill a different set of requirements but is now busy heavily investing resources into working around device-specific bugs. All these issues could have largely been resolved if Google or the OHA (Open Handset Alliance) back then had

(05:41):
developed, promoted, and well enforced stricter Compatibility Test Suite and Compatibility Definition Document requirements, that's CTS and CDD requirements, instead of the current scenario where vendors don't really need to get their devices certified as CTS or CDD compliant before

(06:02):
releasing that device into the market. The scope of these tests could have been expanded further, and requirements that are described as recommendations could be made mandatory by enforcing the passing of the CTS and CDD for vendors. Apart from that, they could have also designed better camera APIs from the beginning that are not needlessly complex for both the app developers

(06:26):
and the camera driver or hardware abstraction layer implementations to handle, thereby making the overall process of writing compliant code much easier.

David (06:36):
So I guess I have a question here, and it's kind of philosophical. If Google were more strict about this and had a much more rigorous CTS process around camera API compatibility and features, how much do we think this would have reduced the amount of innovation that has happened in handsets?

(06:57):
Because they've been able to do things that break Android's camera framework, essentially, because those things won't work with third-party apps the right way, but they will work with the vendor's camera application, where they've developed this very special functionality, whatever it is: super zoom, or some kind of special portrait mode, or even lighting filters

(07:22):
can be stuck behind that wall now. So I guess that's my question: you know, if Google were more strict, would we see fewer camera features on phones?

Mohit (07:32):
For that, we could separate those implementations. For example, for the basic functionalities that other applications are expected to use, we could have a standard, separate implementation, and in order to add those additional features, such as the filters that we see on modern phones, we could split them into a different implementation that could have been interfaced on top of the

(07:54):
existing basic implementations. For the initial basic implementations, we could have ensured that all the test cases passed, and we could have, as I just said before, expanded the CTS further. Requirements that are described as recommendations in the docs could be made mandatory by enforcing the passing of the CTS or CDD for the vendors who want

(08:18):
to release their devices into the market.

Mishaal (08:20):
So I have a kind of different take on this question that you asked, David: would innovation be hampered by enforcing camera compatibility with third-party applications? Because of the way things work right now, I think the answer would be yes. There are ways that OEMs could implement feature parity between

(08:42):
what their stock camera has access to and what third-party camera apps have access to. Of course, as Mohit mentioned, the APIs themselves are a little frustrating and a little difficult to use, and Google has made strides over the years to improve them, but the APIs are there. They could expose a lot of the functionality they offer in their stock camera app. But why don't they? The question then is:

(09:03):
what would imposing a CTS requirement actually do? I think that would just significantly delay the launch of devices. The way the Android market works right now is, if you're one of the budget brands competing in Southeast Asia, right, you don't have time to focus on making sure that on your latest mid-range phone, your stock camera app and third-party camera apps have

(09:24):
access to the same functionality, right? You've gotta get that phone from concept to design to testing to launch within maybe a year or less for some of these mass-market phones. And if you're doing that with a whole range of, you know, budget, mid-range, and sometimes high-end phones, you just don't have the time and resources to invest in making sure all of the innovative features that you wanna market on your

(09:48):
phone, in your stock applications, will work the same in third-party camera apps, because there are just so many different considerations to implement. Like, why bother doing it?

Mohit (09:58):
And the solution for that could be that Google or the Open Handset Alliance could have designed a set of standard classes, tied up with certain camera hardware companies, and given out the code for each of those standard classes in the AOSP source code itself. That would even ensure quick delivery of

(10:20):
devices to the market, and we could have a fallback class for all the other vendors that wish to have their own well-researched implementations. And over there we could then enforce the CTS and CDD thing in a much stricter way, so tracking those issues would be much easier than

(10:40):
having the mess we currently are in. That's just an out-of-the-box idea; I haven't really worked with hardware and all, so it's an abstract solution in a way.

Mishaal (10:49):
That is a good point that you brought up, though. I think that would be the most effective solution: Google bypassing working with OEMs and instead working directly with the ISP designers and the image sensor vendors, because OEMs have multiple different models, but the ISP and the image sensor vendors are distributing just a few specific products, and they're writing the drivers.

(11:10):
So if Google could get them to standardize the way their drivers interact with Linux and Android, then that would do wonders for how all of that propagates through the market. But there are some complications to consider. First of all, is there even enough of a desire from consumers for OEMs to actually bring feature parity between stock camera apps and third-party camera apps?

(11:34):
We have seen, you know, some marketing from Samsung and Google, the partnerships with Snapchat and Instagram and whatnot, so there is clearly some potential there, because they're not gonna be marketing these features if they think people don't care about them. Second, are they even legally allowed to expose some of the features that they offer in their stock camera app? There are a lot of vendors behind the scenes, whose tech is in smartphones, that you've never heard of.

(11:54):
There are some facial recognition vendors who provide their software implementations to smartphone makers. We don't know the exact terms of their agreements, but maybe their terms say: you're only allowed to use this in your camera app, and we don't want our technology being used by any arbitrary third-party app, because we don't get a licensing fee from them. So what would happen in that situation? Would the OEM even be allowed to expose their bokeh mode, derived from some

(12:17):
third-party tech? Maybe, maybe not. So that could be one of the reasons why, you know, these features aren't being exposed to third-party apps across the board.
So Mohit, I kind of wanted to ask you: there are ways for OEMs to expose features to third-party apps through the camera APIs, but there are also a lot of features that can't

(12:38):
be exposed to third-party apps simply because the Android camera APIs don't provide a way to expose those features. So what are some examples of features that, say, you can't implement in Secure Camera because there's just not an API for it?

Mohit (12:52):
So the Secure Camera app at GrapheneOS has mainly focused on providing the most simple features to users initially. So for the time being, we don't have any such feature in mind that cannot be implemented, but we are currently sticking to only using the CameraX library to ensure that we don't end up spending too much time dealing with device-specific quirks.

(13:16):
So in my experience, there are not really many examples of such cases in general. The camera APIs that Android provides, while being complex, are highly extensible in nature, and hence implementing any valid feature with some additional code or library support isn't impossible as such. But of course there could be other limitations

(13:38):
based on the scope of the project, or maybe how practical it is to implement a certain feature in terms of its maintenance, or maybe hardware limitations that could probably make it impractical to have a certain feature on quite a lot of the devices that the project targets.

(14:00):
So it mainly depends on how the developers perceive the problem, what the scope of the project is, and how much time and effort they are willing to give to it. And of course there can be hardware limitations, apart from that.

Mishaal (14:10):
Right.
So can you just give me some examples of features that you can't implement because of API limitations? Like portrait mode, is that something you can implement? Beauty mode, is that something you could implement, et cetera? What are some features that you can't, because of, you know, a lack of support in the API?

Mohit (14:27):
So one of the straightforward examples of that could be the vendor extensions that the CameraX library provides, which aren't available on most of the devices in the market, and which mainly include the portrait mode and the night mode.

David (14:42):
Could that include augmented reality features, for

Mohit (14:45):
example? Yeah, we could do that, but as far as I know, the Android camera APIs don't support anything related to augmented reality; we could surely use some external library for the same.

Mishaal (14:58):
Yeah, that's a good one, David. The Pixel camera used to have AR Stickers, I think they were called, directly integrated in the camera app, and there was just no way that was exposed to third-party apps via an API. Of course there's a separate ARCore API that's part of Google Play services, but the actual AR Stickers feature in the Google Camera app, that was just it;

(15:19):
it wasn't exposed to third-party apps as far as I'm aware. So speaking of the camera API: we've kind of talked about the hardware side, the device fragmentation side. Now I wanted to actually talk about the API itself. I think that's probably the most interesting part, the thing people actually want to hear about. Clearly the APIs have evolved over the years, right? Mohit mentioned we had Camera1 and we have Camera2, and I'm sure people

(15:42):
have also heard of Camera HAL3. So what do these numbers mean? We'll get to that in a bit, but I wanted to talk briefly about the evolution of the camera APIs and how they work. It's only recently that the camera APIs have become not a nightmare to use. As Mohit mentioned, the Secure Camera app uses the CameraX API, which is part of the reason why camera

(16:03):
app development has become simpler. But in the past, most camera app makers had to use the framework camera APIs, and those were kind of a nightmare to implement, from what I've heard reading developer forums. People speculate that the reason Snapchat for many years used to just take a screenshot of the viewfinder, instead of actually using the camera API directly

(16:24):
to take a photo, is because they'd rather deal with a low-quality screenshot of the viewfinder than actually implement the API on every device they wanted to support. And remember, this is Snapchat, so that's millions, tens of millions of users on multiple different devices. Then, rather infamously, a few years ago, Moment, who sells camera hardware accessories and also makes an app for iOS called Pro Camera,

(16:48):
actually tried to port their Pro Camera app to Android. But after two years they just gave up. They just said, we quit. And they gave the reason to 9to5Google: they said the reason we quit is because of fragmentation. They had this chart that showed, here are all the features we support on Pixels, on Samsung devices, and there's a whole bunch of green, a whole bunch of yellow, a whole bunch of red.

(17:08):
It was just so inconsistent, what they had to support, and there are so many different models. It just became not really feasible to support without investing significant man-hours. So why is this such a nightmare? What exactly led Moment to quit? Let's talk about the Android camera architecture a bit. At the low level, you have the drivers for the actual camera,

(17:29):
slash image sensor, on the device. Those are written by the image sensor vendors, i.e. the Samsungs and Sonys, and distributed to OEMs for integration into their builds. Next in the pipeline, all the raw Bayer data from those image sensors is processed by the image signal processor that's part of the SoC inside the device. And of course, that image signal processor is developed by another company,

(17:52):
a Qualcomm, MediaTek, or Samsung. That ISP does processing of its own on the raw Bayer output from the image sensors. That doesn't happen if, you know, you're taking a RAW photo, of course, but that's another topic entirely; it's a feature that's not supported on every device. So you have the drivers that are closed source and provided by the image sensor vendors,

(18:14):
and then you have the ISP architecture implementation that's written by the silicon vendor. Both of those are pretty much black boxes to camera apps. You have no insight into exactly what capabilities they have, or their data sheets. You can instead only rely on what capabilities they expose to the framework, which is determined by the hardware abstraction layers that are

(18:36):
written by the OEM slash the vendors. So apps on the Android side, they interact with the camera hardware using the Camera2 API, which is the framework API that talks to the underlying camera service in Android, which then interacts with the camera hardware abstraction layers. Hardware abstraction layers, for those who don't know, define the standard interface between the higher-level Android framework

(18:59):
and the lower-level camera driver. And as I mentioned, that implementation is what defines what capabilities are exposed to apps. There are multiple camera HAL interfaces that OEMs have to implement: there's a camera provider HAL and there's a camera device HAL. But the problem is that OEMs aren't required to implement a recent version of each

(19:20):
HAL, nor are they required to implement every capability introduced with each HAL version. So as Mohit mentioned, Google could update the CDD and the CTS to test for more recent versions of these HALs and see if OEMs have actually implemented them and defined certain capabilities, but they don't. Right now, in order to pass certification, if you launch a device with Android 10

(19:43):
or later, you only have to implement camera device HAL version 3.2 or later. HAL 3.2 was actually introduced all the way back with Android 5.0 Lollipop. Even Android 13 still has backward compatibility with HAL 3.2, even though the latest HAL is 3.8, which added support for a really basic feature:

(20:03):
the flashlight brightness control. So as you can see, a lot of features are being added along the way, some even quite basic, but because there's no specific requirement to implement a specific HAL version, it's all gonna depend on the OEM and the silicon vendor: what exactly are they willing to implement and expose to third-party camera apps?
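[Illustration, not from the episode: a minimal Kotlin sketch of how that HAL 3.8 capability surfaces to apps on Android 13 (API 33). If the vendor's HAL doesn't report a flash strength level above 1, a third-party app can only toggle the torch on and off.]

```kotlin
import android.content.Context
import android.hardware.camera2.CameraCharacteristics
import android.hardware.camera2.CameraManager
import android.os.Build

// Returns 1 when the HAL doesn't implement flash strength control.
fun maxTorchStrength(context: Context, cameraId: String): Int {
    if (Build.VERSION.SDK_INT < Build.VERSION_CODES.TIRAMISU) return 1
    val manager = context.getSystemService(CameraManager::class.java)
    return manager.getCameraCharacteristics(cameraId)
        .get(CameraCharacteristics.FLASH_INFO_STRENGTH_MAXIMUM_LEVEL) ?: 1
}

fun setTorch(context: Context, cameraId: String, level: Int) {
    val manager = context.getSystemService(CameraManager::class.java)
    val max = maxTorchStrength(context, cameraId)
    if (Build.VERSION.SDK_INT >= Build.VERSION_CODES.TIRAMISU && max > 1) {
        // Android 13 API that only works when the HAL advertises the capability.
        manager.turnOnTorchWithStrengthLevel(cameraId, level.coerceIn(1, max))
    } else {
        manager.setTorchMode(cameraId, true) // fall back to plain on/off
    }
}
```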

David (20:22):
Mishaal, you know, before we drop into the next part: the reason this exists is basically the Google Requirements Freeze, right? Because Google has helped let the phone vendors lag behind, essentially, by up to, what, three years effectively, or more? Or is this a different situation?

Mishaal (20:38):
That's actually something that's going to make this situation even worse, because in the past, the hardware abstraction layers could be updated. They were basically required to be updated whenever a vendor like Qualcomm would have to update their implementation to support a newer Android version. But with GRF, as you mentioned, say a device launches with Android 11; then

(21:00):
whatever hardware abstraction layers that device shipped with will never be updated until that device is updated to Android 15, because Google is guaranteeing backward compatibility with vendor implementations that are three letter versions behind. So the device can run Android 12, Android 13, Android 14 and keep the same hardware abstraction layers and the kernel interface that were

(21:23):
introduced with Android 11. For example, the camera HAL 3.8, which introduces support for flashlight brightness: a device that launches with Android 12 is not gonna have 3.8, because that version wasn't even introduced until now, until this release. So a device upgrading to Android 13 probably won't even get support for this basic flashlight brightness control feature, because vendors aren't

(21:44):
gonna go back and update the HAL. So yeah, it's a mess, because there's no real requirement on what OEMs have to expose and actually implement in their HALs. But of course, Google chugs along. They keep updating the underlying hardware abstraction layer interface. They keep defining new capabilities in each HAL version.

(22:05):
For example, with camera device HAL 3.5, they introduced the ability for OEMs to define a zoom ratio, which actually provides support for optical camera zoom capabilities to third-party apps. Then, of course, as I mentioned, 3.8 introduced flashlight brightness control. And then also in Android 11, with 3.5, they introduced the ability for OEMs to expose bokeh support.

(22:28):
So they have to define these certain constants in their hardware abstraction layer to expose optical camera zoom capabilities and bokeh. It's basically up to the goodwill and the willingness of the OEM and the vendor they got their HALs from to implement these features.
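[Illustration, not from the episode: the zoom ratio capability mentioned above reaches apps through the Android 11 (API 30) Camera2 keys sketched below. If the OEM's HAL doesn't populate the range, or caps it at 1.0x, there is nothing a third-party app can do about it.]

```kotlin
import android.content.Context
import android.hardware.camera2.CameraCharacteristics
import android.hardware.camera2.CameraManager
import android.hardware.camera2.CaptureRequest
import android.util.Range

// Whatever the HAL advertises, e.g. 0.6..10.0 on a device exposing its ultrawide. API 30+.
fun zoomRange(context: Context, cameraId: String): Range<Float>? =
    context.getSystemService(CameraManager::class.java)
        .getCameraCharacteristics(cameraId)
        .get(CameraCharacteristics.CONTROL_ZOOM_RATIO_RANGE)

fun applyZoom(builder: CaptureRequest.Builder, range: Range<Float>, requested: Float) {
    // Clamp to the advertised range; the camera service rejects values outside it.
    builder.set(CaptureRequest.CONTROL_ZOOM_RATIO, requested.coerceIn(range.lower, range.upper))
}
```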
So I wanted to ask you now, Mohit, if you are aware of any way your work has been affected by the HAL versions and implementations across

(22:52):
different devices. Like, are you able to access certain features that are technically supported by Android, but because the device doesn't have the right HAL version, you know, you can't actually use them in Secure Camera?

Mohit (23:06):
So the Secure Camera app that we are working on at GrapheneOS, at least for most of the time, hasn't directly relied on the hardware abstraction layers or the Camera1 or Camera2 API. The main reason why we push for such a design is the device-specific issues that we would otherwise have to spend a lot of our time dealing with, which could unexpectedly blow up as more features outside CameraX's supported set accumulate

(23:28):
and as the app gets used by more kinds of devices, which is quite closely related to the fragmentation issue that we were discussing earlier on this podcast. This valuable time could be spent by contributors in other places that require more help and attention in GrapheneOS.
However, we did recently make a few exceptions to this rule by supporting a feature that relies

(23:49):
on the Camera2 interop API and by introducing experimental support for ZSL, which has not been supported by a lot of devices for the time being. So coming back to the question: no, we haven't really faced any issues or troubles while dealing with different versions of the HAL, primarily because we haven't really dealt with them in the code of our camera application.

Mishaal (24:11):
Yeah.
At least with Project Treble's introduction and the vendor test requirements surrounding HAL releases, you can be assured that at least the main rear-facing and front-facing cameras will be operable on any given device. So for example, if you were to take a device that supports Project Treble, such as the Lenovo Tab K10, and you were to flash a generic system image

(24:34):
of Android 11, 12, or 13 onto it, very, very likely you could just open the AOSP camera app and the rear-facing camera and the front-facing camera would work. And the reason is because that's something Android actually mandates testing for: part of the vendor test requirements is that at least the main rear-facing and the front-facing cameras have to be operable.

(24:57):
But of course, nothing else is guaranteed. You're not guaranteed to have the image processing models and the add-on camera features that the stock camera app has. Most devices, or at least most smartphones, these days have multiple rear cameras; if you try to use those on a GSI, you probably won't be able to actually access the secondary cameras.

(25:18):
Android actually does provide support for third-party camera apps to use those cameras, but the issue is that it's through an API that Google introduced in Android 9 called the multi-camera API. What OEMs have to do is define logical camera devices, logical being, like, they're not physical.

(25:38):
These logical cameras are composed of two or more physical cameras that point in the same direction. So for example, you can have a main camera and a telephoto camera, and you can create a logical camera that is composed of the main and the telephoto. The benefit of doing that is that the third-party camera app sees it as one camera, and it can change

(25:59):
between the two basically seamlessly. So if you're zooming from 1x to 5x, the underlying camera HAL would basically handle the switching between the lenses seamlessly. The app itself wouldn't have to manually detect, oh, I'm supposed to change lenses here; the logical camera interface would define that change.
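[Illustration, not from the episode: a Kotlin sketch of how a third-party app discovers the logical cameras described above. Only cameras the OEM chose to expose show up at all, and only those flagged as logical multi-cameras reveal the physical sensors behind them.]

```kotlin
import android.content.Context
import android.hardware.camera2.CameraCharacteristics
import android.hardware.camera2.CameraManager

// Maps each exposed logical camera ID to the physical camera IDs it is composed of.
fun logicalCameras(context: Context): Map<String, Set<String>> {
    val manager = context.getSystemService(CameraManager::class.java)
    return manager.cameraIdList.mapNotNull { id ->
        val chars = manager.getCameraCharacteristics(id)
        val caps = chars.get(CameraCharacteristics.REQUEST_AVAILABLE_CAPABILITIES) ?: intArrayOf()
        if (CameraCharacteristics.REQUEST_AVAILABLE_CAPABILITIES_LOGICAL_MULTI_CAMERA in caps) {
            // These are the sensors the HAL switches between as the zoom ratio changes.
            id to chars.physicalCameraIds
        } else {
            null // not a logical multi-camera; the app sees just this one device
        }
    }.toMap()
}
```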
The only problem, of course, is that this is yet another thing that

(26:19):
OEMs don't have to implement. Google has tried to make support for a multi-camera implementation mandatory, but they actually reneged, because that conflicts with GRF, which is something David mentioned earlier. They wanted to make it mandatory for all Android 12 launch devices to support multiple cameras. They wanted to make it so that if a device ships with two

(26:40):
or more rear cameras on the back, the OEM has to define at least one logical camera for those rear cameras. But the problem is that because devices can launch with Android 12 while running Android 11 vendor software, because of GRF, Google can't mandate that, because that would make those devices not capable of supporting this feature and thus

(27:01):
not capable of launching with Android 12.
So, Mohit, I wanted to ask you a bit about your thoughts on the multi-camera support in Android. What are your general thoughts on Android's support for exposing multiple cameras and actually using them in apps? Do you think a requirement for OEMs to support the API would actually make a difference?

Mohit (27:20):
The multiple cameras that we often find on modern Android phones have, for most of the time, had separate IDs assigned to each of them, such that we could choose a camera by its vendor-specified ID. A while ago, Android came up with support for a multi-camera API that can be used to merge multiple cameras into a single logical camera

(27:43):
instance having its own ID, which is often used to enhance the zoom levels provided by a single logical camera. Although the API, again, can certainly be used for several other purposes, zooming is just one of them. For the Secure Camera application, yes, it does make a difference in the range of zoom levels

(28:04):
that we provide to our users. The multi-camera zoom is expected to be implemented by the vendor in our case. So, for example, we've come across many instances, especially in the case of non-Pixel devices, where the device did have the hardware required to enhance the zoom range, such as an ultrawide lens to support zooming out a bit further,

(28:25):
but the devices themselves didn't support zooming below the default 1x, mainly because the vendors didn't actually implement it, for their own reasons, such as one case where the device actually predated the feature itself, so the vendor could no longer roll out updates since they no longer supported that device.

(28:46):
So now, having a generic solution by maintaining a database of physical camera IDs for all the Android devices out there wouldn't really be feasible, and perhaps it isn't even a viable solution for the CameraX team, as there's a possibility that these IDs may just change in between updates, which could again drastically increase the complexity and overall effort required

(29:10):
for maintaining this entire workaround. Hence, according to us, it might be better to mandate multi-camera zoom in the test suite for all upcoming devices that otherwise have the hardware to support it.

Mishaal (29:25):
Okay.
So yeah, you just mentioned something I actually wanted to talk about a little bit next: the camera IDs and how apps are actually supposed to be able to control the camera hardware using the API. I've already mentioned the Camera2 API, which is the actual framework API that enables enumerating, i.e. listing, the camera devices that are available on the

(29:48):
device, or rather, that are exposed to Android. That API also lets apps connect to those devices, configure the outputs, send capture requests, and then read the resulting metadata and image data. So this API, it's a little difficult to use, because there's a lot of legwork, a lot of preparation work, that apps have to do before they can actually start doing capture requests.
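[Illustration, not from the episode: a trimmed-down Kotlin sketch of that legwork. Even the simplest Camera2 path means a background handler, an asynchronous device-open callback, a capture session, and only then a request; error handling and permission checks are omitted here.]

```kotlin
import android.annotation.SuppressLint
import android.content.Context
import android.hardware.camera2.CameraCaptureSession
import android.hardware.camera2.CameraDevice
import android.hardware.camera2.CameraManager
import android.os.Handler
import android.os.HandlerThread
import android.view.Surface

@SuppressLint("MissingPermission") // assumes the CAMERA permission was already granted
fun startPreview(context: Context, cameraId: String, target: Surface) {
    val manager = context.getSystemService(CameraManager::class.java)
    val handler = Handler(HandlerThread("camera").apply { start() }.looper)

    manager.openCamera(cameraId, object : CameraDevice.StateCallback() {
        override fun onOpened(device: CameraDevice) {
            device.createCaptureSession(listOf(target), object : CameraCaptureSession.StateCallback() {
                override fun onConfigured(session: CameraCaptureSession) {
                    val request = device.createCaptureRequest(CameraDevice.TEMPLATE_PREVIEW)
                        .apply { addTarget(target) }
                        .build()
                    session.setRepeatingRequest(request, null, handler)
                }
                override fun onConfigureFailed(session: CameraCaptureSession) = device.close()
            }, handler)
        }
        override fun onDisconnected(device: CameraDevice) = device.close()
        override fun onError(device: CameraDevice, error: Int) = device.close()
    }, handler)
}
```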

(30:11):
First of all, since OEMs have to actually expose the capabilities for each individual camera to Camera2, apps then actually have to probe what features are supported by the specific camera device behind each ID. If you've ever used one of those Camera2 API probing apps before, you've probably gotten a high-level summary of what they report: you see strings that say LIMITED or LEVEL_3, and those strings basically

(30:33):
tell you, okay, this specific camera supports this list of features. But then it's up to the actual camera app to decide: okay, because this specific sensor supports this feature, this is what I'll enable inside the app for when the user is using it.
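[Illustration, not from the episode: the kind of probing a Camera2 app does before deciding which features to surface, sketched in Kotlin.]

```kotlin
import android.content.Context
import android.hardware.camera2.CameraCharacteristics
import android.hardware.camera2.CameraManager

fun describeCamera(context: Context, cameraId: String): String {
    val chars = context.getSystemService(CameraManager::class.java)
        .getCameraCharacteristics(cameraId)
    // The hardware level strings those probing apps show (LEGACY, LIMITED, FULL, LEVEL_3).
    val level = when (chars.get(CameraCharacteristics.INFO_SUPPORTED_HARDWARE_LEVEL)) {
        CameraCharacteristics.INFO_SUPPORTED_HARDWARE_LEVEL_LEGACY -> "LEGACY"
        CameraCharacteristics.INFO_SUPPORTED_HARDWARE_LEVEL_LIMITED -> "LIMITED"
        CameraCharacteristics.INFO_SUPPORTED_HARDWARE_LEVEL_FULL -> "FULL"
        CameraCharacteristics.INFO_SUPPORTED_HARDWARE_LEVEL_3 -> "LEVEL_3"
        else -> "UNKNOWN"
    }
    // Each value is a capability the OEM chose to advertise (RAW, MANUAL_SENSOR, ...);
    // the app decides what to enable in its UI based on this list.
    val caps = chars.get(CameraCharacteristics.REQUEST_AVAILABLE_CAPABILITIES)
        ?.joinToString() ?: "none reported"
    return "camera $cameraId: level=$level, capabilities=[$caps]"
}
```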
Whenever you're using an app that uses the Camera2 API on a specific device, you may notice different features than what are available

(30:56):
on other devices used by other users. So Mohit, I'm sure you've at least heard or seen bad reviews or complaints from users who report that a feature or two is missing on their device. They blame you or, you know, the GrapheneOS project's Secure Camera developers, when, in fact, it's because the OEM didn't expose the feature when writing

(31:18):
their interface for Camera2. So how do you deal with this? Like, what exactly can you do about this?

Mohit (31:24):
So while most of the features that our app provides are available across all the Android devices that we support, there are a few features that we only support when the OEM implements and exposes them for other apps to use. Just as discussed earlier, we have recently very often heard complaints regarding our app not supporting zooming beyond

(31:44):
the default range, despite the device physically having support for it, on a lot of non-Pixel devices. Apart from that, the major complaint that we have received is regarding support for vendor-specific extensions or modes that the user thinks are missing, while they actually need to be implemented and exposed, or are left unimplemented,

(32:06):
by the vendor themselves, for an app to show those modes on a given device. Based on where the issue is reported, we inform the user about it and explain to them why the camera app doesn't support the given set of features on the device, and either try to look for some timeline for the same, if that's easily available, or request the user to inquire about it with their respective vendor. If by any chance it is something that the
(32:27):
with their respective vendors, if byany chance it is something that the
CameraX team needs to fix on their end, or is related to Pixel devices not implementing something correctly, we report it to the CameraX team and discuss how the issue needs to be resolved or could be further addressed. They're quite friendly and supportive, so reporting issues to the CameraX team has never been a hassle.

Mishaal (32:50):
We've mentioned CameraX a lot, but we've never actually explained what it is. For those of you who don't know, CameraX is something that Google introduced back at I/O 2019, and the Secure Camera app that Mohit works on actually uses CameraX. It's one of the few apps that I'm aware of that proudly states that it uses the CameraX API.

(33:10):
I wanted to ask you, Mohit, a very basic question about CameraX, just so our readers, our listeners, can understand what exactly it does. What benefits does CameraX offer over Camera2? And why does Secure Camera use it?

Mohit (33:23):
So essentially, CameraX is a better-designed, simpler, higher-level API, which is backwards compatible with the Camera1 API but also takes advantage of Camera2 and the advanced Camera2 features whenever they're available. So essentially,

(33:44):
this ensures that the camera features that you develop are of high quality and work across a wide range of devices, while allowing quick and steady development of your application, thanks to the use-case-based modeling of the classes and the high-level simplicity that the library has to offer. Apart from that, it has a very high-quality and highly performant camera interface stack that's far better than the vast majority of

(34:05):
camera applications that are currently available. As one short example, in the experimental ZSL support that they recently added to the library, they included a ring buffer and some pre-processing, which was mainly to improve the overall performance of that feature, and hence the quality of the ZSL feature that they provide. ZSL essentially stands for zero shutter lag, which can be used

(34:29):
as a low-latency capture mode to take faster shots from your camera application. Apart from that, the overall code quality of the implementation of the camera features tends to improve a lot while using the CameraX library, in terms of readability, which in turn drastically reduces the overall maintenance cost, the reason being that CameraX works around a lot of device-specific

(34:51):
quirks impacting a specific device or a certain range of devices, which in turn makes the code a lot more readable. That way, making any changes, or perhaps trying to add any custom implementation, gets a lot easier. Apart from that, one can always expect good support from the CameraX team for the device-specific issues that users face

(35:15):
with the officially supported CameraX features. This in turn has also encouraged us to investigate such issues and bug reports further and report them to the CameraX team, which could potentially help a lot of organizations for years to come. While using the CameraX library, one can always fall back to using the Camera2 APIs via the

(35:37):
Camera2 interop API, which offers classes that are compatible with the main classes that the CameraX library itself offers. However, one must note that the support is limited, to the extent that not all features of the Camera2 APIs can be accessed via CameraX.

Mishaal (35:56):
Thank you, Mohit, for the rundown of CameraX versus Camera2. For those of you who listened to our previous episode on modern Android app development, we talked a bit about Jetpack support libraries and how they simplify app development. Well, CameraX is actually one of the support libraries under Jetpack, and like the other libraries, you know, it simplifies developing across specific

(36:17):
Android OS versions, because Camera2 is an API updated with the OS itself. So of course there are gonna be OS-specific API methods; there are gonna be different behaviors depending on the OS version and the way the code is written, what each method accepts and uses. So it's there, it's updated with each OS version, and

(36:38):
it's a little complicated to use. What CameraX does is simplify all of that by wrapping around it and basically letting developers not worry about the underlying implementation in the OS and just use CameraX. Under the hood, CameraX just passes those calls to Camera2 and simplifies that interface for app developers.
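[Illustration, not from the episode: the CameraX model Mishaal is describing, sketched in Kotlin. The app declares use cases and binds them to a lifecycle; the library does the Camera2 session plumbing. Assumes the androidx.camera camera-lifecycle and camera-view artifacts are on the classpath.]

```kotlin
import androidx.appcompat.app.AppCompatActivity
import androidx.camera.core.CameraSelector
import androidx.camera.core.ImageCapture
import androidx.camera.core.Preview
import androidx.camera.lifecycle.ProcessCameraProvider
import androidx.camera.view.PreviewView
import androidx.core.content.ContextCompat

fun bindCamera(activity: AppCompatActivity, previewView: PreviewView): ImageCapture {
    val imageCapture = ImageCapture.Builder().build()
    val providerFuture = ProcessCameraProvider.getInstance(activity)
    providerFuture.addListener({
        val provider = providerFuture.get()
        val preview = Preview.Builder().build().apply {
            setSurfaceProvider(previewView.surfaceProvider)
        }
        provider.unbindAll()
        // CameraX translates these use cases into Camera2 streams and capture requests.
        provider.bindToLifecycle(
            activity, CameraSelector.DEFAULT_BACK_CAMERA, preview, imageCapture
        )
    }, ContextCompat.getMainExecutor(activity))
    return imageCapture
}
```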
CameraX, of course, because it's a newer API and it's actually just

(37:02):
wrapping what's available in Camera2, it's not fully interoperable with all the features that are available with Camera2. As Mohit mentioned, that's why CameraX offers the Camera2 interop API, which lets apps use Camera2 APIs when there's not a CameraX equivalent, but of course that has some of its own limitations.
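[Illustration, not from the episode: a small Kotlin sketch of that interop escape hatch, which CameraX currently marks as experimental. It attaches a raw Camera2 capture-request option to a CameraX use case.]

```kotlin
import android.hardware.camera2.CaptureRequest
import androidx.camera.camera2.interop.Camera2Interop
import androidx.camera.camera2.interop.ExperimentalCamera2Interop
import androidx.camera.core.Preview

@ExperimentalCamera2Interop
fun previewWithCamera2Option(): Preview {
    val builder = Preview.Builder()
    // Pass a Camera2 option that CameraX itself doesn't model as a use-case setting.
    Camera2Interop.Extender(builder)
        .setCaptureRequestOption(
            CaptureRequest.CONTROL_AE_MODE,
            CaptureRequest.CONTROL_AE_MODE_OFF
        )
    return builder.build()
}
```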
And then one of the other things that we've been talking about

(37:25):
a bit, or that Mohit has mentioned several times, is vendor extensions. That's actually something that was introduced alongside CameraX, or actually a little bit later in CameraX's life cycle. So vendor extensions, for those of you who don't know: basically, it's a library that OEMs create that exposes specific features to third-party apps. So for example, an OEM can write a CameraX vendor extension for HDR,

(37:50):
face retouch, and night mode, and that would allow apps that are using the CameraX API to actually use those OEM-provided effects in their own apps. So if an OEM has its own HDR implementation, a third-party app could see that this device has a vendor extension for HDR, and then they could use that in their own camera app.
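[Illustration, not from the episode: a Kotlin sketch of how an app using CameraX asks for those OEM extensions, assuming the androidx.camera:camera-extensions artifact. On devices whose vendor ships no extension library, isExtensionAvailable simply returns false.]

```kotlin
import android.content.Context
import androidx.camera.core.CameraSelector
import androidx.camera.extensions.ExtensionMode
import androidx.camera.extensions.ExtensionsManager
import androidx.camera.lifecycle.ProcessCameraProvider
import androidx.core.content.ContextCompat

// Delivers a CameraSelector that routes through the OEM's HDR extension when available,
// or the plain back camera otherwise.
fun selectHdrIfAvailable(context: Context, onReady: (CameraSelector) -> Unit) {
    val providerFuture = ProcessCameraProvider.getInstance(context)
    providerFuture.addListener({
        val provider = providerFuture.get()
        val extensionsFuture = ExtensionsManager.getInstanceAsync(context, provider)
        extensionsFuture.addListener({
            val extensions = extensionsFuture.get()
            val base = CameraSelector.DEFAULT_BACK_CAMERA
            val selector = if (extensions.isExtensionAvailable(base, ExtensionMode.HDR)) {
                extensions.getExtensionEnabledCameraSelector(base, ExtensionMode.HDR)
            } else {
                base
            }
            onReady(selector)
        }, ContextCompat.getMainExecutor(context))
    }, ContextCompat.getMainExecutor(context))
}
```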
I wanted to ask you, Mohit: the Secure Camera app uses vendor extensions.

(38:14):
Can you tell us about the implementation in the Secure Camera app? Like, what vendor extensions do you use, if they're available, and what are some of the challenges with vendor extensions as they are right now?

Mohit (38:25):
Secure Camera currently supports all five of the standard vendor extensions that the CameraX library describes in its documentation. However, the availability of these vendor-specific extensions, such as bokeh, HDR, face retouch, and night mode, depends upon whether the vendor or OEM of the device decides to

(38:45):
expose them to other apps by the standards defined by the official documentation of the CameraX library. Currently, from what we have learned from our community of active users, not a lot of devices support CameraX's vendor-specific extensions. All Pixels, the devices that we mainly target, at the time of this recording don't support any CameraX extension.

(39:09):
Although, in a recent Pixel feature drop, support for Night Sight was added to the Pixel 6 as a Camera2 extension, which is unfortunately different from CameraX extensions. Very few flagship Samsung devices support all the CameraX vendor-specific extensions.
However, in one of the sessions recently conducted at Google I/O 2022, Google

(39:33):
said that they'll be launching their new extensions through CameraX, versus Camera2, where they'll be providing their own fully software-based extensions for entry-level devices that don't have them yet, starting with the bokeh portrait extension mode. We're quite optimistic about it, as this could help us give a

(39:54):
more consistent experience in our application across all the supported devices, including our Play Store users. And honestly, if this gets implemented as expected, it might shortly end the long wait, with camera extensions finally arriving for Pixel devices.

Mishaal (40:11):
Honestly, when I first heard about vendor extensions in CameraX, I believe back when I was at XDA, our developer author Zachary wrote an article about vendor extensions, and there was a lot of hype around them: maybe this will finally solve the feature parity issue between stock camera apps and third-party camera apps. But the more I've learned, it seems like there have been major issues

(40:33):
with the implementation. As you brought up, even Google's own Pixels don't support CameraX vendor extensions, which is hugely disappointing, because they're the ones pushing developers, like, use CameraX, use CameraX. Well, they don't even support it properly on their own devices. So that's a huge disappointment. And then with the Pixel feature drop, as you brought up, that enabled Night Sight in Snapchat,

(40:55):
the way they did that is they released a Camera2 vendor extension rather than a vendor extension through CameraX. And you'd think they'd be the same thing, right? If there's a vendor extension, there's a vendor extension. But it's not: only apps that implement the Camera2 vendor extension API, which is something that was only introduced in Android 12, can use it, whereas CameraX's vendor extension support is available across more Android versions and is something that

(41:19):
more apps are expected to support. But then Google goes ahead and does a Camera2 extension instead of CameraX, which they've been pushing on app developers. So it's kind of a really weird situation we're in, where one side of Google is telling developers, use CameraX, but then the other side is like, yeah, we will continue to support Camera2, and here is our flagship Pixel phone with an extension that's

(41:43):
only available through this API we're telling you not to use.
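[Illustration, not from the episode: the separate Camera2 extensions path added in Android 12 (API 31) that Mishaal is referring to, sketched in Kotlin. An app has to query it independently of CameraX's extension support.]

```kotlin
import android.content.Context
import android.hardware.camera2.CameraExtensionCharacteristics
import android.hardware.camera2.CameraManager
import android.os.Build

// Lists the Camera2 (not CameraX) extensions a device advertises for one camera.
fun camera2Extensions(context: Context, cameraId: String): List<String> {
    if (Build.VERSION.SDK_INT < Build.VERSION_CODES.S) return emptyList()
    val manager = context.getSystemService(CameraManager::class.java)
    return manager.getCameraExtensionCharacteristics(cameraId).supportedExtensions.map { ext ->
        when (ext) {
            CameraExtensionCharacteristics.EXTENSION_NIGHT -> "NIGHT"
            CameraExtensionCharacteristics.EXTENSION_BOKEH -> "BOKEH"
            CameraExtensionCharacteristics.EXTENSION_HDR -> "HDR"
            CameraExtensionCharacteristics.EXTENSION_FACE_RETOUCH -> "FACE_RETOUCH"
            CameraExtensionCharacteristics.EXTENSION_AUTOMATIC -> "AUTO"
            else -> "UNKNOWN($ext)"
        }
    }
}
```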
But fortunately, at least at Google I/O there was a pretty significant announcement, as you brought up: Google itself will start providing some software vendor extensions for low-end devices, starting with a bokeh portrait mode. So all low-end devices will have a vendor extension for portrait mode

(42:04):
that apps using CameraX can hook into. It remains to be seen if they're able to extend that to the other extensions that are possible, including night mode, but I'm not really sure if Google plans to expose its brilliant Night Sight feature to all the low-end devices out there. So we'll see what happens with that.

David (42:23):
And I think it's a question of the economics of a feature. And so bokeh and portrait mode, for example, are basically democratized; even very low-end MediaTek chipsets, I'm sure, at this point have support in the camera stack for some kind of portrait mode. So it's no longer one of those features that you would want to gate

(42:44):
to a high-end device, and not giving people access to that, especially in a consistent way, is probably worse for the platform than just coming up with a standard implementation that works for everybody. And Google can keep adding value on their Pixel phones by introducing things like iOS's portrait-lighting-style mode, which is much more advanced and uses a lot more algorithms to get the output.

(43:07):
So, I mean, "a lot more algorithms", I sound real smart when I say that, but you understand what I mean. These initial extensions are probably a good sign of what Google thinks is most important to the largest number of camera users, portrait mode obviously being a big one. And I think that over time, you will almost certainly see more of these features

(43:30):
that used to be gated to the high-end devices just start to become standard smartphone features, like having dual cameras. All of this stuff sort of democratizes over time, right?

Mishaal (43:42):
Yeah.
And Mohit said that he was optimistic about the future of CameraX. And honestly, I can understand why; they have been slow to implement some basic features. Like, version 1.1 of the library is what brought video capture support, and I think that just came out in stable earlier this year. And then one of the features that they've been working on in AOSP is support for ZSL, or zero shutter lag, which Mohit mentioned as well.

(44:05):
So these are features that you've already had access to with Camera2, but they're not in CameraX yet. And then Android 13, for example, introduces HDR video capture support in Camera2, but I don't think there's an equivalent API through CameraX. But if you're looking to implement a very basic app that uses some very basic camera capture functionality, it's easier than ever,

(44:28):
thanks to CameraX and other libraries. So for example, if you wanted to do QR code scanning or barcode scanning, Google Play services has a drop-in solution for that, but there are also, you know, open source solutions that you could implement with CameraX. The ability for app developers to add camera functionality to their applications is easier than ever,

(44:48):
and it's not solely the realm of professional camera app developers anymore.
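[Illustration, not from the episode: one way that drop-in scanning looks in practice, wiring CameraX's ImageAnalysis use case to ML Kit's barcode scanner. Assumes the com.google.mlkit:barcode-scanning and androidx.camera artifacts; the resulting use case is bound with bindToLifecycle alongside a Preview.]

```kotlin
import androidx.camera.core.ExperimentalGetImage
import androidx.camera.core.ImageAnalysis
import androidx.camera.core.ImageProxy
import com.google.mlkit.vision.barcode.BarcodeScanning
import com.google.mlkit.vision.common.InputImage
import java.util.concurrent.Executors

@ExperimentalGetImage
fun barcodeAnalysis(onBarcode: (String) -> Unit): ImageAnalysis {
    val scanner = BarcodeScanning.getClient()
    val analysis = ImageAnalysis.Builder()
        .setBackpressureStrategy(ImageAnalysis.STRATEGY_KEEP_ONLY_LATEST)
        .build()
    analysis.setAnalyzer(Executors.newSingleThreadExecutor()) { proxy: ImageProxy ->
        val mediaImage = proxy.image
        if (mediaImage == null) {
            proxy.close()
            return@setAnalyzer
        }
        val input = InputImage.fromMediaImage(mediaImage, proxy.imageInfo.rotationDegrees)
        scanner.process(input)
            .addOnSuccessListener { barcodes -> barcodes.firstOrNull()?.rawValue?.let(onBarcode) }
            .addOnCompleteListener { proxy.close() } // always release the frame
    }
    return analysis
}
```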

David (44:52):
And with that, this is where we do our Esper plug, because if you've been listening to this entire episode, you've understandably come to appreciate just how complex the situation is with cameras on Android: how the OS interacts with the camera, and how applications then interact with the operating system, which interacts with the camera. Do you want to build for CameraX?

(45:14):
Do you want to just keep building for Camera2? Do you even need direct access to the camera to accomplish what you're doing? It depends. And if you find yourself in a situation where you're trying to build a device that needs to do something, like Mishaal said, capture a QR code, or scan a barcode, or recognize a face, those are obviously very different scenarios.

(45:34):
The camera needs to be able to do different things to accomplish those tasks. So if you're wondering, okay, is the camera on this device going to be suitable for my work purpose, whatever it is, I'm capturing faces or barcodes or taking pictures of cats, I don't really know what you're doing with the camera, but if there's something specific and you're wondering, okay, is there an expert out there who can tell me, what can I

(45:55):
actually do with this camera? And is it extensible? Is it scalable? Can I do this with a bunch of different cameras running Android? Come talk to us at Esper. This is the kind of thing we deal with regularly: differences in implementations across vendors and hardware and software. That is our bread and butter, and understanding how to make that experience consistent for you and your devices is what Esper is all about.

(46:17):
So if you want to learn more about Esper, check us out at esper.io.

Mishaal (46:22):
Thanks, David. And with that, I wanted to give Mohit a brief chance to tell us where people can follow him as well. So Mohit, tell us about where people can follow your work.

Mohit (46:32):
So yeah, you could follow me on my official GitHub page: it's github.com/MHShetty, that's M-H-S-H-E-T-T-Y. Or you could follow me on LinkedIn, as a student at the Shin engineering college; you could follow me there as well. That's where I'm mainly active.

Mishaal (46:50):
And if people want to try out your work, they can go to the Google Play Store and download the Secure Camera app, or they can try installing GrapheneOS on a compatible device if you have one. Go listen to our previous episode with the GrapheneOS developer, Gabe, if you wanna learn more about the project. And thank you for listening.