
April 29, 2025 23 mins

Research suffered from shortcuts, assumptions, and poorly conducted user interviews long before AI entered the picture. While there are concerns about AI exacerbating these issues, today we’re exploring how AI can actually improve research practices by standardizing and democratizing good research at scale.

Cori Widen, User Research Lead at Photoroom, joins us to share how AI is being leveraged to transform research practices at her company. She discusses the cultural mindset that encourages innovation and provides practical insights on how teams can use AI to elevate the quality and impact of their research.



Episode Transcript

Available transcripts are automatically generated. Complete accuracy is not guaranteed.
Speaker 1 (00:01):
Since time immemorial, people have been performing research badly.
We didn't need AI to take poorly calculated shortcuts or make broad assumptions based on scant data or develop awkward personas shaped by assumptions and painfully unhelpful user interviews.
So now that AI is here, it comes as no surprise that the research community is concerned that the technology might wreak

(00:23):
even more havoc on research quality, but on today's show we're exploring ways that AI can actually help to standardize and democratize good research at scale.
My guest today is Cori Widen, User Research Lead at Photoroom.
If you're not familiar with Photoroom already, the company is getting a lot of attention for the innovative ways they're using AI, and Cori gave me a fascinating rundown of how all

(00:43):
of that starts at the cultural level.
We talk about how the mindset of the company fosters an excitement for exploring the possibilities, as well as some practical ways that teams can use AI to appreciate, perform, and use great research.
Let's jump in.
Oh, by the way, we hold conversations like this every week, so if this sounds interesting to you, why not subscribe?

(01:03):
Okay, now let's jump in.
Welcome back to the Product Manager Podcast.
I'm here today with Cori Widen.
She is the User Research Lead at PhotoRoom.
Cori, thank you so much for making some time in your schedule to meet with me.
Thank you so much.
I'm so happy to be here.
Me too, and Cori has actually been working with me for a long time, and this is one of the first times that we've actually gotten to speak and not just

(01:24):
communicate via email, so this is very exciting.
I feel like I'm talking to an old friend or an old pen pal.
Yes, very exciting, I agree.
We'll start off the way that we always do.
Can you tell me a little bit about your background and how you got to where you are now?

Speaker 2 (01:35):
Yeah, sure, so I've been in the tech industry for about 13 years.
For most of that time I was actually working in product marketing.
But as with most people in product marketing, research methods were kind of a part of my job, right, like interviewing users and things like that, and at some point I just made the call and said, actually, that's the part I like the most, the user research, so I transitioned to being a full-time researcher, first at

(01:59):
Lightricks and now at PhotoRoom.
I'm leading user research at PhotoRoom.

Speaker 1 (02:03):
Awesome.
So today we're going to be focusing on researcher-approved ways to use AI for research purposes, which is kind of a hot-button issue right now.
So we'll kick it off on a hot-button topic: using AI for qualitative analysis.
What's your current methodology for combining AI and manual work in a way that you feel good about, that you can put your own stamp of approval on, and how has that evolved over time?

Speaker 2 (02:24):
Definitely controversial, and I would say just a bit about how it evolved before I go into, like, where I landed now: you know, I was also reluctant, I think, like the whole research community.
One, because I didn't trust the AI to do a good job, and at the beginning it really didn't.
That was very legitimate.
And the other thing that I don't hear people talk about

(02:49):
enough is just that I loved my job.
I actually wasn't waiting for AI to come and say let me make it better, let me solve this pain point.
I liked things like qualitative analysis, the process of doing it myself.
So I wasn't super excited about it at the beginning, but it did become clear that it was kind of like figure out how to utilize it or get left behind.
So when I started using AI for qualitative analysis, my first process was to do my exact manual process that I had always

(03:10):
been doing alongside trying to utilize AI and kind of comparing the experiences and seeing what the AI did well, what it didn't do well, and like how it could kind of hit this sweet spot of reliability.
It's been a lot of trial and error and I think this process will always be evolving, because AI is always evolving, but, as

(03:30):
of right now, I'll describe to you where I am at this moment.
Okay, so I think it helps to have, like, a concrete example.
So let's say I am analyzing a strategic research project and most of the data is user interview data.
So what I will do is I will have all of the transcripts within a project.
They're stored, we use Dovetail, and then we use a tool called

(03:51):
Dust to query the AI based on all the transcripts that are in Dovetail.
Okay, so basically, I'm using AI to ask questions about a set of transcripts that I have for a project.
So what I do is, basically, I have my research questions in front of me, which I made manually, and I start prompting the AI and asking for quotes related to our research

(04:12):
questions.
So, like, let's say, for example, I want to know about pain points in a specific flow or something like that.
So I'll ask the AI to find me quotes from the user transcripts that relate to that research question.
And what that is doing is it's basically replacing the process of manually tagging interviews.
I think many, many researchers would use various tools to

(04:34):
manually tag interviews based on topics according to your research questions.
And now I use AI for that.
And what I then do is I take the quotes on each topic and I put them into a Miro board and do affinity diagramming, which has always been my process for analyzing interviews, and I still do that manually, and based on the affinity diagram, I come

(04:56):
up with my insights for the project.
And the other place where I use AI here is I actually run my insights through and I say, like, you know, these are the things that I came up with, and I ask the AI to disagree with me or find things in the transcripts that are contrary to the insights that I brought forward, just to see if I'm missing anything, to check for biases and things like that.

(05:17):
And that is kind of my Frankenstein process of manual and AI for analysis.
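
To make that workflow concrete, here is a minimal sketch of the prompting pattern: pull quotes tied to a research question, then ask the model to argue against a draft insight. It assumes transcripts have already been exported as plain text files and uses a generic chat-completions client; the file names, model name, and helper functions are illustrative, not Photoroom's actual Dovetail-plus-Dust setup.

```python
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

def ask(prompt: str, transcripts: list[str]) -> str:
    """Send one focused question plus the transcript text to the model."""
    corpus = "\n\n---\n\n".join(transcripts)
    response = client.chat.completions.create(
        model="gpt-4o",
        messages=[
            {"role": "system",
             "content": "You analyze user interview transcripts. "
                        "Quote verbatim and name the transcript each quote comes from."},
            {"role": "user", "content": f"{prompt}\n\nTranscripts:\n{corpus}"},
        ],
    )
    return response.choices[0].message.content

# Hypothetical exported transcripts, one file per interview.
transcripts = [open(p).read() for p in ["interview_01.txt", "interview_02.txt"]]

# Step 1: replace manual tagging by asking for quotes per research question.
quotes = ask("Give me an exhaustive list of verbatim quotes about pain points "
             "in the export flow.", transcripts)

# Step 2: after affinity diagramming the quotes (e.g. in Miro) and drafting
# insights manually, ask the model to disagree and surface contrary evidence.
draft_insight = "Users abandon the export flow because pricing is unclear."
counter = ask(f"Here is a draft insight: {draft_insight!r} "
              "Find quotes in the transcripts that contradict it.", transcripts)

print(quotes, counter, sep="\n\n")
```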

Speaker 1 (05:24):
So you kind of mentioned that your overall
opinion of AI has shifted.
I think a lot of us have kind of had that sort of journey of like a little apprehension, and then that's kind of evolved over time.
So how's that evolved for you?

Speaker 2 (05:36):
Yeah, for sure.
So at the beginning I felt like there was a lot of pressure within companies to figure out how to utilize AI, and it was a bit directionless.
Right? Like, this is going to make you more efficient, it's going to help you do a better job, et cetera, et cetera, and I wasn't sure exactly how to apply it, and most of my attempts initially weren't that great.
It wasn't that great at replacing me in any part of the

(05:58):
research process.
However, that pressure didn't go away.
So gradually, kind of as ChatGPT got better at handling larger chunks of data, I kind of, like, relented and I started making, like, custom scripts to do very basic things.
Like, you know, maybe summarizing an interview or something like that.
That opened my mind a little bit, but I think actually the big

(06:19):
turning point for me was relatively recent, like when I came to PhotoRoom, because PhotoRoom is super unique in its approach to AI, both user-facing AI and also internally as a company.
Is there pressure to incorporate it?
Sure, but I would describe that pressure as, like, excited pressure.
Everyone is very excited about all the different kind of use

(06:41):
cases that they are finding with their specific field and profession using AI, so there's an atmosphere of, you know, people always sharing what they've accomplished, people being really interested in how you're utilizing AI, etc.
And that environment kind of turned it from this, like, oh no, I'm a researcher and I have to figure out how to use AI, into something that's actually, I would say, like a fun and interesting part

(07:02):
of my job.

Speaker 1 (07:02):
Cool, okay, I want to dig a little bit more into kind of the culture at PhotoRoom in a little while, because this is really interesting, and I think it's a big factor that can influence the adoption of AI within companies overall, which is, like, a huge issue on its own.
But let's talk a little bit about AI assistants, because you kind of mentioned that, sort of, the ways that you've used AI

(07:22):
have changed.
Being able to use it more effectively has been a real game changer, and, like I think everyone knows, AI assistants are trending now, and I know that you've built a few that you found to be really helpful in your process.
So what specific types of assistants have you built, and what problems have they been solving for you lately?

Speaker 2 (07:38):
Okay, so two main ones come to mind.
So one is called Mining User Interviews.
It's not a very creative title.
However, that is exactly what it does, and essentially every user interview that is done at PhotoRoom is stored in Dovetail, and then we have the transcripts via the API and we can ask questions.
So that is utilized.
It's kind of solved different problems for myself and Becky,

(07:59):
who's my co-researcher, and other stakeholders.
So for us as researchers, as I mentioned, it's a big part of our analysis process.
It's how we find quotes instead of tagging things, et cetera, when we're analyzing a project.
And for stakeholders, it's a great way, without going through us and us digging up reports and all of that stuff, to just mine all the interviews done to date to ask basically anything

(08:22):
about users or users of competitors.
Right, like, you know, please give me a list of all the challenges people have had with, I don't know, AI backgrounds or some feature in PhotoRoom.
So it saves us time and it also kind of brings the stakeholders closer to the user, by looking for that information and interacting with it themselves.
So that's one, and another one is more recent and it's an

(08:43):
interview guide generator.
So I know we're going to talk about it later, but a lot of people at PhotoRoom do user interviews, and there's just, like, a large range in the available time to prepare a great interview guide before interviews and also in knowledge about best practices, like how do you ask users things to generate the insights you want.
So the interview guide generator, essentially the input

(09:05):
is: who are the users that you are interviewing, and what do you want to learn from them?
And then it generates an interview guide, mostly according to best practices.
So the effect of that has been that, like, okay, regardless of how much knowledge you have, you can generate an interview guide that doesn't ask leading questions and doesn't ask users to predict their future behavior and things like that.
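
For illustration, here is a rough sketch of how an interview guide generator like that might encode those best practices in a reusable system prompt. The actual assistant at PhotoRoom is built on their internal tooling; the rules, function names, and model below are assumptions for the sketch, not their implementation.

```python
from openai import OpenAI

client = OpenAI()

# Best practices baked into a reusable system prompt, so the output quality
# does not depend on the requester's interviewing experience.
GUIDE_RULES = """You write user interview guides. Follow these rules:
- Ask open-ended questions about past, concrete behavior.
- Never ask leading questions or suggest an answer.
- Never ask users to predict their future behavior.
- Group questions by topic and include a short warm-up section."""

def generate_guide(audience: str, learning_goals: str) -> str:
    """The two inputs described above: who you are interviewing and
    what you want to learn from them."""
    response = client.chat.completions.create(
        model="gpt-4o",
        messages=[
            {"role": "system", "content": GUIDE_RULES},
            {"role": "user",
             "content": f"Audience: {audience}\n"
                        f"What we want to learn: {learning_goals}\n"
                        "Write an interview guide."},
        ],
    )
    return response.choices[0].message.content

print(generate_guide(
    audience="Small business owners who edit product photos weekly",
    learning_goals="Why they stop using the product after the first month",
))
```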

Speaker 1 (09:26):
That's really powerful, because I'm seeing kind of a trend of not just AI kind of expanding productivity among individuals, but also kind of expanding the skill set of individuals by being able to kind of transfer your own knowledge and that kind of thing, which is such a cool way of using the technology.
So it's, like, really interesting to hear kind of, like, a more nuanced approach directly.
Okay, let's talk about prompt engineering.

(09:47):
This is another huge thing.
Everyone's trying to do better at it, and I know that, you know, everyone talks about, you know, just give it lots of context, but it's a little bit more nuanced than that.
So what are some specific techniques that you've discovered that have really improved the quality of the outputs that you're getting when you're doing analysis on user research?

Speaker 2 (10:03):
First of all, I'm just like everyone else.
I'm always trying to figure this out and sometimes I'm, like, pulling out my hair, like, why doesn't it understand me?
But there are a few things that have come up that I find really helpful, specifically when doing analysis.
So, first of all, any question that I ask the AI, I ask it in at least two or three different ways, because it's not a human

(10:23):
and it's impossible to know how it's interpreting my question, and I often find that I get different user quotes or different transcripts coming up when I ask the question slightly differently.
So, particularly when I'm analyzing a project and I want to be careful that I'm finding, like, every relevant point on a particular topic in order to do the analysis, I use a few

(10:44):
different prompts, asked differently each time, and also, I call it nagging the AI, but I always nag it and ask if it can find more examples, because even though I always ask for an exhaustive list of quotes or an exhaustive list of examples, it's never actually exhaustive.
There's always more.
So I always ask for more.
And the other thing that I have learned is that, you know, when I

(11:06):
speak, like if I'm speaking to a human, you know, I can ask, like, four or five questions at once and get really excited about a topic, and that does not work well with AI.
So I'm trying really hard to write prompts that only contain, kind of, like, one question.
And it's interesting, because you mentioned that a lot of people talk about, like, give it as much context as possible.
But I feel like with context, just anecdotally, based on my

(11:28):
own prompt experiences, I think there's such a thing as too much context, and it starts to develop a hierarchy and prioritize the things that you're saying and ignore others.
So I'm trying to find that sweet spot, which, to me, is tending toward even less context.
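
As a rough sketch of those habits, the snippet below asks one focused question per prompt, rephrases the same research question a few ways, and then "nags" for more examples by showing the model what it has already returned. The phrasings, file name, and model are illustrative assumptions, not a prescribed setup.

```python
from openai import OpenAI

client = OpenAI()

def ask_once(question: str, corpus: str) -> str:
    """One focused question per prompt, rather than four or five at once."""
    response = client.chat.completions.create(
        model="gpt-4o",
        messages=[{"role": "user", "content": f"{question}\n\nTranscripts:\n{corpus}"}],
    )
    return response.choices[0].message.content

corpus = open("all_transcripts.txt").read()  # hypothetical export of all interviews

# The same research question asked in a few different phrasings,
# since each wording tends to surface different quotes.
phrasings = [
    "List every verbatim quote where a user describes a pain point in the export flow.",
    "Which parts of exporting do users complain about? Quote them directly.",
    "Find moments where users got stuck or frustrated while exporting. Quote each one.",
]
answers = [ask_once(p, corpus) for p in phrasings]

# "Nag" for more: even an "exhaustive" list usually isn't, so show the model
# what has been found already and ask again.
found_so_far = "\n".join(answers)
answers.append(ask_once(
    "Here are quotes already found about export-flow pain points:\n"
    f"{found_so_far}\n\nFind additional examples that are not in this list.", corpus))

for a in answers:
    print(a, "\n---")
```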

Speaker 1 (11:42):
Oh, interesting.
Yeah, it also depends on where you are using context, okay.
Well, this is good to know, because this is the first time I've heard someone say, oh, don't use all the context.
Speaking of missing nuances, so one of the things that you mentioned before is this kind of pitfall of AI sometimes missing

(12:04):
nuances in data or presenting limited data as a trend, like, there's kind of some sort of things that I think most researchers might catch on to.
But, you know, like you said before, as you're transferring skill sets and knowledge to other people, it doesn't always come through.
So how do you validate AI's analysis and ensure that you're not missing important outliers, and, kind of more importantly,

(12:25):
like, if you're sharing some of these skills with other stakeholders, how do you ensure that that transfer of knowledge also gets carried forward?

Speaker 2 (12:33):
Yeah, for sure.
I mean, these are really important questions, I think.
So, when I'm analyzing a project, I'm rarely asking the AI for insights.
I'm asking it to find me relevant things, and then I'm kind of using my human self to make the actual insights.
But the Mining User Interviews assistant that we talked about before is often used for things that are outside of strategic

(12:55):
projects.
So we do ask it questions about users, right, like, what are the things users struggle with most about X, or whatever it is?
And I think that, first of all, this is really challenging, and I have kind of two rules that I always follow when I'm doing this.
First of all is that when I ask the AI for an insight, I ask it

(13:15):
to cite every single transcript that it is using in order to create that insight.
So sometimes it will cite, you know, two transcripts out of, like, hundreds, and I will see, you know, I can go into the transcripts and look and see that, like, okay, this is actually not a thing, and I can use my human sensibilities to decide, because it's told me from where it is drawing the insight.

(13:37):
And the second thing is that I always ask for outliers.
So if it says, like, if it gives me an insight, I say, okay, and now please give me as many examples as possible of users who actually are contrary to this particular insight, and that helps me kind of, like, look at all the data and then draw my own conclusion in a more nuanced way.

(13:57):
I will say that, in terms of stakeholders using the AI agents, we did create a guide for using the AI agent, just in Notion, that everyone can access.
But, you know, those are resources that I think some people use and some people don't, and so this is something I'm still figuring out: like, how do we, when we have something that is supposed to sort of, like, unlock the availability of user data to everyone at the company, how do we make sure that everyone is aligned?

(14:20):
And I don't have an exact answer yet.
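
For illustration, the two rules could look something like the sketch below: require the model to cite the transcripts behind every claim, then ask for users who contradict the candidate insight. The prompt wording, file name, and helper names are illustrative, not the actual Dust agent.

```python
from openai import OpenAI

client = OpenAI()

def query(prompt: str, corpus: str) -> str:
    response = client.chat.completions.create(
        model="gpt-4o",
        messages=[{"role": "user", "content": f"{prompt}\n\nTranscripts:\n{corpus}"}],
    )
    return response.choices[0].message.content

corpus = open("all_transcripts.txt").read()  # hypothetical export of all interviews

# Rule 1: every claim must cite the transcripts it is drawn from, so a human can
# check whether two interviews out of hundreds are being presented as a trend.
insight = query(
    "What do users struggle with most when removing backgrounds? "
    "For every claim, cite the ID of each transcript you used, and state nothing "
    "you cannot tie to a specific transcript.", corpus)

# Rule 2: ask for outliers, i.e. users whose experience contradicts the insight.
outliers = query(
    f"Here is a candidate insight:\n{insight}\n\n"
    "Give me as many examples as possible of users who contradict it, "
    "quoting and citing each transcript.", corpus)

print(insight, outliers, sep="\n\n")
```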

Speaker 1 (14:22):
Yeah, I think this is kind of an ongoing thing, because this, sort of, I think, points to the value of still having a human overseer: no matter what we're using AI for, you still need to be an expert in that domain in order to kind of check the work and make sure that, you know, it's kind of treated more as an employee that you're leading, rather than just, you know, someone with an equal skill set

(14:42):
to you.
Yeah, I think it's kind of an issue that a lot of folks are bumping up against: like, how do we kind of ensure that our internally owned knowledge is kind of aligned with whoever is trying to use the AI for this purpose?
Now that we've kind of talked so much about, sort of, the values and the way that the company functions, I think we can dive right into some of the values of PhotoRoom that have kind of enabled and empowered you to act like this with AI.

(15:05):
So let's chat about PhotoRoom's values as they pertain to the use of AI, because you kind of pointed a lot around this idea of the culture at PhotoRoom being a big driver of your adoption and engagement using the technology.
So what does it mean to democratize research, and how does it change the way that stakeholders interact with and

(15:26):
value your UX research findings?

Speaker 2 (15:29):
Yeah, for sure.
So, I mean, the thing that appealed to me most about PhotoRoom at the beginning was how user-centric they really were.
Like, the leadership was spending time interviewing users, and that was pretty much, like, I realized in the interview process that it was an expectation that people at PhotoRoom are talking to users and looking at qualitative data in addition to quantitative data

(15:51):
, which is not every company, let's put it that way, and, you know, utilizing kind of their interactions with users or their exposure to qualitative data in their decision making.
That was something that was established way before I got to PhotoRoom, and, you know, I was a little bit nervous about it, because I have to admit that my approach to democratization had always been kind of, like, on the other end of the spectrum, which

(16:12):
was, like, research should be done by researchers, and it's great if PMs and designers, et cetera, speak with users, but I didn't see, like, full-on democratization as the best path forward, and I think I really got an education at PhotoRoom.
I decided I was, like, open to trying something new, and what I have seen and what I've learned is that the bottom line is that

(16:33):
in a company where people interact with users and are expected to take qualitative data into account, they just value user research more and they're more likely to utilize the insights, right.
And this is, like, one of the pain points that I'm sure you hear all the time from user researchers, is that, you know, they're constantly trying to advocate for the user and for

(16:53):
their findings based on users, whereas at PhotoRoom, kind of, this culture, what it facilitates, is that everyone's very hungry for those insights.
That's one thing, and I think that the big fear of researchers in, like, an environment where democratization is so huge is that, you know, it won't be done right.
Okay, and I think that was one of my fears as well, because

(17:16):
there are actual research skills.
And what I have also found is that in an environment where people are consistently interacting with users, they're actually hungry for best practices, right, like, everyone wants to do a good interview when they interview users on a regular basis, and everyone wants their usability testing to be accurate.
So one of the cool things about our job in user research is

(17:39):
that we get to help people facilitate those skills, like, when there are gaps.

Speaker 1 (17:44):
And I think that it's one of those kinds of things, especially when it comes to attitude around best practices for research.
Maybe it's just me, but I feel like once you kind of get a sense of the amount of skill and nuance that's involved in performing effective user research and effective user interviews, it's kind of like, oh, you didn't know how much you didn't know.
Yes, I totally agree.

(18:05):
Yeah, we did.
We did an awesome episode with Steve Portigal a couple of years ago on conducting user interviews, and I had not, I've never done them prior to interviewing him, other than just interviewing on the podcast, and he was just, like, in 20 minutes it was so illuminating how much even just your own behavior can impact the outcome of an interview.

(18:25):
So I think it's really, really cool that there's this, like, hunger to transfer knowledge and getting people better at talking to users.
It's very, very interesting to me.

Speaker 2 (18:35):
It's funny you say that, because Steve wrote a book called Interviewing Users, like, he literally wrote a book about it, and it's a book that I have given to stakeholders, and it's happened more than once that people are like, there's a whole book about it, you know, and it's like, yes, absolutely, there's that much to know about it.

Speaker 1 (18:51):
Yeah, well, and it's such a cool thing, because, like you said, I think every company is always looking to incorporate their users' real feedback and what they really want into products, but you need to have enough feedback and you also have to have high-quality feedback in order to use that effectively.
So, yeah, I'm really excited to see some of the ways that AI is

(19:12):
kind of helping to gain buy-in for a researcher and kind of help people wave that flag.
That's really cool.
I'm very pro-research, if you can't tell.
So, looking ahead, though, how else do you think that UX research is going to kind of continue to benefit from AI?
Do you see any other kinds of capabilities that are kind of

(19:34):
just on the horizon that we're not quite at yet?

Speaker 2 (19:36):
Yeah, so I don't have enough technological knowledge to tell you, like, how good the technology will be and when, but when I think about how the researcher's role will evolve as AI evolves and as we adopt it more, one of the things that I've realized, even today, is that I have a lot more time to collaborate with stakeholders because of, you know, how much

(20:00):
more efficient I am when I use AI in the process.
I guess if I had to make one kind of loose prediction about this, it would be that the kind of research role morphs from just being about executing research and sharing insights to, you know, yes, executing research where needed, but spending a lot more time with stakeholders doing things like

(20:22):
brainstorming and spending time in the solution space, because we'll have the time to do it and because, you know, we will kind of have the credibility and information to bring to the table faster.

Speaker 1 (20:33):
And as far as right now, for researchers that are just incorporating AI into their workflow: obviously you've shared a lot of really practical tips, and I love how accessible and, like, pick-up-and-use-them-right-now a lot of the recommendations you've made are.
But do you have any other recommendations as far as, like, a good starting point for those of us who are working without a best practices manual?

Speaker 2 (20:51):
I honestly think that the best thing to do is, first of all, change your mindset, just accept that this is happening, and I think that that's really hard to do.
But once you do that, I think a lot of people say that it's effective to kind of, like, start using AI for kind of small, admin-y tasks, and I kind of think the opposite.

(21:12):
I think a good place to start is to jump into using AI for qualitative analysis.
That's the meatiest part of the job and the most time-consuming part of the job, for the most part, so figure out where it can help you.
My recommendation is definitely based on my own process, which is to try and use AI to replace manual tagging, to pull the

(21:35):
places in your user data, whether it's usability sessions, interviews, whatever, to categorize information and do your analysis from there.
The reason is because I think that in order for you to kind of, like, make the crossover to someone who is excited about using AI in research, you have to do something that's really going to impact your workflow, and the amount of, kind of, like,

(21:58):
time and energy you save doing that is pretty monumental.

Speaker 1 (21:59):
To end us off, what has been your most surprising discovery as you've kind of used AI for your own purposes, like, what has been, kind of, like, the breakthrough moment for you?

Speaker 2 (22:08):
Two things here.
So the breakthrough moment for me, which might have been more obvious to other people, was just how I, as a human, could still have an impactful part of this process.
That it wasn't about using AI.
It didn't mean outsourcing everything to AI.
It means, like, understanding what I'm best at and what AI is best at and putting it together for the best type of qualitative

(22:28):
analysis.
That was one thing, and the other thing that was, like, a real moment for me was that I got feedback from someone at PhotoRoom who told me that, like, when they were hiring the first researcher, which is me, that the biggest fear they had was that it would be, like, you know, research is known to be slow, okay.
So they were like, oh, this person's going to come and, like, they're going to give us the insights so slowly we won't be

(22:49):
able to utilize them.
And this person told me, like, you know, it turns out you're pretty fast.
I realized that the reason I'm fast is because of the fact that I've embraced AI at these kind of, like, crucial parts of the research process.
I wouldn't be fast without it, and understanding that that makes me more valuable on the team was definitely an aha moment for me.

Speaker 1 (23:10):
That's really cool, and it must feel good to kind of feel like, yeah, you've got, like, a bit of an ace in the hole now.
Well, thank you so much for joining us, Cori.
I always love conversations about research.
I always learn so much, and today has been, like, so much knowledge digested into the same, like, 25 minutes.
So thank you so much for all of this.
Where can listeners follow your work online?

Speaker 2 (23:29):
Yeah, so I'm not super online.
However, I am always happy to connect with other researchers on LinkedIn, so feel free.

Speaker 1 (23:35):
Cool, well, thank you so much for being here.
Thank you.
Thanks for listening in.
For more great insights, how-to guides, and tool reviews, subscribe to our newsletter at theproductmanager.com/subscribe.
You can hear more conversations like this by subscribing to The Product Manager wherever you get your podcasts.