
February 12, 2024 60 mins

Explore the power of AI in streamlining content creation and productivity in our latest podcast episode. We're joined by Clemens Rychlik from Bourbon, where he serves as COO, and the man behind the Barcinno newsletter, diving into how AI, including ChatGPT, is reshaping tasks like proofreading and SEO optimization. Clemens shares his insights on using AI for authoritative content, while we discuss balancing client projects with our own. This episode is a must-listen for anyone using AI tools or interested in their potential.

We delve into how AI aids in language-related tasks such as transcription, translation, and grammar checking, highlighting efficiency boosts from tools like Otter, DeepL, and Grammarly. Àlex also shares how AI has revolutionized his Excel workflows and other day-to-day tasks, emphasizing the importance of human expertise in refining AI outputs and maintaining a brand's voice.

We conclude by examining AI's impact on SEO and content strategies, noting its quirks and the importance of accuracy in the digital realm. The episode also covers our hands-on experience with an internal prompt library, setting the stage for a future where AI and human creativity merge to produce impactful content as MarsBased approaches its ten-year milestone.

We don't know if these show notes, generated 95% by an AI, would be seen as great in the eyes of Google, but what we know for sure is that we have the best audience in the world. Thanks a tonne for listening to our show!

By the way, we're celebrating 10 years in a few weeks. Make sure to send us questions for a special 10-year Q&A episode with the founders of MarsBased. Send them to hola@marsbased.com!

Support the show

🎬 You can watch the video of this episode on the Life on Mars podcast website: https://podcast.marsbased.com/

Episode Transcript

Available transcripts are automatically generated. Complete accuracy is not guaranteed.
Àlex Rodríguez Bacardit (00:00):
Hello everybody, welcome to Life on Mars. I'm Àlex, the founder of MarsBased, and in today's episode we have Clemens Rychlik, of the famous Barcinno newsletter, and also from Bourbon, a digital agency doing marketing, content, SEO and other needs in this realm, based in Barcelona, but also innovating in many aspects.

(00:21):
We have been collaborating with both parties, and in today's episode we discuss AI, of course, but not AI generically speaking, not the AI that is going to kill us, but how AI will incrementally make us more productive. In this case, we discuss content. We discuss the effects on SEO, on ranking and positioning, on

(00:44):
reputation. They are experts in all of these categories at Bourbon, and Clemens offers pretty insightful things about content generation, the authority that you can create out of the stuff that you produce, and how you are able to pass down that SEO juice, that expertise, that accumulated authority throughout

(01:05):
the years, to the new content that they will be generating as an agency, as a thought leader in the industry. And for today's question of the day, we want to hear: what are you using AI for? Give us a concrete example, a specific example, something that you do every day, not something you did just once, something that is instrumental to what you do. Is it data

(01:27):
transformation? Is it preparing text? Is it proofreading? Is it content generation? Is it image enhancement? What are you using AI for? Even if it's a really, really small use case, it's going to be very helpful. Just feel free to share it with the community, with us, in the comments down below, or send it to us on social media or via email.

(01:49):
We'll be sharing those. Also, send us questions: we want to record a special 10th-anniversary episode of MarsBased, coming up next month, actually in a matter of weeks, probably. This episode is going to be landing on the first week of February, the second week of February at most, and on the

(02:09):
first week of March we turn 10, so feel free to send us any questions at alex@marsbased.com, hola@marsbased.com or our social profiles, which we're going to link in the podcast description, and, without further ado, we'll jump right into this episode. Awesome, let's do it. Well, welcome to the show, Clemens. Welcome to Life on Mars. How are you doing?

Clemens Rychlik (02:30):
Thank you.

Àlex Rodríguez Bacardit (02:30):
Àlex, really appreciate it. I'm good, looking forward to this episode. As I mentioned, I'm not nervous, but I'm concerned that, as we speak, we might be releasing the new MarsBased website, and you know how these processes are. They can be pretty troublesome, and I know I should be there, but, you know, the timings have ended up colliding, and I don't know if

(02:54):
you've been through any similar experience in the past.

Clemens Rychlik (02:57):
I know. I mean, we work a lot on SEO projects, right? So this has happened before. And I know you just moved houses, so I think a new website launch is almost like a house: you know, it always takes way longer than you think. So hopefully you will hit the timeline today.

Àlex Rodríguez Bacardit (03:13):
I would say our website has taken way longer. We started designing it almost three years ago. I want to say two, but I realized it's almost three, because it was during the pandemic, in 2020, and then we kind of put it off for a year because we were too busy, and said, like, let's recover this project in 2021. And then, you know, they say the shoemaker's children go barefoot, right?

(03:34):
Exactly, always too busy. Is that something that happens to you at Bourbon as well? That you're too busy with client projects and you can't work on your own stuff? Because at MarsBased that's our daily pain, basically.

Clemens Rychlik (03:48):
Yeah, I think it's like the typical thing, right? You always focus on clients first, so they take priority, and so, by essence, you almost never have time. And then, when you have time, it's usually holiday season or something, and then you have a project as well.

Àlex Rodríguez Bacardit (04:01):
So I totally get what you mean. I wanted to start by asking about, you know, the state of the art of AI, what you guys are using and whatnot, because it seems that it's a recurring thing in all the interviews that happen on this podcast, and it always comes up in the middle of the conversation, and we never dive deep into it.

(04:21):
And I think it's quite related to SEO, to PR, the world that we're both immersed in. So how are you using it on a day-to-day basis? What tools are you using? What would you recommend? What not?

Clemens Rychlik (04:34):
Yeah, cool. For us (and I will probably talk about the different stages of how we got into AI) obviously, we all had AI tools before, especially on the content side. We've all been using them, so in that sense it's not really a new trend. Everybody has probably used things like Grammarly or Clearscope, this type of thing in the content space. I think

(04:54):
even Gmail, for I think several years now, has had this kind of next-word predictor when you're typing emails; sometimes it gives you the next phrase. So these things existed. I think just now, with ChatGPT and these other tools, it all sounds bigger and scarier, or whatever. We've been experimenting the whole year with different tools. We have to. If we talk about generative AI,

(05:17):
that's what really affects us probably the most. So we've been playing around with different tools, right now mostly with ChatGPT, just because it still seems to be the strongest one, at least for what we need it for. So this is kind of our bread and butter, and then behind that we still have the other tools that we use to support our content writing and the processes. But really, with ChatGPT, I think investing in

(05:40):
the paid version is what we found really useful, because, if you use it on a regular basis for content production and review, it makes it much easier to cut down a lot on the time that you need. Just because you can pre-write prompts, you can already include all the context that it needs to know, so you don't have to type the same stuff again for the tenth time, you know. So I think that's a big, big life saver, especially for us, since we have different clients. We wear different hats, so one day we are, I don't know, writing for a fintech company, and the next day we maybe write for a health tech company. So this allows us to always keep an overview. But I think, yeah, with ChatGPT, it's really

(06:23):
that it allows you to do a bit of everything, really. And I mean, I don't want to go too much into depth on what people already know, but we've been experimenting with other things as well. I mean, you mentioned the websites: I know Scott has been using it to build a landing page pretty easily, in a question of not even days. So I think, as long as you need it for something that doesn't

(06:45):
need to be completely refined by the AI, you basically have tools out there for everything. It's just a matter of what you personally need the most and then just testing it. There is literally something out there for everything you can imagine right now. Even for podcast recordings there are different tools that apparently help you to cut it down and clip it. And these are not AI companies; they're kind of adding AI features as well.

(07:07):
So, I mean, we've talked about Zoom before. When we try to record a meeting, they keep adding stuff, and we don't even know what's possible or how it works. But, you know, everybody can say they are an AI company

Àlex Rodríguez Bacardit (07:17):
now, right? Exactly. And for me, it's crazy how people are hyper-regulating or over-regulating this, or doing these super futuristic, impending-doom talks about how AI will destroy us all and whatnot. And I think the revolution of AI, for me, is the

(07:38):
small things in the tools that we already use, right? The transcriber in Zoom, Grammarly embedded in Gmail, or Google Calendar telling you: oh, you're scheduling a meeting with this person; he or she is much more likely to accept this invitation if it's only 30 minutes long and it's in the morning; or he has been

(07:58):
cancelling in the last weeks, he has been postponing a lot of meetings, just because maybe he's sick or something. You know, this kind of smart assistance, this digest of information to make our life easier: for me, this is the revolution of AI, not a fucking AGI that's going to destroy us all, not Boston Dynamics robots and stuff like that. Which is cool, it's cool; it just maybe brings more people to the pool, to the

(08:21):
talent pool, and just makes the pie bigger. But the revolution is these small task savers, this automation, this destruction of the super-low-value tasks that we had to do manually. And for me it's been like, you know, hiring an executive or personal assistant, right? Paying 20 bucks a month for ChatGPT Plus is the cheapest bargain of all

(08:45):
time, because I've been using it in the last 24 hours. Let's talk about the last 24 hours: what have we used it for? I have created a fake logo for a potential new company, for a company presentation, because tomorrow we have the company get-together at MarsBased and I'm introducing a theoretical meeting with another company. I'm going to be talking about PlutoBased, you

(09:05):
know, a fictional company based off MarsBased. And so, in 10 minutes, I created a logo. It looks fucking good, and I didn't have to sign up for Fiverr and stuff. But also, you know, the content of our new website that's being launched: I translated pretty much everything using ChatGPT, not Google Translate, just because it's fucking faster and all the things. I use it all the time.

(09:26):
I use it to bounce off some ideas I might have, and I'm like, oh, this could be great, but maybe I'll ask ChatGPT to give me counter-arguments and see why this is probably not a good idea, give me all the other visions people might have about this. And it's very good at that. So, as a good friend describes it, it's like an intern with

(09:47):
infinite time. And it's really true. What have you used it for in the last 24 hours yourself?

Clemens Rychlik (09:53):
Yeah, those are great examples. So for me (and again, it's funny that some of these tools have been around before; since AI kind of exploded with ChatGPT, we just didn't talk about them as much), for example, what we use a lot is Otter for transcriptions, for English content, for podcasting or interviews. It's an AI transcription software that works really, really well. We actually have a lot of clients who, instead of

(10:16):
translating content themselves, have for many years already been using DeepL, which is kind of an online machine-translation software for basically any language. And then it's funny, because they ask us to review the machine translation. We almost call it a humanization of the machine translation, because sometimes you get random stuff. So I've used that before as well, this week, actually yesterday. And then, obviously, Grammarly, Gmail, and

(10:39):
these types of things are always on. But I think maybe the most interesting is within ChatGPT. Like you said, it's the new uses that you find for this kind of assistant, and it's exactly what you say: every week you seem to find some new application where you can remove a super boring task and assign it to your assistant.
You know.
So my favorite one was likejust think it was last week,

(11:02):
because I work a lot with data and stuff. You know, when you do keyword research and then you want to also analyze existing blog posts and how they perform and all these things, you have different data points. And I'm kind of a spreadsheet and Excel nerd, but I'm not a super programmer that knows every little formula that exists in the tool. So before, maybe I spent two hours researching: okay, how do I

(11:25):
make this cell do what I actually want? Because you have to first understand how the formula works and then be able to apply it yourself, so it takes a lot of trial and error to figure it out. And now I just go to ChatGPT and tell it: I have this column, I have this row, this is what you can find here, I want you to do this; write me the exact formula for what I need, right here. And then you can almost copy and paste that,

(11:48):
and instead of spending two hours researching, you get it in literally a few seconds. Maybe you don't have the complete learning process behind it, but if you then review the formula and still get how it works, you can use it next time. So I think that's a really, really great one.
We do a lot of content, and a lot of times, when we prepare briefs for new content and articles, actually I would say

(12:09):
maybe 70% sometimes of the biggest value comes from the research work behind it. Part of it is obviously manual, part of that can now be automated, and part of it can come from expert interviews and stuff, I think. But sometimes you end up with so many input data points that it takes you quite a while to process all of that yourself and

(12:30):
put it in a structured way, so that we have different sections of an article with a logical flow. And I found it incredibly useful to just put everything that you've collected into something like ChatGPT and have it give you some structured idea of how to put this data together: into clusters, into topics, into paragraph overviews. And that's a great starting point.
You know, obviously you need to refine it. Sometimes there are some random things coming out of that, and you need to tweak it with your experience. I think that's normal, but it saves you so much time. Because, I mean, if it allows you to do stuff faster or better, you're gonna use it, right?

Àlex Rodríguez Bacardit (13:07):
Well, exactly, and that's one of my favorite use cases, the Excel one. Exactly. I upload the screenshot of my file (sometimes I'll first get the data) and then I tell it: oh, I want to do this, how do I do it? Because I know next to nothing when it comes to Excel. I'm just that horrible when it comes to data manipulation using Excel and spreadsheets. And it just works

(13:30):
like a charm, pretty much a hundred percent of the time. The only thing is, you touched a point, you hovered on a point I wanted to dive a little bit deeper into, which is the expertise you need to review it, right? A lot of people are not conscious of that. And the thing is, for me it's kind of like outsourcing, in which, if

(13:50):
you don't have a little bit of knowledge of that field, of that realm, then you will not get the best results, because you will basically accept any output it gives you as valid. And we know of the hallucinations of ChatGPT, we know of the hallucinations of artificial intelligence, how sometimes it just makes up stuff, just because it's a conversational LLM; it's not a specific

(14:13):
sectorial thing, this one, and we're using it for very specific things when it's a generalist. But the thing is, for instance, with the translation of our previous website: when we launched our previous website in 2015, I manually translated all the content. Granted, it wasn't too much content, right? But this one, I mean, we didn't translate the blog, but without the blog we have about 80 pages or something

(14:36):
like that. I wasn't gonna go a hundred percent manual on this, and so I used GPT-4 to translate it. But even with that, even if it's really good, sometimes it's too good: it doesn't have your tone, it doesn't have your voice. It maybe translates stuff that, in your context, normally is

(15:01):
not translated. If we are talking about, you know, maybe the downtime of a server, we don't translate downtime to Spanish: when talking in Spanish, we talk about downtime, we don't talk about "el tiempo de caída" or something like that. I don't know, I wouldn't even know a word for that. Or throughput. There are certain things, part of the

(15:23):
jargon, that we don't translate, and yet ChatGPT or its equivalents would translate them, right? But in the context of our website it doesn't really make sense.
So that's why you gotta review every output it gives you, as if it were an intern, as if it were an agency that you were outsourcing to, right? Because otherwise you will be given these kinds of maybe not fictional, but just not a hundred

(15:45):
percent ready-for-production outputs that you need to review. Otherwise, yeah, it is totally done by AI, but it needs tweaking a little bit. I don't know, I don't know how you can tell.

Clemens Rychlik (15:56):
Yeah, I mean, you touch on so many points. I'm trying to structure my thoughts on this.

Àlex Rodríguez Bacardit (16:01):
Welcome to the show. It's kind of like that for an hour. No, it's great.

Clemens Rychlik (16:06):
I mean, I have a lot of thoughts on that. Maybe first of all, because you touch on a really great point, which is that a lot of people are not aware exactly of where this data even comes from, and we can't know, because it's kind of a black box, right? You put something in and you get something out, and it doesn't really explain how it did it: what decision did it take, what did it look at to come to this conclusion? For example, in your case, that

(16:27):
translation, right? You just get the result, and that's it. So I think what's important to keep in mind, especially in our cases (I mean, we are deep in the tech sector on both sides), is that we're not, maybe, the super geniuses of technology, but I think we're very much at the forefront, actually using it on a day-to-day basis.

(16:48):
Now, I would probably say most people still have literally no idea about AI, and they don't know all the details behind it. Just this weekend, there was a really good exhibition at the CCCB in Barcelona, the museum, about AI, and it really explains a lot of the stuff, which I think is great, because we need that. We need people to actually know what it is good at, and how it
(17:12):
works (I mean, not in full detail, but at least a general idea), and especially what it means for you, using it day to day. Because, like you said, you want to make sure you review the stuff that comes out of it, especially if you want to create things that are really important to you. There will be some things: you know, if you use it personally to maybe draft your
(17:32):
grocery shopping list, you don't care if it makes a small mistake, or if it puts a super expensive item on the list that you don't want to buy. But if you use it for anything that's really important, like a website or, I don't know, client emails and these types of things, you really, really, really need to double-check it. I mean, translation is the best example. You always get these random, you know, false friends, and
(17:54):
a lot of things can go wrong. You definitely always want to have somebody who at least double-checks that type of thing. But even here, we talked about this before, it's not something completely new. The best translators were using some sort of AI translation software for a long, long time, which I think actually still probably works way

(18:17):
better than whatever we think about right now, because it allows you to build a translation library, right? So you can tell it: okay, for this type of word or phrasing, please always use this, and the next time it just does it automatically. So, as AI gets better, and I'm sure there will be more specialized tools for translation, as there will be for other things, I think this will make it really cool, because then you don't always have to make the same edit

(18:40):
that you do to not translate downtime, for example, right? And it's gonna make it much easier, because right now you can do it: you could probably create some prompt library, or some context that always gives it that input, like: don't translate these words, or always translate this word this way in this case. But again, it's still not as automated as it could be, while I think, ultimately, the idea of AI is

(19:02):
that it's gonna be as straightforward and easy as it should be.
So, you know, just going back: definitely, always, always you need to double-check things. From the very first day, we experimented with it for content writing. Obviously we're not using it to write content for clients, but personally we were experimenting with it to know how good or bad it can be, especially for very long-form
how good or bad it can be, andespecially for very long form

(19:24):
content that's kind of based on something new. I mean, it was terrible. For example, we gave it the task to write a kind of journalistic article; it should include quotes and stuff to make it more engaging and more authoritative. And okay, on a first read you get the result and you're like: okay, it reads well, it's structured, the grammar is proper, the word

(19:46):
usage is okay, there are some quotes in it. You say: okay, it's actually not too bad. And then you look at the quote and you say: hmm, I don't know who this person is.

Àlex Rodríguez Bacardit (19:53):
Let's check it up on the Internet... It doesn't exist. The person doesn't exist.

Clemens Rychlik (19:57):
Okay, this quote has never been said. And then you've caught the AI, and it's almost like a very bad, sneaky student: when you ask it, like, okay, this quote doesn't exist, it goes: oh yeah, okay, actually, I'm an AI, I can't produce real quotes and stuff, you know, but you should always review it. So it's kind of like: come on. Even when you ask a question, only then does it admit that

(20:17):
there are certain things that it didn't tell you until you have specifically asked it about something. So I think it's super, super dangerous if you leave any type of AI work unchecked. Especially, going back to what I mentioned about this exhibition at the museum: they gave great examples of how we have to keep in mind that AI is trained with data points, and a lot of the data points that have been used to train AI are biased,
(20:40):
depending on whoever was the researcher working on it. So, by definition, a lot of the AI production is gonna be based on this bias, which is also terrible, because we don't want it to reinforce certain stereotypes and discrimination that already exist in the world. Actually, we would like to do the opposite: use AI to make it better. So I think there are so many questions around that; they just

(21:04):
make it, for me, obvious: anything with AI, you need to check it. I think that's like basic one-oh-one, and I just hope that everybody who listens to this episode, and beyond, realizes that, because we see so much work released unchecked. And I find it funny, what you said, that you can tell the style and stuff of AI. For example,

(21:25):
even in content writing, I spoke with some clients and other people in the content industry, and there are certain things where you can tell this has been written by AI. It seems to have a general, default style; it's this kind of friendly, corporate type of writing. So sometimes, if people use it for email, you can tell: this guy is now using AI to write his emails, or this social
(21:46):
media post was written with likean AI.
Sometimes it even uses exactlythe same phrases like a lot of
social posts a lot of times.
So, like you know, we havegreat or super exciting news and
this is like almost always theopener for any like news you
want to announce.
So it's really funny.
It's like this default signeverything's going to sound the
same.
But then again, as you get smarter about how it works, you can give it more context, right? You could say: I don't know, write your social media post in the style of Elon Musk. If, for example, you want to be very polarizing and offend a lot of people, that could be a great way to go about it.
But you can pick different styles depending on who you like. I still think, though, that even if you can do that, you lose kind of the human touch.

(22:26):
You know, you want to make sure that whatever you write sounds like you, because otherwise, I mean, what's the point? I think it would be a shame if everything ends up sounding the same, or like two or three general styles. That would be kind of boring for everyone.

Àlex Rodríguez Bacardit (22:39):
Yeah, I tried it for the blog. I wrote a couple of blog posts, and I let people know they were written with AI; we were just testing, right. And the funny thing is (let's get a little bit technical, let's nerd out a little bit here) there's not yet a specification in HTML to give

(23:02):
the authorship to an AI. Or, putting it another way: we've got the author meta tag in HTML, right, which is used to give the reference to the original author of an article and whatnot. But I don't think it's prepared for this. You could write OpenAI and that's it and be done with it,

(23:24):
but I think we need to come up with something better. I'm even suggesting something (I think I wrote it on my personal blog, but I'll give it more diffusion, maybe): I'm proposing a seed tag, right, where you can say: oh, this blog post was written with AI, with this original blog post in
(23:46):
mind, or with this idea in mind, or with this author in mind, right? Because the content is not his or hers, but it's a big inspiration. Maybe I have ingested his or her content into a tool, and then I have come up with this article that I have rewritten myself.
To put it another way, because, as you said, it just

(24:09):
comes up with something that sounds too generic and maybe too polite and too nice, right? And there's one small nuance to the things that we do. For instance, we're not native in English, and our blog is in English, so sometimes I'm using idioms that are maybe not 100% correct, or some expressions that shouldn't be there, or phrases

(24:29):
that are exceedingly long, or I wanna use words that are too complex for the context, just because I'm not native, right? And that's my way of writing, and AI is just not getting there. I've trained an AI on our blog-post style, and still it just comes up with something too fluffy. It's

(24:51):
too optimized for marketing most of the time, right? And so I'm reading it and I'm like: I will not publish this, because 100% of people will know it was generated with AI. And then I end up almost redoing it, which is part of what you were describing: the fact-checking, right, or the style-checking. I'm not saying I'm rewriting 100% of the blog post.
(25:13):
I would say I'm rewriting 50%, just because I'm also pretty picky about things and I've got my own style. I just wanna add some more edgy conclusions, I wanna be more blunt in some opinions, things that, you know, an AI is not capable of, or shouldn't be doing. And the other thing is the fact-checking. As you mentioned, I've got a funny story here.

(25:33):
You know, I run this other technological podcast; it's called Focaterra, it's in Catalan. And we did a sort of live demo of ChatGPT, almost one year ago (maybe it was March, something like that). I was asking it about Focaterra, the podcast itself. And it said: oh yeah, totally, Focaterra is this popular radio

(25:54):
program on Catalunya Ràdio, hosted by these super famous journalists. I'm like: no. So, you know, if you don't know it... It's like: oh, okay, nice. I said: no, I'm pretty sure it's not a radio program; I mean, it's a podcast. And, as you said, it's kind of like a sneaky student. It said: oh yeah, sorry, I might have looked it up

(26:15):
somewhere else; I might have hallucinated, or something like that. It's actually a podcast, a technological podcast, talking about the problems of Catalan society and class war and blah, blah, blah. I'm like: well, no, wrong again. And then I was refining the prompt, and here's the funny thing, the thing you mentioned about the quotes, right. I said: can you provide links to relevant articles and

(26:37):
citations where needed? It said: yeah, I can totally do that. So there it goes again, a fourth iteration of a somewhat wrong explanation of Focaterra, its content, the host and whatnot. And then it cites websites that are actually made up. But the funny thing is the structure of the links: the HTTPS, you know, all the slashes.

(27:01):
If you went to check, they didn't link anywhere, but you could find other links on the website following the same exact structure. There was just something missing, something made up there, like my name, for instance, or Focaterra. So this Catalan radio's slash-podcast, slash-morning, slash-

(27:23):
Focaterra, didn't exist, but allof the other podcasts of Raku,
they followed this URL structure.
That was very interesting to meBecause it totally went to the
website, it crawled, it learnedabout these URLs and it made it
up.
You know, following thisinstructor, it was so funny.
But then again you gotta checkit because otherwise you will

(27:45):
look like the total automationor LinkedIn automation jerks who
are like hello first name.
Congratulations to youraccomplishment at company name.
You know it will look like that.

Clemens Rychlik (27:56):
Totally. But then again, that brings me back to how scary it is if we leave our work, especially the published work, unchecked. Because imagine, maybe somebody, you know, publishes something, maybe in good faith, and they just write something with AI. So it's like a post about Focaterra or something. And then accidentally the AI described your podcast as maybe

(28:17):
something, I don't know, like a super extremist, I don't know, political website or something. And then this goes out on the internet, and then, poof, there are a lot of, let's say, AI, maybe Twitter bots and stuff, who keep reposting that, and then it kind of escalates. And then, you know, you wake up in the morning and suddenly you have a lot of DMs in your account saying, like, Alex, what the hell are you doing?

(28:38):
Or something, you know. So it's really scary. We know how fast social media can sometimes make things go viral, even if the foundational story of it is wrong, and by the time people realize that, the damage has been done. So I think it gives us a lot of responsibility to check our work, because there's gonna be more false information if we

(29:00):
leave it unchecked, which is really scary.

Àlex Rodríguez Bacardit (29:02):
That's a good segue for my next question, which was gonna be, okay, we've been working for years, for decades, on SEO, right? We've optimized, and people see, you know, not only the content but also the reputation. There's stuff like Glassdoor, where you can work on the reputation you have online, and whatnot. But then you ask about your company to an LLM and

(29:26):
it comes up with something that you're maybe not in control of. And we're starting to hear this. I don't even know the term for it, but if SEO is search engine optimization, and we've also got App Store optimization, what is the AI snippet optimization? Is that even a thing? Is that something you guys are already working on?

Clemens Rychlik (29:48):
It's something that we're starting to look at very seriously. It's something we wanna roll out next year. The reality is, it is very, very new. So, you know, first of all, historically, something related to that is the claim that SEO is dead, which is, I think, happening for the 10th time now. So I mean, SEO must be some sort of a zombie, because somehow it

(30:11):
always comes back and is alive again. You know, there was voice search and all those things before, and every time people think, okay, now this is gonna be the end. I certainly think now, with what you mentioned, it changes again: searching will be done differently, and you want to make sure you can optimize for that as well. The reality is we have a lot of smart marketers who, whenever these changes happen, run a lot of experiments and try to,

(30:35):
kind of like, identify best practices. It will just take a bit of time until they have enough feedback and input from the different experiments they run. So right now it's not completely clear, but I do think that, in the end, especially the newer versions of AIs are trained with more and more recent data. So my belief is that you just have to understand the

(30:57):
side of things that, when you work with ChatGPT or something and you ask it questions, it will be based on data that maybe was put in manually or came from the internet. The one that you can influence the most is the one on the internet, right? So even if you think that more searches will be happening through this kind of conversational chat with AI bots or something similar, you should be aware that this is

(31:20):
based on existing content. So SEO will still have relevance, because if you rank at the very top, it's going to be more likely, I think, that an AI is going to scan through something that you have written and is going to use that as one of the foundational pieces of its own training to create whatever output it needs to create. So I think, in that sense, the main idea of SEO is still

(31:44):
relevant and the same. It's just a bit more indirect in how the result then appears in this kind of conversational search.

Àlex Rodríguez Bacardit (31:53):
It makes me think about the weight of the importance of an article or a website. As you mentioned, the foundation of SEO is going to remain relevant because, most likely, ChatGPT and all of these other open models are pulling data from Wikipedia and Google and Reddit. We know they might be stopped from Reddit, right? But at the same

(32:19):
time, we know this is happening now, but it might not happen in the next iteration. The first iteration of the data is pulled from SEO stuff, let's put it this way. The second iteration of data is something that they have reworked with their own algorithm, and we don't know what happens in this black box, right? So what you're saying, essentially, is that if we at

(32:40):
least control the first part of the equation, we're in a good spot now, but we don't know what will happen in five years' time. Right? I mean, kind of like when Google had to rebalance the algorithm. But how do you guys stay ahead of the curve? How do you train yourselves? How do you stay on top of news and trends?

Clemens Rychlik (33:01):
Yeah, I mean, for us, first of all, this is a great question, and obviously AI is advancing so fast that it's really difficult to make, honestly, any sort of prediction whatsoever that goes for, like, five years or more. But, like I said, I do think that if you follow the SEO practices and make sure the content that you want to rank for is at the top, this will help you rank in, or appear in, the conversational searches as well.

(33:22):
And the other thing is, if we think about it from a brand perspective, I think you also have control there, in the sense that if you establish a proper content style guide and messaging playbooks that you use across the board internally, but that you can also share with partners that you work with, like agencies, at least that helps you ensure a consistent brand message.

(33:43):
So the likelihood that a completely, you know, off description of your company or product or services comes into play in the future is limited. But for us, this year has been very much about learning about AI, how to use these tools, because we realized pretty quickly that, you know, it's not going to replace marketers, but

(34:06):
it's going to replace marketers that are not using AI, because I think it's pretty logical that anybody who uses this super virtual assistant in their day-to-day can be way more efficient with what they're doing right now. You know, I can probably review more content, I can create more briefs. I can probably create more content also, if I wanted to.

(34:26):
So it is definitely something that you need to use, but in the end it comes down to, it's almost like a new tool, right? Maybe we call it AI, but it's almost like a tool. You have to learn how to use it in the best possible way. So, I mean, maybe we don't need all these 10,000-prompt playbooks that are out there. I mean, nobody's going to read all of them, but I think you do want to learn how to do what you need AI for in the most

(34:49):
efficient and best possible way.
And so what we do is we work with different tools every month. We have almost like a weekly call where we kind of digest what we've been working with, things that we've learned. We build an internal prompt library that we can share across the board, just to kind of multiply the learning effects, and this has really helped us, you know, find some really

(35:11):
cool ways to save on time internally. And then there's also just sharing any articles that we see and read. You know, we have this weekly touchpoint where we really try to digest that. We don't really talk about anything related to client work and stuff, but really just, okay, what have we worked on, we have experimented with this tool, this is our experience, it would be good for that, it could be bad for that. And then you just learn a lot faster

(35:32):
, I think, you know. Because, I mean, even with AI tools, in the time this podcast was recorded there are probably 10,000 new AI tools out there, because everybody uses AI to build AI tools as well, which is crazy.
But I think you learn so much just by engaging and practicing with it. It's almost like anything else in life. You just have to do it, you have to learn, see the experience, and then you pick up stuff. And it's like, even little things that sound completely bizarre, but then you kind of just trust them and test them out and see if it

(36:15):
works. And then you see, okay, this is actually kind of good, and then you just keep doing it. So one of the strangest ones I've seen recently was some research that has been done showing that if you use emotional pressure when chatting with an AI bot, it performs better. So if in your prompt you tell it something, not necessarily rude or bossy, like, hey, okay, do this and this, so you put in your normal, you know, prompt and context, and then maybe at the end you put something like, hey

(36:38):
, you know, this is really important for me, my job is on the line here, it tends to perform way better, at least significantly better on certain tasks, than it would if you don't put that in. So it's kind of strangely funny that this emotional pressure works on an AI, which I think is totally bizarre somehow, really strange. It doesn't have emotions, but it understands, maybe, your emotion and then tries to do a

(37:00):
better job.
So that's one. And another one that's also a bit strange, because again, it's not really a human, but apparently if you tell it things like, okay, I want you to do this and that, but do it in kind of a step-by-step process, you know, take a deep breath and work through your process to get to the end result, maybe review it once or twice yourself internally before you

(37:20):
post whatever your output is, that also tends to perform a bit better than if you don't do that. So it's these little things that we're still trying to learn that, I think, help us. And I think it maybe even ties back to how we use Google. You know, strangely, there are still people who don't know certain of the, I don't know what to call them, operators for

(37:40):
Google that you can use. You know, putting things in quotes so it returns the exact same phrase, or excluding certain words from appearing in your search. I think it's just little things that, you know, make the outputs much better for what you need them for, and I think that's something you can only learn by doing it. You know, I don't think that by reading 10,000 prompt

(38:01):
guidebooks you will remember five prompts out of that. So you just have to do it and learn it that way, I think.

Àlex Rodríguez Bacardit (38:07):
It's funny, because I'm following fewer people now than one year ago on AI, just because they're all posting the same shit, and it's too much shit, and it's pretty irrelevant to what you do. It's impressive, right? It's like, here's this new tool, every day. The Thread Boys, right, the AI Thread Boys, and every day there's something new. It's like, I can't play with these.

(38:29):
I can't play with a new tool every day, because I have a real job, motherfucker. But they keep overloading you with information. And the funny thing is, 80% of what they give you is true and valuable, so to speak, but 20% is made up, based on impressions. I'm going to give you an example. Remember, let's say, the peak days of Midjourney, right,

(38:53):
when Midjourney was starting to become really good and everybody was kind of like, oh, here's my guide to writing the correct prompts, with this and these parameters and this order, and blah, blah, blah. You tweak it like this and you add the camera style and whatnot. And the funny thing is, a lot of these things made sense, but 20% of them were fake,

(39:14):
just because they thought they were working, because they were getting good results, but it never occurred to them to inspect whether taking them off would actually produce the same results. And some people were like, dude, I work at Midjourney, and this parameter doesn't actually do anything. But it made them look more intelligent, right? More elaborate.

(39:34):
And one of the things that I like to say is, if you follow somebody else's steps 100% and blindly, you're just going to get to the same exact place, maybe, but you're not going to get the thinking-outside-of-the-box results. And I see a lot of people just thinking in this ABC modality, and they're just following blindly what other people tell

(39:54):
them, and buying these prompt guides and whatnot. And they're not experimenting. They're losing this critical reasoning and this way of deep thinking, right? We're just following blindly, which is exactly the contrary of what should be done in these experimentation times. It takes me back to the beginnings of SEO, when there

(40:16):
was black hat and white hat SEO and some really questionable tactics, right? But the good thing there is, everybody was experimenting, and something was good for 15 days or for 12 months, something like that. It was a complete mess, but it was funny. We were experimenting, and the regulation was never

(40:38):
catching up to the amount of experimentation. That's what happens when a market is not yet ripe, right, or it's just beginning to flourish. And I'm getting the same vibes now with AI, and I want to stay on top, but at the same time, now I've got a real job. I'm not a student anymore, if you know what I mean.

Clemens Rychlik (40:54):
Yeah, I think, definitely, at the start we had all this hype where we realized we could do almost anything. And then we tried to even figure out, what do we actually want to use it for? And some things were fun, but then, you know, for example with DALL-E or Midjourney, maybe you experimented with it for one day, for an hour or so, to create some funny image, and that's it. Or maybe you wrote some, I don't know, song lyrics for

(41:15):
some random, you know, music that didn't make sense, just to see how it works. But I think once we got past that, it is what you say: this experimentation is key. And then it's funny, because that again brings it back to what we said before. Experimentation is important, but if you're not experimenting with things that you have any expertise in to begin with, you can't really get

(41:39):
what you said before: having the knowledge of being able to actually review the result and say whether it's good or not.
If you take the Midjourney example, I think it's a great example of how, again, it's not going to replace a person. Because, yeah, I could use Midjourney to create some images and digital

(42:00):
art. But since I'm not a designer, I can look at all these prompts that reference camera angles or lighting angles and color composition and stuff, but I'm not a designer, you know. So whatever the artwork is, I can say, okay, this kind of looks good, or this doesn't look good. But a designer will be able to, even if they don't know the prompt, just give it the information that it needs, because they just think differently, based on their

(42:21):
experience. And then, especially important, they can actually review the results to say whether this has actually done anything useful or not. So, again, experimentation is key. But if you're not an expert on the subject, to what extent can you review the work? It's almost as if I would hire, for example, you guys to do a web development project for me and then I try to review

(42:42):
the homepage, but I'm not a coder, so I have no idea. So what's the value of my feedback, besides telling you things about the appearance and stuff, you know?

Àlex Rodríguez Bacardit (42:50):
Yeah, that's a really good point. So, circling back to the Midjourney thing, I remember that at the beginning everybody was creating something like, oh, it's crazy, you know, all of these spaceships and fictional cities. But then it got really specific and I couldn't progress anymore, because, for instance, some people were like, oh, create this scene of

(43:13):
whatever, you know, a dinner at the table, but in the style of Wes Anderson, lighting from this angle, camera of this model. And because I don't have the knowledge of this realm, I don't know jack shit about cinema, I don't know jack shit about lighting, I couldn't fine-tune, right?

(43:34):
So let's say everybody will become a generalist in AI, but the people who have the know-how, the knowledge, the sectorial knowledge, they will still be able to be 10 steps ahead, just because they still have this knowledge of camera models, right, and the focus and the lens distance and whatnot.

(43:54):
I don't. So I will not be able to fine-tune my creations with regards to that, right? The same happens with coding. I'm not going to get too technical here, but it happens more or less the same with Copilot.
I was going to ask you the next thing, kind of to wrap it up, about the business, the future of your business, because AI has given us, like, a change of scenario. And

(44:20):
MarsBased, you know, we've always been a boutique company, specialized. We do very few things, and we do them really well. At the same time, right now, people are catching up with the expertise just because AI has elevated people, right, and we have not yet found a way to elevate ourselves. Or we are in this time where people can get pretty good

(44:41):
code with Copilot, for instance, right, and clients might just want to pay less to these people. Put it another way: remember the famous Jeff Bezos quote about focusing on the things that don't change, not on the things that change? People will want a broader selection, cheaper and

(45:02):
faster delivery over time, not the other way around, right? So for me, as an agency, I'm thinking, if I'm following this reasoning, people will want MarsBased to do more technologies, deliver the projects in less time for less money, which is 100% contrary to our philosophy. But what if we

(45:23):
have to adapt to the zeitgeist, right? And if we don't do it, it will be the demise of our company. Are you finding yourselves also in that position? Or how are you seeing the future of your marketing, PR and content services?

Clemens Rychlik (45:40):
Yeah, I think it's an excellent question, and something that, I remember, when we discussed it at the start of the year, was less clear for both of us, what it may actually mean. But then you speak with people, educate yourself, and you get a better idea. I think you touched on a good point. People know these tools are out there, so they're gonna start using them, and they're gonna see how useful they are, or how

(46:03):
useful they are not. They may be pretty useless in some things as well. So I think they will realize, regardless of the quality or the results they may get with that, that there is a very affordable, cheap way to do something at scale here, which obviously, if you put it in comparison to a seasoned agency, is a

(46:23):
completely different proposition in what it tries to do.
So you have those two options out there. I think, as a business, you can't completely ignore that. You have to understand that this is out there and people are gonna start using it. There are maybe gonna be certain parts of the business that you're doing right now that maybe they will internalize, or maybe

(46:44):
they're gonna assign to somebody else, or that make you rethink how you even budget or invoice them yourself. Because I do think that over the long term there are efficiency gains to be made here in certain tasks, in almost anything we can imagine, and over time I do think the reality is gonna be that some of that will eventually be passed on to the client. Because if, for example, you have two agencies that

(47:06):
are equal, and one doesn't use AI and one does, maybe the one that uses AI can perhaps charge a bit less but have a higher profit margin by actually using AI.
So there are gonna be a lot of questions. But again, each function that exists in the job world is quite, quite different, and AI is better or worse in each of them.

(47:27):
In our case, we have realized that there are certain things where AI is actually acceptable for content creation, which is, in my opinion, rather the very short-form stuff that is created based on an existing longer piece. So, for example, creating social media copy or meta SEO descriptions, this type of thing. If you already have an existing long-form article, you

(47:49):
can get pretty good basic suggestions that you can then review, enhance and refine. So that definitely helps you save time.
But, for example, in the other case, right now we also see that if you wanna create some content that is about, say, an eShop, something that's actually new, where there are no data points or existing content to reference

(48:10):
for your AI, then it just goes into this hallucination mode with random results. So there's gonna be a certain type of content where you still want to absolutely make sure that the human approach is included there as much as possible. But then maybe over time it's gonna be better for kind of shorter content as well. And I'm thinking, again, if it's something very

(48:31):
basic, like, I don't know, if you do a lifestyle magazine and you wanna write about the top five beaches in Barcelona, maybe you don't necessarily need a human to write that. Maybe you just have an AI do a first draft and then you revise it. But again, I don't think that's where the quality comes into play. That's more of like a wider net in terms of strategy. And if I

(48:52):
tie it back to SEO: in the end, we traditionally do content from the inbound marketing side. So the idea is to not just get traffic to your website but to get relevant leads out of that traffic. I see a change here in terms of the general strategy, because before, the phrase was always that you're kind of becoming like a

(49:13):
publishing house, almost. You wanna publish a lot of content consistently and cast a very wide net to go after these long-tail keywords that you can optimize for, and that has worked.
I think this is gonna change now, because the important thing to keep in mind is that AI doesn't just exist for you, it exists for everybody around you too.

(49:33):
So if everybody uses AI to create content at scale, probably most of the content is gonna sound exactly the same. So who is gonna be number one on Google? If all things are equal, the differential factor is most likely gonna be the domain authority and the author authority. If you're a smaller company that isn't very strong in those two points, then yeah, you can create a lot of content,

(49:54):
but it's gonna consistently rank in a very insignificant set of ranking pages, at the very bottom. So it's kind of, you can do more content, but it's gonna be useless in terms of end results. So I think that strategy will change for smaller companies. For larger companies, maybe that's actually where they can really use it to drive more traffic at scale right now.

(50:15):
And for the smaller ones, and something that we also look at right now, I think there's gonna be more focus on a different mindset. A mindset where you are not a publishing house, where you're more of kind of like a library: you go into your niche and, instead of doing a lot of content on generic topics, you go really in depth on the things that you know very well and you try to really outperform other

(50:38):
content. In that way, you add your unique arguments, your unique views, your expert opinion, ideally also your unique data points that you collect yourself, and your experience. And that's actually funny, because that goes back to what Google wants. Google wants you to follow what we call the E-A-T factors, right? So you have expertise, authority and

(50:59):
this type of thing. So it always brings me back to this: with SEO, ultimately, I think you can't cheat the system. Like you mentioned before, those black hat and white hat tactics in the past maybe worked for a year or two, and after two years Google launches a new algorithm that now punishes you for something that was a best practice before, right? Which is good

(51:20):
. So I think sustainable SEO, which is kind of the term that I try to promote, is to really understand what Google wants, what the user wants, and just follow these types of principles: follow the E-A-T factors, focus on value, create something that actually helps the user. And I think that way you're gonna differentiate.

(51:41):
So this is something that we look at. I think there's gonna be more of that; this is kind of where we can add the most value. And then, aside from that, just again being open to these AI solutions and seeing how we can use our expertise as SEO and inbound marketers or content writers, and use our knowledge of what we see in our experiments with AI, to kind

(52:01):
of bridge the gap. Because I think one opportunity that we all have in our specific function is to become also kind of like a Master Yoda. A lot of people right now know AI is out there, it's very exciting, but they're also completely lost on how to best use it, literally, in the day-to-day, in the best possible way. So they're looking out for those people who can give them

(52:23):
some guidelines. Like, even in content writing, if you wanna internalize more of the content production and use AI for that, you want somebody who can tell you how to do it in the best possible way, and we have experience with that, because we have been working with different partners together. So, for example, on the content side, it would maybe be: make sure you have a content and brand messaging playbook that

(52:46):
really defines who your buyer personas are, what your style is, what the main points are that you want to bring across in your content. Because whether you assign it to a human or a machine, they will probably need the same input to create good work. Then you want to make sure you provide more context, on style or certain phrases you want it to use. So there's gonna be so much in this type of field, which I

(53:08):
think is a great opportunity: to be this Master Yoda person for people in your space, to consult them on one side, but maybe there's also going to be an element where you roll out training workshops for companies, where you can help them maybe work better with AI and humans and bridge that gap. So I think there are a lot of different ways. Right now it's a

(53:28):
theory for us. It's something we want to experiment with, with a completely new brand, in the new year. So we'll see how that goes.
But I think it's important that we all keep an open mindset, because everything's changing so fast. I think if you just keep doing what you do, it's going to be a question of time until you fall behind. There's actually a really good book, a very simple book, that

(53:49):
only has one main message. It's called Who Moved My Cheese?, one of those best sellers. It's really short, but it really encapsulates the idea that if you just keep doing what you do, it's going to become the end of you. So sometimes you have to take a bit of a risk, experiment with new stuff and see how that works, because the outcome is probably still going to be better than just staying put and not doing

(54:10):
anything.

Àlex Rodríguez Bacardit (54:11):
Which is actually a good point, because I'm seeing this as a parallel with no-code. So, about seven years ago, we had the rise of no-code, and that was going to destroy all programmers, all programming languages and whatnot. And a lot of companies went cheap with no-code and they said, oh, I can cut costs, and now a project that I would

(54:34):
have invested 150 grand in, I can build for 30. Reality came to bite them back in the ass, right? And so all of these no-code projects kind of disappeared off the market and eventually they moved back to code. Why? Because, well, they cut corners. If you use shortcuts and the technology is not really proven,

(54:55):
it will most likely not replace the entire landscape. It might become a niche, and no-code, granted, is a great technology for certain specific types of projects. It has become a niche, but it hasn't become the generalist. And I'm thinking that here we'll see the same: a lot of companies just replacing the marketers, the marketing departments of their companies, you know, we've seen a lot of layoffs in these

(55:17):
departments, unfortunately, or sacking their SEO agencies and whatnot, because, oh, I'm going to do it myself internally with ChatGPT and whatnot. Guess what? One year from now, this will be, you know, declassified by Google, deranked and whatnot. It will be penalized, and they will come back to you, to Bourbon,

(55:38):
or to whatever other agency, saying, oh, I need your services, because I've got a mess and I ruined my reputation with SEO and now I don't rank on Google, and this and that. So you will be called back.
Same for coding. I'm seeing that maybe there will be a gap of one to two years in which people are going to go cheap, because they can do it themselves. They will create absolute rubbish, and they will come back

(56:00):
to us, to the specialist companies, the boutique ones, in one to two years' time, asking for help. So maybe it's just, how do we bridge this gap, right? How do we survive for one year and then go back to actually showing the expertise that these people will need?
Before we wrap it up, there's a signature question on this podcast, and I know we were finishing on a high note with

(56:24):
these predictions about AI. Let's just tone it down, man. It's kind of the most difficult question of the podcast, but it provides the funniest answers, which is: what has been your biggest and most expensive fuckup? And if you can't quantify it with money, it has to be related to your realm. And don't give me crap like, I hired the wrong people, because

(56:46):
everybody tries to give me that one. That one doesn't count. It has to be something that you have squandered your money on, like, you know, a wrong marketing campaign, something where you're like, oh, I lost 100 grand on this.

Clemens Rychlik (56:57):
Yeah, for me, I mean, it's easy to say.
I think, if we're realistic, everybody makes a lot of screw-ups
over a lifetime; it's important that you don't do it twice.
That's why, at least, I always brief the team:
you know, it's fine if you make a mistake, but at least make sure
you don't do it again.
For me, obviously, I think we mentioned hiring; I think that's
the same one for everybody.
If you're not an HR pro or trained on that, it's easy to
make mistakes there.
For me, it's probably something on the SEO side, because I

(57:18):
came kind of like from a finance-side business degree.
Then I kind of did, like, consulting with a bit of sales,
and then, when I first made my bridge to, let's say, the
inbound marketing side, one of the biggest mistakes I made was
getting very lost, and it's like a typical marketing mistake,
getting lost in this kind of vanity metrics, where you do, like,

(57:39):
keyword research, you do all the strategy, but you optimize
for traffic, and then you run it for a few months and you say, oh
great, like, we have this much more traffic, but how many more
customers do we have?
Like, a very insignificant amount, you know.
And I think this was like one of the biggest realizations, so
I'm happy that I did it very early on, because it helped us
retain all of the clients that came afterwards.

(58:00):
But this is just, like, a really, really, really big one,
you know, because content production is expensive
and SEO work is expensive.
So if you optimize for the wrong metric, like just
website traffic, unless that happens, for whatever reason, to
really be the one thing the customer cares about, that's
like a really, really, really bad mistake. Because, like, yeah,
you get traffic, but you don't get what you care about.

(58:21):
Like, ultimately, people want to get, like, leads and sales,
things that then, you know, end up paying for whatever they
invest in you and their own team.

Àlex Rodríguez Bacardit (58:28):
And can you quantify it in money?
How much money was, like, spent there, or lost in projects or
sales, or, I don't know, is there any way to quantify it?
Not that we're ranking, but it just gives people some
dimension.

Clemens Rychlik (58:42):
I mean, I'm sure it cost like several thousand.
Yeah, still in the four digits.
I think I was happy we had this regular client meeting, so
we could identify it earlier and then, first of all, keep the
client, which is, like, I think, a miracle.
But I think they understood our transparency and trust on that,
and we were able to come in and do the switch to then actually
optimize for what mattered, because we had let it run for

(59:03):
like three months or so. But yeah, that was really, really like a
big, big mistake, and I'm happy we detected it early and I learned
from it, and I saw it early thanks to customers, thanks to
our management and stuff.
Because if you keep doing that, I mean, imagine that you keep
optimizing for traffic for, like, more than a year.

Àlex Rodríguez Bacardit (59:20):
Awesome, man. 30 seconds for you to say:
how can we help you back? Like, you've helped us immensely with
this podcast episode.
How can we help you and your company back?

Clemens Rychlik (59:29):
I mean, for us: just anybody who even has
questions or doubts about where the content side and SEO side is
going.
I'm just always looking forward to speaking with people in the
industry, basically to learn best practices, share what we see,
share what they see, and see what they may need.
Like I say, I'm not a salesperson.
I hate sales.
So, yeah, if a sale comes out of it, great, but it's not the

(59:50):
main reason I want to speak with you.
But just, yeah, if you want to grab, like, an e-coffee, or meet in
person in Barcelona, it would be fun to talk marketing, talk AI,
talk more on the things that we had here in the podcast in more
detail, because it is very exciting.
If not that, we have, of course, Barcinno.
So I'm always happy to talk about startups and tech here and
beyond, and especially my pet project with Barcelona Fintech,

(01:00:10):
with our team there.
If anybody's particularly interested in the fintech scene
here in Barcelona, needs any help, has any questions, I'm
definitely happy to talk and see what we can do.
It's a nonprofit, so again, there's, like, no monetary
reasoning behind it.
It's literally just to help companies here be successful in
the fintech space.

Àlex Rodríguez Bacardit (01:00:27):
Awesome.
And, just to wrap it up, my gratitude to you and your team
for the love and support that you've been giving us and our
community at MarsBased and Startup Grind in Barcelona over the
years.
So I wanted to make it even more public.
So kudos to you and your great job.
Thank you for being here.

Clemens Rychlik (01:00:42):
Thank you.
We appreciate all that you do. It makes our lives definitely much
easier at Barcinno, and I think we need more companies like
MarsBased, like Startup Grind, like, you know, Barcinno, to help
our ecosystem, you know, thrive.
So much great stuff.
We just need to, you know, put it out there on the map.