
November 21, 2025 13 mins

Welcome to Breaktime Tech Talks! In this episode, we dive into the latest updates and challenges in the world of developer tools, AI, and graph databases.

Episode Highlights:

  • Overcoming technical hurdles with Langchain4j and Neo4j, including the new support for read-only Neo4j databases in vector indexing (GitHub feature pull request).
  • Navigating versioning headaches and framework differences between Spring AI and Quarkus for AI-powered applications.
  • Lessons learned from hands-on work with Neo4j GraphAcademy courses (GraphAcademy GenAI Fundamentals), including AI and knowledge graphs.
  • Key takeaways from the Andrej Karpathy interview (YouTube interview link), including:
    • The strengths and limitations of large language models (LLMs) for developers.
    • The concept of the “decade of agents” and how agents are shaping the future tech stack.
    • The importance of teaching as a way to deepen technical understanding.
  • Upcoming events and workshops:
    • Neo4j Fundamentals GenAI hands-on workshop: virtual and free, December 11 (details on the Neo4j events page).
    • GraphRAG Fundamentals live course through O'Reilly: December 18.

Episode Transcript

Available transcripts are automatically generated. Complete accuracy is not guaranteed.
(00:05):
You are listening to the Breaktime Tech Talks podcast, a bite-sized tech podcast for busy developers where we'll briefly cover technical topics, news snippets, and more in short time blocks. I'm your host, Jennifer Reif, an avid developer and problem solver with special interest in data, learning, and all things technology. Apologies for missing last week's episode, but I've been a bit under

(00:27):
the weather, and it's also really easy to get lost in the overwhelming amount of content out there too. Staying on top of all the tech today is hard, and I'm so many installments behind on my many newsletters, subscriptions, and feeds. But this week I tried a little bit harder to balance the content consumption and production a bit better by soaking in an Andrej Karpathy interview

(00:52):
that people have been talking about, as well as producing this episode, writing for an upcoming Java book, and reviewing Neo4j GraphAcademy courses. Let me share what's going on. First up, I had mentioned a few weeks ago that I created a GraphRAG Fundamentals course that is through O'Reilly, and I produced the first installment there.

(01:14):
And I wanted to create a Langchain4j version of the application to go along with the primary Spring AI version that I had together, but I couldn't because Langchain4j didn't allow for a read-only Neo4j database because of the vector index. So you could run with a Neo4j read-only database in traditional queries and things like that. But to do the vector indexing, it tries to create the vector index inside the

(01:37):
database, and so requires write access. That has now changed. There was an update to Langchain4j that got pushed out, I think a couple of weeks ago now, where that capability is in Langchain4j. So I thought I would go back and work on the Langchain4j version of the application. The first thing that kind of tripped me up was that it was pushed in the Langchain4j community library.

(01:58):
So I needed to add that into my project in order to take in the latest release of that project, because it's not in the core project as of yet. That was a little bit tricky to figure out: okay, what library works with this version? Does my version of Neo4j or Langchain4j or Quarkus need to be updated in order to be compatible with all of that?
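As a rough sketch of what that wiring can look like once the right versions line up, here's a minimal Java example using the Neo4jEmbeddingStore from the Langchain4j community module. The dependency coordinates, package path, and builder options here are assumptions based on recent Langchain4j releases, so verify them against the version you actually pull in.

```java
// Assumed dependency (verify coordinates and version for your setup):
// dev.langchain4j:langchain4j-community-neo4j

import dev.langchain4j.community.store.embedding.neo4j.Neo4jEmbeddingStore;

public class EmbeddingStoreConfig {

    // Connects to a pre-created vector index rather than creating one,
    // which is what makes a read-only Neo4j database workable here
    // (per the Langchain4j update mentioned above).
    public static Neo4jEmbeddingStore neo4jEmbeddingStore() {
        return Neo4jEmbeddingStore.builder()
                .withBasicAuth("bolt://localhost:7687", "neo4j", "password")
                .indexName("embeddingIndex") // existing vector index in the database
                .dimension(384)              // must match the embedding model's output size
                .build();
    }
}
```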

(02:19):
It was a little bit of versioning headaches in order to figure all that out first. The second thing is that frameworks approach things a little bit differently. And it's taking me a little bit longer to make some design decisions. First off, Spring AI will format and construct your classes, and the way that the pipelines move and run throughout the application, a little bit differently

(02:43):
than what Quarkus does with Langchain4j. I could absolutely pull Langchain4j into my Spring app, but I figured if I am differentiating between using Spring and something else, then I might as well use something else for the framework as well. So I went with Quarkus and Langchain4j. That's what I'm building, and, because Quarkus takes a slightly different

(03:03):
perspective, their structure, the design of their application, is gonna be slightly different. So that's slowing me down just a little bit. Just making some design decisions about which way do I wanna go for this particular function and this particular capability.
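To give a flavor of that structural difference: with the Quarkus Langchain4j extension, an AI-backed service is typically declared as an annotated interface that Quarkus implements for you, rather than a client you construct and call imperatively the way Spring AI leans toward. A minimal sketch, with the interface name and prompt text invented purely for illustration:

```java
import dev.langchain4j.service.SystemMessage;
import dev.langchain4j.service.UserMessage;
import io.quarkiverse.langchain4j.RegisterAiService;

// Quarkus generates the implementation at build time and wires in
// the chat model configured in application.properties.
@RegisterAiService
public interface GraphAssistant {

    @SystemMessage("You answer questions using the provided graph context.")
    String answer(@UserMessage String question);
}
```

The resulting bean is then injected and called like any other CDI service, which is exactly the kind of design decision that plays out differently than in the Spring AI version.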
Also, I have found out that Claude isn't the greatest at complex Cypher generation, especially when you're dealing with brand new features.

(03:26):
For instance, Neo4j recently released Cypher 25, which is the latest installment of Cypher for this year. And it has some really nice features and capabilities included, but also your major LLMs aren't gonna be trained on that data yet. Right? It's out there, available on the web, but it's not part of their training.

(03:46):
And so the syntax that they're gonna go with, that they're gonna generate, is gonna be out-of-date or older syntax. So it's maybe not as clean, not as easy to maintain or read. And so I wanted to update and fix some of that and use some of the new syntax. But when you're trying to do this new bleeding-edge stuff, it's always hard to have an AI generate that. It's a lot of learning, going to documentation yourself, building it yourself.

(04:10):
Which is no problem, but it does kind of slow that work down just a little bit. AI is great at generating some things, not so great at other things. This is one of the things they don't handle so well, at least not yet. So I'm putting together this Langchain4j app, but it is taking me a bit longer to do so with some of these roadblocks and hiccups along the way.

(04:31):
Now, I also am working through some of the Neo4j GraphAcademy courses. I have built content for one or two of these courses, but I hadn't gone through the full library of everything that's available on GraphAcademy. I really wanted to explore what was out there and just see what the courses are, how they work, little tidbits I might be able to pick up about Neo4j or technology in general.

(04:53):
And I wanted to see what was in them. Now, the first course I tackled was the GenAI Fundamentals course, and this course walks through how large language models work, how you end up with hallucinations, and how adding context can help reduce those. It talks a little bit about RAG introduction concepts: how it understands a user query, then retrieves related information using semantic search

(05:17):
or vectors, and then provides an improved answer based on that context. Then the course goes into knowledge graphs, what they are, how to create one, and then integrating Neo4j with generative AI. Looking at retrievers: things like the vector retriever, vector plus Cypher (so a traditional GraphRAG retriever, if you wanna think of it that way), and then a text-to-Cypher retriever.
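As one concrete illustration of the first of those retriever types, in Langchain4j terms rather than course code, here's a minimal vector retriever built on EmbeddingStoreContentRetriever. The store, model, and tuning values below are placeholders, not anything from the course itself.

```java
import dev.langchain4j.data.segment.TextSegment;
import dev.langchain4j.model.embedding.EmbeddingModel;
import dev.langchain4j.rag.content.retriever.ContentRetriever;
import dev.langchain4j.rag.content.retriever.EmbeddingStoreContentRetriever;
import dev.langchain4j.store.embedding.EmbeddingStore;

public class VectorRetrieverExample {

    // A vector retriever: embed the user's question, then pull the most
    // similar chunks from the vector index to ground the model's answer.
    public static ContentRetriever vectorRetriever(
            EmbeddingStore<TextSegment> embeddingStore, // e.g. the Neo4j store above
            EmbeddingModel embeddingModel) {

        return EmbeddingStoreContentRetriever.builder()
                .embeddingStore(embeddingStore)
                .embeddingModel(embeddingModel)
                .maxResults(5)   // top-k chunks to return
                .minScore(0.7)   // drop weak matches
                .build();
    }
}
```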

(05:37):
And then it talks a little bit about GraphRAG pipelines and frameworks and so on. The thing that I really love about these courses is that they blend some concepts, so some traditional reading, lecture-style stuff, with real-world examples and hands-on work. This feels really manageable at each step, no matter where you're coming from in your learning journey and no matter what programming language or

(05:59):
framework you're most familiar with. I really like this course. I hope to showcase a little bit more about the course and a few tidbits here coming up, but that's where I am right now. I also am still working on my book project, and one of the things that I'm learning pretty firsthand is that my blog post writing and book writing are totally different.

(06:19):
Now, I knew they would be different. But I felt like I was at least somewhat prepared from my blog posts, because I like to produce blog posts that are step by step, and when I have longer-form content, I break it up into smaller chunks, so think chapters or sections or what have you. However, book writing is totally different, even from a multi-step blog post,

(06:41):
because there's more context that you fill in in the book, rather than leaving some things for the user to figure out. In a traditional blog post, you'll make some assumptions about what they know and what they don't know. You might expect they'll go out to resource links or run searches and fill in some of the gaps in what they don't know. But in a book, you don't really want that to happen.

(07:01):
You wanna fill in all those gaps as much as you can, and it took me a while to get into the groove. I worked through the first couple of chapters off and on, and then came back and started chapter three a while back, and chapter three just started to flow much better. And I think it was because I finally hit my groove and figured out, oh, this is how book writing is different from my traditional blog post writing,

(07:24):
and this is what I need to change and alter in order to get into that. I've also noticed, too, that it's harder to focus for those longer periods of time. Yes, blog post writing takes time, but it's much shorter installments and shorter editing, feedback, and publication cycles. Whereas, in a book-writing scenario, you could sit and write for hours and hours

(07:46):
and hours and hours and still not be done. So it's much more about: I need to break this up into smaller chunks. I need to do a section, or I need to work on the outline, or I need to work on examples. And you try to break it up into a little bit smaller pieces so that your brain doesn't get too fatigued, but it is a little bit of a different format and scenario than what I'm used to.

(08:07):
I do also have a couple of upcoming activities for the month of December. I mentioned these in my last podcast episode, so here are the details. The first is the Neo4j Fundamentals GenAI hands-on workshop I mentioned. This is from Neo4j, and it is entirely free. It's entirely virtual, so you can find this on the Neo4j events page.

(08:28):
I will also leave the link in these show notes. But this is December 11th, and it's a virtual workshop. I think it's two, maybe two and a half, hours long. And it walks through Neo4j Fundamentals with generative AI. And this is a little bit different from another virtual presentation, or virtual training, I'm gonna be giving on the GraphRAG Fundamentals.

(08:49):
This is the O'Reilly three-hour course; this is another installment of it. This will be December 18th, and if you don't have access to O'Reilly, you can sign up for a 30-day trial. Now, of course, that'll only give you 30 days in order to soak in as much content as you want to, for free. But if you're interested in testing that out or just seeing what all is

(09:09):
available, you can absolutely do that. In fact, I had a colleague who did that for my last course. That was really exciting to be able to pass that along to you. All right. I mentioned early in the episode that I sat down and watched the Andrej Karpathy interview. I will leave the link in the show notes for the YouTube video. It is very long, over two hours I think, but it is worth the listen.

(09:31):
I did it in small chunks, kind of watched a chunk, then came back later. And there's a couple of highlights that I want to boil down and repeat back to you for today. But feel free, I would definitely encourage you to watch the interview, even if it's just in small pieces or chunks over time. The first thing that I wanted to highlight from that is Andrej talks about how

(09:54):
LLMs are amazing, but they're also not. My take on this is there are certain things that large language models can do that are really spectacular and really amazing, but when they fail, they fail pretty miserably. And they talk a little bit about that, some scenarios where we're still trying to figure out where they excel and where they fail.

(10:14):
That's still some of the learning journey that we're doing. Another point from the interview is that they talked about the decade of agents versus the year of the agent. Andrej's perspective is that it's a decade of agents, which means that we're going to be seeing this longer term than just 2026. And my take on this is that it does seem to be the way the industry

(10:35):
is leaning: heavily on those agents, where each technology is a piece in the overall tech stack. So for instance, whether you're looking at, you know, graphs or vectors or deployments or guardrails or whatever it happens to be, then each technology is going to have a piece of that puzzle, a piece of that pie.

(10:56):
But it's a much larger stack when you're looking at a full application or a full system. The third point that I found really interesting, which was towards the end of the video, is that they talked about education, the importance of it and the shift of it with AI. Andrej's point was that you really understand something when you can teach it,

(11:19):
or you realize how little you understand when you can't explain it to someone else. My take on this is that it's exactly my perspective. When I started at Neo4j, I took the path that the best way to learn quickly was to focus on teaching the content, so that's exactly what I did. And I really loved this method of learning. It forced me to learn the technology through deep

(11:40):
understanding and hands-on usage. It also gave me a different perspective when I was learning: that I want to learn this technology so well that I can help other people understand it and use it as well. And so I really thought this was a fun learning experience. I got a lot out of it, and I learned a whole lot more just in my first year working at Neo4j, the graph database company, than I would have if I had come in and

(12:03):
just learned things as I went along. The last thing I wanna point out is that NODES 2025 recordings are being added to the playlist. I will link that playlist again. There were lots of sessions that happened throughout the NODES 2025 conference, which was entirely virtual and 24 hours long back at the beginning of November. Those recordings from those sessions are now being consistently added to

(12:25):
the playlist, so feel free to check those out if you missed them or if you wanna see what all the hype was about. I've been finding it difficult to balance between content consumption and production, because I always feel behind on consumption. I feel like I could spend full time on simply listening and reading news, interviews, videos, trainings, et cetera. But this week, I tried harder to balance by doing some listening,

(12:47):
but also generating new things as well, human generation, not AI. I feel like I should distinguish that. As always, thanks for listening and happy coding.