
June 20, 2025 9 mins

This week, I simplified my Langchain4j project with improved prompt variable injection. Then hear my perspective on the role of tools vs. agents in AI workflows—looking at how structured processes differ from autonomous systems, especially in the context of Java frameworks and GraphRAG.

Get an inside scoop on how I use different AI coding tools: IntelliJ IDEA for in-flow coding, VS Code with agent mode for problem-solving, and ChatGPT for summarizing and refining content.

Lastly, hear highlights from an article on building a local RAG app with Quarkus—clear diagrams and step-by-step breakdown of ingestion vs. retrieval workflows.


Episode Transcript

Available transcripts are automatically generated. Complete accuracy is not guaranteed.
(00:05):
You are listening to the Breaktime Tech Talks podcast, a bite-sized tech
podcast for busy developers, where we'll briefly cover technical topics, new
snippets, and more in short time blocks.
I'm your host, Jennifer Reif, an avid developer and problem solver
with special interest in data, learning, and all things technology.
What I thought would be a quieter week has ended up being very busy.

(00:28):
I had another week of smaller projects, working on content, updating code, and
deciding which direction I want to go for a couple of different applications.
But there are a few tidbits I gathered that I want to pass along.
Here's what I learned.
First up, I was able to update my Langchain4j sample project to reduce
some of the boilerplate I had by changing up the prompt variable injection.

(00:53):
I was building the prompt manually in the book resource class, but I actually
found a way to pass in the variables to the prompt from the book AI service.
This was revolutionary to me.
I was able to reduce some of the code and clean things up in my resource
class, making it much, much easier to pass in these prompt

(01:14):
variables, so that was really nice.
I updated the code.
That is available on GitHub if you wanna check that out.
And of course, I'll be making more updates and tweaks as I go
along where I find efficiencies.
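For anyone curious what that pattern can look like, here's a rough sketch in Langchain4j. The interface name, prompt text, and variables below are hypothetical, not the actual code from my project, but the mechanism is the framework's @UserMessage template filled from @V-annotated parameters:

```java
import dev.langchain4j.service.UserMessage;
import dev.langchain4j.service.V;

// Hypothetical AI service: Langchain4j fills {{title}} and {{genre}}
// in the template from the @V-annotated parameters, so the resource
// class no longer has to build the prompt string by hand.
interface BookAiService {

    @UserMessage("Recommend three books similar to {{title}} in the {{genre}} genre.")
    String recommend(@V("title") String title, @V("genre") String genre);
}
```

The resource class then just calls something like bookAiService.recommend(title, genre), and the prompt assembly happens inside the framework.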
The next thing I will be really excited to explore is tools and agents
with both Spring AI and Langchain4j.
I was talking with some colleagues this week about how complicated

(01:36):
the GraphRAG process seems to be in several of the Java frameworks
I've looked at, and they suggested, well, what about tools or agents?
I was initially thinking that tools and agents carry a higher risk of
uncertainty around whether the LLM will actually use the tool.
For instance, sometimes LLMs decide they don't wanna use a tool, or they

(01:56):
can't find something, or they don't think it's the right tool for the
job, and so they don't pick it up.
But my colleagues reminded me that systems range from workflow processes
all the way to agentic systems with more autonomy, if you will.
So maybe I should frame it as: tools equal more structured, workflow-driven
processes, and agents equal more autonomous workflows or systems.

(02:22):
The knowledge graph, then, I could incorporate as a tool for the LLM to call, and
just set some strict boundaries or rules around using that tool consistently.
I still need to explore how to do this and write the code for it,
but I am really excited to check it out.
I would need to set it up more as a specific workflow, rather
than letting the LLM decide whether to use that tool or not.
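As a sketch of what that might look like in Langchain4j (the class, method, and description here are made up for illustration, not code I've written yet), the graph lookup itself stays a fixed, deterministic query, while the tool description sets firm expectations about when to call it:

```java
import dev.langchain4j.agent.tool.Tool;

// Hypothetical tool wrapper around a knowledge graph.
// The description nudges the LLM to call it consistently, while the
// query logic itself stays deterministic inside the method.
class KnowledgeGraphTool {

    @Tool("Always use this to look up related entities in the knowledge graph before answering book questions")
    String findRelatedEntities(String bookTitle) {
        // placeholder for a fixed Cypher query against the graph
        return "entities related to " + bookTitle;
    }
}
```

The tool would then get registered on the AiServices builder with .tools(new KnowledgeGraphTool()), leaving only the when-to-call decision to the LLM.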

(02:46):
I also had kind of a breakthrough, I guess, in realizing which AI
tools I use for certain things.
There were a couple of times this week that I opened projects and
noticed myself start to open the project in one tool and then realize
I actually wanted the other tool's features, and then switch applications.
And it cut both ways.

(03:07):
There were a couple of times I wanted to open one app to work on a
project, and then realized I needed to switch, and vice versa from the
other app back to the first one.
I am not really using anything fancy.
I'm not paying for big subscriptions or anything like that, but there are a
couple of things that are available to me, and I'll mention those as I go along.
There are also some other higher-end tools that I would love to explore.

(03:30):
I just haven't really had the chance to do that or invested the time there yet,
but I am hoping to do that in the future.
So currently I'm defaulting to a couple of tools.
First is IntelliJ IDEA with the coding assistant enabled.
I think I am on the Ultimate or the Pro tier, whatever it is past the Community edition,

(03:55):
and it will suggest things as I type.
This is really nice.
It'll show you the next method that you might wanna type.
It pops up definitions.
It allows you to jump into source docs and things very easily with
just a couple of keystrokes, command shortcuts and things.
This is really, really nice when I know what I wanna code, but I might

(04:17):
wanna make some tweaks or have some suggestions: maybe I wanna adjust
this a little bit or change the style, or, oh hey, there's a new method now
that I didn't know was there before.
It'll suggest that upcoming.
This is more stream-of-consciousness code, though, where I know what
I want to write, and the tool is just making suggestions as I go along.

(04:39):
The other tool I like to use is VS Code or VS Code Insiders with the agent mode on.
This tool I like to use when I want to ask about solving a specific problem or
I want the LLM to suggest cleaner ways to write a block or a method or something.
It's also nice for some content suggestions, if I'm stream of

(05:02):
consciousness writing something and it'll pop up paragraphs or sentences
as suggestions, and then I decide, no, I don't wanna go that direction,
or, oh, I actually do wanna mention this thing, and so I'll add that
in and tweak it as I go along.
Then the third tool I like is plain old ChatGPT or some other web model
for summarizing or writing brief bits of content with my own edits.

(05:23):
Of course, I do make a lot of adjustments to anything written by the LLM.
I've also noticed that longer-form content, I prefer to write myself.
Maybe you remember, in school, textbooks and teachers saying it's always harder
to make your writing clear and concise.
It's always harder to shorten something.
I find LLMs do a pretty decent job at the first cut of condensing information.

(05:48):
So if I give it something longer, something wordy, something I know
that I want it to condense, it does a pretty decent job, and then I make adjustments.
Often it's to the things that it picks out of that longer-form content
as important, when I actually wanna highlight another bullet point or
another idea or topic more.

(06:09):
And so I'll switch how it prioritizes those points.
This is something I notice myself doing quite a bit.
It summarizes really well, but then I have to adjust what it highlights or focuses on
within that shortened, condensed version.
These are the things I'm currently using: IntelliJ IDEA, VS Code or VS Code
Insiders with agent mode, and then plain old ChatGPT.
All of these tools are pretty nice.

(06:30):
I don't use them for all of my coding, and I don't let them
write huge blocks of things.
And I haven't done a lot of what I guess you'd call vibe coding at this point.
But I am starting to see it creep in.
And again, I'm just noticing myself be drawn toward one tool or the other depending
on what I'm doing, and I hadn't noticed that leaning in on one particular

(06:52):
thing or another until this week.
Some ideas to throw at you if you are looking at exploring some of these things:
if you have access to some of these tools or are able to try them out, these
are the things that I'm using them for.
Just find incremental, small, baby-step approaches to checking
out what they have to offer.
The content piece I want to look at this week is called Crafting a

(07:15):
Local RAG Application with Quarkus.
This builds an entirely local solution.
Now, it does use Infinispan as the vector database and
Granite, which is a local LLM.
I don't have any experience with either of these tools yet.
I don't really know much about them, even.
But it's something that I would be willing to explore, of

(07:37):
course, in the future.
But what I really liked about this article is their diagrams.
The first architecture diagram shows that the data ingestion and the data
retrieval pieces are separate components.
They don't always have to be; there are things like the lazy GraphRAG idea,
where you're doing ingestion and retrieval at the same time.

(07:58):
That is a different approach, but typically ingestion and retrieval
for RAG applications are separate.
I also really loved the second diagram, which shows more detail of the steps
for each of ingestion and retrieval, and clearly divides the ingestion
steps from the retrieval steps.
I feel like there's been a lot of confusion that I've seen from developers

(08:22):
and people that I've talked to on whether ingestion is a pre-step to
RAG or if it happens during RAG.
Typically, the way I've seen it is that data prep occurs up front, so you
have to do the embeddings, the splitting of your documents, and so on before you
even look at building the RAG application.
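To make that split concrete, here's a toy, self-contained Java sketch. It is not the article's Quarkus and Infinispan code: the "embedding" is faked with bag-of-words word counts. But it shows ingestion running once up front while retrieval runs per query:

```java
import java.util.HashMap;
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;

public class RagPhases {

    // Ingestion phase: done up front, before any questions are asked.
    // Each document is "embedded" and stored for later lookup.
    static Map<String, Map<String, Integer>> ingest(List<String> documents) {
        Map<String, Map<String, Integer>> store = new LinkedHashMap<>();
        for (String doc : documents) {
            store.put(doc, embed(doc));
        }
        return store;
    }

    // Retrieval phase: done per query, against the already-built store.
    static String retrieve(Map<String, Map<String, Integer>> store, String query) {
        Map<String, Integer> queryVector = embed(query);
        String best = null;
        int bestScore = -1;
        for (Map.Entry<String, Map<String, Integer>> entry : store.entrySet()) {
            int score = overlap(queryVector, entry.getValue());
            if (score > bestScore) {
                bestScore = score;
                best = entry.getKey();
            }
        }
        return best;
    }

    // Stand-in for a real embedding model: bag-of-words counts.
    static Map<String, Integer> embed(String text) {
        Map<String, Integer> counts = new HashMap<>();
        for (String word : text.toLowerCase().split("\\W+")) {
            counts.merge(word, 1, Integer::sum);
        }
        return counts;
    }

    // Stand-in for vector similarity: count of shared words.
    static int overlap(Map<String, Integer> a, Map<String, Integer> b) {
        int shared = 0;
        for (String word : a.keySet()) {
            if (b.containsKey(word)) shared++;
        }
        return shared;
    }

    public static void main(String[] args) {
        // Ingestion happens once...
        Map<String, Map<String, Integer>> store = ingest(List.of(
                "Quarkus is a Java framework for cloud native applications",
                "Infinispan can serve as a vector database"));
        // ...retrieval happens for each query.
        System.out.println(retrieve(store, "which vector database?"));
    }
}
```

A real app swaps the bag-of-words stand-in for an embedding model and the map for a vector store, but the two phases stay just as separate.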
Then the article, once it gets past these concepts and diagrams, dives

(08:44):
right into implementation with code.
There are some really nice details and explanations of each
step, and the code is linked, plus some other resources at the end.
Now again, this uses Quarkus.
Because I have recently dived into Quarkus and Langchain4j, this
was right up my alley, and all of the code looked very familiar.
It was nice to reiterate some of those things and just look at more code.

(09:08):
I made some progress this week, and I'm really excited about tearing
into the tools and agents for Spring AI and Langchain4j soon.
I updated my Langchain4j sample app with an improvement to reduce boilerplate,
and then chatted about the AI tools I'm currently using and for what tasks.
Next, I highlighted a Quarkus application with RAG that included clear diagrams
on ingestion versus retrieval steps.

(09:30):
We'll see where next week takes me.
Thanks for listening and happy coding.