
November 26, 2025 • 38 mins
Prepare to deploy multiple LLM-powered agents for your next (secret) missions with mini007, meet a new contender among high-performance linting tools with blazing speed that doesn't seem possible (but it is), and pick up a usethis-like manager for projects needing unified branding across deliverables.


Episode Links 

Supplement Resources
 

Supporting the show
 

Music credits powered by OCRemix
 

Episode Transcript

Available transcripts are automatically generated. Complete accuracy is not guaranteed.
(00:02):
Hello, friends. Did you miss us? Oh, we missed you all. We are finally back with another episode of R Weekly Highlights. Yes. This is Brian.
I actually don't know how many weeks it's been, but let's set that aside for a bit. In any event, this is the usually weekly show where we talk about the awesome happenings, the awesome resources, and much more that are all documented in this week's R Weekly issue.

(00:28):
My name is Eric Nantz, and yes, I'm definitely rusty on this. So bear with me here. But thank goodness, I don't have to be rusty alone here. You know, we have the pro here joining at the virtual hip. Mike Thomas. Mike, please keep me on track. It has been a minute since I had this mic in front of me.
It's been a while. It's a short week with Thanksgiving here in The US. We're a little all over the place, but we're gonna get an episode out.

(00:52):
That's right. We're striking now because I know after
today, the kids are off from school for the week, and, yeah, no recordings happening there even though I am
trying desperately to resurrect my tech setup for some future, livestream
of dev work. It's it's this close, Mike. It's this close. I think you're gonna like it when it's out there, but

(01:12):
you know me. I overengineer all the things, and I'm proud of it.
Never. Never.
Never. No. Not at all. Well, speaking of hopefully, not overengineering.
Although, sometimes it does happen from time to time. Let's check who curated this issue. Oh, yes. It's mister over engineer himself. Yes. It was me and my my turn. So how's that for a coincidence? First episode in a few weeks, and that's the issue I ended up talking or curating. But that's probably for the best since I got a a nice preview of what we're gonna talk about here. And we got some great selection here, but as always,

(01:47):
this is a team effort, folks, and we have an awesome team of curators that help each other out. So my tremendous thanks to the team as always for their tremendous help and for the wonderful pull requests from all of you in the community. I think I merged nine of them in this issue. So that's rocking.
Love the open source mindset here.

(02:09):
So let's dive right into it, shall we? And,
yes, it wouldn't be a 2025 R Weekly issue without our first visit into the world of LLMs
and AI.
And, no, we're not gonna bore you with any
LinkedIn clickbait
stuff here.
I have always been bullish, if you will,

(02:29):
on the tooling that we're seeing in the community, especially on the R side
that's building upon some solid foundations
in an open way.
And, of course, we've covered the ellmer package quite a bit in many episodes this year and frankly even early last year as well, and that's really starting to mature.
We have a lot of great extensions building upon it.

(02:52):
But what is interesting about this first highlight here is this extension, first of which,
is not coming from POSIT directly.
And I think it is encompassing an area that I know I've been wanting to learn how best to handle, one that I think a lot of other frameworks
that are somewhat either proprietary
or at least have an opinionated workflow

(03:14):
do expose to you, but maybe not in a very transparent way.
So what we're talking about here is the idea of not just
a single,
quote, unquote, agent
in your LLM
generation
or utilization.
We're talking about multi agent workflows.
And that is coming to us from a very fun name package,

(03:38):
mini007, or mini double o seven.
This is a new R package authored by
Mohamed El Fodil Ihaddaden.
I know I didn't get that right, but he's been a part of the R community for many, many years. I've actually followed a lot of his work in the Shiny space, but it looks like he's turning his attention to LLM,

(04:05):
pipelines as well.
And so this first highlight comes to us from a blog post on the R Consortium blog as well as a link to a recent presentation
from Mohamed at the R+AI conference that just happened about a month or so ago.
And there was actually I think it was a hybrid of a workshop session

(04:26):
where Mohamed walked through the use cases
and kind of getting started with mini007.
So we'll have a link in the show notes to the GitHub repository. I think that's one of the best places to go for this, and it's a very comprehensive readme of just what is possible
with this package.

(04:47):
So first, let's again, you know, frame this as building on top of ellmer,
and you start off with building a chat object via ellmer
using either a frontier model provider or perhaps a self hosted
provider.
That's all the same.
But what mini007 brings to you is a new custom class called Agent.

(05:10):
And this is where you can add
a single agent with its own custom instructions,
its own usage guidelines,
and, of course, tying it with that previous chat object that you create
at the outset.
And this is already
giving you a nice idea here
where with these agents, you could have one, you could have two, you could have however many you like, and they could each be tailored to a different aspect of your workflow.

(05:41):
So,
one of the examples in the readme
is maybe you want a custom agent tailored to translating, say, a foreign language.
You might have another agent that needs to be a little more powerful.
Maybe it's trying to help answer research questions
or at least understand the content of what it's translating.

(06:02):
And perhaps you have even another agent that's trying to do some, you know, maybe new ideas, maybe new data analysis,
or new summarizations.
And so there is an example here, like, I mentioned in this readme of this kind of three pronged approach
to multi agents.
But again, within each of these, you give it a name, you give it a custom instruction via the instruction parameter,

(06:28):
and the chat object, like I said, that you created with ellmer
at the outset.
Now these are still kinda separate. Here's where the interesting stuff really comes into play.
You now have a new way of orchestrating
these together
via the LeadAgent class.
And this is where, again, it builds upon the existing ellmer object for the chat.

(06:53):
But now you can say for this lead agent,
use these other three agents
as the way you delegate responsibilities
or delegate how these different agents answer the questions.
This is where I know I have seen, at least from my limited experience of LLM technologies,

(07:14):
different, you know, platforms
such as, like, Claude or maybe others
that let you do multi agent stuff, maybe even behind the scenes. Maybe they wrap that for you,
but you don't necessarily get a lot of transparency in how that's actually built.
But throughout the mini007
kind of paradigm for running the functions,

(07:36):
they literally have a function for a human in the loop
to help
kinda stagger things and put in some, like, either breakpoints or checkpoints
to make sure that this multi agent workflow is doing what you are expecting.
That's both for kinda, like, you know, a stepwise kind of approach to reviewing what's happening.

(07:58):
And you can also review the plan that it wants to do before you actually start executing things,
which, again, in my industry is something you definitely wanna look at. It's just what exactly is gonna go on under its control
before you actually kick off those, analyses.
So that is really interesting. And then another interesting part

(08:21):
is, again, due to this multi agent workflow,
you could have it actually use different alternative models in some of these agents.
And either you yourself
pick what you think is the best response,
or you could even let the lead agent use these different, say, sub agents
with different sub models, say, from OpenAI

(08:43):
or Anthropic or the like,
and then you can let that lead agent
make the decision for you and still
have it explained to you why it chose the way it did.
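To make the shape of this concrete, here is a rough, R-flavored pseudocode sketch of the workflow just described. Every class, method, and argument name here (Agent, LeadAgent, the instruction parameter, the budget, human-in-the-loop, and plan helpers) is an assumption reconstructed from the discussion, not verified against mini007's actual API, so treat the package readme as the source of truth.

```r
# R-flavored pseudocode -- names are assumptions, check the mini007 readme.
library(ellmer)
library(mini007)

# One ellmer chat object that the agents build on.
chat <- chat_openai(model = "gpt-4o-mini")

# Sub-agents, each with its own narrow instruction.
researcher <- Agent$new(
  name        = "researcher",
  instruction = "Research the question and gather the key facts.",
  llm         = chat
)
summarizer <- Agent$new(
  name        = "summarizer",
  instruction = "Summarize the research into three bullet points.",
  llm         = chat
)
translator <- Agent$new(
  name        = "translator",
  instruction = "Translate the summary into German.",
  llm         = chat
)

# A lead agent that delegates across the sub-agents.
lead <- LeadAgent$new(llm = chat)
lead$register_agents(list(researcher, summarizer, translator))

# Quality-of-life helpers mentioned in the episode (names assumed):
# lead$set_budget(...)     # hard spending cap, with warning thresholds
# lead$set_hitl(...)       # human-in-the-loop checkpoints at chosen steps
# lead$visualize_plan(...) # render the delegation plan as a DAG

lead$invoke(
  "Tell me about the economic situation in Algeria, summarize it in
   three bullet points, then translate the summary into German."
)
```

The point is less the exact names and more the structure: separate agents with narrow instructions, one orchestrator on top, and explicit review points before anything runs.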
So I think there's a lot of potential here. I could see a lot of interesting use cases being built upon this
as you think about either in the automation space or maybe a more sophisticated data pipeline

(09:09):
where you've got different, you know, like what the tidyverse suite of packages has popularized
or the R for Data Science workflow.
Gotta import your data, gotta maybe do some transformations,
summarizations,
and then reporting.
Maybe this is a way to give you even more custom
flexibility
in those kind of pipelines

(09:30):
and then feed that into some other ETL process or the like. This is just me literally spitballing here.
But I think there's a lot of potential here, and the fact that it's building upon ellmer gives me a lot of confidence, or
at least a robust approach to start from.
As someone new to all this still, I wanna try it with maybe some basic projects,

(09:54):
but it definitely got me thinking that this might be the ticket to some more sophisticated
workflows in this space. But curious, Mike, I don't know if you have experience in multi agent workflows, and what do you think about mini007 here?
I don't have a lot of experience with multi agent workflows, to be honest. I've barely scratched the surface with agents and tools

(10:16):
and all of that. But I do know Hadley Wickham gave a great presentation at posit::conf that was for audiences like me that had no idea what that terminology was,
and
boiled it down to very simplistic terms, terms that resonated with kind of my approach to R. But I like the R6 approach to this mini007 package because it feels like we're creating sort of building blocks that we're putting together and orchestrating,

(10:43):
in a way that I think lines up with how
the best practices
currently are with these agentic workflows. They're kind of little building blocks
that we
put together and stitch together in particular ways to try to accomplish the task at hand. So I think it's a very nice parallel in terms of the design,

(11:06):
of the API for this mini007 package, with a lot of really nice little features
as well.
This set_budget
method allows you to actually set a particular budget, in terms of dollars at least as shown in the example here,
that essentially makes it impossible for you to go over that budget,

(11:29):
with your LLM, you know, frontier provider of choice, as well as, you know, provide you with some warnings as you
approach that budget at thresholds that you set yourself.
So very handy things I would imagine, especially when you're orchestrating these multi agent workflows that are are calling these LLM APIs,
quite often. You know, maybe more so than what you would just,

(11:53):
typically be used to as you interact with sort of the web based, you know, ChatGPTs or Claudes of the world as well.
That idea, as you mentioned, Eric, that you can have multiple agents.
And the example that Mohamed gives here is have a researcher agent, a summarizer agent, and a translator agent all sort of working together to answer this one example question that says, you know, tell me about the economic situation in Algeria, summarize it into three bullet points. That's the summarizer after the researcher,

(12:25):
then translate it into German, and that's the translator there at the end. So sort of three different agents all,
executing
on this particular task. You know, there's this downstream sort of domino effect workflow.
And there's this really nice function in here called visualize_plan
that actually
shows this, sort of relationship as a a DAG,

(12:49):
in a visual pane in your editor, which is really cool. I wonder sort of what the underlying package is that's being used to create this
sort of workflow diagram. I imagine maybe visNetwork or something like that, but I haven't dug into the internals of the package enough. But, again, just a really nice example sort of along with that set_budget method, this visualize_plan function

(13:14):
is a really nice to have,
you know, utility that this
mini007 package provides, as well as that human in the loop function, set_hitl, that allows you to decide which steps
in the particular workflow
you want to interject
and have that human be sort of a decision maker in the process as well. So I really appreciate the design here. I really like the way that Mohamed was able to articulate

(13:44):
this package
in the YouTube video, which I'd highly recommend you watch. There are also accompanying
PDF slides
as well, and it's a
really interesting, sort of lightweight but incredibly useful
framework that Mohamed has provided for us R users to work with these agentic workflows.
Yeah. And I was combing through the repo. It actually has very minimal dependencies

(14:07):
beyond ellmer itself, which you would expect. This is not a heavy footprint,
which, for something that seems as, you know, ambitious to my eyes as this, you would think it's gonna import all sorts of stuff. And, no, there are maybe, like, six dependencies here. That's a very lightweight
footprint, and I think there's a lot of potential. It's a very clean code base,

(14:28):
and I'm definitely gonna be watching that presentation
and seeing those slides because
I'm still starting small in my typical LLM usage,
but there are a lot of things that we're hearing, especially in the life sciences
side. I hear presentations
about different, you know, statisticians at different companies

(14:50):
building multi agent setups for automation, but we don't really know how they did it yet. This may be a case where I can at least prototype what they might be doing and translate that to what we're trying to do at our company and whatnot. So, yep, this is
going into the December kinda hacking list for me to explore cool stuff when I get some time off.

(15:26):
Well, as many know, Mike, we often leverage what we talked about at the outset, large language models, to help make things faster for us, to get to that answer more quickly and whatnot.
Well, there's been a lot of great traction in terms of fast
linting
of your code, cleaning up the things
I often get wrong. Maybe a missing brace,

(15:48):
maybe lines that I should have had a carriage return somewhere, or maybe, you know, parameters that I misspecified.
We often look for ways
that just speed up our correction of that instead of just poking through that manually and hoping for the best.
I admit since about six months ago or so,
I've adopted the utility called Air by Posit

(16:11):
into my Positron experience.
Oh, no. That's been so helpful to me. Now that has been great for just kind of the basic formatting, although I haven't really used it to its full potential yet.
But what if I told you, Mike, there is a new linting framework out there that
when you point it to the entire

(16:34):
source code of R itself, the R files of R itself,
it takes just seven hundred milliseconds
to lint the entire code base. Would you believe me?
No. I wouldn't believe you. That's what I thought.
But nonetheless,
it is real, and it's fantastic.
And what we're talking about here

(16:56):
is the jarl
utility.
This has been authored by Etienne Bacher. Hopefully, I'm saying that right.
And boy, oh boy. This came out of nowhere. But, apparently, this has been in the works for a while because this is actually
one of the projects that was helped along by the R Consortium grant program.
Now just what is jarl exactly? Well, I kind of said it at the outset. It is, quote unquote, just another R linter. But boy oh boy, I think that is selling itself short

(17:26):
because
one of the biggest reasons it has a huge speed, you know, promise, and actually proves that speed,
is that this is written in Rust.
Rust is becoming a lot more common now in the R community, especially in light of,
I think I saw a great presentation
at posit::conf about the Rust ecosystem

(17:47):
and R. We've seen a few utilities spin up, and this is just another example
of piggybacking
off of the promise and the enhancements of Rust
to make something blazing fast and with good quality.
So
this is not meant to be a strict replacement for what you may be familiar with, and that is the lintr package, which many of us have used for years and years within our various workflows, whether in the RStudio IDE

(18:17):
or VS Code or Positron.
lintr has been around for a while.
I can say that it's probably getting a little bit long in the tooth, so it might not have the best performance for larger code bases,
but jarl is definitely
helping in a lot of those use cases.
On the blog post, as always,

(18:41):
there is actually a screencast showing that pointing it at the R code base is not just a frivolous claim. It is real. You can play it, and it's only about sixteen seconds of watching the console.
It is
just as fast.
And the example that's in the blog post here,
we see the formatting or linting, if you will,

(19:03):
of an apply function or a function that uses apply under the hood.
And this is where it's not just looking at the syntax,
it's looking at,
is there a better way to do it? And sure enough,
it finds that there is a better way to do it.
So if you wanna do your own version of a mean for rows,

(19:25):
instead of just doing the apply function verbatim,
you can use the rowMeans function. And jarl is intelligent enough to find these kinds of areas
where maybe there's a base R function that can help you out and maybe you just didn't know about it.
Trust me. That happens to me quite a bit. Every time I read blog posts, I often see,

(19:45):
like, the nzchar function in base R. And I'm like, what is that? And then it dawned on me, oh, that's non zero characters. Silly me. So I think jarl is gonna be great at picking up things like this.
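Since row-wise means come up a lot, here is a tiny base R illustration of the exact rewrite being described. Nothing here is specific to jarl; it just shows that the apply() version and the dedicated base functions agree:

```r
# The pattern a linter would flag: computing row means via apply().
m <- matrix(1:12, nrow = 3)

slow <- apply(m, 1, mean)  # works, but loops over rows in R
fast <- rowMeans(m)        # dedicated, vectorized base R replacement
stopifnot(all.equal(slow, fast))

# Similarly, nzchar() is the idiomatic "non-zero characters" test:
nzchar(c("", "a", "bc"))
#> [1] FALSE  TRUE  TRUE
```

The vectorized forms are both faster and clearer, which is exactly the kind of suggestion a lint rule can automate.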
It is a single binary.
So this is one of those things where you can install it on multiple platforms.
I'm curious to see how well it works on Linux. Hopefully, it does; Rust things usually work pretty well there. And then you can just use the custom extensions that are also available to

(20:15):
utilize this in your IDE of choice, such as VS Code or Positron. So definitely look at the extension store for ways that you can plug in jarl for those
workflows.
It is acknowledged that this is a very early effort.
However,
this is a really, really promising effort.
So I definitely wanna try this in my dev environment

(20:39):
to see it not just picking up some of these base R misses I might have. I'm also gonna see what it can do with Shiny code too. Maybe it can help me lint some of that as well and make things a bit more performant.
So, again, credit
where credit's due here. I love
seeing this tooling come out, and Etienne has really done an excellent job here with jarl, and I'll be keeping an eye on this quite a bit. When I feel the need for speed, I'm going to jarl for my linting needs.

(21:09):
Yes. I came up with that on the fly. You're welcome.
That was beautiful. To me, it feels like Air just came out, and I can't believe that I am still trying to pick up Air, and now we have jarl, which is sort of built on top of that. It sounds like it is potentially even better. And the fact that it can identify
these inefficiencies

(21:31):
in our code. Right? It's not even incorrect code
in the examples that Etienne shows us.
It's it's inefficiencies
in our code, and actually fix those with the --fix flag
in your jarl command in the terminal
is mind blowing, and it opens up the the door for just tremendous amount of possibilities,

(21:53):
I imagine, to hopefully take that
that junk, inefficient code that your LLM
spat back to you, and clean it up, because that is, I feel like, what most of my days are these days.
I don't know if that will ever not be necessary. Amen on that one too, my friend. Oh, goodness. Yes.
Yes. Too many for loops. Please

(22:15):
Please,
train on the purrr documentation
for the love of God.
Hot take incoming. I love it. Not a lot of tidy code, unfortunately, from what I've seen. Yes. But hopefully, it's getting better. Opus 4.5
just dropped. I have to check it out, and Gemini 3 too.

(22:36):
But, this is, you know, a fantastic
utility. The fact that, you know, it's just built in Rust and the blazing speed. It's almost like my initial experiences
with DuckDB where I would hit enter,
and it would happen so fast that I would think something was wrong,
that I think I screwed something up. So I think we're gonna see a similar experience here with this Jarl utility,

(23:00):
especially when you're leveraging it on code bases that are smaller than the entire R source code, which I would imagine is, most of the code bases that you work with on a day to day
basis. And, you know, one of the things I struggle with with the Air
formatter,
that lintr
has sort of traditionally done a better job of, is

(23:21):
incorporating
some particular custom, you know, or overrides, if you will,
of the default linting behavior.
There's some situations where I like
a little bit more
white space, a little bit more carriage returns in my code
from line to line, that I think, you know, we don't necessarily have control over with Air,

(23:45):
as opposed to, I believe, with the lintr package in R, where we've had the ability to override some of those defaults and set up some custom formatting or linting options.
And
it sounds like, maybe from this blog post, that that is on the roadmap for jarl.
And I'm really excited to bring
my very opinionated
linting decisions and code styling and formatting decisions,

(24:09):
into
the fray here as I continue to to leverage these fantastic new,
code linting utilities that are coming out for us.
I dare say we know which opinionated thing you will put in there. Get those for loops out of here, baby.
I can definitely concur with that side of it. And like I said, I'll be interested to see just out of the box

(24:32):
what it does to Shiny code. Because there are sometimes things in Shiny code that I don't quite agree with in terms
of spacing, and maybe the way I specify parameters,
that I'm like,
you know... yeah. It's hard to customize that. So I'm gonna give it a shot on one of my dev projects. Hopefully, I'll be able to install it in my custom Nix configs and give it a whirl. But Positron works just as great. As he mentions in the post, there are extensions

(25:03):
for Positron,
VS Code, and the Zed editor, which I know is getting a lot more traction in the DevOps space as well. So
maybe, hopefully, RStudio support soon. I don't know. But either way, I'm gonna give it a shot.

(25:26):
And rounding out our highlights today,
we've
been looking at a lot of different things this year in R Weekly,
and one of them has been a way that especially those of us working in an organization,
no matter big or small,
we often are given either rules or at least try to adhere to best practices

(25:46):
with respect to how things are styled, so to speak.
Maybe you have a company wide branding set of guidelines for your presentations,
reports,
maybe even web applications.
And one of the things that has come out in in earlier part of this year,
again, from Posit, has been the brand.yml

(26:08):
utility
to help kinda give you that customization,
especially for web based, you know, deliverables
in one place. A single YAML that you could apply
in many different situations, such as Shiny apps, Quarto reports, R Markdown reports, and the like.
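For anyone who hasn't seen one yet, a _brand.yml is just a small config file. The top-level fields below follow the published brand.yml schema, but every value here is invented purely for illustration:

```yaml
# _brand.yml -- illustrative values only
meta:
  name: Example Corp
  link: https://example.com

color:
  palette:
    blue: "#1B6CA8"
    gray: "#404041"
  foreground: gray
  background: "#FFFFFF"
  primary: blue

typography:
  fonts:
    - family: Open Sans
      source: google
  base: Open Sans
  headings:
    family: Open Sans
    weight: 600

logo:
  small: logos/icon.png
  medium: logos/logo.png
```

Shiny (via bslib), Quarto, and R Markdown outputs can all pick up this same file, which is exactly the single-source-of-truth idea being discussed here.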
Well, in this last highlight, we're gonna have, at our disposal,

(26:30):
kind of a usethis-like wrapper on top of brand.yml in the early stages here. And this is a new R package
from this year
called rbranding.
This is authored by Willie Ray and Andrew
Pulsipher.
And I believe I may have seen a presentation about this. I don't know if that was at ShinyConf. It does sound vaguely familiar.

(26:56):
But what
rbranding does
is it basically gives you some nice quality of life wrappers
to help interact with the management
and getting custom templates going
built upon brand.yml.
So what it lets you do is that it lets you initialize a new branding project,

(27:19):
and then you can choose
where it's stored remotely.
It supports, at this time, storing it in a public GitHub repo
or, if you're in an organization, a private GitHub repo of sorts.
And it lets you easily manage
grabbing
the config information from that brand.yml
stored remotely

(27:41):
into your local project. So you still have kind of a single source of truth
for where you store your branding project.
And then along the lines of this, as I mentioned,
rbranding will let you
apply all these techniques, all these customizations
across what brand.yml

(28:01):
supports.
That would be, of course,
Quarto reports,
R Markdown reports,
Shiny applications.
And one thing that really caught my eye with this package,
even themed plots in the ggplot2 ecosystem,
and on the kind of intersection of Shiny
with bslib and thematic.

(28:23):
So you can have that unified theme approach across not just your apps UI,
but also the plots that you're producing as well.
That's always a plus 100 to me, because a lot of times I get into kinda dicey situations where I've got this great theme, and I realize, oh, jeez, my plot was too custom to figure that out. So I'm interested

(28:45):
to see if rbranding is gonna help on that side of it too.
And again, one of the other things on the package site, which I'll link to in the show notes,
is it's got a boatload of templates
for you to see just what a finished product might look like,
And you could build upon that or extend it across these different paradigms

(29:06):
for a simple Quarto site, they got you covered. There's a template for Quarto,
even a Quarto report that implements ggplot2. Again, get that unified theming.
You can get it right there.
And there are actually three different shiny templates. This is really intriguing to me,
such as the very basic histogram version
or that k-means app that we'd often see in the Shiny gallery

(29:30):
and a more complex app as well,
which, again, has a dashboard kind of layout, but you could easily take this and run with it. And, lastly, an even more complex app with multiple tabs. Looks like some bslib action going on there, with a wastewater
data visualization.
So this is a great way in. If you've heard about brand.yml, you're kinda thinking,

(29:53):
well,
it sounds good, but how do I really manage it in practice across multiple projects?
I think rbranding could be a very unique
utility
to put in your toolbox for your branding adventures.
And it looks like according to the package site here,
both Willie and Andrew look to be supporting

(30:14):
the Centers for Disease Control and Prevention's Center for Forecasting
and Outbreak Analytics. So that tells me they've been using this quite extensively in production,
which is always a great thing to see.
So great. Great. And last but certainly not least,
on the package site, they have a great tutorial
on accessibility, which is something we've been, you know, very big champions of here on R Weekly across the different venues of UI toolkits.

(30:43):
I think they've got some great checklists
about what you should do with a dashboard design,
avoiding common pitfalls,
about lots of things that we may see talked about in spurious
or I should say various presentations
and documents.
But they're kind of putting this in one place
with links to how you can test your various website

(31:04):
or Shiny app for accessibility compliance.
Really great stuff, and there are cheat sheets about this too. So the documentation of rbranding is top notch. And like I said, I'm gonna be putting this in my toolbox
for my branding adventures.
Likewise, Eric. And I think this package really helps solve
a problem that we've had, maybe you've had, and a lot of folks have had in terms of sort of managing or passing around

(31:28):
that _brand.yml file from project to project, because you wanna consistently
utilize it so that all of your projects have the same sort of thematic
elements and and look similar and match your your corporate,
color palettes and things like that.
But it's you also want a place where that can potentially be updated over time and we have a good version history of that. And there's a few different options, you know, that we've used in the past that haven't been great. You know, things like, GitHub template repositories

(32:00):
for that,
as well as, what was the other one that I was thinking of? Developing, like, an internal R package
that essentially has a function in there that creates your brand.yml file.
But this is a much more straightforward way to do that. So you can manage that brand.yml file in its own GitHub repository, potentially,

(32:21):
and then download that and leverage it in the project that you're working on at hand. So,
excellent, excellent work done. It's nice to see kind of the intersection of
an organization like CDC being able to put out something, in the public for us that we can leverage. So that is awesome, contributing to open source.

(32:42):
And the
pkgdown site is fantastic as well. As you mentioned, there's a whole article on accessibility
guidelines, sort of beyond the templates as well that they provide and some of the additional articles that they have. But I really enjoyed going through their general design principles for accessibility, which are kind of these five principles on making content perceivable,

(33:07):
making your interactions operable,
which I am probably guilty of overusing,
some interactions and and collapsible elements and things like that that probably reduce accessibility. So this was a a good way for me to sort of stop and think,
about some of the projects that we've worked on and the Shiny apps we've done recently and some potential redesign,

(33:27):
choices that we could make. The last three are make the structure understandable,
make the data accessible,
and design for consistent
behavior,
which kind of touches on modularity, code modularity, and things like that. So I really enjoyed
reading through that article.
Specifically,
it it talks about dashboard

(33:47):
design, good practices,
bad practices,
a lot of pitfalls around text and alt text and things like that.
So great
great new package to have out there for those of us who are trying to
maybe create Shiny apps from project to project that look similar and don't look completely different, like I may have been guilty of in the past.

(34:11):
I can I can relate to that, but also sometimes you may be dealing with, like, in your case, a client that does have very opinionated guidelines
and maybe you hopefully get more than one project with them and then you can say, oh, guess what? We can use the same assets
from this app to this other report
and
even in my company yeah. We're one company. Right? But even different teams have different needs for various things.

(34:35):
Maybe a discovery group doesn't care as much about certain things as a communications group, where yours truly literally had to make a Shiny app that was a glorified PowerPoint
maker with a custom theme.
Yeah. That wasn't pleasant. If they can just get with the times and do web-based reports, then I can use our branding to do that and probably solve, like, 90% of that pain that I had to deal with. But you're not here to listen to me rant about PowerPoint. You've heard enough about that over the years of this show. That's what happens when we've been off the mic

(35:07):
for a few weeks. Yep. It's pent up, Mike. It just gotta get out somewhere. But luckily, I think I've got out some pretty good stuff, I would say, in these summaries. And speaking of good stuff, there's a lot of good stuff in this issue. It was actually pretty tough to determine what would be the 10
candidate
stories for the highlights. There was a heck of a lot to choose from. So it's a lot of fun to curate this. Lots of

(35:32):
great packages that are being thrown out there. I could talk all day about these, but we do have our
respective day jobs to get back to. So we'll kind of wrap things up here.
But as you can probably guess, the end of the year has been pretty crazy for us. We hope to at least get one or two more episodes in before the end of the year. We'll see what happens on

(35:53):
a scheduling side and hopefully come back even refreshed for the next year as well.
But what can refresh us? Well, hearing from all of you in the community is always one of the biggest ways that I get my jam, so to speak. And there are lots of ways you can do that. You can, in your favorite podcast player, just hit that little contact page in the show notes, and you can send us a message

(36:15):
right then and there. If you use one of those modern podcast apps, like I've been using and many others in the Linux community have been using, you can send us a fun little boost there too if you'd like to send a quick message.
But, also, we are available on social media as well.
You can find me on Bluesky at @rpodcast.bsky.social.

(36:36):
You can find me on Mastodon where I'm @rpodcast@podcastindex.social,
and I'm on LinkedIn. When I'm not causing too much trouble, you can search my name and you'll find me there.
And, Mike, where can the listeners find you?
You can find me on Bluesky at @mike-thomas.bsky.social, or you can find me on LinkedIn if you search Ketchbrook Analytics,

(36:58):
k-e-t-c-h-b-r-o-o-k. You can see what I'm up to.
Awesome stuff. I know
you and I both had our recent
posit::conf
talks become available on YouTube as well. So
if you weren't there in person to see Mike's fantastic talk, go on YouTube and watch it. It is required viewing, especially if you're working at the interoperability

(37:22):
intersection of R and Python. You framed it in a way that I could never do enough justice for. So required viewing, my friend. Well, for those in the Shiny space
looking to
improve their caching workflows,
Eric's talk is certainly
required viewing in my view. I'm getting the itch to get back to shinystate development, buddy. I've got some ideas. Some people are clamoring for that

(37:47):
investigation into using golem or rhino for it. So I got some homework to do, and hopefully I'll be able to stream some of that as well.
But in any event, this is audio, so we're gonna close up shop here. Thank you so much for joining us for episode 214
of R Weekly Highlights.
And hopefully,
within a week, or sooner, or later, we don't know yet, we'll be back with another edition of R Weekly Highlights very soon. Thanks a lot, everybody.

(38:23):
Sandy for just another r winter,
authored by
I forgot your name.
Authored by
I don't believe you. We'll have to edit that one. Did I time that right?
A little bit.
No, sirree bob. Nonetheless. Okay.

(38:44):
You you corpse me there. Alright.