
November 6, 2024 • 43 mins
Eric's flying solo this week, but the show goes on! The eagerly anticipated recordings of the 2024 Posit conference are now available, and Eric shares a few of his favorite gems. Plus, the Quarto publishing system takes center stage with how GitHub Actions brings automation to report generation, and a terrific batch of answers to questions from the recent R/Pharma workshop on building parameterized Quarto reports in R.



Episode Transcript

Available transcripts are automatically generated. Complete accuracy is not guaranteed.
(00:03):
Hello, friends. We are back with episode 184 of the R Weekly Highlights podcast.
This is the usually weekly podcast
where we talk about the great resources that are shared in the highlights section and elsewhere
on this week's R Weekly issue.
My name is Eric Nantz, and, yes, we were off last week because yours truly was

(00:23):
knee deep helping with a lot behind the scenes, as well as in front of the camera, so to speak, with the virtual R/Pharma conference, which was, in my humble estimation,
a smashing success.
I think we had over 2,500 registrants at one point tuning in during the conference days, as well as some really awesome workshops.

(00:45):
And you'll be hearing more about those workshops in a little bit in our last highlight, but nonetheless,
that's why we were off last week. But I am here this week. Unfortunately, I'm by myself, so to speak, because
my truly awesome cohost, Mike Thomas, is knee deep in his day job projects, trying to get things done, and he looks forward to being back next week with all of you. Nonetheless, I'll hold the fort down for this week, and I've got some fun things to talk about and share with our highlights here. This week's issue has been curated

(01:18):
by Batool Almarzouq. And as always, she had tremendous help from our fellow R Weekly team members
and contributors like all of you around the world, with your pull requests and other suggestions.
Speaking of conferences,
I'm happy to announce in our first highlight here that, after a couple of months in the editing can, so to speak,

(01:38):
the recordings from the 2024
Posit Conference
have finally landed on Posit's YouTube channel.
We knew these were coming soon, but we weren't quite sure when.
But this is a terrific way to catch up on what you may have missed if you were not able to attend
the conference itself back in August.

(01:59):
And if you wanna hear more of my take on the event itself, especially from the in person experience,
I invite you to check out our back catalog here where I talked at length about this
in episode 174
of R Weekly Highlights. I'll have that linked in the show notes if you want to check that out. But, nonetheless,
another great benefit of the recordings, even for someone like me who was able to attend,

(02:22):
is that it is a multitrack conference. Right? And you can't possibly see all the talks that you want because inevitably there will be some that overlap.
And that was especially the case
when I gave my talk on the power of WebAssembly
for the future of Shiny and clinical submissions, which, by the way, I will have directly linked in the show notes. It's great to see the recording up, albeit I haven't watched it yet because

(02:50):
even though I have a podcast and I've been doing podcasting for over, what, 10 years or whatever,
it's hard to listen to or watch myself on video, but I will at some point.
Nonetheless,
when I was giving that presentation
across the hallway, I believe in one of the other rooms,
was Posit's kind of team presentation

(03:12):
on the new Positron
IDE. So that's a great one I'm gonna be catching up with, to see kind of the genesis of that, the comparisons
with the RStudio IDE. And it was kind of the coming-out party, if you will. Even though Positron is still not, you know, quote unquote, in production yet,
it was Posit's first crack at really sharing their story behind the development of Positron, and I'll be watching that

(03:37):
space quite closely.
And there are a lot of, and I do mean a lot of, other terrific talks. And I dare say there's something for everyone,
whether you're knee deep in industry, health care.
Certainly, shout out to my friends in life sciences. We had a lot of great talks on the life sciences tracks.
Also, utilizing
automated reporting

(03:58):
and things in Quarto, which you'll be hearing about
throughout this segment and the rest of the episode.
And, yeah, for the shiny enthusiasts, there's a lot here too.
I was watching a little bit this weekend when I saw the recordings were up.
I rewatched a talk from
the designer, so to speak, of the Shiny UX side of things, Greg Swinehart.

(04:21):
He gave a terrific talk on the recent advancements in the user interface components of Shiny, both for the R side
and the Python side, and lots of great resources that he shared.
And he always has a unique style to his talks. I definitely resonate with it.
And you would never know the importance of a hex logo until you really watch his talk and how it kicked off a lot of their design efforts for the Shiny user interface functions, especially with bslib.

(04:49):
My goodness. Have you seen the hex sticker for bslib?
That thing is ace, and I can't wait to get a hard copy of that.
Nonetheless, another great talk, going back to the Quarto side of things,
that I wasn't able to see in person
was
by Andrew Bray, where he talked about
the new closeread Quarto extension,

(05:12):
how the development journey of that began, and it was a close collaboration
with another developer named Jeremy.
And, also, it really brings to light
a way that you can get that scrollytelling
look that you might see on, say,
you know, some
data readout or data posts from, like, the New York Times

(05:34):
or other groups that kinda use that technique.
Well, if you're writing a Quarto doc, this closeread extension
is definitely something worth your time to check out.
And speaking of closeread, that definitely relates to a little ad hoc
addition here.
I just learned a few days ago that Posit is now running

(05:55):
a closeread
contest
to see what you can build, whether an engaging data storytelling piece
or an engaging tutorial, but really seeing what you can do
to push closeread to new heights. So I have a link to that blog post from Posit in the show notes as well. I dare say I've had attempts at doing a scrollytelling kind of presentation before.

(06:20):
Ironically, at a much earlier edition of what was then the RStudio conference,
when I had a poster session.
And, of course, a poster session these days means an electronic display that you're standing next to, kind of walking through with people as they walk by and answering questions. But my poster session back then

(06:42):
was about kind of my take on the Shiny community and the awesome advancements and my hope for a future Shinyverse.
I recall using a package from John Coene, a good friend of mine,
that kind of gave a somewhat scrollytelling look to a web-based presentation.
Albeit there were some quirks, and no fault of his own; it was just the HTML

(07:06):
package he was wrapping under the hood.
But I do think closeread is something I really wanna pay attention to, both for some day job tutorials or readouts,
but also for some fun narratives too. So I might have to throw my hat in that contest. I don't know. I'm just saying.
And there are lots and lots more resources in the recordings that you'll see

(07:29):
on the blog post from Posit.
Also, speaking of resources:
at every posit::conf, you also have terrific workshops.
And, unfortunately, they're not recorded. However, every workshop has made its materials available online,
and I have been referring back to the workshop that I attended in person,

(07:50):
databases with R and DuckDB by Kirill Müller. It was a fantastic
workshop,
and I'm all in on the DuckDB craze right now. Which, again, speaking of the recordings,
you can hear the keynote from the author of DuckDB
in the recordings of the posit::conf talks as well. Another great
development story of DuckDB and where we think the benefits are, and I dare say that we all can benefit from it, in my usage of it thus far.

(08:20):
So, again, lots more to check out. Obviously, I can't do everything justice in this segment here, but I'm really intrigued by catching up on what I wasn't able to attend in person
and lots of great ideas that I think you'll generate
in your daily work or your fun adventures with R and data science.

(08:51):
We're gonna switch gears here to a more automation-focused
story, if you will.
It's a mechanism that I think many, many in both the open source community, as well as in their particular industry day jobs,
are leveraging,
especially in the piece of automation,
and making sure that we can, you know, release the most robust code

(09:14):
possible, say, for developing packages
or if you wanna take away the manual steps of compiling things
ourselves when we can let the magical machines in the cloud do it for us. And that is exposed
via GitHub Actions.
And if you are new to GitHub Actions and you just want a quick take on what it's actually doing

(09:36):
and getting a really, you know, fit-for-purpose tutorial
that you can use today to kinda get your feet wet a little bit and give yourself the appetite to dive in further,
then this next highlight here is just for you. And it's authored by friend of the show Albert Rapp. He is back again with his 3-Minute Wednesday segment

(09:56):
where he talks about getting up to speed
with GitHub Actions
for compiling
a Quarto document.
And this is not gonna get so much into the theory behind GitHub Actions. You're not expected to understand, not that you even have to, kinda what is happening behind the scenes of Actions; this is about how you would set up a report

(10:21):
that you could automatically
regenerate
whenever you have a change in the repository
where this report is hosted,
and to be able to automate this more in the future. So
the post starts off with creating a new project, in this case in the RStudio IDE,
and he is careful to enable two options that you'll need for this: because this is relying on GitHub Actions after all, you're gonna need a Git repository locally for it, and he's also checking renv as well.

(10:53):
renv is, of course, a
package in the R community
authored by Kevin Ushey at Posit,
to help you manage the dependencies
for your project via its R packages, but in a self-contained way.
Usually, renv works really well. I will admit, though,
if I was recording this yesterday, I may not have been the biggest fan of it, because I had an almost knock-down, drag-out fight trying to get my app in production

(11:21):
with some dependency, h e double l, that I had to deal with.
Somewhat self-inflicted, but, nonetheless, sometimes renv can be slightly cryptic with its messaging.
Anyway,
things happen just like anything in life.
Nonetheless, that's gonna be important for the rest of this tutorial when we get the GitHub action actually created.

(11:41):
So the report itself that he's demonstrating here is nothing radical. It's simply an HTML-based report
that's gonna say that this report was rendered at a given time,
with that time printed
and executed via a code chunk in Quarto,
which would be very familiar to anybody that's used Quarto or R Markdown before, just a typical code chunk.
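As a sketch of what such a document might look like (a minimal example of my own, not Albert's exact file; the title is made up):

````markdown
---
title: "Automated Report"
format: html
---

This report was rendered at:

```{r}
Sys.time()
```
````

The R chunk runs at render time, so the printed timestamp changes with every render.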

(12:06):
With that,
you may notice that if you have initialized renv, then when you hit that render report button, it's gonna complain that there are some packages missing. So that's where you do need to install
in your renv library
the R packages
needed for its execution, which are, of course, knitr and rmarkdown. So once you do that with renv and you try to render it again, then your local version

(12:33):
of the report will compile correctly. And then you can see, depending on when you ran it, that current date-time printed right inside.
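On the renv side, that install step might look roughly like this in the R console (a sketch; renv's exact prompts and defaults can vary):

```r
# One-time setup: have renv manage a project-local library (creates renv.lock)
renv::init()

# Install the packages the Quarto document needs for rendering
renv::install(c("knitr", "rmarkdown"))

# Record those versions in renv.lock so the GitHub Action can restore them later
renv::snapshot()
```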
Great. You got yourself a report.
Now let's imagine instead of just using the typical HTML format
for the report output,
you would like to render this as an actual

(12:54):
document in markdown format, going from Quarto markdown to markdown,
but in a way that GitHub especially
can render that in a nice way.
That's using what's called GitHub flavored markdown.
And Quarto itself
is a command line utility as well as integrated with various IDEs.

(13:15):
So
Albert switches and pivots to a new way of rendering the Quarto doc instead of through the click of a button.
He now shows you how to use the quarto render command, and then there's a parameter,
dash dash to,
where you put in GFM, for GitHub-flavored markdown.
And then he's changing the name of the output file

(13:37):
to README.md.
So now you've got a file that can be rendered
with a special syntax
and special file name, such that when you go to the GitHub repository for the project, that README is gonna be what's displayed automatically under the code
file listing there.
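The command line call he describes is along these lines (report.qmd is a placeholder file name of mine):

```
quarto render report.qmd --to gfm --output README.md
```

The --to flag picks the GitHub-flavored markdown format, and --output renames the result so GitHub displays it on the repo front page.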
Great. Now we got that working. You can push out on GitHub, and you could just simply

(14:00):
run that report occasionally,
manually compile it, manually push it up.
But, no, that's not why we're here. We're gonna learn about GitHub actions. How does this work? So
this is interesting because
there is a package, of course, called usethis that will let you
define a GitHub Action right away based on templates

(14:25):
of workflows that are pretty typical for an R developer, whether it's package development
or R Markdown compilation or whatnot.
Albert is gonna show you how to build this from the ground up. And I do think this is important
because there are times when you're new to a framework, which is an abstraction.
In fact, GitHub Actions

(14:48):
are really an abstraction on top of building a container
with various bells and whistles to do something.
That's really what GitHub
Actions are under the hood,
and the way you interface with it is by constructing a special YAML file that's gonna define your workflow.
So Albert leads us through what we need in our local repository to make this happen.

(15:13):
That is, you need a dot github folder, and then you need a subdirectory in that called workflows; this is specific to GitHub here.
And then once you have that, you're gonna create a YAML file inside that workflows folder.
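As a quick sketch, the folder scaffolding is just this (render-report.yml is a file name of my own choosing; only the .github/workflows path is fixed by GitHub):

```shell
# Create the folder GitHub Actions looks for (this path is fixed by GitHub)
mkdir -p .github/workflows

# The YAML file inside can be named anything you like
touch .github/workflows/render-report.yml
ls .github/workflows
```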
You can name this anything you want, but
in the end, it's got to be a YAML file, and it's gonna have specific syntax that he's gonna walk you through

(15:35):
in the rest of this post. Now it'd be pretty boring for me to read all the bits verbatim here, but I'm gonna highlight
at a high level the really important parts to make sure that you're setting yourself up for success
the right way.
First of which
is how you define when the action is going to be run.

(15:56):
In this case, it's gonna be run on every push to the main or master branch of the repository,
depending on what you call that, quote unquote, main branch.
That's in a certain directive at the beginning, under the name declarative where you name it; there's this on declarative where you define, okay, on what operations will this run?

(16:18):
In this case, the push operation.
From there, below that, you have the jobs declaration, and this is where you could have one
or more jobs, which you can think of as a collection of steps to accomplish a task.
So in this case, the job he's gonna create is called render,

(16:38):
and each job
needs
an environment
defined for what you're gonna run this on.
Typically speaking,
you want to stick with a Linux-based environment, especially if it's not
a situation where you have to check across multiple OSes; in this case, we're compiling a report.

(17:00):
Ubuntu dash latest will be your friend for this because you're not really caring about the version of it. You just want something that can quickly get Quarto up and running, run this report, and be on your way.
So that's in the runs-on declarative.
And then if you want
this action to be able to

(17:20):
write or commit things on your behalf,
you'll wanna make sure to give it the right permissions, and that's in the permissions declarative where you have to explicitly tell it, I want you to write to this repo. And that's another declarative here.
And then the post talks about the different steps and, at a high level, what they are.
The first is getting Quarto installed itself, which is done via another GitHub Action.

(17:45):
So that's another thing to keep in mind. Just like functions in R, where you can run other functions inside of them,
you can run other GitHub Actions as steps inside
your overall GitHub Action. And the Quarto team
has helpfully set up an action for getting Quarto installed, so you really just have to declare it and then define which version of Quarto you wanna install,

(18:10):
which in this case is the latest version.
And in this case, Albert is actually being very explicit with this particular step in the pipeline for installing the R dependencies,
where he is simply calling
arbitrary
code
via a bash line, calling Rscript
to install the packages that he needs

(18:33):
to first get renv up and running and then using renv itself to restore the library.
Now to be transparent,
there is an action that'll let you do this as well or a couple of actions to do this, but it's good to see kind of how you can do this in your own way when you have to do things more custom. I'll get to that in a little bit.
But assuming you got the dependencies up and running, the next step is actually rendering the Quarto document, and that is simply

(19:00):
in the run declarative. Just like how Albert used Rscript
in that run declarative to
install renv and then run renv restore,
you can use this run declarative to run that same quarto render
command line call
in the exact same way he did earlier in the post. So nothing changes. It's as if you're typing that in the command line. You're just doing it in the GitHub Action.

(19:26):
And then this part will look a little odd at first if you're not used to it, but there is a step about:
okay, the README has been updated in the action;
I need to push this up to the repository
so that it can actually render that finished product.
And that's where you can run, again, via the run declarative,
the various git commands to tell git who you are. In this case, you're

(19:50):
gonna actually define it as a GitHub Actions bot. You can put anything you want there.
And then, literally as if you're in Git command line mode: adding the README, committing with a commit message, and then pushing it up. In order to do that, though, the GitHub
Action needs to be able to
do this on your behalf

(20:11):
using your repository
secret token. Otherwise, it's gonna complain that it's not authorized to do it. So every action step lets you have an env, or environment, declaration,
where he's able to reference the GitHub token, but not by putting in the token-like string verbatim;
you use this glue-like syntax with the curly brackets to inject

(20:34):
that variable from the secrets
store, if you will.
Every repository in GitHub Actions will have the secrets store
available, where you could put almost any environment variable you want that you define ahead of time, but the GitHub token one is there for you free of charge. So
that explains that step. And then, lastly,

(20:55):
you push this YAML file to the repository. And if all goes to plan, your report will render automatically via GitHub actions.
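Putting those pieces together, a workflow YAML along the lines Albert describes might look like this. This is a sketch, not his exact file: the action versions, the report file name, and the commit message are my own assumptions, and it presumes R is preinstalled on the runner (GitHub's Ubuntu images usually include it; otherwise add a setup-R step such as r-lib/actions/setup-r):

```yaml
on:
  push:
    branches: [main, master]

permissions:
  contents: write   # allow the action to commit the rendered README back

jobs:
  render:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4

      # The Quarto team's action installs Quarto for us
      - uses: quarto-dev/quarto-actions/setup@v2

      - name: Install R dependencies
        run: |
          Rscript -e 'install.packages("renv")'
          Rscript -e 'renv::restore()'

      - name: Render report to GitHub-flavored markdown
        run: quarto render report.qmd --to gfm --output README.md

      - name: Commit and push the README
        run: |
          git config user.name "github-actions[bot]"
          git config user.email "github-actions[bot]@users.noreply.github.com"
          git add README.md
          git commit -m "Re-render README" || echo "No changes to commit"
          git push
        env:
          GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
```

The `|| echo` on the commit line keeps the job from failing when a push didn't actually change the rendered output.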
I say if it goes according to plan because I have never once,
in all the times I've used GitHub actions, gotten it right the first time. There is always something that happens
whether I mistype a package name in my dependency installation

(21:19):
or I'm doing something really crazy with bash scripting and I have no idea how to debug it, and then I get that infamous red x
in the actions
output in the GitHub repo. I
literally went through this yesterday. I was banging my head against the desk almost on this one.
So
budget a bit of time. You're gonna need it. I wish I could sugarcoat it, but I can't. But once you get up and running with it and you get it working,

(21:48):
give yourself a pat on the back because that's a major achievement.
It really is.
So this is scratching the surface of what you can do with GitHub Actions. Even if I joke about the kind of debugging process,
when it works, oh my goodness, does it save a lot of time.
To illustrate that, I'll invite you to check the show notes, where I link to the repository

(22:12):
that we've built
for this Shiny app that we're using as a template
for a WebAssembly version of a Shiny app going into a clinical submission.
I use GitHub Actions
quite a bit
for this workflow,
and the ways I use it are pretty
novel, to me anyway, because I've never done anything this in-depth before.

(22:35):
I have three actions here.
One of which is to indeed render a Quarto-based document in multiple formats, both a PDF format
and an HTML format, for this reviewer guide.
And for the HTML format,
I wanna publish that
to
an S3 bucket

(22:56):
so that I can render this as a viewable link in the public domain for our reviewers in case they wanna see the latest and greatest draft of it without having to download it themselves.
So that was a clever thing I was able to hook in there
and be able to render two formats. There are a lot of interesting points on that you can check out on the repo.

(23:17):
The other action was to actually publish the WebAssembly application
compiled
and then publish it to Netlify.
This was before I knew about GitHub Pages being a first-class citizen for WebAssembly apps. I did Netlify because that's all I knew back then, so there's an action that helps with that. And also to publish a more

(23:39):
standard bundle of this whole project that's gonna be used
in what we call a transfer to the regulators directly. That's the third action.
And in each of these cases, I'm using bash scripts that are sourced in the action itself via a scripts folder.
And I won't pretend I'm the best bash scripter out there, but that's another handy thing. If you find yourself adding a whole bunch of commands in that run declarative,

(24:06):
you could outsource that to a bash script and then be able to run that on the fly.
And so there's some interesting learnings from that as well. So
I have done a lot with GitHub actions.
I won't pretend that I'm an expert at all of it,
but I do admit they have helped my workflows
immensely.
And, yes, there are versions of this available on other platforms as well. GitLab in particular has their own take on it, with what they call GitLab runners.

(24:33):
Very similar
YAML-type syntax; there'll be some differences here and there. I believe Codeberg does this as well. So even though
GitHub gets most of the mind share in the automation play, it's not strictly related to them. There are many other ways
you could implement this as well. So,
wonderful post by Albert; he gets you up and running quickly.

(24:56):
And, yes, you'll wanna check out the community resources for
the GitHub Actions that the Posit team maintains;
there are many parts of their workflows
that you can get from the usethis package.
There's even a GitHub Action for the shinytest2 package, to help test a Shiny app with shinytest2 in a GitHub Action. There's lots of things you can do here,

(25:20):
and I've already blabbered enough about it, but definitely check out the resources, and Albert does a terrific job getting you up and running quickly with a very relatable example.

(25:47):
We love ourselves some continuity on this very show, and we've been talking about that GitHub Action that would render
a Quarto document. Well, Quarto itself,
there are so many things you can do with it, and I do mean many. And one of the, you know, features that it carried forward from what you might call the previous generation of Quarto, which has been, you know, R Markdown,

(26:11):
is the idea of using parameters
inside your reports
so that instead of hard coding everything in the body of the report itself
and then having to, like, you know,
find, modify, replace when you want to change, like, a parameter value
or you want to change, like, a dataset name or a dataset variable,

(26:31):
you can use parameters
in your Quarto report so that you could define those ahead of time,
kind of like function arguments,
and render a document dynamically
injecting those values
into the body of the report. And our last highlight
is actually a wrap-up, kind of follow-up Q&A portion

(26:53):
from Nicola Rennie, who was terrific
in once again being very generous with her time for the R/Pharma conference, where she led a workshop
on creating parameterized reports with Quarto.
And
it was a spectacular
workshop. We have linked in the show notes
the resources from this workshop, with the slides as well. The recordings should be out in a couple of months, so you'll be able to watch a recording of it.
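As a minimal sketch of the idea (my own toy example, not Nicola's exact materials): parameters live in the document's YAML header and are available as `params$...` inside code chunks.

````markdown
---
title: "Continent Report"
format: html
params:
  continent: "Europe"
---

```{r}
library(gapminder)

# The header parameter is injected here as params$continent
summary(gapminder[gapminder$continent == params$continent, ])
```
````

Render the same document with a different `continent` value and you get a different report, no edits to the body required.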

(27:22):
Yours truly is gonna be helping with editing on that, and I can't wait to watch it because I'm I'm gonna learn something new, I'm sure.
But Nicola's blog post here
is getting into some of the questions that they didn't have time to address in that 2-to-3-hour workshop.
And, well, I'm gonna pick out a few that were nuggets to me,
especially in the intersection of what you can do with R itself in the compilation of these parameterized

(27:48):
reports
and also with Quarto itself.
There were some great questions
about
well, when you have a function,
is it safe to add an argument for, say, the data frame itself? Because she's using, I believe, the Gapminder dataset to illustrate the parameterized reports,
and she kind of shows the best of both worlds where

(28:11):
maybe in a first version of the function, you're assuming that the Gapminder data is the one loaded,
and you're just gonna let them put in a parameter for, like, the continent to summarize in the filter statement.
But you could still have that continent as the as the first parameter, but then have a default argument for data

(28:31):
that just happens to be the Gapminder data.
And in that way, if for some reason you wanna change the name of the data frame, you can still do that
and be able to leverage all the benefits of the Quarto, you know, parameters and everything like that
with that data argument.
Speaking of which,
in order to evaluate that as an object, if you have, like, the name of an R object as one of your parameters,

(28:55):
you can use the get function in R to basically
translate that string of that object name
into the object itself and then do whatever you need to do for further processing.
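A rough sketch of that pattern, assuming a params entry named `data` that holds the object's name as a string (the parameter names here are placeholders of mine):

```r
# Suppose the report parameter is a string naming a data frame, e.g. "gapminder"
data_name <- params$data

# get() turns the name into the actual object so you can work with it
df <- get(data_name)

# ...then proceed as usual, e.g. filtering by another parameter
df_subset <- df[df$continent == params$continent, ]
```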
She also mentions there were questions about whether we could use
the parameters to generate dynamic tables instead of just plots.

(29:15):
And absolutely, yes. Right? I mean, you could easily
create a table with another handy function,
to be able to have, say, a reactable-type structure
just as much as a plot. It all depends on you and what you wanna define with it.
And then there were also questions about her use of the

(29:37):
walk function and the map function from the per package as part of this iteration of creating these
multiple reports based on different configurations of parameters.
The walk function
is really for when it's not so much that you care about the output in R itself; you care about what it's doing as a side effect, like creating files,

(29:58):
creating images, or doing some kind of uploading of a file or whatever.
You don't really care about the object coming back from it. It could be invisible for all you care.
But if you have iteration where you wanna do something with that result,
map is the way to go. So there's a nuance there, but once you get the hang of purrr, it'll hopefully be easy to grasp once you have that.
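The distinction might look like this in a report-generation loop (a sketch; report.qmd and the `continent` parameter are placeholder names of mine, not the workshop's exact code):

```r
library(purrr)
library(quarto)
library(gapminder)

continents <- c("Africa", "Americas", "Asia", "Europe", "Oceania")

# walk(): we only care about the side effect (report files written to disk),
# so nothing useful is returned
walk(continents, function(ct) {
  quarto_render(
    "report.qmd",
    output_file = paste0("report-", ct, ".html"),
    execute_params = list(continent = ct)
  )
})

# map(): use it when you want the results back, here as a list of row counts
row_counts <- map(continents, function(ct) {
  nrow(gapminder[gapminder$continent == ct, ])
})
```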

(30:25):
There are some fundamental
questions
in there too.
What is the biggest difference between Quarto and R Markdown in this case? Well, again,
R Markdown is great.
I mean, look, I've built so many things with R Markdown. Right? You don't have to switch to Quarto if you don't need to. I mean,
Quarto is gonna, you know, probably get some more utilities added to it. There are a lot of developer resources behind it now. And with its cross-language

(30:54):
capability, we're seeing a lot of data science teams really embracing that. But, hey, R Markdown is stable.
R Markdown is very dependable in the R ecosystem.
You're not compelled to switch if you don't have a need to, so don't feel like just because you're seeing all this material
that you have to go away from R Markdown. I mean, it's still very much a fundamental

(31:15):
pillar of the R community, in my humble opinion.
Another nugget that I didn't know about: let's say you have a lot of R code in this report and you want to just
source or execute that R script itself in your Quarto report.
I didn't realize there's a chunk option called file

(31:35):
where you give it the path to that particular script, and it will basically
source that into your execution.
And that is pretty handy. That means you could, you know, do a good job of modularizing
your code structure
instead of having everything in one big, you know, setup chunk, if you will. You could export that into different scripts and then use them as you need to throughout your report.
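For instance, a chunk like this (R/helpers.R being a hypothetical script path of mine) pulls the script's code into the report as if it were typed directly in the chunk, via knitr's `file` chunk option:

````markdown
```{r}
#| file: R/helpers.R
```
````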

(32:03):
And there are also great questions about the concept of styling and formatting,
such as: do those fancy callout blocks that you get in Quarto, which look great in HTML, also work in PDF?
Well, yes, they can. You just may not be able to do the collapsing stuff because it's a static representation.
But Quarto is very careful to make sure that you can get most of the features

(32:28):
in each output format.
And in the case of
interactivity,
at least getting static versions of those.
And I was able to, you know, learn this earlier this year and last year
with that reviewer document that I was making as part of that submissions pilot project.
I could use the callout blocks, and it looks really darn good in the PDF. Like, I'm pretty happy with it.

(32:51):
So much so that now we have another work stream spinning up about
using Quarto in more of these submission documents. So I'm really excited for that. But getting those nice enhancements in the style,
and, of course, we're keeping an eye on Typst as well,
it's a great time
for those that still have to live in the world of static documents. I think, you know, Quarto is still gonna be

(33:15):
very helpful for you.
And, also, there are some real nuggets here about how
you can generate these multiple reports
from the command line. One thing that took me some getting used to, and, again, I haven't watched the workshop yet,
is that you can have a YAML file
with these params

(33:35):
defined, like, say, a default value for them. And that can be fed into the command line version of quarto render
so that you can just feed in that YAML file and then you'll be able to render
that document
on the fly. But if you want to do this within R,
you've got to do a little trick here that I didn't really realize

(33:58):
is that you have to get these parameters
as kind of key-value name pairs.
In the quarto_render function
that the quarto package itself exposes, the R package, I should say,
you have to give it
the list, rather than just the YAML file name.

(34:21):
And she has this great tip of using the yaml.load_file()
function from the yaml package.
And that way, you can just feed
the result of evaluating
that function
into the execute_params argument itself,
instead of you having to manually build, like, the list of those parameters yourself. That is awesome.
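To sketch what she's describing (the file name, parameter names, and default values below are hypothetical, just for illustration):

```r
library(quarto)
library(yaml)

# params.yml is a hypothetical file holding the parameter defaults, e.g.:
#   drug: "drug_a"
#   year: 2024

# Read the YAML file into a named list of key-value pairs
params_list <- yaml.load_file("params.yml")

# quarto_render() wants that named list in execute_params,
# not the path to the YAML file itself
quarto_render(
  input = "report.qmd",
  execute_params = params_list
)

# The command-line equivalent can take the YAML file directly:
#   quarto render report.qmd --execute-params params.yml
```

You can also inline the two steps as `execute_params = yaml.load_file("params.yml")`, which is the gist of the tip.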

(34:46):
Man. I'm so glad I learned that tip because I was dealing with that just recently with a project at work, and I kinda
gave up on doing it from the R side of things, but now we can definitely do that. So
that's just my quick take on it. There are lots of terrific resources I mentioned that Nicola
has shared at the end of the post: some more workshop materials from others, some more blog posts, and, like I said, the recordings should be up in a month or two depending on how fast I can get my editing chops on. And I can't wait to watch this one again because I

(35:20):
literally use Quarto parameterized reports for a pretty fun day job project
where I was able to escape the confines of a PowerPoint
and have a dynamically rendered Quarto dashboard, but tailored for each project
using parameters. It was awesome once I got it working, and I was like, I've gotta be able to push this more into the mainstream

(35:42):
of how we communicate these results. So
I'm definitely excited to see where this takes us and
another wonderful workshop in the space of parameterized
reporting.
This is a great companion to another resource we shared in previous episodes from JD Ryan
on her workshop on parameterized reports in Quarto. So you've got two

(36:03):
top-notch instructors from the R community
giving you level-up knowledge on Quarto parameterized reports. What a time to be alive, folks.
And there is a lot more I could talk about here, but, you know, this episode is getting long already, so I'm gonna close out the episode here with an additional find that caught my attention when I was perusing this issue.

(36:26):
And in my additional find this week, I'm gonna, you know, put a great spotlight on another great post from Steven Sanderson. He has been,
you know, a machine almost with his great tutorials
on the fundamentals of R and data science. And he has this great post, this great guide, if you will,
on how you can create lists in R.

(36:48):
The list is one of the most important object types I have used in, you know, my last probably five or six years of day-to-day work because there are so many creative things you can do with it. A list, for those uninitiated,
is simply a collection of other objects in R, but they don't all have to be the same.

(37:08):
Unlike a vector, say a numeric vector where R expects that all the elements are numeric:
if you have a string in there, it's gonna automatically convert everything to string the moment it sees one string, because vector elements have to be the same type. You can't mix
numbers and strings.
So the list is a way around that sort of thing. And, plus, with a list, you can create as much of a hierarchical structure as you like.
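A quick sketch of that difference:

```r
# One string turns the whole vector into character
v <- c(1, 2, "three")
class(v)  # "character": the numbers became "1" and "2"

# A list keeps each element's own type and can nest as deep as you like
l <- list(
  n      = 42,
  txt    = "hello",
  nested = list(flag = TRUE, values = c(1.5, 2.5))
)
class(l$n)          # "numeric"
class(l$txt)        # "character"
l$nested$values[2]  # 2.5
```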

(37:33):
And that can be really important, especially with some of these more complicated data structures with, like,
digital readouts,
digital machines, what we call digital biomarkers in our
line of work. A lot of web data comes as list-type structures from JSON.
Being able to know the ways you can create lists, name them, and do various operations with them

(37:58):
using either the built-in apply family of functions or the purrr package.
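For instance, with a hypothetical list of numeric vectors, the base-R and purrr versions look like this:

```r
library(purrr)

scores <- list(a = c(1, 2, 3), b = c(10, 20), c = 5)

# Base R: lapply() applies a function to every element and returns a list
means_base <- lapply(scores, mean)
means_base$b  # 15

# purrr: map_dbl() does the same but returns a plain numeric vector,
# erroring if any result isn't a single number (type stability)
means_dbl <- map_dbl(scores, mean)
means_dbl[["b"]]  # 15
```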
Lots of awesome things you can do with lists
and they are, again, a huge part of my daily workflow.
Once you get the hang of them, there are so many things you can do with them,
such as a package I learned about from R/Pharma.

(38:19):
I'd heard of it through the grapevine, but I saw a talk about it,
about the cards package by Daniel Sjoberg and Becca Krouse,
where you're creating basically a results-type data frame of these different, like, statistics for a given variable or a set of variables.
But these statistics may be different.

(38:40):
And some may be a numeric result, and some may actually be more of a character result, especially when you're dealing with model attributes or things like that.
Their way around it
to mix all these different types of results together
is to use, wait for it,
a list column inside your data frame.
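That's not the actual cards API, but the list-column pattern itself is easy to sketch in base R (the variable names and values below are made up):

```r
# A data frame whose value column is a list, so each cell
# can hold a different type of result
results <- data.frame(
  variable = c("age", "age", "arm"),
  stat     = c("mean", "sd", "label")
)
results$value <- list(51.2, 9.8, "Treatment A")

results$value[[1]]  # 51.2 (numeric)
results$value[[3]]  # "Treatment A" (character)

# A cell can even hold a whole model fit object
results$value[[2]] <- lm(mpg ~ wt, data = mtcars)
class(results$value[[2]])  # "lm"
```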

(39:00):
This is
awesome because then you get to have a lot more control
and flexibility
in what you're doing with these, you might say, wrapper-type data frames,
like putting entire data frames as, like, a cell value in a list column, or another
model fit object, which the broom package does and which cards is kinda taking inspiration from.

(39:24):
So with the list object, you can do so much stuff. I highly recommend this post to get up and running quickly, so it was great to see this featured in the issue.
My goodness. Even flying solo here, I realize I've taken a lot of time already. So I'm gonna get you on your way here, but we have a few items to close out with, of course. First is that

(39:45):
the R Weekly project is a community-driven project. We do not have sponsors. We do this all for you
in the R community and the data science community.
All we ask is for your help to keep this up and running, you know, sustainably,
and that's via your contributions. So if you see, whether you authored it or someone else authored it, a great resource that should be featured in next week's issue, you just go to rweekly.org.

(40:13):
That should be in your bookmarks. If it's not, I dare say, hot take: you should have rweekly in your bookmarks.
Hit that little ribbon at the top where it'll take you to our draft of the upcoming issue and a way to do a pull request right there in GitHub's web UI.
You just need to do a little markdown, folks.
I do markdown all the time, just like with Quarto,

(40:34):
and you can give us that great resource. And the next issue's curator will be glad to review it and merge it in. So we're really,
really eager to have your contributions on this very, very important project.
We also like to hear from you in terms of the show itself. We have a little contact page linked in the show notes. You can find this

(40:55):
handy web form to fill out.
You can also, with a modern podcast app (I really like Podverse and Fountain
these days, but there are many others in the ecosystem),
send us a fun little boost along the way to give us feedback directly, without anybody in the middle, without any
corporate overlord trying to say, oh, nope, you can't say that. You can be as unfiltered with us as you like.

(41:20):
Luckily, all the feedback we get is usually positive. But if I make any fumbles, I always like to hear about that
too. And, also, you can get in touch with me on social media these days.
I'm mostly on Mastodon with my
@rpodcast@podcastindex.social account.
Also on LinkedIn, you can search my name and you'll find me there, usually saying something or responding to other people.

(41:44):
I am contemplating
getting a Bluesky account because I am seeing a lot of traction in the R community
going to it. But honestly,
Mastodon's been pretty nice to me. And I know there are some people wondering, oh, is this replacing Mastodon? No, I don't think so.
I think they both can exist, but Mastodon has been extremely helpful for me, both for my R and data science, you know, friends, keeping in touch with them and meeting new friends along the way, as well as my podcasting adventures. So Mastodon is not broken at all. In fact, I'm gonna keep going with it as long as I can.

(42:19):
And, big shout-out to Dan Wilson, speaking
of Mastodon. He's the one that maintains the rstats.me server for Mastodon. That's been a great one to follow,
and he's been, you know, doing a lot of work to keep that up and running. So, Dan, your efforts are not going unnoticed, for sure. I greatly appreciate what you do for us.

(42:41):
Nonetheless, that's gonna close up shop here for this episode of R Weekly Highlights.
We hope to have Mike back next week, so you don't have to hear me babble all the time.
Nonetheless, I hope you have a wonderful week wherever you are.
Again, have fun with your journeys of R and data science, and we will be back with another episode of R Weekly Highlights
next week.