
October 30, 2022 • 63 mins

Summary

Application configuration is a deceptively complex problem. Everyone who is building a project that gets used more than once will end up needing to add configuration to control aspects of behavior or manage connections to other systems and services. At first glance it seems simple, but can quickly become unwieldy. Bruno Rocha created Dynaconf in an effort to provide a simple interface with powerful capabilities for managing settings across environments with a set of strong opinions. In this episode he shares the story behind the project, how its design allows for adapting to various applications, and how you can start using it today for your own projects.

Announcements

  • Hello and welcome to Podcast.__init__, the podcast about Python and the people who make it great!
  • When you’re ready to launch your next app or want to try a project you hear about on the show, you’ll need somewhere to deploy it, so take a look at our friends over at Linode. With their managed Kubernetes platform it’s easy to get started with the next generation of deployment and scaling, powered by the battle tested Linode platform, including simple pricing, node balancers, 40Gbit networking, dedicated CPU and GPU instances, and worldwide data centers. And now you can launch a managed MySQL, Postgres, or Mongo database cluster in minutes to keep your critical data safe with automated backups and failover. Go to pythonpodcast.com/linode and get a $100 credit to try out a Kubernetes cluster of your own. And don’t forget to thank them for their continued support of this show!
  • Your host as usual is Tobias Macey and today I’m interviewing Bruno Rocha about Dynaconf, a powerful and flexible framework for managing your application’s configuration settings

Interview

  • Introductions
  • How did you get introduced to Python?
  • Can you describe what Dynaconf is and the story behind it?
  • What are your main goals for Dynaconf?
    • What kinds of projects (e.g. web, devops, ML, etc.) are you focused on supporting with Dynaconf?
  • Settings management is a deceptively complex and detailed aspect of software engineering, with a lot of conflicting opinions about the "right way". What are the design philosophies that you lean on for Dynaconf?
  • Many engineers end up building their own frameworks for managing settings as their use cases and environments get increasingly complicated. What are some of the ways that those efforts can go wrong or become unmaintainable?
  • Can you describe how Dynaconf is implemented?
    • How have the design and goals of the project evolved since you first started it?
  • What is the workflow for getting started with Dynaconf on a new project?
    • How does the usage scale with the complexity of the host project?
  • What are some strategies that you recommend for integrating Dynaconf into an existing project that already has complex requirements for settings across multiple environments?
  • Secrets management is one of the most frequently under- or over-engineered aspects of application configuration. What are some of the ways that you have worked to strike a balance of making the "right way" easy?
  • What are some of the more advanced or under-utilized capabilities of Dynaconf?
  • What are the most interesting, innovative, or unexpected ways that you have seen Dynaconf used?
  • What are the most interesting, unexpected, or challenging lessons that you have learned while working on Dynaconf?
  • When is Dynaconf the wrong choice?
  • What do you have planned for the future of Dynaconf?

Keep In Touch


Episode Transcript

Available transcripts are automatically generated. Complete accuracy is not guaranteed.
Unknown (00:00):
Hello, and welcome to Podcast.__init__, the podcast about Python and the people who make it great. When you're ready to launch your next app or want to try a project you hear about on the show, you'll need somewhere to deploy it. So check out our friends over at Linode.
With their managed Kubernetes platform, it's easy to get started with the next generation of deployment and scaling powered by the battle tested Linode platform,
including simple pricing, node balancers, 40 gigabit networking, and dedicated CPU and GPU instances.

(00:26):
And now you can launch a managed MySQL, Postgres, or Mongo database cluster in minutes to keep your critical data safe with automated backups and failover.
Go to pythonpodcast.com/linode
today to get a $100 credit to try out their new database service, and don't forget to thank them for their continued support of this show. Your host as usual is Tobias Macey. And today, I'm interviewing Bruno Rocha about Dynaconf,

(00:48):
a powerful and flexible framework for managing your application's configuration settings. So, Bruno, can you start by introducing yourself?
Hi. Thanks for inviting me to this podcast. My name is Bruno Rocha. I live in Portugal.
Some things about me, I'm a member of the Python Software Foundation.
I'm a senior engineer on Red Hat Ansible.

(01:08):
I have a YouTube channel and a Twitch stream,
targeting a mostly Portuguese-speaking audience, where I talk about Python and programming.
And besides that, I like bicycles. That's all about me. And do you remember how you first got introduced to Python?
Oh, yeah. Let me recall. It was around 2003.

(01:29):
I worked in a factory,
and my job was maintenance of hundreds of Linux desktops.
And we used a Brazilian Linux distribution,
and one of the features of that distribution was called Magic Icons.
So it was like a GUI application

(01:51):
with buttons where the user can click and run
Bash scripts, or things to automate Linux
stuff. And then I
worked on writing those scripts in Bash and Perl for a long
time, adding more day-to-day office activities to those icons, like merging files,

(02:11):
printing, and doing all the day-to-day office work using Perl, basically.
But at some point I started contributing. I think that was actually the first open source project that I was a contributor to.
And when I started contributing
back to the Magic Icons system on that Linux,
I found that they were moving from Perl to Python. And then I had to learn Python to rewrite

(02:36):
those script bits in that distribution. Today, that distribution is no longer maintained, but it was a good experience. And that's how I first learned Python. And from that point, I stopped using Perl and kept using Python for everything. Yeah.
That brings us now to the topic at hand, which is the Dynaconf project. I'm curious if you can share a bit about what it is, some of the story behind how it got started, and why you decided to build this project.

(03:04):
Well, so yeah. Dynaconf is a library that I like to define as a settings client.
So settings usually are defined in multiple locations, like
static files. You can have a .py file, TOML, YAML, JSON, or any of the other common formats
you usually use for application settings.

(03:26):
The settings are overridable
by environment variables,
and even by external servers, like an in-memory database or a secrets vault.
And at some point you need to gather all that information
into a single object. And this object is Dynaconf. I like to think of it as a settings client

(03:48):
that I can configure
and point to multiple sources. And then it gives us a single settings object, a single Python object, where you can access and deal with those settings. So that is basically Dynaconf, and if I want to explain it simply, it's a library to access settings.

(04:09):
However the settings are located,
it gives you a bridge to access and gather those settings together.
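To make the "settings client" idea concrete, here is a toy sketch (not Dynaconf's actual code) of a single object that gathers defaults, a settings file, and prefixed environment variables, with environment variables winning:

```python
import json
import os


class SettingsClient:
    """Toy sketch of a 'settings client': one object backed by many sources."""

    def __init__(self, defaults, settings_file=None, envvar_prefix="APP"):
        self._data = dict(defaults)
        # Layer 2: a static settings file (JSON here, for simplicity).
        if settings_file and os.path.exists(settings_file):
            with open(settings_file) as f:
                self._data.update(json.load(f))
        # Layer 3: prefixed environment variables override everything else.
        for key, value in os.environ.items():
            if key.startswith(envvar_prefix + "_"):
                self._data[key[len(envvar_prefix) + 1:]] = value

    def __getattr__(self, name):
        # Called only for attributes not found normally, i.e. settings keys.
        try:
            return self._data[name]
        except KeyError:
            raise AttributeError(name)


os.environ["APP_DEBUG"] = "true"
settings = SettingsClient({"DEBUG": "false", "PORT": 8000})
print(settings.DEBUG)  # "true": the environment variable wins over the default
print(settings.PORT)   # 8000: falls through to the defaults layer
```

The prefix `APP_` and the JSON file format are this sketch's conventions; the point is only the single object fronting several sources in a fixed precedence order.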
And the other question was the story behind it. So
I worked at a company that was running Django and Flask applications in the cloud, on AWS.
It was 2014,

(04:31):
and the main website and API used load balancing
and auto scaling.
So eventually,
there were hundreds of EC2 instances running in the cloud.
And they wanted to be able to share the same settings with all those instances that were running. So you can think: applications will start having

(04:52):
settings in the environment,
or hard-coded, like in a constants file. But what they wanted in that project was a single place where the settings are stored, like S3 or any other shared file system,
and all the instances could read the settings from the same place. So this was like the first mission we did.

(05:15):
And then
the second thing that we did in the project was
turning the settings dynamic.
So at that time, we used a Redis instance,
and Redis stored key values,
and then all the instances read key values from the Redis instance
inside the AWS

(05:36):
network.
So that's how
the library started. I just went to the main project and created a folder called dynamic config.
And then I started coding, using Python's dynamic features like overriding attribute access,
trying to do lazy evaluation.
And then I ended up with something really nice, which we were able to

(06:01):
use to read files locally. If a setting was not found locally, it would fall back to the environment,
and then to external services.
So we decided to promote it to a package. And then,
instead of dynamic config, we shortened it a little to Dynaconf and created a package. It started being used by all the internal projects at that company.

(06:25):
It was 2014.
Heroku had released
a guide called the Twelve-Factor App.
And the Twelve-Factor App was
our main driver for deciding how the library would be written,
because after this first version we did many refactorings,
and we based a lot on the twelve-factor guide.

(06:48):
The third factor in the twelve factors is config,
and it says that configuration must belong to the environment. And then we started coding Dynaconf to be environment-first.
And from that point,
we added many other features. But that's how it started. It started, like,

(07:08):
just as a Python module inside a big monolithic project,
dynamic config, and I remember that we pulled that
out of that project and created Dynaconf.
And yeah. Now
it has grown; it's no longer a single small library. It's more like you said at the beginning of the show: a framework for managing settings,

(07:31):
together with this settings client. So, yeah, that's a little bit of the story.
Are there any particular
goals, design philosophies,
or strong opinions about how to manage settings that you have for Dynaconf?
The fact is that the project is evolving organically.
So it's based on the needs and feedback of the users and contributors.

(07:54):
So we are constantly
adding new things
and fixing things based on that.
So we don't have, like, a really strict guideline
on the style that we are following.
I can say that the goal as a project is to be reliable and simple to use.
What I want is for Dynaconf to be a go-to choice when you think about how to manage settings in your Python application.

(08:23):
I already see that in the companies
and communities where I participate, because I try to influence people sometimes to use the library, and when they say, yeah, we need to deal with settings, it's gonna be like, just put Dynaconf here. And to be a go-to choice and simple,
you have to be very

(08:43):
like, frictionless.
So the project should be something you can plug into your project,
and that you can unplug from your project easily. So I think this is a thing that we try to keep in Dynaconf. You can take your project right now, add Dynaconf,
and it works. And if you don't need or don't like Dynaconf anymore, you can just remove it. And it's not going to be hard work, because of the way Dynaconf is implemented.

(09:10):
As far as the kind of types or categories of projects that you have in mind as you build and iterate on Dynaconf, are there any particular ecosystems
or use cases that you are more focused on, or that tend to work better with Dynaconf?
Yeah. Definitely.
I think web applications running in the cloud are the main target.

(09:31):
So as we based it on the Twelve-Factor App, I think
whenever you have an environment,
and you have this environment varying
across multiple deployments,
I think Dynaconf is a very good fit. So it can be running on virtual machines, containers, Kubernetes,
everything about the twelve-factor apps, which is like a pattern for SaaS applications.

(09:54):
I think it's a perfect fit for Dynaconf. But we have users using Dynaconf for other projects. So we have people and contributors
running it for machine learning pipelines. They configure all the models
using Dynaconf,
loading from Dynaconf files.
We have CLI applications that people create, and then the end user of the CLI application

(10:19):
puts a settings file together
in the folder where they are running the CLI, and Dynaconf loads the local settings. We have desktop applications and testing frameworks.
So
the first focus was cloud applications, but users started to be, like, more creative in using it, for whatever you can imagine. So I think Dynaconf can fit many cases, but the main target,

(10:45):
when we are looking to
change things, when we are looking to add new
functionality, is that we always think about the cloud first. Yeah.
As far as the
category of settings management for applications, it's one of those things that is
seemingly very simple, but deceptively complex as you dig deeper into the problem space, where at the outset it's just, oh, I just need a file that, you know, sets these different values,

(11:15):
and then, as you were describing, you move into different environments, or you need to be able to support different runtimes or application structures,
and you start to realize, oh, now I need to be able to pull things from multiple places, I need to be able to manage things like overrides, or, you know, how do I manage secrets? And I'm wondering what are some of the

(11:36):
general philosophies that you've encountered about the so-called, quote, unquote, right way to do it, and some of the ways that you think about which philosophies are best suited for
the applications you're targeting, that you're trying to encode into how Dynaconf manages some of those things, like overlays and environment-specific attributes and secrets,

(11:58):
etcetera. I think that
settings are like an accessory for your application. They're not the main focus of any application.
So your application should focus on the things it should solve.
And
the less you see the settings, the better, because they will be, like, transparent

(12:18):
and not in the way, between you and your code. So I think one of the most complex things
to do when you are creating your own settings management
is trying to do it in a frictionless, transparent way. I see lots of settings objects where, just to read a single variable,

(12:41):
you need to call methods like
get, read, parse,
and you need to do your own specific type casting in every place.
And then when you look at the code, you have more, like,
interaction with the settings than anything else. So I think
what I learned is that if you do that,

(13:02):
you are coupling
your code with your settings library.
And the main philosophy for Dynaconf is to be decoupled.
We want Dynaconf to be decoupled and transparent from the project.
The Dynaconf API
behaves like a normal Python module
or a dict-like object, because a Python module with attribute access, or a dict-like object where you look up by keys,

(13:29):
are the two ways people read settings. If you have a constants file, it's gonna be attributes. If you have JSON, it's dictionaries after parsing. So Dynaconf supports
both APIs, and you can treat Dynaconf just as a normal Python object. There's nothing special that you need to call, and you don't need to prepare your application; you just use it as normal Python variables. I think this is important because, as I said, at any point in time you might need to change your settings manager.

(14:00):
Even if you have your own hand-written settings manager, you might want to migrate to Dynaconf. And if your
settings are really agnostic, like simple Python objects, it's gonna be easy to plug Dynaconf into your project. And the same the other way: if you want to remove Dynaconf, it's just removing the object created by Dynaconf, replacing it with another object that behaves like a Python object, and everything works. So I think this is important: to decouple the settings from the
project.
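The dual access styles just described (module-like attributes and dict-like keys) can be sketched with Python's dunder methods. This is an illustrative toy, not Dynaconf's implementation; the case-insensitive lookup is an assumption of the sketch:

```python
class Settings:
    """Sketch: one object supporting both attribute and dict-style access."""

    def __init__(self, data):
        # Store keys upper-cased so lookups are case-insensitive
        # (a convention chosen for this sketch).
        self._data = {k.upper(): v for k, v in data.items()}

    def __getattr__(self, name):
        # Invoked only for attributes not found normally, i.e. settings keys.
        try:
            return self._data[name.upper()]
        except KeyError:
            raise AttributeError(name)

    def __getitem__(self, key):
        return self._data[key.upper()]

    def get(self, key, default=None):
        return self._data.get(key.upper(), default)


settings = Settings({"name": "myapp", "port": 8000})
print(settings.NAME)                     # attribute access, like a constants module
print(settings["port"])                  # key access, like a parsed JSON dict
print(settings.get("missing", "fallback"))  # dict-style get with a default
```

Because the object answers both protocols, swapping it for a plain module of constants, or for a plain dictionary, requires little change in calling code, which is the decoupling being described.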
So
there are challenges that you're gonna have dealing with settings. Like,
if you
want dynamic features, like lazy evaluation
and overloaded access
to the settings, you want to put middleware
between whoever is accessing your settings and where the settings live, doing transformations.

(14:54):
This is gonna be easier if you do it in a transparent way, because Python offers
lots of features for dynamic programming.
You can do lazy evaluation. You can override attribute access.
So it's not that difficult to use Python to create a layer
of abstraction for accessing settings.

(15:15):
We see that with ORMs
for databases,
and it's not a stretch to do the same for settings. So, yeah. And
about the right way, what would be the right way to treat settings?
I still stick with the Twelve-Factor App. I think settings
belong to the environment.

(15:35):
So
the application
must have reasonable
defaults.
It should be able to run without any specific setting.
The application could have, like, required settings, but it should be able to run with minimal settings.
But everything should
belong to the environment where the application is running, not to the code. I think that's true

(16:01):
for cloud applications, because you can run in multiple cloud providers.
It's true for CLI applications, if you distribute to multiple targets and multiple ways of running.
So,
yeah,
I think that's what I have to say.
One of the things that you noted in there was the question of being able to

(16:21):
cast the different values into their appropriate types. And with Python being a dynamic language, and also the, I guess, not necessarily evenly distributed adoption of type annotations, etcetera,
that is an interesting question for managing settings in Python applications.
And what you're saying about things like parseBool, parseInt, I've definitely seen that all over the place, both in widely used libraries as well as in some of the libraries that I've seen some of my teams build in-house.

(16:51):
And
I'm interested in understanding your thoughts on kind of how to
encode some of that type information in the settings that you're loading for your application as far as, like, what type should it be. And then particularly in the case where maybe it's a string, but it should only be 1 of a particular set. So you, you know, maybe you wanna have an enum and just being able to understand, like, what are the possible range of valid settings for this attribute and then being able to bring in things like

(17:20):
annotations
or documentation
around the purpose for a given setting so that as you come into a brand new project and it has the settings defined, you can say, okay. I understand what this value is supposed to do and why.
About typing: in the first version of Dynaconf, we decided
to be transparent and frictionless.
I love this word, frictionless: you just do settings.key and you fetch the value you want,

(17:48):
with the type defined by Dynaconf.
So we wanted to avoid the need for you to call settings.key.cast,
or settings.get with the key and then the type that you want it to be cast to. We wanted to avoid this extra work and give you the final data to use. So the first approach, which is still supported today but is not highly used, was to add type annotations to the variable itself. So as it's based on environment variables,

(18:21):
and environment variables
are our first source of truth
in an application
environment,
and you can only export strings (Python reads all environment variables as strings), what we did is create some tokens, some markers.
It has nothing to do with Python's type annotations.
You just put @json, and then you can put the JSON string

(18:46):
and export it to your environment.
You can put @int, @string, @boolean: for all the basic types we have a marker.
So when you are exporting an environment variable, you write the marker, then
a space, and then the actual value you want parsed. So the type is in the source, in the data itself, and Dynaconf parses based on that. So this was the first implementation.
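A rough, simplified imitation of those markers (Dynaconf's real converters are more elaborate than this) might look like:

```python
import json


# Simplified imitation of the "@type value" markers described above.
CONVERTERS = {
    "@int": int,
    "@float": float,
    "@bool": lambda v: v.lower() in ("true", "1", "yes", "on"),
    "@json": json.loads,
}


def parse_env_value(raw):
    """Cast a raw environment-variable string based on its leading marker."""
    for marker, cast in CONVERTERS.items():
        if raw.startswith(marker + " "):
            return cast(raw[len(marker) + 1:])
    return raw  # no marker: stays a plain string


print(parse_env_value("@int 42"))              # 42
print(parse_env_value("@bool true"))           # True
print(parse_env_value('@json {"a": [1, 2]}'))  # {'a': [1, 2]}
print(parse_env_value("plain text"))           # 'plain text'
```

Registering a new marker is just adding a key to the dictionary, which mirrors the custom-converter idea mentioned next (e.g. an email or phone-number parser).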

(19:11):
As I said, it works. You can actually add your own transformations: you can create your own markers and put in your own annotations.
Like, if you have a parser for a phone number or an email address, you can mark a value as an email, and then Dynaconf will try to parse it
and give you back, when you access it, a structure, an object, a class, anything you want. So it's very customizable.

(19:37):
We call them converters
inside the source
code. But then, when we released Dynaconf 3.0,
we figured out that
most of the users
were migrating to TOML. TOML, Tom's Obvious Minimal Language, is used today
in many projects: pyproject.toml, and Rust uses it in Cargo.toml.

(20:00):
We see it being adopted as a standard for application configuration.
So we decided to adopt TOML as our default parser.
That makes things easier, because you can export a variable as a string,
and then, when reading, we pass it to the TOML parser. If it's a valid TOML

(20:22):
integer, then Dynaconf reads it as an integer. If it's a valid TOML
float, it reads it as a float. You can obviously replace this behavior: you can configure Dynaconf to use YAML as the parser, or JSON as the parser. But the default one is TOML. So you can just export, and if it looks like a string, Dynaconf will infer it's a string. So there is type inference based on environment variables.

(20:49):
When you are working with files that have typing (for example, YAML has its own typing, and so do TOML and JSON), then Dynaconf
respects the typing coming from the source. Yeah. But for all the other string-based
sources,
typing will be inferred unless you annotate it explicitly.
As I was mentioning

(21:11):
earlier,
basically every engineer who writes an application gets to the point where they say, oh, I need to be able to change this setting. And so they start to implement their own settings management
system.
You know, oftentimes it's very basic. But as you
evolve the project and iterate on it, your settings management becomes increasingly complicated. And I'm wondering what are some of the, I guess, antipatterns that you see people develop as they build their own homegrown systems for

(21:40):
addressing settings in their applications, and some of the ways that it can lead to unmaintainable
code, and overly complex
settings management
approaches, or overly rigid or opinionated ways of specifying those values.
Dealing with multiple sources of data
is difficult,
especially if you have layered environments.

(22:01):
For example, you have a special section for testing variables, production, development,
and this varies across all the environments you are going to be deploying to. So
when you get all that data together,
there are very hard decisions to make. Let's say you read
two static files, like TOML or YAML, and you read multiple prefixed environment variables.

(22:26):
And now you have a lot of data coming from different sources. How do you merge it? What do you do if you have the same key in the settings file and the same key is overridden by an environment variable,
and it's not just a simple primitive type like an integer or a string, but a data structure? This happens a lot in Django projects. For example, you have a key called DATABASES, which is a dictionary,

(22:52):
and inside DATABASES you have nested keys. And then the user wants to override
only the password,
not the whole dictionary for that key. So how do you do that? I see that in many cases you will be requiring the user to override the whole dictionary,
but we ended up adding a way for you to override only the nested keys. So you can traverse,

(23:18):
using the variable names
to the next nested structures
and then merge data. But when we are talking about more complex data structures like lists,
and let's say you have a list of
middlewares that your application will
use, and you want to contribute another middleware:

(23:38):
how do you decide if it's gonna be the first item of the list or the last item of the list? How do you ensure it's not duplicated?
If you add the middleware twice, maybe you are adding overhead to your application. So
those decisions are really hard, because
there are multiple opinions on it. There are people who think it should be more strict: if you wanna

(24:02):
override something, you need to override the entire object.
We wanted to add more flexibility, so there are multiple merging strategies that we offer in Dynaconf.
When you are building your variable or your dictionary, you can mark it with some
special keys or markers, and this will tell Dynaconf

(24:22):
what the merging strategy for that is. So what I see when people start doing
in-house settings management is that they don't think about this at first. And then, when the application starts growing and having multiple environments,
they start needing
data merging
across multiple environments, and it becomes a challenge,

(24:44):
especially if you have already-running
environments that you don't want to change a lot. So
that's one of the things. Another thing is that, based on the Unix philosophy, I think environment variables come first. Yeah. So you should always look at environment variables.
We offer ways to define a specific subset of environment variables you want to be overridable.

(25:08):
But based on the subset of environment variables you have, that should be the source of truth in your application.
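The nested-override problem described above (changing only DATABASES.default.PASSWORD without clobbering its siblings) can be sketched as a recursive deep merge. This is an illustration of the behavior, not Dynaconf's implementation:

```python
def deep_merge(base, override):
    """Recursively merge `override` into `base`, replacing only the leaves
    that `override` actually specifies and keeping sibling keys intact."""
    merged = dict(base)
    for key, value in override.items():
        if isinstance(value, dict) and isinstance(merged.get(key), dict):
            merged[key] = deep_merge(merged[key], value)  # descend
        else:
            merged[key] = value  # leaf (or type change): replace
    return merged


# The Django-style example: override only the nested password,
# not the whole DATABASES dictionary.
settings_file = {
    "DATABASES": {
        "default": {"HOST": "localhost", "USER": "app", "PASSWORD": "dev"}
    }
}
env_override = {"DATABASES": {"default": {"PASSWORD": "prod-secret"}}}

merged = deep_merge(settings_file, env_override)
print(merged["DATABASES"]["default"]["PASSWORD"])  # prod-secret
print(merged["DATABASES"]["default"]["HOST"])      # localhost (preserved)
```

Lists are the hard case this simple sketch dodges: as discussed above, there is no single obvious answer for position or deduplication, which is why explicit per-value merge markers end up being necessary.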
And so, digging into Dynaconf itself, can you describe a bit about some of the implementation details, some of the ways that you designed the project to make it sustainable and maintainable,
and some of the

(25:29):
evolution that it has gone through, from when you first started building it as a one-off solution for a specific application, to where it is now, this generalized framework for settings management?
Yeah. So I think the first important thing on this topic is tests. So
from the time we decided to
promote Dynaconf

(25:50):
to a package,
we decided that 100% test coverage would be the target. So we have 100% unit test coverage.
I know that unit testing is not enough for all the dynamic cases. That's why we also have functional tests, which are growing.
For every issue, we add more functional tests. So tests

(26:12):
are helping a lot with
keeping the library reliable.
Yeah, because it's really important.
And yeah. In terms of implementation,
Dynaconf is just a class. It's a Python class,
and it's implemented using
lazy evaluation.
So
there's a functional module inside Dynaconf, and in this module we created some types that have lazy evaluation.

(26:39):
So the user declares
configuration,
for example,
paths, validators,
all the kinds of preferences they want, when instantiating
the settings object from Dynaconf.
And at that point, nothing is really performed by Dynaconf. Dynaconf just creates a class with a bunch of data in it, the declaration of how we want Dynaconf to behave,

(27:03):
and
then you can start your application without needing to run the whole Dynaconf
process. So you don't block the import time of your application
because of settings. This is also another thing I see people doing: when you have a settings library where, during import time, you need to load all the files and do all the transformations, you can block

(27:24):
your application's startup time just to read some settings files that are only going to be used later. So that's why we implemented this as a lazily evaluated class.
When the first variable is accessed in Dynaconf,
in your application's
lifetime,
then Dynaconf starts building
its loaders.
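That lazy startup behavior can be sketched in a few lines: nothing is loaded at instantiation, and the first attribute access triggers the (stubbed-out) loaders. A toy illustration, not Dynaconf's code:

```python
class LazySettings:
    """Sketch: defer all loading until a setting is first accessed, so
    instantiating the settings object never blocks application startup."""

    def __init__(self, settings_files):
        self._settings_files = settings_files  # only the declaration
        self._loaded = None                    # nothing is read yet

    def _load_all(self):
        # Stand-in for running the pool of loaders (files, env vars, ...).
        return {"DEBUG": True, "PORT": 8000}

    def __getattr__(self, name):
        # Called only for names not found normally, i.e. settings keys.
        if self._loaded is None:
            self._loaded = self._load_all()  # first access: load everything
        try:
            return self._loaded[name]
        except KeyError:
            raise AttributeError(name)


settings = LazySettings(["settings.toml"])  # instant: no file is parsed here
print(settings._loaded)  # None: still nothing loaded
print(settings.PORT)     # first access triggers the loaders
```

Importing a module that builds `settings` this way costs almost nothing; the parsing bill is paid on first use instead.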

(27:46):
So it works with a pool of loaders, and you can add more loaders to it. It has a collection of loaders that are built in: for example, a
JSON loader,
an INI loader, a TOML loader, a YAML loader, an environment variable loader.
Those are included in a pool in a specific order. All of this is configurable, and you can easily create your own loader and plug it in there.

(28:11):
And loaders
use the strategy pattern,
where you can take the base loader and then pass in your
file reader, your file writer, and the loader methods that will be returning data. So after the data is loaded, it's just a Python dictionary
coming from all the different
sources. You see, this makes it flexible, because in a loader you can read from a database. For example, we have cases where people wrote a loader for MySQL. They go and do a SELECT in MySQL, get some key pairs from MySQL, build a dictionary,

(28:46):
and feed Dynaconf with that data. So
after
every loader runs, we put each dictionary in a ChainMap.
ChainMap is a Python object that is a stack of dictionaries.
So you chain lots of dictionaries together in a single object, and then, when you try to access a key,

(29:06):
if the key is found in the first dictionary, the latest inserted, then
it's returned. If not, the ChainMap is going to try all the other dictionaries in order, until
it raises, like, an AttributeError or KeyError. So
when Dynaconf does its loading, it puts everything in a ChainMap, and then we do all the things we need in the ChainMap, like merging and parsing.

(29:31):
All the transformations are inside the chain map.
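The loader-plus-ChainMap design can be demonstrated with the standard library's collections.ChainMap, with each loader stubbed as a function returning a plain dictionary (the loader contents here are made up for illustration):

```python
from collections import ChainMap


# Each "loader" just returns a plain dictionary of settings.
def toml_loader():
    return {"PORT": 8000, "DEBUG": False}


def env_loader():
    return {"DEBUG": True}


# The first map in the ChainMap wins, so the highest-priority source
# (environment variables) goes first.
settings = ChainMap(env_loader(), toml_loader())
print(settings["DEBUG"])  # True: found in the env layer first
print(settings["PORT"])   # 8000: falls through to the file layer

try:
    settings["MISSING"]
except KeyError:
    print("KeyError: every layer was tried and none had the key")
```

Adding a custom source (say, a database loader) is just one more dictionary pushed onto the chain, which is what makes the loader pool extensible.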
So the process of loading
from a source involves parsing, because, as I said, we can define type annotations. So when you read an environment variable, if it's marked with @int, @bool, @json,
etcetera, it should be cast to the appropriate type.

(29:52):
Also, Dynaconf
supports some lazy formatting.
So you can have, for example,
an environment variable, or a settings variable in a text file,
with Jinja templating.
People usually use this to calculate how many processors you have on the machine,
or to call some external thing and build the settings on the go, or to reuse some environment variable.
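As a rough sketch of such templated values (using str.format instead of Jinja so the example stays dependency-free; the placeholder names are this example's convention, not Dynaconf's syntax):

```python
import os

# A templated setting, resolved at load time rather than hard-coded.
raw_workers = "{cpus}"  # hypothetical templated value
workers = int(raw_workers.format(cpus=os.cpu_count() or 1))
print(workers)  # the machine's CPU count

# Reusing an environment variable inside a settings value.
raw_path = "{home}/app/settings.toml"
resolved = raw_path.format(home=os.environ.get("HOME", "/tmp"))
print(resolved)
```

The idea is the same whichever template engine does the substitution: the settings file stores an expression, and the value is computed where the application actually runs.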

(30:16):
So this is the process, the heavy process that Dynaconf does. So at the end of the day, when your settings are loaded, what you have is a settings object, which is
a class using all the dynamic Python features,
such as
overloading attribute access and all those methods that make

(30:36):
it easy to get a key and look it up in the ChainMap that it holds. And all of this is very quick, because on the ChainMap we have almost constant-time access. So after the loading, everything is very simple. If you look at an overview of the architecture, it's not too much: it's just a simple class. But then we plug in the loaders

(31:03):
and the strategies used in the loaders. Then we have the filters
and, especially, we have the validators, so you can ensure that the settings look the way you want. So, yeah, that's more or less how it's implemented. And as I said, it grew
and evolved
mostly organically. So from that small project, it started becoming a framework. And people

(31:24):
from different
backgrounds are contributing and bringing more features; sometimes they bring features from other settings libraries into Dynaconf.
So we have different ways of doing things.
Given the fact that it has these interfaces
for custom loaders and different strategies,
it definitely is very extensible.
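The settings object described above, attribute access delegating to a ChainMap, can be sketched roughly like this (illustrative only, not Dynaconf's real class):

```python
from collections import ChainMap

class Settings:
    """Attribute access falls through to a ChainMap of the loaded sources."""

    def __init__(self, *sources):
        self._data = ChainMap(*sources)

    def __getattr__(self, name):
        # Called only when normal attribute lookup fails; keys are stored
        # uppercased, so lookups are effectively case-insensitive.
        try:
            return self._data[name.upper()]
        except KeyError:
            raise AttributeError(name) from None

settings = Settings({"DEBUG": True}, {"DEBUG": False, "PORT": 5000})
print(settings.debug)   # True: the first source wins
print(settings.port)    # 5000: falls through to the second source
```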

(31:45):
And I'm wondering how you
thought about the process of deciding
which strategies and which loaders to incorporate into the Dynaconf project
itself as it's packaged and released,
and what capabilities
you are
consciously deciding not to bring into the core, because either it's a bespoke solution for one person or it's something that is not

(32:09):
a sort of core functionality, and you want to push that more into
allowing people to do what they need to do, without overloading the project and overloading your maintainer capacity,
through some of the ways that people are able to hook into Dynaconf to add those customizations.
When I started, I got very excited about adding more features.

(32:31):
It was my vision: I wanted Dynaconf to be able to load all the possible
configuration
formats and all the possible
services targeted at configuration, like Redis, Vault,
etcetera.
And at that point, we added too many features into the library; I know that. So all those loaders that we call the core loaders

(32:55):
for multiple file types
are not really necessary in all Dynaconf installations.
But currently, if you do pip install dynaconf,
you are getting all of those together.
So at some point, there was no extensive thinking about this.
When we released 3.0, I started getting more contributions and we formed a board of co-maintainers,

(33:21):
and people started thinking, we don't want to add more things here; new things need to be added as plugins.
So now we have an initial plugin system. For example, there is a pytest plugin to make it easier to use Dynaconf when you are mutating
settings during your tests.
And we decided not to accept this as a core feature, and then the person that sent the PR

(33:46):
created a separate repository
with the pytest plugin.
That was one or two years ago, I don't remember exactly, but
we started rejecting
more features to focus on fixing and improving the features that we have in the library.
And now we want to improve our plugin system,
and we have plans to have more plugins in the future, and even to remove

(34:11):
some features from the core,
turning them into plugins. This is one of the things on our roadmap. But right now, we are not accepting new features, like new default core loaders. If you want to load something, for example,
from a new file format or a new external service,
we will

(34:32):
suggest you create a plugin.
There is documentation
on how you can do it.
Currently, that's our approach:
to fix and improve things before we add more features,
and to try to keep the library small. It's already small, but we need to make it smaller and have only the core in the main repository.

(34:54):
And then as far as
promoting reusability and collaboration
among the ecosystem of people building on top of Dynaconf, do you have any,
I guess, naming
convention for packages that plug into Dynaconf to add custom functionality, such as if I wanted to add something that allowed for loading key-value data from Consul, for instance? Or

(35:19):
is there any kind of GitHub organization that you have to allow people to contribute their plugins to Dynaconf,
so that people can discover all the options that are out there rather than reinventing the wheel all the time?
Yeah. We have a GitHub organization. It's called dynaconf: github.com/dynaconf.
There is the website, dynaconf.com,

(35:39):
and there, I think the two plugins that exist right now are listed somewhere.
And we have the discussions
forum on GitHub.
So if
people want to contribute a new loader or new functionality, we discuss it there or on our Matrix channel,
and we give the help needed to create the plugin. It's not hard work, because you can create a loader by subclassing the base loader and adding those three objects,

(36:08):
reader, writer, and loader.
And we also now have support for hooks; we got some inspiration from pytest.
So you can provide files with hooks that Dynaconf will call at certain points of its loading process.
So it's possible to customize lots of things in Dynaconf right now. But the main channel is GitHub: github.com/dynaconf.
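A toy version of that loader interface might look like the following. The class and method names here are invented for illustration, so check the Dynaconf docs for the real base-loader API.

```python
class BaseLoader:
    """Minimal stand-in for a loader base class."""
    def load(self, settings):
        raise NotImplementedError

class KeyValueLoader(BaseLoader):
    """Toy loader whose reader is any zero-argument callable, standing in
    for a client talking to Redis, Vault, Consul, and so on."""
    def __init__(self, reader):
        self.reader = reader

    def load(self, settings):
        settings.update(self.reader())

settings = {}
loaders = [
    KeyValueLoader(lambda: {"HOST": "localhost", "PORT": 5432}),
    KeyValueLoader(lambda: {"HOST": "db.internal"}),  # later loaders override
]
for loader in loaders:
    loader.load(settings)

print(settings)   # {'HOST': 'db.internal', 'PORT': 5432}
```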

(36:32):
You mentioned a bit about the process of onboarding onto Dynaconf as part of your project, where it's largely just pip install and start using it. I'm wondering, what are some of the ways that the
implementation
and usage of Dynaconf scale with the complexity of a project, or the complexity of the
environments

(36:53):
or kind of support matrix of a given project as it evolves?
The goal for the project is to be really simple. You just create your Python settings object.
Really,
the idea is that it's not going to be something
you need to care a lot about in your project, besides the validators,
yeah, which are completely

(37:15):
the user's
responsibility
to write. But everything else is really
transparent and outside your project. So the usage has scaled very well. I think there are lots of users right now. I don't remember exactly, but on GitHub we have more than
2,000
repositories that depend on Dynaconf right now.

(37:37):
And recently,
PyPI marked Dynaconf as a
critical project, which I think means it's in the top 1%
of most downloaded libraries.
And
in terms of support,
we have a few contributors
that are really active in the GitHub issues. So if you need something, if you have questions,

(37:59):
it's really quick to get information.
On the Matrix channel, we don't have lots of people, because the channel is more for devs,
but it's also a good source of information if you need help implementing
Dynaconf in your project.
And I would say that GitHub is our main place, in the discussions tab or issues.
And usually it's quick: I tend to answer questions really fast, and we have other people helping. So

(38:27):
it's scaling well. For example, the company I work for is
sponsoring me to work on Dynaconf for a certain amount of my work time.
I have paid time to work on the library, so I'm trying to address lots of issues. We work with open source, and Dynaconf is one of the libraries in our open source portfolio. Yeah.

(38:49):
One of the other topics that probably deserves its own entire episode,
which we touched on briefly earlier, is the question of secrets management,
where
it's one of those things that, again, sometimes seems straightforward
but can be
critical if you get it wrong, because it has the potential to

(39:09):
expose your application, make it vulnerable to exploits or kind of malfeasance.
And I'm wondering, what are some of the ways that you have worked in Dynaconf to strike a balance between
that frictionless
use case that you're aiming for, while providing appropriate guardrails
to prevent credential leakage or prevent people

(39:30):
from circumventing
best practice for how to manage sensitive data in their application?
I think the best way to manage
secrets today
is using encrypted vault services, especially if you are working in the cloud.
The tooling
will depend on the environment you are running in, but, for example, you have AWS Secrets Manager, you have HashiCorp Vault,

(39:54):
and you can use encrypted environment variables, depending on the place you are
deploying your application.
I think that's the best, because you are delegating
the responsibility
for
where the secrets are stored and how they're encrypted to another service that is really good at doing this and well tested specifically for it.

(40:15):
And Dynaconf supports
loading data from HashiCorp Vault out of the box today.
So you can just pass the configuration,
the host, and your token for Vault, and then, I don't remember exactly, but I think you pass the namespace, and it goes there and reads your secrets.
But you can add loaders for other services; it's not hard to do.

(40:38):
But this is something that you're probably going to do in a production environment, when you have a team or an SRE or DevOps engineer that will provide the service for you as a developer.
During development, I think you don't want the work of
deploying or running a Vault service. So we decided to make things easier when you're running in a development environment. Dynaconf supports adding a .secrets.toml file, using the same Dynaconf structure: when you are working, you have a settings.toml or .yaml; the format doesn't matter. You can create a file together

(41:17):
with any settings file, called .secrets plus the extension you are using. This is just a settings file,
but the first way it helps is that
it's added to the .gitignore if you use the Dynaconf CLI to initialize
your project.
So you ensure you are not pushing it to your repository.
But it's a separate file,

(41:39):
so you can keep it out of your repository.
But you also can encrypt this file if you want. You can use the Dynaconf CLI
to add new secrets, and the secrets will be encrypted
using a private key.
And then your local vault is just a simple file with encrypted data inside it. This is not something that you should use in a production system. I know people are using this encrypted secrets file in production; we can't control what users do.

(42:07):
So in production, I recommend using an external service, but this is something we provide to make development and testing easier.
Another thing that we did is that
Dynaconf identifies
when your settings come from a secret.
And then it will not print it in logs. It will not print it to standard output. And when you are using the Dynaconf CLI to inspect your environment,

(42:35):
it will also show just
part of the secret, not the whole secret. So there's some kind of obfuscation on the variable
to keep it secure.
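The obfuscation idea reduces to something like this masking helper (a sketch, not Dynaconf's actual code):

```python
def mask(value, show=4):
    """Show only a short prefix of a secret; pad the rest with asterisks."""
    text = str(value)
    return text[:show] + "*" * max(len(text) - show, 0)

print(mask("sk-live-abcdef123456"))   # only 'sk-l' is visible
print(mask("abc"))                    # short values are shown in full
```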
As users become more advanced or sophisticated
with their use of settings in their application and in their application of Dynaconf,
what are some of the kind of advanced or maybe underutilized

(42:58):
or overlooked capabilities
that are available in the framework?
Yeah. So besides the basic things:
as I said, with env vars you can override,
you can traverse to nested keys.
That's very powerful, especially if you are working in an existing project. I see lots of Django users that are not using it, but it's very powerful,

(43:22):
because we adopted the Django way of doing it, which is
you export a variable.
If it's a dictionary
inside your settings, you can use double underscore
to go nesting
1 level
in our data structure, and then you can keep using double underscore
to traverse to all the other nesting levels. And then you can specifically change that variable you want and that together with the validation

(43:49):
that Dynaconf provides, gives you an
easy way to override settings. So this is an underutilized
capability
that I want to promote more and see people using
better. The other thing is that Dynaconf supports lazy values. You can actually create your own lazy formatter.
Right now it has Jinja,
Right now it has Jinja

(44:09):
and it has
the math
to do like, common math expressions like multiplication,
division, and so on. You can export the variable and mark it as a math or a jinga and put the template or the expression. And then during the load, then a conf will evaluate it. It also works together with validation to be more effective.
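Going back to the double-underscore override mentioned above, the traversal can be sketched like this (the prefix and casing rules are simplified here; see the Dynaconf docs for the real behavior):

```python
settings = {"DATABASES": {"default": {"NAME": "dev.sqlite", "PORT": 5432}}}

def apply_env_override(settings, var, value, prefix="DYNACONF_"):
    # DYNACONF_DATABASES__default__NAME targets settings["DATABASES"]["default"]["NAME"]
    path = var[len(prefix):].split("__")
    node = settings
    for part in path[:-1]:
        node = node[part]        # descend one nesting level per "__"
    node[path[-1]] = value

apply_env_override(settings, "DYNACONF_DATABASES__default__NAME", "prod.sqlite")
print(settings["DATABASES"]["default"]["NAME"])   # 'prod.sqlite'
print(settings["DATABASES"]["default"]["PORT"])   # 5432, untouched
```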

(44:30):
And besides the loaders that we talked about, where you can add new loaders to Dynaconf,
Dynaconf now has a hook system. So you can
write functions that will be hooked into Dynaconf at specific parts of the loading process.
From a hook, you return a dictionary,
and that dictionary's declarations will contribute to the settings. So you can do data transformation.
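The hook mechanism can be pictured like this; the hook-point name and registration style are invented for illustration, so consult the Dynaconf docs for the real API.

```python
# Registry of user functions per (hypothetical) hook point.
hooks = {"post_load": []}

def hook(point):
    def register(fn):
        hooks[point].append(fn)
        return fn
    return register

@hook("post_load")
def derive_urls(settings):
    # Each hook returns a dict that is merged into the settings.
    return {"API_URL": "http://{HOST}:{PORT}".format(**settings)}

settings = {"HOST": "localhost", "PORT": 8000}
for fn in hooks["post_load"]:
    settings.update(fn(settings))

print(settings["API_URL"])   # 'http://localhost:8000'
```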

(44:55):
You can do data validation.
You can do all sorts of custom things with this. And I think
the thing that is most underutilized
in Dynaconf
is validators.
And they're really important, because by default, Dynaconf is schemaless. You just create a settings object
that is a Dynaconf class instance, and then you start looking for

(45:18):
keys inside it. But there is no definition of a schema by default; it's very flexible.
But we provide a way to define a schema. You have a Validator class where you can provide a
validator for each key you want, and you can define the type of the key, the range of the key. I think you asked before: how do you define a range if you have a number, a port, for example, for a service?

(45:40):
So how do you define exactly which options are available there? You can use the validators to define it.
Another good feature of validators is that they have a text field, which is a doc for that setting.
So when you define a
collection of validators,
you put in the rules that Dynaconf will validate and you put in the description

(46:01):
of each one. You can use the Dynaconf CLI to generate documentation
for the supported settings of your application.
So this is really underutilized. I think that's my fault, because I need to promote it more. We are focusing more on the other basic features, but it's very powerful.
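The shape of the validator idea, a type, a range, and a description that doubles as documentation, can be sketched as follows. This is a simplified stand-in; Dynaconf's own Validator class has a richer API.

```python
class Validator:
    def __init__(self, key, is_type_of=None, gte=None, lte=None, description=""):
        self.key = key
        self.is_type_of = is_type_of
        self.gte = gte            # lower bound, inclusive
        self.lte = lte            # upper bound, inclusive
        self.description = description

    def validate(self, settings):
        value = settings[self.key]
        if self.is_type_of and not isinstance(value, self.is_type_of):
            raise TypeError(f"{self.key} must be {self.is_type_of.__name__}")
        if self.gte is not None and value < self.gte:
            raise ValueError(f"{self.key} must be >= {self.gte}")
        if self.lte is not None and value > self.lte:
            raise ValueError(f"{self.key} must be <= {self.lte}")

validators = [
    Validator("PORT", is_type_of=int, gte=1024, lte=65535,
              description="TCP port the service listens on"),
]

settings = {"PORT": 8080}
for v in validators:
    v.validate(settings)          # raises if any rule fails

# The description doubles as documentation a CLI could render:
for v in validators:
    print(f"{v.key}: {v.description}")
```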
And I guess one

(46:22):
kind of side topic that I'm interested in personally is
another
project that I've seen that has some similar capabilities,
at least in terms of being able to
assign
schema and type information
to your settings and do some validations
at load time, and that is the Pydantic project.

(46:42):
And I'm wondering
if you see that as a complementary
or an alternative
solution for approaching settings?
Yeah. So the answer for this question is in the Dynaconf pull request tab. If you go there, you'll see that there is a pull request that is going to be Dynaconf 4.0.
And in Dynaconf 4.0, we are bringing Pydantic into Dynaconf.

(47:05):
So these validators
I told you about: we created the Validator class, it works very well,
and we want to keep it working,
but what we're going to do is provide a way for users to create a schema:
you subclass Dynaconf,
and then you create a schema using type annotations
and Pydantic

(47:26):
fields.
And then Dynaconf will follow
the Pydantic
style of schema. We see that the Python community
is moving to this thing called modern Python.
Everything is going toward this way of programming:
using more dataclasses, more Pydantic with type annotations.

(47:46):
So we are moving this way also. It's taking time because we don't have lots of contributors, but this is on our roadmap for the next version. Yeah. In your experience
of building and using Dynaconf yourself, and working with the community of people who are adopting it for their own projects, what are some of the most interesting or innovative or unexpected ways that you've seen it applied?

(48:08):
Oh, that's interesting.
I was hired some months ago by a company to consult
on a project where they were using Dynaconf.
And it was very old software. They had these data collectors,

(48:29):
like, scanning devices, and these generate a .csv file.
And
at the end of the day, the person collecting the data will select an upload field in the application,
select the CSV file or multiple files, and upload them to a back end. So they were able to
create a new back end using Flask,
(48:50):
but they were not able to upgrade
these outdated
data collectors.
So in the application, on the client side, they have only a single upload field to send files to the server,
and they wanted to gather more information
from the client side. They wanted to have a form with the name, the location, and all the information, but it was not possible, because the client is outdated and they can't update it. So

(49:17):
what they did is add a TOML
file on the mobile device with the information they want, like the name of the user
and all the serial numbers and things. And then, when sending the upload, the user selects the CSV file and selects this TOML file with the information from the local site,

(49:38):
and uploads it to the Flask server. And in Flask,
instead of parsing this
manually, they decided to use Dynaconf for it, which is not the goal of Dynaconf, but it worked: during upload, they drop this file in as a configuration file,
and then when running their pipeline (I think it was an Airflow application,
a pipeline),

(49:59):
they read the settings, and Dynaconf would
load that TOML file and know how to process that CSV file. So, to summarize:
instead of creating a new form in the application,
they replaced the form
with a TOML file, and they use Dynaconf to parse it. And Dynaconf is not a library for doing this. They use Dynaconf as

(50:24):
an ORM or something to process data coming from requests,
and not as a settings library,
but it was working. They didn't want to change it, so
they got a problem when the end users
discovered
that they could
edit the TOML file and override settings that they should not override.

(50:46):
And then we ended up creating
a custom format, not TOML,
and adding a new loader to Dynaconf with more validation
to prevent users from hacking the system.
And it's running with Dynaconf today. So they have lots of collectors. They run multiple
pipelines on Airflow,
and they use Dynaconf

(51:06):
in an unexpected
way. So Dynaconf is their request-parsing
library,
and yeah, I don't think there is any other
person or project using it like this.
Yeah, it's definitely
interesting, the different ways that the old trope of "when you have a hammer, everything looks like a nail" manifests.

(51:31):
Yeah. You can't control what the user does. We try. I've learned this with this and other projects I work on. We always try to build fences
to prevent users from doing this.
But when you work on a project like this that grows organically,
by the time you find
the hole that you need to close,
all the users are already using it. So you can't just remove the feature, because everyone relies on it. So instead of a bug, you just document it and it becomes a feature. Then life goes on, with lots of testing. So,

(52:05):
yeah, that's interesting.
In your own experience
of building Dynaconf
and maintaining it and evolving it, what are some of the most interesting or unexpected or challenging lessons that you've learned in the process?
Okay. So, well,
interesting:
I think
when I
started the project,
I thought that more documentation

(52:26):
would be better; that if I documented
everything I did, it would be the best thing to do.
And that's not true. I found out that
sometimes
more documentation
is bad.
The previous documentation of Dynaconf
had too many docs. We used it to
document, for example, APIs and function calls, in that old-style Python way of documenting

(52:51):
everything
in your library.
But Dynaconf is more like a client tool,
not really a library.
You install it as a client. You are the end user. You are not really
extending it a lot in terms of API.
I learned
that, and I think it's something that is changing
in the software industry. If you look at the newest

(53:12):
open-source software being created,
documentation
is becoming more like a set of tutorials
written in book form. Excellent documentation, like FastAPI's,
I can mention.
You see that the style of the docs is more like a tutorial: you are guiding the person
through workflows

(53:32):
on how to use your application,
instead of documenting
every method or every single thing your application can do; you put everything together. So this was an interesting thing I learned: a different way of writing docs.
And I'm trying to apply this to Dynaconf. There is a lot we need to change there in this regard. And it's more difficult, because you can't just look at the code and document based on the code.

(53:58):
You need to think about use cases
and document
each use case
separately, but it's more powerful in terms of documentation.
An unexpected thing
that I figured out is that Django settings
are really difficult to customize,
while in Flask, you can easily subclass and override the config class, because it's designed for that.

(54:24):
In Django, the only way to customize the settings interface is by patching.
Yeah. So I discussed with some Django core developers ways of making the Django settings easier to extend,
because Django has its settings manager: when you do from django.conf import settings,
it does similar things to Dynaconf, like lazy loading and things like this.

(54:45):
You can add plugins to it. You can customize it in a good
way. So what we did is patch it. All the other libraries
for Django are patching the settings,
and we are constantly trying to get Django developers more involved,
in order to make it easier.
And the other thing that was challenging, I think, is merging data from multiple sources. It's the biggest challenge right now. The source of

(55:12):
most of our bugs and requests and confusion
is: how do you take multiple sources and merge conflicting data,
and how do you decide the precedence,
which one wins and how the final data will look? So, yeah.
We are still learning and trying to find better ways of giving flexibility to people that want to merge data from

(55:35):
different sources.
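One concrete policy for the merge problem described above: a recursive merge where later sources win on scalars, but nested dictionaries are merged key by key. This is just one of several possible precedence policies, not necessarily the one Dynaconf implements.

```python
def deep_merge(base, override):
    """Merge override into base without mutating either argument."""
    out = dict(base)
    for key, value in override.items():
        if isinstance(value, dict) and isinstance(out.get(key), dict):
            out[key] = deep_merge(out[key], value)   # merge nested dicts key by key
        else:
            out[key] = value                         # override wins on everything else
    return out

defaults = {"DB": {"HOST": "localhost", "PORT": 5432}, "DEBUG": True}
production = {"DB": {"HOST": "db.internal"}, "DEBUG": False}

print(deep_merge(defaults, production))
# {'DB': {'HOST': 'db.internal', 'PORT': 5432}, 'DEBUG': False}
```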
And so for
an individual or a team who's looking for
a way to manage settings for their application or their project, what are the cases where Dynaconf is the wrong choice, and maybe they're better suited with a different framework or building their own custom
solution? Well, I like to think that Dynaconf is a good fit for any project that needs to decouple settings from code.

(56:02):
Yeah? So if your project doesn't need to decouple the settings from your code, and you are good with just a constants.py file and everything static there, then maybe Dynaconf is not needed, because you'd be adding the loading overhead and all the rest without actually having the need. But I like to think that it's simple and small, and you can use it in any project,

(56:25):
from a simple project having only a single settings.toml or .yaml file to
a project using multiple environments, thousands of files, and external
services.
However,
if your project is a library
or a plugin
to be added to another project,
I don't recommend using Dynaconf. I've seen people trying to do this and coming to the issues saying that

(56:51):
there was some kind of bug or problem.
And actually, I think libraries
or plugins shouldn't define settings.
Libraries and plugins
should
consume the settings from the application
that is consuming them. So I think
that
would be the good choice: if you are doing a library or a plugin,

(57:12):
you add interfaces so that you can receive the settings from the application that you are plugging into; Dynaconf will definitely not be a good solution there. You're going to be, like, frustrated, because Dynaconf depends on the environment, and you don't control the environment when you are releasing a library or a plugin.
And as you continue to build and maintain and evolve the Dynaconf project, you mentioned a little bit of the work that is in process, but I'm wondering, what do you have planned for the near to medium term, or any areas of feedback or contribution that you're looking for help with?

(57:48):
Yeah. So first, I would like to thank the contributors and co-maintainers of the project. We've gathered a group
of people that helps me make decisions and steer the project in the right direction.
And we recently defined the roadmap for version 4.0, and there are
three big changes. We have small fixes

(58:09):
and improvements, but the three biggest changes are these.
First, we are implementing
that new way to validate settings, based on Pydantic and Python type annotations.
We want to keep the entire current way of doing things working, without breaking any compatibility,
but we are adding this new way for you to define a schema in the Pydantic style, and that will work well.

(58:32):
The second thing
is we are splitting the code base, removing the core features and transforming them into plugins.
So the dynaconf package, when you install it,
will keep being a bundle
with all the
parts installed together,
so it's not going to break any installation.

(58:53):
But then we are creating
external projects in the organization:
a Dynaconf core,
a Vault plugin, a Redis plugin, and so on. Every loader and every piece of functionality that makes sense to be separated will be split into a
separate repository.
Those can then evolve as plugins,
and when deciding which features you want, you can pull in only what you need to build your settings library. This is, like, technical debt we had from the first implementation,

(59:27):
and we are solving it in version 4.0.
And I
think the other one, which is the hardest one, and I'm working a lot on it,
is the data-merging algorithm. I said that data merging is the most difficult thing to do in a settings library,
and Dynaconf today has a counterpart written in Rust. If you go to crates.io, there's a project there called

(59:51):
Hydroconf.
Hydroconf has
the same philosophy, the same strategies as Dynaconf,
written in Rust for Rust web applications.
And when doing this in Rust, we found that the merging algorithm
is more efficient
in Rust.
So
what we're going to do is write this

(01:00:12):
as a separate crate in Rust
and export that crate as a Python module.
And then we use this in Python, because today it's really easy to create a Python module in Rust. And then we're going to fix the data-merging problem, making it much more efficient
using Rust. So those are the three things on the roadmap for version 4.0.

(01:00:37):
And, yeah, hopefully it comes this year.
Well, for anybody who wants to get in touch with you and follow along, I'll have you add your preferred contact information to the show notes. And so with that, I'll move us into the picks. This week, my pick is on the theme of secrets management for applications. So I've been using the SOPS project for a while for some of my work. It's a great utility for being able to encrypt data in flat files and do it in a manner that is

(01:01:06):
relatively secure because it's able to hook into things like either HashiCorp Vault or a different cloud key management solution so that you can use their secret key material to be able to manage the encryption. So it's a bit safer to be able to actually store it in your source repositories and version your secrets along with your code. So definitely recommend taking a look at that for anybody who's interested. It's a project out of the Mozilla group. And so with that, I'll pass it to you, Bruno. Do you have any picks this week?

(01:01:34):
Yeah. So first, a note on your pick: I think it's interesting. I've heard about it, never used it. I hope it becomes a Dynaconf plugin in the future, to use SOPS for the encryption
side. We are currently using a thing called JOSE. I don't know how to pronounce it, but it also does some kind of encryption.
And, yeah, with our plugin system, probably we're going to have SOPS integrated with Dynaconf. So now, on to my picks.

(01:02:02):
Well, I don't have a pick in terms of software, and as you said, it can be anything.
So I recently
watched
a series, very interesting, called Severance,
and I think it's really great. I just finished it and it's on my mind; I can't stop thinking about what's happening there. So I recommend it
for people who like a little bit of sci-fi, but just a little bit, and also computers. It's really interesting.

(01:02:27):
Yeah. So my other pick is going to be: learn Rust. I think the future of Python is full of Rust. We're going to see more and more Rust in the future for building better Python projects. So
it's going to be a skill for every
Python person to learn a little bit of Rust. Yeah.
Alright. Well, thank you very much for taking the time today to join me and share the work that you've been doing on the Dynaconf project. It's definitely a very interesting and full-featured framework, and settings management is an important capability for any application. So I appreciate all of the time and energy that you and the other maintainers and contributors of that project have put into it. So thank you again for your time, and I hope you enjoy the rest of your day. Yeah. Thank you. Thanks for the invitation. It was very nice to participate, and, yeah, have a nice rest of the day.

(01:03:17):
Thank you for listening. Don't forget to check out our other shows, the Data Engineering Podcast, which covers the latest on modern data management, and the Machine Learning Podcast, which helps you go from idea to production with machine learning. Visit the site at pythonpodcast.com
to subscribe to the show, sign up for the mailing list, and read the show notes. And if you learned something or tried out a project from the show, then tell us about it. Email host at pythonpodcast.com

(01:03:40):
with your story.
And to help other people find the show, please leave a review on Apple Podcasts and tell your friends and coworkers.