Episode Transcript
Available transcripts are automatically generated. Complete accuracy is not guaranteed.
S1 (00:19):
Hey. What's up? All right, episode 497. Yeah. Hope things
are starting off well this week. Um, got some lab
results back. This is definitely too much information, but, uh,
comprehensive metabolic panel is pristine. Liver is great. Kidneys not so good.
(00:43):
Got to improve kidney function. Uh, turns out too much
creatine is a bad thing. I was actually getting dizziness
from ten grams of creatine, so I think I'm going
to have to go back down to five, maybe even lower. Uh,
because it turns out creatine is not great for the kidneys. And, um,
(01:06):
it's not like real bad, but it's just like, not optimal.
So I need to optimize that, uh, HDL a little
bit too low, LDL a little bit too high. So
I've basically been super sedentary for like the last 4
or 5 months because of working on Kai and all the, uh,
(01:28):
all the crazy stuff going on with AI and projects and,
you know, all the typical stuff, all the crazy world stuff. So, um,
starting to try to adjust to this, uh: stand-up
alarms with my standing desk, um, a foam
seating pad with, like, a cutout, so I'm not, like, sitting, um,
(01:52):
in an uncomfortable position all day. Mandatory cardio twice a week. Uh,
table tennis or rucking weights at least twice a week. Uh,
more coffee to balance the Celsius. So that was the
other problem. Wasn't just creatine. I'm drinking like two Celsius
a day. Sometimes more, but usually only 1 to 2,
(02:18):
and energy drinks and creatine are two of the things
on the list for, um, things to avoid. If you
have kidney function that's, uh, not optimal. And exercise, yeah,
raises HDL. So let me know if you're on a
(02:38):
similar journey. That's kind of where I'm at. Oh, sugar
was great. So my metabolic health was ideal. It was
like perfect. So that was a great story.
Okay, Kai update: massive context-loading breakthrough. So I am
now doing a really cool thing where I basically
(03:01):
changed the hook to point to load-dynamic-context, and
I moved that to point to load-dynamic-context.md,
so now my routing is actually in the .md file,
which means I can actually navigate that whole thing contextually.
Instead of having to edit TypeScript code, I am now
(03:23):
able to write it out in text, in markdown, basically describing: yeah,
if you see this type of question, um, that means
I want you to use the researcher. If you see
this type of thing, that means dynamically load this piece
of context. If you see me talk about this project,
that means go load this piece of context, which means
(03:44):
my primary boot load just gets my core context right.
So this whole UFC system that I talked about in
the video is, um, it's all based on only loading
what you need to load at that particular moment and
keeping the context extremely clean. So, uh, this is just
(04:05):
it's really powerful and it's working perfectly. Like I'm not
having any misses. It's actually dynamically loading exactly what I need.
And it's not just the context files, but it actually
loads the correct researcher as well. Or I'm sorry, the
correct agent, which could be researcher, engineer, designer or pentester.
(04:26):
It can load all those different agents according to the
type of asset that, um, that I give it. Right.
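The routing-in-markdown idea can be sketched roughly like this. To be clear, the rule syntax, file names, and function names below are my own invention for illustration, not the actual PAI files:

```typescript
// Hypothetical sketch of markdown-based routing: the hook reads rules from a
// markdown file, so changing routing means editing text, not TypeScript.
// Rule lines look like: "- research, investigate -> agents/researcher.md"
type Rule = { triggers: string[]; load: string };

function parseRules(md: string): Rule[] {
  const rules: Rule[] = [];
  for (const line of md.split("\n")) {
    const m = line.match(/^- (.+?) -> (.+)$/);
    if (m) {
      rules.push({
        triggers: m[1].split(",").map((s) => s.trim().toLowerCase()),
        load: m[2].trim(),
      });
    }
  }
  return rules;
}

// Given a user prompt, return which context/agent files to load dynamically.
function route(prompt: string, rules: Rule[]): string[] {
  const p = prompt.toLowerCase();
  return rules
    .filter((r) => r.triggers.some((t) => p.includes(t)))
    .map((r) => r.load);
}

const routingMd = [
  "- research, investigate -> agents/researcher.md",
  "- design, mockup -> agents/designer.md",
  "- pentest, vuln -> agents/pentester.md",
].join("\n");

console.log(route("go research this CVE", parseRules(routingMd)));
// -> [ 'agents/researcher.md' ]
```

The point of the design is that the primary boot load stays small: nothing here loads until a trigger matches, which keeps the context clean.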
So pretty. Pretty cool. And, um, speaking of Kai,
speaking of the whole PAI system, I am open sourcing it.
In fact, I already have. In fact, it's already live.
(04:47):
And I was just going to put the framework up
there and say, hey, look, I'm going to add stuff later.
But then I just started adding stuff. Um, and now
I've added a whole bunch of stuff. It's kind of
like the basic structure. It doesn't have the voice stuff
in there yet, but I do have my basic outline
for the main context file and the main tools file,
(05:10):
and then the pointers to where, like, it's pretty obvious
where to go put the other context for the other pieces.
So that is a lot of infrastructure there. I said
in one of the git commits, I'm probably definitely going
to leak something because I'm putting so much of like
my actual stuff out there, and it's like, no secrets.
(05:31):
Like I'm literally sharing the actual files. A couple of
things are like redacted for like keys and stuff like that.
But other than that, like, this is seriously the exact
infrastructure that Kai is using, and you're going to have
to tweak it for that reason, right? Because I still
have Kai's name in there. I still have, like, exactly
what I'm saying to him, exactly what his personality
(05:53):
is and stuff like that. So you'll have to customize
some stuff. But the whole purpose of this is going forward.
When you're watching videos on YouTube that I'm talking about,
I'm not going to say, oh, here's like a gist
file or like, here's some code in the YouTube description.
It's going to be like, hey, go check the PAI
(06:14):
repository and you will see it. And the PAI repository
is set up with a very clear mission. It's like, look,
enable humans to have this functionality. It's the same as Fabric,
the same mission as Fabric. And now Fabric basically falls
under this, right? It falls under PAI because Fabric is
just one of the sets of tools that I have available. Um,
(06:38):
so Kai has full access to Fabric, has full access
to my Foundry stuff, which is the fobs and the
custom commands, the custom Claude commands, like all the different stuff.
It's all in the commands directory, and Kai
can use all of it. And that is the
whole purpose of this entire project. And it is now
(06:59):
open source. So capital PAI is the name of
the project on GitHub. Okay. Oh, yeah. Just thought of
a cool one. Um, I was sitting here trying to write
the newsletter, uh, and just kept adding more stuff. So
I've got this one. I just added it. So sick.
(07:22):
It's called capture learning. It's a command. And so the
idea is, if I'm working on something
with Kai and we make something new, or we're troubleshooting something
and we finally get it to work. I'm just going
to say, hey, write this down or document this, or
capture learning or whatever. And that same thing that I
(07:45):
told you about before, um, that dynamically loads context? Well,
it could also listen, because it's being triggered on the prompt.
The UserPromptSubmit hook is what's actually triggering this.
So if Kai hears anything like that, Kai now knows
to go and execute capture learning. What does that do?
(08:08):
That drops it into the learning context directory. The learning
context directory includes a full report of the lesson: the
false assumption that we had, the mistake that we were making,
or whatever. It talks through the problem and the process,
because it has the whole log of our conversation, the
(08:31):
process of how we got there, and then what the
solution was, and then what the takeaway is. So that's
documented for each one. I'm going to have that going
forward now. And guess what you could do: you could
now say something like "harvest learnings" and then have that
(08:51):
go and compare your current documentation, your current workflows, your
current pipelines, your current context and instructions and everything, and
compare it with the lessons learned and say, is there
anything here that we should update based on the lessons
learned over the last week or the last year or whatever.
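A rough sketch of what this capture-learning / harvest-learnings pair could look like. The trigger phrases, field names, and function names here are assumptions based on the description above, not the actual commands:

```typescript
// Hypothetical sketch: detect "capture learning"-style phrases in a submitted
// prompt, and render the report fields described above (lesson, false
// assumption, process, solution, takeaway) as a markdown document body.
const TRIGGERS = ["write this down", "document this", "capture learning"];

function shouldCapture(prompt: string): boolean {
  const p = prompt.toLowerCase();
  return TRIGGERS.some((t) => p.includes(t));
}

type Learning = {
  lesson: string;
  falseAssumption: string;
  process: string;
  solution: string;
  takeaway: string;
};

function renderReport(l: Learning): string {
  return [
    `# Learning: ${l.lesson}`,
    `## False assumption\n${l.falseAssumption}`,
    `## Process\n${l.process}`,
    `## Solution\n${l.solution}`,
    `## Takeaway\n${l.takeaway}`,
  ].join("\n\n");
}

// "Harvest learnings": surface takeaways so they can be compared against
// current docs, workflows, and instructions, and flagged for updates.
function harvest(reports: Learning[]): string[] {
  return reports.map((r) => r.takeaway);
}

const example: Learning = {
  lesson: "Creatine dosing",
  falseAssumption: "More is always better",
  process: "Noticed dizziness, checked labs",
  solution: "Dropped from 10g to 5g",
  takeaway: "Re-check supplement doses against lab work",
};

console.log(shouldCapture("hey, capture learning from that fix")); // true
console.log(harvest([example]));
```

The harvest step is where the meta-improvement happens: the takeaways become input to a periodic review of existing documentation rather than sitting in a folder.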
So I just think this is a really cool sort
of meta-improvement concept. All right. Cybersecurity. NPM's most used
(09:18):
UI packages got backdoored to hijack crypto. Yeah. This, uh,
Aikido company evidently found this and, um, put out a
report about it. It's 18 different packages and, uh, yeah,
rewritten to steal crypto. Uh, lots of supply chain stuff
(09:40):
going on. Uh, I really like the idea of using
AI for many eyes. I feel like many eyes is
like my big, uh, thing I keep coming back
to with AI. And I think I talked about
this maybe last week or the week before, but SSL,
(10:01):
you remember the giant SSL vulnerability? I think we still
called it SSL back then. Maybe it was already TLS,
but anyway, that was a huge vuln and everyone was like, hey,
how could this possibly happen? This is open source. And
it's like, well, turns out open source is just a guy.
I heard someone say that recently. Open source is just
(10:21):
a guy. You know, it's just Chris Richards or whatever
who lives in Idaho. Turns out that whole open source,
many eyes things. Turns out it's just Chris and Chris
hurt his ankle and he hasn't been online in a while.
You know what I mean? So it's like Many Eyes
didn't turn out to be what we thought it was
(10:41):
going to be. Because what we thought many eyes
was going to be was many eyes looking at the
project, but what it turned out to be was many
eyes could look at the project, but they weren't
actually looking. And so this is
why I love the idea of AI. Many eyes. Millions
(11:05):
of eyes. Right. Cheap eyes that don't get tired, don't
get sick or whatever. And you just stick them on
the internet. You're just like, hey, go watch Chris's project.
You know, watch the code, you know, see if anything
hasn't been updated, like, maybe reach out and ping them
(11:27):
and say, hey, do you want to update something? Maybe
offer to help, uh, you know, maintain the project or something?
Or more importantly, if something nasty is in there, you
can actually find it, go and report it to him
or whatever. Or at the very least, maybe submit an
issue and say there's a vulnerability here, right? It's like
(11:50):
these are the eyes that we need. And I guess
the naive assumption was, well, we'll just train people up
in the school or in the trade schools, and eventually
we'll have millions of developers, millions and millions and they'll
all be security trained. And guess what? It didn't happen
(12:10):
because one, there's not that many of them. And two,
I mean, who's got time for that? Who's got time
for going and watching all the internets, you know, open
source projects and making sure there's nothing nasty in them
or nothing nasty is being done to them. The answer
is nobody. And it's likely to be nobody forever. Um,
(12:35):
so I think this AI agents thing is a really
good way to achieve this whole many-eyes thing. My
buddy Clint Gibler and team found tons of vulns with
a ten-line Claude Code prompt. I think it found
like 400 vulns, and I think he said something like
20-something vulns were real. And it's actually fairly easy,
(12:57):
I think, um, to whittle that down and get rid
of a lot of those false positives. But the important
point to me is that ten lines of, you know,
a prompt in Claude Code actually found real vulns.
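To give a feel for the scale being described, here is a hypothetical reconstruction of that kind of prompt. This is not Clint Gibler's actual prompt, just the shape of the idea:

```typescript
// Hypothetical reconstruction (NOT the actual prompt): a short security-review
// prompt you could hand to a coding agent in headless mode and run across
// many repositories.
const vulnHuntPrompt = [
  "You are a security reviewer. Audit this repository for vulnerabilities.",
  "Focus on injection, auth bypass, SSRF, path traversal, and hardcoded secrets.",
  "For each finding, report the file, line, severity, and a one-line fix.",
  "Mark anything speculative as LOW confidence so it can be filtered later.",
].join("\n");

// In practice you would run it from the repo root in headless/print mode,
// e.g. something roughly like: claude -p "<the prompt above>"
console.log(vulnHuntPrompt.split("\n").length); // 4
```

Note the last line: asking the model to self-label low-confidence findings is one way to make the false-positive cleanup stage mentioned below easier to build.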
These are real projects that are sitting online. Who knows
(13:17):
if they've been assessed before, but either way, it found
real vulns. Now, it was noisy. But again, I think
that's a filter, um, that is likely to, uh, be
pretty easy to build to just, um, you know, take
that in stages and clean those up. Um, yeah. And
there might be a blog post about that. Uh, maybe
(13:41):
a follow up would be nice, um, to maybe talk
about that whole cleanup process, but, uh, very cool work.
Hiring fraud flips zero trust from network to identity. So, yeah,
somebody's talking here about hiring fake people. Like, they don't
(14:04):
have to break in and they don't have to phish
you if they can just get onboarded as regular employees
with full credentials. So this whole fake hiring thing is
becoming a serious issue. It's funny that it's happening at
the same time that people can't get hired. So it's like, okay, well,
if nobody's getting hired, and obviously it's not nobody. But
(14:28):
if it's hard to get hired right now, why are
we hiring all these North Koreans anyway? Yeah. Identity becoming
more important than zero trust. Or like the headline said,
zero trust moves to identity instead of, you know, perimeter
or whatever. Critical fixes for Chrome. Definitely go update.
(14:54):
Scattered Spider now targets browsers as enterprises move operations there. Yeah. So, uh,
my buddy Jason Haddix has been talking about AI moving
to the browser as the next phase. And he's right.
He's right. I didn't think it would go this heavy,
but it's going heavy like tons of browsers coming out
(15:15):
with AI built directly in. And, uh, yeah. And now
attackers are going after it as well because. And this
was Jason. This is what Jason was explaining to me
why he thought this was going to happen. It's just
like it is the focal point. Like we are usually
in the browser. We need to log into things with
the browser. So we've got like our passkeys set up.
(15:37):
We got like our password set up. And it's just
like it's a focal point of so much that we do.
So it's a good attack point. Attackers can serve poison
websites only to agents. Yeah, I love this. This is
a really cool one. So you can actually. Yeah. Have
(15:59):
a site that looks fine and acts fine if it's
a human browsing it and using it. But if it's
an AI, they basically get fed, uh, prompt-injection stuff
that takes them astray. Kind of like steganography, actually, if you
think about it. US puts $10 million bounties on three Russian
(16:20):
FSB hackers. Yeah. This administration, Trump, is a little bit
going after Putin, in a way that
I quite like. Seems like the relationship is fraying, and
I'm very happy with that. And it looks like he's
going after some of their, uh, hacker groups. Yeah. Good news.
(16:41):
CISOs face growing pressure to hide breaches. CISA pushes universal
software ingredient lists. Yeah, this is great. This is all
part of the whole supply chain stuff. My buddy, uh,
Sasha, is over at Crosspoint, and he talks tons
about software supply chain stuff. Uh, and definitely in relation to, uh,
(17:06):
ReversingLabs, which, as far as I can tell, is
the best company doing this stuff. I'm not affiliated with them,
but they are, uh, really cool at this stuff. And, um, yeah,
probably should get, uh, more affiliated with them. Um, but yeah.
CISA's talking about this. Uh, we got the NPM story.
(17:30):
Lots of people have been working on this. It goes
back to what I was just saying about Many Eyes.
It reminds me of also vendor lists, like I used to, uh,
work a lot with vendor lists when I was at Apple, and.
Holy crap, you think you got a lot of vendors?
You should see Apple. They got a lot of vendors. Um,
but how many eyes do you need to watch all
(17:54):
the vendors that you have? Let's say you're a small startup.
Let's say you're a medium sized company. Whatever. You don't
have to be Apple or Google, right? How many eyes
do you need to watch all the different contracts to watch. Conduct, request. Read, parse,
make sense of and do something with all the different
(18:17):
assessments that have to be happening continuously. Then how about
determining where that stuff is installed on your network and
what impact that might have to the business? How many
people you need for that? Do you need to hire
three extra people? Because that's going to be a hard ask.
Like we're trying to reduce headcount right now. We can
(18:37):
maybe get you one. How much is that going to help? Right.
I mean, it's just ridiculous. It's a ridiculous question. So
we need millions of extra eyes who can watch continuously
and constantly. Like, it's just like, you know, the solution
(18:59):
is not, you know, an extra person. So I love
this combination of many eyes with the supply chain problem.
You can find bugs by reading code. Yeah. Really cool. Um,
article here by Alex. Over 1,100 exposed Ollama servers found
via Shodan. And that's messed up, because the question is,
via Shodan. And that's messed up because the question is,
what models are they running? What models are they pointing to?
Are we using up their GPUs? Like, yeah, what infrastructure
is that on? You can start asking questions like, are
they vulnerable? Can you pivot? But you can also ask
resource-exhaustion questions. National security. US launches a Department of
(19:47):
War site. So our guy was supposed to be, like,
the isolationist guy, getting us out of this stuff.
Not going to go political here, but. Yeah. Department of War. Cool.
America steps back from the information fight. This one is
(20:07):
disturbing to me. Anne Applebaum says the current team is
gutting US overseas media and anti-propaganda systems, which, in my opinion,
is basically handing that over to China and Russia to
have like, full control over the narrative and story about
the West, West versus East, you know, China versus US,
(20:30):
Russia versus the West, whatever. I mean, this work is
nasty regardless. It's nasty. If the US is doing it
and of course we have done it and we continue
to do it, I'm sure. So it's kind of gross
no matter who's doing it, but. I don't know. That's
(20:51):
a longer conversation and definitely, uh, somewhat political. So I'm
going to skip it. But I would say if you
don't want China and Russia to win, maybe don't allow
them to control the narrative in every other country other
than the US. F-35s head to Puerto Rico as US
(21:15):
Venezuela standoff heats up. Hmm. Yeah. Sending ten F-35s to
Puerto Rico after Venezuelan F-16s buzzed a Navy destroyer. And. Yeah,
and then that's when that, uh, cartel boat got hit.
(21:39):
Strange times. Uh, State Department says non-immigrant visa applicants now
need to interview from their actual country or country of residence,
not inside the US. Sudan closes oil facilities after drone strikes.
DHS wants $100 million for counter drone systems to protect America.
(22:01):
I'm on board. I'm on board. I think we need
to stop spending money on F-35s or whatever modern fighters.
I think we need to move to the next phase.
And China is using its private sector to advance military capabilities. Uh, yes. Yes.
(22:24):
And it's explicit and it's like direct. And it's kind
of super gross and nasty, but also not really. It's nationalistic.
It's like, I mean, doesn't super bother me. What bothers
me is when they claim it's not happening and they're
trying to, you know, spread across the world and basically
(22:46):
get control of everything. And they're hiding this and people
are accepting them in like with Huawei, for example. Like
that's the part that bothers me. Them being nationalistic and
having unity between their private sector and the goals of
the country doesn't really bother me. AI. OpenAI
(23:08):
launches an academy and jobs board to reskill displaced workers.
I mean, to give credit to Sam Altman here, he's
been saying for like the whole time, starting in like 22,
probably way before that. But I started paying attention mostly
in 22. And he's like, look, there's going to be
tremendous job loss from AGI or AI in general or whatever.
(23:33):
But definitely after AGI, depending on how you define it.
But he's like, yeah, we're going to have to figure
out things for people to do. He's been saying this
over and over in interviews for years and years and years. Um,
so if you didn't know that, then this is going
to look kind of crazy. Like all the reports come
out saying everyone's getting laid off. And then he's like, hey,
(23:55):
you know, we launched a job board. It's a bulletin board.
And you can come and you can pin a piece
of paper to it, and someone can come by and
offer you a job. Thanks, Sam. Super helpful. That's one
way to look at it. Another way to look at
it is obviously you would want to do this to
help people. I'm not sure if this is the same
(24:19):
article as the one competing with LinkedIn. Did I include
that one or not? That was something else I read.
I think it is. I think this is the one
that people are saying is competing with LinkedIn. It's probably
how could they possibly launch two? Anyway, there's another thing
where they're trying to compete directly with LinkedIn and
(24:42):
basically help people find jobs by matching skills, which is
exactly what LinkedIn Microsoft is doing. A lot of people
forget that Microsoft owns LinkedIn. I forget sometimes, which is
why I just said a lot of people. But really,
(25:03):
it's just me. Anthropic agrees to pay authors at least
1.5 billion in settlement. So they're doing this to try
to avoid a much bigger one. This is for authors
of books. So roughly $3,000 per book. Huh? Can I
(25:27):
get $3,000? How do I find out if my data
is in here? Anthropic valued at $183 billion after $13 billion
raise. Anthropic blocks Chinese-owned firms from Claude. This is
because of that threat research that came out that basically
showed a whole bunch of Chinese companies are actually using
it to do, you know, military oriented stuff and then
(25:51):
handing it directly to the Chinese government. So they put a bunch
of them on a disallow list. DeepMind's AI cuts gravitational wave
detector noise by 100,000. I'm telling you DeepMind Google stuff,
they are quiet and they just make these massive jumps.
(26:17):
And I'm very long on Google, very long on Google,
especially as it relates to AI. I think their
business mind is there, especially with Sundar. Like, he's
like the real deal. And they woke up on AI,
like they're just crushing it. AI agents could
(26:40):
reshape entire economies within the next decade. This is an
arXiv paper. AGENTS.md emerges as the universal
standard for AI coding-agent instructions. So I'm messing with
using AGENTS.md versus CLAUDE.md with Kai. I don't know.
I've gone back and forth twice now. Um, right now
(27:02):
I'm currently back on CLAUDE.md because I felt like I
was getting less enforcement. Yeah, I'm going to watch it closely.
High school senior says AI is destroying real learning for
his generation. And, uh, yeah. Says classmates constantly use ChatGPT
for everything from literature annotations to math homework and basically
(27:27):
copy paste exercises from AI, which. Well, I mean, that's
to be expected. The question is, are they also doing
good things with it? And I think from this person's answer,
the answer is no. And this was an article in
The Atlantic. Okay. Um, evidence shows AI is already replacing
(27:51):
entry level workers. Derek Thompson looks at this. Yeah, there's
just more and more data on this coming out. This.
I don't think it's really debatable at this point, but
good to have additional data there in the newsletter. Technology.
Google gets to keep Chrome but must end exclusive deals
and share search data. So kind of a mostly a
(28:14):
win I would say for Google to not get, uh,
Chrome divested out. OpenAI moves to make its own AI
chips with Broadcom. UC Santa Cruz engineers show Wi-Fi
signals can measure heart rate. Yeah, I love this stuff.
You can actually see through walls. You can measure your
heart rate. You can. Yeah. You could do all sorts
(28:34):
of stuff with Wi-Fi signals. Can't wait to see more
of this. Tech comes to the consumer side. SpaceX can
now launch 120 times a year from Florida. Humans. Job
growth stalls as unemployment hits a four-year high. Looks like
we added just 22,000 jobs in August. Got a new
(28:57):
drug that outperforms aspirin for heart attack prevention. Private equity
rentals make suburbs more diverse, but apply pressure to buyers.
So sometimes it raises the prices, but sometimes it does
bring in other people who normally couldn't get into that neighborhood.
Interesting idea to have, like, the houses being bought up
(29:21):
by giant investment companies. And it's like I guess it's
just another landlord. So I guess maybe it's not that
much different than what we already have. But landlords now,
it's just a guy, you know, just a guy or
a woman or whatever. And it's just like, they have
(29:42):
like three properties and that's their life as a landlord. It feels
kind of weird to just see houses go off the
market and it's, you know, BlackRock or whoever it is.
More people are opting out of news to protect their mood. 40%
now skip news, according to Reuters. Mathematicians just broke an
(30:06):
87-year-old knot theory conjecture. And attention loops quietly
rewrite your model of reality. Whatever you stare at long
enough starts echoing back and reshaping you. Yeah. Who was it?
Mark Twain? Somebody said, uh, you
(30:27):
become whatever you pay attention to. So be careful what
you pay attention to. Not sure if that was Mark
Twain or not. Phones on the toilet raise hemorrhoid risk. Discovery.
Rick Rubin's The Way of Code. This is a very
strange site. Gorgeous. Not sure exactly what it has to
(30:49):
do with code, but I was just checking it out.
Quite nice. Muscle memory turns repeated agent tasks into deterministic replays.
This thing is sick, so it monitors what your AI
is doing on your behalf when it's calling tools and everything.
And then it's just like learning how to do that
and turning that into steps that it can then replay.
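The core mechanism, as I understand it, can be sketched like this. The class and method names are mine for illustration, not the actual Muscle Memory API:

```typescript
// Hedged sketch of the replay idea: record the tool-call sequence an agent
// used for a task; on a repeat of the same task, replay the cached calls
// deterministically instead of asking the model again.
type ToolCall = { tool: string; args: Record<string, string> };

class ReplayCache {
  private runs = new Map<string, ToolCall[]>();

  record(task: string, calls: ToolCall[]): void {
    this.runs.set(task, calls);
  }

  // Cache hit: deterministic replay. Miss: the caller falls back to the agent.
  replay(task: string): ToolCall[] | undefined {
    return this.runs.get(task);
  }
}

const cache = new ReplayCache();
cache.record("resize-images", [
  { tool: "glob", args: { pattern: "*.png" } },
  { tool: "convert", args: { size: "800x600" } },
]);

console.log(cache.replay("resize-images")?.length); // 2
console.log(cache.replay("unknown-task")); // undefined
```

The win is cost and determinism: the expensive, nondeterministic model run happens once, and repeats become cheap replays.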
(31:12):
So yeah, it's just kind of like learning from what
AI does and incorporating that into your workflows. Being good
isn't enough to win anymore. John Swords. His last name is Swords.
Is that the coolest last name? It's one of them.
Argues that craft alone doesn't carry you. You need distribution,
(31:34):
timing and taste. Or your good never gets found. This
is why I'm arguing that everyone needs to become a creator.
It's not that you need to become what a creator
is today, but creator just needs to be the new
way of living, which is thinking, building, sharing. I'm arguing
that these should be human things, not creator things. Blogs
(31:59):
used to be weird and personal. So this is a
blog by Jet Girl. Really really good. You should check
it out. Cutting end-to-end test time 84% with the
Claude Code SDK. The Claude Code SDK, by the way,
is just: you run claude with a -p switch,
and that is a standalone, like, AI. It's completely insane.
(32:22):
I don't know why they call it the SDK. It's
just a -p switch. Anyway, probably something I'm missing there.
But anyway, that is it's extremely powerful functionality and I'm
going to be using it a lot more. Make your
own handwriting font easily. Claude fakes tool use unless you
(32:43):
enforce strict tool calling. I have this in tons of
my scaffolding. Well, it's all in
the PAI repo now; you can go check it out.
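One way to enforce strict tool calling is via the tool_choice field. The request shape below follows Anthropic's Messages API as I understand it, but the model id and the tool are placeholders, so check the current docs before relying on it:

```typescript
// Hedged sketch: forcing a specific tool via tool_choice so the model must
// emit a real tool call rather than narrating a fake one. This only builds
// the request body; actually sending it requires an API client and key.
const requestBody = {
  model: "claude-example-model", // placeholder; use whatever model you run
  max_tokens: 1024,
  tools: [
    {
      name: "get_weather",
      description: "Get current weather for a city",
      input_schema: {
        type: "object",
        properties: { city: { type: "string" } },
        required: ["city"],
      },
    },
  ],
  // "auto" lets the model decide; { type: "tool", name: ... } forces this tool.
  tool_choice: { type: "tool", name: "get_weather" },
  messages: [{ role: "user", content: "Weather in Paris?" }],
};

console.log(requestBody.tool_choice);
```

Without that constraint the model can answer in prose that merely describes a tool call, which is the failure mode the headline is about.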
AI enables live phone calls across language barriers. Oh, I
just saw the, um, Apple event today. Two new
AirPods with live transcription. Talking to anybody, they could just talk
(33:06):
in their language and you get the translation in your ear.
Absolutely insane. I wonder if it works at the haircut
place with Vietnamese people. Like having conversations around you and
you don't know if it's like. Damn, his feet are big.
Or if it's like they're trying to figure out lunch.
(33:29):
Like you're not quite sure. Tech interviews reward complexity over
practical engineering. Diallo's blog post. Yeah, this is a really
good blog on kind of the problems with overall interviewing, which, yeah,
(33:50):
there's just yeah, this is why a lot of AI interviewing,
recruiting services I think are starting to do fairly well
is not so much that they're awesome, it's just that
the status quo is notably bad. Chibi (I think that's
how you pronounce it) analyzes why users churn. This is
(34:14):
actually just a straight link to a vendor, and I
just thought the vendor was pretty cool. And um, no
sponsorship obviously. Otherwise I'd have like a block for them and,
you know, the asterisk or sponsorship or something to indicate
that it's a sponsor. But no, this is just some
pretty cool tech. I feel like churn is like one
(34:35):
of the biggest things. It's, like, not about gaining
the users; it's about keeping the users. Random walks almost
never return home in three dimensions. This is a cool
physics thing. Recommendation. Whoa. Recommendation of the week. I'm going
to reuse one that I typically have to ask myself
(34:57):
and kind of always try to ask myself. And that
is especially important right now with like extraordinary change, I think.
What would you do if you weren't afraid? Now, normally
I would follow up and say, you shouldn't be afraid, right?
Everything's fine. And, you know, you should just be okay
and think what you would do if you weren't afraid.
(35:19):
But everything isn't okay. But that doesn't mean it's still
not a really good time to ask this question. So
I think the question still stands. What would you do
if you were not afraid? Here's what I would challenge
here, just to add to this: think about what your
ideal self looks like. Think about what you and your
(35:44):
perfect Super Saiyan form. What abilities and skills would you have?
What confidence would you have? What narratives would you be
able to tell people? And here's the question: is that
person more prepared for what is coming in the future
than you are? And if that's the case, then you
need to think up some dramatic ways, some extreme ways
(36:07):
to possibly migrate to that version of yourself, because that
version is going to survive a lot better maybe, than
your current version. And the aphorism of the week, very
much in line with this, the privilege of a lifetime
is to become who you truly are. The privilege of
a lifetime is to become who you truly are. Carl Jung.