
March 10, 2025 12 mins

Create agentic solutions quickly and efficiently with Azure AI Foundry. Choose the right models, ground your agents with knowledge, and seamlessly integrate AI into your development workflow—from early experimentation to production. Test, optimize, and deploy with built-in evaluation and management tools. See how to leverage the Azure AI Foundry SDK to code and orchestrate intelligent agents, monitor performance with tracing and assessments, and streamline DevOps with production-ready management.

Yina Arenas, from the Azure AI Foundry team, walks through its extensive capabilities as a unified platform that supports you throughout the entire AI development lifecycle.

► QUICK LINKS: 
00:00 - Create agentic solutions with Azure AI Foundry

00:20 - Model catalog in Azure AI Foundry

02:15 - Experiment in the Azure AI Foundry playground

03:10 - Create and customize agents

04:13 - Assess and improve agents

05:58 - Monitor and manage apps

06:50 - Create a multi-agentic app in code

09:26 - Create a Sender agent


Episode Transcript

Available transcripts are automatically generated. Complete accuracy is not guaranteed.
(00:02):
If you're looking to create agentic solutions and want to move quickly and efficiently, Azure AI Foundry is the one place for discovering and accessing the right building blocks for your agents, with everything you need for AI development. Today, I'll share the essentials for Azure AI Foundry, starting with a tour of its extensive capabilities

(00:23):
as a unified platform that supports you throughout the entire AI development lifecycle; from initial concept with early experimentation, coding in your preferred IDE, pre-production assessment, management in production, and beyond, followed by a real example of the steps for creating a multi-agent application

(00:44):
using the new Azure AI Agent Service in Azure AI Foundry, all integrated with your code. Starting with our tour, you can easily reach Azure AI Foundry at ai.azure.com, and once you've created a project, panning down the left rail, you can quickly see the core experiences. First, the model catalog helps you discover

(01:06):
and access a growing collection of thousands of models to power your individual agents as you build your system, including premium large language models from OpenAI, Meta, DeepSeek, Cohere, and more, as well as small language models like Microsoft Phi. And of course, there are hundreds of open models

(01:26):
like those from Hugging Face for you to try out. Models are also available by area of specialization. For example, there are regional and focused models to support interactions with different spoken languages, like Mistral for European languages and Jais for Arabic. And separately, there are industry-specific models

(01:46):
that you can choose from. The entire model catalog is hosted on Microsoft's supercomputer infrastructure in Azure for optimized cost-performance. Next, in terms of model deployment, you can choose to run models on hosted hardware with managed compute, and for the most popular premium models, you can use our serverless API option.

(02:07):
As you use Azure AI Foundry, you can of course also bring your own models to run on your Azure infrastructure. Then, to help you choose the right model for your agent, you can easily experiment in our playground. For agents, you can add knowledge to ground your model. You can choose files to upload, use an existing search index,

(02:29):
or add web knowledge using Microsoft Bing. There are also options to add data from Microsoft Fabric as well as SharePoint to connect with data in Microsoft 365. You can also define actions for your agents to perform, like calling APIs, functions, or using Code Interpreter to write and run Python code to automate processes.

(02:50):
And back on the homepage, clicking into AI Services provides additional task-specific capabilities that you can use to augment your agents, like speech, translation, vision, and content safety. So right from the start of your application design, you can leverage Azure AI Foundry to evaluate AI models and services for your application. Next, to create and customize agents,

(03:13):
the new Azure AI Agent Service helps you orchestrate AI agents without managing the underlying resources. Importantly, everything you do in Azure AI Foundry is integrated with your coding workspaces. In the code experience, you can take advantage of multiple templates as well as a cloud-hosted, pre-configured dev environment to get started.

(03:34):
And importantly, there is integration with GitHub, your code in Visual Studio, and even Copilot Studio for your low-code apps, where you can connect to Azure AI services and more. This means that your work in Azure AI Foundry carries on seamlessly into your code and agents, or you can do everything from your code by using a single API

(03:55):
and calling Azure AI Foundry capabilities as service endpoints when you create projects leveraging the Azure AI Foundry SDK. For example, you can connect to different models using the new Azure AI model inference endpoint, which lets you easily compare models without changing your underlying code.
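
To make this concrete, here is a minimal sketch of comparing two models behind the model inference endpoint, assuming the azure-ai-inference Python package; the endpoint URL, API key, and deployment names are placeholders for your own resources.

```python
# Minimal sketch (assumptions: azure-ai-inference package; placeholder endpoint,
# key, and deployment names). Only the model name changes between runs, so the
# surrounding code stays the same while you compare models.
from azure.ai.inference import ChatCompletionsClient
from azure.ai.inference.models import SystemMessage, UserMessage
from azure.core.credentials import AzureKeyCredential

client = ChatCompletionsClient(
    endpoint="https://<your-resource>.services.ai.azure.com/models",  # model inference endpoint
    credential=AzureKeyCredential("<your-api-key>"),
)

for model in ["gpt-4o", "Phi-3.5-mini-instruct"]:  # placeholder deployment names
    response = client.complete(
        model=model,
        messages=[
            SystemMessage(content="You are a concise research assistant."),
            UserMessage(content="Summarize what an AI agent is in one sentence."),
        ],
    )
    print(model, "->", response.choices[0].message.content)
```
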
And as you create your agent, you can easily assess and improve the experience.

(04:17):
In fact, Azure AI Foundry offers a range of capabilities for centralized observability to help you as you continuously iterate, such as application tracing for debugging and performance checks, along with detailed views of execution flows integrated with your Application Insights resources. Additionally, automated evaluations help you continuously assess the quality of AI outputs

(04:40):
based on key metrics,
like relevance to look at how well
the model meets expectations,
groundedness to see how well the model
refers to your grounding data,
fluency for the language proficiency
of the answers, and more.
From there, you will use this information
to create reporting, set up alerts,
and share dashboardswith other stakeholders.
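
As a rough illustration of how these metrics can be scored from code, here is a hedged sketch assuming the azure-ai-evaluation package; the judge model configuration and sample inputs are placeholders, and exact evaluator signatures may differ between SDK versions.

```python
# Hedged sketch (assumptions: azure-ai-evaluation package; placeholder judge model
# config and sample data; exact evaluator signatures may differ between SDK versions).
from azure.ai.evaluation import RelevanceEvaluator, GroundednessEvaluator, FluencyEvaluator

model_config = {
    "azure_endpoint": "https://<your-aoai-resource>.openai.azure.com",
    "api_key": "<your-api-key>",
    "azure_deployment": "gpt-4o",
}

query = "What is Azure AI Foundry?"
context = "Azure AI Foundry is a unified platform for building and managing AI agents and apps."
response = "Azure AI Foundry is a unified platform that supports the AI development lifecycle."

# Each evaluator returns a dict of scores for its metric.
print(RelevanceEvaluator(model_config)(query=query, response=response))
print(GroundednessEvaluator(model_config)(query=query, context=context, response=response))
print(FluencyEvaluator(model_config)(response=response))
```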

(05:02):
You can also take advantage of built-in safety and security controls for text, image, and multimodal content that go beyond basic system prompt guardrails to automatically detect and optionally block unwanted inputs and outputs for content involving violence, hate, sexual, and self-harm topics.
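
For illustration, here is a minimal sketch of screening text with the Azure AI Content Safety client, assuming the azure-ai-contentsafety package; the endpoint, key, and sample text are placeholders.

```python
# Minimal sketch (assumptions: azure-ai-contentsafety package; placeholder endpoint,
# key, and sample text). Screens a piece of model output before it reaches users.
from azure.ai.contentsafety import ContentSafetyClient
from azure.ai.contentsafety.models import AnalyzeTextOptions
from azure.core.credentials import AzureKeyCredential

client = ContentSafetyClient(
    endpoint="https://<your-content-safety-resource>.cognitiveservices.azure.com",
    credential=AzureKeyCredential("<your-api-key>"),
)

result = client.analyze_text(AnalyzeTextOptions(text="Some model output to screen."))

# Each category (Hate, SelfHarm, Sexual, Violence) returns a severity score;
# block or rewrite the output when any severity crosses your chosen threshold.
for category in result.categories_analysis:
    print(category.category, category.severity)
```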

(05:22):
Azure AI Foundry services can also help you onboard more advanced techniques as you optimize the output of your AI applications. For example, built-in services like model fine-tuning let you adapt model output with specific training data sets that you define, helping you improve model accuracy and effectiveness in real-world applications.

(05:44):
Additionally, integrations with Semantic Kernel and AutoGen as well as LangChain let you orchestrate execution flows for multi-agent processes, making it easier to embed AI into new or existing workflows. Then, as your apps move into production, we give you tools to monitor and manage resource utilization.

(06:04):
Integration with Azure Monitor and Application Insights helps you quickly observe trends and get alerts for key generative AI metrics. And the centralized management center helps simplify ongoing resource management and governance tasks, like managing quota, access permissions, and connected resources. Additionally, built-in integration

(06:26):
across Microsoft's security and governance stack enables you to enforce organizational standards and compliance with Azure Policy, manage identity-based access to data and services with Microsoft Entra, leverage your data security and compliance from Microsoft Purview, and protect your AI apps at scale with ongoing threat detection

(06:46):
and security posture management using Microsoft Defender. So, with our tour complete, next, let me show you how you can create a multi-agent application using Azure AI Foundry along with Semantic Kernel for orchestration. I'll start by explaining the agentic app scenario, which should sound familiar if you've ever written a report.

(07:08):
It's a four-agent solution that can be initiated with any topic. There is a researcher agent that gathers information from the internet as my defined knowledge source. This process loops with the writer agent, which uses the information provided or requests more until it is satisfied. The writer agent then creates the report

(07:29):
and loops with the editor agent, which can request additional edits until it is satisfied. And once it has approved the report text, it shares the output with the sender agent, which emails the report using Outlook in Microsoft 365. These multi-agentic scenarios are similar in concept to microservices and other modular architectures.
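
Before building it, here is a purely illustrative plain-Python outline of that control flow; the agent objects and their methods are hypothetical stand-ins, not the Azure AI Foundry or Semantic Kernel API, and the actual orchestration is shown later with Semantic Kernel.

```python
# Purely illustrative control flow for the four-agent report pipeline described above.
# The researcher/writer/editor/sender objects and their methods are hypothetical
# stand-ins; the actual orchestration later in this walkthrough uses Semantic Kernel.
def produce_report(topic, researcher, writer, editor, sender):
    findings = researcher.research(topic)

    # The writer loops with the researcher until it has enough material.
    while writer.needs_more_information(findings):
        findings += researcher.research(writer.follow_up_question(findings))

    draft = writer.write_report(topic, findings)

    # The editor loops with the writer until the draft meets its requirements.
    while not editor.approves(draft):
        draft = writer.revise(draft, editor.requested_edits(draft))

    # Approval triggers the sender agent, which emails the report via Outlook.
    sender.email_report(draft)
    return draft
```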

(07:50):
There are several benefits to breaking down a monolithic process; the pattern just has a new name now. So let's build it. I'll begin in Azure AI Foundry, and in the Agents page I can see the agents that I've already started building, like the writer and the editor agents. The researcher and the sender agents are missing because we're going to build them right now.

(08:11):
I'll start with the research agent as a new agent. Next, the setup pane on the right gives me my agent configuration options. I'll give it a name, Research agent, and then under Deployment I can choose the model I want this agent to use. I'll pick gpt-4o. Next, I'll provide it with instructions for what it is supposed to do.

(08:33):
Since it is the research agent, I'll instruct it to use Bing search to find information. And because the research agent is part of this four-agent team, I'll specify that it should not try to write the report, which is the job of the writer agent. It should just provide the data. Next, I'll add a knowledge source. Again, we want Bing to ground the agent

(08:55):
with public information from the web. Then I just need to select an Azure connection, and once I hit Connect, our agent is done. To try it out, I'll use the playground. I'll ask "What is .NET," and it will generate a summarized result using knowledge from Bing search. And because it is the research agent and not the writer, you will see that its results are super concise

(09:17):
but dense with knowledge about the topic. Next, I could specify actions, but I don't need to. This agent already has everything it needs. And so our research agent is ready to go, and I can move on to creating our sender agent, which by the way is going to need some defined actions. To create our email sender agent,

(09:37):
I'm going to switch over to VS Code and use the SDK. I have this Python file open, and if you look at the very bottom of the screen, there is a create_agent command. And just like we saw in the Azure AI Foundry portal, I can point it to the model for its deployment, define its name, and add its instructions, as well as its tools.

(09:58):
As the email sender agent, we'll provide it with Outlook as a tool, and when I run this file, it will create an agent inside of Azure AI Foundry.
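
Here is a rough sketch of what that create_agent call can look like with the Azure AI Foundry SDK, assuming the azure-ai-projects (preview) and azure-identity packages; the connection string is a placeholder, and send_email is a hypothetical stand-in for the Outlook tool used in the video.

```python
# Rough sketch (assumptions: azure-ai-projects and azure-identity packages, preview APIs;
# placeholder connection string; send_email is a hypothetical stand-in for the Outlook tool).
from azure.ai.projects import AIProjectClient
from azure.ai.projects.models import FunctionTool
from azure.identity import DefaultAzureCredential

def send_email(to: str, subject: str, body: str) -> str:
    """Hypothetical helper the agent can call to deliver the finished report."""
    print(f"Sending '{subject}' to {to}")
    return "sent"

project_client = AIProjectClient.from_connection_string(
    credential=DefaultAzureCredential(),
    conn_str="<your-project-connection-string>",
)

email_tool = FunctionTool(functions={send_email})

agent = project_client.agents.create_agent(
    model="gpt-4o",
    name="sender-agent",
    instructions="When you receive an approved report, email it to the requester. "
                 "Do not change the report content.",
    tools=email_tool.definitions,
)
print("Created agent:", agent.id)
```
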
In fact, if I move back to our list of agents in the portal, we can see that our sender agent was just created. And so now, with all of my four agents created, it's time to wire them up using Semantic Kernel.

(10:19):
Back in VS Code, I'll open my program file where I've already started using Semantic Kernel to describe its broader process, and you will see all the logic for how each agent interacts with the others. As I explained in the graphic, each agent needs to satisfy the requirement for its task before moving to the next step in the process. Now, you might be wondering

(10:41):
how to connect the orchestration logic together with my four agents. Well, let me show you. Each agent has its own configuration file for our Semantic Kernel orchestration. I'll connect the researcher agent config with the agent ID from Azure AI Foundry. So, if I go back to the Azure AI Foundry portal and select the researcher agent,

(11:01):
I can just copy the agent ID and go back to add it to my code, and you would do this process for each of the agents.
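
As a purely illustrative example of that per-agent configuration step, here is a hypothetical config file and loader; the file layout and helper are assumptions for this sketch, not a documented Semantic Kernel format.

```python
# Hypothetical per-agent config and loader; the file layout and helper are assumptions
# for this sketch, not a documented Semantic Kernel format.
import json
from pathlib import Path

# researcher_agent.json (hypothetical):
# { "name": "research-agent", "agent_id": "<id copied from the Azure AI Foundry portal>" }

def load_agent_id(config_path: str) -> str:
    return json.loads(Path(config_path).read_text())["agent_id"]

agent_ids = {
    name: load_agent_id(f"{name}_agent.json")
    for name in ("researcher", "writer", "editor", "sender")
}
# These IDs are then handed to the Semantic Kernel orchestration so each step of the
# process runs against the matching agent hosted in Azure AI Foundry.
```
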
Now with everything connected and complete, let's try it out. Back in my code, I'll go ahead and run it to see how well our agents work together. My program asks, "What would you like a report on?" Let's make it Python,

(11:22):
but not the code, the snake. While these agents work, I can watch exactly what they're doing and comment on them play by play. First, we can see the researcher agent pulling some material from the web. The writer agent can then pick things up. But wait, the writer agent isn't easily satisfied and needs some additional research on user sentiment.

(11:43):
The researcher agent comes back with that detail, but the writer agent still has questions and needs additional facts about the pet trade and other topics. The researcher agent, unfazed, comes back with that information. And once the writer is satisfied, it starts generating the report. When it's finished, it sends the report to the editor agent,

(12:04):
and it looks like the writer agent met all of the requirements, which is confirmed and approved by the editor agent, and the approval triggers the sender agent to send it out as an email. In fact, if I move over to Outlook, we can see the Report on Python Snakes just landed in my inbox, and that was just one example of how you can create agentic solutions

(12:24):
to automate business processes. Azure AI Foundry helps you create powerful agents quickly and efficiently by providing a unified platform with extensive capabilities throughout the entire AI development lifecycle. To get started, just head over to ai.azure.com. Subscribe to Microsoft Mechanics if you haven't already,

(12:44):
and thank you for watching.