Manage the latest versions of Azure Stack HCI with SCVMM
Azure Stack HCI is a hybrid cloud solution that lets you run virtualized workloads on-premises with direct access to Azure services. It combines the performance, security, and scalability of hyperconverged infrastructure (HCI) with the flexibility and innovation of Azure.
To take full advantage of these capabilities at datacenter scale, you need a powerful and reliable management solution that can handle the complexity and scale that come with large deployments. To address these requirements, customers can continue to use System Center components as the management solution for larger deployments of Azure Stack HCI 23H2 clusters for a select set of scenarios, while using Arc-based management of HCI clusters for other scenarios.
Supported Azure Stack HCI scenarios with System Center
The following scenarios will be supported in SCVMM to manage Azure Stack HCI 23H2:
Addition, creation and management of Azure Stack HCI clusters.
Ability to provision and deploy Virtual Machines (VMs) on the Azure Stack HCI clusters and perform VM lifecycle operations.
Set up networking on Azure Stack HCI clusters.
Deployment and management of SDN network controller on Azure Stack HCI clusters.
Management of storage pool settings, creation of virtual disks, creation of cluster shared volumes (CSVs) and application of QoS settings.
Migration of VMware and Windows Server based workloads to Azure Stack HCI.
Management of Azure Stack HCI clusters using the same PowerShell cmdlets used to manage Windows Server clusters.
Azure based VM self-serve capabilities and Azure management services through Azure Arc-enabled SCVMM.
Supported Azure Stack HCI scenarios through Azure and WAC
The following scenarios will continue to be supported from the Azure Portal/WAC to manage Azure Stack HCI 23H2:
Creation of Azure Stack HCI clusters.
Register and unregister Azure Stack HCI clusters from VMM.
Upgrading Azure Stack HCI 22H2 clusters to 23H2.
Enablement of Azure benefits on VMs running on Azure Stack HCI clusters.
All operations on Azure Stack HCI clusters deployed with Windows Defender Application Control (WDAC).
All new Azure Stack HCI 23H2 features like GPU-Partitioning, SDN Multi-site, etc.
All Azure Stack HCI features that were previously unsupported with SCVMM like Stretched clustering.
When is the support for Azure Stack HCI 23H2 coming with System Center?
Azure Stack HCI 23H2 support will be added to the next LTSC version of System Center. The General Availability of the next LTSC version of System Center will be closer to the General Availability of Windows Server 2025.
Contact us
The System Center team is committed to delivering new features and quality updates with the LTSC and UR releases at a regular cadence. For any feedback and queries, you can reach us at systemcenterfeedback@microsoft.com.
Update records in a Kusto Database (Public Preview)
Kusto databases, whether in Azure Data Explorer or in a Fabric KQL Database, are optimized for append ingestion.
In recent years, we introduced the .delete command, allowing you to selectively delete records.
Today we are introducing the .update command. This command allows you to update records by deleting existing records and appending new ones in a single transaction.
This command comes with two syntaxes: a simplified syntax covering most scenarios efficiently, and an expanded syntax giving you maximum control.
Here is an example of the simplified syntax:
.update table MyTable on Id <|
MyTable
| where Id==3
| extend Color="Orange"
This command updates all records where Id==3 by replacing the value of the Color column with "Orange".
As mentioned above, the command really does a .delete and an .append in one go. In this case, it is equivalent to these two commands:
.delete table MyTable records <|
MyTable
| where Id==3

.append MyTable <|
MyTable
| where Id==3
| extend Color="Orange"
The one difference from actually running those two commands is that the append query runs against the state of the table prior to the deletion. Indeed, if you ran the two commands above in sequence, the .append command wouldn't append anything, since the records with Id==3 would already have been deleted by the first command.
This is a good way to show how the same command would be represented using the expanded syntax:
.update table MyTable delete D append A <|
let D = MyTable
| where Id==3;
let A = MyTable
| where Id==3
| extend Color="Orange";
The expanded syntax allows you to explicitly define the delete and append queries.
Both syntaxes support a whatif mode where the command doesn’t change the table but returns the expected changes. We recommend always starting with a whatif mode to validate the predicates.
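If you drive your pipelines from code, the command can be issued like any other management command. Below is a minimal sketch using the azure-kusto-data Python SDK; the cluster URI and database name are placeholders, and the with(whatif=true) property shown here is our reading of the whatif option, so verify the exact syntax against the online documentation.

from azure.kusto.data import KustoClient, KustoConnectionStringBuilder

cluster = "https://<your-cluster>.kusto.windows.net"  # placeholder URI
kcsb = KustoConnectionStringBuilder.with_az_cli_authentication(cluster)
client = KustoClient(kcsb)

# Run the update in whatif mode first to preview the affected records.
command = """
.update table MyTable on Id with(whatif=true) <|
MyTable
| where Id==3
| extend Color="Orange"
"""

# .update is a management command, so it goes through execute_mgmt.
response = client.execute_mgmt("MyDatabase", command)
for row in response.primary_results[0]:
    print(row)  # inspect the expected changes, then rerun without whatif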
We encourage you to go through the many examples on the online documentation page to familiarize yourself with the syntax.
We believe this new command gives you an alternative for your data pipelines. Many loading scenarios involve updating records. For instance, ingesting new data in a staging table to then update the records of a main table with those new records. This is now possible with the .update command.
The command is in public preview, and we are looking forward to your feedback!
Always Encrypted with secure enclaves – Intel SGX vs VBS
Always Encrypted with secure enclaves is a feature of Azure SQL Database that allows you to protect sensitive data from unauthorized access, even from the database administrators. Secure enclaves are regions of memory isolated from the server that can perform computations on encrypted data without revealing the plaintext. When processing SQL queries, the database engine delegates computations on encrypted data to a secure enclave. The code in the enclave decrypts the data and performs computations on plaintext. This can be done safely, because the enclave has strong isolation guarantees. It is a black box to the containing database engine process and the OS, so database administrators or machine administrators cannot see the data inside the enclave.
By leveraging secure enclaves, Always Encrypted can support rich confidential queries, including pattern matching, range comparisons, sorting and more. It also enables in-place cryptographic operations, such as encrypting existing data or rotating the data encryption keys.
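To make this concrete, here is a minimal client-side sketch (not from the original post) of a rich query over an encrypted column from Python. It assumes pyodbc with a recent Microsoft ODBC Driver for SQL Server, the ColumnEncryption connection-string keyword, and a hypothetical ContosoHR database with an encrypted Salary column; enclave attestation settings differ between SGX and VBS, so check the documentation for the exact value your setup needs.

import pyodbc

conn_str = (
    "Driver={ODBC Driver 18 for SQL Server};"
    "Server=tcp:myserver.database.windows.net,1433;"  # hypothetical server
    "Database=ContosoHR;"                             # hypothetical database
    "Authentication=ActiveDirectoryInteractive;"
    "ColumnEncryption=Enabled;"  # SGX additionally needs an attestation protocol/URL
)
conn = pyodbc.connect(conn_str)
cursor = conn.cursor()

# The parameter is encrypted transparently by the driver, and the range
# comparison itself is evaluated inside the server-side secure enclave.
cursor.execute("SELECT EmployeeID FROM Employees WHERE Salary > ?", 50000)
for row in cursor.fetchall():
    print(row.EmployeeID)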
Azure SQL Database supports two types of secure enclaves: Intel SGX enclaves and VBS enclaves. In this blog post, we will compare these two options and help you choose the best one for your use case.
What are Intel SGX enclaves and VBS enclaves?
Intel Software Guard Extensions (Intel SGX) enclaves are a hardware-based trusted execution environment (TEE) technology. Intel SGX protects data actively being used in the processor and memory by creating an isolated region, the enclave.
Virtualization-based Security (VBS) enclaves (also known as Virtual Secure Mode, or VSM, enclaves) are a software-based technology that relies on the Windows hypervisor and doesn't require any special hardware. The hypervisor creates a logical separation between the "normal world" and the "secure world", designated by Virtual Trust Levels VTL0 and VTL1, respectively. VBS secure memory enclaves provide a means for secure computation in an otherwise untrusted environment.
What are the advantages and disadvantages of Intel SGX and VBS enclaves?
The main advantage of Intel SGX enclaves is that they provide stronger security guarantees than VBS enclaves. Intel SGX enclaves are resistant to attacks from the host operating system.
The main disadvantage of Intel SGX enclaves is their limited availability. Databases require specific hardware (DC-series), which is not supported by all Azure SQL Database service tiers and regions. Let us know if you need a region to be enabled where we currently do not support DC-series. In addition, DC-series comes at an extra cost because of the specific hardware needed, and it is limited to a maximum of 40 physical cores.
The main advantage of VBS enclaves is their wider availability compared to Intel SGX enclaves, because there is no hardware dependency. VBS enclaves can run on any Azure SQL Database service tier in any region and come at no extra cost.
The main disadvantage of VBS enclaves is that they provide weaker security guarantees than Intel SGX enclaves. VBS enclaves help protect your data from attacks inside the VM. However, they don’t provide any protection from attacks using privileged system accounts originating from the host.
Below is a summary comparison of Intel SGX and VBS enclaves:
Hardware: Intel SGX is available in the DC-series hardware configuration; VBS has no hardware dependency.
Purchasing model: Intel SGX supports the vCore model; VBS supports both DTU and vCore.
Compute mode: Intel SGX supports provisioned compute; VBS supports both provisioned and serverless.
Compute size: Intel SGX supports up to 40 (physical) vCores; VBS supports any compute size (up to 128 vCores).
Regional availability: Intel SGX is available in East/West US, North/West EU, Canada Central, UK South, and Southeast Asia; VBS is available in all Azure regions.
Security: Both protect from rogue customers' DBAs. Intel SGX protects from attacks originating from both the guest and host OS (rogue cloud operators, malware) and supports attestation using Microsoft Azure Attestation; VBS protects from attacks originating from the guest OS (rogue cloud operators, malware) but not the host OS, and currently supports no attestation.
How to choose between Intel SGX and VBS enclaves?
The choice between Intel SGX enclaves and VBS enclaves depends on your security requirements. Think about whom you want to protect your data from: do you want to protect it from malicious insiders only, or also from the host provider? If you need the highest level of security, you should use Intel SGX enclaves.
The table below can help you with that decision.
DBAs connecting over TDS, querying encrypted columns without access to the encryption keys: protected by both Intel SGX and VBS enclaves.
VM (guest OS) administrators, generating a memory dump of the SQL Server process or scanning its memory: protected by both Intel SGX and VBS enclaves.
Data center/host administrators, generating a memory dump of the host server: protected by Intel SGX enclaves, but not by VBS enclaves.
If needed, you can always switch the enclave type by changing the service level objective (SLO) of the database. In general, no application changes are needed when you switch from VBS to Intel SGX or the other way around.
Conclusion
Unlike Intel SGX, VBS is a software-based solution with no hardware dependency. This allows us to bring the benefits of Always Encrypted with secure enclaves to all Azure SQL Database offerings, so that you can use the feature with a compute tier (provisioned or serverless), a purchasing model (vCore or DTU), a compute size (currently, up to 128 vCores), and a region that best matches your workload requirements. And, since VBS enclaves are available in existing hardware offerings, they come with no extra cost. It is important to note that Intel SGX enclaves remain a recommended option for customers who seek the strongest level of protection, including the isolation from host OS administrators, which VBS enclaves do not provide.
Learn more
Always Encrypted with secure enclaves documentation
Getting started using Always Encrypted with secure enclaves
GitHub Demo
Data Exposed episode (video)
Master Azure OpenAI Services with Azure for Students: A Comprehensive Guide for Students
Unleashing the Power of Azure OpenAI Services with Azure for Students: A Guide for Students
If you have a keen interest in artificial intelligence, you’re in the right place. Today, we’re going to explore an incredible learning resource that can help you dive deep into the world of Azure OpenAI Services, all thanks to Microsoft Azure.
What is OpenAI?
OpenAI is a cutting-edge artificial intelligence research lab that aims to ensure that artificial general intelligence (AGI) benefits all of humanity. It’s a fascinating field that’s shaping the future of technology, and there’s no better time than now to start learning about it.
Why Learn OpenAI Services with Azure?
Microsoft Azure for Students offers a comprehensive suite of cloud services that developers and IT professionals use to build, deploy, and manage applications. By learning OpenAI services with Azure, you’ll not only understand the intricacies of AGI but also learn how to leverage the power of Azure to develop and deploy your AI models.
The Ultimate Learning Resource: Microsoft Learn Collection
Microsoft Learn has curated an amazing collection dedicated to OpenAI. This collection is a treasure trove of knowledge, packed with learning modules that cover everything from the basics of OpenAI to advanced concepts. The best part? It’s absolutely free!
You can access the Microsoft Learn OpenAI Collection here.
Let’s Get Started!
Whether you’re a beginner or an advanced learner, this collection has something for everyone. So why wait? Dive in and start your journey with Azure OpenAI Services today. Remember, the future belongs to those who learn. So, let’s learn and lead!
Happy learning! If you're interested in learning more, see our open-source Generative AI for Beginners course and learn the fundamentals of building generative AI applications across 18 lessons.
Microsoft’s commitment to Azure IoT
On February 14th, an erroneous system message appeared regarding the deprecation of Azure IoT Central. The message stated that Azure IoT Central would be deprecated on March 31st, 2027 and that, starting April 1, 2024, you would no longer be able to create new application resources. This message is not accurate and was presented in error.
Microsoft does not communicate product retirements using system messages. When we do announce Azure product retirements, we follow our standard Azure service notification process, including a three-year notification period before discontinuing support. We understand the importance of product retirement information for our customers' planning and operations. Learn more about this process here: 3-Year Notification Subset – Microsoft Lifecycle | Microsoft Learn
Our goal is to provide our customers with a comprehensive, secure, and scalable IoT platform. We want to empower our customers to build and manage IoT solutions that can adapt to any scenario, across any industry, and at any scale. We see our IoT product portfolio as a key part of the adaptive cloud approach.
The adaptive cloud approach can help customers accelerate their industrial transformation journey by scaling adoption of IoT technologies. It helps unify siloed teams, distributed sites, and sprawling systems into a single operations, security, application, and data model, enabling organizations to leverage cloud-native and AI technologies to work simultaneously across hybrid, edge, and IoT. Learn more about our adaptive cloud approach here: Harmonizing AI-enhanced physical and cloud operations | Microsoft Azure Blog
Our approach is exemplified in the public preview of Azure IoT Operations, which makes it easy for customers to onboard assets and devices to flow data from physical operations to the cloud to power insights and decision making. Azure IoT Operations is designed to simplify and accelerate the development and deployment of IoT solutions, while giving you more control over your IoT devices and data. Learn more about Azure IoT Operations here: https://azure.microsoft.com/products/iot-operations/
We will continue to collaborate with our partners and customers to transform their businesses with intelligent edge and cloud solutions, taking advantage of our full portfolio of Azure IoT products.
We appreciate your trust and loyalty and look forward to continuing to serve you with our IoT platform offerings.
Building AI Agent Applications Series – Assembling your AI agent with the Semantic Kernel
In the previous articles in this series, we learned the basic concepts of AI agents and how to use AutoGen or Semantic Kernel together with the Azure OpenAI Service Assistant API to build AI agent applications. Different scenarios and workflows require assembling powerful tools to support the agent's operation. If you rely only on the agent framework's built-in tool chain to solve enterprise workflows, you will be very limited. AutoGen supports defining tool chains through function calling, so developers can define different methods to assemble extended business work chains. As mentioned before, Semantic Kernel has good capabilities for creating, managing, and engineering business plug-ins. By combining AutoGen and Semantic Kernel, you can build powerful AI agent solutions.
Scenario 1 – Constructing a single AI agent for writing technical blogs
As a cloud advocate, I often need to write technical blogs. In the past, I needed a lot of supporting material. Although I could produce some of it through prompts and LLMs, some professional content might not meet the requirements. For example, I want to write based on a recorded YouTube video and a syllabus: I combine the video script and an outline built around three questions as the basic material, and then start writing the blog.
Note: We need to save the data as vectors first. There are many ways to do this; you can choose different frameworks for embedding and vector processing. Here we use Semantic Kernel combined with Qdrant. Ideally, this step would be folded into the technical blog writing agent itself, which we will introduce in the next scenario.
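As a rough illustration of that vectorization step, the sketch below uses a pre-1.0 semantic-kernel Python package with its Qdrant connector. Class and method names changed between versions, so treat the names here (AzureTextEmbedding, QdrantMemoryStore, save_information_async) as assumptions and check the version pinned in the repo.

import semantic_kernel as sk
from semantic_kernel.connectors.ai.open_ai import AzureTextEmbedding
from semantic_kernel.connectors.memory.qdrant import QdrantMemoryStore

kernel = sk.Kernel()
kernel.add_text_embedding_generation_service(
    "embedding",
    AzureTextEmbedding(
        deployment_name="text-embedding-ada-002",  # hypothetical deployment
        endpoint="https://<your-aoai>.openai.azure.com/",
        api_key="<your-key>",
    ),
)
kernel.register_memory_store(
    memory_store=QdrantMemoryStore(vector_size=1536, url="http://localhost:6333")
)

# Split the video script into chunks beforehand, then store each chunk.
for i, chunk in enumerate(video_script_chunks):
    await kernel.memory.save_information_async(
        "ml-knowledge", id=f"chunk-{i}", text=chunk
    )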
Because an AI agent simulates human behavior, the steps to configure are the same ones I follow in my daily work:
1. Find relevant content based on the question.
2. Set a blog title, extended content, and related guidance, and write it in markdown.
3. Save the result.
We can complete steps 1 and 2 through Semantic Kernel. For step 3, we can simply read and write files the traditional way. We need to define three functions here: ask, writeblog, and saveblog. Once they are defined, we configure function calling and set the corresponding parameters and function names.
llm_config = {
    "config_list": config_list,
    "functions": [
        {
            "name": "ask",
            "description": "ask question about Machine Learning, get basic knowledge",
            "parameters": {
                "type": "object",
                "properties": {
                    "question": {
                        "type": "string",
                        "description": "About Machine Learning",
                    }
                },
                "required": ["question"],
            },
        },
        {
            "name": "writeblog",
            "description": "write blogs in markdown format",
            "parameters": {
                "type": "object",
                "properties": {
                    "content": {
                        "type": "string",
                        "description": "basic content",
                    }
                },
                "required": ["content"],
            },
        },
        {
            "name": "saveblog",
            "description": "save blogs",
            "parameters": {
                "type": "object",
                "properties": {
                    "blog": {
                        "type": "string",
                        "description": "basic content",
                    }
                },
                "required": ["blog"],
            },
        },
    ],
}
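For orientation, the bodies of the three functions might look like the following minimal sketch; this is not the repo's exact code. The kernel.memory.search_async call assumes the Semantic Kernel memory store configured earlier, and generate_completion is a hypothetical completion helper.

async def ask(question: str) -> str:
    # Step 1: retrieve grounding snippets for the question from the vector store.
    results = await kernel.memory.search_async("ml-knowledge", question, limit=3)
    return "\n".join(r.text for r in results)

async def writeblog(content: str) -> str:
    # Step 2: ask the model to turn the gathered material into a markdown blog.
    prompt = f"Write a technical blog in markdown format based on:\n{content}"
    return await generate_completion(prompt)  # hypothetical helper

def saveblog(blog: str) -> str:
    # Step 3: plain file I/O, no LLM involved.
    with open("blog.md", "w", encoding="utf-8") as f:
        f.write(blog)
    return "Blog saved to blog.md. TERMINATE"  # signals the termination check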
Because this is a single AI agent application, we only need to define an Assistant and a UserProxy, state our goal, and describe the relevant steps in the message.
assistant = autogen.AssistantAgent(
    name="assistant",
    llm_config=llm_config,
)

user_proxy = autogen.UserProxyAgent(
    name="user_proxy",
    is_termination_msg=lambda x: x.get("content", "") and x.get("content", "").rstrip().endswith("TERMINATE"),
    human_input_mode="NEVER",
    max_consecutive_auto_reply=10,
    code_execution_config=False,
)

user_proxy.register_function(
    function_map={
        "ask": ask,
        "writeblog": writeblog,
        "saveblog": saveblog,
    }
)

with Cache.disk():
    await user_proxy.a_initiate_chat(
        assistant,
        message="""
I'm writing a blog about Machine Learning. Find the answers to the 3 questions below and write an introduction based on them. After preparing these basic materials, write a blog and save it.
1. What is Machine Learning?
2. The difference between AI and ML
3. The history of Machine Learning
Let's go
""",
    )
We tried running it, and it worked fine. For the full code and results, see the repo linked at the end of this post.
Scenario 2 – Building a multi-agent interactive technical blog editor solution
In the scenario above, we successfully built a single AI agent for technical blog writing. We would like the workflow to be even more intelligent: from content search, to writing and saving, to translation, everything completed through AI agent interaction. We can use different job roles to achieve this goal. This could be done by having LLMs generate code in AutoGen, but the uncertainty of that approach is a bit high, so it is more reliable to define additional methods through function calling to ensure the accuracy of each call. The division of labor among the roles is as follows:
Admin – Defines the various operations through UserProxy, including the most important methods.
Collector KB Assistant – Responsible for downloading the subtitle scripts of technical videos from YouTube, saving them locally, extracting the different knowledge points, vectorizing them, and saving them to the vector database. Here I only handled video subtitle scripts; you can also add local documents and support for different types of audio files.
Blog Editor Assistant – When the data collection assistant completes its work, it hands over to the blog editor assistant, which writes the blog as required based on a simple question outline (title setting, content expansion, markdown format, and so on) and automatically saves the blog locally when finished.
Translation Assistant – Responsible for translating the blog into different languages. Here it translates into Chinese (this can be expanded to support more languages).
Based on the above division of labor, we need to define different methods to support it; again, we can use Semantic Kernel to complete the related operations.
Here we use AutoGen's group chat mode to complete the blog work. You can clearly see that a whole team is working for you, which is the charm of agents. Set it up with the following code:
groupchat = autogen.GroupChat(
    agents=[user_proxy, collect_kb_assistant, blog_editor_assistant, translate_assistant],
    messages=[],
    max_round=30,
)
manager = autogen.GroupChatManager(groupchat=groupchat, llm_config={"config_list": config_list})
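For completeness, the role assistants referenced in the group chat above might be declared as follows; this is a sketch rather than the repo's exact code, and the system messages are illustrative.

collect_kb_assistant = autogen.AssistantAgent(
    name="collect_kb_assistant",
    system_message="Download YouTube subtitles and save them to the vector database.",
    llm_config=llm_config,
)
blog_editor_assistant = autogen.AssistantAgent(
    name="blog_editor_assistant",
    system_message="Write a markdown blog from the collected knowledge and save it locally.",
    llm_config=llm_config,
)
translate_assistant = autogen.AssistantAgent(
    name="translate_assistant",
    system_message="Translate the finished blog into Chinese and save it.",
    llm_config=llm_config,
)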
The code for group chat dispatch is as follows:
await user_proxy.a_initiate_chat(
    manager,
    message="""
Use this link https://www.youtube.com/watch?v=1qs6QKk0DVc as knowledge with the collect knowledge assistant. Find the answers to the 3 questions below, write a blog, and save it to a local file with the blog editor assistant. Then translate this blog to Chinese with the translate assistant.
1. What is GitHub Copilot ?
2. How to Install GitHub Copilot ?
3. Limitations of GitHub Copilot
Let's go
""",
)
Unlike the single-agent case, a manager is configured to coordinate the communication among the multiple AI agents. Of course, you also need clear instructions to assign the work.
You can view the complete code on this Repo.
If you want to see the result about English blog, you can also click this link.
If you want to see the result about Chinese blog, you can also click this link.
More
AutoGen helps us easily define different AI agents and plan how they interact and operate. Semantic Kernel is more like a middle layer that supports the different ways agents solve tasks, which is a great help in enterprise scenarios. When AutoGen appeared, some people thought it overlapped with Semantic Kernel in many places; in fact, it complements rather than replaces it. With the arrival of the Azure OpenAI Service Assistant API, you can expect agents to gain stronger capabilities as the technical framework and API improve.
Resources
Microsoft Semantic Kernel https://github.com/microsoft/semantic-kernel
Microsoft Autogen https://github.com/microsoft/autogen
Microsoft Semantic Kernel CookBook https://aka.ms/SemanticKernelCookBook
Get started using Azure OpenAI Assistants. https://learn.microsoft.com/en-us/azure/ai-services/openai/assistants-quickstart
What is an agent? https://learn.microsoft.com/en-us/semantic-kernel/agents
What are Memories? https://learn.microsoft.com/en-us/semantic-kernel/memories/
Microsoft Fabric AI Hack Workshop: Build a Custom Object Detection Model | Feb 20, 2024
Build, innovate, and #HackTogether!
How you can participate
Register for the global hack for a chance to win incredible prizes:
Interact with other hackers and create your project in a team, or fly solo.
Attend the live workshops, including Building a Custom Object Detection Model on Microsoft Fabric with the Snapshot Serengeti Dataset.
Submit your project when ready!
Building a Custom Object Detection Model with Microsoft Fabric
How you can load in your data using Data Factory Pipelines
How you can use SQL queries to explore and analyze the data
How you can use Apache Spark to process your data
How you can analyze and train your data using Notebooks
How you can leverage mlflow to track your experiments and models (see the sketch below)
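As a taste of that last item, experiment tracking with mlflow follows a simple pattern; the sketch below is generic mlflow usage rather than the workshop's code, and the experiment name, parameters, and metric are illustrative.

import mlflow

mlflow.set_experiment("serengeti-object-detection")  # hypothetical experiment name

with mlflow.start_run():
    mlflow.log_param("learning_rate", 1e-3)
    mlflow.log_param("epochs", 10)
    # ... train the object detection model here ...
    mlflow.log_metric("mAP", 0.72)           # illustrative metric value
    mlflow.log_artifact("model_weights.pt")  # log the saved weights file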
What are you waiting for?
Don’t miss out on this amazing opportunity to learn something new! Join us for our session and get hands-on with Microsoft Fabric.
Windows Server Advanced Auditing Policies
Security auditing is a methodical examination and review of activities that may affect the security of a system. In the Windows Server and Active Directory environments, security auditing is the features and services that log and review events for specified security-related activities.
Hundreds of events occur as the Windows operating system and the applications that run on it perform their tasks. Monitoring these events can provide valuable information to help administrators troubleshoot and investigate security-related activities.
Audit policies are configured through Group Policy. You can configure local policies, but in most Windows Server Active Directory environments, auditing is configured through application of policies at the Domain, Site or Organizational Unit Level.
The basic security audit policy settings under Security Settings\Local Policies\Audit Policy and the advanced security audit policy settings under Security Settings\Advanced Audit Policy Configuration\System Audit Policies appear to overlap, but they are recorded and applied differently.
There are nine basic audit policy settings under Security Settings\Local Policies\Audit Policy. The settings available under Security Settings\Advanced Audit Policy Configuration address similar issues as the nine basic settings, but they allow administrators to be more selective in the number and types of events to audit: instead of the nine basic audit policy settings, there are 58 different audit policy settings available through advanced audit policies. Advanced audit policies allow you to be far more specific about what you are auditing than the basic audit policies can.
To help you come to terms with all these different policies, we’ve created a set of short videos, 5-10 minutes in length, that go through each of the advanced auditing policies categories, explain the different policies and the interesting event log entries the policies are likely to generate. The videos are as follows:
Introduction to Windows Server Advanced Security Auditing: https://www.youtube.com/watch?v=OvIraaN2ZnI
Account Logon policies: https://www.youtube.com/watch?v=A-EjL5sz5rk
Account Management policies: https://www.youtube.com/watch?v=jmxloIQp_yg
Detailed Tracking policies: https://www.youtube.com/watch?v=EXHWhGrlH5c
DS Access policies: https://www.youtube.com/watch?v=tZVFuFOppwA
Logon/Logoff policies: https://www.youtube.com/watch?v=9uooYpTBlsA
Object Access policies: https://www.youtube.com/watch?v=b9juS5RT1lg
Policy Change policies: https://www.youtube.com/watch?v=GKc4lo_shUg
Privilege Use policies: https://www.youtube.com/watch?v=L5bJ4z4qlco
System policies: https://www.youtube.com/watch?v=WhoLstyh0pA
Global Object Access Auditing policies: https://www.youtube.com/watch?v=NCNXWQoApIk
Understanding and applying audit policies is critical to making sure that the activity you want tracked on the computers you manage is actually recorded in the event log. Hopefully this set of videos, broken down into snack-sized chunks, will allow you to review what these policies can do and help you be more deliberate in how you audit activity on the computers that you manage.
You can also consult detailed information about advanced audit policies at the following link on Microsoft Learn: https://learn.microsoft.com/en-us/windows/security/threat-protection/auditing/advanced-security-auditing-faq
Check This Out! (CTO!) Guide (January 2024)
Hi everyone! Brandon Wilson here once again with this month's "Check This Out!" (CTO!) guide. Apologies for the late post this month; it's been a busy month!
These posts are only intended to be your guide, to lead you to some content of interest, and are just a way we are trying to help our readers a bit more, whether that is learning, troubleshooting, or just finding new content sources! We will give you a bit of a taste of the blog content itself, provide you a way to get to the source content directly, and help to introduce you to some other blogs you may not be aware of that you might find helpful. If you have been a long-time reader, then you will find this series to be very similar to our prior series “Infrastructure + Security: Noteworthy News”.
From all of us on the Core Infrastructure and Security Tech Community blog team, thanks for your continued reading and support!
Title: KRB_AP_ERR_BAD_INTEGRITY
Source: Ask the Directory Services Team
Author: Jesse Vurgason-Graham
Publication Date: 1/12/24
Content excerpt:
Most anyone who would be interested in reading an article like this has very likely encountered the error, KRB_AP_ERR_MODIFIED. This error tells us one thing: The account secret (aka password hash) that is being used to decipher the ticket cannot decipher the ticket.
The most common reasons are…
Title: Stop Worrying and Love the Outage, Vol I: Group Policy and Sharing Violations
Source: Ask the Directory Services Team
Author: Chris Cartwright
Publication Date: 1/26/24
Content excerpt:
Recently, we have seen an uptick in cases related to sharing violations when processing or editing group policies. Most of these issues are caused by locks on policy-related files within the SysVol share, from either security products or environmental conditions. Security product mitigations are already covered by exclusions and need not be repeated here. Our focus will be on the environmental conditions, latency and/or packet loss. Failure to follow this guidance may result in unexpected behaviors of both group policy processing and group policy editing. I can save you some time by making the following recommendation…
Title: Windows Server 2012/R2 Extended Security Updates Licensing and Billing
Source: Azure Arc
Author: Garima Singh
Publication Date: 1/17/24
Content excerpt:
While more and more organizations are moving toward the cloud, they are all using it in their own way depending on size and scale. Some have adopted a cloud-native model using Microsoft Azure, but some decided to use cloud services while still maintaining their on-premises footprint. The latter approach is known as the hybrid model. Hybrid also means having a presence in more than one cloud provider.
Title: Public facing Azure Container Registry Reference Architecture
Source: Azure Architecture
Author: Kumar Ashwin Hubert
Publication Date: 1/23/24
Content excerpt:
This reference architecture describes the deployment of secured Azure Container Registry for consuming docker images and artifacts by customer applications over external (public internet) network.
This architecture builds on Microsoft’s recommended security best practices to expose private applications for external access. It utilizes the ACR’s token and scope map feature to provide granular access control to ACR’s repositories. Also, ACR internally uses the Docker APIs, and it is recommended to be familiar with these concepts before deploying this architecture.
Title: Load Testing Azure Event Hubs services with restricted public access
Source: Azure Architecture
Author: Frédéric Le Coquil
Publication Date: 1/24/24
Content excerpt:
This article describes how to use Azure Load Testing to test a service based on Azure Event Hubs with a restricted public endpoint. The access to the Azure Event Hubs endpoint is restricted to specific client IP addresses. For instance, the service collects events from different on-premises events sources, analyzes those events and generates alerts as anomalies are detected.
Title: Using Azure Load Testing to test Multi-Tenant services
Source: Azure Architecture
Author: Frédéric Le Coquil
Publication Date: 1/24/24
Content excerpt:
This article describes how to use Azure Load Testing to test a multi-tenant service based on Azure App Service. It also describes how to run the Load Testing scenario from either…
Title: Take control of your cloud spend with Microsoft Cost Management
Source: Azure Governance and Management
Author: Antonio Ortoll
Publication Date: 1/8/24
Content excerpt:
Nobody wants a surprise when it comes to their cloud bill. To effectively manage your cloud investments, you need to know what you’re spending and where it’s being spent. We developed Microsoft Cost Management to provide visibility into your resource usage to help you better understand where you’re accruing costs in the cloud, identify and prevent inefficient spending patterns, and offer you the ability to optimize costs across different usage groups. By leveraging data on your resource usage, you can enforce cost-control measures and create reporting dashboards for your stakeholders across the organization.
Title: Rehosting On-Premises Process Automation when migrating to Azure
Source: Azure Governance and Management
Author: Swati Devgan
Publication Date: 1/23/24
Content excerpt:
Many enterprises seek to migrate on-premises IT infrastructure to cloud for cost optimization, scalability, and enhanced reliability. During modernization, key aspect is to transition automated processes from on-premises environments, where tasks are automated using scripts (PowerShell or Python) and tools like Windows Task Scheduler or System Center Service Management Automation (SMA).
This blog showcases successful transitions of customer automated processes to the cloud with Azure Automation, emphasizing script re-use and modernization through smart integrations with complementing Azure products. Using runbooks in PowerShell or Python, the platform supports PowerShell versions 5.1, and PowerShell 7.2.
Title: Azure Blob Storage Events: A event-driven solution for Blob Storage changes
Source: Azure Storage
Author: Nishant Ranjan
Publication Date: 1/10/24
Content excerpt:
Azure Blob Storage event, a powerful feature of Azure blob storage platform, has emerged as a game-changing solution that allows applications to react to changes in blob storage, providing a more efficient and cost-effective alternative to traditional methods.
Azure Blob Storage events provide an event-driven architecture to track changes in your blob storage in near real-time, such as the creation, tier-change, and deletion of blobs. Traditionally, achieving this level of monitoring and responsiveness required complex code or expensive and inefficient polling services.
Title: TLS 1.0 and 1.1 support will be removed for new & existing Azure storage accounts starting Nov 2024
Source: Azure Storage
Author: Srikumar Vaitinadin
Publication Date: 1/10/24
Content excerpt:
To meet evolving technology and regulatory needs and align with security best practices, we are removing support for Transport Layer Security (TLS) 1.0 and 1.1 for both existing and new storage accounts in all clouds. TLS 1.2 will be the minimum supported TLS version for Azure Storage starting Nov 1, 2024.
Title: Protecting Azure VM against Zonal/Regional outages using Azure Site Recovery and Azure Backup
Source: Azure Storage
Author: Srinath Vasireddy
Publication Date: 1/18/24
Content excerpt:
Disaster Recovery (DR) and Backup are two ways to recover from outages. To ensure that you have the necessary controls to protect your data even when relying on native tools included by your provider, you must get familiar with the platform features, weigh in the cost and benefits, and formulate a data protection strategy that best works for your business. The following provides a summary of choices provided by Azure Backup and Azure Site Recovery…
Title: Prepare for upcoming TLS 1.3 support for Azure Storage
Source: Azure Storage
Author: Srikumar Vaitinadin
Publication Date: 1/18/24
Content excerpt:
Azure Storage has started to enable TLS 1.3 support on public HTTPS endpoints across its platform globally to align with security best practices. Azure Storage currently supports TLS 1.0, 1.1 (scheduled for deprecation by November 2024), and TLS 1.2 on public HTTPS endpoints. This blog provides additional guidance on how to prepare for upcoming support for TLS 1.3 for Azure Storage.
Title: Announcing the general availability of NFS Azure file share snapshots
Source: Azure Storage
Author: Subhash Athri N
Publication Date: 1/29/24
Content excerpt:
Azure Files is offered as a fully managed file share service in Azure cloud. Azure file shares can be mounted via SMB (Server Message Block) and NFS (Network file System) protocols on clients running either on-premises or in the cloud.
We first made snapshot support available for SMB Azure file shares, and since then we’ve seen many of our customers and partners reaping the benefits of having point-in-time copies of their production data. In late 2023, we announced the Public preview of snapshot support for NFS Azure file shares. With this blog, I’m excited to announce General availability (GA) of snapshot support for NFS Azure file shares.
Title: Announcing Public Preview of Confidential VMs with Intel TDX in Azure Virtual Desktop
Source: Azure Virtual Desktop
Author: Derek Su
Publication Date: 1/12/24
Content excerpt:
We are excited to announce that Azure Virtual Desktop now supports the public preview of DCesv5 and ECesv5-series confidential VMs. These confidential VMs are powered by 4th Gen Intel® Xeon® Scalable processors with Intel® Trust Domain Extensions (Intel® TDX) and enable organizations to bring confidential workloads to the cloud without code changes to applications. Through the gated preview, we continued to enhance performance with our Intel partnership. These new virtual machines are up to 20% faster than 3rd Gen Intel Xeon virtual machines, and we expect performance for I/O intensive workloads to continue to improve as the technology matures.
Title: Modernize ASP.NET web apps with Azure Migrate on Azure Kubernetes Service
Source: Containers
Author: Anirudh Raghunath
Publication Date: 1/31/24
Content excerpt:
In this blog, we’ll go over how you can modernize a legacy ASP.NET web app using Azure Migrate and run in on Windows containers on Azure Kubernetes Service. You’ll walk away with an understanding of how to…
Title: Onboarding Intune Managed iOS User Enrollment Devices to Microsoft Defender for Endpoint
Source: Core Infrastructure and Security
Author: Arnab Mitra
Publication Date: 1/3/24
Content excerpt:
Microsoft Defender for Endpoint is a unified endpoint security platform that provides protection, detection, investigation, and response capabilities. To use Microsoft Defender for Endpoint on iOS devices, you need to onboard them to the service and assign licenses to users.
This blog post explains the onboarding process of the recently announced support of Microsoft Defender for Endpoint on Intune managed iOS/iPadOS devices enrolled with Apple User Enrollment mode. This enrollment method was introduced with iOS 13 that allows users to enroll their personal devices in a way that protects their privacy and separates work data (stored on a separate volume) from personal data. User Enrollment devices are managed by Intune with a limited set of policies and configurations.
Title: Intune iOS/iPadOS Management In a Nutshell
Source: Core Infrastructure and Security
Author: Jonas Ohmsen
Publication Date: 1/8/24
Content excerpt:
I’m a Microsoft Cloud Solution Architect and this blog post should give a brief overview of how to manage iOS and iPadOS devices with Microsoft Intune and how to get started.
If you are planning to migrate to Intune, I highly recommend the following link to a migration guide some colleagues wrote: https://aka.ms/intunemigrationguide.
Title: ConfigMgr CMG Least Privilege Setup Approach
Source: Core Infrastructure and Security
Author: Jonas Ohmsen
Publication Date: 1/15/24
Content excerpt:
I’m a Microsoft Cloud Solution Architect and this blog post is meant as a guide to setup a ConfigMgr Cloud Management Gateway (CMG) without the need for a Global Admin to use the ConfigMgr console.
I will also briefly explain what a CMG is and how the setup looks like in Azure. This part is a mix of the official documentation and of my own view on the product.
Title: Zero Touch Enrollment of MDE on iOS/iPadOS devices managed by Intune
Source: Core Infrastructure and Security
Author: Arnab Mitra
Publication Date: 1/18/24
Content excerpt:
Microsoft Defender for Endpoint (MDE) is a unified endpoint security platform that helps protect your devices from advanced threats. MDE on iOS/iPadOS devices provides protection against phishing and unsafe network connections. To use MDE on iOS devices, you need to enroll them in Microsoft Intune, a cloud-based service that helps you manage and secure your mobile devices.
This blog post helps you prepare your environment for zero-touch aka silent enrollment of MDE on your Intune managed iOS/iPadOS devices. Zero Touch enrollment is not available for all scenarios, below is a matrix for reference…
Title: Intune, Event, Azure Monitor Agent
Source: Core Infrastructure and Security
Author: Bindusar Kushwaha
Publication Date: 1/23/24
Content excerpt:
Hello everyone, I am Bindusar (CSA) working with Intune. I have received multiple requests from customers asking to collect specific event IDs from internet-based client machines with either Microsoft Entra ID or Hybrid Joined and upload to Log Analytics Workspace for further use cases. There are several options available like…
Title: Migrating from the Azure MMA to AMA Agent
Source: Core Infrastructure and Security
Author: Paul Bergson
Publication Date: 1/29/24
Content excerpt:
I had another conversation about the sunset of the Microsoft Monitoring Agent (MMA). Back on November 13, 2023, I posted an article on how to do a bulk removal of the Azure MMA agent, but before you can remove the MMA agent, you need to have the AMA agent ready to take over the work. Below are details to assist in this endeavor.
Title: The Case of the Rogue Azure Arc Connected Machine Agent
Source: FastTrack for Azure
Author: Laura Hutchcroft
Publication Date: 1/9/24
Content excerpt:
My customer needed a way to manage their on-premises Windows and Linux servers as well as some non-Azure servers in the Azure Portal. This customer needed to be able to monitor server performance, update servers, manage compliance and many other Azure Management capabilities all in one place; not on premises but manage them all in the cloud. This customer selected to use the Azure Arc capabilities in Azure for these requirements.
In a nutshell, Azure Arc is a centralized way to manage your existing non-Azure and/or on-premises resources in Azure Resource Manager. If you want an easy way to manage Windows servers, Linux servers, Kubernetes clusters, VMware servers, AWS servers, GCP servers, Azure Arc can provide the way. In this article, we are going to specifically discuss Azure Arc-enabled servers and a specific troubleshooting case with the Azure Arc Connected Machine Agent.
Title: Monitor your Virtual Machines and Arc servers’ workloads with Azure Monitor
Source: FastTrack for Azure
Author: Jose Fehse
Publication Date: 1/12/24
Content excerpt:
Azure Monitor is an amazing suite of technologies that lets you collect, visualize, and act on data from your Azure resources. You can use metrics and logs to monitor the health and performance of any Azure resource. Microsoft offers tailored experiences for specific workloads, such as Virtual Machine Insights. Some of these experiences also include alerts and modern dashboards (Grafana) to help you act and troubleshoot issues. However, for server-based workloads, such as IIS, Print Servers, DNS, and others, there was no native cloud solution for monitoring. Until now.
The Azure Monitor Starter Packs (or “MonStar” packs) is a set of pre-configured components that provide monitoring configuration for multiple Azure resources without the need to create rules, alerts or dashboards. The monitoring features will be ready for assignment and consumption as soon as deployed.
Each pack will contain the required rules to collect the pertinent information (DCRs), the Alert Rules to inform about observations (alerts) and Dashboards (Grafana) to visualize the data.
Title: Reducing costs for Windows workloads on Azure Kubernetes Service with Azure Hybrid Benefits
Source: ITOps Talk
Author: Vinicius Apolinario
Publication Date: 1/10/24
Content excerpt:
Happy new year everyone! What better way to get the year started than saving some money, right? Last year, as customers continued to move their Windows workloads to Azure Kubernetes Service (AKS) and evolve these deployments, they started to explore cost saving strategies. Granted, there are many ways to save costs when running in the cloud and especially when it comes to AKS as you can scale up or down and in or out, reduce the size of your deployment, replicas, node size, and more. However, for Windows workloads, one of the simplest ways to save is by leveraging the Azure Hybrid Benefit.
Title: Wired for Hybrid – What’s New in Azure Networking – January 2024 edition
Source: ITOps Talk
Author: Pierre Roman
Publication Date: 1/24/24
Content excerpt:
Azure Networking is the foundation of your infrastructure in Azure. Each month we bring you an update on What’s new in Azure Networking.
In this blog post, we'll cover what's new with Azure Networking in January 2024, including the following announcements and how they can help you.
Title: Why Azure Image Builder – Getting Started
Source: ITOps Talk
Author: Amy Colyer
Publication Date: 1/31/24
Content excerpt:
You might be familiar with building golden images or templates for use on-premises. Back in the olden days we used to “ghost” machines and now you may use a VM template with sysprep. Azure offers the managed service Azure Image Builder so you can configure your image as a template for reuse within your cloud. Golden or base images are usually built upon governance, standards and best practices within your organization. These images especially come into play if you have immutable infrastructure, servers or virtual machines that will not be modified after deployment. To ensure consistency and speed up deployment, you can create golden images or templates.
Title: Addressing Data Exfiltration: Token Theft Talk
Source: Microsoft Entra (Azure AD)
Author: Anna Barhudarian
Publication Date: 1/2/24
Content excerpt:
Let’s continue our discussion on preventing data exfiltration. In previous blogs, we shared Microsoft’s approach to securing authentication sessions with Continuous Access Evaluation (CAE) and discussed securing cross-tenant access with Tenant Restrictions v2. Today our topic is stolen authentication artifacts.
Stolen authentication artifacts – tokens and cookies – can be used to impersonate the victim and gain access to everything the victim had access to. Up until a few years ago, token theft was a rare attack and was most often exercised by corporate Red Teams. Why? Because it’s simpler to steal a password than a cookie. However, with multifactor authentication (MFA) becoming more prevalent, we’re seeing real-life attacks involving artifact theft and replay.
Before diving into details, it’s important to note that Microsoft recommends that the first line of defense against token theft is protecting your devices by deploying endpoint protections, device management, phishing-resistant MFA, and antimalware, as described in Token tactics: How to prevent, detect, and respond to cloud token theft | Microsoft Security Blog.
Title: Introducing More Granular Certificate-Based Authentication Configuration in Conditional Access
Source: Microsoft Entra (Azure AD)
Author: Alex Weinert
Publication Date: 1/30/24
Content excerpt:
I’m thrilled to announce the public preview of advanced certificate-based authentication (CBA) options in Conditional Access, which provides the ability to allow access to specific resources based on the certificate Issuer or Policy Object Identifiers (OIDs) properties.
Title: Enable your key business needs within Microsoft Sentinel with step-by-step guidance
Source: Security, Compliance, and Identity
Author: Shirleyse Haley
Publication Date: 1/31/24
Content excerpt:
Modernize your security operations center (SOC) with Microsoft Sentinel. Uncover sophisticated threats and respond decisively with an intelligent, comprehensive security information and event management (SIEM) solution for proactive threat detection, investigation, and response.
This lightweight guide quickly walks you through business needs related to modernizing your SOC. It helps you make the most of Microsoft security solutions by pointing you to specific training and technical documentation…
Title: Windows Server Insider Preview 26040 is out – and so is the new name
Source: Storage at Microsoft
Author: Ned Pyle
Publication Date: 1/26/24
Content excerpt:
Heya folks, Ned here again. We’ve resumed the Windows Server Insider program after our winter break and there’s a new build, new features, and – finally – the official branding: Windows Server 2025.
Previous CTO! Guides:
CIS Tech Community-Check This Out! (CTO!) Guides
Additional resources:
Azure documentation
Azure pricing calculator (VERY handy!)
Microsoft Azure Well-Architected Framework
Microsoft Cloud Adoption Framework
Windows Server documentation
Windows client documentation for IT Pros
PowerShell documentation
Core Infrastructure and Security blog
Microsoft Tech Community blogs
Microsoft technical documentation (Microsoft Docs)
Sysinternals blog
Microsoft Learn
Microsoft Support (Knowledge Base)
Microsoft Archived Content (MSDN/TechNet blogs, MSDN Magazine, MSDN Newsletter, TechNet Newsletter)
Enable Automatic Secret rotation by triggering Azure Function from Event Grid over virtual network
Background:
In most cases, the best authentication method for Azure services is by using a managed identity. However, there are scenarios where this may not be an option, and access keys or passwords are used. In such cases, it is important to rotate access keys and passwords regularly.
Automatic Secret rotation solution:
The tutorial at https://learn.microsoft.com/en-us/azure/key-vault/secrets/tutorial-rotation-dual?tabs=azure-cli proposes a solution that generates new access keys on the storage account and updates them in the key vault based on the secret’s expiry time. It uses Azure Event Grid to send the secret near-expiry event to an Azure Function app (i.e., the function app is triggered by Azure Event Grid notifications), which in turn generates the new access key and updates the secret in the Key Vault, automating periodic secret rotation.
However, this approach has a limitation: the Event Grid system topic cannot deliver events to a target that has public access disabled and only a private endpoint enabled.
To overcome this limitation, a few tweaks to the architecture are needed, presented in the following sections of this document. We introduce an Event Hubs namespace into the architecture with public access disabled but the “Allow trusted Microsoft services to bypass this firewall” setting enabled.
Secret Rotation when private endpoints are enabled:
1. Configure an event subscription on the Azure Key Vault. Thirty days before the expiration date of a secret, Key Vault publishes the near-expiry event to Event Grid.
2. Event Grid checks the event subscriptions and delivers the event to the Event Hubs endpoint. The managed identity of the Event Grid topic must have the “Azure Event Hubs Data Sender” role assigned on the Event Hubs namespace (the role assignment is sketched after these steps). The “Allow trusted Microsoft services to bypass this firewall” setting, found on the Event Hubs namespace’s Networking blade, must be enabled, with public access disabled.
3. The event hub triggers the function app.
4. The function app (created with event hub trigger) identifies the key and calls the storage account to regenerate it.
5. The function app adds the newly regenerated key to Azure Key Vault as the new version of the secret.
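As a minimal sketch of the role assignment from step 2 using Az PowerShell (the subscription ID, resource names, and principal ID below are placeholders, not values from this article):

$egPrincipalId = "00000000-0000-0000-0000-000000000000"   # object ID of the Event Grid topic's managed identity
$ehNamespaceId = "/subscriptions/<sub-id>/resourceGroups/rg-rotation/providers/Microsoft.EventHub/namespaces/eh-rotation"

# Grant the Event Grid identity permission to send events to the Event Hubs namespace
New-AzRoleAssignment -ObjectId $egPrincipalId `
    -RoleDefinitionName "Azure Event Hubs Data Sender" `
    -Scope $ehNamespaceId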
The function app’s managed identity should be assigned the following roles (both assignments are sketched below):
1. Key Vault Secrets Officer role on the key vault (when using the RBAC access model).
2. Storage Account Key Operator Service Role on the storage account, to regenerate access keys.
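As a minimal sketch of these two assignments using Az PowerShell (resource names and the principal ID are placeholders):

$funcPrincipalId = "00000000-0000-0000-0000-000000000000"   # object ID of the function app's managed identity
$keyVaultId      = (Get-AzKeyVault -VaultName "kv-rotation").ResourceId
$storageId       = (Get-AzStorageAccount -ResourceGroupName "rg-rotation" -Name "strotation").Id

# Lets the function app write the new secret version (RBAC access model)
New-AzRoleAssignment -ObjectId $funcPrincipalId `
    -RoleDefinitionName "Key Vault Secrets Officer" -Scope $keyVaultId

# Lets the function app regenerate the storage account access keys
New-AzRoleAssignment -ObjectId $funcPrincipalId `
    -RoleDefinitionName "Storage Account Key Operator Service Role" -Scope $storageId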
The following sample function app code (PowerShell) performs the secret rotation in the method “RoatateSecret” (the name as published in the sample), including regenerating the key on the storage account and adding the new secret version to the Key Vault: https://github.com/Azure-Samples/keyvault-rotation-storageaccountkey-powershell/blob/main/AKVStorageRotation/run.ps1
It needs a few tweaks, though, because our function app is triggered by Event Hubs rather than directly by an Event Grid topic.
The last few lines must be modified in the following way:
# Log the raw event batch for troubleshooting
$eventHubMessages | ConvertTo-Json | Write-Host

# Each message is a Key Vault near-expiry event delivered via Event Hubs
$eventHubMessages | ForEach-Object {
    $secretName   = $_.subject
    $keyVaultName = $_.data.VaultName
    Write-Host "Key Vault Name: $keyVaultName"
    Write-Host "Secret Name: $secretName"
    Write-Host "Rotation started."
    # 'RoatateSecret' matches the function name as published in the sample
    RoatateSecret $keyVaultName $secretName
    Write-Host "Secret rotated successfully."
}
Reference:
1. Automate the rotation of a secret for resources that have two sets of authentication credentials: https://learn.microsoft.com/en-us/azure/key-vault/secrets/tutorial-rotation-dual?tabs=azure-cli
2. How to trigger Azure Function from Event Grid over virtual network: https://techcommunity.microsoft.com/t5/blogs/blogworkflowpage/blog-id/AppsonAzureBlog/article-id/1715
3. Azure services that support system topics in Azure Event Grid: https://learn.microsoft.com/en-us/azure/event-grid/system-topics
4. Keyvault-rotation-storageaccountkey-powershell (sample code on GitHub)
Protecting Tier 0 the Modern Way
What should your Tier 0 protection look like?
Almost every attack on Active Directory you hear about today – no matter if ransomware is involved or not – (ab)uses credential theft techniques as the key factor for successful compromise. Microsoft’s State of Cybercrime report confirms this statement: “The top finding among ransomware incident response engagements was insufficient privilege access and lateral movement controls.”
Despite the fantastic capabilities of modern detection and protection tools (like the Microsoft Defender family of products), we should not forget that prevention is always better than cure, which means that accounts should be protected against credential theft proactively. Microsoft’s approach to achieving this goal is the Enterprise Access Model. It adds the aspect of hybrid and multi-cloud identities to the Active Directory Administrative Tier Model. Although first published almost 10 years ago, the AD Administrative Tier Model is still not obsolete. Not having it in place and enforced is extremely risky with today’s threat level in mind.
Most attackers follow playbooks, and whatever their final goal may be, Active Directory domain dominance (Tier 0 compromise) is a stopover in almost every attack. Hence, securing Tier 0 is the first critical step in your Active Directory hardening journey, and this article was written to help with it.
AD Administrative Tier Model Refresher
The AD Administrative Tier Model prevents escalation of privilege by restricting what Administrators can control and where they can log on. In the context of protecting Tier 0, the latter ensures that Tier 0 credentials cannot be exposed to a system belonging to another Tier (Tier 1 or Tier 2).
Tier 0 includes accounts (admin, service, and computer accounts, plus groups) that have direct or indirect administrative control over all AD-related identities and identity management systems. While direct administrative control is easy to identify (e.g., members of the Domain Admins group), indirect control can be hard to spot: think of a virtualized Domain Controller and what the admin of the virtualization host can do to it, like dumping its memory or copying the Domain Controller’s hard disk with all the password hashes. Consequently, virtualization environments hosting Tier 0 computers are Tier 0 systems as well. This also applies to the virtualization admin accounts.
The three Commandments of AD Administrative Tier Model
Rule #1: Credentials from a higher-privileged tier (e.g. Tier 0 Admin or Service account) must not be exposed to lower-tier systems (e.g. Tier 1 or Tier 2 systems).
Rule #2: Lower-tier credentials can use services provided by higher tiers, but not the other way around. E.g., Tier 1 and even Tier 2 systems still must be able to apply Group Policies.
Rule #3: Any system or user account that can manage a higher tier is also a member of that tier, whether originally intended or not.
Implementing the AD Administrative Tier Model
Most guides describe how to achieve these goals by implementing a complex cascade of Group Policies (the local computer configuration must be changed to prevent higher-tier administrators from exposing their credentials to a lower-tier computer). This comes with the downside that Group Policies can be bypassed by local administrators and that the tier-level restriction works only on Active Directory-joined Windows computers. The bad news is that there is still no click-once deployment for tiered administration, but there is a more robust way to get things done by implementing Authentication Policies. Authentication Policies provide a way to contain high-privilege credentials to systems that are only pertinent to selected users, computers, or services. With these capabilities, you can limit Tier 0 account usage to Tier 0 hosts. That’s exactly what we need to protect Tier 0 identities from credential theft-based attacks.
To be very clear on this: with Kerberos Authentication Policies you can define a condition that controls from which hosts a user is allowed to request a Ticket Granting Ticket (TGT).
Optional: Deep Dive in Authentication Policies
Authentication Policies are based on a Kerberos extension called FAST (Flexible Authentication Secure Tunneling) or Kerberos Armoring. FAST provides a protected channel between the Kerberos client and the KDC for the whole pre-authentication conversation by encrypting the pre-authentication messages with a so-called armor key and by ensuring the integrity of the messages.
Kerberos Armoring is disabled by default and must be enabled using Group Policies. Once enabled, it provides the following functionality:
Protection against offline dictionary attacks. Kerberos armoring protects the user’s pre-authentication data (which is vulnerable to offline dictionary attacks when it is generated from a password).
Authenticated Kerberos errors. Kerberos armoring protects user Kerberos authentications from KDC Kerberos error spoofing, which can downgrade to NTLM or weaker cryptography.
Disables any authentication protocol except Kerberos for the configured user.
Compounded authentication in Dynamic Access Control (DAC). This allows authorization based on the combination of both user claims and device claims.
The last bullet point provides the basis for the feature we plan to use for protecting Tier 0: Authentication Policies.
Restricting user logon from specific hosts requires the Domain Controller (specifically the Key Distribution Center (KDC)) to validate the host’s identity. When using Kerberos authentication with Kerberos armoring, the KDC is provided with the TGT of the host from which the user is authenticating. That’s what we call an armored TGT, the content of which is used to complete an access check to determine if the host is allowed.
Kerberos armoring logon flow (simplified):
The computer has already received an armored TGT during computer authentication to the domain.
The user logs on to the computer:
An unarmored AS-REQ for a TGT is sent to the KDC.
The KDC queries for the user account in Active Directory and determines if it is configured with an Authentication Policy that restricts initial authentication that requires armored requests.
The KDC fails the request and asks for Pre-Authentication.
Windows detects that the domain supports Kerberos armoring and sends an armored AS-REQ to retry the sign-in request.
The KDC performs an access check by using the configured access control conditions and the client operating system’s identity information in the TGT that was used to armor the request. If the access check fails, the domain controller rejects the request.
If the access check succeeds, the KDC replies with an armored reply (AS-REP) and the authentication process continues. The user now has an armored TGT.
Looks very much like a normal Kerberos logon? Not exactly: The main difference is the fact that the user’s TGT includes the source computer’s identity information. Requesting Service Tickets looks similar to what we described above, except that the user’s armored TGT is used for protection and restriction.
Implementing a Tier 0 OU Structure and Authentication Policy
The following steps are required to limit Tier 0 account usage (admins and service accounts) to Tier 0 hosts:
1. Enable Kerberos armoring (aka FAST) for DCs and all computers (or at least Tier 0 computers), i.e., the “KDC support for claims, compound authentication and Kerberos armoring” and “Kerberos client support for claims, compound authentication and Kerberos armoring” Group Policy settings.
2. Before creating an OU structure similar to the one described below, you MUST ensure that Tier 0 accounts are the only ones with sensitive permissions on the root level of the domain. Keep in mind that all ACLs configured on the root level of fabrikam.com will be inherited by the OU called “Admin” in our example.
3. Create the following security groups:
– Tier 0 Users
– Tier 0 Computers
4. Constantly update the Authentication Policy to ensure that any new T0 admin or T0 service account is covered.
5. Ensure that any newly created T0 computer account is added to the Tier 0 Computers security group.
6. Configure an Authentication Policy with the following parameters and enforce it:
(User) accounts: T0 admin accounts
Conditions (computer accounts/groups): Member of each({ENTERPRISE DOMAIN CONTROLLERS}) Or Member of any({FABRIKAM\Tier 0 computers})
User sign-on: Kerberos only
Find more details about how to create Authentication Policies at https://learn.microsoft.com/en-us/windows-server/identity/ad-ds/manage/how-to-configure-protected-accounts#create-a-user-account-audit-for-authentication-policy-with-adac.
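As a minimal PowerShell sketch of steps 2, 3, and 6 using the ActiveDirectory module (the OU path, group and account names such as “t0-admin” are examples, and the SDDL condition string is an assumption modeled on the documented Authentication Policy examples):

Import-Module ActiveDirectory

# Steps 2-3: Tier 0 OU and security groups
New-ADOrganizationalUnit -Name "Admin" -Path "DC=fabrikam,DC=com"
New-ADGroup -Name "Tier 0 Users" -GroupScope Global -Path "OU=Admin,DC=fabrikam,DC=com"
New-ADGroup -Name "Tier 0 Computers" -GroupScope Global -Path "OU=Admin,DC=fabrikam,DC=com"

# Step 6: only issue TGTs to covered users when the request is armored with the
# TGT of an enterprise DC ("ED" well-known SID) or of a Tier 0 computer
$t0Sid = (Get-ADGroup "Tier 0 Computers").SID.Value
$sddl  = "O:SYG:SYD:(XA;OICI;CR;;;WD;((Member_of {SID(ED)}) || (Member_of {SID($t0Sid)})))"
New-ADAuthenticationPolicy -Name "Tier0-RestrictLogon" -Enforce `
    -UserAllowedToAuthenticateFrom $sddl

# Step 4 (repeat per account): apply the policy to a Tier 0 admin
Set-ADUser -Identity "t0-admin" -AuthenticationPolicy "Tier0-RestrictLogon"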
Tier 0 Admin Logon Flow: Privileged Access Workstations (PAWs) are a MUST
As explained at the beginning of the article, attackers can sneak through an open (and MFA-protected) RDP connection when the admin’s client computer is compromised. To protect against this type of attack, Microsoft has been recommending PAWs for many years.
In case you ask yourself what the advantage of restricting the source of a logon attempt through Kerberos policies is: most certainly you do not want your T0 admins to RDP from their (potentially compromised) workplace computers to a DC. Instead, you want them to use a Tier 0 administrative jump host or, even better, a Privileged Access Workstation. With a compromised workplace computer as the source for T0 access, it would be easy for an attacker to either use a keylogger to steal the T0 admin’s password or simply sneak through the RDP channel once it is open (a simple password or MFA makes little difference against this type of attack). Even if an attacker managed to steal the credentials of a Tier 0 user, they could only use those credentials from a computer defined in the policy’s conditions. From any other computer, Active Directory will not issue a TGT, even if the user provides the correct credentials. This also gives you an easy way to monitor the declined requests and react properly.
There are too many ways of implementing the Tier 0 Admin logon flow to describe all of them in a blog. The “classic” (some call it “old-fashioned”) approach is a domain-joined PAW which is used for T0 administrative access to Tier 0 systems.
The solution above is straightforward but does not provide any modern cloud-based security features.
“Protecting Tier 0 the modern way” not only refers to using Authentication Policies, but also leverages modern protection mechanisms provided by Microsoft Entra ID, like multi-factor authentication, Conditional Access, and Identity Protection (to cover just the most important ones).
Our preferred way of protecting the Tier 0 logon flow is via an Intune-managed PAW and Azure Virtual Desktop, because this approach is easy to implement and perfectly pairs modern protection mechanisms with on-premises Active Directory:
Logon to the AVD is restricted to come from a compliant PAW device only, Authentication Policies do the rest.
Automation through PowerShell
Still sounds painful? While steps 1–3 (enable Kerberos FAST, create the OU structure, create the Tier 0 groups) of Implementing a Tier 0 OU Structure and Authentication Policy are one-time tasks, steps 4–6 (keeping group memberships and the Authentication Policy up to date) have turned out to be challenging in complex, dynamic environments. That’s why Andreas Lucas (aka Kili69) has developed a PowerShell-based automation tool which …
creates the OU structure described above (if it does not already exist)
creates the security groups described above (if they do not already exist)
creates the Authentication Policy described above (if it does not already exist)
applies the Tier 0 Authentication Policy to any Tier 0 user object
removes any object from the Tier 0 Computers group that is not located in the Tier 0 OU
removes any user object from the default Active Directory Tier 0 groups if the Authentication Policy is not applied (except the built-in Administrator, gMSAs, and service accounts)
Additional Comments and Recommendations
Prerequisites for implementing Kerberos Authentication Policies
Kerberos Authentication Policies were introduced in Windows Server 2012 R2, hence a Domain functional level of Windows Server 2012 R2 or higher is required for implementation.
Authentication Policy – special Settings
Require rolling NTLM secret for NTLM authentication
Configuration of this feature was moved to the properties of the domain in Active Directory Administrative Center. When enabled, for users with the “Smart card is required for interactive logon” checkbox set, a new random password will be generated according to the password policy. See https://learn.microsoft.com/en-us/windows-server/security/credentials-protection-and-management/whats-new-in-credential-protection#rolling-public-key-only-users-ntlm-secrets for more details.
Allow NTLM network authentication when user is restricted to selected devices
We do NOT recommend enabling this feature because with NTLM authentication allowed the capabilities of restricting access through Authentication Policies are reduced. In addition to that, we recommend adding privileged users to the Protected Users security group. This special group was designed to harden privileged accounts and introduces a set of protection mechanisms, one of which is making NTLM authentication impossible for the members of this group. See https://learn.microsoft.com/en-us/windows-server/security/credentials-protection-and-management/authentication-policies-and-authentication-policy-silos#about-authentication-policies for more details.
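As a quick sketch of that recommendation using the ActiveDirectory module (the account name is a placeholder):

# Minimal sketch; "t0-admin" is a placeholder account name.
# Members of Protected Users cannot authenticate with NTLM, among other hardening.
Add-ADGroupMember -Identity "Protected Users" -Members "t0-admin"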
Have Break Glass Accounts in place
Break Glass accounts are emergency access accounts used to access critical systems or resources when other authentication mechanisms fail or are unavailable. In Active Directory, Break Glass accounts are used to provide emergency access to Active Directory in case normal T0 Admin accounts do not work anymore, e.g. because of a misconfigured Authentication Policy.
Clean Source Principle
The clean source principle requires all security dependencies to be as trustworthy as the object being secured. Implementation of the clean source principle is beyond the scope of this article, but explained in detail at Success criteria for privileged access strategy | Microsoft Learn.
Review ACLs on the Root-level of your Domain(s)
The security implications of an account having excessive privileges (e.g., being able to modify permissions at the root level of the domain) are massive. For that reason, before creating the new OU (named “Admin” in the description above), you must ensure that there are no excessive ACLs (Access Control Lists) configured on the root level of the domain. In addition, consider breaking inheritance on the OU called “Admin” in our example.
Ryan O’Connell Making a Difference in Inclusive Skilling
Ryan O’Connell is a Microsoft Most Valuable Professional (MVP) in Microsoft Azure, a cloud architect, a trainer, and a community leader. He has over 20 years of experience in the IT industry, working with various technologies and platforms. He is passionate about sharing his knowledge and skills with others, especially those who face challenges and barriers in accessing quality education and career opportunities.
An instructor of several free Udemy courses on Azure and AI, Ryan is also the co-organizer of Women for IT, a live hands-on training program that aims to empower women in technology.
Microsoft Azure MVP, Ryan O’Connell
Skilling for all
It is a strong belief of Ryan’s that everyone deserves a chance to learn and grow in the IT industry, regardless of their background, education, or financial situation. Ryan is aware of the struggles that many people face in finding good training, gaining real-world experience, and landing their first job. That’s why he dedicates his time and energy to creating and delivering free courses that cover in-demand and relevant topics in the cloud, data, and AI. Ryan also provides mentoring and guidance to his students, helping them to overcome challenges, build confidence, and achieve their career goals.
“I see the struggles many face in the IT industry, and I also see the lack of real-world hands-on experience. Many cannot afford to pay for good training due to various circumstances around their daily lives and cost of living, hence my Udemy and Women for IT live hands-on courses are all free,” explains Ryan.
MVP Ryan O’Connell leading the 10kWomen training
Supporting #10kWomen
Ryan is passionate about empowering women and increasing diversity in the IT industry. One of the initiatives that Ryan is involved in is #10kWomen, a Microsoft program that aims to enable 10,000 women in New Zealand with new skills that help them secure digital roles.
Choosing to be part of the #10kWomen initiative was an easy choice for Ryan, because he recognizes the value and potential of women in IT. Ryan says, “I see the struggles many women face in the IT industry and women bring so much value to the industry. In addition to boosting revenue, women improve other aspects of the IT industry. Bringing more women into the workforce leads to greater innovation within tech organizations. However, we also need to address the issue beyond the current job market. Only 20% of computer science professionals are women. Increasing the inclusion of women is a sound business strategy. A study by Deloitte found that women’s choices account for up to 85% of buying decisions nationwide, and that diversity drives innovation.”
Ryan praises the #10kWomen initiative as a special program that showcases Microsoft’s commitment to supporting and empowering women in the IT sector. He says, “The #10kWomen initiative is so special as it highlights Microsoft’s commitment to upskilling women and future IT gurus, techs and leaders and Microsoft’s support to the industry as a whole.”
Chief Partner Officer ANZ and Managing Director Microsoft New Zealand, Vanessa Sorenson, praised Ryan’s commitment to women in IT in a recent LinkedIn post, “Today I want to say a massive thank you to Ryan O’Connell, he is working so hard to bring more diverse skills into the tech industry, connected into our Microsoft 10K Woman program. This picture, taken of him leading Tech training warms my heart. He has personally trained 1000’s of people and is such an amazing ally to all”.
Advocating for inclusivity
Ryan demonstrates his commitment to creating an inclusive and respectful work environment, where everyone feels valued and supported. Ryan shares how he benefits from the diversity of perspectives and experiences, which enriches his own professional development and enhances his team’s performance.
“I am brave enough to show up at the workplace. When working with my teams, I am professionally authentic, and remember that whatever I put out there will be reflected back. I treat everyone with respect no matter our differences. I am mindful of the words that I use. If words are not used correctly, they can be misinterpreted. I am patient, always listen and allow others to speak and express themselves. I respect the time of the person I am addressing, give them my full attention by being sensitive and not interrupting and over-talking,” explains Ryan.
Learn Microsoft Azure with Ryan O’Connell
Learn from Ryan O’Connell on Microsoft Learn.
Explore free courses on Microsoft Azure and AI, led by Ryan O’Connell.
Watch Ryan O’Connell on YouTube.
Follow Ryan O’Connell on LinkedIn.
How to trigger Azure Function from Event Grid over virtual network
1. Background
In the cloud environment, it’s very important to protect sensitive APIs from unauthorized access and potential security threats originating from the public internet. Azure Functions provides access restrictions and private endpoints to safeguard your function app from unauthorized inbound requests.
To safeguard their function app, one customer enabled a private endpoint on the function app and implemented an Event Grid trigger function: a message is sent to Event Grid when a blob storage file changes, which in turn invokes the Azure Function. However, the function app returned an IP Forbidden (403) error.
2. The IP Forbidden (403) issue from the Event Grid trigger function
Assessment:
1. Function App aspect
There is an IP Forbidden (403) error in the front end: the Event Grid IP (20.xxx.xxx.130) is denied.
– The IP (20.xxx.xxx.130) belongs to Event Grid (see MicrosoftIPs, csstoolkit.azurewebsites.net).
– The function app did not have access restrictions configured, but a private endpoint was enabled.
2. Event Grid aspect
Event Grid has a limitation: it cannot support virtual network (VNet) integration for outbound traffic. With push delivery in Event Grid, your application cannot receive events over a private IP address. (See https://learn.microsoft.com/en-us/azure/event-grid/consume-private-endpoints)
Investigation Result:
Therefore, it is a known limitation that an Event Grid trigger function encounters an IP Forbidden (403) error when a private endpoint is enabled on the function app.
3. How to safeguard the Event Grid trigger function?
There are two options as a workaround:
3.1 Enable Access Restriction
Enable access restrictions on the function app and allow inbound traffic from the Event Grid service tag (AzureEventGrid), as sketched below.
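As a minimal sketch using the Az.Websites module (the resource names are placeholders; access restriction rules apply to function apps as well as web apps):

Add-AzWebAppAccessRestrictionRule -ResourceGroupName "rg-demo" -WebAppName "func-demo" `
    -Name "AllowEventGrid" -Priority 100 -Action Allow -ServiceTag "AzureEventGrid"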
3.2 Enable Private Endpoint and configure the intermediate components
Enable Private Endpoint in Function App and configure the intermediate components between Event Grid and Function App.
With the above configuration, we can make sure the entire traffic path is secured. (See https://learn.microsoft.com/en-us/azure/event-grid/consume-private-endpoints#use-managed-identity)
The first hop, Event Grid -> intermediate component, is secured using a managed identity: the traffic from Event Grid to an intermediate component such as Event Hubs, Service Bus, or Azure Storage stays on the Microsoft backbone, and a managed identity of Event Grid is used.
The second hop, intermediate component -> function app, is secured using Private Link: configuring your Azure Function from within your virtual network to use an intermediate component such as Event Hubs, Service Bus, or Azure Storage via private link ensures the traffic between those services and your function stays within your virtual network perimeter.
3.2.1 How to deliver events from Event Grid to the intermediate component?
For example, to deliver events from Event Grid to a Service Bus queue using a managed identity, follow these steps:
3.2.1.1 Event Grid Part
Step 1: Create an Event Grid topic with a system-assigned or user-assigned managed identity. If the Event Grid topic already exists, simply assign a system-assigned or user-assigned managed identity to it.
Step 2: Configure the event subscription that uses a Service Bus queue or topic as its endpoint to use the system-assigned or user-assigned managed identity.
3.2.1.2 Managed Identity Part
Assign the managed identity the Azure Service Bus Data Sender role on the Service Bus namespace (see the sketch after these steps).
3.2.1.3 Service Bus Part
Enable the Allow trusted Microsoft services to bypass this firewall setting on your Service Bus namespace.
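As a minimal sketch of the role assignment from the Managed Identity part using Az PowerShell (the subscription ID, resource names, and principal ID are placeholders):

$topicPrincipalId = "00000000-0000-0000-0000-000000000000"   # object ID of the Event Grid topic's managed identity
$sbNamespaceId    = "/subscriptions/<sub-id>/resourceGroups/rg-demo/providers/Microsoft.ServiceBus/namespaces/sb-demo"

# Allow the Event Grid topic to send events to the Service Bus namespace
New-AzRoleAssignment -ObjectId $topicPrincipalId `
    -RoleDefinitionName "Azure Service Bus Data Sender" `
    -Scope $sbNamespaceId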
4. Reference
https://learn.microsoft.com/en-us/azure/event-grid/consume-private-endpoints
Drive your AI enablement journey with master data management from Profisee in Azure Marketplace
In this guest blog post, Eric Melcher, Chief Technology Officer at Profisee, examines the problems of siloed data and how the Profisee MDM platform integrated with Microsoft Fabric and Microsoft Purview can match, merge, and standardize raw data for AI enablement so you can extract meaningful insights about your business.
Everyone with access to Microsoft Copilot is a data analyst now. While that’s empowering and has amazing potential to speed time to insights and boost productivity, the advantages you stand to gain from artificial intelligence (AI) won’t come unless your data is ready for consumption.
The road to AI enablement
As any data leader worth their salt will tell you, AI enablement is not simple plug-and-play. There’s a lot of work going on behind the scenes to ensure a company’s data is ready for the primetime of AI-derived insights and analytics. Enterprise data typically comes from a variety of sources, with different people entering that data in different places, at different times, and for different reasons. For many companies, this siloed data is inconsistent, incomplete, and duplicative at best, and unusable at worst.
Microsoft Fabric is a modern analytics platform with both integrated tools and storage. You can and should load data from source systems into Fabric to break down those data silos. So, does having all your data in one place solve the problem of siloed and inconsistent data? Loading data into Fabric improves data accessibility, but it does not address the inconsistency problem. This has more to do with logical or semantic integration and is more an issue of data usability. To address that, we need master data management (MDM).
Creating consumable data with master data management
MDM is key to getting your data to a state where it’s consumption-ready so you can start leveraging AI. Consider this example:
Let’s say your company has records for me, Eric Melcher, CTO at Profisee, in three different systems. The record in your enterprise resource planning (ERP) system shows my name as “Melcher, Eric.” In your customer relationship management (CRM) system it’s “Erik Melcher.” And in a legacy application it’s “E. Melcher.”
Do all three of these records refer to me, or do they refer to three different customers who happen to have similar names? If a human can’t be sure without doing some digging, how will an AI know that these records are, in fact, for the same person?
To further complicate things, each source system mentioned above holds this data in different data structures. In other words, the data is not ready for consumption.
This is where MDM fits into a data fabric architecture. With the Profisee MDM platform, you can match, merge, and standardize this raw data (bronze medallion data in data lakehouse terminology) and publish it as consumable data products (gold medallion data). This is what your business intelligence (BI) analysts need, and it’s what your AI-powered co-pilots need if they are to provide meaningful insights about your business.
Profisee is not only integrated with Microsoft Fabric to enable this data improvement, it’s also integrated with Microsoft Purview to ensure Profisee can implement and enforce any governance standards noted in Purview as part of this transformation. To learn more, check out our solution on Azure Marketplace: Master Data Management for Azure (SaaS).
Tying it all together
AI enablement is not a destination — it’s a journey. But embarking on this odyssey without proper preparation is like setting sail without a compass. Equip yourself with the right tools — Profisee MDM for master data management, Microsoft Purview for data governance, and Microsoft Fabric for a unified analytics platform — to navigate this exciting but complex landscape.
By harnessing the power of these platforms, you can make the most of the data you already have to enable both traditional BI and generative AI — and deliver insights at a scale and depth previously thought impossible.
Check out this four-minute video for a more visual description of how to drive AI-enablement at scale through Microsoft Fabric and Profisee MDM.
Microsoft’s commitment to Azure IoT
There was a recent erroneous system message on February 14th regarding the deprecation of Azure IoT Central. The message stated that Azure IoT Central would be deprecated on March 31st, 2027, and that starting April 1, 2024, you would not be able to create new application resources. This message is not accurate and was presented in error.
Microsoft does not communicate product retirements using system messages. When we do announce Azure product retirements, we follow our standard Azure service notification process, including a 3-year notification period before discontinuing support. We understand the importance of product retirement information for our customers’ planning and operations. Learn more about this process here: 3-Year Notification Subset – Microsoft Lifecycle | Microsoft Learn
Our goal is to provide our customers with a comprehensive, secure, and scalable IoT platform. We want to empower our customers to build and manage IoT solutions that can adapt to any scenario, across any industry, and at any scale. We see our IoT product portfolio as a key part of the adaptive cloud approach.
The adaptive cloud approach can help customers accelerate their industrial transformation journey by scaling adoption of IoT technologies. It helps unify siloed teams, distributed sites, and sprawling systems into a single operations, security, application, and data model, enabling organizations to leverage cloud-native and AI technologies to work simultaneously across hybrid, edge, and IoT. Learn more about our adaptive cloud approach here: Harmonizing AI-enhanced physical and cloud operations | Microsoft Azure Blog
Our approach is exemplified in the public preview of Azure IoT Operations, which makes it easy for customers to onboard assets and devices to flow data from physical operations to the cloud to power insights and decision making. Azure IoT Operations is designed to simplify and accelerate the development and deployment of IoT solutions, while giving you more control over your IoT devices and data. Learn more about Azure IoT Operations here: https://azure.microsoft.com/products/iot-operations/
We will continue to collaborate with our partners and customers to transform their businesses with intelligent edge and cloud solutions, taking advantage of our full portfolio of Azure IoT products.
We appreciate your trust and loyalty and look forward to continuing to serve you with our IoT platform offerings.