Month: February 2024
Microsoft and open-source software
Microsoft has embraced open-source software—from offering tools for coding and managing open-source projects to making some of its own technologies open source, such as .NET and TypeScript. Even Visual Studio Code is built on open source. For March, we’re celebrating this culture of open-source software at Microsoft.
Explore some of the open-source projects at Microsoft, such as .NET on GitHub. Learn about tools and best practices to help you start contributing to open-source projects. And check out resources to help you work more productively with open-source tools, like Python in Visual Studio Code.
.NET is open source
Did you know .NET is open source? .NET is open source and cross-platform, and it’s maintained by Microsoft and the .NET community. Check it out on GitHub.
Python Data Science Day 2024: Unleashing the Power of Python in Data Analysis
Celebrate Pi Day (3.14) with a journey into data science with Python. Set for March 14, Python Data Science Day is an online event for developers, data scientists, students, and researchers who want to explore modern solutions for data pipelines and complex queries.
C# Dev Kit for Visual Studio Code
Learn how to use the C# Dev Kit for Visual Studio Code. Get details and download the C# Dev Kit from the Visual Studio Marketplace.
Visual Studio Code: C# and .NET development for beginners
Have questions about Visual Studio Code and C# Dev Kit? Watch the C# and .NET Development in VS Code for Beginners series and start writing C# applications in VS Code.
Reactor series: GenAI for software developers
Step into the future of software development with the Reactor series. GenAI for Software Developers explores cutting-edge AI tools and techniques for developers, revolutionizing the way you build and deploy applications. Register today and elevate your coding skills.
Use GitHub Copilot for your Python coding
Discover a better way to code in Python. Check out this free Microsoft Learn module on how GitHub Copilot provides suggestions while you code in Python.
Getting started with the Fluent UI Blazor library
The Fluent UI Blazor library is an open-source set of Blazor components used for building applications that have a Fluent design. Watch this Open at Microsoft episode for an overview and find out how to get started with the Fluent UI Blazor library.
Remote development with Visual Studio Code
Find out how to tap into more powerful hardware and develop on different platforms from your local machine. Check out this Microsoft Learn path to explore tools in VS Code for remote development setups and discover tips for personalizing your own remote dev workflow.
Using GitHub Copilot with JavaScript
Use GitHub Copilot while you work with JavaScript. This Microsoft Learn module will tell you everything you need to know to get started with this AI pair programmer.
Generative AI for Beginners
Want to build your own GenAI application? The free Generative AI for Beginners course on GitHub is the perfect place to start. Work through 18 in-depth lessons and learn everything from setting up your environment to using open-source models available on Hugging Face.
Use OpenAI Assistants API to build your own cooking advisor bot on Teams
Find out how to build an AI assistant right into your app using the new OpenAI Assistants API. Learn about the open playground for experimenting and watch a step-by-step demo for creating a cooking assistant that will suggest recipes based on what’s in your fridge.
What’s new in Teams Toolkit for Visual Studio 17.9
What’s new in Teams Toolkit for Visual Studio? Get an overview of new tools and capabilities for .NET developers building apps for Microsoft Teams.
Embed a custom webpage in Teams
Find out how to share a custom web page, such as a dashboard or portal, inside a Teams app. It’s easier than you might think. This short video shows how to do this using Teams Toolkit for Visual Studio and Blazor.
Get to know GitHub Copilot in VS Code and be more productive
Get to know GitHub Copilot in VS Code and find out how to use it. Watch this video to see how easy it is to start working with GitHub Copilot: just start coding and watch the AI go to work.
Customize Dev Containers in VS Code with Dockerfiles and Docker Compose
Dev containers offer a convenient way to deliver consistent and reproducible environments. Follow along with this video demo to customize your dev containers using Dockerfiles and Docker Compose.
Designing for Trust
Learn how to design trustworthy experiences in the world of AI. Watch a demo of an AI prompt injection attack and learn about setting up guardrails to protect the system.
AI Show: LLM Evaluations in Azure AI Studio
Don’t deploy your LLM application without testing it first! Watch the AI Show to see how to use Azure AI Studio to evaluate your app’s performance and ensure it’s ready to go live. Watch now.
What’s winget.pro?
The Windows Package Manager (winget) is a free, open-source package manager. So what is winget.pro? Watch this special edition of the Open at Microsoft show for an overview of winget.pro and to find out how it differs from the well-known winget.
Use Visual Studio for modern development
Want to learn more about using Visual Studio to develop and test apps? Start here. In this free learning path, you’ll dig into key features for debugging, editing, and publishing your apps.
Build your own assistant for Microsoft Teams
Creating your own assistant app is easier than you might think. Learn how in under 3 minutes! Watch a demo using the OpenAI Assistants API, the Teams AI Library, and the new AI Assistant Bot template in VS Code.
GitHub Copilot fundamentals – Understand the AI pair programmer
Improve developer productivity and foster innovation with GitHub Copilot. Explore the fundamentals of GitHub Copilot in this free training path from Microsoft Learn.
How to get GraphQL endpoints with Data API Builder
The Open at Microsoft show takes a look at using Data API Builder to easily create GraphQL endpoints. See how you can use this no-code solution to quickly enable advanced and efficient data interactions.
Microsoft, GitHub, and DX release new research into the business ROI of investing in Developer Experience
Investing in the developer experience has many benefits and improves business outcomes. Dive into our groundbreaking research (with data from more than 2000 developers at companies around the world) to discover what your business can gain with better DevEx.
Build your custom copilot with your data on Teams featuring Azure the AI Dragon
Build your own copilot for Microsoft Teams in minutes. Watch this demo, which builds Azure the AI Dragon to take your team on a cyber role-playing adventure.
Microsoft Graph Toolkit v4.0 is now generally available
Microsoft Graph Toolkit v4.0 is now available. Learn about its new features, bug fixes, and improvements to the developer experience.
Microsoft Mesh: Now available for creating innovative multi-user 3D experiences
Microsoft Mesh is now generally available, providing an immersive 3D experience for the virtual workplace. Get an overview of Microsoft Mesh and find out how to start building your own custom experiences.
Global AI Bootcamp 2024
Global AI Bootcamp is a worldwide annual event that runs throughout the month of March for developers and AI enthusiasts. Learn about AI through workshops, sessions, and discussions. Find an in-person bootcamp event near you.
Microsoft JDConf 2024
Get ready for JDConf 2024—a free virtual event for Java developers. Explore the latest in tooling, architecture, cloud integration, frameworks, and AI. It all happens online March 27-28. Learn more and register now.
Microsoft Tech Community – Latest Blogs
Leverage anomaly management processes with Microsoft Cost Management
The cloud comes with the promise of significant cost savings compared to on-premises costs. However, realizing those savings requires diligence to proactively plan, govern, and monitor your cloud solutions. Your ability to detect, analyze, and quickly resolve unexpected costs can help minimize the impact on your budget and operations, and understanding your cloud costs lets you make more informed decisions about how to allocate and manage them. But even with proactive cost management, surprises can still happen. That’s why we developed several tools in Microsoft Cost Management that let you set up thresholds and rules for the timely detection of out-of-scope changes in your cloud costs. Let’s take a closer look at some of these tools and how you can use them to discover anomalous costs and usage patterns.
Identify atypical usage patterns with anomaly detection
Anomaly detection is a powerful tool that helps you minimize unexpected charges: it identifies atypical usage patterns, like cost spikes or dips, based on your cost and usage trends, so you can take corrective action. For example, you might notice that something has changed, but you’re not sure what. Suppose you have a subscription that consumes around $100 every day, and a new service is added to the subscription by mistake, doubling the daily cost to $200. With anomaly detection, you will be notified about the steep spike in daily cost, which you can then investigate to see whether it’s an expected increase or a mistake, enabling early corrective measures.
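To make the idea concrete, here is a toy sketch of the spike-detection logic described above. It is an illustration only, not the Cost Management service; the daily cost figures are the hypothetical $100-a-day example from this section.

```shell
#!/bin/sh
# Toy spike detector: flag any day whose cost is more than 1.5x the
# previous day's cost. Sample data mirrors the example above, where a
# mistakenly added service doubles the daily spend.
costs="100 102 99 101 200 198"
day=0
prev=""
flagged=""
for c in $costs; do
  day=$((day + 1))
  if [ -n "$prev" ] && [ "$c" -gt $((prev * 3 / 2)) ]; then
    flagged="${flagged}${day} "
    echo "anomaly on day $day: cost jumped from \$$prev to \$$c"
  fi
  prev=$c
done
```

Running this prints an alert for day 5 only; the dip back to $198 on day 6 is within the threshold, which is the kind of judgment the real service makes from your historical trend.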
You can also embed time-series anomaly detection capabilities into your apps to identify problems quickly. AI Anomaly Detector ingests time-series data of all types and selects the best anomaly detection algorithm for your data to ensure high accuracy. Detect spikes, dips, deviations from cyclic patterns, and trend changes through both univariate and multivariate APIs. Customize the service to detect any level of anomaly. Deploy the anomaly detection service where you need it—in the cloud or at the intelligent edge.
Use Alerts to get notified when an anomalous usage change is detected
You can subscribe to anomaly alerts to be automatically notified when an anomalous usage change is detected, with a subscription-scope email displaying the underlying resource groups that contributed to the anomalous behavior. Alerts can also be set up for your Azure reserved instances usage to receive email notifications, so you can take remedial action when your reservations have low utilization.
Here’s an example of how to create an anomaly alert rule:
Select the scope as the subscription which needs monitoring.
Navigate to the ‘Cost alerts’ page in Cost Management. Select ‘Anomaly’ as the Alert type.
Specify the recipient email IDs.
Click on ‘Create alert rule.’
If an anomaly is detected, you will receive alert emails that give you basic information to help you start your investigation.
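The same alert rule can also be created programmatically through the Cost Management scheduled-actions API. The sketch below builds a request body and prints the call it would make; the subscription ID, view ID, API version, and several property names are illustrative assumptions, so verify them against the Cost Management Scheduled Actions API reference before use.

```shell
#!/bin/sh
# Sketch: create an anomaly alert rule ("InsightAlert" scheduled action)
# via the management API instead of the portal steps above.
# SUB_ID is a placeholder; field names should be checked against the
# current Cost Management Scheduled Actions API reference.
SUB_ID="00000000-0000-0000-0000-000000000000"

cat > anomaly-alert.json <<EOF
{
  "kind": "InsightAlert",
  "properties": {
    "displayName": "Daily anomaly alert",
    "status": "Enabled",
    "viewId": "/subscriptions/${SUB_ID}/providers/Microsoft.CostManagement/views/ms:DailyAnomalyByResourceGroup",
    "notification": {
      "to": ["finops-team@example.com"],
      "subject": "Cost anomaly detected"
    },
    "schedule": { "frequency": "Daily" }
  }
}
EOF

# The PUT is echoed rather than executed, since it needs real credentials:
echo "az rest --method put" \
  "--url https://management.azure.com/subscriptions/${SUB_ID}/providers/Microsoft.CostManagement/scheduledActions/dailyAnomalyAlert?api-version=2023-11-01" \
  "--body @anomaly-alert.json"
```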
Get deeper insights with smart views
Use smart views in Cost Analysis to view anomaly insights that were automatically detected for each subscription. To drill into the underlying data for something that has changed, select the Insight link. You can also create custom views for anomalous usage detection such as unused costs from Azure reserved instances and savings plans that could point to further optimization for specific workloads.
You can also group related resources in Cost Analysis and smart views. For example, group related resources, like disks under virtual machines or web apps under App Service plans, by adding a “cm-resource-parent” tag to the child resources with a value of the parent resource ID. Or use Charts in Cost Analysis smart views to view your daily or monthly cost over time.
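As a sketch of the cm-resource-parent grouping just described, the command below tags a data disk with its parent VM’s resource ID. The resource IDs are placeholders, and the command is echoed rather than run so the sketch is safe without Azure credentials; with a real login, running the printed az command applies the tag.

```shell
#!/bin/sh
# Sketch: group a disk under its parent VM in Cost Analysis by tagging
# the child resource with cm-resource-parent = <parent resource ID>.
# Both IDs below are placeholders for illustration.
VM_ID="/subscriptions/SUB/resourceGroups/rg-demo/providers/Microsoft.Compute/virtualMachines/vm-web01"
DISK_ID="/subscriptions/SUB/resourceGroups/rg-demo/providers/Microsoft.Compute/disks/vm-web01-data"

# --is-incremental appends the tag without replacing existing tags.
cmd="az resource tag --ids $DISK_ID --tags cm-resource-parent=$VM_ID --is-incremental"
echo "$cmd"
```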
Use Copilot for AI-based assistance
For quick identification and analysis of anomalies in your cloud spend, try the AI-powered Copilot in Cost Management, available in preview in the Azure portal. If a cost doubles, you can ask Copilot natural-language questions to understand what happened and get the insights you need faster. You don’t need to be an expert in navigating the Cost Management UI or analyzing the data; you simply let the AI do it for you. For example, you can ask, “why did my cost increase this month?” or “which service led to the increase in cost this month?” Copilot will then provide a breakdown by categories of spend and their percentage impact on your total invoice. From there, you can use the generated suggestions to investigate your bill further.
Learn more about streamlining anomaly management
Optimizing your cloud spend with Azure becomes much easier when you streamline your anomaly management processes with tools like anomaly detection, alerts, and smart views in Microsoft Cost Management. You can learn even more about using FinOps best practices to manage anomalies in your resource usage at aka.ms/finops/solutions.
Zero-trust Security for Windows Container-based application with Calico
Hello! We would like to feature our partners from Tigera, with whom we teamed up to co-author this blog on zero-trust security for Windows container-based applications with Calico. The blog was co-authored by Dhiraj Sehgal and Jen Luther Thomas.
Enterprises are increasingly integrating Windows containers into their Kubernetes workflows, and much like with Linux containers, they are looking to strengthen their Windows container-based applications’ security posture by explicitly authorizing and verifying every communication request and minimizing trust assumptions. Zero-trust workload security restricts communication between pods and services at a very fine-grained level, resulting in multiple benefits that include:
Enhanced Security: Ensures each pod has limited and authorized communication access, preventing potential threats from spreading across the cluster.
Compliance: Achieves compliance requirements by enforcing strict access controls and data isolation.
Isolation of Sensitive Data: Isolates sensitive data from other less sensitive workloads to reduce the risk of unauthorized access.
Workload Communication Visibility: Provides better visibility into workload-workload communication and security gaps, including network security policies.
As the number of Windows container-based workloads and associated pods running in the cluster grows, building security posture requires zero-trust workload access security with the following:
Egress access controls: Secure access from individual pods running Linux or Windows containerized workloads in a Kubernetes cluster to external resources, including cloud services, databases, and third-party APIs.
DNS Policies: Enforce DNS policies at the source pod so that fully qualified domain names (FQDN/DNS) can be used to allow access from a pod or set of pods (via label selector) to external resources—eliminating the need for a firewall rule or equivalent.
Global and Namespaced Network Sets: Automatically update access controls for all IPs described by the CIDR notation using IP subnet/CIDR in security policies.
Identity-Aware Microsegmentation: Segment workloads using workload identities to achieve workload isolation and limit lateral communication.
Application-Layer Policy: Apply security controls at the application level to secure pod-to-pod traffic, including HTTP methods and URL paths. Eliminate the operational complexity of deploying an additional service mesh.
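As an illustration of the DNS policy capability above, the sketch below writes a Calico policy that allows pods labeled app == "cartservice" to reach an external API by fully qualified domain name. The namespace, tier, domain, and labels are hypothetical, and the destination "domains" field requires Calico Cloud or Calico Enterprise rather than open-source Calico.

```shell
#!/bin/sh
# Sketch: FQDN-based egress policy (Calico Cloud/Enterprise).
# All names below are placeholder values for illustration.
cat > allow-egress-to-payments-api.yaml <<'EOF'
apiVersion: projectcalico.org/v3
kind: NetworkPolicy
metadata:
  name: security.allow-egress-to-payments-api
  namespace: online-boutique
spec:
  tier: security
  selector: app == "cartservice"
  types:
    - Egress
  egress:
    - action: Allow
      protocol: TCP
      destination:
        domains:
          - "api.payments.example.com"
        ports:
          - 443
EOF
echo "Apply with: kubectl apply -f allow-egress-to-payments-api.yaml"
```

Because the domain is matched at the source pod, no external firewall rule is needed, which is the point made in the DNS Policies item above.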
Let’s go through an example of building zero-trust security for the demo application Online Boutique (previously known as Hipster Shop), an 11-microservice demo application, running in an Azure Kubernetes Service environment and connected to Calico Cloud. After Online Boutique is deployed, the associated microservices, including RecommendationService and ProductCatalogService, as shown below, are monitored for breakdowns, timeouts, and slow performance.
The deployment looks like this:
Figure: Online Boutique microservices architecture
Zero-trust workload access control for CartService:
We will explore two scenarios for securing CartService, which carries the products a customer has selected (stored in the Redis database) to checkoutservice. CartService is powered by an external third-party service, so it needs to be secured, with access limited exclusively to checkout to prevent tampering with the contents of the cart.
Scenario 1: Building the security policy for CartService
Whether or not the DevOps engineer understands the layout of the microservice architecture or the associated label schema for those workloads, once the application is introduced into the cluster, the team can use Calico Cloud’s ‘Recommend a policy’ feature to automatically highlight flows between workloads, as seen below:
Policy recommendation aggregates the metadata of those flows to understand their full context and suggests a policy that allow-lists traffic between cartservice and checkoutservice based explicitly on port, protocol, and label key-value pair matches.
Users then assess the impact of the recommended policy using Preview and/or Stage to observe the effect on traffic without impacting actual traffic flows.
The preview option comes in handy because teams can collectively understand the impact from their respective roles, whether developer, security, DevOps, or network engineer. A DevOps engineer or developer can enforce the policy after understanding its impact on network flows. Further, they can download the Kubernetes CR YAML and check it into their Git repo to apply it as part of their code deployment. Even if the environment is rebuilt, the policies, being part of the code, are applied directly to the services.
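A recommended allow-list policy of this shape might look roughly like the sketch below. This assumes Calico’s `projectcalico.org/v3` CRDs; the tier name and port are illustrative assumptions (Online Boutique’s cartservice conventionally listens on gRPC port 7070), and the real values would come from the actual policy recommendation.

```yaml
# Sketch of an allow-list policy between checkoutservice and cartservice.
# Tier name and port are assumptions for illustration only.
apiVersion: projectcalico.org/v3
kind: NetworkPolicy
metadata:
  name: application.allow-checkout-to-cart
  namespace: online-boutique
spec:
  tier: application
  selector: app == "cartservice"
  types:
    - Ingress
  ingress:
    - action: Allow
      protocol: TCP
      source:
        selector: app == "checkoutservice"   # label key-value pair match
      destination:
        ports:
          - 7070   # cartservice gRPC port (assumed)
```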
Once the zero-trust security policy is enforced between trusted workloads of cartservice and checkoutservice, the user can also create a default-deny policy at the end of the namespace to deny unwanted lateral connections.
Scenario 2: How to implement a security policy (if a threat is detected and CartService is vulnerable)
If the CartService is vulnerable due to poor policy design, and an identified threat is able to probe that workload, DevOps can create a quarantine policy to log and deny those flows at the earliest possible stage of the policy tier board in Calico Cloud.
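A quarantine policy of this kind is conventionally placed in a high-priority tier so it is evaluated before application policies. A minimal sketch, assuming Calico’s v3 CRDs and a hypothetical `quarantine` label applied to the compromised workload:

```yaml
# Sketch of a quarantine policy that logs and then denies all traffic
# for any workload labeled quarantine="true". The tier name and order
# value are illustrative assumptions.
apiVersion: projectcalico.org/v3
kind: GlobalNetworkPolicy
metadata:
  name: security.quarantine
spec:
  tier: security
  order: 100
  selector: quarantine == "true"
  types:
    - Ingress
    - Egress
  ingress:
    - action: Log    # record the flow for investigation
    - action: Deny   # then drop it
  egress:
    - action: Log
    - action: Deny
```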
Implement identity-aware microsegmentation for Frontend and ShippingService
Frontend talks to ShippingService, but must do so under organizational rules. ShippingService stores mailing information for all customers. Frontend’s purpose is to provide customer login and to interact with other services, and it changes often with newer product availability and existing product inventory. The two services have distinct security requirements because they are owned by different teams and contain different levels of confidential information. Let’s simplify this to the next figure, where frontend and backend are in different zones with controlled communication between them.
Figure 1: Storefront microservices architecture
How to make sure that ‘frontend’ and ‘backend’ microsegmentation happens according to organizational requirements
In this scenario, DevOps can create a zone-based architecture via a security policy, similar to traditional firewall solutions. The frontend workload is given a label match of `fw-zone=dmz` (demilitarized zone). Any workload with the DMZ label match can receive ingress traffic from the public internet and can then relay those flows to workloads in a trusted zone (i.e., service-1).
The “trusted” zone is responsible for controlling flows between microservices within that zone, as well as securely allowing traffic to and from the DMZ and “restricted” zones. The team can implement zero trust by only allowing traffic between these pods explicitly, based on label match, port, and protocol. That way, if a new workload is introduced into this namespace, it would need to match all three of the above contexts for the packet to be allowed.
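A trusted-zone ingress rule matching on all three contexts (label, port, and protocol) could be sketched as follows; the namespace and port number are assumptions for illustration.

```yaml
# Sketch of a zone-based ingress policy: pods in the trusted zone accept
# TCP traffic only from DMZ-labeled workloads on an expected port.
# The namespace and port are hypothetical.
apiVersion: projectcalico.org/v3
kind: NetworkPolicy
metadata:
  name: allow-dmz-to-trusted
  namespace: storefront
spec:
  selector: fw-zone == "trusted"
  types:
    - Ingress
  ingress:
    - action: Allow
      protocol: TCP
      source:
        selector: fw-zone == "dmz"   # only DMZ workloads may connect
      destination:
        ports:
          - 8080
```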
Figure 2: Kubernetes’ insecure flat network design exposes workloads to rogue workloads
Finally, the team implements a “restricted” zone that ensures workloads handling sensitive data, such as databases or log event handlers, are only able to talk to workloads in the trusted zone. This applies to both ingress and egress traffic. Under no circumstance could a rogue workload in our cluster talk to this database, nor could the database interface with any third-party services/APIs. The only way it can talk to any external IP is via this secure zone-based architecture.
Conclusion
Windows on AKS can be extended with partner solutions, just like Linux. By utilizing Calico’s recommended policies, policy board, and tiering, teams can reduce the attack surface of deployed Windows-based containers in a namespace and implement microsegmentation to prevent lateral movement of threats across different workloads within a namespace, strengthening their application’s security posture.
Try it yourself in this self-paced workshop.
Microsoft Tech Community – Latest Blogs – Read More
Inclusive and productive Windows 11 experiences for everyone
Today we begin to release new features and enhancements to Windows 11 Enterprise—features that offer a more intuitive and user-friendly experience for both workers and IT admins. Most of these new features will be enabled by default in the March 2024 optional non-security preview release for all editions of Windows 11, versions 23H2 and 22H2. IT admins who want to get the new Windows 11 features can enable optional updates for their managed devices via policy.
New in accessibility
One of the most exciting areas of enhancement involves voice access, a feature in Windows 11 that enables everyone, including people with mobility disabilities, to control their PC and author text using only their voice and without an internet connection. Voice access now supports multiple languages, including French, German, and Spanish. People can create custom voice shortcuts to quickly access frequently used commands. And, voice access now works across multiple displays with number and grid overlays that help people easily switch between screens using only voice commands.
Enhancements to Narrator, the built-in screen reader, are also coming. You’ll be able to preview natural voices before downloading them and utilize a new keyboard command that allows you to more easily move between images on a screen. Narrator’s detection of text in images, including handwriting, has been improved, and it now announces the presence of bookmarks and comments in Microsoft Word.
If you’re interested in learning about Windows 11 accessibility features, please check out the following resources:
Inside Windows 11 accessibility settings and tools
Skilling snack: Accessibility in Windows 11
Skilling snack: Voice access in Windows
Enhanced sharing
Sharing content is now easier with updates to Windows share and Nearby Share. The Windows share window now displays different apps for “Share using” based on the account you use to sign in. Nearby Share has also been improved, with faster transfer speeds for people on the same network and the ability to give your device a friendly name for easier identification when sharing.
Casting
Casting, the feature that allows you to wirelessly send content from your device to a nearby display, has been enhanced. You will receive notifications suggesting the use of Cast when multitasking, and the Cast menu in quick settings now provides more help in finding nearby displays and fixing connections.
Snap layouts
Snap layouts, the feature that helps you organize the apps on your screen, now allows you to hover over the minimize or maximize button of an app to open the layout box, and to view various layout options. This makes it easier for you to choose the best layout for the task at hand.
New Windows 365 features now available
Windows 365 now offers new features, including a new, dedicated mode for Windows 365 Boot that allows you to sign in to your Cloud PC using passwordless authentication. A fast account switching experience has also been added. For Windows 365 Switch, which lets you sign in and connect to your Cloud PC using Windows 11 Task view, it’s now easier to disconnect from your Cloud PC, and desktop indicators help you see at a glance whether you are on your Cloud PC or local PC.
For more information, see today’s post, New Windows 365 Boot and Switch features now available.
Unified enterprise update management
We are also releasing enhancements to Windows Autopatch in direct response to your feedback. Several new and upcoming enhancements give you more control, extend the value of your investments, and help you streamline update management, including:
The ability to import Update rings for Windows 10 and later (preview)
Customer defined service outcomes (preview)
Improved data refresh speed and reporting accuracy
Looking ahead, one of the most noticeable changes in Windows Autopatch will be a simplified update management interface that will make the update ecosystem easier to understand. We are unifying our update management offering for enterprise organizations—bringing together Windows Autopatch and the Windows Update for Business deployment service into a single service that enterprise organizations can use to update and upgrade Windows devices as well as update Microsoft 365 Apps, Microsoft Teams, and Microsoft Edge.
We invite you to read our ongoing Windows Autopatch updates in the Windows IT Pro Blog to find out more about richer functionality planned for Windows Autopatch. For the latest, see What’s new in Windows Autopatch: February 2024.
Get familiar with the latest innovations, including Copilot, creator apps, and more
Today’s announcement from Yusuf Mehdi offers more details about new innovations coming to Windows 11 including availability and rollout plans. You can find a summary of all the new enhancements and features in the Windows Update configuration documentation and, as always, stay up to date on rollout plans and known issues (identified and resolved) via the Windows release health dashboard.
Continue the conversation. Find best practices. Bookmark the Windows Tech Community, then follow us @MSWindowsITPro on X/Twitter. Looking for support? Visit Windows on Microsoft Q&A.
What is AI? Jared Spataro at the Global Nonprofit Leaders Summit
Jared Spataro, Microsoft Corporate Vice President, AI at Work, presented an engaging keynote at the Global Nonprofit Leaders Summit that left everyone amazed and optimistic about the abilities and simplicity of AI for everyone.
Watch Jared’s session for a walk through that shows how Microsoft Copilot can be a powerful tool for productivity and creativity. From the fun and fantastic, to the practical and powerful, Jared queries Copilot in a real-time demo using his own workstreams in Outlook, Teams, and more:
Can elephants tow a car?
What will the workplace of the future look like?
Can you write a Python script to extract insights from this data?
Can you summarize and prioritize the latest emails from my boss?
Jared shares important tips for prompt engineering, previews the new “Sounds like me” feature to co-create responses in your own voice, and talks about the value of AI being “usefully wrong.”
And he reminds us to say please and thank you.
What did you learn from Jared’s session? How are you using Copilot to enhance creativity and productivity?
What’s New in Copilot for Microsoft 365
Welcome to the first edition of What’s new in Copilot for Microsoft 365. We are continuing to enhance Copilot to provide deeper experiences for users and tighter integration with your organization’s data to unlock even more capabilities. Whether you’re a Microsoft 365 admin for a large enterprise or smaller company or someone who uses Copilot for Microsoft 365 for their daily work, every month we’ll highlight updates to let you know about new and upcoming features and where you can find more information to help make your Copilot experience a great one. In addition to these monthly posts, we’ll continue to provide updates through our usual message center posts and on our public roadmap.
Today, we are highlighting Copilot support in 17 additional languages, expanded resources and coming features in Copilot Lab, the updated Copilot experience in Teams, Copilot in the Microsoft 365 mobile app, and a new feature that provides a single entry point to help you create content from scratch. We’ll also take a look at updates to Copilot in OneDrive, Stream, and Forms plus a new feature that generates content summaries when you share files with coworkers. Finally, we’ll share a bit on what’s new in the Copilot for Microsoft 365 Usage report for admins. Let’s take a closer look at what’s new this month:
Experience Copilot support for more languages
Begin your Copilot journey and build new skills with Copilot Lab
Copilot now available in the Microsoft 365 mobile app
Introducing Copilot in Forms
Extract information quickly from your files with Copilot in OneDrive
Include quick summaries when sharing documents
Get instant video summaries and insights with Copilot in Stream
Try new ways of working with Help me create
Draft emails quicker and get coaching tips for your messages with Copilot in classic Outlook for Windows
Experience the new Copilot in Microsoft Teams
Check out the improved usage reports for Microsoft Copilot in the admin center
Catch up on the Copilot for Microsoft 365 Tech Accelerator
Experience Copilot support for more languages
We are adding support for an additional 17 languages, further expanding access to Copilot worldwide. We will start rolling out Arabic, Chinese Traditional, Czech, Danish, Dutch, Finnish, Hebrew, Hungarian, Korean, Norwegian, Polish, Portuguese (Portugal), Russian, Swedish, Thai, Turkish, and Ukrainian over March and April. Copilot is already supported in the following languages: English (US, GB, AU, CA, IN), Spanish (ES, MX), Japanese, French (FR, CA), German, Portuguese (BR), Italian, and Chinese Simplified. Check the public roadmap and message center to track rollout status.
Copilot in Excel (preview) is currently supported in English (US, GB, AU, CA, IN) and will be supported in Spanish (ES, MX), Japanese, French (FR, CA), German, Portuguese (BR), Italian, and Chinese Simplified starting in March.
Begin your Copilot journey and build new skills with Copilot Lab
Copilot Lab helps users get started with the art of prompting and helps organizations with onboarding and adoption by providing a single experience that meets Copilot users where they are in their journey. Today, we’re expanding Copilot Lab by transforming the current prompts library into a comprehensive learning resource that helps everyone begin their Copilot journey with confidence and to take greater advantage of Copilot in their daily work.
Start your Copilot journey with ease. We’ve learned from our earliest Copilot adopters that working with generative AI requires new skills and habits. Copilot Lab already shows up in Copilot for Microsoft 365, Word, PowerPoint, Excel, and OneNote via the small notebook icon that suggests relevant prompts to inspire you. Now, we have consolidated our best resources, training videos, ready-made prompts, and inspiration to make Copilot Lab the single resource to help you get started. To do this, we’ve brought together our own internal best practices, insights from our earliest customers, findings from the Microsoft Research team, and thought leadership published on WorkLab.
Achieve more together by sharing your favorite prompts. With Copilot Lab, we are making it even easier to create, save, and share your favorite prompts with colleagues inside your organization. Now you can share prompts with colleagues to prepare for a customer meeting or to generate ideas for a new product launch. And leaders across your organization can showcase how they’re using Copilot by sharing their favorite prompts to save time or tackle the task at hand, helping improve personal and team productivity and encouraging community-centric learning and adoption. This feature is integrated into the Copilot Lab website, and in-app experiences will begin rolling out by this summer.
You can access Copilot Lab today at copilot.cloud.microsoft/prompts or directly in app by selecting the notebook icon next to the Copilot prompt window.
Copilot now available in the Microsoft 365 mobile app
We’re extending Copilot to the Microsoft 365 mobile app and to the Word and PowerPoint mobile apps. With the new Microsoft 365 app look and feel, you can easily find Copilot alongside your content, apps, and shortcuts. You can use it to:
Bring your content into Copilot to complete tasks on the go. Summarize documents, translate, explain, or ask questions, and have your answer grounded in the content you select.
Start generating content wherever you work based on your ideas and existing information, and hand over to Microsoft 365 mobile apps to continue working.
Interact with Copilot in Word mobile and PowerPoint mobile to comprehend content better and skim through only the most important slides on the go (requires a Copilot license).
The Microsoft 365 mobile app complements the Copilot mobile app rolled out earlier this month, and licensed users can continue to use the Copilot mobile app to have responses grounded in both web or work data. IT admins can easily deploy both the Microsoft 365 mobile app and the Copilot mobile app to corporate devices using Microsoft Intune or a third-party tool, or users can simply download the Microsoft 365 mobile app on any supported device and sign in.
Copilot integration in the Microsoft 365 mobile app and the Word and PowerPoint mobile apps is rolling out now. You can learn more here.
Create compelling surveys, polls, and forms with Copilot in Forms
Use Copilot to simplify the process of creating surveys, polls, and forms, saving you time and effort. Go to forms.microsoft.com, select New, and tell Copilot your topic, length, and any additional context. Copilot will provide relevant questions and suggestions, and then you can refine the draft by adding extra details, editing text, or removing content. Once you’ve created a solid draft with Copilot, you can then customize the background with one of the many Forms style options. With Copilot in Forms, you’ll effortlessly create well-crafted forms that capture your audience’s attention, leading to better response rates.
Copilot in Forms will be available in March. You can learn more here.
Extract information quickly from your files with Copilot in OneDrive
Copilot in OneDrive gives you instant access to information contained deep within your files. Initially available from the OneDrive web experience, Copilot will provide you with smart and intuitive ways to interact with your documents, presentations, spreadsheets, and files. You can use Copilot in OneDrive to:
Get information from your files: Ask questions about your content using natural language, and Copilot will fetch the information from your files, saving you the work and time of manually searching for what you need.
Generate file summaries: Need a quick overview of a file? Copilot can summarize the contents of one or multiple files, offering you quick insights without having to even open the file.
Find files using natural language: Find files in new ways by using Copilot prompts such as “Show me all the files shared with me in the past week” or “Show files that Kat Larson has commented in.”
Copilot in OneDrive will be available in late April on OneDrive for Web. You can learn more here.
Video: Copilot in OneDrive responding to a prompt to extract information from a collection of resumes.
Include quick summaries when sharing documents
Add Copilot-generated summaries when you share documents with your colleagues. These summaries, included in the document sharing notification, give your recipients immediate context around a document and a quick overview of its content without needing to open the file. Sharing summaries helps users prioritize work, increases engagement, and reduces cognitive burden.
Sharing summaries will be available in March 2024, starting when sharing a Word document from the web, with support in the desktop client and the mobile app later this year. Learn more here.
Get instant video summaries and insights with Copilot in Stream
By using Copilot in Microsoft Stream, you can quickly get the information you need about videos in your organization, whether you’re viewing the latest Teams meeting recording, town hall, product demo, how-to, or onsite videos from frontline workers. Copilot helps you get what you need from your videos in seconds. You can use it to:
Summarize any video and identify relevant points you need to watch
Ask questions to get insights from long or detailed videos
Locate when people, teams, or topics are discussed so you can jump to that point in the video
Identify calls to action and where you can get involved to help
Copilot in Stream will be available in late April. You can learn more here.
Try new ways of working with Help me create
In March, we’re rolling out a new Copilot capability in the Microsoft 365 web app that helps you focus on the substance of your content while Copilot suggests the best format: a white paper, a presentation, a list, an icebreaker quiz, and so on. In the Microsoft 365 app at microsoft365.com, simply tell Help me create what you want to work on and it will suggest the best app for you and give you a boost with generative AI suggestions. Learn more here.
Draft emails quicker and get coaching tips for your messages with Copilot in classic Outlook for Windows
Customers of the new Outlook for Windows have been enjoying Copilot features like draft, coaching, and summary, which we announced last year. Since November last year, summary by Copilot has also been available in classic Outlook for Windows. Soon, draft and coaching will be coming to classic Outlook too.
Draft with Copilot helps you reduce time spent on email by drafting new emails or responses for you with just a short prompt that explains what you want to communicate. Because you are always in control with Copilot, you can choose to adjust the proposed draft in length and tone or ask Copilot to generate a new message – and you can always go back to the previous options if you prefer.
Coaching by Copilot can help you get your point across in the best possible way, coaching you on tone (for example, too aggressive, too formal, and so on), reader sentiment (how a reader might perceive your message), and clarity. Copilot can provide coaching for drafts it created or drafts you wrote yourself.
Coaching will start rolling out in early March and draft by Copilot will start rolling out in late March.
Experience the new Copilot in Microsoft Teams
We have recently enabled a new Copilot experience in Microsoft Teams that offers better prompts, easier access, and more functionality than the previous version. Copilot in Teams will be automatically pinned above your chats, and you can use it to catch up, create, and ask anything related to Microsoft 365. Learn more about the new Copilot experience in Teams here.
Check out the improved usage reports for Microsoft Copilot in the admin center
The Microsoft 365 admin center Usage reports offer a growing set of usage insights across your Microsoft 365 cloud services. Among these reports, the Copilot for Microsoft 365 Usage report (Preview) is built to help Microsoft 365 admins plan for rollout, inform adoption strategy, and make license allocation decisions.
The report now includes usage metrics for Microsoft Copilot with Graph-grounded chat. This allows you to see how Chat compares with usage of Copilot in other apps like Teams, Outlook, Word, PowerPoint, Excel, OneNote and Loop. You can review the enabled and active user time series chart to assess how usage is trending over time. The new metric has been added retroactively dating back to late November of 2023. To access the report, navigate to Reports > Usage and select the Copilot for Microsoft 365 product report. Learn more here.
Learn more about the use of Copilot for Microsoft 365 in the Financial Services Industry
Today we are releasing the new white paper for the financial services industry (FSI) with information about use cases and benefits for the FSI, information about risks and regulations, guidance for managing and governing a generative AI solution, and more information about how to prepare for Copilot. Read the paper here.
Catch up on the Copilot for Microsoft 365 Tech Accelerator
In case you missed it, you can catch up on all the sessions from the Copilot for Microsoft 365 Tech Accelerator via recordings on the event page. The event covered a range of topics including how Copilot works, how to prepare your organization for Copilot, strategies for deploying, driving adoption, and measuring impact, and deep dives on how to extend Copilot with Copilot Studio and Graph connectors. Chat Q&A is open through Friday, March 1, 12:00 P.M. PT, so watch the recordings and get any questions you might have answered.
Did you know? The Microsoft 365 Roadmap is where you can get the latest updates on productivity apps and intelligent cloud services. Check out what features are in development or coming soon on the Microsoft 365 Roadmap. All future rollout dates assume the feature availability on the Current Channel. Customers should expect these features to be available on the Monthly Enterprise Channel the second Tuesday of the upcoming month.
Azure DDoS Protection – SecOps Deep Dive
This blog is written in collaboration with @SaleemBseeu
Introduction:
Azure DDoS Protection is a security solution offered by Microsoft Azure to protect applications and resources from Distributed Denial of Service (DDoS) attacks. A DDoS attack attempts to overwhelm a target application or service by flooding it with a massive volume of malicious traffic, thereby rendering it unavailable to legitimate users. Azure DDoS Protection addresses these concerns by providing advanced mitigation capabilities and ensuring the availability of resources.
Some of the key features of Azure DDoS Protection include:
Adaptive Tuning – Adaptive tuning helps set up protection policies tuned to your application’s traffic profile. It automatically learns a baseline representing your application’s posture in peacetime and sets mitigation thresholds. The profile adjusts as traffic changes over time.
Attack Analytics and Metrics – Attack analytics provides detailed reports in five-minute increments during an attack, and a complete summary after the attack ends. We can stream these mitigation flow logs to Microsoft Sentinel or an offline security information and event management (SIEM) system for near real-time monitoring during an attack. On top of this, alerts can be configured at the start and stop of an attack, and over the attack’s duration, using built-in attack metrics.
DDoS Rapid Response – During an active attack, Azure DDoS Protection customers have access to the DDoS Rapid Response (DRR) team, who can help with attack investigation during an attack and post-attack analysis.
Cost Guarantee – Receive data-transfer and application scale-out service credit for resource costs incurred as a result of documented DDoS attacks.
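Once the mitigation flow logs above are streamed to a Log Analytics workspace, they can be explored with KQL. A sketch of such a query follows; the table and column names assume the AzureDiagnostics schema and may differ in your workspace.

```kusto
// Sketch: top talkers seen in DDoS mitigation flow logs, in 5-minute bins.
// Assumes diagnostic logs are routed to Log Analytics; column names such
// as sourcePublicIpAddress_s follow the AzureDiagnostics schema and may vary.
AzureDiagnostics
| where Category == "DDoSMitigationFlowLogs"
| summarize FlowCount = count()
    by SourceIP = sourcePublicIpAddress_s, Protocol = protocol_s, bin(TimeGenerated, 5m)
| order by FlowCount desc
```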
SecOps Deep Dive Investigation:
In this blog, we will focus on how to investigate a DDoS attack using logs, metrics, and newly built KQL queries.
Prerequisites:
To initiate an investigation into a DDoS attack, we must establish the following prerequisites. More information on configuring the prerequisites can be found here.
Create a DDoS Protection Plan.
Associate the DDoS Protection Plan to an existing Virtual Network.
Set Up Diagnostic Logging for the respective Public IP resource.
Simulate a DDoS Attack using one of our Simulation Partners. For more information on attack simulation, refer to this documentation.
Once the attack simulation is complete, we can look into the following metrics, available under a public IP resource, to better understand the attack patterns.
Under DDoS Attack or Not: This metric indicates whether the public IP resource is currently under a DDoS attack. It can also be used to set up an alert that notifies you of DDoS attacks via email and other available options. As shown here, the metric changed from 0 to 1 for the duration of the attack and remained 0 the rest of the time.
Inbound SYN Packets to trigger DDoS Mitigation: This metric provides the threshold of inbound SYN packets that triggers DDoS mitigation. The threshold is unique to each public IP resource, depending on the application’s average traffic and the adaptive tuning functionality of Azure DDoS Protection. In this case, the threshold is 10k SYN packets per second.
Inbound TCP Packets DDoS: This metric gives the number of inbound TCP packets during a DDoS attack. In this case, 49.30k packets per second.
Inbound TCP Packets Dropped DDoS: This metric gives the number of TCP packets dropped out of the total incoming TCP packets. In this case, almost all of the 49.30k packets per second were dropped.
Inbound TCP Packets Forwarded DDoS: This metric gives the number of TCP packets forwarded to the service or application. In this case, only about 19 packets were forwarded; all the rest were dropped.
In this way we can utilize the Public IP resource metrics to get deeper insights into the DDoS attacks.
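To make the relationship between these metrics concrete, here is a minimal Python sketch that derives the drop rate from the sample figures quoted above (the values are from this example attack, not a live metric query):

```python
# Derive the mitigation drop rate from the public IP metrics quoted above.
# These figures are the sample values from this attack, not live metric reads.
inbound_tcp_pps = 49_300      # Inbound TCP Packets DDoS
forwarded_pps = 19            # Inbound TCP Packets Forwarded DDoS
dropped_pps = inbound_tcp_pps - forwarded_pps

drop_rate = dropped_pps / inbound_tcp_pps
print(f"Dropped {dropped_pps} pps ({drop_rate:.2%} of inbound traffic)")
```

This kind of quick arithmetic is useful when confirming that nearly all attack traffic was scrubbed before reaching the application.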
Queries for DDoS Mitigation Trends:
To make the attack patterns easier to understand, our team has developed two new KQL queries that provide detailed information on total packet trends and attack vectors, as shown below. These queries are available in our network security GitHub repository.
To get the trend of the total and dropped packets for each public IP address:
To get information on DDoS attack mitigation duration and attack vectors:
As demonstrated earlier, the two queries yield essential insights into recent attacks occurring within a specified time frame. For a deeper analysis of these attacks, we can explore the following three log categories:
AzureDiagnostics | where Category == "DDoSProtectionNotifications": This log category furnishes details about the initiation and cessation of DDoS mitigation. These logs serve as a basis for configuring alerts to notify the Security Operations Center (SOC) analyst as necessary.
AzureDiagnostics | where Category == "DDoSMitigationReports": Within this category, you’ll find comprehensive post-mitigation reports and incremental updates, generated every five minutes during an ongoing DDoS attack. These reports encompass critical information such as packet counts, attack types, protocols, and details about the attacker’s source. To get summarized information, we can also use the queries from GitHub.
AzureDiagnostics | where Category == "DDoSMitigationFlowLogs": This log category provides a granular view of each packet encountered during an attack. It includes crucial data such as packet forwarding or dropping status, along with specifics like source IP, destination IP, port, and protocol.
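If you export these AzureDiagnostics records from Log Analytics, you can also post-process them in Python. The sketch below groups a few sample records by category; the record shapes and field names are illustrative assumptions, not the exact AzureDiagnostics schema, so verify the column names in your own workspace:

```python
from collections import Counter

# Sample exported AzureDiagnostics rows. The shape is illustrative only --
# check your Log Analytics workspace for the exact column names.
records = [
    {"Category": "DDoSProtectionNotifications", "type_s": "MitigationStarted"},
    {"Category": "DDoSMitigationFlowLogs", "action_s": "Dropped"},
    {"Category": "DDoSMitigationFlowLogs", "action_s": "Dropped"},
    {"Category": "DDoSMitigationFlowLogs", "action_s": "Forwarded"},
    {"Category": "DDoSMitigationReports", "attackVectors_s": "TCP SYN flood"},
]

# Count records per log category, then break down flow-log actions.
by_category = Counter(r["Category"] for r in records)
flow_actions = Counter(
    r["action_s"] for r in records if r["Category"] == "DDoSMitigationFlowLogs"
)
print(by_category)
print(flow_actions)
```

The same grouping logic applies whether the rows come from a CSV export or an API query result.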
In this case, our application is situated behind an Azure Application Gateway WAF. Once we have conducted a comprehensive log analysis, we can proceed to evaluate the health of our application using the metrics furnished by the Application Gateway. These metrics offer detailed insights during the attack time, including but not limited to the metrics listed below:
Failed Requests – Count of failed requests the Application Gateway has served.
Throughput – Number of bytes per second the Application Gateway has served.
Backend First Byte Response Time – Approximates the processing time of the backend server.
In addition to the details provided earlier, we have a predefined workbook that provides comprehensive insights into historical DDoS attacks targeting specific sets of public IP resources. This consolidated dashboard outlines critical attack information, including recent incidents, protocols involved, drop reasons, and countries of origin.
Conclusion:
Azure DDoS protection provides a powerful shield for your infrastructure, helping you to defend against DDoS attacks. By investigating the telemetry, and using the provided metrics and logs, you can gain a deeper understanding of the nature of DDoS attacks and take appropriate action to protect your resources.
What is AI? Jared Spataro at the Global Nonprofit Leaders Summit
Jared Spataro, Microsoft Corporate Vice President, AI at Work, presented an engaging keynote at the Global Nonprofit Leaders Summit that left everyone amazed and optimistic about the abilities and simplicity of AI for everyone.
Watch Jared’s session for a walkthrough showing how Microsoft Copilot can be a powerful tool for productivity and creativity. From the fun and fantastic to the practical and powerful, Jared queries Copilot in a real-time demo using his own workstreams in Outlook, Teams, and more:
Can elephants tow a car?
What will the workplace of the future look like?
Can you write a Python script to extract insights from this data?
Can you summarize and prioritize the latest emails from my boss?
Jared shares important tips for prompt engineering, previews the new “Sounds like me” feature to co-create responses in your own voice, and talks about the value of AI being “usefully wrong.”
And he reminds us to say please and thank you.
What did you learn from Jared’s session? How are you using Copilot to enhance creativity and productivity?
Updates from 162.1 and 162.2 releases of SqlPackage and the DacFx ecosystem
Within the past four months, we’ve had two minor releases and a patch release for SqlPackage. In this article, we’ll recap the features and notable changes from SqlPackage 162.1 (October 2023) and 162.2 (February 2024). Several new features focus on giving you more control over the performance of deployments by preventing potentially costly operations and opting in to online operations. We’ve also introduced an alternative option for data portability that can provide significant speed improvements to databases in Azure. Read on for information about these improvements and more, all from the recent releases in the DacFx ecosystem. Information on features and fixes is available in the itemized release notes for SqlPackage.
.NET 8 support
The 162.2 release of DacFx and SqlPackage introduces support for .NET 8. SqlPackage installation as a dotnet tool is available with the .NET 6 and .NET 8 SDK. Install or update easily with a single command if the .NET SDK is installed:
# install
dotnet tool install -g microsoft.sqlpackage
# update
dotnet tool update -g microsoft.sqlpackage
Online index operations
Starting with SqlPackage 162.2, online index operations are supported during publish on applicable environments (including Azure SQL Database, Azure SQL Managed Instance, and SQL Server Enterprise edition). Online index operations can reduce the application performance impact of a deployment by supporting concurrent access to the underlying data. For more guidance on online index operations and to determine if your environment supports them, check out the SQL documentation on guidelines for online index operations.
Directing index operations to be performed online across a deployment can be achieved with a command line property new to SqlPackage 162.2, "PerformIndexOperationsOnline". The property defaults to false, where, just as in previous versions of SqlPackage, index operations are performed with the index temporarily offline. If set to true, the index operations in the deployment will be performed online. When the option is requested on a database where online index operations don’t apply, SqlPackage will emit a warning and continue the deployment.
An example of this property in use to deploy index changes online is:
sqlpackage /Action:Publish /SourceFile:yourdatabase.dacpac /TargetConnectionString:"yourconnectionstring" /p:PerformIndexOperationsOnline=True
More granular control over the index operations can be achieved by including the ONLINE=ON/OFF keyword in index definitions in your SQL project. The online property will be included in the database model (.dacpac file) from the SQL project build. Deployment of that object with SqlPackage 162.2 and above will follow the keyword used in the definition, superseding any options supplied to the publish command. This applies to both ONLINE=ON and ONLINE=OFF settings.
DacFx 162.2 is required for SQL project inclusion of ONLINE keywords with indexes and is included with the Microsoft.Build.Sql SQL projects SDK version 0.1.15-preview. For use with non-SDK SQL projects, DacFx 162.2 will be included in future releases of SQL projects in Azure Data Studio, VS Code, and Visual Studio. The updated SDK or SQL projects extension is required to incorporate the index property into the dacpac file. Only SqlPackage 162.2 is required to leverage the publish property "PerformIndexOperationsOnline".
Block table recreation
With SqlPackage publish operations, you can apply a new desired schema state to an existing database. You define what object definitions you want in the database and pass a dacpac file to SqlPackage, which in turn calculates the operations necessary to update the target database to match those objects. The set of operations are known as a “deployment plan”.
A deployment plan will not destroy user data in the database in the process of altering objects, but it can have computationally intensive steps or unintended consequences when features like change tracking are in use. In SqlPackage 162.1.167, we’ve introduced an optional property, /p:AllowTableRecreation, which lets you block any deployment whose plan includes a table recreation step.
/p:AllowTableRecreation=true (default) SqlPackage will recreate tables when necessary and use data migration steps to preserve your user data
/p:AllowTableRecreation=false SqlPackage will check the deployment plan for table recreation steps and stop before starting the plan if a table recreation step is included
SqlPackage + Parquet files (preview)
Database portability, the ability to take a SQL database from a server and move it to a different server even across SQL Server and Azure SQL hosting options, is most often achieved through import and export of bacpac files. Reading and writing the singular bacpac files can be difficult when databases are over 100 GB and network latency can be a significant concern. SqlPackage 162.1 introduced the option to move the data in your database with parquet files in Azure Blob Storage, reducing the operation overhead on the network and local storage components of your architecture.
Data movement in parquet files is available through the extract and publish actions in SqlPackage. With extract, the database schema (.dacpac file) is written to the local client running SqlPackage and the data is written to Azure Blob Storage in Parquet format. With publish, the database schema (.dacpac file) is read from the local client running SqlPackage and the data is read from or written to Azure Blob Storage in Parquet format.
The parquet data file feature benefits larger databases hosted in Azure with significantly faster data transfer speeds due to the architecture shift of the data export to cloud storage and better parallelization in the SQL engine. This functionality is in preview for SQL Server 2022 and Azure SQL Managed Instance and can be expected to enter preview for Azure SQL Database in the future. Dive into trying out data portability with dacpacs and parquet files from the SqlPackage documentation on parquet files.
Microsoft.Build.Sql
The Microsoft.Build.Sql library for SDK-style projects continues in the preview development phase and version 0.1.15-preview was just released. Code analysis rules have been enabled for execution during build time with .NET 6 and .NET 8, opening the door to performing quality and performance reviews of your database code on the SQL project. To enable code analysis rules on your project, add the item seen on line 7 of the following sample to your project definition (<RunSqlCodeAnalysis>True</RunSqlCodeAnalysis>).
<Project DefaultTargets="Build">
<Sdk Name="Microsoft.Build.Sql" Version="0.1.15-preview" />
<PropertyGroup>
<Name>synapseexport</Name>
<DSP>Microsoft.Data.Tools.Schema.Sql.Sql160DatabaseSchemaProvider</DSP>
<ModelCollation>1033, CI</ModelCollation>
<RunSqlCodeAnalysis>True</RunSqlCodeAnalysis>
</PropertyGroup>
</Project>
During build time, the objects in the project will be checked against a default set of code analysis rules. Code analysis rules can be customized through DacFx extensibility.
Ways to get involved
In early 2024, we added preview releases of SqlPackage to the dotnet tool feed, such that not only do you have early access to DacFx changes but you can directly test SqlPackage as well. Get the quick instructions on installing and updating the preview releases in the SqlPackage documentation.
Most of the issues fixed in this release were reported through our GitHub community, and in several cases the person reporting put together great bug reports with reproducing scripts. Feature requests are also discussed within the GitHub community in some cases, including the online index operations and blocking table recreation capabilities. All are welcome to stop by the GitHub repository to provide feedback, whether it is bug reports, questions, or enhancement suggestions.
Introducing the Microsoft 365 Lighthouse and Microsoft Power Platform Integration Guide
We’re excited to announce the launch of the Microsoft 365 Lighthouse and Microsoft Power Platform Integration Guide. This guide provides a clear, step-by-step approach to integrating Microsoft 365 Lighthouse and Microsoft Power Platform into your organization and addresses the unique aspects and specific nuances of Microsoft 365 Lighthouse.
As Lighthouse seasoned users, you’re already familiar with its capabilities to efficiently manage services for customers. With Lighthouse and Power Platform together, you can really push the envelope. This guide is designed to elevate your expertise and take service management to the next level.
Power Platform is a suite of business application tools, including Power BI, Power Apps, Power Automate, Power Pages, and Copilot Studio. It provides a low-code environment that empowers users to quickly build custom business solutions. Many of you are likely already familiar with Power Platform in one way or another. The entire platform is built on top of an underlying data model called Microsoft Dataverse.
Integrating Lighthouse and Power Platform can revolutionize how your organization handles Lighthouse data, automation, and customer management. The integration allows for customized solutions that address specific organizational needs, driving efficiency and innovation.
Additionally, the recent general availability of Microsoft Copilot marks a turning point in how we approach AI in business processes. We’ll not only show you how you can integrate Lighthouse with Power Platform, but we’ll take it a step further and demonstrate how to develop your own Lighthouse copilot using Copilot Studio—built on the foundations of Power Virtual Agents and other Power Platform technologies. You’ll learn to pull and analyze Lighthouse data in new ways, from crafting personalized reports to setting up smart automations, all enhanced with the savvy insights of AI.
Microsoft 365 Lighthouse: A quick recap
Microsoft 365 Lighthouse empowers Managed Service Providers (MSPs) to deliver value to their customers consistently and at scale. It offers AI-driven insights for customer acquisition and retention, streamlined management of security baselines for securing and driving productivity, and comprehensive risk-management tools.
Key features:
Proactive account management with Sales Advisor, an AI-powered tool that provides insights and recommendations to improve customer relationships.
Simplified onboarding across all customer tenants, with a guided onboarding process that follows best practices.
Tenant configuration that’s efficient, consistent, and simplified, providing insight into users across all customer tenants.
User, device, and data protection for both MSPs and their customers, with the ability to quickly identify and act on threats.
Proactive monitoring and alerts to easily identify gaps in end-customer configurations.
Check out Microsoft 365 Lighthouse and learn more at Microsoft 365 Lighthouse: 2023 Year in Review.
Microsoft 365 Lighthouse API
The Microsoft 365 Lighthouse API, currently in its beta phase and accessible through Microsoft Graph, is available via the OData subnamespace microsoft.graph.managedTenants. It provides MSPs with a comprehensive and programmable interface and offers extensive capabilities to streamline operations, bolster security, and ensure compliance across multiple customer tenants.
Key details:
Provides access to over 20 different endpoints
Accessible via the "beta" path through Microsoft Graph
Requires multifactor authentication (MFA)
To familiarize yourself with the API, we recommend that you try it out using Microsoft Graph Explorer. In Graph Explorer, select beta from the version drop-down menu, and then enter a URL such as: https://graph.microsoft.com/beta/tenantRelationships/tenants. Make sure you also sign in using MFA. To learn more, see Work with Graph Explorer.
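The Graph Explorer call above can also be scripted. The helper below only assembles the request; it does not handle authentication, so supplying a real MFA-backed bearer token is left to your auth flow, and the placeholder token shown here is purely illustrative:

```python
import urllib.request

GRAPH_BETA = "https://graph.microsoft.com/beta"

def lighthouse_request(endpoint: str, access_token: str) -> urllib.request.Request:
    """Build a GET request for a Lighthouse endpoint under the Graph beta path."""
    url = f"{GRAPH_BETA}/{endpoint}"
    return urllib.request.Request(
        url, headers={"Authorization": f"Bearer {access_token}"}
    )

# Same endpoint as the Graph Explorer example above.
req = lighthouse_request("tenantRelationships/tenants", "<token-from-mfa-sign-in>")
print(req.full_url)  # https://graph.microsoft.com/beta/tenantRelationships/tenants
```

Passing the built request to urllib.request.urlopen (with a valid token) would return the JSON tenant list.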
APIs under the beta version of Microsoft Graph are subject to change. Use of these APIs in production applications is not supported. We encourage feedback on which endpoints you’d like to see in the beta version or in v1.0. To provide feedback, use the Microsoft 365 Lighthouse feedback portal.
Why integrate Power Platform with Lighthouse?
Integrating with other services can significantly enhance the business value of Microsoft 365 Lighthouse. You can access data quicker, connect it with your own business apps, and develop custom workflows. We’ve already seen that Power Platform can provide some huge benefits to an organization:
Improve performance: Leverage the low-code/no-code capabilities of Power Platform to improve operational efficiency and outcomes.
Integrate with other services: Leverage thousands of pre-existing connectors to connect your Lighthouse data with your organization’s business apps to uncover meaningful trends.
Save on costs: Automate manual processes and reduce operational costs, while indirectly saving by improving product quality and resource utilization.
Mitigate risks: Use applications to improve data security and compliance, reducing the risks of sensitive data leakage.
Topics covered in the guide
The first version of the guide walks you through various aspects of the Power Platform and Lighthouse integration, including these high-level steps:
Register your app in Entra: We’ll start by registering a new app in Entra that gives Power Platform permission to access Lighthouse on your behalf.
Create your custom connector: We’ll show you how to create custom connectors to connect to your Lighthouse account.
Custom connectors in Power Platform serve as bridges between your data sources and applications. In the context of Lighthouse, they can be used to fetch real-time data directly into your custom applications or dashboards.
The connector we’ll build will be extremely simple. However, its importance cannot be overstated. With this connector, you’ll have a multifactor-authenticated way to pull data from Lighthouse to use in Microsoft Power Automate, store in Dataverse, create Power BI reports, or (in the case of this guide) create a Lighthouse copilot.
Once your connector is set up, we’ll test it by simulating various scenarios to ensure that the connector accurately fetches data as intended.
Authorize your connector: Using your registered app from step 1, authorize your new connector for use in Power Platform.
Use Lighthouse with Power Automate: Power Automate allows you to create automated workflows that integrate with your Lighthouse data via your custom connector. These workflows can streamline tasks like incident reporting, compliance checks, and user management.
Build your Lighthouse copilot: We’ll delve into how you can build a comprehensive management and monitoring system by fully leveraging the capabilities of Lighthouse and Power Platform.
Next steps
We’ll dive deeper into each aspect of integrating Lighthouse with Power Platform. Our goal is to provide you with a step-by-step guide to truly transform the way you manage and deliver services to your clients.
Join us in exploring the full potential of this integration and how it can elevate your MSP operations to new heights of efficiency, security, and business value.
Start using Microsoft 365 Lighthouse today!
To learn more about Microsoft 365 Lighthouse, see Sign up for Microsoft 365 Lighthouse or review the Microsoft 365 Lighthouse documentation.
To learn more about Microsoft Power Platform, review the Microsoft Power Platform documentation.
Unleashing the Power of SAM in Azure Machine Learning and Azure AI Studio: Generating Segmentation Masks by Leveraging Bounding Box Data from Object Detection Models
Introduction
In the ever-evolving world of artificial intelligence and machine learning, the ability to process images swiftly and accurately stands at the forefront of technological advancements. Azure Machine Learning (AzureML) and Azure AI Studio, Microsoft’s cutting-edge machine learning platforms, have consistently been at the helm of such innovations. In our latest update, we’ve introduced an exciting advancement: the availability of SAM (Segment Anything Model) models in the Azure AI model catalog. Users now have the flexibility to create their own SAM endpoints, tailored to their specific project needs. This model is a game-changer in the realm of image processing, particularly for generating segmentation masks in scenarios where extensive datasets are unavailable.
Traditionally, generating accurate segmentation masks has required large and comprehensive datasets. However, many real-world applications lack this luxury, especially when dealing with niche or rapidly changing subjects. This is where SAM steps in, offering a novel solution by leveraging bounding box data from object detection (OD) models.
In this blog, we delve into the applications of SAM in scenarios where sufficient data for training a traditional segmentation model is not available.
Throughout this article, we will provide a comprehensive step-by-step guide, from training OD models with constrained datasets to deploying these models effectively. Our aim is to illustrate how SAM can be harnessed to achieve precise image segmentation, especially in contexts where data limitations have traditionally hindered the development of effective segmentation models. Whether you’re well-versed in AI or just beginning your journey, this guide will offer a clear understanding of SAM’s capabilities on Azure Machine Learning Platform and its transformative potential in your next image segmentation project.
What is SAM?
SAM, or the Segment Anything Model, introduced by Meta, stands at the forefront of a new era in computer vision. The Segment Anything task involves identifying and isolating specific objects within an image, regardless of the object type or the image domain.
What sets SAM apart is its ability to understand and respond to a variety of input prompts. These prompts can be as simple as foreground/background points, rough boxes, or masks around the object of interest. Additionally, SAM can interpret freeform text and interactive clicks, allowing users to specify or refine the object to be segmented with ease. This adaptability makes SAM a highly versatile tool, capable of catering to a wide array of segmentation needs.
For more detailed information about this model, visit Segment-Anything.com.
Reference image showing bounding box as prompt and corresponding generated mask. [source]
Steps for Generating Segmentation Masks on OD Dataset Using SAM
Fine-Tune and deploy an OD (Object Detection) Model on Azure Machine Learning/Azure AI Studio.
Infer the deployed OD model endpoint for getting the Bounding Box Prompts.
Deploying the SAM Model.
Infer the SAM Model with the Bounding Box prompts generated by OD model.
For a practical walkthrough, refer to our detailed Jupyter notebook, which complements this blog. It’s a great resource for those who want to dive into the code and see these steps in action. Access it here: Jupyter Notebook for SAM Segmentation Masks.
1. Fine-Tune and deploy an Object Detection Model
We will start by fine-tuning an Object Detection (OD) model using the odFridgeObjects dataset, a collection of 128 images featuring four types of beverage containers (can, carton, milk bottle, water bottle) against different backgrounds.
For our current task, we have selected 'YoloF' (identified as 'mmd-3x-yolof_r50_c5_8x8_1x_coco') from the model catalog. However, users have the freedom to choose any Object Detection model from the model catalog. In the Azure AI model catalog, we have curated a selection of OD models that are optimized with a rich set of defaults, offering good out-of-the-box performance for a diverse range of datasets. In addition to the curated models, you can use any model from OpenMMLab’s MMDetection Model Zoo. This flexibility and ease of use open a plethora of possibilities for users to tailor their projects according to their specific requirements.
Please check out this blog for a comprehensive overview of the available vision models in the Azure AI model catalog.
Video gif showing the different OD models on AzureML.
To fine-tune our model, we’ll be utilizing a comprehensive guide outlined in Jupyter notebook. This Jupyter notebook is dedicated to detailing the process to fine-tune, deploy and infer on the models from the OpenMMLab’s MMDetection zoo within Azure Machine Learning.
Video gif for showcasing the finetuning notebook and result pipeline in Designer.
2. Infer the Deployed OD model endpoint for Bounding Box Prompts
Next, we’ll run inference on the deployed Object Detection (OD) model to generate bounding box prompts. These prompts effectively highlight the areas of interest within the images, marking out the specific regions for subsequent segmentation.
Here’s how the fine-tuned YoloF model’s online endpoint looks after successful fine-tuning and deployment:
Video GIF Showing the OD Model endpoint.
To ensure successful inference with the YoloF model, it’s crucial that both the input and output formats align with what is expected by the deployed YoloF endpoint. For guidance on the correct formats, you can refer to the sample inputs and outputs provided in the YoloF model card. Additionally, the notebook mentioned earlier offers detailed instructions and examples to help you achieve successful inference with YoloF.
Video GIF showing the yolof model card and expected input and output format for the same.
3. Deploying the SAM Model
Now, let’s deploy SAM to an online endpoint in our AzureML/Azure AI Studio workspace to process the input for creating segmentation masks.
You can deploy SAM in your project effortlessly either by coding with the SDK, guided by this reference notebook, or using CLI, guided by this example, or using Azure Machine Learning/Azure AI Studio UI for a seamless no-code experience. We recommend checking the SAM model card for required input and output formats before starting.
The Azure AI model catalog currently offers three versions of the SAM model: 'facebook-sam-vit-base', 'facebook-sam-vit-large', and 'facebook-sam-vit-huge'. For our experiment, we have chosen 'facebook-sam-vit-huge' to balance accuracy, compute requirements, and inference latency. However, you have the flexibility to select the model version that best aligns with your project’s needs, whether that’s higher accuracy, available computational resources, or faster inference times.
Video GIF Showing the SAM deployment via UI and the model card.
4. Infer the SAM Model with the Bounding Box Prompt
The final step in our image segmentation process is to use SAM for inference. We start by converting the bounding box prompts from our Object Detection model into a format compatible with SAM, as detailed in the SAM model card. Typically, OD model bounding boxes are normalized ('topX', 'topY', 'bottomX', 'bottomY'), but SAM requires absolute coordinate values. This conversion is key for SAM to generate precise segmentation masks.
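As an illustration, here is a small helper that scales a normalized box to absolute pixel coordinates. The key names follow the normalized OD output format mentioned above; verify them against your model card before relying on them:

```python
def to_absolute_box(box, image_width, image_height):
    """Convert normalized OD box coordinates to absolute
    [x_min, y_min, x_max, y_max] pixel values for the SAM prompt."""
    return [
        box["topX"] * image_width,
        box["topY"] * image_height,
        box["bottomX"] * image_width,
        box["bottomY"] * image_height,
    ]

# A detection covering the center of a 640x480 image.
print(to_absolute_box(
    {"topX": 0.25, "topY": 0.10, "bottomX": 0.75, "bottomY": 0.90}, 640, 480
))  # [160.0, 48.0, 480.0, 432.0]
```

Apply this to every box returned by the OD endpoint before building the SAM request.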
When inputting data into SAM, ensure it matches the format specified in the SAM model card. Depending on your needs, set the 'multimask_output' variable accordingly. Setting it to True provides multiple masks for each bounding box, allowing for varied segmentation options. If set to False, SAM generates a single mask per prompt.
Image Showing input format needed by SAM endpoint.
The output is structured as a JSON response, which includes the encoded binary mask and the Intersection over Union (IoU) score for each mask. This format provides a clear and comprehensive view of the segmentation results, allowing for easy interpretation and application in subsequent processes.
Image Showing the json response and its fields.
After obtaining the output from the SAM model, the subsequent action involves decoding the encoded binary mask. Utilize the following code snippet to efficiently convert and store the generated binary mask:
import base64
import io
from PIL import Image

def save_image_from_base64(base64_string, save_path):
    # Decode the base64 string
    image_data = base64.b64decode(base64_string)
    # Convert the binary data to a file-like object
    image_file = io.BytesIO(image_data)
    # Open the image file using PIL
    image = Image.open(image_file)
    # Save the image
    image.save(save_path, format='PNG')  # You can change the format if necessary

# Usage example
base64_string = 'your_base64_string_here'  # Replace with your base64 string
save_path = 'path_to_save_image/image_name.png'  # Replace with your desired save path and file name
save_image_from_base64(base64_string, save_path)
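You can sanity-check this decoding logic end to end with a synthetic image standing in for a real SAM mask string:

```python
import base64
import io
from PIL import Image

# Build a tiny 4x4 1-bit "mask" PNG in memory and base64-encode it,
# standing in for the encoded binary mask returned by the SAM endpoint.
buffer = io.BytesIO()
Image.new("1", (4, 4), 1).save(buffer, format="PNG")
base64_string = base64.b64encode(buffer.getvalue()).decode()

# Decode it the same way as above and confirm the round trip.
mask = Image.open(io.BytesIO(base64.b64decode(base64_string)))
print(mask.size, mask.mode)
```

If the round trip preserves the image size and mode, the same decoding path will work on the endpoint’s real mask strings.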
Finally, we can store the segmentation masks corresponding to each bounding box prediction made by the OD model for various objects in our images.
Visualization of Bounding box and corresponding Mask generated by SAM.
Evaluating the generated Segmentation Masks
In our analysis, we utilized an 80:20 split for training and testing data with the odFridgeObjects dataset. After training, the bounding boxes generated by the ‘YoloF’ Object Detection model were fed into the SAM model to create segmentation masks. We then conducted a comprehensive evaluation of these SAM-generated masks against the ground truth using the test split of the dataset. To assess the quality of the masks, we employed metrics such as Intersection over Union (IoU) and Accuracy. Below are the results and insights derived from this evaluation process.
The table shows the different evaluation metrics and their values:
Metric              Value
Average IoU         0.951422254326566
Average Accuracy    0.9280402048325956
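As a concrete illustration of how these two metrics are computed per mask, here is a minimal pure-Python sketch. The toy 3x3 masks are invented for the example; a real evaluation would run over the decoded SAM masks and the ground-truth arrays.

```python
def iou_and_accuracy(pred, truth):
    """Compute Intersection over Union and pixel accuracy for two
    flat binary masks of equal length (values 0 or 1)."""
    intersection = sum(p & t for p, t in zip(pred, truth))
    union = sum(p | t for p, t in zip(pred, truth))
    correct = sum(p == t for p, t in zip(pred, truth))
    iou = intersection / union if union else 1.0
    accuracy = correct / len(pred)
    return iou, accuracy

# Toy 3x3 masks, flattened row by row.
pred = [1, 1, 0, 1, 1, 0, 0, 0, 0]
truth = [1, 1, 0, 1, 0, 0, 0, 0, 0]
iou, acc = iou_and_accuracy(pred, truth)
print(iou, acc)  # 0.75 and 0.888...
```

Averaging these per-mask scores over the test split yields the table values above.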
As we wrap up our exploration of using the SAM model for advanced image segmentation, it’s exciting to note the high evaluation metrics we achieved without training a traditional segmentation model. By harnessing the combination of an OD model and SAM, we’ve navigated scenarios with limited data and still attained impressive results. This method not only showcases significant progress in computer vision but also demonstrates the practicality and effectiveness of the OD+SAM approach. We hope this technique proves to be an asset to your next project.
Remember, the journey does not end here. Azure Machine Learning and Azure AI Studio offer a rich catalog of vision models, each with unique capabilities and applications. We encourage you to explore these models and discover how they can further enhance your machine learning endeavors. Happy experimenting, and we look forward to seeing the innovative ways you apply these technologies in your work!
Learn more:
Explore our model catalog in Azure Machine Learning and Azure AI Studio.
Read the announcement blog introducing vision models in the Azure AI model catalog.
Learn more about new models added into our model catalog at Ignite.
Microsoft Tech Community – Latest Blogs –Read More
AI Chat App Hack: Watch all the streams!
We recently concluded our first Microsoft AI Chat App Hack, a hackathon which challenged developers to build applications using RAG (Retrieval Augmented Generation) in order to answer questions in custom domains. You can browse through the winners and submissions to see what developers built.
To help developers understand the new world of generative AI, we held 17 live streams across four languages. If you missed the hackathon this time around, you can still watch the stream recordings from the links below and start building an AI chat app today.
English streams
Spanish streams
Portuguese streams
Chinese streams
If you’re an educator or student organizer, you could even run your own hackathon! We’ve provided links to the slide decks and demos. Post in the discussion forum if you’re looking for any additional resources to help your event.
English streams
Building a RAG Chat App in Python
Want to learn how to create a chat app on your own data using Python? This session shows you how to use RAG (Retrieval Augmented Generation), a powerful technique for combining knowledge retrieval with LLMs. You'll learn how to combine OpenAI with Azure AI Search and Azure Document Intelligence to ingest and vectorize data, and then deploy a frontend to chat with that data on Azure App Service.
Slides (SpeakerDeck) | Slides (PPTx) | Repo
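To make the retrieve-then-generate pattern concrete, here is a minimal sketch of a RAG flow. The keyword-overlap retriever and the prompt assembly below are illustrative stand-ins for Azure AI Search and the OpenAI chat call, and the document list is invented for the example.

```python
DOCUMENTS = [
    "Azure App Service supports Python, Node, and .NET apps.",
    "Azure AI Search offers vector and hybrid retrieval.",
]

def retrieve(query, docs, k=1):
    # Toy retriever: rank documents by keyword overlap with the query.
    q_words = set(query.lower().split())
    scored = sorted(docs, key=lambda d: len(q_words & set(d.lower().split())), reverse=True)
    return scored[:k]

def answer(query):
    # Ground the (stubbed) LLM call in the retrieved context.
    context = "\n".join(retrieve(query, DOCUMENTS))
    prompt = f"Answer using only this context:\n{context}\n\nQuestion: {query}"
    return prompt  # a real app would send this prompt to the chat model

print(answer("What retrieval modes does Azure AI Search offer?"))
```

A production retriever would use vector or hybrid search, but the shape of the flow — retrieve, build a grounded prompt, generate — is the same.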
Ready to customize your RAG app for your own data and organization? This session shows you how to bring in your own documents and websites. It also includes tips on customizing the frontend to add your own branding, and ideas for additional features you might want to bring into your chat app.
Slides (SpeakerDeck) | Slides (PPTx) | Repo
Azure AI Search Best Practices
When you're building an AI chat app using RAG, the best way to get great answers is to get great retrieval (the "R" in RAG!). In this session, learn best practices for indexing and querying documents from an Azure AI Search engineer. You'll discover the differences between text search, vector search, and hybrid search, and see the benefits of the semantic reranker. You'll also learn about custom analyzers, integrated vectorization, and more.
Slides (SpeakerDeck) | Slides (PPTx) | Demo Repo
Connecting a RAG chat app to Azure Cosmos DB
Learn how to use RAG with Azure Cosmos DB for MongoDB vCore. This session demonstrates how to efficiently store and retrieve transactional and vector data together. It also walks through creating a low-code RAG application in Azure OpenAI Studio with just a database, coding the retrieval component in Python, and explains when to choose Azure Cosmos DB for MongoDB vCore for your RAG implementations.
Ready to see the future of AI? You can now build AI chat apps that can answer questions based on images, like photos, graphs, diagrams, and illustrations. Learn how to use the Azure Computer Vision multi-modal embedding API and Azure AI Search to index and query images, and then use GPT-4 Turbo with Vision to answer questions about them. This cutting-edge technique can be an amazing approach for document types that are image-heavy, like financial charts.
Web Components are the new standard way to build reusable custom elements for web sites, and our team has created a set of web components for building AI chat apps. This session shows you how to use a frontend built entirely of web components, and how to easily swap your current frontend for a modern web component-based one.
Access Control in RAG Chat Apps
When you're building an AI chat app for internal documents, you need to think about access control: which of your users can access which documents? This session shows a sophisticated approach to access control that stores documents in a secure ACL'd datastore, records ACLs in Azure AI Search, and only sends user-visible documents to OpenAI. The session also shows you how to set up authentication for your app and how to block unwanted access.
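The core of that access-control idea can be sketched in a few lines: only documents whose ACL intersects the user's groups ever make it into the context sent to the model. The document store and group names below are hypothetical, purely for illustration.

```python
# Hypothetical document store with per-document ACLs (group names).
DOCS = [
    {"id": "hr-handbook", "acl": {"hr", "all-employees"}},
    {"id": "exec-comp",   "acl": {"executives"}},
]

def visible_docs(user_groups, docs):
    """Return only the documents whose ACL intersects the user's groups,
    mirroring the filter applied before results reach the LLM."""
    return [d for d in docs if d["acl"] & set(user_groups)]

print([d["id"] for d in visible_docs(["all-employees"], DOCS)])
```

In the real app this filter is expressed as a security filter on the Azure AI Search query rather than applied in application code, but the effect is the same.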
Chat Completion API Tools & Functions in RAG Chat Apps
This session shows off a relatively new way to retrieve knowledge and direct a conversational flow, using the Azure OpenAI function calling feature. This session will give you the skills to extend your AI chat to cover more use cases, like structured queries and user question pre-processing.
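As a rough sketch of the dispatch side of function calling: the model returns the name and arguments of a function to invoke, and the app routes that payload to a local handler. The `search_products` tool below is a made-up example, not part of the Azure OpenAI API.

```python
import json

def search_products(category):
    # Hypothetical tool the model can ask the app to run.
    return {"category": category, "hits": 3}

TOOLS = {"search_products": search_products}

def dispatch(tool_call_json):
    """Route a function-call payload (as returned in a chat completion
    response) to the matching local handler."""
    call = json.loads(tool_call_json)
    func = TOOLS[call["name"]]
    return func(**call["arguments"])

result = dispatch('{"name": "search_products", "arguments": {"category": "fridges"}}')
print(result)
```

The handler's return value is then sent back to the model as a tool message so it can compose the final answer.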
This session shows you how to use various GPT-based metrics and tools to evaluate a RAG chat app. We try out changes like different prompts, various search parameters, and alternate LLM options, to see how that affects the overall metrics. When building a RAG chat app, you absolutely need an evaluation step in your pipeline to build confidence that your chat app will provide helpful, grounded, and relevant answers for your users.
Slides (SpeakerDeck) | Slides (PDF) | Repo
Continuous Deployment of your Chat App
Do you want to learn how to deploy your chat app to Azure without any hassle? Join this live session with an Azure Developer CLI software engineer and discover best practices for continuous deployment of your chat app. We'll focus on GitHub Actions, but you can use the same approach for Azure DevOps or other CI systems.
Slides (SpeakerDeck) | Slides (PPTx)
Content Safety for Azure OpenAI
A special session from a Microsoft MVP on Responsible AI and the Azure OpenAI content safety filters. Learn how to change the filter levels and use them in your OpenAI chat apps.
Building a Chat on Your Business Data Without Writing a Line of Code
A special session from a Microsoft MVP on the Azure OpenAI Studio “On Your Data” feature, a low-code approach to developing a RAG chat app.
The final session, showcasing many submissions and demos from Microsoft’s AI-In-a-Box team.
Spanish streams
The first two sessions, presented by Bruno Capuano.
Building a RAG Chat App in Python
Want to learn how to create a chat app on your own data using Python? This session shows you how to use RAG (Retrieval Augmented Generation), a powerful technique for combining knowledge retrieval with LLMs. You'll learn how to combine OpenAI with Azure AI Search and Azure Document Intelligence to ingest and vectorize data, and then deploy a frontend to chat with that data on Azure App Service.
Customizing the RAG Chat App
Ready to customize your RAG app for your own data and organization? This session shows you how to bring in your own documents and websites. It also includes tips on customizing the frontend to add your own branding, and ideas for additional features you might want to bring into your chat app.
Portuguese streams
The first two sessions, presented by Pablo Lopes.
Building a RAG Chat App in Python
Want to learn how to create a chat app on your own data using Python? This session shows you how to use RAG (Retrieval Augmented Generation), a powerful technique for combining knowledge retrieval with LLMs. You'll learn how to combine OpenAI with Azure AI Search and Azure Document Intelligence to ingest and vectorize data, and then deploy a frontend to chat with that data on Azure App Service.
Customizing the RAG Chat App
Ready to customize your RAG app for your own data and organization? This session shows you how to bring in your own documents and websites. It also includes tips on customizing the frontend to add your own branding, and ideas for additional features you might want to bring into your chat app.
Chinese streams
We start by learning how to build our first RAG application with Python and Azure OpenAI Service, as an introduction to generative AI.
In this session, we learn how to customize our RAG application using embeddings.
Azure SQL DB – Failed to add database to failover group – find out why
Symptoms:
When attempting to add a database to a failover group, saving the modification on the failover group resource succeeds, but the database is never added to the failover group.
Troubleshooting:
When you submit the request to add a database to a failover group, the success message only indicates that the request was submitted successfully; the actual process of adding the database happens in the background.
Because there is no notification for the add operation, you might assume the operation completed successfully, which is not always the case.
To get more information about the actual process, navigate to the Activity Log of your SQL server's resource group to monitor the outcome of the request.
In the activity log of your resource group, you can find the actual failure:
Opening the failed operation gives you a summary message, but in this case it does not contain enough information to solve the problem.
To get the underlying error that will guide you to a fix, open the JSON document and search for the "statusMessage" field:
In my case, I simulated the error by creating the database on the destination server first; the attempt to add the database to the failover group (and to the geo-replication relationship) failed because the database already existed.
Author: Yochanan Rachamim.
Introduction and Deployment of Backstage on Azure Kubernetes Service
Introduction to Backstage
Backstage, originally created by a small team during a hack week, is now a CNCF Incubation project. Incubation status means it is considered stable and is successfully used in production, at scale, by enterprises. Backstage is a platform for building developer portals, which act as an operational layer between developers and the services they need to deploy. Developer portals can accelerate developer velocity by providing push-button deployment of production-ready services and infrastructure.
In this walkthrough, we will learn what Backstage is and how to build and deploy it on Azure using multiple services, including:
Azure Kubernetes Service
Azure Container Registry
Azure Database for PostgreSQL
Microsoft Entra ID
Backstage Features:
- Software Catalogue: Backstage's software catalogue shows developers the services available to them, including key information such as the connected repository, the service owners, the latest build, and the APIs exposed.
- Software Templates: Software templates allow users to create components within Backstage, handling the loading of a template, the setting of variables, and publishing to GitHub.
- TechDocs: Backstage centralises documentation under "TechDocs", which lets engineers write Markdown files alongside development so documentation lives with the service code.
- Integrations: Backstage allows integrations to be created to pull data from, or publish data to, external sources, with GitHub and Azure among the supported identity providers.
- Plugins: One of the best parts of Backstage is the open-source plugin library designed to let you extend Backstage's core feature set. Notable plugins include the Azure Resources plugin, to view the status of services deployed in relation to a component, and an ArgoCD plugin, to view the status of the GitOps deployments for your services.
- Search: Finally, tying all these features together is Backstage's search feature, allowing document and service discovery across the platform.
This means that deployment, documentation and search are all integrated into a single portal. Backstage also (using the Kubernetes plugin) centralises the services deployed across your clusters into one dashboard, abstracting the underlying cloud from software developers and surfacing all the important information about each deployment's status.
Backstage Architecture
Backstage is split into three parts. These components are split this way based on the three different groups of contributors for each part.
Core – Base functionality built by core developers in the open source project.
App – The app is an instance of a Backstage app that is deployed and tweaked. The app ties together core functionality with additional plugins. The app is built and maintained by app developers, usually a productivity team within a company.
Plugins – Additional functionality to make your Backstage app useful for your company. Plugins can be specific to a company or open sourced and reusable.
The Backstage "App" layer may already suggest that Backstage itself does not provide a Docker image for deployment; it is assumed that customisation at the "App" layer is required for each enterprise. This means we have to build our own image, which we will cover later in this article.
The system architecture for Backstage has some flexibility and will differ depending on your requirements. I would advise reviewing the material here: Architecture overview | Backstage Software Catalog and Developer Platform to make an informed decision. The three components are the frontend (Backstage UI), the backend (core/plugin app logic and proxy), and a database, which for production is recommended to be PostgreSQL. Backstage also now supports caches. You may want to decouple your frontend and backend to enable static hosting of the frontend; Backstage supports this, although it limits the features of the backend plugin API. In this example we will deploy both in a single container. It is certainly advisable to deploy your database outside of your cluster.
Getting Started with Backstage on AKS
Now that we know a bit more about Backstage and how it works, let's look at how to build it, set it up, and integrate it with a variety of Azure services.
Creating your Database
To start, we will need to create our Postgres database. As we are already building in Azure, we will use the fully managed Azure Database for PostgreSQL flexible server as our Backstage database. This service is simple to set up and manage; enterprises deploying into production should refer to their own internal data standards.
First, let's use our terminal of choice and ensure we are logged in with the correct Azure subscription set:
az login
az account set --subscription <subscription id>
Now let's create our resource group for this project:
az group create --name backstage --location eastus
We can now create our server. We will append our own initials to our server name to avoid conflicts. One server can contain multiple databases:
az postgres flexible-server create --name backstagedb-{your initials} --resource-group backstage
Since the default connectivity method is Public access (allowed IP addresses), the command will prompt you to confirm if you want to add your IP address, and/or all IPs (range covering 0.0.0.0 through 255.255.255.255) to the list of allowed addresses. For the objective of this blog I will add my own IP address when prompted. This will be the same authorised IP that our AKS cluster will use. In production it is advisable to adjust these deployments to leverage full private networking with VNET integration and private endpoints.
The server created has the following attributes:
The same location as your resource group
Auto-generated admin username and admin password (which you should save in a secure place)
A default database named “flexibleserverdb”
Service defaults for remaining server configurations: compute tier (General Purpose), compute size/SKU (Standard_D2s_v3 – 2 vCore, 8 GB RAM), backup retention period (7 days), and PostgreSQL version (13)
Once deployed we can now check our connection details for our server:
az postgres flexible-server show --name backstagedb-{your initials} --resource-group backstage
Make a note of the admin username output at the top of the above command.
We can then change the admin password of the server.
az postgres flexible-server update --resource-group backstage --name backstagedb-{your initials} --admin-password <new-password>
We can now test connecting from our command line by using psql. If you are using Cloud Shell psql is already installed. If not you can install it here. You can now use your admin user and the admin password you have set to connect to the database.
psql -h backstagedb-{your initials}.postgres.database.azure.com -U {your generated admin user} flexibleserverdb
Providing everything has been configured correctly you should now be logged in and able to verify your database is running as seen below:
You can type \q to exit psql.
Let’s now add a database that will be referenced by Backstage. We can do this with the following command:
az postgres flexible-server db create --resource-group backstage --server-name backstagedb-{your initials} --database-name backstage_plugin_catalog
For this article we will not require secure connections to our database, which makes connecting from our application easier. This means we need to update a server parameter as follows:
az postgres flexible-server parameter set --resource-group backstage --server-name backstagedb-{your initials} --name require_secure_transport --value off
Building Backstage
Things now get slightly more complicated. As mentioned earlier, Backstage does not come with a ready-made image that can be deployed into your cluster, due to the degree of customisation required, such as the database connection and plugins. This means we will have to build Backstage ourselves.
Backstage is built with Node. For the next steps we can work through Cloud Shell or an IDE of your choice. For the sake of readability I will use Visual Studio Code, as we need to make changes to several files and this will be easier for those unfamiliar with Vi/Vim.
To start, I created a new folder and opened the empty directory in Visual Studio Code. I also opened a WSL terminal.
Within the WSL terminal, we then run the following command, which installs Backstage and creates a Backstage app subdirectory within the directory we execute the command in:
npx @backstage/create-app@latest
Once this is complete we can test the demo version of the application locally. We can do this with the following commands:
cd backstage
yarn dev
Once we verify that Backstage is running locally we can add the Postgres client to our application:
# From your Backstage root directory
yarn --cwd packages/backend add pg
Our next step is to add our database config to the application. To do this, open app-config.yaml in the root directory of our Backstage app and add the PostgreSQL configuration below, using the credentials from the previous steps.
Host: backstagedb-{your initials}.postgres.database.azure.com
Port: 5432
backend:
  database:
    client: better-sqlite3        # <-- Delete this existing line
    connection: ':memory:'        # <-- Delete this existing line
    # config options: https://node-postgres.com/apis/client  <-- Add all lines below here
    client: pg
    connection:
      host: ${POSTGRES_HOST}
      port: ${POSTGRES_PORT}
      user: ${POSTGRES_USER}
      password: ${POSTGRES_PASSWORD}
      # https://node-postgres.com/features/ssl
      # ssl:
      #   host is only needed if the connection name differs from the certificate name.
      #   This is for example the case with CloudSQL.
      #   host: servername in the certificate
      #   ca:
      #     $file: <file-path>/server.pem
      #   key:
      #     $file: <file-path>/client.key
      #   cert:
      #     $file: <file-path>/client-cert.pem
For the sake of this demo we will pass our user and password as hard-coded values in the YAML file. This is not advisable in production; use your accepted application config method if deploying into production.
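For reference, the `${POSTGRES_USER}`-style placeholders are substituted from environment variables when Backstage loads its config. The snippet below mimics that substitution in a few lines of Python to show the pattern; it is an illustration of the mechanism, not Backstage's actual implementation.

```python
import os
import re

def resolve_placeholders(text, env=os.environ):
    """Replace ${VAR} placeholders with environment values, the way
    Backstage substitutes them when loading app-config.yaml."""
    return re.sub(r"\$\{(\w+)\}", lambda m: env.get(m.group(1), ""), text)

config = "host: ${POSTGRES_HOST}\nport: ${POSTGRES_PORT}"
env = {"POSTGRES_HOST": "backstagedb.postgres.database.azure.com", "POSTGRES_PORT": "5432"}
print(resolve_placeholders(config, env))
```

This is why exporting the variables before starting the app (or injecting them as Kubernetes secrets later) is the safer alternative to hard-coding values.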
If you deployed your SQL database from Cloud Shell and have now moved to your local machine, you will need to add your local machine's IP address to the authorised IP range in your server's firewall, under the Networking tab in the portal, or update it through the CLI.
Authentication
Now that we have set up and connected our database, we can move on to authentication. For this we will use our Microsoft Entra ID tenant. We could do this through the CLI; however, for readability I will create the app registration through the portal. To start, we need to access our Microsoft Entra ID tenant and select "App registrations".
We then select "New registration" and provide a name such as "Backstage". We can keep the scope to this single tenant and, for now, use our local development URL as the redirect URI:
http://localhost:7007/api/auth/microsoft/handler/frame
We now can go to the API Permissions of the app registration we have created and add the following permissions:
email
offline_access
openid
profile
User.Read
We also need to add some application permissions to allow us to onboard our users to Backstage later. To do this we follow the same method as in the last step, but select Application permissions instead of Delegated. Here we need to add two permissions:
User.Read.All
GroupMember.Read.All
You may need to grant admin consent to the application if your organisation requires it. This can be done through the portal once the permissions are added by clicking the “Grant admin consent to ___” button. You may want to do this anyway so users do not need to provide consent individually on first sign in.
We next need to add a client secret for this app registration, which will be used in our application config. To do this, go to "Certificates & secrets" in your app registration and create a new client secret. Once created, make a note of the secret value, as it is only shown once.
We now need to add this to our application config. This can be done in the same app-config.yaml file as before under the “Auth” header. We need to add the following block:
auth:
  environment: development
  providers:
    microsoft:
      development:
        clientId: ${AZURE_CLIENT_ID}
        clientSecret: ${AZURE_CLIENT_SECRET}
        tenantId: ${AZURE_TENANT_ID}
        domainHint: ${AZURE_TENANT_ID}
        additionalScopes:
          - Mail.Send
Like the database connection, I will pass hard-coded values into the file. For production it is again recommended to pass these through as secrets.
We also need to make some changes to the application source code to add the front end sign in page. To do this we need to add the following block to packages/app/src/App.tsx.
import { microsoftAuthApiRef } from '@backstage/core-plugin-api';
import { SignInPage } from '@backstage/core-components';

components: {
  SignInPage: props => (
    <SignInPage
      {...props}
      auto
      provider={{
        id: 'microsoft-auth-provider',
        title: 'Microsoft',
        message: 'Sign in using Microsoft',
        apiRef: microsoftAuthApiRef,
      }}
    />
  ),
},
The createApp call in App.tsx should then look like this:
const app = createApp({
  apis,
  components: {
    SignInPage: props => (
      <SignInPage
        {...props}
        auto
        provider={{
          id: 'microsoft-auth-provider',
          title: 'Microsoft',
          message: 'Sign in using Microsoft',
          apiRef: microsoftAuthApiRef,
        }}
      />
    ),
  },
  bindRoutes({ bind }) {
    bind(catalogPlugin.externalRoutes, {
      createComponent: scaffolderPlugin.routes.root,
      viewTechDoc: techdocsPlugin.routes.docRoot,
      createFromTemplate: scaffolderPlugin.routes.selectedTemplate,
    });
    bind(apiDocsPlugin.externalRoutes, {
      registerApi: catalogImportPlugin.routes.importPage,
    });
    bind(scaffolderPlugin.externalRoutes, {
      registerComponent: catalogImportPlugin.routes.importPage,
      viewTechDoc: techdocsPlugin.routes.docRoot,
    });
    bind(orgPlugin.externalRoutes, {
      catalogIndex: catalogPlugin.routes.catalogIndex,
    });
  },
});
The other addition we must make to our application code is the Microsoft sign-in resolver. This is not mentioned in the Backstage authentication documentation, so this will save you the time I spent figuring out why my sign-in was failing. We need to remove the GitHub example block from the packages/backend/src/plugins/auth.ts file and replace it. The full file is below:
import {
  createRouter,
  providers,
  defaultAuthProviderFactories,
} from '@backstage/plugin-auth-backend';
import { Router } from 'express';
import { PluginEnvironment } from '../types';

export default async function createPlugin(
  env: PluginEnvironment,
): Promise<Router> {
  return await createRouter({
    logger: env.logger,
    config: env.config,
    database: env.database,
    discovery: env.discovery,
    tokenManager: env.tokenManager,
    providerFactories: {
      ...defaultAuthProviderFactories,
      microsoft: providers.microsoft.create({
        signIn: {
          resolver:
            providers.microsoft.resolvers.emailMatchingUserEntityAnnotation(),
        },
      }),
    },
  });
}
If we try to log in now, we will be taken through the Entra ID auth flow; however, we will see the error "User not found". This is because Backstage also requires us to set up ingestion of users from our Entra tenant: the user entity in Backstage has no concept of accounts, but rather matches the user logging in to a user entity registered with Backstage itself. To onboard our Entra users, we need to add the following to our app-config.yaml:
catalog:
  providers:
    microsoftGraphOrg:
      providerId:
        target: https://graph.microsoft.com/v1.0
        authority: https://login.microsoftonline.com
        tenantId: ${TENANT_ID}
        clientId: ${CLIENT_ID}
        clientSecret: ${CLIENT_SECRET}
        queryMode: basic
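Conceptually, the resolver configured earlier matches the email from the Microsoft sign-in against an email annotation on the ingested user entities. A toy illustration of that matching logic follows; the entity shapes and annotation key are simplified stand-ins, not Backstage's actual implementation.

```python
# Simplified catalog of ingested user entities (shapes invented for illustration).
USER_ENTITIES = [
    {"name": "jdoe", "annotations": {"microsoft.com/email": "jdoe@contoso.com"}},
    {"name": "asmith", "annotations": {"microsoft.com/email": "asmith@contoso.com"}},
]

def resolve_user(sign_in_email, entities):
    """Return the catalog entity whose email annotation matches the
    sign-in email, or None (which surfaces as a "User not found" error)."""
    for entity in entities:
        if entity["annotations"].get("microsoft.com/email", "").lower() == sign_in_email.lower():
            return entity
    return None

print(resolve_user("JDoe@contoso.com", USER_ENTITIES))
```

This is why sign-in fails until ingestion has populated the catalog: with no entities to match against, every resolution comes back empty.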
We are adding our secrets in plain text again for the sake of getting up and running quickly without building a CI/CD pipeline. In production we would keep the ${CLIENT_SECRET} placeholder and pass the secret through from a secure store at run time.
We could add user or group filtering to this provider to avoid loading our entire tenant, selecting only specific users; in this example I have kept it simple. You can learn how to add the user/group filtering parameters here:
Microsoft Entra Tenant Data | Backstage Software Catalog and Developer Platform
We also need to add the MS Graph plugin to our application packages. We do this with the following command:
# From your Backstage root directory
yarn --cwd packages/backend add @backstage/plugin-catalog-backend-module-msgraph
We then have to register the Microsoft entity provider in our plugin catalog. We do this with the following changes to the /packages/backend/src/plugins/catalog.ts file:
builder.addEntityProvider(
  MicrosoftGraphOrgEntityProvider.fromConfig(env.config, {
    logger: env.logger,
    schedule: env.scheduler.createScheduledTaskRunner({
      frequency: { hours: 1 },
      timeout: { minutes: 50 },
      initialDelay: { seconds: 15 },
    }),
  }),
);
Our catalog.ts file should look like this now:
import { CatalogBuilder } from '@backstage/plugin-catalog-backend';
import { ScaffolderEntitiesProcessor } from '@backstage/plugin-catalog-backend-module-scaffolder-entity-model';
import { Router } from 'express';
import { PluginEnvironment } from '../types';
import { MicrosoftGraphOrgEntityProvider } from '@backstage/plugin-catalog-backend-module-msgraph';

export default async function createPlugin(
  env: PluginEnvironment,
): Promise<Router> {
  const builder = await CatalogBuilder.create(env);
  builder.addEntityProvider(
    MicrosoftGraphOrgEntityProvider.fromConfig(env.config, {
      logger: env.logger,
      schedule: env.scheduler.createScheduledTaskRunner({
        frequency: { hours: 1 },
        timeout: { minutes: 50 },
        initialDelay: { seconds: 15 },
      }),
    }),
  );
  builder.addProcessor(new ScaffolderEntitiesProcessor());
  const { processingEngine, router } = await builder.build();
  await processingEngine.start();
  return router;
}
We also need to add this new provider configuration to the catalog providers block in our app-config.yaml file, below the provider block we added earlier.
microsoftGraphOrg:
  default:
    tenantId: ${TENANT_ID}
    user:
      filter: accountEnabled eq true and userType eq 'member'
    group:
      filter: >
        securityEnabled eq false
        and mailEnabled eq true
        and groupTypes/any(c:c+eq+'Unified')
    schedule:
      frequency: PT1H
      timeout: PT50M
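The `frequency` and `timeout` values here are ISO 8601 durations (PT1H is one hour, PT50M is fifty minutes). As a sketch, a small parser for sanity-checking such values:

```python
import re

def iso_duration_to_seconds(value):
    """Parse simple ISO 8601 time durations such as PT1H or PT50M
    (the format used by the schedule block above)."""
    match = re.fullmatch(r"PT(?:(\d+)H)?(?:(\d+)M)?(?:(\d+)S)?", value)
    if not match:
        raise ValueError(f"unsupported duration: {value}")
    hours, minutes, seconds = (int(g or 0) for g in match.groups())
    return hours * 3600 + minutes * 60 + seconds

print(iso_duration_to_seconds("PT1H"), iso_duration_to_seconds("PT50M"))  # 3600 3000
```

Note that the timeout (50 minutes) is deliberately shorter than the frequency (1 hour), so one ingestion run finishes before the next is due.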
If your environment restricts outgoing access, for example when we deploy into a UDR egress AKS cluster, we need to make sure the Backstage backend has access to the following hosts:
login.microsoftonline.com, to get and exchange authorization codes and access tokens
graph.microsoft.com, to fetch user profile information (as seen in this source code). If this host is unreachable, users may see an "Authentication failed, failed to fetch user profile" error when they attempt to log in.
If we now stop Backstage in our terminal using Ctrl/Cmd+C and restart it with yarn dev, we should be greeted by a Microsoft auth login button.
Once we're logged in, feel free to configure Backstage with any other integrations or plugins you feel are suitable. Once done, we are ready to create our container image, which will allow us to deploy Backstage on the Azure application service of our choosing.
We also need to update app.baseUrl in our app-config.yaml in preparation for deploying the application outside of our local environment. This avoids CORS policy issues once deployed on AKS.
app:
  title: Scaffolded Backstage App
  baseUrl: http://localhost:7007
organization:
  name: My Company
backend:
  # Used for enabling authentication, secret is shared by all backend plugins
  # See https://backstage.io/docs/auth/service-to-service-auth for
  # information on the format
  # auth:
  #   keys:
  #     - secret: ${BACKEND_SECRET}
  baseUrl: http://localhost:7007
  listen:
    port: 7007
    # Uncomment the following host directive to bind to specific interfaces
    # host: 127.0.0.1
  csp:
    connect-src: ["'self'", 'http:', 'https:']
    # Content-Security-Policy directives follow the Helmet format: https://helmetjs.github.io/#reference
    # Default Helmet Content-Security-Policy values can be removed by setting the key to false
  cors:
    origin: http://localhost:7007
    methods: [GET, HEAD, PATCH, POST, PUT, DELETE]
    credentials: true
    Access-Control-Allow-Origin: '*'
Containerise Backstage
First, let's provision a private Azure Container Registry. We can do this with the following command:
az acr create -n backstageacr{your initials} -g backstage --sku Premium --public-network-enabled true --admin-enabled true
While this resource spins up, let's create a Dockerfile for our Backstage application. We will do a host build to save some time: we build the backend on our host, whether that's local or a CI pipeline, and then build our Docker image. To start, from our Backstage root folder we run the following commands:
yarn install --frozen-lockfile

# tsc outputs type definitions to dist-types/ in the repo root, which are then consumed by the build
yarn tsc

# Build the backend, which bundles it all up into the packages/backend/dist folder.
# The configuration files here should match the ones you use inside the Dockerfile below.
yarn build:backend --config ../../app-config.yaml
We now need to create the Dockerfile, if the app creation didn't initialise one already (this is version dependent). The Dockerfile is below:
FROM node:18-bookworm-slim

# Install isolate-vm dependencies, these are needed by the @backstage/plugin-scaffolder-backend.
RUN --mount=type=cache,target=/var/cache/apt,sharing=locked \
    --mount=type=cache,target=/var/lib/apt,sharing=locked \
    apt-get update && \
    apt-get install -y --no-install-recommends python3 g++ build-essential && \
    yarn config set python /usr/bin/python3

# Install sqlite3 dependencies. You can skip this if you don't use sqlite3 in the image,
# in which case you should also move better-sqlite3 to "devDependencies" in package.json.
RUN --mount=type=cache,target=/var/cache/apt,sharing=locked \
    --mount=type=cache,target=/var/lib/apt,sharing=locked \
    apt-get update && \
    apt-get install -y --no-install-recommends libsqlite3-dev

# From here on we use the least-privileged `node` user to run the backend.
USER node

# This should create the app dir as `node`.
# If it is instead created as `root` then the `tar` command below will
# fail: `can't create directory 'packages/': Permission denied`.
# If this occurs, then ensure BuildKit is enabled (`DOCKER_BUILDKIT=1`)
# so the app dir is correctly created as `node`.
WORKDIR /app

# This switches many Node.js dependencies to production mode.
ENV NODE_ENV production

# Copy repo skeleton first, to avoid unnecessary docker cache invalidation.
# The skeleton contains the package.json of each package in the monorepo,
# and along with yarn.lock and the root package.json, that's enough to run yarn install.
COPY --chown=node:node yarn.lock package.json packages/backend/dist/skeleton.tar.gz ./
RUN tar xzf skeleton.tar.gz && rm skeleton.tar.gz

RUN --mount=type=cache,target=/home/node/.cache/yarn,sharing=locked,uid=1000,gid=1000 \
    yarn install --frozen-lockfile --production --network-timeout 300000

# Then copy the rest of the backend bundle, along with any other files we might want.
COPY --chown=node:node packages/backend/dist/bundle.tar.gz app-config*.yaml ./
RUN tar xzf bundle.tar.gz && rm bundle.tar.gz

CMD ["node", "packages/backend", "--config", "app-config.yaml"]
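If you prefer to build the image locally rather than with ACR Tasks, BuildKit must be enabled for the `--mount=type=cache` instructions above to work. This sketch (with a hypothetical `abc` placeholder for your initials) only composes and prints the command rather than running it:

```shell
# Sketch only: compose the local build command equivalent to the ACR task.
# YOUR_INITIALS is a hypothetical placeholder; substitute your own.
YOUR_INITIALS=abc
IMAGE="backstageacr${YOUR_INITIALS}.azurecr.io/backstageimage:v1"
BUILD_CMD="DOCKER_BUILDKIT=1 docker build -t ${IMAGE} -f Dockerfile ."
# Run the printed command from the backstage root after the host build steps.
echo "$BUILD_CMD"
```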
We can now use ACR Tasks to build our image. After the image is successfully built, it’s pushed to the registry we created earlier. Azure provides a hosted pool to build these images; however, ACR now also supports using a self-hosted pool for production environments. If we had set our endpoint to private, we would use a self-hosted pool to build the image.
As we require BuildKit to be enabled, we need to use ACR’s multi-step YAML file. Create a file called acr-task.yaml. It should contain the following:
version: v1.0.0
stepTimeout: 1000
env:
  [DOCKER_BUILDKIT=1]
steps: # A collection of image or container actions.
  - build: -t backstageacr${YOUR_INITIALS}.azurecr.io/backstageimage:v1 -f Dockerfile .
  - push:
      - backstageacr${YOUR_INITIALS}.azurecr.io/backstageimage:v1
And we can run it with the following ACR command:
az acr run -r backstageacr${YOUR_INITIALS} -f acr-task.yaml .
Once this has run, we should see our image in our registry.
We can now look to deploy this image on Azure. Deploying Backstage as a container gives us a variety of platforms to choose from: we could use App Service, Azure Container Apps (ACA), or AKS. ACA and AKS are the most suitable, depending on your use case and architecture. One thing to keep in mind with both services is that Backstage, as we described earlier, sits as an operational platform between developers and infrastructure. This means we will need to strongly consider the networking implications: if we are using a single Backstage per organisation or product, we will need to be able to make inbound and outbound calls to multiple environments. For example, when using the Kubernetes plugin we will need to ensure that we can connect to all of the clusters that we want to add to our single view.
For the rest of the article we will focus on deploying Backstage to AKS; however, I believe ACA is a brilliant destination for Backstage within a hub network for easy, managed hosting. I have deployed and verified Backstage working in ACA.
Deploy Backstage on Azure Kubernetes Service
To deploy Backstage we first need to create our AKS cluster, if we have not already. To do this we can use the following command. For simplicity we will not use Azure Firewall with a UDR to restrict ingress and egress traffic; if deploying to production, please apply your own standards to your deployment. The cluster will need network access to any other clusters you want to onboard with the Kubernetes plugin, your Git repositories, and the Postgres database.
To add an admin user we will need to get our user ID. If deploying in production this may not be necessary as your service deployment will be different.
To get your user ID you can use the following command:
az ad signed-in-user show --query id -o tsv
We can then create our cluster with the following command:
az aks create -g backstage -n backstagecluster --enable-aad --enable-oidc-issuer --enable-azure-rbac --load-balancer-sku standard --outbound-type loadBalancer --network-plugin azure --attach-acr backstageacr${YOUR_INITIALS} --aad-admin-group-object-ids {YOUR USER ID}
We now need to create the manifest for our Backstage deployment. This file can be created either in your backstage root folder or in a Kubernetes sub-folder. Depending on your requirements, your manifest should look something like the following, replacing the container image with your own location:
# kubernetes/backstage.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: backstage
  namespace: backstage
spec:
  replicas: 1
  selector:
    matchLabels:
      app: backstage
  template:
    metadata:
      labels:
        app: backstage
    spec:
      containers:
        - name: backstage
          image: backstageacr${YOUR_INITIALS}.azurecr.io/backstageimage:v1
          imagePullPolicy: IfNotPresent
          ports:
            - name: http
              containerPort: 7007
We can then use the command invoke feature of AKS to deploy the manifest. Command invoke uses the Azure backbone network, avoiding the need to connect directly to the cluster; this works brilliantly for administering private clusters.
For now we will also assign cluster admin permissions to our user so we can create our namespace and deployment. In production we would want to use roles with limited scope, if we are assigning individual roles to users at all. First we need our cluster ID:
AKS_ID=$(az aks show -g backstage -n backstagecluster --query id -o tsv)
We can then assign the role to our user for the cluster using the user ID we got earlier:
az role assignment create --role "Azure Kubernetes Service RBAC Cluster Admin" --assignee <AAD-ENTITY-ID> --scope $AKS_ID
We can now deploy our application and service. To do this we need to create a manifest for each. We can do this in our backstage root folder or in a Kubernetes folder within our backstage root folder. For ease I would advise a separate Kubernetes folder.
apiVersion: v1
kind: Namespace
metadata:
  name: backstage
---
# kubernetes/backstage.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: backstage
  namespace: backstage
spec:
  replicas: 1
  selector:
    matchLabels:
      app: backstage
  template:
    metadata:
      labels:
        app: backstage
    spec:
      containers:
        - name: backstage
          image: backstageacr${YOUR_INITIALS}.azurecr.io/backstageimage:v1
          imagePullPolicy: IfNotPresent
          ports:
            - name: http
              containerPort: 7007
And our service should look like this:
# kubernetes/backstage-service.yaml
apiVersion: v1
kind: Service
metadata:
  name: backstage
  namespace: backstage
spec:
  selector:
    app: backstage
  type: LoadBalancer
  ports:
    - name: http
      port: 80
      targetPort: http
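The manifest above provisions a public load balancer. If you would rather keep the service reachable only inside your VNet, one option on AKS is the internal load balancer annotation; this is a sketch of that variant:

```yaml
# kubernetes/backstage-service.yaml (internal variant, sketch)
apiVersion: v1
kind: Service
metadata:
  name: backstage
  namespace: backstage
  annotations:
    # AKS-specific: provision an internal (VNet-only) Azure load balancer
    service.beta.kubernetes.io/azure-load-balancer-internal: "true"
spec:
  selector:
    app: backstage
  type: LoadBalancer
  ports:
    - name: http
      port: 80
      targetPort: http
```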
We can then deploy them with the following commands:
az aks command invoke --name backstagecluster --resource-group backstage --command "kubectl apply -f backstage.yaml" --file kubernetes/backstage.yaml
az aks command invoke --name backstagecluster --resource-group backstage --command "kubectl apply -f backstage-service.yaml" --file kubernetes/backstage-service.yaml
It is likely that you will want to create an ingress of some description for internal access to your Backstage application. If you are doing this on Azure Container Apps, ingress can be enabled with the click of a button; on AKS you may want to use the Application Gateway for Containers service for a managed gateway resource (learn more about this product in a previous blog I wrote). In both cases you will need to change the base URL of your application to the custom domain you want to host Backstage on, and subsequently update your application redirect URI to also point at that domain. For the sake of this demo we will verify our deployment using port forwarding, meaning we can keep our redirect URI at localhost:7007 and avoid returning to the app config or setting up our domain and certificates on the cluster.
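If you do front Backstage with a custom domain, the app-config changes would look roughly like this (backstage.example.com is a hypothetical domain; your identity provider’s redirect URI must be updated to match):

```yaml
# Sketch of a production override, assuming a hypothetical custom domain.
app:
  baseUrl: https://backstage.example.com
backend:
  baseUrl: https://backstage.example.com
  cors:
    origin: https://backstage.example.com
```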
Let’s first check that our deployment has been successful:
az aks command invoke --name backstagecluster --resource-group backstage --command "kubectl get pods -n backstage"
We should see our backstage pod up and running.
Up until this point we have used command invoke, as a best practice, for our full configuration and deployment. As we now want to tunnel traffic from the cluster to our local host, we will need to use kubectl from our own command line. To do this, first log in to the cluster in your terminal with the following command:
az aks get-credentials --resource-group backstage --name backstagecluster --overwrite-existing
We can then execute the following to set up port forwarding:
kubectl port-forward deployment/backstage -n backstage 7007:7007
If we now open a private browser window and navigate to http://localhost:7007, we should be greeted by our Backstage application and can complete our login flow. I advise testing this in a private window; otherwise you may skip the login flow and use your existing session’s credentials.
Once authenticated we are greeted by our backstage application!
Conclusion:
We have now managed to deploy Backstage on AKS with a managed Postgres backend and Entra ID authentication configured, with users in our tenant onboarded. We have used Azure Container Registry to simplify our image build and deploy process too. We can now begin to further develop our Backstage instance by installing one of the many community-supported plugins and onboarding our Kubernetes clusters. As mentioned, Azure Container Apps has the potential to be a great platform for centralised Backstage deployments. This would allow deployment in a hub with the ability to deploy and monitor clusters or applications/APIs in spokes, without managing an entire AKS cluster.
Next steps to build upon this walkthrough, if you are looking to move towards production, are:
– Custom Domains and an Internal LB for secure ingress within the network.
– CI/CD Pipeline using environment variables and secrets to build, push and deploy the application. This can be great if needing to overwrite the base app url once a randomly generated URL has been assigned e.g. Container Apps without custom domain. See more here: Writing Backstage Configuration Files | Backstage Software Catalog and Developer Platform
– Secure networking between Backstage and your ideally private Postgres instance.
New on Microsoft AppSource: February 15-22, 2024
We continue to expand the Microsoft AppSource ecosystem. For this volume, 158 new offers successfully met the onboarding criteria and went live. See details of the new offers below:
Get it now in our marketplace
AKQUINET 365 Prices Plus: Available in German, the AKQUINET 365 Prices Plus complements the extensive functionalities of Microsoft Dynamics 365 Business Central by offering hierarchical pricing, multi-level discounts, and special discounts. It allows you to maintain complex discount structures for different articles, define price groups for similar items, set minimum prices, and assign price lists to categories.
Microsoft Teams Rooms Devices from AudioCodes: AudioCodes offers software licensing for personal and meeting room devices as part of its managed service portfolio. Straightforward to deploy and easy to use in meeting rooms of any size, it simplifies the integration of Microsoft Teams, UC, and enterprise telephony and provides a seamless and cost-effective migration for high-quality voice and video collaboration.
BrandMail: Consistent Email Signatures: BrandMail by BrandQuantum integrates with Microsoft Outlook and empowers every employee in your organization to create consistently branded emails with tamper-proof signatures, banners, and content. Compatible with various email platforms, BrandMail provides analytics and reporting, QR code-enabled digital business cards, and access to brand resources.
Credit Device: The Credit Device app integrates with Microsoft Dynamics 365 Business Central to automatically send customers and invoices to your Credit Device software, saving time and preventing mistakes. It supports the Essentials and Premium Editions of Microsoft Dynamics 365 Business Central and is available in English and Dutch.
Dataverse Manager – Compare Data and Solutions: Dataverse Manager streamlines deployment and management of Dataverse solutions by offering comprehensive comparison capabilities across different environments. It includes entity/table record comparison and Microsoft Power Automate flow monitoring. Compatible with all Dataverse backed solutions, it’s an essential tool for organizations looking to optimize their use of Microsoft Power Platform and maintain high standards of operational excellence.
Deventral: Deventral offers a platform for businesses to build their own internal tools using generative AI without any development skills. Micro tools connect data sources and work management systems, providing a new way to make better decisions faster. The platform is accessible through an application developed for Microsoft Teams, Microsoft 365, and Microsoft Outlook.
Drill Down Network PRO (Filter): Part of the ZoomCharts visual bundle, this mobile-friendly network chart used by 80 percent of Fortune 200 companies offers on-chart interactions, category-based customization, force feedback layout, link decorations, and legend support. It is ideal for accounting, finance, human resources, production, sales, and marketing.
Easy Call Report: Easy Call Report is a solution for Microsoft Teams that posts missed calls in call queues directly to a Teams channel, allowing users to monitor any number of call queues. The solution offers quick action buttons for easy call-backs, email responses, and CRM transfer, as well as call history display.
Enquiry 360: Dialogs for Dynamics 365 CRM and Power Apps: The Enquiry 360 app from Maya Information Systems serves as a replacement for the now deprecated Microsoft Dynamics 365 Dialogs. Elevate customer service experiences and enhance efficiency of your call center operations with Enquiry 360. It utilizes Microsoft Dynamics 365 CRM and Microsoft Power Apps to triage call center inquiries using interactive dialogs.
Imperium Job Safety Analysis (SaaS): Imperium’s solution evaluates workplace safety with built-in surveys per Occupational Safety and Health Administration (OSHA) standards. Designed to analyze the health measures provided to employees at the workplace, it offers real-time dashboards and automated survey assignments, and supports anonymous, secured survey submissions.
Named Entity Extraction API: The Named Entity Extraction API extracts individuals, places, animals, plants, historical figures, monuments, organizations, and other entities from text. It lists named entities across categories like name, place, and animal. This API from Datadome is useful for analyzing large bodies of text for specific mentions.
PDF Uploader/Viewer: The PDF Uploader/Viewer visual allows users to securely upload and share PDF files with colleagues. It features Microsoft certification, automatic preference saving, and the ability to add text and draw lines. Users can download files directly from the visual, with a 30MB size limit.
Send Individually: This add-in for Outlook allows you to send personalized mass emails without the risk of being flagged as spam. Send Individually offers privacy, Microsoft Excel integration, and a rate throttled send speed to avoid server overload. It can send up to 2,000 emails at a time and is perfect for your newsletters, press releases, business announcements, or other marketing emails.
Skypoint AI Platform for Home Health: Skypoint’s AI platform unifies fragmented data sources related to patient care and wellness into complete data models that can be accessed through a natural language interface. Streamline workflows, eliminate inefficiencies, and enable data-driven decision making for optimized resource allocation, customized care, and regulatory compliance.
WeTransact ANZ Custom: WeTransact.io streamlines the publication of your Software as a Service (SaaS) offer in the Microsoft commercial marketplace by handling all technical and integration requirements. SaaS companies can quickly publish their offers without a tech team and can refer a friend for a discount. Managed applications are also supported.
Zimplicit: Zimplicit is an application that simplifies change and continuous improvements by empowering employees to contribute to desired outcomes. It provides clear instructions, exercises, and task clarifications through its intuitive companion, SAM, ensuring that teams stay informed and equipped with necessary tools to excel in their roles.
Go further with workshops, proofs of concept, and implementations
Copilot for Microsoft 365: 1-Day Workshop: Available in Japanese, this workshop by Fujisoft Corporation will demonstrate the functionality and usage of Microsoft Copilot embedded in the Microsoft 365 apps you use every day — Word, Excel, Outlook, Teams, and more — so your employees can deepen their understanding of the tool and improve productivity. The agenda includes evaluation, confirmation of capabilities, and planning.
Accelerate Innovation with Low Code: Lantern offers a three-phase approach to help organizations expand their usage of the Microsoft Power Platform for low-code development, transforming processes, and accelerating innovation at scale. The approach includes advisory, strategy, envisioning, governance workshops, value realization projects, and ongoing support.
Conditional Access in Microsoft Entra ID: 4- to 6-Week Workshop: The experts from ACP will walk you through best practices and implement a set of rules that combine both security and usability to manage your conditional access using Microsoft Entra ID. The workshop, available only in German, is aimed at IT professionals and covers planning, preparation, configuration, and documentation.
AI in Action – Copilot for Microsoft 365: 4-Week Adoption Service: The Copilot for Microsoft 365 Adoption Service by Metro Systems Corporation helps organizations adopt Copilot for Microsoft 365. It includes assessments, adoption activities, optimization recommendations, and support. The service accelerates adoption, enhances productivity, and leverages the expertise of a certified Microsoft partner.
Dynamics 365 Business Central Finance Module: 6-Week Implementation: BCPrise will help implement Microsoft Dynamics 365 Business Central for small to medium businesses with budgets ranging from $30,000 to $75,000. Deliverables include training workshops, configuration, support, and a functional design document. Price quoted is indicative and will vary depending upon actual complexity and scope. This offer is available in Australia and New Zealand.
Cloud Endpoint Management Implementation: This service from Migrate offers tailored implementation of Microsoft 365 and Microsoft Intune design and device migration and deployment. It prepares enterprises for Windows 10 end of service and future solutions like Windows 11 and Azure Virtual Desktop. Intune simplifies Microsoft Endpoint management, cuts costs, and protects users and company data.
Concept to Production with Microsoft Copilot Studio: 8-Week Implementation: Microsoft Copilot Studio is a conversational assistant that uses natural language processing, natural language understanding, and generative AI to enhance customer and employee experiences. Capgemini’s service will create tailored chatbots, voice assistants, and interactive voice response agent assistants, with a roadmap to production.
Copilot for Microsoft 365 Adoption Accelerator Workshop: Changing Social will identify high-value scenarios and establish both technical and organizational baselines for adopting Copilot for Microsoft 365. This adoption and change management engagement includes customized use case prioritization, onboarding support, and a flexible training plan to streamline business processes and improve productivity.
Copilot for Microsoft 365 Deployment: This service from SMART Business will help you deploy Copilot for Microsoft 365, an AI tool that connects various Microsoft 365 apps and uses Microsoft Graph to simplify complex tasks. Robust security configuration is crucial before use, and the tool adheres to enterprise-grade security, privacy, and compliance standards.
Copilot for Microsoft 365 Workshop: Copilot for Microsoft 365 is a tool that helps businesses unlock the full potential of AI integration. It offers hands-on learning, AI-readiness, and bespoke adoption support to transform teams and organizations. This service from Changing Social includes optimization assessment, scenario analysis, and prioritization.
Copilot for Microsoft 365: 3-Day Workshop: In this workshop from VNEXT Group, your team will get an overview of Microsoft Copilot, its benefits, and how you can use it in your organization. The workshop includes an assessment, practical scenarios, and customized reports to help businesses streamline operations and enhance creativity.
Copilot for Microsoft 365: 3-Week Deployment: Orange Business offers a proven approach for implementing Copilot for Microsoft 365, including an ideation workshop, a readiness check, technical delivery, and a user adoption assessment. The goal is to prime your organization for AI adoption and obtain maximum business value while anticipating risks and impacts.
Copilot for Microsoft 365: 4-Week Deployment: Copilot for Microsoft 365 is a natural language generation tool that helps users write documents with elegant and linguistically creative writing. This service by Diyar will help you deploy Copilot and includes installation, configuration, training, and support for users. Areas out of scope include data migration, network issues, and third-party applications.
Edgile Caller Verifier Powered by Entra ID: 8-Week Implementation: Wipro’s solution helps authenticate service desk agents and verify end users calling in for support. It offers customizable verification methods, including SMS, PIN verification, and authenticator apps, and integrates directly into ITSM systems. Wipro will deploy the solution using your existing Microsoft 365 and Entra ID licenses.
Copilot for Microsoft 365: 1-Day Workshop: Copilot for Microsoft 365 helps businesses boost productivity using AI. This workshop by Elisa will assess your environment, prioritize scenarios and business requirements, and create a roadmap for implementation. This service provides essential training and advice for immediate benefits using AI capabilities embedded in Copilot.
Envisioning Copilot for Microsoft 365: 1-Day Workshop: Available in German, novaCapta’s workshop will help your business understand Microsoft Copilot, review technical requirements for implementation, and identify use cases for immediate return on investment. Mission, responsibility, and privacy in dealing with AI, integration with Microsoft 365, and priority planning will be covered.
Hackathon Power Platform: Atea offers a half-day workshop for decision-makers, leaders, and project managers to assess and prioritize Microsoft Power Platform use cases. The workshop introduces best practice tools from AI Sweden, resulting in increased knowledge of low-code automation and app development. Attendees receive suggestions for their own applications.
Introducing Copilot for Microsoft 365: 1-Hour Workshop: Copilot for Microsoft 365 is an AI tool that streamlines tasks, enhances productivity, and drives innovation. Cloud Factory’s workshop will provide decision-makers in your organization with an understanding of Copilot’s capabilities, real-time demos, licensing and technical requirements, privacy, security, and compliance considerations, and a guide to successful adoption.
Managed Backup and Backup-Restore: 1-Month Service: IT Partner will provide a robust backup and recovery solution to ensure your critical data stored within the Microsoft 365 environment is fortified against loss or corruption. It helps check backup and recovery status and assesses the strategy used to ensure business continuity, adhering to compliance requirements, and maximizing the reliability and effectiveness of Microsoft 365 deployment.
Microsoft 365 Copilot: 2-Week Implementation: Through training, tips, best practices, and resources, q.beyond AG will help your organization adopt Copilot for Microsoft 365. The workshop supports adoption and change management, builds a champion program, and leverages generative AI, large language models, and natural language in Microsoft applications. This offer is only available in German.
Microsoft 365 Copilot Accelerator: 5-Week Implementation: Arinco will help businesses integrate Copilot for Microsoft 365 to increase employee productivity and improve operational efficiency by collating internal data and effective engineering prompts. The service also enables Copilot in a scalable and structured manner, builds plugins to integrate it with organizational data, and measures impact on business efficiency.
Microsoft 365 Copilot Adoption: 8-Hour Workshop: Available in Italian, the Copilot for Microsoft 365 workshop by Porini is an interactive learning experience that helps organizations discover how AI can transform their work and learning processes. It includes analysis, hands-on demonstrations, and planning to identify use cases and develop a plan for implementation.
Microsoft 365 Copilot Rapid Onboarding Program: 4-Week Implementation: Microsoft Copilot is an AI-powered tool that integrates with Microsoft 365 to enhance productivity and collaboration. Tech One Global offers a rapid onboarding program to help organizations seamlessly transition to Copilot, including people-readiness, use-case discovery, platform readiness, and extensibility options.
Microsoft 365 Copilot: 1-Day Workshop: Citrin Cooperman Advisors offers this workshop to help organizations adopt Copilot for Microsoft 365 and enhance productivity, creativity, and innovation with responsible AI. The workshop includes an assessment of organizational readiness, demonstrations of Copilot’s capabilities, and development of a strategic roadmap.
Microsoft 365 Copilot: 4-Hour Workshop: This workshop from Nexer Group prepares businesses for using Copilot for Microsoft 365 securely and productively. Decision makers in commercial or public sector organizations can learn how to prepare data and documents, understand information architecture, and classify data to create a safe and productive environment.
Microsoft 365: Strategic Roadmap Development for Modern Workplace: Seer’s workshop will cover how Microsoft 365 collaboration tools can be used to meet company objectives such as increased efficiency and productivity, as well as increased employee engagement. It includes workforce and digital workplace analysis, environment assessment, workload education, solution envisioning, and strategy planning.
Microsoft Copilot Readiness: 2-Week Workshop: Centric Consulting offers an engagement to prepare organizations for secure deployment of Microsoft Copilot. The assessment covers technical and organizational readiness, with a roadmap that includes recommendations for configuring existing Microsoft technologies and adoption and change management plans.
Mobile Device Management Services: 2-Week Implementation and Piloting: Dayta offers services for end users to understand Microsoft Intune’s features, security configuration, and device management capabilities. The workshop includes envisioning, designing, piloting, and testing activities. It provides insights into Intune’s potential for controlling company devices.
Microsoft Security Solutions: Implementation: NEC Africa offers an end-to-end portfolio of consulting, implementation, and managed security services, all powered by Microsoft’s security technologies. These solutions include Microsoft Entra ID, Microsoft PIM, Microsoft 365 Defender, and more. It is designed to expand on your existing security investment to increase profitability and cyber resilience.
OneDrive Tenant-to-Tenant Migration: 2-Week Service: This service streamlines Microsoft 365 data migration between tenants, ensuring uninterrupted access to files and documents. The process is implemented in two stages, with pre-stage and final data transfers, minimizing downtime. IT Partner provides usage reports and migration plans, while clients coordinate resources and staff schedules.
Power BI Consulting: Dear Watson Consulting offers customizable and dynamic Microsoft Power BI reporting solutions that provide real-time data analytics and personalized reports. It will assess specific reporting requirements, identify key metrics and data sources, and provide comprehensive training and ongoing support to maximize the utility of Power BI reporting.
Power BI Training: Elevate your team’s proficiency in Microsoft Power BI with this offer from Dear Watson Consulting. Its Microsoft-certified trainers will offer personalized mentoring sessions and customized courses such as “Dashboard in a Day,” “Data Visualization,” and “Data Modelling,” as well as practical insights and scenario-based learning along with ongoing implementation support.
Defender for Servers: Workshop: The security experts from Move will help you understand the functionality of Microsoft Defender for Servers so you can increase insight into security, vulnerability management, and expertise in hardening servers. Defender for Servers integrates with Microsoft Defender for Endpoint to provide next-generation protection, including real-time scanning, protection, endpoint detection and response (EDR), and much more.
SoftServe Production Control Tower: 10-Day Proof of Concept: SoftServe’s Production Control Tower (PCT) is a comprehensive solution that simplifies data collection and provides visibility into key production KPIs. SoftServe offers a step-by-step approach to implement PCT, with deliverables including access to the application and the ability to implement and review all functionalities.
Microsoft Teams Phone: 3-Month Implementation: JBS will implement and configure Microsoft Teams Phone for seamless communication, flexibility, and cost reduction. The cloud-based service integrates with Microsoft Teams and allows remote work. JBS provides support and expertise in various configurations and carrier services. This service is available only in Japanese.
Wipro Copilot in a Day using Dynamics 365: Wipro’s workshop teaches generative AI for Microsoft Dynamics 365 modules. Copilot is an AI-powered assistant that provides operational support for sales, supply chain management, finance, marketing, and other teams. It offers a chat interface for quick summaries of sales opportunities and leads, meeting preparations, account-related news, and much more.
Contact our partners
2021.AI AI Governance Service (AGS)
24-Point ISV Partnership Health Check
Adoption and Change Management: Prepaid Change Support: 8-Week Assessment
AIMMO Enterprise: AI Application Development Platform
AMLcheck: Prevent Money Laundering
Antidote Connector for Microsoft Outlook
Synch Microsoft Endpoint Devices to Atlassian Assets
Atos || Enabling a sustainable enterprise with Atos and Microsoft
TechAlign: Business and Technology Alignment Service
Community-Based Wellness Platform for Companies
Copilot Customer Success Service
Coupler.io PPC Dashboard for Campaign Analytics
DeepSign Integration for Excel
Digital Transformation Assessment for Social Housing
DIIT Documents Electronic Integration for Microsoft Dynamics 365 Business Central
Dynamics 365 Business Central Migration with Alletec
Dynamics 365 Business Central Rescue: 4-Week Assessment
G-Collection for Microsoft Dynamics 365 Customer Engagement and Microsoft Power Apps
Green Project – SMB Carbon Accounting Platform
GuardianAI FraudSafe Protasval
Highlight Service Assurance Platform
Information Management: Clean-up, Back-up, and Governance: 3-Month Assessment
Information Management Advisory: 12-Month Consulting Service
Ivy Distribution Management System
The Tower of The Albanian for Microsoft Word
Lightstream Data Security Assessment
Lightstream Microsoft 365 Comprehensive Security Assessment
m+m Extended Text Module with Production Order Overview
m+m Production Order Overview with Production Order Assistant
m+m Production Order Overview with Routing Status Overview
Maplytics for Microsoft Dynamics 365 CRM
Meltwater for Media, Social, and Consumer Intelligence
Microsoft 365 Copilot Readiness Assessment
Microsoft 365 Copilot Readiness: 5-Day Assessment
Microsoft Fabric: 2-Hour Foundation Session
nShift Shipping Connector (Trial)
Paribus 365 Data Quality Management
Planner for Microsoft Dynamics 365 Business Central
Power BI Consulting: 2-Week Assessment
Power Platform Adoption Journey Assessment
Sales Copilot Readiness: 4-Week Assessment
Security Copilot Readiness: 4-Week Assessment
ServiceNow Healthcare and Life Sciences Clinical Device Management
ServiceNow Healthcare and Life Sciences Service Management
ServiceNow Order Management for Technology Providers (OMTP)
ServiceNow Order Management for Telecommunications (OMT)
ServiceNow Technology Providers Service Management (TPSM)
ServiceNow Telecommunications Network Inventory (TNI)
ServiceNow Telecommunications Service Management (TSM)
ServiceNow Telecommunications Service Operations Management (TSOM)
Signzy End-to-End No Code Onboarding
Stripe for Dynamics 365 Sales – TAPP
Typeface Generative AI Platform for Enterprise
Viva Copilot Readiness: 4-Week Assessment
Wylo – Comprehensive Community Platform
XPRIMER HRM – Effective Workforce Management
This content was generated by Microsoft Azure OpenAI and then revised by human editors.
Microsoft Tech Community – Latest Blogs – Read More
Zero Trust: Rapid offboarding with Intune and Microsoft Entra ID
Hi, Jason Cody here!
I would like to talk about using Intune policies with Microsoft Entra ID Governance as part of the offboarding process. Using the method below, you can rapidly offboard an employee or contractor while preserving device data, Entra ID join status, and Intune enrollment. This approach is useful for multi-user endpoints or in events where forensics may be necessary on the device.
In the past, quickly offboarding users while preserving data on their devices was fairly easy. As soon as the decision was made to offboard an employee or contractor, they would lose physical access to their equipment and the facility, their account would be disabled, and security personnel would typically ensure they did not leave with any critical company assets in the process.
With the rise in remote work, it is not uncommon for employees or contractors to work not only remotely but in a different geographical location from their company's office. Asking the user to travel onsite so that they can be offboarded is impractical and unrealistic.
At the same time, the nature of today's authentication and authorization protocols does not allow for rapid offboarding of remote users. Depending on the protocol and application, a user can continue to access or modify data for an hour or more after their account is disabled. On a Windows client, the user can potentially log in indefinitely using cached credentials.
Remotely wiping the device is typically the route we take in these scenarios, but sometimes we need to preserve the data on the device for investigative or legal purposes. In those cases, we need an alternative. One way of addressing this is to revoke logon permissions after the organization has decided to offboard the user. This gives the best opportunity to preserve the integrity of data on the device should it be required for investigative or legal purposes.
Things you need to create this workflow
Intune Licenses
Permissions to create Intune policies and Entra ID security groups
Windows 10 or higher
Entra ID joined or Hybrid Entra ID joined
Intune managed or Co-Managed
Internet connection
(optional) Entra ID Governance License for Automation
How it will work
We are going to take advantage of Windows User Rights Assignments to lock the user out of the Windows client device:
Deny log on locally
Deny access to this computer from the network
Deny log on through Remote Desktop Services
Once the prerequisites are configured in Intune and Microsoft Entra ID, the overall process requires two steps and can be automated with a Logic App, Azure Automation, or a PowerShell script:
Add the user to a group that enforces the deny-access Intune policy.
Send a restart command to the user's Windows client device to apply the new policy.
After the restart is complete, the user will no longer be able to log in to the Windows client or access any of the organization's data on that device. Please note that the device must be online to become aware of the user's group membership change. If the user intentionally keeps the device offline, they would be able to continue logging in with cached credentials indefinitely.
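As one illustration of how step 1 could be scripted outside of a Logic App, the sketch below builds the Microsoft Graph request that adds a user to the restriction group. The group and user IDs are placeholders, and token acquisition and the actual HTTP call are omitted; this only shows the shape of the request.

```python
GRAPH = "https://graph.microsoft.com/v1.0"

def add_member_request(group_id: str, user_id: str) -> dict:
    """Build the Graph request that adds a user to the logon-restriction group."""
    return {
        "method": "POST",
        "url": f"{GRAPH}/groups/{group_id}/members/$ref",
        # The new member is referenced by its directory object URL in the body.
        "body": {"@odata.id": f"{GRAPH}/directoryObjects/{user_id}"},
    }
```

Step 2, the restart, can then be issued against the device once the membership change is in place.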
Intune steps required to implement authentication restrictions
First, we will create a group that revoked accounts will be added to in order to apply the policies. For this example, we will name this group "Logon Restrictive Users", and it will be an Assigned Entra ID security group (see Note 1). Do not populate members into this group yet.
Note: These steps are for Entra ID joined devices. For Hybrid Entra ID joined devices, a separate configuration using Group Policy Objects will need to be created.
If you want to test this on a subset of devices, then you should also create a device security group for assigning the policies at this time.
Next, in the Intune portal, we will go to Devices -> Configuration Profiles and create a new Settings Catalog configuration profile to define User Rights on the target device group. In this policy, we will deny log on and remote access to the local Guest security group (see Note 2). Assign this policy to a test device group.
Finally, we will create a Local User Group Membership configuration. In Intune -> Endpoint Security -> Account Protection, create a new Account Protection (preview) policy. In this policy, we will add the "Logon Restrictive Users" group to the local Guest security group membership (see Note 3). Assign this policy to the test device group used in the prior step.
Validate that the settings are applied:
In the local group membership, you will see the GUID of the Entra ID security group among the local Guest group members.
In SecPol.msc, you will see the deny user rights assignments applied to the Guests group.
Now we are ready to test the policy. Add the test account to be restricted to the security group's assigned members and restart the user's device. In this example, my foster dog Koa was adopted, left my house, and should no longer have access to their laptop. Cookie, however, also uses this device, and their account is unaffected.
Additional Details
For restarting the device, the easiest way is from the Intune portal using device actions. However, I prefer using the Graph API, referencing the user's known managedDeviceId. Unfortunately, targeting the members of the "Logon Restrictive Users" security group with a script won't work quickly enough, because newly assigned Intune script assignments are not recognized until the Intune Management Extension service is restarted, either manually or via machine reboot.
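To script the restart path described above, the device can first be looked up from the user's managed devices and then targeted with the restart action. Below is a minimal Python sketch of the Graph requests involved; authentication and the HTTP calls themselves are omitted, all IDs are placeholders, and the endpoint paths should be verified against the Microsoft Graph documentation.

```python
GRAPH = "https://graph.microsoft.com/v1.0"

def managed_devices_request(user_id: str) -> dict:
    """Look up the Intune-managed devices registered to a user,
    to obtain the managedDeviceId for the restart action."""
    return {"method": "GET", "url": f"{GRAPH}/users/{user_id}/managedDevices"}

def reboot_request(managed_device_id: str) -> dict:
    """Restart a managed device by its managedDeviceId."""
    return {
        "method": "POST",
        "url": f"{GRAPH}/deviceManagement/managedDevices/{managed_device_id}/rebootNow",
    }
```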
If you do want to deploy a reboot script, which is recommended, I would use something like the following to clear cached credentials prior to rebooting. Clearing the cached credentials is not required for group membership changes to be processed, as long as the device is online at the time of logon. However, clearing the cached logons is recommended so that the user cannot continue using the device in an offline state. Please note that the device needs to be online long enough to receive these commands (see Note 4).
# Set cached credential count to 0
Set-ItemProperty -Path 'HKLM:\SOFTWARE\Microsoft\Windows NT\CurrentVersion\Winlogon' -Name CachedLogonsCount -Value 0
# Remove all cached Kerberos tickets from the device
Get-WmiObject Win32_LogonSession | Where-Object {$_.AuthenticationPackage -ne 'NTLM'} | ForEach-Object {klist.exe purge -li ([Convert]::ToString($_.LogonId, 16))}
# Delete Windows Hello for Business keys
certutil.exe -DeleteHelloContainer
# Reboot the device to force user logoff and apply changes
shutdown /r /t 0
To revert the restriction, remove the user from the "Logon Restrictive Users" security group and temporarily remove the computer from the Endpoint Security Account Protection policy for Local User Group Membership. Once this policy is removed, the user will be able to log in again, and the policy can be reapplied. If your processes are well defined, you will hopefully rarely need to revert this restriction.
Note 1: It is recommended to use a Microsoft Entra ID group over an on-premises group because 1) it allows for any additional automation tasks we might implement in Power Automate in the future, and 2) when working with cloud-based policy, it is best practice to use cloud-based groups.
Note 2: If you already have policies defining User Rights, make sure you are not creating a policy conflict by applying both policies to the target device.
Note 3: If you already have policies defining Local Group Membership, make sure you are not creating a policy conflict by applying both policies to the target device.
Note 4: This will clear all cached credentials on the device; it is not specific to an individual user, so use with caution.
Disclaimer
The sample scripts are not supported under any Microsoft standard support program or service. The sample scripts are provided AS IS without warranty of any kind. Microsoft further disclaims all implied warranties including, without limitation, any implied warranties of merchantability or of fitness for a particular purpose. The entire risk arising out of the use or performance of the sample scripts and documentation remains with you. In no event shall Microsoft, its authors, or anyone else involved in the creation, production, or delivery of the scripts be liable for any damages whatsoever (including, without limitation, damages for loss of business profits, business interruption, loss of business information, or other pecuniary loss) arising out of the use of or inability to use the sample scripts or documentation, even if Microsoft has been advised of the possibility of such damages.
Updates from 162.1 and 162.2 releases of SqlPackage and the DacFx ecosystem
Within the past 4 months, we’ve had 2 minor releases and a patch release for SqlPackage. In this article, we’ll recap the features and notable changes from SqlPackage 162.1 (October 2023) and 162.2 (February 2024). Several new features focus on giving you more control over the performance of deployments by preventing potential costly operations and opting in to online operations. We’ve also introduced an alternative option for data portability that can provide significant speed improvements to databases in Azure. Read on for information about these improvements and more, all from the recent releases in the DacFx ecosystem. Information on features and fixes is available in the itemized release notes for SqlPackage.
.NET 8 support
The 162.2 release of DacFx and SqlPackage introduces support for .NET 8. SqlPackage installation as a dotnet tool is available with the .NET 6 and .NET 8 SDK. Install or update easily with a single command if the .NET SDK is installed:
# install
dotnet tool install -g microsoft.sqlpackage
# update
dotnet tool update -g microsoft.sqlpackage
Online index operations
Starting with SqlPackage 162.2, online index operations are supported during publish on applicable environments (including Azure SQL Database, Azure SQL Managed Instance, and SQL Server Enterprise edition). Online index operations can reduce the application performance impact of a deployment by supporting concurrent access to the underlying data. For more guidance on online index operations and to determine if your environment supports them, check out the SQL documentation on guidelines for online index operations.
Directing index operations to be performed online across a deployment can be achieved with a command line property new to SqlPackage 162.2, "PerformIndexOperationsOnline". The property defaults to false, where just as in previous versions of SqlPackage, index operations are performed with the index temporarily offline. If set to true, the index operations in the deployment will be performed online. When the option is requested on a database where online index operations don't apply, SqlPackage will emit a warning and continue the deployment.
An example of this property in use to deploy index changes online is:
sqlpackage /Action:Publish /SourceFile:yourdatabase.dacpac /TargetConnectionString:"yourconnectionstring" /p:PerformIndexOperationsOnline=True
More granular control over the index operations can be achieved by including the ONLINE=ON/OFF keyword in index definitions in your SQL project. The online property will be included in the database model (.dacpac file) from the SQL project build. Deployment of that object with SqlPackage 162.2 and above will follow the keyword used in the definition, superseding any options supplied to the publish command. This applies to both ONLINE=ON and ONLINE=OFF settings.
DacFx 162.2 is required for SQL project inclusion of ONLINE keywords with indexes and is included with the Microsoft.Build.Sql SQL projects SDK version 0.1.15-preview. For use with non-SDK SQL projects, DacFx 162.2 will be included in future releases of SQL projects in Azure Data Studio, VS Code, and Visual Studio. The updated SDK or SQL projects extension is required to incorporate the index property into the dacpac file. Only SqlPackage 162.2 is required to leverage the publish property “PerformIndexOperationsOnline”.
Block table recreation
With SqlPackage publish operations, you can apply a new desired schema state to an existing database. You define the object definitions you want in the database and pass a dacpac file to SqlPackage, which in turn calculates the operations necessary to update the target database to match those objects. This set of operations is known as a "deployment plan".
A deployment plan will not destroy user data in the database in the process of altering objects, but it can have computationally intensive steps or unintended consequences when features like change tracking are in use. In SqlPackage 162.1.167, we've introduced an optional property, /p:AllowTableRecreation, which lets you block any deployment whose plan includes a table recreation step.
/p:AllowTableRecreation=true (default) SqlPackage will recreate tables when necessary and use data migration steps to preserve your user data
/p:AllowTableRecreation=false SqlPackage will check the deployment plan for table recreation steps and stop before starting the plan if a table recreation step is included
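The flag fits naturally into a scripted deployment. As a sketch, the helper below assembles a publish invocation with the property set to False so the run stops before any table recreation; the dacpac file name and connection string are placeholders, and the resulting list can be passed to subprocess.run.

```python
def build_publish_args(dacpac: str, connection_string: str,
                       allow_table_recreation: bool = True) -> list[str]:
    """Assemble a SqlPackage publish command; pass allow_table_recreation=False
    to stop the deployment if the plan includes a table recreation step."""
    return [
        "sqlpackage",
        "/Action:Publish",
        f"/SourceFile:{dacpac}",
        f"/TargetConnectionString:{connection_string}",
        # Python's True/False render as the True/False literals SqlPackage expects.
        f"/p:AllowTableRecreation={allow_table_recreation}",
    ]
```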
SqlPackage + Parquet files (preview)
Database portability, the ability to take a SQL database from a server and move it to a different server even across SQL Server and Azure SQL hosting options, is most often achieved through import and export of bacpac files. Reading and writing the singular bacpac files can be difficult when databases are over 100 GB and network latency can be a significant concern. SqlPackage 162.1 introduced the option to move the data in your database with parquet files in Azure Blob Storage, reducing the operation overhead on the network and local storage components of your architecture.
Data movement in parquet files is available through the extract and publish actions in SqlPackage. With extract, the database schema (.dacpac file) is written to the local client running SqlPackage and the data is written to Azure Blob Storage in Parquet format. With publish, the database schema (.dacpac file) is read from the local client running SqlPackage and the data is read from Azure Blob Storage in Parquet format.
The parquet data file feature benefits larger databases hosted in Azure with significantly faster data transfer speeds due to the architecture shift of the data export to cloud storage and better parallelization in the SQL engine. This functionality is in preview for SQL Server 2022 and Azure SQL Managed Instance and can be expected to enter preview for Azure SQL Database in the future. Dive into trying out data portability with dacpacs and parquet files from the SqlPackage documentation on parquet files.
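As a hedged sketch of what a scripted extract with parquet data movement might look like, the helper below adds the Azure Blob Storage properties to an extract invocation. The property names (AzureStorageBlobEndpoint, AzureStorageContainer, AzureStorageKey) are assumptions to verify against the SqlPackage documentation on parquet files, and all values here are placeholders.

```python
def build_extract_args(dacpac: str, connection_string: str,
                       blob_endpoint: str, container: str,
                       storage_key: str) -> list[str]:
    """Assemble a SqlPackage extract that writes the schema locally and the
    table data to Azure Blob Storage in Parquet format. Property names are
    assumptions; confirm them against the SqlPackage documentation."""
    return [
        "sqlpackage",
        "/Action:Extract",
        f"/TargetFile:{dacpac}",
        f"/SourceConnectionString:{connection_string}",
        f"/p:AzureStorageBlobEndpoint={blob_endpoint}",
        f"/p:AzureStorageContainer={container}",
        f"/p:AzureStorageKey={storage_key}",
    ]
```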
Microsoft.Build.Sql
The Microsoft.Build.Sql library for SDK-style projects continues in the preview development phase, and version 0.1.15-preview was just released. Code analysis rules have been enabled for execution during build time with .NET 6 and .NET 8, opening the door to performing quality and performance reviews of your database code on the SQL project. To enable code analysis rules on your project, add the <RunSqlCodeAnalysis>True</RunSqlCodeAnalysis> property to your project definition, as shown in the following sample.
<Project DefaultTargets="Build">
  <Sdk Name="Microsoft.Build.Sql" Version="0.1.15-preview" />
  <PropertyGroup>
    <Name>synapseexport</Name>
    <DSP>Microsoft.Data.Tools.Schema.Sql.Sql160DatabaseSchemaProvider</DSP>
    <ModelCollation>1033, CI</ModelCollation>
    <RunSqlCodeAnalysis>True</RunSqlCodeAnalysis>
  </PropertyGroup>
</Project>
During build time, the objects in the project will be checked against a default set of code analysis rules. Code analysis rules can be customized through DacFx extensibility.
Ways to get involved
In early 2024, we added preview releases of SqlPackage to the dotnet tool feed, such that not only do you have early access to DacFx changes but you can directly test SqlPackage as well. Get the quick instructions on installing and updating the preview releases in the SqlPackage documentation.
Most of the issues fixed in this release were reported through our GitHub community, and in several cases the person reporting put together great bug reports with reproducing scripts. Feature requests are also discussed within the GitHub community in some cases, including the online index operations and blocking table recreation capabilities. All are welcome to stop by the GitHub repository to provide feedback, whether it is bug reports, questions, or enhancement suggestions.