Building Better Apps: Better Together
Helping you build better apps has been one of our key focus areas in Azure. Our latest tooling focuses on providing guidance for architecting, optimizing, and deploying apps. Whether you’re creating a new proof of concept or improving an existing app, these capabilities can boost productivity and performance. These capabilities are all in Preview, so please give them a try and let us know what you think!
Starting Right: Architecting Your Azure App
Let’s say you’re starting a proof of concept for a new application. Normally, you might spend a lot of time picking services, architecting the app, and deploying it based on industry best practices. Better Together can streamline this process with the capabilities below.
Better Together in Microsoft Copilot for Azure
The Better Together capability, accessible from Copilot, can help you understand whether you’re on the right track when building your app. In the past, it might have been time-consuming to learn through docs and videos about the kinds of services that similar apps are using. This capability streamlines some of that process by recommending services based on patterns that other, similar apps have used.
To give this a try, navigate to the Azure Portal and select the Copilot button in the toolbar to open the chat window. Here you can ask questions to get recommended services for your app or architecture, such as “What are popular services that are deployed with App Service apps like mine?”, “Which database should I use with my ACA app?”, and “What services would you recommend to implement distributed caching?”
Sometimes it’s important to validate if you’re on the right track. When you ask architectural or infrastructure-level questions to Azure Copilot, it helps you discover the most commonly used services for your specific use case. In the example below, after identifying performance bottlenecks in your app and considering implementing distributed caching to enhance performance, the recommendation points to Azure Cache for Redis. This service is widely deployed by many App Service apps similar to yours.
Boosting Performance: Optimizing Your Azure App
If your App Service app is running a little slower than expected, or if you’re suspecting any performance bottlenecks, these are some capabilities that can diagnose and optimize these problems.
Diagnostics Insights (Preview)
Diagnostic logs can return pages of information that are difficult to interpret. This capability makes it easier to spot anomalies and quickly identify bottlenecks. In the Azure Portal, you can efficiently evaluate your application’s CPU usage and track any anomalies by navigating to Diagnose & Solve Problems > Web App Slow. Within this section, you’ll find a chart that provides insights into performance and latency.
Notably, over the last 24 hours, approximately 90% of users accessing this web app experienced low latency.
Another way to access suggestions is to type in “my web app is slow” into Copilot for Azure, which will offer suggestions around any bottlenecks.
Diagnostic charts can sometimes be time-consuming to analyze. However, Copilot offers a helpful Summarization capability. When you input variations of “summarize this page,” Copilot will generate concise summaries of the insights, allowing you to quickly grasp the main points without having to read through every chart and detail.
Application Insights Code Optimizations (Preview)
Performance can be improved by making code-level changes. Code Optimizations helps identify where to make these improvements. By leveraging AI, Code Optimizations detects CPU and memory bottlenecks of your application during runtime. It is available for .NET applications that have Application Insights Profiler enabled. To access Code Optimizations in the Azure Portal, navigate to the Performance blade in Application Insights. For App Service, it’s also available in Diagnose & Solve Problems > Web App Slow.
In this example, some of the performance issues identified may be caused by inefficient code, which can be investigated.
Selecting any of these suggestions will open more details about the performance issue, show where and when in the code it’s occurring, and show the recommended solution.
For many recommendations, a code fix can be generated using the Code Optimizations extension (currently in limited preview) for Visual Studio and Visual Studio Code – Insiders. You can sign up here.
Learn more about Code Optimizations.
Making Improvements: Augmenting Your Azure App
If you have deployed an App Service app and you’re unsure which services to use to improve scalability and reliability for it, these capabilities can help optimize without reinventing the wheel.
Better Together (Preview) in Azure Portal
It can be time-consuming to pick, create, deploy, and connect a service to your App Service app. Better Together can help you deploy and connect popular services for your App Service app. This capability primarily focuses on connecting newly-created resources to your App Service app more easily. Navigate to Better Together for your App Service app through the Azure Portal using the menu item Better Together.
Enabling Azure Cache for Redis will automatically create a new Redis instance and establish the connection with your existing App Service app. If you choose to “Create” any of the other services, you’ll be directed to their onboarding flow, where you’ll receive guidance on creating and connecting the service. Stay tuned for the next release for a more customized experience!
Take a look at these capabilities in action with the video below.
Conclusion: Better Together
Azure strives to empower you to create robust, high-performing apps. Whether you’re starting a new app or improving an existing one, we are creating tools and services that can help. Please give these capabilities a try and let us know what you think by leaving a comment or emailing us at bettertogetherteam@microsoft.com.
Microsoft Tech Community – Latest Blogs –Read More
Deep Dive: Secure Orchestration of Confidential Containers on Azure Kubernetes Service
Introduction
Building on our previous blog post about Confidential Containers on Azure Kubernetes Service (AKS) powered by Azure Linux, this blog post dives into the design and implementation of the stack’s security policy. The security policy feature is a critical building block for the trustworthy orchestration of confidential Kubernetes workloads on IaaS platforms. The feature protects the interface between the cloud provider’s stack and the user’s trusted computing base (TCB). The user’s confidential workloads run inside the TCB within virtual machines (VMs) which are encrypted by a hardware-based Trusted Execution Environment (TEE), such as AMD Secure Encrypted Virtualization-Secure Nested Paging (SEV-SNP). Trust in the security policy and its enforcement can be established via remote attestation. We will explore establishing this trust and how end users can generate and apply security policies using our new genpolicy tool.
Protecting the Trust Boundary Interface
One of the main components of the Kata Containers system architecture is the Kata Agent, which we will refer to as Agent. When using Kata Containers to implement Confidential Containers, the Agent is executed inside the hardware-based TEE and therefore is part of the TCB. As shown in Figure 1, the Agent provides a set of ttRPC APIs allowing the system components outside of the TEE, i.e., the Kata Shim, to create and manage Kubernetes pods inside confidential VMs (CVMs) transparently to the Kubernetes stack. From a confidentiality standpoint, the Kata Shim to Agent communication represents a control channel crossing the TCB boundary, which is why the Agent must protect itself from potentially malicious Agent API calls.
To systematically secure this control channel, we designed and implemented a security policy feature for the Kata Containers project, known as the Kata “Agent Policy” feature. This feature allows the owner of a confidential pod deployment to specify a document articulating the security policy prior to running the pod. This policy document dictates which API calls are allowed and disallowed for the pod.
The policy document can be added in the form of an encoded string as an annotation to Kubernetes pod manifests, allowing the policy document to naturally travel through kubelet and containerd to the Kata Shim, which we will refer to as Shim. The Shim then provides the policy document to the Agent during early CVM initialization. Since the policy document travels through components that are not part of the TCB prior to reaching the Agent, the policy is not inherently trustworthy at CVM initialization. We can establish trustworthiness through remote attestation which will be explained in an upcoming section.
Structure of the Security Policy Document
The security policy document is composed using the Rego policy language and describes all the Agent’s ttRPC API calls along with their parameters that are expected for creating and managing the confidential pod. This section takes a closer look at the three high-level sections of the policy document – the rules, default values and data sections.
Rules
The rules section is a static part of the policy document, independent of the individual pod deployment. Rules express the semantics for validating API calls, and in particular implement input parameter validation for parametrized calls. An example for a simple rule is the one for the unparametrized WriteStreamRequest call which explicitly enforces that the call can only be made if the policy document’s default value for the call is set to true:
WriteStreamRequest {
    policy_data.request_defaults.WriteStreamRequest == true
}
Let’s now look at a rule for the parametrized CreateContainerRequest call which implements input parameter validation:
CreateContainerRequest {
    i_oci := input.OCI
    i_storages := input.storages
    …
    some p_container in policy_data.containers
    p_pidns := p_container.sandbox_pidns
    i_pidns := input.sandbox_pidns
    p_pidns == i_pidns
    p_oci := p_container.OCI
    p_oci.Version == i_oci.Version
    p_oci.Root.Readonly == i_oci.Root.Readonly
    …
    p_storages := p_container.storages
    …
}
This rule validates all input parameters by comparing them with the expected parameters based on the document’s data section and rejects when a change to fields like the command line, storage mount, execution security context, or environment variables is detected. In the code snippet, the variables starting with “i_” are the input parameters whereas the variables starting with “p_” represent the expected values based on the policy document’s data section.
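As a rough Python analogue of the rule above (this is an illustrative sketch, not the actual Kata Agent or Rego evaluation code), the validation amounts to checking whether some expected container in the policy's data section matches the request's input parameters; only a few representative fields are checked here:

```python
# Illustrative sketch: validate a CreateContainerRequest input against the
# expected containers from the policy's data section, mirroring the Rego
# rule's "i_" (input) vs. "p_" (policy) comparisons.

def create_container_allowed(input_req, policy_data):
    i_oci = input_req["OCI"]
    for p_container in policy_data["containers"]:
        p_oci = p_container["OCI"]
        if (p_container["sandbox_pidns"] == input_req["sandbox_pidns"]
                and p_oci["Version"] == i_oci["Version"]
                and p_oci["Root"]["Readonly"] == i_oci["Root"]["Readonly"]):
            return True   # all checked fields match an expected container
    return False          # no expected container matches: reject the call
```

A request whose fields diverge from every expected container (for example, a tampered security context or command line) falls through the loop and is rejected.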
Default Values
Default values for API calls determine the behavior when no rule for a given call was positively evaluated:
default CreateContainerRequest := false
The default value of false means that any CreateContainer API call will be rejected unless a set of policy rules explicitly allows that call.
default GuestDetailsRequest := true
The default value of true means that calls from outside of the TEE to the GuestDetailsRequest API are always allowed to be executed. One would set this default value to true when the data returned by this API is not considered sensitive for confidentiality of the workloads.
Data
The data section contains expected values that are derived from a Kubernetes pod manifest and that are compared during policy rule evaluation with the actual values from the input parameters of a ttRPC API request. With this, the data section directly depends on the individual pod deployment with its containers. Based on the result of the comparison between the values, a rule can either allow or deny the call by returning true or false.
Coming back to the above rule for CreateContainerRequest, all the characteristics of a container are specified in a fine-granular way in the policy document’s data section: image integrity information, command line, storage volumes and mounts, the execution security context, environment variables, and other fields from the Open Container Initiative (OCI) container runtime configuration. An example for the command line section is the following:
policy_data := {
    "containers": [
        {
            "OCI": {
                …
                "Args": [
                    "/bin/sh"
                ],
                …
            },
            …
        },
        …
Any diverging command line observed in the CreateContainerRequest for the given container will be rejected by policy. Another example is for the validation of the storages input field of the CreateContainerRequest:
policy_data := {
    "containers": [
        {
            "OCI": {
                …
            },
            "storages": [
                {
                    "driver": "blk",
                    "driver_options": [],
                    "source": "",
                    "fstype": "tar",
                    "options": [
                        "$(hash0)"
                    ],
                    "mount_point": "$(layer0)",
                    "fs_group": null
                },
                …
This example shows how the security policy constrains the way block devices can be mapped from the host into the CVM. In this example, a tar filesystem type block device is expected to be mapped to a certain mount point into the CVM.
Policy Enforcement in the Kata Agent
The Agent is responsible for enforcing the security policy by evaluating the policy for each Agent ttRPC API call. We implemented the enforcement of the security policy using the Open Policy Agent (OPA) – a graduated project of the Cloud Native Computing Foundation (CNCF). Before carrying out the actions corresponding to the API, the Agent queries OPA by using the OPA REST API to check if the policy rules and data allow or block the call. The Agent provides the policy document and all input data from the API request parameters as a JSON format representation to OPA. OPA uses the rules to check if the inputs are consistent with policy data. OPA tries to find at least one rule with the same name as the ttRPC API call to return true while considering the call’s potential input parameters.
For example, when the Agent receives a CreateContainerRequest call, any rules defined in the policy that are using the name CreateContainerRequest are evaluated. OPA evaluates these rules and tries to find at least one CreateContainerRequest rule that returns value true. If at least one CreateContainerRequest rule returns true, OPA returns a true result to the Agent, and the Agent creates the container as requested by the Shim. On the other hand, if the API inputs are not allowed by the document’s rules or if no rule exists, OPA returns the default value for that API to the Agent, or false when no default value is supplied. In the case false is returned, the Agent rejects the API call by returning a “blocked by policy” error message.
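As a hypothetical sketch (the real enforcement happens inside OPA, queried over its REST API, not in Python), the decision flow just described can be summarized as follows:

```python
# Hypothetical sketch of the allow/deny decision: evaluate every rule that
# shares the API's name; if at least one returns true the call is allowed.
# Otherwise fall back to the policy's default value for that API, or deny
# when no default value is supplied.

def evaluate_policy(api_name, input_req, rules, defaults):
    for rule in rules.get(api_name, []):   # rules: API name -> predicates
        if rule(input_req):
            return True                    # one allowing rule is sufficient
    return defaults.get(api_name, False)   # no rule matched: use the default
```

For example, with a default of true for GuestDetailsRequest, that call is allowed even without any matching rule, while an API with no rules and no default is denied.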
We achieved this behavior by adding a gate to the Agent’s RPC interface implementation for each call. We added the is_allowed() function call early in every call handler:
async fn exec_process(…) -> ttrpc::Result<Empty> {
    …
    is_allowed(&req).await?;
    …
}
The function enforces above-described logic and can be found in the Agent policy implementation.
An important policy enforcement aspect of the CreateContainerRequest call is the Agent’s protection of the integrity of block devices, as described in the example for the storages input field of the CreateContainerRequest from the previous section and replicated below.
policy_data := {
    "containers": [
        {
            "OCI": {
                …
            },
            "storages": [
                {
                    "driver": "blk",
                    "driver_options": [],
                    "source": "",
                    "fstype": "tar",
                    "options": [
                        "$(hash0)"
                    ],
                    "mount_point": "$(layer0)",
                    "fs_group": null
                },
                …
As each container image layer is exposed as a read-only virtio block device to the CVM, the Agent protects the integrity of these block devices using the dm-verity technology of the CVM’s Linux kernel, enforcing the root value of the dm-verity hash tree through policy enforcement. The policy document’s data section contains the expected root value of the dm-verity hash tree for each container image layer (hash0 in the above example). These root values are verified at runtime by the Agent, which calls OPA to compare the received input values with the expected values using the rule semantics defined by the policy document. With this, not only the security policy enforcement but also the integrity of the container image layers can be verified by remote attestation, as described next.
Security Policy and Remote Attestation
Before handling sensitive information, confidential workloads should perform remote attestation to prove to any relying party that exactly the desired workload with the user’s desired policy, using exactly the expected versions of the TEE, and of the CVM’s software stack has been orchestrated by the control plane.
Figure 2 depicts the confidential container creation flow starting with a user deploying a pod manifest to running the workload in the CVM. The pod manifest depicted in orange color reaches the Shim which in turn brings up the CVM with the help of the VMM and HV. The Shim uses the CreateVM call the VMM exposes through its API.
Before triggering this call, the Shim computes the SHA256 hash of the user-provided policy document that the VMM uses to set a field measured by the TEE. In the case of AMD SEV-SNP, the VMM sets the HOST_DATA field to the hash value which the AMD SEV-SNP TEE includes in the attestation evidence. This action creates a strong binding between the contents of the policy and the CVM. This TEE field cannot be modified later by the software executed inside or outside of the CVM. However, it is readable within the TEE after launch.
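This binding can be illustrated with a short sketch (the policy text below is a placeholder; the real HOST_DATA value is set by the VMM at CVM launch):

```python
import hashlib

# Sketch of the measurement binding described above: the Shim computes the
# SHA-256 digest of the policy document, and the VMM places it in a
# TEE-measured field (HOST_DATA on AMD SEV-SNP). Inside the CVM, the same
# digest can later be recomputed over the received policy and compared.

def policy_digest(policy_text: str) -> str:
    return hashlib.sha256(policy_text.encode("utf-8")).hexdigest()

host_data = policy_digest("package agent_policy ...")  # measured at launch
received = policy_digest("package agent_policy ...")   # recomputed in the CVM
assert received == host_data  # a mismatch means the policy was tampered with
```

Because SHA-256 is collision-resistant, any modification of the policy in transit changes the digest and is caught by this comparison.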
As the Shim launches the CVM and the CVM OS boots, the Agent starts up using an initial security policy that is included in the CVMs root file system. This initial security policy only allows the Shim to set a new policy document through the SetPolicyRequest ttRPC call once the Agent’s ttRPC interface becomes available. Upon receiving the policy from the Shim, the Agent verifies that the hash of the policy matches the value in the immutable TEE field. The Agent rejects the incoming policy if it detects a hash mismatch. If the hash matches, the Agent enforces the new policy and listens for ttRPC calls. After the Agent receives and validates the Shim’s CreateContainerRequest call, the Agent creates the workload container pertaining to the user’s pod manifest.
The remote attestation procedure can be implemented in different ways. One option is to implement it in a container running inside the CVM that obtains the signed attestation evidence from the AMD SEV-SNP TEE. With the policy hash being part of one of the measured TEE fields above, the attestation service can verify the integrity of the security policy by comparing the value of this field with the expected hash of the pod policy that was preconfigured by the user.
Microsoft’s Azure Attestation (MAA) provides an end-to-end attestation solution for workloads in Azure. We have added support for Confidential Containers on AKS to MAA by utilizing the open-source confidential sidecar container as the attestation client. So, MAA just needs to be seeded with relevant policy measurements for confidential pods to enable remote attestation.
Policy Document Creation using the genpolicy Tool
To simplify creating the policy document for container workloads, we built the genpolicy tool to automate the generation of the security policy document with its policy data, rules, and default values derived from the users’ individual Kubernetes pod manifests. The genpolicy tool encodes the security policy document in base64 format and adds it to the Kubernetes pod manifest as an annotation. An example is a pod manifest for Confidential Containers on AKS where the given runtimeClassName field indicates that the pod is to be run as a confidential container:
apiVersion: v1
kind: Pod
metadata:
  annotations:
    io.katacontainers.config.agent.policy: cGFja2FnZSBhZ2VudF<…>
spec:
  runtimeClassName: kata-cc-isolation
…
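The relationship between the policy text and the annotation value can be sketched as follows (the policy snippet is illustrative, not the full document that genpolicy emits):

```python
import base64

# The Rego policy document is base64-encoded and stored under the
# io.katacontainers.config.agent.policy annotation; decoding the annotation
# (e.g., with "base64 -d") recovers the original policy text.

policy_text = "package agent_policy\n\ndefault CreateContainerRequest := false\n"
annotation_value = base64.b64encode(policy_text.encode()).decode()

# round-trip: decoding the annotation yields the policy document again
assert base64.b64decode(annotation_value).decode() == policy_text
```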
The annotation value can be decoded using "base64 -d", revealing the set of default values, rules, and data, for example:
…
# default values for API calls
default CopyFileRequest := false
…
default ExecProcessRequest := false
…
# rules for API calls
CreateContainerRequest { … }
…
CreateSandboxRequest { … }
…
WriteStreamRequest { … }
…
# data, for instance listing the pod’s containers and fields
policy_data := {
    "containers": [
        {
            "OCI": {
                "Version": "1.1.0-rc.1",
                …
            }
To generate the policy, run the following command:
genpolicy -y <path/to/pod.yaml>
This will embed the policy into the pod YAML file. The pod manifest can then be deployed as normal onto a cluster supporting confidential containers, for instance, using:
kubectl apply -f <path/to/pod.yaml>
If any policy violations are detected, the Agent will refuse to execute the relevant ttRPC call, resulting in the following failure when using kubectl describe pod:
Error: failed to create containerd task: failed to create shim task: “CreateContainerRequest is blocked by policy”
Users should review the auto-generated policy document and verify that the policy fits the desired confidentiality goals and modify the policy as needed. To change the behavior of the tool, the user can specify further parameters:
genpolicy -p <path/to/rules.rego> -j <path/to/genpolicy-settings.json> -y <path/to/pod.yaml>
Using these parameters, the policy’s default values and rules and data fields can be modified by supplying custom rules.rego and settings JSON files. More details and examples are provided in the upstream Kata Agent policy documentation.
To simplify genpolicy usage in Azure, the Azure CLI ‘confcom’ extension wraps the latest releases of the genpolicy tool to enable end users to generate pod security policies via the Azure CLI, which is as simple as calling:
az confcom katapolicygen -y <path/to/pod.yaml>
An end-to-end example starting with cluster deployment and running a confidential container with attached security policy can be found in our confidential container deployment documentation.
Conclusion
We have walked through the security policy of our Confidential Containers on AKS offering – from the syntax of the policy file to the enforcement with OPA, to establishing trust with remote attestation, and how to automatically generate and embed the policy using our genpolicy tool. The Azure Linux team collaborated with the Confidential Containers and Kata Containers communities on the design and implementation of Confidential Containers, as part of Microsoft’s commitment to open source. We contributed the policy implementation upstream – the Agent code responsible for enforcing the security policy, the Shim and Agent code for setting the policy and reading its measured hash value with different VMMs and HVs for AMD SEV-SNP and Intel TDX, and the genpolicy tool to create the security policy document. Along with this, a how-to for the policy feature and a README for the genpolicy tool can be found upstream. We will continue to contribute and expand the security policy implementation upstream with the Kata Containers and Confidential Containers communities, so join us there to build this feature with us.
A Serious Bug in Windows Explorer
When an icon (thumbnail) for a picture or video file should be displayed, it can’t be displayed and Windows Explorer keeps loading. When you right-click on such a file, Windows Explorer freezes.
Two separate Outlook instances on Android/iPhone without joining MDM?
I’m looking for a way to maintain separate instances of Outlook on my Android/iPhone device – one for work-related emails and another for personal emails. Our organization uses Microsoft Intune for managing work applications, and I would like to use the Outlook app for both work and personal accounts without mixing the data.
Is it possible to configure two distinct Outlook instances on a single device to keep work and personal emails separate? If so, could you provide guidance on how to set this up, especially in the context of using Mobile Application Management (MAM) policies to secure work data without enrolling the device in Mobile Device Management (MDM)?
Teams website tabs not displaying
Hello
Please i need your help on this issue.
Figure 1: New Teams website app link to compliance wire
Figure 2: New Teams website app link to compliance wire after submitting log-in credentials
In Classic Teams, when I log in to ComplianceWire, I am able to do so successfully.
Figure 3: Classic Teams Compliance Wire Log-in
The sites do not work in the new Teams, but they work in classic Teams.
Migration of on-prem file server to Azure cloud. Trying to avoid domain authentication
I currently have an on-prem environment. We are wanting to migrate to Azure cloud. We have a file server on-prem that is running a Linux Samba server to share files with users. What are the best ways to migrate this to the Azure cloud environment? I do not want this to be part of a domain; I want it to be a file share. I am trying to avoid users having to log in to access the files. If there is no way to avoid the login process, can I keep a server on-prem and have the Azure cloud environment access the files through the local environment firewall? Please suggest multiple ways to accomplish this task.
KQL help Exchange Online
Hello,
I need help in building a KQL query as I’m fairly new to this. I have two sets of keywords like
Set 1 = "A", "B", "C"
Set 2 = "1", "2", "3"
I want a KQL query that matches any combination of those two sets. I have tried
("A" OR "B" OR "C") AND ("1" OR "2" OR "3") but that does not seem to work.
Many Greetings
Erik
Reduction of Password Prompts with Intune Enrolled Phone
My company is transitioning from our current MDM to Intune, while at the same time moving our mailboxes from on-prem to Exchange Online. Our current MDM requires no passwords from our users after initial enrollment of their mobile devices (both BYOD and company owned) thanks to Kerberos-based authentication, whose tokens persist even after passwords change or expire. In our testing using Intune enrollment and MFA using Authenticator, we found that users are still prompted to enter their password within Outlook when passwords change for an EXO mailbox and Entra ID / Azure account.
This is even true when enabling Passwordless Authentication, which is branded as eliminating the need for passwords (https://learn.microsoft.com/en-us/entra/identity/authentication/concept-authentication-passwordless) yet still, users are prompted when passwords change and they are attempting to access company mail in Outlook for iOS on an Intune enrolled device for example.
This can present itself as a pain point for our users, who are used to not having to enter their password on their mobile devices, or on their desktops using Windows Hello, for example. Moreover, after years of the security industry pushing password complexity and frequent password changes, such policies are now considered a gaping security risk, since users will circumvent or resist draconian password policies with incredibly simple passwords. Sadly, Passwordless Authentication on a mobile device, even with the added bonus of Face ID / biometrics, still doesn’t eliminate having to retype a password when a password changes.
Per everything I’ve read and also discussions with some within Microsoft, it appears there is no way around this. Certificate Based Authentication (https://learn.microsoft.com/en-us/entra/identity/authentication/how-to-certificate-based-authentication) with a PKI issuing certs to users that are deployed to devices via Intune may provide some relief, but we can’t know for sure how this plays out in password change scenarios and don’t have the luxury of testing this without a considerable amount of work even in a test / dev capacity.
For those out there using CBA, does CBA indeed make it so that password changes won’t require users to retype their passwords to re-authenticate, and is the cert fully trusted in those instances? My concern is that it could be a similar scenario to Passwordless Authentication with Authenticator, where the cert is commonly used as a credential but doesn’t override the requirement to occasionally enter the password when password changes occur, causing tokens to expire. But if CBA does indeed eliminate the requirement for entering passwords, it’s something we will seriously consider.
Pre-fill Responses in Your Microsoft Forms
We are excited to share that Microsoft Forms now supports pre-filled links, making your data collection process more efficient and improving data accuracy. This feature not only allows you to set default answers for your questions but also empowers you to strategize how you would like the responses categorized. To help you better understand how to leverage this new feature, let’s try it together with an online training feedback survey. You can also try to pre-fill a form from this template.
Imagine your company conducted three online training sessions for participants in different time zones: Asia, Europe, and North America, each with a different lecturer. To streamline the process and avoid creating separate feedback forms for each session, you decide to use Forms pre-filled links to consolidate all feedback into a single form.
Find the pre-fill link from “…” icon
After creating your feedback survey, click on the “…” icon in the upper right corner and select “Get Pre-filled URL” to start setting your pre-filled answers.
Set pre-filled answers
Before setting pre-filled answers, you need to first activate “Enable pre-filled answers” in the top section of the form. After that, you can proceed to select the pre-filled answers. In this case, the pre-filled answers would be the session participated in and the lecturer’s name.
Send out the pre-fill link to different audiences
Once you’ve finished setting up the pre-filled answers, you can click the “Get Pre-filled link” button at the bottom of the form to copy the URL for distribution. In this scenario, since you have three different sessions and lecturers, you’ll need to generate three different links with different pre-filled answers before sending the form to the corresponding audiences.
Recipients open the survey with pre-filled answers
When participants who attended the Asia session open the survey, they will see that “Asia session” and “John Wang” have already been selected. They can then proceed to answer the remaining questions and submit the form.
Here are two additional real-life use cases to provide inspiration on how this feature can benefit you:
End-of-semester university course evaluations: fields such as course name and instructor name can be pre-filled to track feedback from multiple courses in one form.
Customer feedback survey: pre-fill fields like employee name, service period, and department.
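If you have many sessions, a small script can stamp the links out for you. The sketch below is purely illustrative: the base URL and the query-parameter names are hypothetical placeholders (real pre-filled URLs come from the “Get Pre-filled URL” dialog in Forms), and the Europe and North America lecturer names are invented for the example.

```shell
# Hypothetical sketch: generate one pre-filled link per session.
# The base URL and query parameters are placeholders, not the real Forms format.
base_url="https://forms.office.com/r/EXAMPLE_FORM_ID"

declare -A lecturers=(
  ["Asia"]="John Wang"            # from the example above
  ["Europe"]="Maria Rossi"        # invented for illustration
  ["North America"]="Alex Smith"  # invented for illustration
)

links=()
for session in "${!lecturers[@]}"; do
  # URL-encode spaces in the pre-filled answers
  enc_session="${session// /%20}"
  enc_lecturer="${lecturers[$session]// /%20}"
  links+=("${base_url}?session=${enc_session}&lecturer=${enc_lecturer}")
done

printf '%s\n' "${links[@]}"
```

Each audience then receives only the link whose pre-filled answers match its session.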
Microsoft Tech Community – Latest Blogs –Read More
Automate AKS Deployment and Chaos Engineering with Terraform and GitHub Actions
Azure Chaos Studio is a fully managed chaos engineering platform that helps you identify and mitigate potential issues in your applications before they impact customers. It enables you to intentionally introduce faults and disruptions to test the resilience and robustness of your systems. By using Chaos Studio, you can uncover hard-to-find problems in your applications, from late-stage development through production, and plan mitigations to improve overall system reliability.
The provided GitHub Action workflows demonstrate a comprehensive approach to automating the deployment and management of an AKS (Azure Kubernetes Service) cluster using Terraform, as well as deploying Chaos Mesh experiments and the Azure Vote service within the AKS cluster. These workflows streamline the infrastructure management process by integrating directly with GitHub, enabling seamless updates and deployments based on code changes or manual triggers. By leveraging GitHub Actions, Azure, and Kubernetes, these workflows ensure a robust, automated pipeline for maintaining and testing the resilience of applications deployed in the AKS environment.
Automating AKS with Terraform
To automate the deployment and management of an Azure Kubernetes Service (AKS) cluster, I utilized Terraform with the AKS module provided by Azure. This module simplifies the process by abstracting many of the complex configurations needed to set up and manage an AKS cluster.
In the Terraform configuration, I specified the AKS module with the latest version at the time, ensuring compatibility with the latest features and updates. The configuration began by defining essential parameters, such as the resource group name, Kubernetes version, and admin username. Automatic patch upgrades were enabled to ensure the cluster remains updated with the latest patches.
The cluster was configured to use virtual machine scale sets for agent nodes, with a specific node size and a range of nodes to accommodate varying workloads. Custom Linux OS configurations were applied to the agent nodes, enhancing their performance and security settings.
To enhance security, the API server was restricted to authorized IP ranges, including both public and private IP addresses of a bastion host and additional CIDR ranges. Integration with Azure Container Registry (ACR) was facilitated by attaching the ACR ID to the AKS cluster, enabling seamless container management.
Advanced features such as Azure Policy, auto-scaling, and HTTP application routing were enabled to improve cluster governance, scalability, and traffic management. User-assigned managed identities were employed for secure access control, and key management services (KMS) were enabled to secure sensitive data using Azure Key Vault.
Network settings were carefully configured, including DNS service IP, service CIDR, network plugin, and policy settings, ensuring robust network management and security. Role-based access control (RBAC) was enabled and managed through Azure Active Directory (AAD) to streamline user and group management.
Additional features such as log analytics, maintenance windows, and secret rotation were configured to enhance cluster monitoring, maintenance, and security. Tags and labels were added to agent nodes for better organization and resource management.
By defining these configurations in Terraform, the AKS deployment process was automated, making it reproducible and manageable through code. This approach not only reduced manual intervention but also ensured consistency and reliability in the AKS infrastructure.
Note: The code provided below is for exhibit purposes only and may be outdated at the time of writing. This code was used solely in a demo environment to illustrate the automation of an Azure Kubernetes Service (AKS) cluster/Chaos Mesh using the AKS module in Terraform. While the configuration showcases a comprehensive setup, including security, scalability, and management features, it is essential to review and update the code according to the latest Azure and Terraform best practices and versions when implementing it in a production environment. The exhibit is intended to serve as an educational example and may require modifications to align with current standards and specific use cases.
module "aks" {
  source  = "Azure/aks/azurerm"
  version = "7.4.0"

  prefix                    = random_id.aks.hex
  resource_group_name       = azurerm_resource_group.aks.name
  kubernetes_version        = "1.27" # don't specify the patch version!
  admin_username            = "azureuser"
  automatic_channel_upgrade = "patch"

  agents_availability_zones = ["1"]
  agents_count              = null
  agents_max_count          = var.agents_max_count
  agents_max_pods           = 75
  agents_min_count          = var.agents_min_count
  agents_size               = "Standard_D2s_v3"
  agents_pool_name          = "testnodepool"
  agents_type               = "VirtualMachineScaleSets"

  agents_pool_linux_os_configs = [
    {
      transparent_huge_page_enabled = "always"
      sysctl_configs = [
        {
          fs_aio_max_nr               = 65536
          fs_file_max                 = 100000
          fs_inotify_max_user_watches = 1000000
        }
      ]
    }
  ]

  api_server_authorized_ip_ranges = concat(
    [
      "${azurerm_linux_virtual_machine.bastion.public_ip_address}/32",
      "${azurerm_linux_virtual_machine.bastion.private_ip_address}/32",
      "REDACTED"
    ],
    var.chaos_studio_cidr_ranges
  )

  attached_acr_id_map = {
    example = azurerm_container_registry.aks.id
  }

  azure_policy_enabled             = true
  auto_scaler_profile_enabled      = true
  auto_scaler_profile_expander     = "least-waste"
  enable_auto_scaling              = true
  http_application_routing_enabled = true

  identity_ids  = [azurerm_user_assigned_identity.aks_mid.id]
  identity_type = "UserAssigned"

  ingress_application_gateway_enabled = false
  # ingress_application_gateway_id          = azurerm_application_gateway.aks_appgw.id
  # ingress_application_gateway_subnet_cidr = "10.52.1.0/24"

  key_vault_secrets_provider_enabled = true
  kms_enabled                        = true
  kms_key_vault_key_id               = "https://${azurerm_key_vault.aks_kv.name}.vault.azure.net/keys/${azurerm_key_vault_key.aks_key.name}/${azurerm_key_vault_key.aks_key.version}"
  local_account_disabled             = false

  log_analytics_workspace_enabled      = true
  cluster_log_analytics_workspace_name = random_id.aks.hex
  microsoft_defender_enabled           = false

  maintenance_window = {
    allowed = [
      {
        day   = "Sunday",
        hours = [22, 23]
      },
    ]
    not_allowed = [
      {
        start = "2024-01-01T20:00:00Z",
        end   = "2024-01-01T21:00:00Z"
      },
    ]
  }

  net_profile_dns_service_ip = "10.0.0.10"
  net_profile_service_cidr   = "10.0.0.0/16"
  network_plugin             = "azure"
  network_policy             = "azure"

  os_disk_size_gb               = 60
  private_cluster_enabled       = false
  public_network_access_enabled = true

  rbac_aad                          = true
  rbac_aad_managed                  = true
  role_based_access_control_enabled = true
  rbac_aad_admin_group_object_ids   = [azuread_group.aks_admins.object_id]

  secret_rotation_enabled = true
  sku_tier                = "Standard"

  storage_profile_blob_driver_enabled = true
  storage_profile_enabled             = true

  temporary_name_for_rotation = "a${random_string.aks_temporary_name_for_rotation.result}"
  vnet_subnet_id              = azurerm_subnet.aks.id

  agents_labels = {
    "Agent" : "agentLabel"
  }
  agents_tags = {
    "Agent" : "agentTag"
  }

  depends_on = [
    azurerm_subnet.aks,
  ]
}
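One detail worth highlighting: api_server_authorized_ip_ranges is built with Terraform's concat(), joining the bastion host's public and private /32 addresses with the extra Chaos Studio CIDR ranges. The bash sketch below mirrors that composition with placeholder addresses; the real values are resolved by Terraform at plan time, so everything here is illustrative only.

```shell
# Mirror the concat() used for api_server_authorized_ip_ranges:
# bastion public/private IPs as /32 entries, plus extra CIDR ranges.
# All addresses below are placeholders for the Terraform-resolved values.
bastion_public_ip="203.0.113.10"
bastion_private_ip="10.52.0.4"
chaos_studio_cidr_ranges=("20.42.0.0/16" "52.236.0.0/16")  # placeholder ranges

authorized_ip_ranges=("${bastion_public_ip}/32" "${bastion_private_ip}/32")
authorized_ip_ranges+=("${chaos_studio_cidr_ranges[@]}")

printf '%s\n' "${authorized_ip_ranges[@]}"
```

Restricting the API server to this allow-list means only the bastion host and the Chaos Studio ranges can reach the control plane.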
Automating AKS with GitHub Actions
The provided GitHub Action workflow automates the deployment of an Azure Kubernetes Service (AKS) cluster using Terraform. This workflow is triggered on two conditions: when changes are pushed to the main branch within the terraform directory, or manually through a workflow dispatch event. The manual trigger allows users to specify the desired Terraform operation (plan, apply, or destroy) through an input parameter. This flexibility enables users to review changes, apply the infrastructure configuration, or tear it down as needed.
The workflow defines a single job named ‘Terraform’ that runs on the latest Ubuntu environment. It sets up necessary environment variables using secrets for secure authentication with Azure. The steps include checking out the repository, setting up the specified version of Terraform, and initializing Terraform with backend configuration sourced from environment variables. The workflow then validates the Terraform configuration to ensure correctness. Depending on the trigger, it proceeds to execute the appropriate Terraform command: plan to review the changes, apply to deploy the infrastructure, or destroy to remove it. This automation streamlines the management of the AKS cluster, ensuring consistent and reproducible deployments.
on:
  push:
    branches: [main]
    paths:
      - 'terraform/**'
  workflow_dispatch:
    inputs:
      terraform_operation:
        description: "Terraform operation: plan, apply, destroy"
        required: true
        default: "plan"
        type: choice
        options:
          - plan
          - apply
          - destroy

name: Deploy AKS Cluster

jobs:
  terraform:
    name: 'Terraform'
    runs-on: ubuntu-latest
    env:
      ARM_CLIENT_ID: ${{ secrets.ARM_CLIENT_ID }}
      ARM_CLIENT_SECRET: ${{ secrets.ARM_CLIENT_SECRET }}
      ARM_SUBSCRIPTION_ID: ${{ secrets.ARM_SUBSCRIPTION_ID }}
      ARM_TENANT_ID: ${{ secrets.ARM_TENANT_ID }}
      GITHUB_TOKEN: ${{ secrets.GH_TOKEN }}
      TF_VERSION: 1.6.1
    defaults:
      run:
        shell: bash
        working-directory: ./terraform
    steps:
      - name: Checkout
        uses: actions/checkout@v4

      - name: Setup Terraform
        uses: hashicorp/setup-terraform@v3
        with:
          terraform_version: ${{ env.TF_VERSION }}

      - name: Terraform Init
        id: init
        run: |
          set -a
          source ../.env.backend
          terraform init \
            -backend-config="resource_group_name=$TF_VAR_state_resource_group_name" \
            -backend-config="storage_account_name=$TF_VAR_state_storage_account_name"

      - name: Terraform Validate
        id: validate
        run: terraform validate -no-color

      - name: Terraform Plan
        id: plan
        run: terraform plan -no-color
        if: ${{ github.event_name == 'workflow_dispatch' && github.event.inputs.terraform_operation == 'plan' || github.event_name == 'push' }}

      - name: Terraform Apply
        id: apply
        run: terraform apply -auto-approve
        if: ${{ github.event_name == 'workflow_dispatch' && github.event.inputs.terraform_operation == 'apply' }}

      - name: Terraform Destroy
        id: destroy
        run: terraform destroy -auto-approve
        if: ${{ github.event.inputs.terraform_operation == 'destroy' }}
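A detail worth calling out in the Terraform Init step: set -a turns on bash's allexport mode, so every variable assigned while sourcing .env.backend becomes an exported environment variable that the terraform process can read. Here is a self-contained illustration of the pattern, using a throwaway file in place of the repository's .env.backend:

```shell
# Demonstrate the `set -a` + `source` pattern from the Terraform Init step.
# A temporary file stands in for the repository's .env.backend.
envfile="$(mktemp)"
cat > "$envfile" <<'EOF'
TF_VAR_state_resource_group_name=rg-tfstate
TF_VAR_state_storage_account_name=sttfstate001
EOF

set -a            # auto-export every variable assigned from here on
source "$envfile"
set +a

# Child processes (like `terraform init`) now see the variables:
bash -c 'echo "$TF_VAR_state_resource_group_name"'   # prints "rg-tfstate"
rm -f "$envfile"
```

Without set -a, the sourced variables would exist only in the current shell and terraform would never see them.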
Automating Chaos Studio with Terraform
The provided Terraform code defines resources for deploying Chaos Mesh. First, it creates a new Kubernetes namespace named “chaos-testing” using the kubernetes_namespace resource. This namespace isolates the Chaos Mesh components from other workloads in the cluster, enhancing organization and security by confining the chaos engineering experiments to a dedicated area.
Next, the code uses the helm_release resource to install Chaos Mesh via Helm, a package manager for Kubernetes. The Helm chart for Chaos Mesh is specified from its official repository, with version 2.6 explicitly chosen. The installation occurs within the previously defined “chaos-testing” namespace. The set blocks within the helm_release resource customize the installation by configuring the chaosDaemon to use containerd as the runtime and specifying the socket path for the container runtime. This setup ensures that Chaos Mesh integrates correctly with the underlying container runtime, enabling effective chaos engineering experiments to test the resilience and robustness of applications running in the Kubernetes cluster.
resource "kubernetes_namespace" "chaos_testing" {
  metadata {
    name = "chaos-testing"
  }
}

resource "helm_release" "chaos_mesh" {
  name       = "chaos-mesh"
  repository = "https://charts.chaos-mesh.org"
  chart      = "chaos-mesh"
  namespace  = kubernetes_namespace.chaos_testing.metadata[0].name
  version    = "2.6" # specify the version of the Chaos Mesh chart you want to deploy

  set {
    name  = "chaosDaemon.runtime"
    value = "containerd"
  }
  set {
    name  = "chaosDaemon.socketPath"
    value = "/run/containerd/containerd.sock"
  }
}
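The two set blocks matter because the Chaos Mesh daemon must talk to the node's container runtime through the right socket. AKS nodes run containerd, hence the values used here. The sketch below is a hypothetical helper summarizing the pairing; the docker mapping is included only for comparison and is not used in this AKS setup.

```shell
# Hypothetical helper: map a chaosDaemon runtime to its default socket path.
# AKS uses containerd; the docker row is shown only for comparison.
runtime_socket_path() {
  case "$1" in
    containerd) echo "/run/containerd/containerd.sock" ;;
    docker)     echo "/var/run/docker.sock" ;;
    *)          echo "unknown runtime: $1" >&2; return 1 ;;
  esac
}

runtime_socket_path containerd   # prints /run/containerd/containerd.sock
```

If the socket path doesn't match the node's actual runtime, the chaos daemon cannot inject faults into the pods.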
Automating Chaos Studio with GitHub Actions
The GitHub Action workflow provided facilitates the deployment and management of Chaos Mesh experiments and the Azure Vote service within an AKS (Azure Kubernetes Service) cluster. This workflow can be triggered by three types of events: a push to the main branch, a published release, and a manual trigger via workflow_dispatch. The manual trigger allows users to choose between three operations: deploying the vote service, uninstalling the vote service, or deploying chaos experiments.
The workflow defines three separate jobs corresponding to these operations, each running on a self-hosted runner. The deploy_vote_service job checks out the repository, logs into Azure using provided credentials, and sets up the Kubernetes configuration to interact with the AKS cluster. It then creates a namespace and deploys the Azure Vote service. The uninstall_vote_service job follows similar steps but focuses on removing the Azure Vote service from the cluster. The deploy_chaos_experiments job is more complex, involving the setup of the AKS configuration, deployment of chaos experiments, and management of necessary role assignments in Azure AD. It iterates over a set of predefined chaos experiment configurations, applies them, and ensures appropriate permissions are set for the experiments to interact with the AKS cluster. This structured approach ensures a consistent and automated deployment process for both the Azure Vote service and Chaos Mesh experiments.
on:
  push:
    branches:
      - main
  release:
    types: [published]
  workflow_dispatch:
    inputs:
      chaos_experiments_operation:
        description: 'Operation: Deploy Experiments for Chaos Mesh'
        required: true
        default: 'deploy_vote_service'
        type: choice
        options:
          - deploy_vote_service
          - uninstall_vote_service
          - deploy_chaos_experiments

name: Deploy Chaos Mesh Experiments & Vote Service

jobs:
  deploy_vote_service:
    runs-on: self-hosted
    if: ${{ github.event.inputs.chaos_experiments_operation == 'deploy_vote_service' }}
    steps:
      - name: Checkout
        uses: actions/checkout@v4

      - name: Azure Login
        uses: azure/login@v1
        with:
          creds: '{"clientId":"${{ secrets.ARM_CLIENT_ID }}","clientSecret":"${{ secrets.ARM_CLIENT_SECRET }}","subscriptionId":"${{ secrets.ARM_SUBSCRIPTION_ID }}","tenantId":"${{ secrets.ARM_TENANT_ID }}"}'

      - name: kubeconfig
        run: |
          az aks get-credentials --resource-group ${{ secrets.AKS_RESOURCE_GROUP }} --name ${{ secrets.AKS_NAME }} --overwrite-existing
          kubelogin convert-kubeconfig -l azurecli

      - name: Create Namespace
        run: |
          kubectl get namespace azure-vote || kubectl create namespace azure-vote

      - name: Install Azure Vote Service
        run: |
          kubectl apply -f ./app/azure-vote.yaml -n azure-vote
          kubectl get service azure-vote-front -n azure-vote

  uninstall_vote_service:
    runs-on: self-hosted
    if: ${{ github.event.inputs.chaos_experiments_operation == 'uninstall_vote_service' }}
    steps:
      - name: Checkout
        uses: actions/checkout@v4

      - name: Azure Login
        uses: azure/login@v1
        with:
          creds: '{"clientId":"${{ secrets.ARM_CLIENT_ID }}","clientSecret":"${{ secrets.ARM_CLIENT_SECRET }}","subscriptionId":"${{ secrets.ARM_SUBSCRIPTION_ID }}","tenantId":"${{ secrets.ARM_TENANT_ID }}"}'

      - name: kubeconfig
        run: |
          az aks get-credentials --resource-group ${{ secrets.AKS_RESOURCE_GROUP }} --name ${{ secrets.AKS_NAME }} --overwrite-existing
          kubelogin convert-kubeconfig -l azurecli

      - name: Uninstall Azure Vote Service
        run: |
          kubectl delete -f ./app/azure-vote.yaml -n azure-vote

  deploy_chaos_experiments:
    runs-on: self-hosted
    if: ${{ github.event_name == 'push' || (github.event_name == 'workflow_dispatch' && github.event.inputs.chaos_experiments_operation == 'deploy_chaos_experiments') }}
    steps:
      - name: Checkout
        uses: actions/checkout@v4

      - name: Azure Login
        uses: azure/login@v1
        with:
          creds: '{"clientId":"${{ secrets.ARM_CLIENT_ID }}","clientSecret":"${{ secrets.ARM_CLIENT_SECRET }}","subscriptionId":"${{ secrets.ARM_SUBSCRIPTION_ID }}","tenantId":"${{ secrets.ARM_TENANT_ID }}"}'

      - name: Deploy Chaos Experiment AKS Targets
        run: |
          for file in ${{ github.workspace }}/json/*.json; do
            sed -i 's/SUBSCRIPTION_ID_PLACEHOLDER/${{ secrets.ARM_SUBSCRIPTION_ID }}/g' "$file"
            sed -i 's/RESOURCE_GROUP_PLACEHOLDER/${{ secrets.AKS_RESOURCE_GROUP }}/g' "$file"
            sed -i 's/AKS_NAME_PLACEHOLDER/${{ secrets.AKS_NAME }}/g' "$file"
          done

          # Create the chaos target
          az rest --method put --uri "https://management.azure.com/${{ secrets.AKS_RESOURCE_ID }}/providers/Microsoft.Chaos/targets/Microsoft-AzureKubernetesServiceChaosMesh?api-version=${{ secrets.API_VERSION }}" --headers 'Content-Type=application/json' --body '{"properties":{}}'

          headers='{"Content-Type":"application/json"}'

          # Create the chaos experiments
          experimentNames=("PodChaos-2.1" "DNSChaos-2.1" "HTTPChaos-2.1" "KernelChaos-2.1" "TimeChaos-2.1" "IOChaos-2.1" "StressChaos-2.1" "NetworkChaos-2.1")
          for experimentName in "${experimentNames[@]}"; do
            echo "Creating capability ${experimentName}"
            az rest --method put --uri "https://management.azure.com/${{ secrets.AKS_RESOURCE_ID }}/providers/Microsoft.Chaos/targets/Microsoft-AzureKubernetesServiceChaosMesh/capabilities/${experimentName}?api-version=${{ secrets.API_VERSION }}" --headers "$headers" --body '{"properties":{}}'
            echo "Creating experiment ${experimentName}"
            response=$(az rest --method put --uri "https://management.azure.com/subscriptions/${{ secrets.ARM_SUBSCRIPTION_ID }}/resourceGroups/${{ secrets.AKS_RESOURCE_GROUP }}/providers/Microsoft.Chaos/experiments/${experimentName}?api-version=${{ secrets.API_VERSION }}" --headers "$headers" --body @"${{ github.workspace }}/json/${experimentName}.json")
            echo "Response: $response"
          done

      - name: Get Principal IDs
        id: get_principal_ids
        run: |
          # Define the experiment names
          experimentNames=("PODCHAOS-2.1" "DNSCHAOS-2.1" "HTTPCHAOS-2.1" "KERNELCHAOS-2.1" "TIMECHAOS-2.1" "IOCHAOS-2.1" "STRESSCHAOS-2.1" "NETWORKCHAOS-2.1")
          principal_ids=""
          for experiment_name in "${experimentNames[@]}"; do
            echo "Processing experiment: $experiment_name"
            api_url="https://management.azure.com/subscriptions/${{ secrets.ARM_SUBSCRIPTION_ID }}/resourceGroups/${{ secrets.AKS_RESOURCE_GROUP }}/providers/Microsoft.Chaos/experiments/$experiment_name?api-version=2024-01-01"
            echo "API URL: $api_url"
            experiment_response=$(az rest --method get --uri "$api_url")
            echo "Response for $experiment_name: $experiment_response"
            principal_id=$(echo "$experiment_response" | jq -r '.identity.principalId')
            echo "Principal ID for $experiment_name: $principal_id"
            principal_ids="$principal_ids$principal_id,"
          done
          principal_ids="${principal_ids%,}" # Remove trailing comma
          echo "principal_ids=$principal_ids" >> $GITHUB_ENV
          echo "principal_ids=$principal_ids" >> $GITHUB_OUTPUT

      - name: Add Principals to AD Group and Assign AKS Cluster Admin Role
        run: |
          IFS=',' read -ra IDS <<< "${{ steps.get_principal_ids.outputs.principal_ids }}"
          for id in "${IDS[@]}"; do
            # Check if the principal is already a member of the AD group
            group_member_check=$(az ad group member check --group "${{ secrets.AKS_AD_GROUP }}" --member-id "$id" --query 'value' -o tsv)
            if [ "$group_member_check" == "false" ]; then
              az ad group member add --group "${{ secrets.AKS_AD_GROUP }}" --member-id "$id"
            else
              echo "Principal $id is already a member of the AD group."
            fi
            # Check if the principal already has the AKS Cluster Admin role
            role_assignment_check=$(az role assignment list --assignee "$id" --role "Azure Kubernetes Service Cluster Admin Role" --scope "/subscriptions/${{ secrets.ARM_SUBSCRIPTION_ID }}/resourceGroups/${{ secrets.AKS_RESOURCE_GROUP }}/providers/Microsoft.ContainerService/managedClusters/${{ secrets.AKS_NAME }}" --query 'length(@)' -o tsv)
            if [ "$role_assignment_check" -eq 0 ]; then
              # Assign the AKS Cluster Admin role
              az role assignment create \
                --assignee-object-id "$id" \
                --role "Azure Kubernetes Service Cluster Admin Role" \
                --scope "/subscriptions/${{ secrets.ARM_SUBSCRIPTION_ID }}/resourceGroups/${{ secrets.AKS_RESOURCE_GROUP }}/providers/Microsoft.ContainerService/managedClusters/${{ secrets.AKS_NAME }}"
            else
              echo "Principal $id already has the AKS Cluster Admin role assigned."
            fi
          done
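The placeholder substitution performed by the Deploy Chaos Experiment AKS Targets step can be exercised locally before wiring it into CI. The sketch below reproduces the same sed calls against a scratch JSON file, with dummy values standing in for the workflow secrets:

```shell
# Reproduce the workflow's placeholder substitution on a scratch JSON file.
# The subscription, resource group, and cluster values are dummies.
file="$(mktemp)"
cat > "$file" <<'EOF'
{"id": "/subscriptions/SUBSCRIPTION_ID_PLACEHOLDER/resourceGroups/RESOURCE_GROUP_PLACEHOLDER/providers/Microsoft.ContainerService/managedClusters/AKS_NAME_PLACEHOLDER"}
EOF

sed -i 's/SUBSCRIPTION_ID_PLACEHOLDER/00000000-0000-0000-0000-000000000000/g' "$file"
sed -i 's/RESOURCE_GROUP_PLACEHOLDER/rg-chaos-demo/g' "$file"
sed -i 's/AKS_NAME_PLACEHOLDER/aks-chaos-demo/g' "$file"

result="$(cat "$file")"
echo "$result"
rm -f "$file"
```

After substitution no PLACEHOLDER tokens remain, so the file is ready to submit as an experiment body.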
Automating Chaos Studio JSON Templates with GitHub Actions and Terraform
The JSON configuration provided (also see Azure Chaos Studio fault and action library) defines a detailed chaos experiment setup intended for deployment within an AKS (Azure Kubernetes Service) cluster. This configuration, which is stored in a separate root GitHub folder named json, is utilized by the GitHub Action workflows to orchestrate chaos engineering experiments using Chaos Mesh. By keeping these JSON configurations organized in a dedicated folder, the workflows can easily reference and apply them during deployment, ensuring a structured and maintainable approach to chaos testing.
The JSON file specifies the location of the experiment (eastus) and sets up a system-assigned identity for the resources. Within the properties section, the experiment steps are outlined, beginning with “Step 1.” This step includes a single branch (“Branch 1”) that defines a continuous action targeting all pods within the “azure-vote” namespace. The action is configured to simulate pod failures for a duration of five minutes, utilizing a specific Chaos Mesh capability (podChaos/2.1). The JSON configuration also defines a selector (“Selector1”) that identifies the specific AKS cluster targeted by the experiment. This setup ensures that the chaos experiment is precisely targeted and executed within the intended cluster, helping to test the resilience and fault tolerance of the applications running in the “azure-vote” namespace.
By integrating these JSON configurations into the GitHub Action workflows, the automation process becomes seamless. The workflows dynamically replace placeholder values (SUBSCRIPTION_ID_PLACEHOLDER, RESOURCE_GROUP_PLACEHOLDER, and AKS_NAME_PLACEHOLDER) with actual values during execution. This dynamic replacement allows for flexibility and reusability of the JSON configurations across different environments and clusters. The structured approach of keeping these configurations in a dedicated folder and calling them within the GitHub Action workflows ensures a streamlined and efficient process for deploying and managing chaos experiments, ultimately contributing to the robustness and reliability of the AKS-deployed applications.
{
  "location": "eastus",
  "identity": {
    "type": "SystemAssigned"
  },
  "properties": {
    "steps": [
      {
        "name": "Step 1",
        "branches": [
          {
            "name": "Branch 1",
            "actions": [
              {
                "type": "continuous",
                "selectorId": "Selector1",
                "duration": "PT5M",
                "parameters": [
                  {
                    "key": "jsonSpec",
                    "value": "{\"action\":\"pod-failure\",\"mode\":\"all\",\"selector\":{\"namespaces\":[\"azure-vote\"]}}"
                  }
                ],
                "name": "urn:csci:microsoft:azureKubernetesServiceChaosMesh:podChaos/2.1"
              }
            ]
          }
        ]
      }
    ],
    "selectors": [
      {
        "id": "Selector1",
        "type": "List",
        "targets": [
          {
            "type": "ChaosTarget",
            "id": "/subscriptions/SUBSCRIPTION_ID_PLACEHOLDER/resourceGroups/RESOURCE_GROUP_PLACEHOLDER/providers/Microsoft.ContainerService/managedClusters/AKS_NAME_PLACEHOLDER/providers/Microsoft.Chaos/targets/Microsoft-AzureKubernetesServiceChaosMesh"
          }
        ]
      }
    ]
  }
}
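One subtlety in this template: the jsonSpec value is JSON embedded inside a JSON string, so every inner double quote must be escaped. A small bash sketch of producing that escaped parameter value (pure string handling, no Azure calls; the variable names are illustrative):

```shell
# Build the inner Chaos Mesh fault spec, then embed it as an escaped JSON
# string, as required by the jsonSpec parameter in the experiment template.
inner_spec='{"action":"pod-failure","mode":"all","selector":{"namespaces":["azure-vote"]}}'

# Escape every double quote so the spec can sit inside a JSON string value.
escaped_spec="${inner_spec//\"/\\\"}"

parameter="{\"key\":\"jsonSpec\",\"value\":\"${escaped_spec}\"}"
echo "$parameter"
```

Forgetting this escaping is a common source of 400 responses when submitting experiment templates, since the outer JSON becomes malformed.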
Summary
We covered several aspects of automating and managing AKS (Azure Kubernetes Service) clusters and chaos engineering experiments using Terraform and GitHub Actions. We started by detailing the Terraform code used to deploy an AKS cluster, highlighting the configuration of various components such as agent nodes, network settings, security policies, and integrations with Azure services. This automation not only ensures a consistent deployment process but also leverages the power of infrastructure as code to manage complex cloud resources efficiently.
We then explored a GitHub Action workflow designed to automate the deployment and management of Chaos Mesh experiments and the Azure Vote service. This workflow uses triggers based on code changes and manual inputs to execute specific tasks, such as deploying, uninstalling, or running chaos experiments within the AKS cluster. By integrating Azure credentials and Kubernetes configurations, the workflow streamlines the process of setting up and managing these experiments, ensuring that they are applied accurately and securely.
Additionally, we delved into the JSON configurations used for chaos experiments, stored in a dedicated GitHub folder and referenced within the GitHub Action workflows. These configurations define detailed chaos experiment steps and selectors, targeting specific resources within the AKS cluster to simulate various fault scenarios. By organizing these configurations and automating their deployment, we enhance the resilience and fault tolerance of applications running in the cloud.
Together, these discussions illustrate a robust approach to managing cloud infrastructure and testing application resilience through automation and chaos engineering. Utilizing Terraform for infrastructure deployment and GitHub Actions for orchestration and management allows for a streamlined, efficient, and consistent process, ultimately contributing to more reliable and resilient cloud-native applications.
Here are some helpful links from Microsoft Learn that relate to the topics we discussed today:
Create an AKS Cluster – Step-by-step guide to creating an AKS cluster using the Azure portal.
Terraform on Azure Documentation – Comprehensive documentation on using Terraform with Azure, including examples and best practices.
Chaos Studio Overview – An introduction to Azure Chaos Studio, its features, and capabilities.
Deploy Chaos Mesh on AKS – Tutorial on setting up and using Chaos Mesh within an AKS cluster through the Azure portal.
GitHub Actions for Azure – Detailed guide on using GitHub Actions to automate workflows for Azure deployments.
Helm for Kubernetes – Information on using Helm to manage Kubernetes applications on AKS.
Azure Kubernetes Service Documentation – Comprehensive resource for all things AKS, including tutorials, reference architectures, and best practices.
Azure Chaos Studio Tutorial – Instructions on creating chaos experiments using Chaos Studio and the Azure CLI.
Building Better Azure Apps: Better Together
Helping you build better apps has been one of our key focus areas in Azure. Our latest tooling focuses on providing guidance for architecting, optimizing, and deploying apps. Whether you’re creating a new proof of concept or improving an existing app, these capabilities can boost productivity and performance. These capabilities are all in Preview, so please give them a try and let us know what you think!
Starting Right: Architecting Your Azure App
Let’s say you’re starting a proof of concept for a new application. Normally, you might spend a lot of time picking services, architecting apps, and deploying them based on industry best practices. Better Together can streamline this process with the below capabilities.
Better Together in Microsoft Copilot for Azure
The Better Together capability, which can be accessed from Copilot, can help you understand whether you’re on the right track when building your app. In the past, it might have been time-consuming to learn through docs and videos about the kinds of services that similar apps are using. This capability streamlines that process by recommending services based on patterns that other, similar apps have used.
To give this a try, navigate to the Azure Portal and select the Copilot button in the toolbar to open the chat window. Here you can ask questions to get recommended services for your app or architecture, such as “What are popular services that are deployed with App Service apps like mine?”, “Which database should I use with my ACA app?”, and “What services would you recommend to implement distributed caching?”
Sometimes it’s important to validate if you’re on the right track. When you ask architectural or infrastructure-level questions to Azure Copilot, it helps you discover the most commonly used services for your specific use case. In the example below, after identifying performance bottlenecks in your app and considering implementing distributed caching to enhance performance, the recommendation points to Azure Cache for Redis. This service is widely deployed by many App Service apps similar to yours.
Boosting Performance: Optimizing Your Azure App
If your App Service app is running a little slower than expected, or if you’re suspecting any performance bottlenecks, these are some capabilities that can diagnose and optimize these problems.
Diagnostics Insights (Preview)
Diagnostic logs can return pages of information that are difficult to interpret. This capability makes it easier to spot anomalies and quickly identify bottlenecks. In the Azure Portal, you can efficiently evaluate your application’s CPU usage and track any anomalies by navigating to Diagnose & Solve Problems > Web App Slow. Within this section, you’ll find a chart that provides insights into performance and latency.
Notably, over the last 24 hours, approximately 90% of users accessing this web app experienced low latency.
Another way to access suggestions is to type “my web app is slow” into Copilot for Azure, which will offer suggestions about any bottlenecks.
Diagnostic charts can sometimes be time-consuming to analyze. However, Copilot offers a helpful Summarization capability. When you input variations of “summarize this page,” Copilot will generate concise summaries of the insights, allowing you to quickly grasp the main points without having to read through every chart and detail.
Application Insights Code Optimizations (Preview)
Performance can be improved by making code-level changes. Code Optimizations helps identify where to make these improvements. By leveraging AI, Code Optimizations detects CPU and memory bottlenecks of your application during runtime. It is available for .NET applications that have Application Insights Profiler enabled. To access Code Optimizations in the Azure Portal, navigate to the Performance blade in Application Insights. For App Service, it’s also available in Diagnose & Solve Problems > Web App Slow.
In this example, some of the performance issues identified may be caused by inefficient code, which can be investigated.
Selecting any of these suggestions will open more details about the performance issue, show where and when in the code it’s occurring, and show the recommended solution.
For many recommendations, a code fix can be generated using the Code Optimizations extension (currently in limited preview) for Visual Studio and Visual Studio Code – Insiders. You can sign up here.
Learn more about Code Optimizations.
Making Improvements: Augmenting Your Azure App
If you have deployed an App Service app and you’re unsure which services to use to improve scalability and reliability for it, these capabilities can help optimize without reinventing the wheel.
Better Together (Preview) in Azure Portal
It can be time-consuming to pick, create, deploy, and connect a service to your App Service app. Better Together helps you deploy and connect popular services for your App Service app, focusing primarily on making it easier to connect newly created resources. Navigate to it in the Azure Portal via the Better Together menu item on your App Service app.
Enabling Azure Cache for Redis will automatically create a new Redis instance and establish the connection with your existing App Service app. If you choose to “Create” any of the other services, you’ll be directed to their onboarding flow, where you’ll receive guidance on creating and connecting the service. Stay tuned for the next release for a more customized experience!
Take a look at these capabilities in action with the video below.
Conclusion: Better Together
Azure strives to empower you to create robust, high-performing apps. Whether you’re starting a new app or improving an existing one, we are creating tools and services that can help. Please give these capabilities a try and let us know what you think by leaving a comment or emailing us at bettertogetherteam@microsoft.com.
Microsoft Tech Community – Latest Blogs –Read More
List Calculated Field with table reference
Hi, in a SharePoint List, I need a calculated field, and I wonder if it’s feasible with just JSON. Users already input: location (drop-down list) & year. Based on those 2 values, I input manually a labour rate, using a reference table. Is it possible (without transitioning to Power Automate) to have a calculated field spitting out that labour rate? Would a very long list of ‘if statements’ work?
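For reference: JSON column formatting only changes how values are displayed, so a classic calculated column is the usual no-Power-Automate route. The lookup can indeed be encoded as nested IF statements, though it gets unwieldy as the reference table grows and SharePoint limits nesting depth. A sketch with placeholder locations and rates:

```
=IF(AND([Location]="Paris",[Year]=2023),55,
 IF(AND([Location]="Paris",[Year]=2024),58,
 IF(AND([Location]="Lyon",[Year]=2023),48,
 0)))
```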
Meeting Bot issue: Did not receive valid response for JoinCall request from call modality controller
I’m trying to join a Teams Meeting with a bot. I used this https://microsoftgraph.github.io/microsoft-graph-comms-samples/docs/articles/index.html#making-an-outbound-call-to-join-an-existing-microsoft-teams-meeting sample.
When the bot attempts to join I get the popup to admit or deny it in the meeting, but as soon as I click admit, it drops.
In the logs I see this message:
Call status updated to Terminated – Did not receive valid response for JoinCall request from call modality controller.. DiagCode: 580#5426.@
I am using the latest (1.2.0.10563 at time of writing) version of Microsoft.Graph.Communications libraries and the problem only started after I updated from 1.2.0.3742 that I was using previously.
I could not find any info on what the call modality controller is, or how to check what it is responding with, if anything. Any ideas on how to troubleshoot this are welcome.
Dynamics 365 Partner Sandbox – Operations Application
Does the Dynamics 365 Partner Sandbox – Operations Application include CoPilot for Finance? If yes, a partner can start developing for FO for 895 per year?
How do I access the Project app
Hello
Please, I need your help on this issue.
How do I access the Project app? It says I don’t have a license
Excel Dependent Drop Down Lists not loading
Hello everyone.
I created Dependent Drop Down Lists in Excel using the Offset Formula.
When I open the sheet, the Drop Down Lists do not load but when I re-enter the same formula in Data Validation in the open sheet, the Drop Down Lists start loading / showing.
The same thing then repeats when I close and open the sheet.
Please help.
Town Hall feedback
Hi there,
We’re hurtling towards our first high-level Town Hall and I have a few concerns.
The Q&A function on the whole is not as robust or useful as it was in Live Events. Not having all presenters able to view the In Review tab is frustrating – in some of our busier meetings, a number of folk want to be able to view and weigh in on whether certain questions should be published, but because this is reserved for Co-organizers, and Co-organizers are limited to 10, we’re having issues deciding and prioritizing who should be able to preview questions ahead of publishing.
We have found in testing that deleted questions don’t disappear promptly from the Published queue.
And there no longer appears to be any Q&A reporting available to organisers, co-organisers, or presenters, meaning the only way to review Q&A is to return to the event in the calendar – and deleted questions are gone forever (we used to delete questions as they were answered to try to control the Published queue).
It looks like sorting by Most Recent vs Most Liked has disappeared, meaning the democratisation of what a community would like answered has gone. It was already tricky because you couldn’t re-order against those parameters, so the most-liked questions constantly drifted to the bottom. Now the Published queue will be very difficult to manage live, and pulling useful information out of the event nigh on impossible.
Role permissions seem to be backwards – how can presenters control who is on-screen, including co-organizers, but not view In Review questions? Meanwhile, Co-organizers are unable to invite further presenters to the event.
An issue with production is that as soon as someone shares a PowerPoint, it’s live to the audience. From a production perspective, it would be extremely useful if you could bring a presenter’s shared content on/off screen like you can with presenter cameras.
In controlling the production for the audience, I like the fact that we can have multiple presenters on screen at once – this is a tremendous improvement on Teams Live Events. It’s a shame, though, that moving presenters on and off screen is so clunky – having to click through them one at a time to bring them on and off is not very slick.
I’d be very happy to be told what of the above is inaccurate or if there are back-end/tenancy settings that can be changed to fix any of the issues highlighted above.
Cheers
Rich
Enable Zero Touch Enrollment of MDE on macOS devices managed by Microsoft Intune
Introduction
Microsoft Defender for Endpoint (MDE) is a unified endpoint security platform that helps protect your organization from advanced threats. MDE provides threat detection, investigation, and response capabilities across Windows, Linux, Android, and macOS devices.
To deploy MDE on macOS devices, you need to install the MDE agent and enroll the devices to the MDE service. You can use Microsoft Intune, a cloud-based device management service, to automate the installation and enrollment process. This blog post explains how to use Intune to achieve zero touch enrollment of MDE on macOS devices.
Prerequisites
Before you start, make sure you have the following:
A user assigned licenses for MDE and Intune.
A supported macOS version (the three most recent major releases are supported).
This blog post assumes the device is already enrolled in Intune; it doesn’t cover Intune enrollment methods, and the enrollment type doesn’t change the MDE onboarding.
Configuration Steps
The table below lists the mandatory steps for a successful MDE deployment on macOS. The Purpose column calls out required configuration steps; click each hyperlink to follow the guided instructions in our Learn Docs.
| Step | Purpose | Type | Reference |
|------|---------|------|-----------|
| 1 | | Intune Configuration Profile – Extensions | |
| 2 | | Intune Configuration Profile – Custom | |
| 3 | | | |
| 4 | | | |
| 5 | | | |
| 6 | | | |
| 7 | | Onboarding Blob | |
| 8 | | Application – Native Intune | |

Note (Step 1): If you already have an existing Configuration profile with a Bundle Identifier, you may want to merge this into it, since Apple only supports one.
Optional Steps
Additionally, you may want to further customize the MDE configuration. Below are a few suggestions; follow the guided instructions in our Learn Docs.
| Configuration | Short Description | Location |
|---------------|-------------------|----------|
| | Configure Bluetooth policies for Device Control (starting with macOS 14) | Intune Custom Configuration Profile |
| | Choose between the Beta, Preview, and Production channels | Intune Custom Configuration Profile |
| | Configuration settings for AV, Exclusions, and EDR | Intune Portal or Defender Portal |
| | Reduce attack surface from Internet-based events like phishing, exploits, and malicious content | Defender Portal |
| Deploy Device Control Policies | Removable device controls like allow, block, read, write | Intune Portal or Defender Portal |
| Enable Data Loss Prevention (DLP) | Purview’s DLP integration with MDE | Intune Custom Configuration Profile |
Verification & Monitoring
The MDE agent will be installed and enrolled silently on the macOS devices that you targeted. The agent icon will appear on the macOS desktop menu bar at the top of the screen.
Refer to the screenshots below: click the MDE icon to launch the app and view details.
Additionally, you can verify the installation and enrollment status by launching the Terminal app and executing the following command: “mdatp health”.
The output reports the overall MDE health status, including configs, definitions, and device/org IDs. Settings tagged [managed] come from the policies in your configurations.
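For scripted checks across a fleet, the “key : value” lines that “mdatp health” prints are easy to parse. A small Python sketch (the sample output below is illustrative, not captured from a real device):

```python
def parse_mdatp_health(output: str) -> dict:
    """Parse `mdatp health`-style "key : value" lines into a dict."""
    fields = {}
    for line in output.splitlines():
        if ":" not in line:
            continue
        key, _, value = line.partition(":")
        fields[key.strip()] = value.strip().strip('"')
    return fields

# Illustrative sample in the command's key : value format.
sample = """\
healthy                              : true
licensed                             : true
org_id                               : "00000000-0000-0000-0000-000000000000"
real_time_protection_enabled         : true [managed]
"""

health = parse_mdatp_health(sample)
assert health["healthy"] == "true"
assert "[managed]" in health["real_time_protection_enabled"]
```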
As an IT admin, you can open the Microsoft Defender portal to view the device’s health, associated incidents, security recommendations, inventory, and discovered vulnerabilities.
Click on the device for more information.
Other Installation Methods
Intune is just one of the deployment tools for MDE; you can choose other ways to deploy it. Below are a few callouts:
Command Line – Manual Deployment
Thanks,
Arnab Mitra
Up Your Organizational Copilot Prompt Game – HLS Copilot Snacks
As organizations roll out Copilot for Microsoft 365, it is imperative that they arm users with the knowledge and resources to be effective prompters. The effectiveness of a user’s prompts really determines whether they gain the full value of Copilot’s generative AI capability, yet most users are left on their own with a powerful new tool they are unsure how to use properly. Thankfully, Microsoft and the extended Copilot community have provided some great resources that can help organizations empower their end users and up their organizational prompt game.
In this HLS Copilot Snack, I walk through 4 resources that can bring immediate impact within an organization in transforming their user prompts into a powerful tool for AI powered transformation.
To see all HLS Copilot Snacks videos, click here.
Resources:
Copilot Lab
Prompt Buddy
Copilot for Microsoft 365: The art and science of prompting
Enhance your copilot’s responses with prompt modification – Microsoft Copilot Studio | Microsoft Learn
Thanks for visiting – Michael Gannotti LinkedIn