Category Archives: Microsoft
A Serious Bug in Windows Explorer
When Windows Explorer tries to display a thumbnail for a picture or video file, the thumbnail never appears and Explorer keeps loading indefinitely. Right-clicking such a file freezes Windows Explorer.
Two separate Outlook instances on Android/iPhone without joining MDM?
I’m looking for a way to maintain separate instances of Outlook on my Android/iPhone device – one for work-related emails and another for personal emails. Our organization uses Microsoft Intune for managing work applications, and I would like to use the Outlook app for both work and personal accounts without mixing the data.
Is it possible to configure two distinct Outlook instances on a single device to keep work and personal emails separate? If so, could you provide guidance on how to set this up, especially in the context of using Mobile Application Management (MAM) policies to secure work data without enrolling the device in Mobile Device Management (MDM)?
Teams website tabs not displaying
Hello
Please, I need your help with this issue.
Figure 1: New Teams website app link to Compliance Wire
Figure 2: New Teams website app link to Compliance Wire after submitting log-in credentials
In Classic Teams, when I log in to Compliance Wire, I am able to do so successfully.
Figure 3: Classic Teams Compliance Wire log-in
The sites do not work in the new Teams, but they work in Classic Teams.
Migration of on-prem file server to Azure cloud. Trying to avoid domain authentication
I currently have an on-prem environment, and we want to migrate to the Azure cloud. We have an on-prem file server running Linux Samba to share files with users. What is the best way to migrate this to the Azure cloud environment? I do not want it to be part of a domain; I want it to be a plain file share, and I am trying to avoid users having to log in to access the files. If there is no way to avoid the login process, can I keep a server on-prem and have the Azure cloud environment access the files through the local environment’s firewall? Please suggest multiple ways to accomplish this task.
KQL help Exchange Online
Hello,
I need help building a KQL query as I’m fairly new to this. I have two sets of keywords:
Set 1 = "A", "B", "C"
Set 2 = "1", "2", "3"
I want a KQL query that matches any combination of those two sets. I have tried
("A" OR "B" OR "C") AND ("1" OR "2" OR "3") but that does not seem to work.
Many Greetings
Erik
Reduction of Password Prompts with Intune Enrolled Phone
My company is transitioning from our current MDM to Intune while at the same time moving our mailboxes from on-prem to Exchange Online. Our current MDM requires no passwords from our users after initial enrollment of their mobile devices (both BYOD and company-owned), thanks to Kerberos-based authentication, whose tokens persist even after passwords expire or are changed. In our testing with Intune enrollment and MFA using Authenticator, we found that users are still prompted to enter their password in Outlook when the password for an EXO mailbox and Entra ID / Azure account changes.
This is true even when enabling Passwordless Authentication, which is branded as eliminating the need for passwords (https://learn.microsoft.com/en-us/entra/identity/authentication/concept-authentication-passwordless). Yet users are still prompted when passwords change and they attempt to access company mail in, for example, Outlook for iOS on an Intune-enrolled device.
This is not only a pain point for users accustomed to never entering their password on their mobile devices, or on their desktops thanks to Windows Hello. After years of the security industry pushing password complexity and frequent password changes, that backwards thinking is now considered a gaping security risk, since users circumvent or resist draconian password policies with incredibly simple passwords. Sadly, Passwordless Authentication on a mobile device, even with the added bonus of Face ID / biometrics, still doesn’t eliminate having to retype a password when it changes.
Per everything I’ve read and also discussions with some within Microsoft, it appears there is no way around this. Certificate Based Authentication (https://learn.microsoft.com/en-us/entra/identity/authentication/how-to-certificate-based-authentication) with a PKI issuing certs to users that are deployed to devices via Intune may provide some relief, but we can’t know for sure how this plays out in password change scenarios and don’t have the luxury of testing this without a considerable amount of work even in a test / dev capacity.
For those out there using CBA: does CBA indeed ensure that password changes won’t require users to retype their passwords to re-authenticate, and is the cert fully trusted in those instances? My concern is that it could be a scenario similar to Passwordless Authentication with Authenticator, where the cert is commonly used as a credential but doesn’t override the occasional requirement to enter the password when a password change causes tokens to expire. But if CBA does indeed eliminate the requirement to enter passwords, it’s something we will seriously consider.
Pre-fill Responses in Your Microsoft Forms
We are excited to share that Microsoft Forms now supports pre-filled links, making your data collection more efficient and your data more accurate. This feature not only allows you to set default answers for your questions, it also helps you plan how responses should be categorized. To help you understand how to leverage this new feature, let’s try it together with an online training feedback survey. You can also try pre-filling a form from this template.
Imagine your company conducted three online training sessions for participants in different time zones: Asia, Europe, and North America, each with a different lecturer. To streamline the process and avoid creating separate feedback forms for each session, you decide to use Forms pre-filled links to consolidate all feedback into a single form.
Find the pre-fill link from “…” icon
After creating your feedback survey, click on the “…” icon in the upper right corner and select “Get Pre-filled URL” to start setting your pre-filled answers.
Set pre-filled answers
Before setting pre-filled answers, you need to first activate “Enable pre-filled answers” in the top section of the form. After that, you can proceed to select the pre-filled answers. In this case, the pre-filled answers would be the session attended and the lecturer’s name.
Send out the pre-filled link to different audiences
Once you’ve finished setting up the pre-filled answers, you can click the “Get Pre-filled link” button at the bottom of the form to copy and paste the URL for distribution. In this scenario, since you have three different sessions and lecturers, you’ll need to generate three different links with different pre-filled answers before sending the form to the corresponding audience.
Recipients open the survey with pre-filled answers
When participants who attended the Asia session open the survey, they will see that “Asia session” and “John Wang” have already been selected. They can then proceed to answer the remaining questions and submit the form.
Here are two additional real-life use cases to provide inspiration on how this feature can benefit you:
End-of-semester university course evaluations: fields such as course name and instructor name can be pre-filled to track feedback from multiple courses in one form.
Customer feedback survey: pre-fill fields like employee name, service period, and department.
Automate AKS Deployment and Chaos Engineering with Terraform and GitHub Actions
Azure Chaos Studio is a fully managed chaos engineering platform that helps you identify and mitigate potential issues in your applications before they impact customers. It enables you to intentionally introduce faults and disruptions to test the resilience and robustness of your systems. By using Chaos Studio, you can uncover hard-to-find problems in your applications, from late-stage development through production, and plan mitigations to improve overall system reliability.
The provided GitHub Action workflows demonstrate a comprehensive approach to automating the deployment and management of an AKS (Azure Kubernetes Service) cluster using Terraform, as well as deploying Chaos Mesh experiments and the Azure Vote service within the AKS cluster. These workflows streamline the infrastructure management process by integrating directly with GitHub, enabling seamless updates and deployments based on code changes or manual triggers. By leveraging GitHub Actions, Azure, and Kubernetes, these workflows ensure a robust, automated pipeline for maintaining and testing the resilience of applications deployed in the AKS environment.
Automating AKS with Terraform
To automate the deployment and management of an Azure Kubernetes Service (AKS) cluster, I utilized Terraform with the AKS module provided by Azure. This module simplifies the process by abstracting many of the complex configurations needed to set up and manage an AKS cluster.
In the Terraform configuration, I specified the AKS module with the latest version at the time, ensuring compatibility with the latest features and updates. The configuration began by defining essential parameters, such as the resource group name, Kubernetes version, and admin username. Automatic patch upgrades were enabled to ensure the cluster remains updated with the latest patches.
The cluster was configured to use virtual machine scale sets for agent nodes, with a specific node size and a range of nodes to accommodate varying workloads. Custom Linux OS configurations were applied to the agent nodes, enhancing their performance and security settings.
To enhance security, the API server was restricted to authorized IP ranges, including both public and private IP addresses of a bastion host and additional CIDR ranges. Integration with Azure Container Registry (ACR) was facilitated by attaching the ACR ID to the AKS cluster, enabling seamless container management.
Advanced features such as Azure Policy, auto-scaling, and HTTP application routing were enabled to improve cluster governance, scalability, and traffic management. User-assigned managed identities were employed for secure access control, and key management services (KMS) were enabled to secure sensitive data using Azure Key Vault.
Network settings were carefully configured, including DNS service IP, service CIDR, network plugin, and policy settings, ensuring robust network management and security. Role-based access control (RBAC) was enabled and managed through Azure Active Directory (AAD) to streamline user and group management.
Additional features such as log analytics, maintenance windows, and secret rotation were configured to enhance cluster monitoring, maintenance, and security. Tags and labels were added to agent nodes for better organization and resource management.
By defining these configurations in Terraform, the AKS deployment process was automated, making it reproducible and manageable through code. This approach not only reduced manual intervention but also ensured consistency and reliability in the AKS infrastructure.
Note: The code provided below is for exhibit purposes only and may be outdated at the time of writing. This code was used solely in a demo environment to illustrate the automation of an Azure Kubernetes Service (AKS) cluster/Chaos Mesh using the AKS module in Terraform. While the configuration showcases a comprehensive setup, including security, scalability, and management features, it is essential to review and update the code according to the latest Azure and Terraform best practices and versions when implementing it in a production environment. The exhibit is intended to serve as an educational example and may require modifications to align with current standards and specific use cases.
module "aks" {
  source  = "Azure/aks/azurerm"
  version = "7.4.0"

  prefix                    = random_id.aks.hex
  resource_group_name       = azurerm_resource_group.aks.name
  kubernetes_version        = "1.27" # don't specify the patch version!
  admin_username            = "azureuser"
  automatic_channel_upgrade = "patch"
  agents_availability_zones = ["1"]
  agents_count              = null
  agents_max_count          = var.agents_max_count
  agents_max_pods           = 75
  agents_min_count          = var.agents_min_count
  agents_size               = "Standard_D2s_v3"
  agents_pool_name          = "testnodepool"
  agents_type               = "VirtualMachineScaleSets"

  agents_pool_linux_os_configs = [
    {
      transparent_huge_page_enabled = "always"
      sysctl_configs = [
        {
          fs_aio_max_nr               = 65536
          fs_file_max                 = 100000
          fs_inotify_max_user_watches = 1000000
        }
      ]
    }
  ]

  api_server_authorized_ip_ranges = concat(
    [
      "${azurerm_linux_virtual_machine.bastion.public_ip_address}/32",
      "${azurerm_linux_virtual_machine.bastion.private_ip_address}/32",
      "REDACTED",
    ],
    var.chaos_studio_cidr_ranges,
  )

  attached_acr_id_map = {
    example = azurerm_container_registry.aks.id
  }

  azure_policy_enabled                = true
  auto_scaler_profile_enabled         = true
  auto_scaler_profile_expander        = "least-waste"
  enable_auto_scaling                 = true
  http_application_routing_enabled    = true
  identity_ids                        = [azurerm_user_assigned_identity.aks_mid.id]
  identity_type                       = "UserAssigned"
  ingress_application_gateway_enabled = false
  #ingress_application_gateway_id          = azurerm_application_gateway.aks_appgw.id
  #ingress_application_gateway_subnet_cidr = "10.52.1.0/24"
  key_vault_secrets_provider_enabled  = true
  kms_enabled                         = true
  kms_key_vault_key_id                = "https://${azurerm_key_vault.aks_kv.name}.vault.azure.net/keys/${azurerm_key_vault_key.aks_key.name}/${azurerm_key_vault_key.aks_key.version}"
  local_account_disabled              = false
  log_analytics_workspace_enabled     = true
  cluster_log_analytics_workspace_name = random_id.aks.hex
  microsoft_defender_enabled          = false

  maintenance_window = {
    allowed = [
      {
        day   = "Sunday",
        hours = [22, 23]
      },
    ]
    not_allowed = [
      {
        start = "2024-01-01T20:00:00Z",
        end   = "2024-01-01T21:00:00Z"
      },
    ]
  }

  net_profile_dns_service_ip          = "10.0.0.10"
  net_profile_service_cidr            = "10.0.0.0/16"
  network_plugin                      = "azure"
  network_policy                      = "azure"
  os_disk_size_gb                     = 60
  private_cluster_enabled             = false
  public_network_access_enabled       = true
  rbac_aad                            = true
  rbac_aad_managed                    = true
  role_based_access_control_enabled   = true
  secret_rotation_enabled             = true
  sku_tier                            = "Standard"
  storage_profile_blob_driver_enabled = true
  storage_profile_enabled             = true
  temporary_name_for_rotation         = "a${random_string.aks_temporary_name_for_rotation.result}"
  vnet_subnet_id                      = azurerm_subnet.aks.id
  rbac_aad_admin_group_object_ids     = [azuread_group.aks_admins.object_id]

  agents_labels = {
    "Agent" : "agentLabel"
  }
  agents_tags = {
    "Agent" : "agentTag"
  }

  depends_on = [
    azurerm_subnet.aks,
  ]
}
Automating AKS with GitHub Actions
The provided GitHub Action workflow automates the deployment of an Azure Kubernetes Service (AKS) cluster using Terraform. This workflow is triggered on two conditions: when changes are pushed to the main branch within the terraform directory, or manually through a workflow dispatch event. The manual trigger allows users to specify the desired Terraform operation (plan, apply, or destroy) through an input parameter. This flexibility enables users to review changes, apply the infrastructure configuration, or tear it down as needed.
The workflow defines a single job named ‘Terraform’ that runs on the latest Ubuntu environment. It sets up necessary environment variables using secrets for secure authentication with Azure. The steps include checking out the repository, setting up the specified version of Terraform, and initializing Terraform with backend configuration sourced from environment variables. The workflow then validates the Terraform configuration to ensure correctness. Depending on the trigger, it proceeds to execute the appropriate Terraform command: plan to review the changes, apply to deploy the infrastructure, or destroy to remove it. This automation streamlines the management of the AKS cluster, ensuring consistent and reproducible deployments.
on:
  push:
    branches: [main]
    paths:
      - 'terraform/**'
  workflow_dispatch:
    inputs:
      terraform_operation:
        description: "Terraform operation: plan, apply, destroy"
        required: true
        default: "plan"
        type: choice
        options:
          - plan
          - apply
          - destroy

name: Deploy AKS Cluster

jobs:
  terraform:
    name: 'Terraform'
    runs-on: ubuntu-latest
    env:
      ARM_CLIENT_ID: ${{ secrets.ARM_CLIENT_ID }}
      ARM_CLIENT_SECRET: ${{ secrets.ARM_CLIENT_SECRET }}
      ARM_SUBSCRIPTION_ID: ${{ secrets.ARM_SUBSCRIPTION_ID }}
      ARM_TENANT_ID: ${{ secrets.ARM_TENANT_ID }}
      GITHUB_TOKEN: ${{ secrets.GH_TOKEN }}
      TF_VERSION: 1.6.1
    defaults:
      run:
        shell: bash
        working-directory: ./terraform
    steps:
      - name: Checkout
        uses: actions/checkout@v4

      - name: Setup Terraform
        uses: hashicorp/setup-terraform@v3
        with:
          terraform_version: ${{ env.TF_VERSION }}

      - name: Terraform Init
        id: init
        run: |
          set -a
          source ../.env.backend
          terraform init \
            -backend-config="resource_group_name=$TF_VAR_state_resource_group_name" \
            -backend-config="storage_account_name=$TF_VAR_state_storage_account_name"

      - name: Terraform Validate
        id: validate
        run: terraform validate -no-color

      - name: Terraform Plan
        id: plan
        run: terraform plan -no-color
        if: ${{ (github.event_name == 'workflow_dispatch' && github.event.inputs.terraform_operation == 'plan') || github.event_name == 'push' }}

      - name: Terraform Apply
        id: apply
        run: terraform apply -auto-approve
        if: ${{ github.event_name == 'workflow_dispatch' && github.event.inputs.terraform_operation == 'apply' }}

      - name: Terraform Destroy
        id: destroy
        run: terraform destroy -auto-approve
        if: ${{ github.event.inputs.terraform_operation == 'destroy' }}
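The Init step above sources a ../.env.backend file for the backend settings. A hypothetical example of that file (the variable names are taken from the workflow; the values here are placeholders, not real resource names):

```shell
# .env.backend -- sourced by the Terraform Init step; `set -a` exports
# every variable defined here so Terraform picks them up as TF_VAR_* inputs.
TF_VAR_state_resource_group_name=rg-terraform-state
TF_VAR_state_storage_account_name=sttfstate12345
```

Keeping the backend names in an untracked env file like this lets the same workflow target different state storage accounts without editing the Terraform code.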
Automating Chaos Studio with Terraform
The provided Terraform code defines resources for deploying Chaos Mesh. First, it creates a new Kubernetes namespace named “chaos-testing” using the kubernetes_namespace resource. This namespace isolates the Chaos Mesh components from other workloads in the cluster, enhancing organization and security by confining the chaos engineering experiments to a dedicated area.
Next, the code uses the helm_release resource to install Chaos Mesh via Helm, a package manager for Kubernetes. The Helm chart for Chaos Mesh is specified from its official repository, with version 2.6 explicitly chosen. The installation occurs within the previously defined “chaos-testing” namespace. The set blocks within the helm_release resource customize the installation by configuring the chaosDaemon to use containerd as the runtime and specifying the socket path for the container runtime. This setup ensures that Chaos Mesh integrates correctly with the underlying container runtime, enabling effective chaos engineering experiments to test the resilience and robustness of applications running in the Kubernetes cluster.
resource "kubernetes_namespace" "chaos_testing" {
  metadata {
    name = "chaos-testing"
  }
}

resource "helm_release" "chaos_mesh" {
  name       = "chaos-mesh"
  repository = "https://charts.chaos-mesh.org"
  chart      = "chaos-mesh"
  namespace  = kubernetes_namespace.chaos_testing.metadata[0].name
  version    = "2.6" # specify the version of the Chaos Mesh chart you want to deploy

  set {
    name  = "chaosDaemon.runtime"
    value = "containerd"
  }

  set {
    name  = "chaosDaemon.socketPath"
    value = "/run/containerd/containerd.sock"
  }
}
Automating Chaos Studio with GitHub Actions
The GitHub Action workflow provided facilitates the deployment and management of Chaos Mesh experiments and the Azure Vote service within an AKS (Azure Kubernetes Service) cluster. This workflow can be triggered by three types of events: a push to the main branch, a published release, and a manual trigger via workflow_dispatch. The manual trigger allows users to choose between three operations: deploying the vote service, uninstalling the vote service, or deploying chaos experiments.
The workflow defines three separate jobs corresponding to these operations, each running on a self-hosted runner. The deploy_vote_service job checks out the repository, logs into Azure using provided credentials, and sets up the Kubernetes configuration to interact with the AKS cluster. It then creates a namespace and deploys the Azure Vote service. The uninstall_vote_service job follows similar steps but focuses on removing the Azure Vote service from the cluster. The deploy_chaos_experiments job is more complex, involving the setup of the AKS configuration, deployment of chaos experiments, and management of necessary role assignments in Azure AD. It iterates over a set of predefined chaos experiment configurations, applies them, and ensures appropriate permissions are set for the experiments to interact with the AKS cluster. This structured approach ensures a consistent and automated deployment process for both the Azure Vote service and Chaos Mesh experiments.
on:
push:
branches:
– main
release:
types: [published]
workflow_dispatch:
inputs:
chaos_experiments_operation:
description: ‘Operation: Deploy Experiments for Chaos Mesh’
required: true
default: ‘deploy_vote_service’
type: choice
options:
– deploy_vote_service
– uninstall_vote_service
– deploy_chaos_experiments
name: Deploy Chaos Mesh Experiments & Vote Service
jobs:
deploy_vote_service:
runs-on: self-hosted
if: ${{ github.event.inputs.chaos_experiments_operation == ‘deploy_vote_service’ }}
steps:
– name: Checkout
uses: actions/checkout@v4
– name: Azure Login
uses: azure/login@v1
with:
creds: ‘{“clientId”:”${{ secrets.ARM_CLIENT_ID }}”,”clientSecret”:”${{ secrets.ARM_CLIENT_SECRET }}”,”subscriptionId”:”${{ secrets.ARM_SUBSCRIPTION_ID }}”,”tenantId”:”${{ secrets.ARM_TENANT_ID }}”}’
– name: kubeconfig
run: |
az aks get-credentials –resource-group ${{ secrets.AKS_RESOURCE_GROUP }} –name ${{ secrets.AKS_NAME }} –overwrite-existing
kubelogin convert-kubeconfig -l azurecli
– name: Create Namespace
run: |
kubectl get namespace azure-vote || kubectl create namespace azure-vote
– name: Install Azure Vote Service
run: |
kubectl apply -f ./app/azure-vote.yaml -n azure-vote
kubectl get service azure-vote-front -n azure-vote
uninstall_vote_service:
runs-on: self-hosted
if: ${{ github.event.inputs.chaos_experiments_operation == ‘uninstall_vote_service’ }}
steps:
– name: Checkout
uses: actions/checkout@v4
– name: Azure Login
uses: azure/login@v1
with:
creds: ‘{“clientId”:”${{ secrets.ARM_CLIENT_ID }}”,”clientSecret”:”${{ secrets.ARM_CLIENT_SECRET }}”,”subscriptionId”:”${{ secrets.ARM_SUBSCRIPTION_ID }}”,”tenantId”:”${{ secrets.ARM_TENANT_ID }}”}’
– name: kubeconfig
run: |
az aks get-credentials –resource-group ${{ secrets.AKS_RESOURCE_GROUP }} –name ${{ secrets.AKS_NAME }} –overwrite-existing
kubelogin convert-kubeconfig -l azurecli
– name: Uninstall Azure Vote Service
run: |
kubectl delete -f ./app/azure-vote.yaml -n azure-vote
deploy_chaos_experiments:
runs-on: self-hosted
if: ${{ github.event_name == ‘push’ || (github.event_name == ‘workflow_dispatch’ && github.event.inputs.chaos_experiments_operation == ‘deploy_chaos_experiments’) }}
steps:
– name: Checkout
uses: actions/checkout@v4
– name: Azure Login
uses: azure/login@v1
with:
creds: ‘{“clientId”:”${{ secrets.ARM_CLIENT_ID }}”,”clientSecret”:”${{ secrets.ARM_CLIENT_SECRET }}”,”subscriptionId”:”${{ secrets.ARM_SUBSCRIPTION_ID }}”,”tenantId”:”${{ secrets.ARM_TENANT_ID }}”}’
– name: Deploy Chaos Experiment AKS Targets
run: |
for file in ${{ github.workspace }}/json/*.json; do
sed -i ‘s/SUBSCRIPTION_ID_PLACEHOLDER/${{ secrets.ARM_SUBSCRIPTION_ID }}/g’ “$file”
sed -i ‘s/RESOURCE_GROUP_PLACEHOLDER/${{ secrets.AKS_RESOURCE_GROUP }}/g’ “$file”
sed -i ‘s/AKS_NAME_PLACEHOLDER/${{ secrets.AKS_NAME }}/g’ “$file”
done
# Create the chaos target
az rest –method put –uri “https://management.azure.com/${{ secrets.AKS_RESOURCE_ID }}/providers/Microsoft.Chaos/targets/Microsoft-AzureKubernetesServiceChaosMesh?api-version=${{ secrets.API_VERSION }}” –headers ‘Content-Type=application/json’ –body “{“properties”:{}}”
headers='{“Content-Type”:”application/json”}’
# Create the chaos experiments
experimentNames=(“PodChaos-2.1” “DNSChaos-2.1” “HTTPChaos-2.1” “KernelChaos-2.1” “TimeChaos-2.1” “IOChaos-2.1” “StressChaos-2.1” “NetworkChaos-2.1”)
for experimentName in “${experimentNames[@]}”; do
echo “Creating capability ${experimentName}”
az rest –method put –uri “https://management.azure.com/${{ secrets.AKS_RESOURCE_ID }}/providers/Microsoft.Chaos/targets/Microsoft-AzureKubernetesServiceChaosMesh/capabilities/${experimentName}?api-version=${{ secrets.API_VERSION }}” –headers “$headers” –body “{“properties”:{}}”
echo “Creating experiment ${experimentName}”
response=$(az rest –method put –uri “https://management.azure.com/subscriptions/${{ secrets.ARM_SUBSCRIPTION_ID }}/resourceGroups/${{ secrets.AKS_RESOURCE_GROUP }}/providers/Microsoft.Chaos/experiments/${experimentName}?api-version=${{ secrets.API_VERSION }}” –headers “$headers” –body @”${{ github.workspace }}/json/${experimentName}.json”)
echo “Response: $response”
done
– name: Get Principal IDs
id: get_principal_ids
run: |
# Define the experiment names
experimentNames=(“PODCHAOS-2.1” “DNSCHAOS-2.1” “HTTPCHAOS-2.1” “KERNELCHAOS-2.1” “TIMECHAOS-2.1” “IOCHAOS-2.1” “STRESSCHAOS-2.1” “NETWORKCHAOS-2.1”)
principal_ids=””
for experiment_name in “${experimentNames[@]}”; do
echo “Processing experiment: $experiment_name”
api_url=”https://management.azure.com/subscriptions/${{ secrets.ARM_SUBSCRIPTION_ID }}/resourceGroups/${{ secrets.AKS_RESOURCE_GROUP }}/providers/Microsoft.Chaos/experiments/$experiment_name?api-version=2024-01-01″
echo “API URL: $api_url”
experiment_response=$(az rest –method get –uri “$api_url”)
echo “Response for $experiment_name: $experiment_response”
principal_id=$(echo $experiment_response | jq -r ‘.identity.principalId’)
echo “Principal ID for $experiment_name: $principal_id”
principal_ids=”$principal_ids$principal_id,”
done
principal_ids=”${principal_ids%,}” # Remove trailing comma
echo “principal_ids=$principal_ids” >> $GITHUB_ENV
echo “::set-output name=principal_ids::$principal_ids”
- name: Add Principals to AD Group and Assign AKS Cluster Admin Role
  run: |
    IFS=',' read -ra IDS <<< "${{ steps.get_principal_ids.outputs.principal_ids }}"
    for id in "${IDS[@]}"; do
      # Check if the principal is already a member of the AD group
      group_member_check=$(az ad group member check --group "${{ secrets.AKS_AD_GROUP }}" --member-id "$id" --query 'value' -o tsv)
      if [ "$group_member_check" == "false" ]; then
        az ad group member add --group "${{ secrets.AKS_AD_GROUP }}" --member-id "$id"
      else
        echo "Principal $id is already a member of the AD group."
      fi
      # Check if the principal already has the AKS Cluster Admin role
      role_assignment_check=$(az role assignment list --assignee "$id" --role "Azure Kubernetes Service Cluster Admin Role" --scope "/subscriptions/${{ secrets.ARM_SUBSCRIPTION_ID }}/resourceGroups/${{ secrets.AKS_RESOURCE_GROUP }}/providers/Microsoft.ContainerService/managedClusters/${{ secrets.AKS_NAME }}" --query 'length(@)' -o tsv)
      if [ "$role_assignment_check" -eq 0 ]; then
        # Assign AKS Cluster Admin role (note the line continuations)
        az role assignment create \
          --assignee-object-id "$id" \
          --role "Azure Kubernetes Service Cluster Admin Role" \
          --scope "/subscriptions/${{ secrets.ARM_SUBSCRIPTION_ID }}/resourceGroups/${{ secrets.AKS_RESOURCE_GROUP }}/providers/Microsoft.ContainerService/managedClusters/${{ secrets.AKS_NAME }}"
      else
        echo "Principal $id already has the AKS Cluster Admin role assigned."
      fi
    done
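The handoff between the two steps is simple string plumbing: the first step comma-joins the principal IDs into one output value, and the second splits it back apart. A sketch of that round trip in Python (the IDs are made up for illustration):

```python
# Illustrative round trip of the comma-joined principal IDs passed between
# the two workflow steps via step outputs; the IDs are made up.
ids = ["aaaa-1111", "bbbb-2222", "cccc-3333"]

# Step 1: accumulate with a trailing comma, then strip it, as the shell loop does.
principal_ids = ""
for pid in ids:
    principal_ids += pid + ","
principal_ids = principal_ids.rstrip(",")

# Step 2: split back into individual IDs, as `IFS=',' read -ra IDS` does.
parsed = principal_ids.split(",")
print(parsed)
```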
Automating Chaos Studio JSON Templates with GitHub Actions and Terraform
The JSON configuration provided (also see Azure Chaos Studio fault and action library) defines a detailed chaos experiment setup intended for deployment within an AKS (Azure Kubernetes Service) cluster. This configuration, which is stored in a separate root GitHub folder named json, is utilized by the GitHub Action workflows to orchestrate chaos engineering experiments using Chaos Mesh. By keeping these JSON configurations organized in a dedicated folder, the workflows can easily reference and apply them during deployment, ensuring a structured and maintainable approach to chaos testing.
The JSON file specifies the location of the experiment (eastus) and sets up a system-assigned identity for the resources. Within the properties section, the experiment steps are outlined, beginning with “Step 1.” This step includes a single branch (“Branch 1”) that defines a continuous action targeting all pods within the “azure-vote” namespace. The action is configured to simulate pod failures for a duration of five minutes, utilizing a specific Chaos Mesh capability (podChaos/2.1). The JSON configuration also defines a selector (“Selector1”) that identifies the specific AKS cluster targeted by the experiment. This setup ensures that the chaos experiment is precisely targeted and executed within the intended cluster, helping to test the resilience and fault tolerance of the applications running in the “azure-vote” namespace.
By integrating these JSON configurations into the GitHub Action workflows, the automation process becomes seamless. The workflows dynamically replace placeholder values (SUBSCRIPTION_ID_PLACEHOLDER, RESOURCE_GROUP_PLACEHOLDER, and AKS_NAME_PLACEHOLDER) with actual values during execution. This dynamic replacement allows for flexibility and reusability of the JSON configurations across different environments and clusters. The structured approach of keeping these configurations in a dedicated folder and calling them within the GitHub Action workflows ensures a streamlined and efficient process for deploying and managing chaos experiments, ultimately contributing to the robustness and reliability of the AKS-deployed applications.
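The placeholder substitution described above can be sketched as follows. This is an illustrative Python version of what the workflow does before submitting the template with `az rest`; the template snippet and values are examples, not the original files:

```python
# Sketch of the placeholder substitution the workflow performs on the JSON
# template; the resource-ID fragment and values below are illustrative.
template = (
    "/subscriptions/SUBSCRIPTION_ID_PLACEHOLDER"
    "/resourceGroups/RESOURCE_GROUP_PLACEHOLDER"
    "/providers/Microsoft.ContainerService"
    "/managedClusters/AKS_NAME_PLACEHOLDER"
)

values = {
    "SUBSCRIPTION_ID_PLACEHOLDER": "00000000-0000-0000-0000-000000000000",
    "RESOURCE_GROUP_PLACEHOLDER": "rg-chaos-demo",
    "AKS_NAME_PLACEHOLDER": "aks-demo",
}

resource_id = template
for placeholder, value in values.items():
    resource_id = resource_id.replace(placeholder, value)

print(resource_id)
```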
{
  "location": "eastus",
  "identity": {
    "type": "SystemAssigned"
  },
  "properties": {
    "steps": [
      {
        "name": "Step 1",
        "branches": [
          {
            "name": "Branch 1",
            "actions": [
              {
                "type": "continuous",
                "selectorId": "Selector1",
                "duration": "PT5M",
                "parameters": [
                  {
                    "key": "jsonSpec",
                    "value": "{\"action\":\"pod-failure\",\"mode\":\"all\",\"selector\":{\"namespaces\":[\"azure-vote\"]}}"
                  }
                ],
                "name": "urn:csci:microsoft:azureKubernetesServiceChaosMesh:podChaos/2.1"
              }
            ]
          }
        ]
      }
    ],
    "selectors": [
      {
        "id": "Selector1",
        "type": "List",
        "targets": [
          {
            "type": "ChaosTarget",
            "id": "/subscriptions/SUBSCRIPTION_ID_PLACEHOLDER/resourceGroups/RESOURCE_GROUP_PLACEHOLDER/providers/Microsoft.ContainerService/managedClusters/AKS_NAME_PLACEHOLDER/providers/Microsoft.Chaos/targets/Microsoft-AzureKubernetesServiceChaosMesh"
          }
        ]
      }
    ]
  }
}
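Because the Chaos Mesh jsonSpec is itself a JSON document embedded as a string, its inner quotes must be escaped. A quick sanity check of that escaping, using Python's json module on an abbreviated copy of the template:

```python
import json

# Abbreviated copy of the experiment template; the jsonSpec value is a JSON
# document embedded as an escaped string, as in the full template above.
template = json.loads(r'''
{
  "properties": {
    "steps": [
      {"branches": [
        {"actions": [
          {"parameters": [
            {"key": "jsonSpec",
             "value": "{\"action\":\"pod-failure\",\"mode\":\"all\",\"selector\":{\"namespaces\":[\"azure-vote\"]}}"}
          ]}
        ]}
      ]}
    ]
  }
}
''')

# Parse the embedded spec to confirm the escaping is correct.
spec = json.loads(
    template["properties"]["steps"][0]["branches"][0]["actions"][0]["parameters"][0]["value"]
)
print(spec["action"])  # pod-failure
print(spec["selector"]["namespaces"])
```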
Summary
We covered several aspects of automating and managing AKS (Azure Kubernetes Service) clusters and chaos engineering experiments using Terraform and GitHub Actions. We started by detailing the Terraform code used to deploy an AKS cluster, highlighting the configuration of various components such as agent nodes, network settings, security policies, and integrations with Azure services. This automation not only ensures a consistent deployment process but also leverages the power of infrastructure as code to manage complex cloud resources efficiently.
We then explored a GitHub Action workflow designed to automate the deployment and management of Chaos Mesh experiments and the Azure Vote service. This workflow uses triggers based on code changes and manual inputs to execute specific tasks, such as deploying, uninstalling, or running chaos experiments within the AKS cluster. By integrating Azure credentials and Kubernetes configurations, the workflow streamlines the process of setting up and managing these experiments, ensuring that they are applied accurately and securely.
Additionally, we delved into the JSON configurations used for chaos experiments, stored in a dedicated GitHub folder and referenced within the GitHub Action workflows. These configurations define detailed chaos experiment steps and selectors, targeting specific resources within the AKS cluster to simulate various fault scenarios. By organizing these configurations and automating their deployment, we enhance the resilience and fault tolerance of applications running in the cloud.
Together, these discussions illustrate a robust approach to managing cloud infrastructure and testing application resilience through automation and chaos engineering. Utilizing Terraform for infrastructure deployment and GitHub Actions for orchestration and management allows for a streamlined, efficient, and consistent process, ultimately contributing to more reliable and resilient cloud-native applications.
Here are some helpful links from Microsoft Learn that relate to the topics we discussed today:
Create an AKS Cluster – Step-by-step guide to creating an AKS cluster using the Azure portal.
Terraform on Azure Documentation – Comprehensive documentation on using Terraform with Azure, including examples and best practices.
Chaos Studio Overview – An introduction to Azure Chaos Studio, its features, and capabilities.
Deploy Chaos Mesh on AKS – Tutorial on setting up and using Chaos Mesh within an AKS cluster through the Azure portal.
GitHub Actions for Azure – Detailed guide on using GitHub Actions to automate workflows for Azure deployments.
Helm for Kubernetes – Information on using Helm to manage Kubernetes applications on AKS.
Azure Kubernetes Service Documentation – Comprehensive resource for all things AKS, including tutorials, reference architectures, and best practices.
Azure Chaos Studio Tutorial – Instructions on creating chaos experiments using Chaos Studio and the Azure CLI.
Microsoft Tech Community – Latest Blogs –Read More
Building Better Azure Apps: Better Together
Helping you build better apps has been one of our key focus areas in Azure. Our latest tooling focuses on providing guidance for architecting, optimizing, and deploying apps. Whether you’re creating a new proof of concept or improving an existing app, these capabilities can boost productivity and performance. These capabilities are all in Preview, so please give them a try and let us know what you think!
Starting Right: Architecting Your Azure App
Let’s say you’re starting a proof of concept for a new application. Normally, you might spend a lot of time picking services, architecting apps, and deploying them based on industry best practices. Better Together can streamline this process with the below capabilities.
Better Together in Microsoft Copilot for Azure
The Better Together capability, accessible from Copilot, can help you confirm you’re on the right track when building your app. In the past, it might have been time-consuming to learn through docs and videos which services similar apps are using. This capability streamlines that process by recommending services based on patterns that other similar apps have used.
To give this a try, navigate to the Azure Portal and select the Copilot button in the toolbar to open the chat window. Here you can ask questions to get service or architecture recommendations for your app, such as “What are popular services that are deployed with App Service apps like mine?”, “Which database should I use with my ACA app?”, and “What services would you recommend to implement distributed caching?”
Sometimes it’s important to validate if you’re on the right track. When you ask architectural or infrastructure-level questions to Azure Copilot, it helps you discover the most commonly used services for your specific use case. In the example below, after identifying performance bottlenecks in your app and considering implementing distributed caching to enhance performance, the recommendation points to Azure Cache for Redis. This service is widely deployed by many App Service apps similar to yours.
Boosting Performance: Optimizing Your Azure App
If your App Service app is running slower than expected, or you suspect performance bottlenecks, these capabilities can help diagnose and resolve the problems.
Diagnostics Insights (Preview)
Diagnostic logs can return pages of information that are difficult to interpret. This capability makes it easier to spot anomalies and quickly identify bottlenecks. In the Azure Portal, you can efficiently evaluate your application’s CPU usage and track any anomalies by navigating to Diagnose & Solve Problems > Web App Slow. Within this section, you’ll find a chart that provides insights into performance and latency.
Notably, over the last 24 hours, approximately 90% of users accessing this web app experienced low latency.
Another way to access suggestions is to type in “my web app is slow” into Copilot for Azure, which will offer suggestions around any bottlenecks.
Diagnostic charts can sometimes be time-consuming to analyze. However, Copilot offers a helpful Summarization capability. When you input variations of “summarize this page,” Copilot will generate concise summaries of the insights, allowing you to quickly grasp the main points without having to read through every chart and detail.
Application Insights Code Optimizations (Preview)
Performance can be improved by making code-level changes. Code Optimizations helps identify where to make these improvements. By leveraging AI, Code Optimizations detects CPU and memory bottlenecks of your application during runtime. It is available for .NET applications that have Application Insights Profiler enabled. To access Code Optimizations in the Azure Portal, navigate to the Performance blade in Application Insights. For App Service, it’s also available in Diagnose & Solve Problems > Web App Slow.
In this example, some of the performance issues identified may be caused by inefficient code, which can be investigated.
Selecting any of these suggestions will open more details about the performance issue, show where and when in the code it’s occurring, and show the recommended solution.
For many recommendations, a code fix can be generated using the Code Optimizations extension (currently in limited preview) for Visual Studio and Visual Studio Code – Insiders. You can sign up here.
Learn more about Code Optimizations.
Making Improvements: Augmenting Your Azure App
If you have deployed an App Service app and you’re unsure which services to use to improve scalability and reliability for it, these capabilities can help optimize without reinventing the wheel.
Better Together (Preview) in Azure Portal
It can be time-consuming to pick, create, deploy, and connect a service to your App Service app. Better Together helps you deploy and connect popular services to your App Service app, with a primary focus on connecting newly created resources more easily. Navigate to it in the Azure Portal using the Better Together menu item.
Enabling Azure Cache for Redis will automatically create a new Redis instance and establish the connection with your existing App Service app. If you choose to “Create” any of the other services, you’ll be directed to their onboarding flow, where you’ll receive guidance on creating and connecting the service. Stay tuned for the next release for a more customized experience!
Take a look at these capabilities in action with the video below.
Conclusion: Better Together
Azure strives to empower you to create robust, high-performing apps. Whether you’re starting a new app or improving an existing one, we are creating tools and services that can help. Please give these capabilities a try and let us know what you think by leaving a comment or emailing us at bettertogetherteam@microsoft.com.
List Calculated Field with table reference
Hi, in a SharePoint list, I need a calculated field, and I wonder if it’s feasible with just JSON. Users already input a location (drop-down list) and a year. Based on those two values, I manually input a labour rate using a reference table. Is it possible (without transitioning to Power Automate) to have a calculated field output that labour rate? Would a very long list of ‘if statements’ work?
Meeting Bot issue: Did not receive valid response for JoinCall request from call modality controller
I’m trying to join a Teams Meeting with a bot. I used this https://microsoftgraph.github.io/microsoft-graph-comms-samples/docs/articles/index.html#making-an-outbound-call-to-join-an-existing-microsoft-teams-meeting sample.
When the bot attempts to join I get the popup to admit or deny it in the meeting, but as soon as I click admit, it drops.
In the logs I see this message:
Call status updated to Terminated – Did not receive valid response for JoinCall request from call modality controller.. DiagCode: 580#5426.@
I am using the latest (1.2.0.10563 at time of writing) version of Microsoft.Graph.Communications libraries and the problem only started after I updated from 1.2.0.3742 that I was using previously.
I could not find any info on what the call modality controller is, or how to check what, if anything, it is responding with. Any ideas on how to troubleshoot this are welcome.
Tenglong account opening yx0503123
01. I really wish I could live inside a movie, where the next shot is a line of subtitles: Many years later…
02. The way you struggle to earn money may look a little ragged, but the way you rely on yourself is truly beautiful.
03. The moment I saw you there was a tsunami in my heart, yet I stood there quietly and let no one know.
04. Time passes so quickly! It’s almost the new year again; it feels like nothing got done before another year slipped by. We all hope the years will be gentle, but when have the years ever spared anyone?
05. As for life, just smile through it; you’re no longer a child, and even when you can’t hold on, you’re not allowed to cry.
06. Work hard and earn money. Without money, what will you use to care for your family, sustain your love, and keep up your friendships? Your words alone? Come on, everyone is busy.
Dynamics 365 Partner Sandbox – Operations Application
Does the Dynamics 365 Partner Sandbox – Operations Application include Copilot for Finance? If so, can a partner start developing for FO for 895 per year?
How do I access the Project app
Hello
Please, I need your help with this issue.
How do I access the Project app? It says I don’t have a license
Excel Dependent Drop Down Lists not loading
Hello everyone.
I created dependent drop-down lists in Excel using the OFFSET formula.
When I open the sheet, the drop-down lists do not load, but when I re-enter the same formula in Data Validation in the open sheet, the drop-down lists start loading/showing.
The same thing then repeats when I close and reopen the sheet.
Please help.
Town Hall feedback
Hi there,
We’re hurtling towards our first high-level Town Hall and I have a few concerns.
The Q&A function on the whole is not as robust or useful as it was in Live Events. Not having all presenters able to view the In Review tab is frustrating: a number of people in some of our busier meetings want to view and weigh in on whether certain questions should be published, but because this is reserved for Co-organizers, and Co-organizers are limited to 10, we’re having trouble deciding and prioritizing who should be able to preview questions ahead of publishing.
We have found in testing that deleted questions don’t disappear promptly from the Published queue.
And there no longer appears to be any Q&A reporting available to the organiser, co-organisers, or presenters, meaning that the only way to review Q&A is to return to the event in the calendar, and deleted questions are gone forever (we used to delete questions as they were answered to try to control the Published queue).
It looks like sorting by Most Recent vs Most Liked has disappeared meaning the democratisation of what a community would like to have answered has gone. It was already tricky enough because you couldn’t re-order against those parameters so most liked constantly drifted to the bottom. Now the Published queue will be very difficult to manage live and pulling out useful information from the event nigh on impossible.
Role permissions seem to be backwards – how can presenters control who is on-screen, including co-organizers, yet not view In Review questions? Meanwhile, Co-organizers are unable to invite further presenters to the event.
An issue with production is that now as soon as someone shares a powerpoint it’s live to the audience. From the perspective of production, it would be extremely useful if you could bring a presenter’s share content on/off screen like you can with presenter cameras.
In controlling the production for the audience, I like the fact that we can have multiple presenters on screen at once – this is a tremendous improvement on Teams Live. It’s a shame, though, that moving presenters on and off screen is so clunky: having to click through them one at a time is not very slick.
I’d be very happy to be told what of the above is inaccurate or if there are back-end/tenancy settings that can be changed to fix any of the issues highlighted above.
Cheers
Rich
Enable Zero Touch Enrollment of MDE on macOS devices managed by Microsoft Intune
Introduction
Microsoft Defender for Endpoint (MDE) is a unified endpoint security platform that helps protect your organization from advanced threats. MDE provides threat detection, investigation, and response capabilities across Windows, Linux, Android, and macOS devices.
To deploy MDE on macOS devices, you need to install the MDE agent and enroll the devices to the MDE service. You can use Microsoft Intune, a cloud-based device management service, to automate the installation and enrollment process. This blog post explains how to use Intune to achieve zero touch enrollment of MDE on macOS devices.
Prerequisites
Before you start, make sure you have the following:
User assigned with licenses for MDE and Intune.
A supported macOS version (three most recent major releases are supported)
The expectation in this blog post is that the device is already enrolled into Intune. It doesn’t cover the Intune enrollment methods and enrollment type doesn’t change the MDE onboarding.
Configuration Steps
The table below lists the mandatory steps for a successful MDE deployment on macOS. The Purpose column calls out the required configuration steps; click each hyperlink to follow the guided instructions in our Learn docs.
Step 1: Intune Configuration Profile – Extensions
Note: If you already have an existing Configuration profile with a Bundle Identifier, you may want to merge them, since Apple only supports one.
Step 2: Intune Configuration Profile – Custom
Step 3:
Step 4:
Step 5:
Step 6:
Step 7: Onboarding Blob
Step 8: Application – Native Intune
Optional Steps
Additionally, you may want to further customize the MDE configuration. Below are a few suggestions; follow the guided instructions in our Learn docs.
Configure Bluetooth policies for Device Control (starting with macOS 14) – Intune Custom Configuration Profile
Choose between the Beta, Preview, and Production channels – Intune Custom Configuration Profile
Configuration settings for AV, Exclusions, and EDR – Intune Portal or Defender Portal
Reduce attack surface from internet-based events like phishing, exploits, and malicious content – Defender Portal
Deploy Device Control Policies: removable device controls like allow, block, read, write – Intune Portal or Defender Portal
Enable Data Loss Prevention (DLP): Purview’s DLP integration with MDE – Intune Custom Configuration Profile
Verification & Monitoring
The MDE agent will be installed and enrolled silently on the macOS devices that you targeted. The agent icon will appear on the macOS desktop menu bar at the top of the screen.
Refer to the screenshots below, then click the MDE icon to launch the app and view details.
Additionally, you can verify the installation and enrollment status by launching the Terminal app and running the command “mdatp health”.
The output reports the overall MDE health status, including configuration, definitions, and device/org IDs; settings delivered by your configuration profiles are tagged [managed].
As an IT admin, you can launch Microsoft Defender portal to view the device’s health, associated incidents, security recommendations, inventory and discovered vulnerabilities.
Click on the device for more information.
Other Installation Methods
Intune is one of several deployment tools for MDE; you can also choose other ways to deploy it. Below are a few callouts:
Command Line – Manual Deployment
Thanks,
Arnab Mitra
Up Your Organizational Copilot Prompt Game – HLS Copilot Snacks
As organizations roll out Copilot for Microsoft 365, it is imperative that they arm users with the knowledge and resources to be effective prompters. The effectiveness of a user’s prompts largely determines whether they gain the full value of Copilot’s generative AI capability, yet most users are left on their own with a powerful new tool they are unsure how to use properly. Thankfully, Microsoft and the extended Copilot community have provided some great resources that can help organizations empower their end users and up their organizational prompt game.
In this HLS Copilot Snack, I walk through 4 resources that can bring immediate impact within an organization in transforming their user prompts into a powerful tool for AI powered transformation.
To see all HLS Copilot Snacks videos, click here.
Resources:
Copilot Lab
Prompt Buddy
Copilot for Microsoft 365: The art and science of prompting
Enhance your copilot’s responses with prompt modification – Microsoft Copilot Studio | Microsoft Learn
Thanks for visiting – Michael Gannotti LinkedIn
Introducing a new enrollment method for staging corporate Android devices with Microsoft Intune
By: Akriti Srivastava – Product Manager 2 | Microsoft Intune
With Intune’s May (2405) service release, we’re introducing a new enrollment method ‘Device Staging’ for the following Android Enterprise devices:
Corporate-owned fully managed
Corporate-owned work profile devices
The new method simplifies the enrollment experience for frontline workers (FLW), optimizing their productivity by reducing the time spent to set up the device.
What is ‘Device Staging’ and how is this enrollment method different?
Currently, the enrollment process for corporate devices uses a ‘Default’ enrollment token and is completed in 2 stages, first by the admin and then the user. The admin initiates the enrollment process, creates the enrollment token, and then shares it with the user. Then, the user signs into the device using their credentials and navigates through all the provisioning steps to complete enrollment. For more information on this method, review: Set up enrollment for Android Enterprise fully managed devices.
The new method introduces a ‘Staging’ token; enrollment is completed in three stages: first by the admin, second by an admin or third-party vendor, and finally by the user.
In the ‘Staging’ enrollment experience, an admin initiates the process, creates the enrollment token, and then shares device staging token with a third-party vendor or admin. Then, provisioning steps are completed by the third-party admin/vendor. The device remains userless throughout the vendor stage and becomes user affiliated and ready for use only at the last step when the user signs in with their credentials.
With this method, more work is done by the vendor/admin as they perform the enrollment of the device, go through the steps to complete Google registration, and get the device ready (while your organization’s apps are automatically installed in the background).
Getting started
How to begin using the device ‘Staging’ experience
Sign in to the Microsoft Intune admin center.
Navigate to Devices > Android > Android Enrollment.
Under Enrollment Profiles choose either Corporate-owned, fully managed user devices or Corporate-owned devices with work profile.
Create a new profile.
Configure the token.
Stage 1- Actions performed by admin
Creates the staging token by setting the Token type to “Corporate-owned, fully managed, via staging” within the enrollment profile and set the token’s expiration date.
Creates either a dynamic device group or an assignment filter to assign policies and apps during the user stage for devices that will enroll with the newly-created enrollment token.
Important: Dynamic Device group is not supported at the vendor stage. Assignment filters must be used at the vendor stage for targeting apps and policies.
The enrollment token (located in the enrollment profile) and the device are then sent to the third-party vendor or another admin.
Stage 2- Actions performed by third-party vendor/admin
Vendor unboxes the device, puts the battery in, and turns it on.
Vendor goes through the enrollment process, walks through the setup wizard but doesn’t need to input credentials at the sign-in screen. Out-of-box enrollment of devices is performed using the token QR code or the token number provided by the admin.
The vendor stage ends on the home screen with work profile created.
The device is turned off and given to the user.
Stage 3- Actions to be performed by device user
User turns on the phone, goes to the Intune app, and signs in using their credentials.
Note: Some screens that don’t require user input may be skipped, depending on technical feasibility. An “enrollment in progress” screen takes their place.
After completing the final enrollment steps, the device is ready to be used by the user.
Monitoring devices undergoing staging
In the Intune admin center (Devices > All devices), admins can monitor the list of devices which are in the process of staging (vendor stage) and the ones which have completed (user stage). These columns in particular will help admins to view the list and status of the staging devices: Device name, OS (enrollment mode), Primary User.
When the user has not completed the enrollment: When the device is still at the vendor stage. Admins can view the list of the devices which are under the process of staging.
The device name has the prefix Staging, followed by the serial number, the enrollment mode, and the date and time:
Staging_XX1235_AndroidEnterprise_06/09/2022_4.20 AM
The ‘Staging’ prefix indicates that the device is still in the process of staging.
When the user has completed the enrollment: There will be a change in the naming convention of the devices once staging completes.
The device name starts with the username, followed by the enrollment mode and the date and time:
Username_AndroidEnterprise_06/09/2022_4.20 AM
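This prefix convention makes staging devices easy to filter in an exported device list. A trivial sketch using the two example names above (the serial number, username, and timestamp are illustrative):

```python
# The two names below follow the naming conventions described in the post;
# the serial number, username, and timestamp are illustrative.
names = [
    "Staging_XX1235_AndroidEnterprise_06/09/2022_4.20 AM",
    "Username_AndroidEnterprise_06/09/2022_4.20 AM",
]

# Devices still at the vendor stage carry the "Staging" prefix.
still_staging = [n for n in names if n.startswith("Staging_")]
print(still_staging)
```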
Stay tuned to What’s new in Intune for the release and for further additions to this functionality! If you have any questions, let us know in the comments or reach out to us on X @IntuneSuppTeam.
MVP’s Favorite Content: Microsoft AI, SQL, Power Platform
In this blog series dedicated to Microsoft’s technical articles, we highlight our MVPs’ favorite articles along with their personal insights.
Erik David Johnson, AI MVP, Denmark
Empowering responsible AI practices | Microsoft AI
“Featuring a playbook on responsible AI, this resource offers insightful articles on security, ethics, and more. Grounded in Microsoft’s six principles for the responsible development of AI solutions, it provides a practical perspective based on core values, making it an excellent tool for enhancing your understanding of responsible AI.”
Komes Chandavimol, AI MVP, Thailand
18 Lessons, Get Started Building with Generative AI
“Generative AI for Beginners – I would recommend these lessons for anyone who is seriously interested in studying Generative AI but doesn’t know where to start. I have introduced this to my students in class and have seen great feedback on the results. In addition, I have shifted my role to being a coach for my students, providing not only the guidelines but also a small-group clinic where they can ask questions about this content.”
(In Thai: For those who are seriously interested in studying Generative AI but don’t know where to start, allow me to recommend these great lessons from Microsoft, starting from the basics through building a basic Gen AI solution.)
Sergio Govoni, Data Platform MVP, Italy
General availability: Elastic Jobs in Azure SQL Database – Microsoft Community Hub
“Database maintenance is an important factor for Azure SQL as well, and Elastic Jobs is the most complete solution for automating scheduled activities on Azure SQL databases. We are excited to announce the general availability (GA) of Elastic Jobs for Azure SQL Databases! In the previous article, Automating Azure SQL Database maintenance tasks (2° part), we described the initial implementation (preview) of Azure Elastic Job Agents, through which it’s possible to create and schedule processes on one or more Azure SQL databases to execute queries or maintenance tasks. In this article, I will focus on describing the major changes (compared to the previous post) in terms of configuration and security of connections to the target databases.”
*Relevant Blog:
– Italian: Automazione delle attività di manutenzione in Azure SQL Database (3 Parte) – UGISS
George Chysovalantis Grammatikos, Microsoft Azure MVP, Greece
Power Platform on Microsoft Learn | Microsoft Learn
“I highly recommend the MS Power Platform training as well. The content covers services like Power Automate, Power Apps, Power BI, etc., enabling individuals to develop personalized applications, streamline processes, and analyze information with ease.”
*Relevant Activities/Resources:
– What is Power Apps? | Microsoft Power Apps