Category: News
Microsoft’s Copilot: A Frustrating Flop in AI-Powered Productivity
Microsoft’s Copilot was supposed to be the game-changer in productivity, but it’s quickly proving to be a massive disappointment. The idea was simple: integrate AI directly into Word, Excel, PowerPoint, and other Office tools to make our lives easier. But when it comes to actually performing specific functions, Copilot falls flat.
Here’s the problem: when you ask Copilot to alter a document, modify an Excel file, or adjust a PowerPoint presentation, it’s practically useless. Instead of performing the tasks as requested, it often leaves you hanging with vague suggestions or instructions. Users don’t want to be told how to perform a task—they want it done. This is what an AI assistant should do: execute commands efficiently, not just offer advice.
What makes this even more frustrating is that other AI tools, like ChatGPT, can handle these tasks effortlessly. When you ask ChatGPT to perform a specific function, it does so without hesitation. It’s able to understand the request and deliver exactly what’s needed. But Copilot? It struggles with the basics, and that’s unacceptable, especially from a company like Microsoft.
It’s frankly embarrassing that Microsoft can’t get this right. The whole point of integrating AI into these tools was to streamline workflows and boost productivity. But if Copilot can’t even manage simple tasks like formatting a document or adjusting a spreadsheet, then what’s the point? Users don’t need another tool that tells them how to do something—they need one that does it for them.
Microsoft, you’ve missed the mark with Copilot. It’s not just a minor inconvenience; it’s a serious flaw that undermines the value of your Office suite. When other AI tools can easily accomplish what Copilot can’t, it’s time to reevaluate. Users expect more, and frankly, they deserve more for their investment.
What’s been your experience with Copilot? Is anyone else finding it as frustrating as I am? Let’s talk about it.
App using node-fetch as agent
A few days ago, I was looking into a user’s sign-in logs. I noticed an application called Augmentation Loop with the user agent node-fetch/1.0 (+https://github.com/bitinn/node-fetch). Looking into Augmentation Loop, it is one of the apps included in the Conditional Access Office 365 app suite. (https://learn.microsoft.com/en-us/entra/identity/conditional-access/reference-office-365-application-contents)
According to this site (https://petri.com/microsoft-revamps-outlook-one-outlook-vision/), it is a way of coordinating all the various types of data and services consumed by Outlook.
From what I can see, Augmentation Loop sign-ins always appear in between Microsoft Office sign-ins:
I tried cross-referencing the app ID (4354e225-50c9-4423-9ece-2d5afd904870) against the Azure app ID list (https://learn.microsoft.com/en-us/microsoft-365-app-certification/azure/azure-apps); however, it is not there.
I also searched through all applications in the Azure admin portal, and it is not there either. A Google search doesn’t return anything.
Can someone please explain what application or service is using the node-fetch agent?
Introducing the MDTI Premium Data Connector for Sentinel
The MDTI and Unified Security Operations Platform teams are excited to introduce an MDTI premium data connector available in the Unified Security Operations Platform and standalone Microsoft Sentinel experiences. This connector enables customers with an MDTI premium license and API license to apply the powerful raw and finished threat intelligence in MDTI, including high-fidelity indicators of compromise (IoCs), across their security operations to detect and respond to the latest threats.
Microsoft researchers, with the backing of interdisciplinary teams of thousands of experts spread across 77 countries, continually add new analysis of threat activity observed across more than 78 trillion threat signals to MDTI, including powerful indicators drawn directly from threat infrastructure. In Sentinel, this intelligence enables enhanced threat detection, enrichment of incidents for rapid triage, and the ability to launch investigations that proactively surface external threat infrastructure before it can be used in campaigns.
This blog will highlight the exciting use cases for the MDTI premium data connector, including enhanced enrichment, threat detection, and hunting that customers can tap into when enabling both the standard and premium MDTI data connectors. It will also cover how customers can easily get started with this out-of-the-box connector.
Dynamic Incident Enrichment
The MDTI data connector can help analysts respond to threats at scale by automatically enriching incidents with MDTI premium threat intelligence, evaluating indicators in an incident with dynamic reputation data (everything Microsoft knows about a piece of online infrastructure) to mark its severity and automatically triage it accordingly. Comments are added to the incident outlining the reputation details with links to further information about associated threat actors, tools, and vulnerabilities.
Threat Detection
With a flip of the switch, the MDTI premium data connector immediately enables detections for threats, including activity from the more than 300 named threat actor groups tracked by Microsoft. When enabled in Microsoft Sentinel, this connector takes URLs, domains, and IPs from a customer environment via log data and checks them against a dynamic list of known-bad IoCs from MDTI. When a match occurs, an incident is automatically created, and the data is written to the Microsoft Sentinel TI blade. By enabling this rule, Microsoft Sentinel users know they have detections in place for threats known to Microsoft.
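The matching logic described above can be sketched in a few lines of Python. This is an illustrative simplification, not the connector’s actual implementation: the feed contents and log shapes below are made up, and a real deployment matches indicators via KQL analytics rules inside Sentinel.

```python
# Hypothetical known-bad indicators, standing in for the dynamic MDTI feed.
KNOWN_BAD_IOCS = {"198.51.100.7", "malicious.example.com"}

def detect(log_entries):
    """Compare network entities from log data against the IoC list and
    raise an 'incident' record for every match."""
    incidents = []
    for entry in log_entries:
        for field in ("ip", "domain", "url"):
            value = entry.get(field)
            if value in KNOWN_BAD_IOCS:
                incidents.append({"entity": value, "source_event": entry["id"]})
    return incidents

# Synthetic log data for illustration.
logs = [
    {"id": 1, "ip": "203.0.113.5"},
    {"id": 2, "ip": "198.51.100.7"},
    {"id": 3, "domain": "malicious.example.com"},
]
print(detect(logs))
```

Each hit would, in the real connector, become a Sentinel incident enriched with the indicator’s reputation data.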
External Threat Hunting
Customers can pivot off the IoCs to investigate further and boost their understanding of the threat with MDTI’s repository of raw and finished intelligence. Finished intelligence, or written intelligence and analysis, includes articles, activity snapshots, and Intel Profiles about actors, tooling, and vulnerabilities. It provides crucial context and vital information such as targeting information, TTPs (tactics, techniques, and procedures), and additional IoCs.
Customers can also explore advanced internet data sets created by a massive collection network that maps threat infrastructure across the internet every day, locating relationships between entities on the web and malicious infrastructure, tooling, and backdoors outside the network at incredible scale. Below is an example of how to effectively detect and hunt for Indicators of Compromise (IoCs) associated with threat actors using Sentinel with the MDTI premium connector enabled.
Begin by following these steps:
Filter IoCs by MDTI Source – set the source filter to “Premium Microsoft Defender Threat Intelligence” within the Sentinel TI Blade
Filter IoCs by specific threat actor using tags – for example, `ActivityGroup:AQUA BLIZZARD`
Next, customers can leverage the enriched data from the MDTI feed in their Log Analytics workspace using KQL queries to hunt. They can also create custom analytic rules:
Users can also create an Analytics Rule to better align with their hunting workflow:
For the sake of this example, our detection rule is very simple. However, customers can enhance rules with their own detection logic:
Customers can then extend their investigation and gather more intelligence on the threat actor in the Unified Security Operations Platform MDTI experience by taking the indicator value and performing a search in the global search feature:
Customers can click on the intel profiles directly to learn more about the actor and access additional IoCs compiled by Microsoft’s threat research teams:
Getting started with MDTI Connector
To install/access the UX for the Premium MDTI data connector, users will need to install the Threat Intelligence (Preview) Solution:
Sign up here to participate. We will enable this private preview in the customer environment three (3) business days after submission.
Three business days after the previous step, customers should navigate to the Threat Intelligence (Preview) Solution and select Create
Customers should then select the subscription, resource group, and workspace name for which they wish to add this solution.
Select Review + create
Select Create
After selecting Create, customers will be navigated to the page with the deployment of the solution. Please allow a couple of minutes for the deployment to complete.
Then, use this feature flag, https://aka.ms/MDTIPremiumFeedPrPFeatureFlag, to log in again to Microsoft Sentinel.
After installing the preview solution and adding the feature flag to the URL, users will be able to access the Premium Microsoft Defender Threat Intelligence data connector. Below is a screenshot showing what the Data Connector page in Sentinel should look like:
Connecting the Data Connector
Navigate to the Data Connectors blade in Sentinel:
Select the Premium Microsoft Defender Threat Intelligence (Preview) connector:
Select Open connector page:
Select Connect to connect the data connector (note, if already connected, the disconnect button will allow customers to disconnect the data connector):
After connecting the data connector, customers should navigate to the Threat Intelligence Blade in their Sentinel Workspace, and soon premium indicators will be added.
Conclusion
Microsoft delivers leading threat intelligence built on visibility across the global threat landscape, made possible by protecting Azure and other large cloud environments, managing billions of endpoints and emails, and maintaining a continuously updated graph of the internet. By processing an astonishing 78 trillion security signals daily, Microsoft can deliver threat intelligence in MDTI that provides an all-encompassing view of attack vectors across various platforms, ensuring Sentinel customers have comprehensive threat detection and remediation.
If you are interested in learning more about MDTI and how it can help you unmask and neutralize modern adversaries and cyberthreats such as ransomware, and to explore the features and benefits of MDTI please visit the MDTI product web page.
Also, be sure to contact our sales team to request a demo or a quote. Learn how you can begin using MDTI with the purchase of just one Copilot for Security SCU here.
Microsoft Tech Community – Latest Blogs
Simulate sine wave with timestep different than overall model timestep
Hello All,
I need your help understanding where the mistake is, and I’d like to know how I can implement the following:
I have a Simulink model running at a certain fixed time step. Inside the model there is a sine wave function connected to a digital clock, creating a sine wave with the sample rate set to -1 (inherited), meaning it uses the Simulink model time step. Instead of -1, I would like to use a time step that is faster than the Simulink time step. I tried using different values, but I am getting an error: "Digital Clock has an invalid sample time. Only constant (inf) or inherited (-1) sample times are allowed in the asynchronous subsystem".
Can you please suggest what other options I can try?
Appreciate all your help and guidance.
#simulink MATLAB Answers — New Questions
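To make the goal concrete, here is a hedged sketch in plain Python (not Simulink) of what the question is after: evaluating the same sine on a grid ten times finer than the model’s base step. The step sizes and frequency below are made-up illustration values, not taken from the question.

```python
import math

model_step = 0.01   # base fixed-step size of the model (s) -- assumed value
fine_step = 0.001   # desired faster sample time (s) -- assumed value
f = 5.0             # sine frequency (Hz) -- assumed value

# Integer-indexed grids avoid floating-point drift in the time vectors.
t_coarse = [i * model_step for i in range(10)]
t_fine = [i * fine_step for i in range(100)]
coarse = [math.sin(2 * math.pi * f * t) for t in t_coarse]
fine = [math.sin(2 * math.pi * f * t) for t in t_fine]

# Same 0.1 s span, but the fine grid carries 10x the samples.
print(len(coarse), len(fine))
```

In Simulink terms, the asker wants the sine source to run at `fine_step` while the rest of the model runs at `model_step`, which is what the asynchronous-subsystem restriction in the error message is blocking.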
getting statistics from within a mask within an image
We have an image that represents data that only makes sense when it is analyzed in numerical format.
Specifically, the data needs to be analyzed as a function of radius as shown below
I’m interested specifically in the max and min values within each of the defined areas with respect to the center point.
I’ve been looking at a few examples online, and it seems this should work when you pull the data from a mask.
However, the issue is that I seem to be getting values that are not realistic.
How to get pixel value inside a circle – MATLAB Answers – MATLAB Central (mathworks.com)
how to draw circle in an image? – MATLAB Answers – MATLAB Central (mathworks.com)
this is what I am doing
clear
clear
img = double(imread('img121.jpg')); % no filtration
img = -(0.0316*img) + 8.3; % we did this as we can't calibrate the film; we scan the same film over and over and it changes by 80 pixels
img = imrotate(img, 90);
img = imgaussfilt(img, 1.5);
figure, imagesc(img)
axis image
height2 = 3.6;
caxis([0 height2])
colorbar
title(' ')
impixelinfo
%# make sure the image doesn’t disappear if we plot something else
hold on
%https://www.mathworks.com/matlabcentral/answers/1931825-how-to-get-pixel-value-inside-a-circle
%below looks like what we want
%https://www.mathworks.com/matlabcentral/answers/1931825-how-to-get-pixel-value-inside-a-circle
%# define points (in matrix coordinates)
%3"
cpx = 2050;
cpy = 2020;
inchlist = [12,10.5,9,7.5,6,4.5];
%draw lines on heel axis
for n = 1:size(inchlist,2)
    inch = inchlist(n)/4;
    hcirc = drawcircle('Center',[cpx,cpy],'Radius',inch*590,'StripeColor','red');
    mask1 = hcirc.createMask;
    maxval = max(img(mask1));
    minval = min(img(mask1));
    uniformity = maxval/minval
    % p1 = [cpy-100,cpx+inch*590];
end
Even after getting this max and min value, I will need to remove 10 to get rid of noise. Extra credit if you can point me to a solution for that too.
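The mask-then-min/max idea can be sanity-checked outside MATLAB. Below is a hedged Python sketch of the same technique on a tiny synthetic array: build a circular mask around a center point and take the max/min of the pixels inside it (analogous to `createMask` followed by `img(mask1)`). The image and radius here are placeholders, not the poster’s data.

```python
def circle_stats(img, cy, cx, radius):
    """Max and min of pixels within `radius` of (cy, cx), row/col indexed."""
    vals = []
    for y, row in enumerate(img):
        for x, v in enumerate(row):
            if (y - cy) ** 2 + (x - cx) ** 2 <= radius ** 2:
                vals.append(v)
    # One way to discount the most extreme pixels as noise: sort and trim,
    # e.g. sorted(vals)[10:-10], before taking max/min.
    return max(vals), min(vals)

# 5x5 synthetic "image" with values 0..24.
img = [[y * 5 + x for x in range(5)] for y in range(5)]
print(circle_stats(img, 2, 2, 1))
```

If the MATLAB values look unrealistic, checking a known synthetic input like this helps separate a masking bug from a calibration problem.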
thank you
How to classify a folder of images?
I have a folder that contains a set of images, including images of three denominations of currency. I need MATLAB code to determine the number of images for each denomination of currency according to its features and colors. classify a folder of images MATLAB Answers — New Questions
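One simple version of the color-based idea can be sketched as follows (in Python rather than MATLAB, and on synthetic pixel lists rather than real image files): classify each image by comparing its mean RGB color to a reference color per denomination, then tally the results. The reference colors and denominations below are invented placeholders.

```python
# Hypothetical reference colors for three denominations.
REFS = {"5": (200, 60, 60), "10": (60, 180, 60), "20": (60, 60, 200)}

def mean_color(pixels):
    """Average (R, G, B) over a list of RGB tuples."""
    n = len(pixels)
    return tuple(sum(p[c] for p in pixels) / n for c in range(3))

def classify(pixels):
    """Assign the denomination whose reference color is nearest (squared distance)."""
    mc = mean_color(pixels)
    return min(REFS, key=lambda k: sum((a - b) ** 2 for a, b in zip(REFS[k], mc)))

def count_by_denomination(images):
    counts = {k: 0 for k in REFS}
    for px in images:
        counts[classify(px)] += 1
    return counts

# Synthetic "images" as short pixel lists.
images = [
    [(210, 50, 55), (190, 70, 65)],   # reddish
    [(55, 190, 50)],                  # greenish
    [(50, 55, 210), (70, 65, 190)],   # bluish
]
print(count_by_denomination(images))
```

A robust MATLAB solution would use real features (e.g., color histograms or a trained classifier) rather than a single mean color, but the counting structure is the same.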
IT admin unable to approve an uploaded custom app to Teams
Hello everyone,
My team and I have developed a Copilot chatbot which we’re ready to make available to our superusers (basically a group of testers) initially. Eventually, we would like to roll out the app to the rest of our organization.
Here are the steps we followed :
1. We’ve created a new app in Apps: Developer Portal (microsoft.com) pointing to the client ID within the chatbot configuration.
2. We validated and published the app by downloading the app package.
3. We uploaded the app package in Teams and submitted it for approval
4. Our IT received our request, but when approving, nothing really happens; the app remains in a blocked status.
Additionally, he reports that he sees a pending action on the publish side:
We are clueless as to what’s missing here, and we would like some guidance on the troubleshooting steps. We would like to grant as few privileges as possible, especially since we are not ready to roll out the app to the org yet.
Best
Faten
Copilot a let down
Microsoft’s attempt at integrating Copilot into its Office suite has been nothing short of a letdown. What was touted as the next big thing in productivity tools has turned out to be a frustrating experience for many users. The promise was grand—Copilot was supposed to revolutionize how we work in Word, Excel, PowerPoint, and more, but the reality has been far from it.
Let’s start with the basics. Copilot struggles to execute even the simplest of prompts. Whether you’re trying to format a document in Word, generate data insights in Excel, or create a presentation in PowerPoint, Copilot often fails to deliver. It’s supposed to be an AI-powered assistant, yet it feels more like a sluggish tool that barely gets the job done. For something that’s supposed to save time and enhance productivity, Copilot ends up wasting more time as users grapple with its limitations.
In contrast, tools like ChatGPT are light years ahead. When you ask ChatGPT to help with a task, it understands context, executes commands efficiently, and delivers accurate results. Whether it’s generating text, helping with coding, or providing insights, ChatGPT has proven itself as a reliable assistant that can handle a wide array of tasks.
But Copilot? It can’t even handle a basic document format without hiccups. It’s as if Microsoft has launched a half-baked product, expecting users to tolerate its shortcomings while they work out the kinks. This isn’t the first time we’ve seen a tech giant overpromise and underdeliver, but it’s particularly disappointing coming from Microsoft, a company that has the resources and expertise to do better.
The worst part? Users are paying for this. Copilot isn’t a free add-on—it’s a feature that’s supposed to justify its cost with enhanced productivity. But when it can’t even perform fundamental tasks correctly, it feels more like a waste of money.
Microsoft, if you’re listening, it’s time to get your act together. Copilot needs significant improvements if it’s going to compete in the AI assistant space. Right now, it’s not even in the same league as ChatGPT. Users deserve better for the investment they’ve made.
What are your thoughts? Has anyone had a different experience, or do you agree that Copilot has been a massive disappointment? Let’s discuss.
Gatekeeper: Enforcing security policy on your Kubernetes clusters
Microsoft Defender for Containers secures Kubernetes clusters deployed in Azure, AWS, GCP, or on-premises using sensor data, audit logs and security events, control plane configuration information, and Azure Policy enforcement. In this blog, we’ll take a look at Azure Policy for Kubernetes and explore the Gatekeeper engine that is responsible for policy enforcement on the cluster.
Each Kubernetes environment is architected differently, but Azure Policy is enforced the same way across Azure Kubernetes Service (AKS), Amazon Elastic Kubernetes Service (EKS) in AWS, Google Kubernetes Engine (GKE) in GCP, and on-premises or IaaS. Defender for Containers uses an open-source framework called Gatekeeper to deploy safeguards and enforcements at scale. We’ll get into what Gatekeeper is in a moment, but first, let’s orient ourselves with a simplified reference architecture for AKS.
Every Kubernetes environment has two main components, the control plane which provides the core Kubernetes services for orchestration and the nodes which house the infrastructure that runs the applications themselves. In Azure managed clusters, the control plane includes the following components:
An API server named kube-apiserver which exposes the Kubernetes API and acts as the front end for the control plane
A scheduler named kube-scheduler which assigns newly created pods to available nodes based on scheduling criteria such as resource requirements, affinity and anti-affinity, and so on
A controller manager named kube-controller-manager which responds to node health events and other tasks
A key-value store named etcd which backs all cluster data
A cloud controller manager, logically named cloud-controller-manager, that links the cluster into Azure (this is the primary difference between Kubernetes on-premises and any cloud-managed Kubernetes)
We look to the API server when we need to enforce and validate a policy. For example, let’s say we want to set limits on container CPU and memory usage. This is a good idea to protect against resource-exhaustion attacks, and it’s generally good practice to set resource limits on cloud compute anyway. This configuration comes from the container spec – lines 53-54 in this example YAML template:
In this case, I didn’t specify any limit on CPU or memory usage for this container. Defender for Cloud will flag this as a recommendation that we can delegate, remediate, automate via a Logic App, or deny outright:
It’s not hard to imagine how Defender for Cloud can identify affected containers – it’s simply looking for quota values populated in the container spec. But Defender for Cloud is also giving us the option to enforce this recommendation by denying the deployment of any container with no specified resource limit. How does this work? To answer this, we need to dive into Gatekeeper.
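That identification step can be sketched in a few lines. This assumes a pod spec already parsed into a Python dict (e.g., from YAML); it simply flags any container missing CPU or memory limits, which is what the recommendation looks for. It is an illustration, not Defender for Cloud’s actual code.

```python
def containers_missing_limits(pod_spec):
    """Return names of containers that lack a CPU or memory limit."""
    missing = []
    for c in pod_spec.get("spec", {}).get("containers", []):
        limits = c.get("resources", {}).get("limits", {})
        if "cpu" not in limits or "memory" not in limits:
            missing.append(c["name"])
    return missing

# Synthetic pod spec: one compliant container, one without limits.
pod = {
    "spec": {
        "containers": [
            {"name": "web", "resources": {"limits": {"cpu": "250m", "memory": "256Mi"}}},
            {"name": "sidecar", "resources": {}},
        ]
    }
}
print(containers_missing_limits(pod))
```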
Defender for Containers enforces Azure Policy through an add-on called Azure Policy for Kubernetes. This is deployed as an Arc-enabled Kubernetes extension in AWS, GCP, and on-premises environments and as a native AKS add-on in Azure. The add-on is powered by a Gatekeeper pod deployed into a single node in the cluster.
Gatekeeper is a widely deployed solution that allows us to decouple policy decisions from the Kubernetes API server. Our built-in and custom benchmark policies are translated into “CustomResourceDefinition” (CRD) policies that are executed by Gatekeeper’s policy engine. Kubernetes includes admission controllers that can view and/or modify authenticated, authorized requests to create, modify, and delete objects in the Kubernetes environment. There are dozens of admission controllers in the Kubernetes API server, but there are two that we specifically rely on for Gatekeeper enforcement. First, the MutatingAdmissionWebhook is a controller that calls mutating webhooks – in serial, one after another – to read and modify the pending request. Second, the ValidatingAdmissionWebhook controller goes into action during the final validation phase of the operation and calls validating webhooks in parallel to inspect the request. A validating webhook can reject the request which will deny creation, modification, or deletion of the resource. Because the validating controller is invoked after all object modifications are complete, we use validating admission webhooks to guarantee that we are inspecting the final state of an object.
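To make the CRD translation concrete, here is a minimal Gatekeeper constraint template and constraint in the upstream style. This is an illustrative sketch (the template name and Rego package are invented), not the template Azure Policy actually ships:

```yaml
apiVersion: templates.gatekeeper.sh/v1
kind: ConstraintTemplate
metadata:
  name: k8srequiredlimits              # hypothetical name
spec:
  crd:
    spec:
      names:
        kind: K8sRequiredLimits
  targets:
  - target: admission.k8s.gatekeeper.sh
    rego: |
      package k8srequiredlimits
      # flag any container that has no CPU limit in its spec
      violation[{"msg": msg}] {
        container := input.review.object.spec.containers[_]
        not container.resources.limits.cpu
        msg := sprintf("container %v has no CPU limit", [container.name])
      }
---
apiVersion: constraints.gatekeeper.sh/v1beta1
kind: K8sRequiredLimits
metadata:
  name: require-cpu-limits
spec:
  match:
    kinds:
    - apiGroups: [""]
      kinds: ["Pod"]
```

The constraint template carries the Rego logic; the constraint instance scopes it to a set of resource kinds, which is where per-assignment parameters and exclusions get wired in.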
Gatekeeper has several components called “operations” that can be deployed into one monolithic pod or as multiple individual pods in a service-oriented architecture. The Azure Policy add-on deploys Gatekeeper’s operations individually in three pods:
The audit process, which evaluates and reports policy violations on existing resources (this should always run as a singleton pod to avoid contention and prevent overburdening the API server)
The validating webhook, and
The mutating webhook.
You can see these pods in your cluster by filtering on the ‘gatekeeper.sh/system’ label:
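The screenshot corresponds to a label query along these lines (filtering on the label key, since the value may vary by distribution; this requires a connected cluster):

```shell
# list Gatekeeper pods across all namespaces by label key
kubectl get pods --all-namespaces -l gatekeeper.sh/system
```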
Here we can see one gatekeeper-audit pod and two gatekeeper-controller pods. Note that the two webhook pods are not distinguished by function – we’ll encounter this later on when we view logs from the mutating admission controller. Running these operations in different pods allows for horizontal scaling on the webhooks and enables operational resilience among the three components.
In our earlier example, we wanted to deny the creation of any container that doesn’t have CPU and/or memory usage limits defined in its container spec. Defender for Containers will use Gatekeeper’s validating admission webhook to reject any misconfigured requests at the API server. But what if we wanted to take some other action – for instance, if we were rolling out a new policy and wanted to audit compliance rather than directly move into enforcement? Or what if we want to exempt certain namespaces or labels from a policy rule? For this, we will need to explore parameters and effects.
First, let’s find our policy definition in the Azure portal by navigating to Microsoft Defender for Cloud > Environment settings and opening the Security Policies in the settings for our Azure subscription. Our built-in policy definitions come from the default Microsoft Cloud Security Benchmark which contains 240 recommendations covering all Defender for Cloud workload protections. Filtering on a keyword will surface our policy definition:
Click the ellipses at the right of the definition to view the context menu. Select “Manage effect and parameters” to open a configuration panel with several options:
First, let’s talk about the policy effects. Sorted by their order of evaluation from first to last, we have:
Disabled – this will prevent rule evaluation throughout this subscription.
Deny – this will block creation of a new resource that fails the policy. (Note that it will not remove existing resources that have already been deployed.)
Audit – this will generate an alert but not block resource creation. Audit is evaluated after Deny to prevent double-logging of an undesired resource.
What about the additional parameters? Our policy rule allows us to set parameters such as the maximum allowed memory and CPU values, exclude namespaces from monitoring, select labels for monitoring, and exclude images from all container policy inspection. This configuration block is critical for managing exemptions, such as containers that should be allowed to run as root. Several Kubernetes namespaces (kube-system, gatekeeper-system, and azure-arc among them) are excluded from these policy definitions by default.
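As a sketch of what an assignment's parameter values might look like (the parameter names here are assumptions; the built-in definition in the portal is authoritative):

```json
{
  "effect": { "value": "deny" },
  "excludedNamespaces": { "value": ["kube-system", "gatekeeper-system", "azure-arc"] },
  "cpuLimit": { "value": "1000m" },
  "memoryLimit": { "value": "1Gi" }
}
```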
If we inspect the policy itself, we will see its execution logic. Of particular interest is the “templateInfo” section in lines 178-181:
This invokes the URI for the CustomResourceDefinition (CRD), a YAML file that describes the schema of the constraint and specifies the actual constraint logic in the Rego declarative language. In our example, the CRD is located at
You might have noticed that our Azure policy effects of “audit” and “deny” map directly to the validating admission webhook, which can check resource create/modify/delete requests against our policy configuration. What about the other Gatekeeper component, the mutating admission webhook? Instead of simply rejecting creation of a container that is missing a resource usage quota, we could dynamically edit the API request to set our own limit and allow the container to spawn. Let’s check out another built-in Azure policy definition to see this one in action.
First, let’s take a look at the policy reference list from the AKS documentation. Search or scroll down to find a policy named “[Preview]: Sets Kubernetes cluster containers CPU limits to default values in case not present.” The documentation includes links to the Azure portal (login required) and the JSON source code for the definition in the Azure-Policy GitHub, currently at version 1.2.0-preview as of the date of this blog post. Let’s click into the Azure portal where we can view the policy definition and assign it to our Kubernetes cluster. Notice our available effects – instead of “Audit” and “Deny”, we now have “Mutate”:
The linked CRD (line 64) is a short one, assigning a limit of “500m” if not present:
(Direct link: https://store.policy.core.windows.net/kubernetes/mutate-resource-cpu-limits/v1/mutation.yaml)
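Paraphrased as a sketch (consult the direct link above for the authoritative file; the metadata name is illustrative), the mutation uses Gatekeeper's Assign kind:

```yaml
apiVersion: mutations.gatekeeper.sh/v1
kind: Assign
metadata:
  name: azurepolicy-cpu-limits          # illustrative name
spec:
  applyTo:
  - groups: [""]
    kinds: ["Pod"]
    versions: ["v1"]
  # write a default CPU limit into every container...
  location: "spec.containers[name:*].resources.limits.cpu"
  parameters:
    assign:
      value: "500m"
    pathTests:
    # ...but only when no CPU limit is already present
    - subPath: "spec.containers[name:*].resources.limits.cpu"
      condition: MustNotExist
```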
We can assign the policy to the tenant, subscription, or resource group(s) in our environment, set exclusions, and optionally configure resource selectors and overrides to customize the rollout of this policy. Once deployed, we will need to wait for up to 15 minutes for the Azure Policy add-on to pull changes to policy assignments. Once the new assignment is updated, the add-on will add the appropriate constraint template and constraints to the policy engine. On the same fifteen-minute timer, the add-on will execute a full scan of the cluster using the Audit operation.
Let’s connect to our Kubernetes cluster and run some commands to validate our new mutate-effect policy. First, we’ll need to set up kubeconfig by setting subscription context and saving credentials for our cluster. Follow the instructions in the documentation and check by running ‘kubectl cluster-info’ to validate that the shell is connected correctly:
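Assuming the Azure CLI is installed (the resource names below are placeholders), the standard sequence is:

```shell
# point the CLI at the right subscription
az account set --subscription "<subscription-id>"
# merge the cluster's credentials into ~/.kube/config
az aks get-credentials --resource-group <resource-group> --name <cluster-name>
# confirm the shell is talking to the intended cluster
kubectl cluster-info
```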
View the mutation resources downloaded by the Azure Policy add-on using 'kubectl get assign' (for audit and deny policies, the corresponding constraint templates appear under 'kubectl get constrainttemplates'):
Now let’s spawn a container that will violate this policy to view the mutation in action. You can use any YAML template or the single-image application wizard in the Azure console. If you use the wizard, be sure to zero out the default limits in Application Details.
Since we’re using a mutation effect, the mutating admission webhook in Gatekeeper should insert default values for CPU and memory when it’s called by the admission controller before passing the object creation request back to the API server. The container should deploy without any interference from a Deny effect policy because the request was modified prior to the validating admission webhook being called. Sure enough, our deployment is successful!
Now let’s check the logs for the gatekeeper pod to view audit and mutation events. Note that the two gatekeeper-controller webhook pods are not differentiated in the console – check both pod names to find the one that is executing mutate actions in your cluster.
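To pull those logs from the command line (pod names are placeholders and will differ per cluster):

```shell
# find the gatekeeper pods deployed by the add-on
kubectl get pods -n kube-system | grep gatekeeper
# check each controller pod for mutation events
kubectl logs -n kube-system <gatekeeper-controller-pod> | grep mutation_applied
```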
We can see the mutate event at the end of the log:
Copied in text form, it reads as follows.
{"level":"info","ts":1723829551.9305975,"logger":"mutation","msg":"Mutation applied","process":"mutation","Mutation Id":"a4155642-5417-48c9-a15a-e31040807e66","event_type":"mutation_applied","resource_group":"","resource_kind":"Pod","resource_api_version":"v1","resource_namespace":"default-1723829546418","resource_name":"web-dvwa-nolimit-8c9f967d4-","resource_source_type":"Original","resource_labels":{"app":"web-dvwa-nolimit","pod-template-hash":"8c9f967d4"},"iteration_0":"Assign//azurepolicy-k8sazurev1resourcelimitscpu-f81c1c050a0fb6b965bc:1"}
We can validate that our new container has a limit applied by inspecting the pod YAML:
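For instance (the pod name is a placeholder), a JSONPath query pulls the limit directly instead of reading the full YAML:

```shell
kubectl get pod <pod-name> -o jsonpath='{.spec.containers[0].resources.limits.cpu}'
# should print the mutated value, 500m
```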
There it is – the mutation applied a CPU limit before passing the request back to the API server, and the resource was created successfully!
For more reading on Gatekeeper and Azure Policy for Kubernetes, check out these resources:
https://learn.microsoft.com/en-us/azure/governance/policy/concepts/policy-for-kubernetes
https://learn.microsoft.com/en-us/azure/aks/use-azure-policy
https://learn.microsoft.com/en-us/azure/aks/policy-reference
https://open-policy-agent.github.io/gatekeeper/website/docs/
https://github.com/open-policy-agent/opa
Microsoft Tech Community – Latest Blogs –Read More
how to modify code for distributed delay
I have code that gives a solution of a delay logistic equation with a discrete delay.
tau = 1;
tspan = [0 20];
y0 = 0.5;
sol = dde23(@ddefunc, tau, y0, tspan);
% Plot the solution
figure;
plot(sol.x, sol.y, 'LineWidth', 2);
xlabel('Time (days)');
ylabel('Population');
legend('y');
% Define the delay differential equation
function g = ddefunc(t, y, Z)
r = 1.5;
y_tau = Z;
g = r * y * (1 - y_tau);
end
Now I want to modify my code for distributed delay as attached below.
Can someone guide me how to deal with distributed delay?
distributed delay, delay differential equations, solve MATLAB Answers — New Questions
Sizeof double float int etc
Hello there,
I need to know how to find an equivalent function in MATLAB to the sizeof function in C++.
For example, in C++ if I write sizeof(double) I would get the amount of memory needed to store a double.
I need something very similar with a matrix now. I will be storing bigger and bigger matrices and I need to find their size in memory.
Could someone of you please help me?
all best,
:)
sizeof, memory, size MATLAB Answers — New Questions
To RESHAPE number of elements must not change
Hi all, I'm trying to do a simple ERP study using EEGLAB. An error message was thrown: "To RESHAPE the number of elements must not change." Let me know how to fix it. Thanks. please help MATLAB Answers — New Questions
MS Teams Feedback/Requests
1) Add the ability for users to choose how long they would like Teams Banner Notifications to be set for; currently the notification disappears too quickly.
2) Allow users to pin more than 15 contacts in the chat window
3) In the People window, there is currently an error where, if a contact cannot be found in the Favorites tab but is already added in the "All contacts" tab, MS Teams will not let you update, reconcile, or delete the contact, and the person cannot be added to the Favorites tab.
4) Allow users to create their own chat categories
Sentinel Data collection rule initial setup
I am trying to set up a Data collection rule (Common Event Format (CEF) via AMA) to get our firewall logs into Sentinel via a syslog server, but I am not sure which syslog facility or facilities to use. Is there an article about setting up these rules? I have searched but found nothing relevant.
How to grant permissions on behalf of the organization Script
Hello everyone!
We wrote a script to create the API/app/service principal in Entra ID and assign some delegated and application permissions.
However, I need to grant admin consent on behalf of the organization for these permissions from within the script itself.
I have tried several times, in different ways, but all without success.
Does anyone know how this can be done? If it can be done? And could you help me with this?
Thank you all.
Best regards
Conditional Formatting with Multiple Cell Values
Hello, I am hoping someone knows how to help me set up conditional color coding in Excel. I need certain cells to populate one color if the other cells are filled out wrong and another color if the cells are filled out right.
Example:
If there is a number in cell E and cell H says F1IL and cell J isn’t blank and cell P is blank all these cells turn red.
But if there is a number in cell E and cell H says (anything but F1IL) and cell J is blank and cell P has a number then all these cells turn green.
My goal is to have errors pop up when someone doesn’t fill out the information correctly, but to also not have anything highlighted if there is no information entered on that line.
Thank you for your help!
How to datasample exponential data without losing the exponential decay?
Hi all!
So this is the question:
I have a table with one column (std_spk_avg, attached). This column has 400 numbers. The data follow an exponential distribution, so when I resample using the 'resample' function in MATLAB to obtain 1000 iterations, I lose the exponential decay in each iteration…
How can i code with this function so as not to lose the exponential decay in my 1000 iterations?
Thank you all in advance 🙂
datasample, exponential, table, exponential data, struct MATLAB Answers — New Questions
how to organize input dimensions for LSTM classification
Hi guys,
I’m trying to train a lstm using sequential data to predict classes, and I’m a little confused by the format of input data and labels.
For the sake of simplicity, I’ll use an example to mimic my situation.
let’s say I’m trying to use temperature data to predict 3 cities: A, B, and C.
Within each city, I have temperature readings from 10 thermometers over 2 seconds at a sampling frequency of 100 Hz.
So far, at each observation, I have a 200-by-10 matrix (time points by thermometer).
temperature_matrix = randi(40, 200, 10) % pseudodata
We collected the temperature data 40 times throughout the day at each city, and this will give us 120 observations (3 cities * 40). Within each observation, I have a 200 by 10 matrix.
As for my input format, I now have a 120 by 1 cell array, and again within each cell array is a 200 by 10 matrix.
temperature_input = cell(120,1)
for ii = 1:length(temperature_input)
temperature_input{ii} = randi(40, 200, 10)
end
labels = [repmat("city A", 40,1); repmat("city B", 40,1); repmat("city C", 40,1)]
Per my understanding, if I were to have a time step of 10, I should make a sliding window with a size of 5 and move it down the time dimension with a step of 1. That is to say, for each 200-by-10 temperature_matrix, I now slice it into 196 2D arrays, where each array is 5-by-10 (window size by thermometer).
My question is how this sliding window plays a part in the input format. The sliding window creates a fourth dimension in my example; the other three dimensions are observation, time, and thermometer. I think my overall structure is still a 120-by-1 cell array, but I don't know how to organize the dimensions within each entry.
Also, out of curiosity, will it mess up the structure if I transpose the time-point-by-thermometer matrices? I'm only asking because I've seen examples with the sequence in either rows or columns.
Best,
FY
lstm, input, dimension MATLAB Answers — New Questions
Discovered but not crawled
Hi
I have a problem with Bing Webmaster. Most of my site URLs, including the root domain, have errors when I request "URL Inspection". I have searched many times and read different articles, including Bing Help forums, but didn't find a possible solution. Please, someone help me with these issues. For further tests I am including my site URL, TechSAA. My site speed is Good, as you can check on speed testers.
Here is the image
I am using “Bing Webmaster Url Submission” plugin with API access. I don’t know why Bing webmaster is sending this error. While Google webmaster have no errors and indexing all of my site pages.
I am not a technical person, so please help me; if you can, recommend any WordPress plugin to solve my problem.
I am also using Cloudflare (Cloudflare firewall) and Yoast (Yoast IndexNow).
I am waiting for best help.
Thank you Very much.
What determine the storage size in sharepoint online admin center
I am working on 2 online tenants, one has 1.85 storage size:-
the other has 1.02 storage size:-
So what determines this storage size, and why is it different between the two tenants?