Month: October 2024
New Exchange 2019 node tries to connect to Exchange 2016 on port 2525 when using SMTP relay
We had a pair of Exchange 2016 servers that are used only for SMTP relay and user management, one server in each geographic datacenter. We just added Exchange 2019 to the environment, one node in each datacenter. When I shut down Exchange 2016 in one datacenter and try to perform SMTP relay through Exchange 2019, E2019 tries to talk to E2016 over port 2525 before it accepts mail for delivery, creating a 10+ second delay. When E2019 is down, E2016 does not exhibit similar behavior and fires the message right away with no delay. I am not sure why this happens, but I want to remove this dependency of E2019 on reaching E2016. Here is the message I see on E2019 (192.168.1.20 is E2016); I can also observe this communication in Wireshark: Failed to connect. Winsock error code: 10060, Win32 error code: 10060, Destination domain: internalproxy, Error Message: A connection attempt failed because the connected party did not properly respond after a period of time, or established connection failed because connected host has failed to respond 192.168.1.20:2525.
Conditional Format for column cells that fit within a threshold
I have a conditional format for my column that highlights values from top to bottom that fit within a threshold of $35,906,142.00. The formula I am using for this condition is
=SUM($X$2:X2) <= 35906142
It stops highlighting before the full $35,906,142.00 is used, but there are ranked values further down the column that could still fit within the threshold alongside the ones already counted before the total exceeded $35,906,142.00. What formula should I use to include those values? The numbers must stay in this descending order, however, because they are ranked by a score in another column.
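What is being asked for here is a greedy "first fit" scan down the ranked list, rather than the straight running total the current formula computes. As an illustration of the difference (a language-neutral sketch in Python, not an Excel formula; the numbers are invented for the example):

```python
def greedy_fit(values, threshold):
    """Scan a ranked list top to bottom and flag each value that still
    fits within the remaining budget, skipping values that don't fit
    but continuing to consider the ones after them."""
    remaining = threshold
    flags = []
    for v in values:
        if v <= remaining:
            flags.append(True)   # this value fits; spend the budget
            remaining -= v
        else:
            flags.append(False)  # too big; skip it but keep scanning
    return flags

# With a threshold of 30: a prefix-sum rule highlights only the first
# value (20), since 20+15 exceeds 30. The greedy scan also picks up
# the 10 further down, because 20+10 still fits.
print(greedy_fit([20, 15, 10, 8, 5], 30))  # [True, False, True, False, False]
```

Translating this back into a conditional-format rule typically requires a helper column that carries the running "remaining budget" from row to row, since a single conditional-format formula cannot easily express the skip-and-continue behavior.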
UPDATED ILT Course Retirement- MS-4006: Copilot for Microsoft 365 for Administrators
Original post from August 27th, 2024, has been removed
Edit: With the release date of MS-4017: Manage and extend Microsoft 365 Copilot being moved to November 22nd, 2024, we want to provide more time for Partners and Trainers to better prepare for the retirement of MS-4006: Copilot for Microsoft 365 for Administrators.
Therefore, we have extended the retirement of MS-4006 to November 29th, 2024. This will be reflected in the next published Title Plan, and on MS Learn within the next two weeks.
MS-4006: Copilot for Microsoft 365 for Administrators
Credential: N/A
UPDATED Retirement date: November 29th, 2024
MS-4006 (Microsoft 365 Copilot for Administrators) was originally designed over a year ago to help administrators prepare for enabling Microsoft 365 Copilot in their tenants. At the time, Copilot administration functionality had not been released, so the primary preparation for Copilot was to ensure that organizations implemented Microsoft 365’s security and compliance controls that would ultimately affect how data was protected and used in Microsoft 365 Copilot.
We also knew from our research that many customers weren’t following Microsoft’s best practices for permissions, users, and policies (which are covered in MS-102). Therefore, the Modern Work team was asked to create a simplified 1-day course that introduced Microsoft 365 Copilot and summarized key points from MS-102 related to the Microsoft 365 security and compliance controls. Hence, MS-4006 was born.
In the early days of Copilot, MS-4006 served our customers well as they prepared to implement Microsoft 365 Copilot. However, with the recent release of the Microsoft 365 Copilot administrative features and extensibility options, our customers’ needs have changed. Rather than focusing on the security and compliance features that were in MS-4006 (which are still covered in MS-102), we must now target our training on the new Copilot administrative controls and extensibility features. The Modern Work team has heard feedback from its stakeholders, partners, trainers and customers. Their call for change was the driving force behind our decision.
Replacement course: MS-4017: Manage and extend Microsoft 365 Copilot
UPDATED Release date: Releasing November 22nd, 2024
Credential: N/A
As such, we are retiring MS-4006 and replacing it with a new one-day ILT course offering: MS-4017: Manage and extend Microsoft 365 Copilot.
Please read below for more information pertaining to our future ILT course offering:
The course begins with the same Learning Path (LP) from MS-4006 that introduces Microsoft 365 Copilot. This LP includes modules covering the Copilot design and implementation requirements, which are still valuable to new Copilot customers. It also includes the module that summarizes key Microsoft 365 security and compliance features that affect Microsoft 365 Copilot deployments. The detailed security and compliance LPs and labs from MS-4006 that were appropriated from MS-102 have been removed; only this summarized module remains.
A new LP is being added related to Copilot administration. This LP begins with a module on how organizations should apply the principles of Zero Trust to their Microsoft 365 Copilot deployments. It then includes a module on managing Microsoft Copilot, followed by a module on managing Microsoft 365 Copilot administration.
The course concludes with a new LP that guides admins in preparing for Microsoft Copilot extensibility. This LP begins with a module on Copilot extensibility fundamentals and concludes with a module on choosing a Copilot extensibility development path.
Please note: This is not a support forum. Only comments related to this specific blog post content are permitted and responded to.
Microsoft Tech Community – Latest Blogs – Read More
Announcing the Microsoft ISV AI Envisioning Day: Identify and Prioritize Use Cases for AI Solutions
We are excited to announce the upcoming Business Envisioning webinar, Microsoft ISV AI Envisioning Day: Identify and Prioritize Use Cases for AI Solutions, designed specifically for Independent Software Vendors (ISVs) to get acquainted with Business Envisioning for AI applications. This event is a fantastic opportunity for ISVs to learn how to leverage AI to drive innovation and business value.
What to Expect
During the Business Envisioning webinar, we will walk you through a comprehensive framework that will enable you to create a development plan for building AI applications. The session will cover:
Business Envisioning: Understand how to identify and prioritize business use cases for AI solutions. Learn the importance of aligning AI initiatives with your business goals to maximize value.
The Business, User Experience, and Technology Prioritization Framework: The business, experience, technology (BXT) framework enables ISVs to evaluate the potential of their use cases. See how this exercise helps ISVs structure the details of each use case to better evaluate their viability.
Use Case Prioritization: Employ the BXT rankings as an agile method to evaluate and differentiate the value and learning opportunities of each use case being considered, and then implement prioritization accordingly.
Why Attend?
By attending the Business Envisioning webinar, you will:
Gain Valuable Insights: Learn from Microsoft experts about the latest trends and best practices in AI application development.
Network with Peers: Connect with other ISVs and share experiences, challenges, and solutions.
Accelerate Your AI Journey: Get practical advice and actionable steps to kickstart your AI projects and bring them to market faster.
How to Register
Register now for the Microsoft ISV AI Envisioning Day: Identify and Prioritize Use Cases for AI Solutions session. The session will run monthly in multiple time zones, so use this link to check for upcoming sessions.
We look forward to seeing you at the ISV AI Envisioning Day: Identify and Prioritize Use Cases for AI Solutions webinar and helping you unlock the full potential of AI for your business!
Save money on your Sentinel ingestion costs with Data Collection Rules
This article is co-authored by Brian Delaney, Andrea Fisher, and Jon Shectman.
As digital environments continue to expand, Security Operations teams are often asked to optimize costs even as the amount of data they need to collect and store grows exponentially. Teams may feel forced to choose between dropping a particular data set or log source and staying within their limited security budget.
Today, we’ll outline a strategy you can use to reduce your data volume while also collecting and retaining the information that really matters. We’ll show you how to use Data Collection Rules (DCRs) to drop information from logs that are less valuable to you. Specifically, we’ll first discuss the thought process around deciding what’s important in a log to your organization. Then we’ll show you the process of using DCRs to “project-away” information you don’t want or need using two log source examples. This process saves direct ingress and long-term retention costs, and reduces analyst fatigue.
One word of caution: only you can really decide what’s important to your organization in a particular log or table. Any transformation you configure can be undone, but data that arrived while a “project-away” rule was in place will not have been captured and cannot be recovered. This is why we’re spending time discussing the thought process of deciding what’s really important.
A Word about DCRs (or What is a DCR and Why Should I Care?)
We won’t have space in this blog entry to go deep into DCRs, as they can quickly get complicated. For a thorough discussion, please visit Custom data ingestion and transformation in Microsoft Sentinel | Microsoft Learn.
There are two points that we need to discuss here. First, what exactly is a DCR and why should I care? A DCR is one way that Sentinel and Log Analytics give you a high degree of control over specific data that actually gets ingested into your workspace. Think of a DCR as a way to manipulate the ingestion pipeline. For our purposes here, DCRs can be thought of as a set of basic KQL queries applied to incoming logs that allow you to do something to that data: filter out irrelevant data, enrich existing data, mask sensitive attributes, or even perform Advanced Security Information Model (ASIM) normalization. As you’ve probably guessed by now, it’s this first capability (filter out irrelevant data) we’re concerned with here.
Second, for our purposes, there are two kinds of DCRs: standard and workspace.
Standard DCRs are currently supported for AMA-based connectors and workflows using the new Logs Ingestion API. An example of a standard DCR is the one used for the Windows Security Events collected through the AMA.
Workspace transformation DCRs serve supported workflows in a workspace that aren’t served by standard DCRs. A Sentinel workspace can have only one workspace transformation DCR, but that DCR will contain separate transformations for each input stream. An example of a workspace DCR is the one used for AADNonInteractiveSigninLogs collected via diagnostic settings.
Workspace DCRs do not apply when a standard DCR is used to ingest the data.
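For orientation, the transformation itself lives in a DCR's dataFlows section as a transformKql property. Below is an abridged, illustrative sketch of that shape only (the stream and destination names are placeholders, and most required DCR properties are omitted for brevity):

```json
{
  "properties": {
    "dataFlows": [
      {
        "streams": ["Microsoft-SecurityEvent"],
        "destinations": ["myWorkspaceDestination"],
        "transformKql": "source | where EventID !in (4634, 4647)"
      }
    ]
  }
}
```

The transformKql query always starts from the virtual table `source`, which represents the incoming records for that stream before they are written to the workspace.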
Finding High-volume Sources
To optimize costs, it is important to understand where all the data is going before making difficult decisions about which logs to drop and which logs to keep. We recommend focusing on high-volume sources, as they will have the biggest return on your efforts.
Determining High-volume Tables
First, if you haven’t already, you’ll want to determine your high-volume billable tables (as not all tables are billable) to see where you can have the most impact when optimizing costs. You can get this with a simple KQL query:
Usage
| where TimeGenerated > ago(30d)
| where IsBillable
| summarize SizeInGB=sum(Quantity) / 1000 by DataType
| sort by SizeInGB desc
Record Level Analysis
Once you have determined your high-volume billable tables, you may want to look at volume per record type. You may need to experiment with different combinations to find high-volume patterns that carry little security value. For example, with the SecurityEvent table, it is useful to know which Event IDs contribute the most volume so you can assess their security value. Keep in mind that the count of events is not directly related to cost, as some events are much larger than others. For this, we will use the _BilledSize column, which contains the billed size of the record in bytes:
SecurityEvent
| summarize SizeInMB=sum(_BilledSize) / 1000 / 1000 by EventID
| sort by SizeInMB desc
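Once you have the per-event sizes from a query like the one above, a quick offline pass can tell you which handful of event IDs dominate your volume and therefore deserve scrutiny first. A small Python sketch of that triage (the event IDs and sizes here are invented for illustration, not real workspace data):

```python
def top_contributors(rows, coverage=0.8):
    """Given (event_id, size_mb) pairs, return the smallest set of
    event IDs that accounts for at least `coverage` of total volume,
    largest contributors first."""
    total = sum(size for _, size in rows)
    picked, running = [], 0.0
    for event_id, size in sorted(rows, key=lambda r: -r[1]):
        picked.append(event_id)
        running += size
        if running >= coverage * total:
            break
    return picked

# Example: two event IDs account for 80% of this (made-up) volume.
sizes = [("4624", 50.0), ("4688", 30.0), ("4634", 15.0), ("4647", 5.0)]
print(top_contributors(sizes))  # ['4624', '4688']
```

Reviewing only that short list, rather than every event ID, keeps the assessment effort proportional to the potential savings.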
Column Level Analysis
In some cases, you may not be able to discard entire records, but there may be an opportunity to discard columns or parts of columns. When browsing a data source, you may find that some columns carry significant amounts of data. For example, in AADNonInteractiveUserSignInLogs, the ConditionalAccessPolicies column is a large array recording the status of each conditional access policy and whether it applied to the background token activity. For this we will use the estimate_data_size() function:
AADNonInteractiveUserSignInLogs
| extend ColumnSize = estimate_data_size(ConditionalAccessPolicies)
| summarize RecordSizeInMB=sum(_BilledSize) / 1000 / 1000, ColumnSizeInMB=sum(ColumnSize) / 1000 / 1000
| extend PercentOfTotal = ColumnSizeInMB / RecordSizeInMB
Examining the Process
Let’s look at this process of reducing ingestion using DCRs in two examples – one for workspace DCRs and one for standard.
AADNonInteractiveSigninLogs
SOC engineers and managers often worry about the cost of bringing in additional logs like the AADNonInteractiveSigninLogs. Non-interactive user sign-ins are sign-ins performed by a client app or an OS component on behalf of a user. Unlike interactive user sign-ins, these sign-ins do not require the user to supply credentials; instead, they use a token or code to authenticate on behalf of a user. You can see how bad actors might make use of this type of authentication, so there is good reason to ingest these logs.
There is a potentially significant optimization opportunity with the AADNonInteractiveSigninLogs table. One of the fields contains information about conditional access policy evaluation. Fifty to eighty percent of the log data is typically conditional access policy data. In many cases the non-interactive log will have the same conditional access outcome as occurred in the interactive log; however the non-interactive volume is much higher. In the cases where the outcome is different, is it critical for you to know which specific conditional access policy allowed or blocked a session? Does knowing this add investigative value?
For this example, we’ll use a workspace DCR since there is no standard DCR available for this data type (e.g. it’s a diagnostic logs dataflow).
If you already have a workspace DCR, you’ll edit it; if you don’t, you’ll need to create one first.
Once you have it, click Next. Then click on </> Transformation editor on the top and use the following query if you want to remove all ConditionalAccessPolicies from this table:
source
| project-away ConditionalAccessPolicies
Alternatively, since this array is sorted so that applied policies (success/failure) appear at the top, you could keep only the first few policies with this transformation:
source
| extend CAP = todynamic(ConditionalAccessPolicies)
| extend CAPLen = array_length(CAP)
| extend ConditionalAccessPolicies = tostring(iff(CAPLen > 0, pack_array(CAP[0], CAP[1], CAP[2], CAP[3]), todynamic('[]')))
| project-away CAPLen, CAP
SecurityEvent
The security event logs are another source that can be verbose. The easiest way to ingest the data is to use the standard categories of “minimal”, “common,” or “all.” But are those options the right ones for you? Some known noisy event IDs may have questionable security value. We recommend looking closely at what you are collecting in this table presently and appraising the noisiest events to see if they truly add security value.
For example, you’ll likely want event IDs like 4624 (“An account was successfully logged on”) and 4688 (“A new process has been created”). But do you need to keep 4634 (“An account was logged off”) and 4647 (“User initiated logoff”)? These might be useful for auditing, but less so for breach detection. You could drop these events out of your logs by setting the category to “minimal,” but may find that you’re missing other event IDs that you find valuable.
If you are using collection tier “all,” the XPath query does not explicitly list these events by ID. To remove an event, you will need to replace the XPath in the DCR so that it selects all but the specific events, with a query such as:
Security!*[System[(EventID!=4634) and (EventID!=4647)]]
In the event you are using collection tier “common” or “minimal,” the event IDs will already be listed in the DCR’s XPath queries and you can simply remove them along with the corresponding “or” statement from the query:
Security!*[System[(EventID=1102) or (EventID=1107) or (EventID=1108) or (EventID=4608) or (EventID=4610) or (EventID=4611) or (EventID=4614) or (EventID=4622) or (EventID=4624) or (EventID=4625) or (EventID=4634) or (EventID=4647) or (EventID=4648) or (EventID=4649) or (EventID=4657)]]
Alternatively, you can drop these events by adding a transformKql statement to the DCR, though in this case it will be less efficient than using XPath:
source
| where EventID !in (toint(4634), toint(4647))
For more information on updating a standard DCR, review the Monitoring and Modifying DCRs section of https://techcommunity.microsoft.com/t5/microsoft-sentinel-blog/create-edit-and-monitor-data-collection-rules-with-the-data/ba-p/3810987
In Summary
As digital footprints grow exponentially, it is increasingly important that security teams remain judiciously intentional in the data that they collect and retain. By thoughtfully selecting data sources and refining data sets with DCRs, you can ensure that you are spending your security budget in the most efficient and effective manner.
Mail Merge from an Email – New Outlook
I am seeing this mail merge feature in the new Outlook but am struggling to find info about it. Does anyone know about this feature? It is also not working when I try to send.
Windows Problems when Surround Sound Amplifier Turned Off.
Operating System 23H2 updated 8th October 2024.
Audio amplifier – Yamaha HTR-6063. This has HDMI pass through mode when powered off.
I was watching a video in the Firefox browser in full screen. I then turned off the audio amplifier, and when the image returned it was not full screen: the right edge of the browser had moved inboard, showing the desktop. I had to press ESC to get control of the browser back. I clicked to maximise the browser and the right-hand edge moved further left. Clicking maximise again restored full screen and control.
This is mostly what happens when the audio amplifier is turned off when in full screen browser video. When out of full screen I haven’t noticed a problem.
Other times, when in full screen and the amplifier is turned off:
The right-hand edge moves in, then goes back to full-screen browser video.
Or the video stays in full screen and behaves normally.
Creating content scan jobs in microsoft purview information protection scanner using powershell
Hello Everyone
Please let me know what methods are available to create content scan jobs and add repositories in an automated way.
Is there a PowerShell method? I do not see commands available to create content scan jobs in the Microsoft Purview Information Protection scanner.
Thank you in anticipation
Entra Cloud Sync – Will Creating a New Configuration Sync Immediately With Defaults
Setting up a new Entra Cloud sync agent for a customer who already has an established on-prem AD and Azure AD with a mess of non-synced accounts and passwords between them.
So I need to do a slow roll on this thing and filter syncing by OUs in AD.
I know I have to create a new configuration in the Azure portal but what are the risks of the default config kicking in and doing a sync of all my users before I have a chance to filter it down to just the OUs I want to sync?
Should I disable the on-prem agent before creating a config in the cloud? That “Create” button is giving me anxiety 😐
thanks,
Dan
URGENT: Updated course release for MS-4002
Updated course release:
MS-4002: Prepare security and compliance to support Microsoft 365 Copilot
New release date: November 22nd, 2024
Please see the initial blog post for any additional information about this course: Coming soon: MS-4002: Prepare security and compliance to support Microsoft 365 Copilot – Microsoft Community Hub
We apologize for any inconvenience this may cause in your deliveries.
Thank you
For ILT Courseware Support, please visit: aka.ms/ILTSupport
If you have ILT questions not related to this blog post, please reach out to your program for support.
Troubleshooting IIS 500 Errors Using httpErrors
It’s not uncommon for users to encounter a 500 – Server Error while browsing an application hosted on IIS. This typically indicates something went wrong on the server side. But what exactly? To start, IIS logs an entry with a substatus code in its log files—these codes are key to diagnosing what went wrong. Additionally, you might find relevant information in the System or Application event logs. But what if no events are logged there? That’s when things get tricky, and the issue becomes harder to troubleshoot. In such cases, understanding how to work with httpErrors in IIS can be helpful.
customErrors vs. httpErrors: What’s the Difference?
Before jumping to the solution, let’s understand the difference between customErrors and httpErrors, as they are often confused.
In simple terms:
customErrors handles exceptions thrown by .NET code (like 404, 403, or 500 errors).
httpErrors deals with errors that IIS itself generates, not those from the application.
customErrors is part of the System.Web section, which governs how .NET errors are managed. Here’s a sample of customErrors in a web.config:
<configuration>
<system.web>
<customErrors defaultRedirect="GenericError.htm" mode="RemoteOnly">
<error statusCode="500" redirect="InternalError.htm"/>
</customErrors>
</system.web>
</configuration>
On the other hand, httpErrors belongs to system.webServer, part of the IIS configuration. Here’s an example of httpErrors in the IIS configuration:
<system.webServer>
<httpErrors errorMode="Custom" defaultResponseMode="ExecuteURL">
<error statusCode="400" path="/iisstart.htm" responseMode="ExecuteURL" />
</httpErrors>
</system.webServer>
Is it an IIS or .NET Error? How to Tell?
When troubleshooting a 500 error, the first step is to determine whether it’s originating from the IIS pipeline or the .NET application. To isolate this, enable Failed Request Tracing (FREB) logs for the website in question. These logs provide detailed information about the request’s journey through the IIS pipeline and reveal which module is responsible for the error.
How to Enable and Read FREB Logs: Reading FREB logs can give you valuable insights into where the problem lies. You can find a step-by-step guide on enabling and analyzing these logs on Microsoft’s community hub.
If the 500 error is coming from the .NET pipeline, you’ll want to adjust the customErrors settings. One common approach is to disable customErrors temporarily to get more detailed error messages. This is especially useful when debugging issues in development or staging environments.
Here’s how to set the customErrors mode:
On: Custom error pages are enabled. Without a defaultRedirect, users will see a generic error page.
Off: Disables custom error pages. Detailed ASP.NET errors are shown to both local and remote clients.
RemoteOnly: Custom error pages are shown to remote clients, while detailed errors are visible to the local server. This is the default setting.
For example:
<configuration>
<system.web>
<customErrors mode="Off" />
</system.web>
</configuration>
Tweaking httpErrors for IIS Modules
If the error is coming from an IIS module (i.e., not related to your .NET application), tweaking the httpErrors settings is the way to go. One useful trick is to configure IIS to pass through the existing response rather than suppress it. This lets you see the full error, which can provide more context for debugging.
Here’s how to enable the pass-through configuration for httpErrors:
<configuration>
<system.webServer>
<httpErrors existingResponse="PassThrough" />
</system.webServer>
</configuration>
This setting prevents IIS from masking the actual error response, making it easier to pinpoint the root cause.
Your case might not match these examples exactly. To learn more about httpErrors, see HTTP Errors <httpErrors> | Microsoft Learn.
FindFirst not working
I have a 365 database with the following code:
Set rst = CurrentDb().OpenRecordset("SchedFieldTable")
mFID = DLookup("[FID]", "SchedFieldTable", "[Width]+[Left] > 10.5 * 1440")
MsgBox "[FID] = " & mFID
rst.FindFirst "[FID]= " & mFID
where FID, Width, and Left are all number fields in the table SchedFieldTable. The message box result is: [FID]=25, which is correct.
I get an error on the line:
rst.FindFirst “[FID]= ” & mFID
I have also tried: rst.FindFirst "[FID]= '" & mFID & "'"
Can you help? This is driving me nuts!!!
Looking for a Function that Counts the Amount of Cells that apply to a certain rule made
I am looking for a function that counts the number of cells in a column that satisfy a conditional formatting rule I made. The cells in the column that match the rule are highlighted in a different color, but I cannot find a function that gives me a count of those highlighted cells. I have tried using COUNTIF functions and typing in the equation that I have as the rule parameter, but it is not working.
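Worksheet functions such as COUNTIF cannot see fill colour, including colour applied by conditional formatting; the usual workaround is to count using the same condition the rule evaluates, rather than the highlighting it produces. A minimal Python sketch of the idea (the values and rule here are illustrative):

```python
def count_matching(values, rule):
    """Count the values that satisfy the same predicate the
    conditional-format rule uses, instead of counting by fill
    colour (which formula functions cannot inspect)."""
    return sum(1 for v in values if rule(v))

# Example: count values at or under a threshold of 25.
print(count_matching([40, 10, 25, 5], lambda v: v <= 25))  # 3
```

In a worksheet, the equivalent is usually a SUMPRODUCT or COUNTIFS expression built from the rule's own condition, or a helper column that evaluates the rule's formula as TRUE/FALSE and is then counted.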
Nested if functions
Hi guys, I'm trying to do a nested IF function revolving around this: =IF(F2="less than high school",1, IF(F2="high school",2,IF(F2="associates",3,IF(F2="bachelors",4,IF(F2="masters",5, IF(F2="more than masters",6,))))))
It shows me 3, 4, and 5 but shows a 0 for 1, 2, and 6. Any ideas on how to fix this?
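A common cause of this symptom is that the cells which return 0 contain stray spaces or different casing, so the exact text comparisons in the IF chain fail for just those labels. As an illustration (a Python sketch with the same assumed labels, not an Excel formula), normalising the text before the lookup makes every branch resolve:

```python
def education_rank(text):
    """Map an education label to its rank, tolerating stray
    whitespace and case differences -- a common cause of the
    'some labels work, others return 0' symptom."""
    ranks = {
        "less than high school": 1,
        "high school": 2,
        "associates": 3,
        "bachelors": 4,
        "masters": 5,
        "more than masters": 6,
    }
    # strip() removes leading/trailing spaces; lower() removes case
    # differences; unknown labels fall back to 0 like the IF chain.
    return ranks.get(text.strip().lower(), 0)

print(education_rank("  High School "))  # 2
```

In the worksheet itself, wrapping the cell reference in TRIM (and comparing case-insensitively, which Excel's = already does) often resolves the mismatched branches.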
What to do if your Sentinel Data Connector shows as [DEPRECATED]
Several Sentinel users raised the alarm that some of the data connectors they were using suddenly show as deprecated in the user interface.
The first thing you need to know is that your data has not stopped flowing. It’s still being happily delivered to the CommonSecurityLog or Syslog table. The analytic rules are still applying to the data. Workbooks and Playbooks should work exactly the same way they always have.
This change was actually meant to be a benefit. We’ve recently deprecated the Log Analytics agent – sometimes referred to as the MMA or OMS agent – and replaced it with the shiny new Azure Monitor Agent (AMA). There are many benefits to moving to the AMA, including faster performance and support for multihoming. Learn more about them here.
But for our purposes, the benefit is that instead of needing lots of different connectors based on specific solutions, you can use a single connector (Common Event Format for AMA) for anything that will write to the CommonSecurityLog. There is another one called the Syslog for AMA that does the same for Syslog. Documentation on how to install the CEF and Syslog data connectors can be found here.
I do have one more gotcha for you. If you have already shifted to the Common Event Format data connector and want to tidy up by deleting the deprecated connectors, you can’t. You’ll get an error. A fix is on the way.
Policing a Sandbox: Integrity Guarantees for Dynamic Container Workloads
This article was originally posted on confidentialcontainers.org by Magnus Kulke. Read the original article on confidentialcontainers.org. The source for this content can be found here.
In a previous article we discussed how we can establish confidence in the integrity of an OS image for a confidential Guest that is supposed to host a collocated set (Pod) of confidential containers. This article covers the integrity of the more dynamic part of a confidential container's lifecycle. We'll refer to this phase as “runtime”, even though from the perspective of the container workload it might well be before the actual execution of an entrypoint or command.
A Confidential Containers (CoCo) OS image contains static components like the kernel and a root filesystem with a container runtime (e.g. kata-agent) and auxiliary binaries to support remote attestation. We've seen that those can be covered in a comprehensive measurement that will remain stable across different instantiations of the same image, hosting a variety of Confidential Container workloads.
Why it’s hard
The CoCo project decided to use a Kubernetes Pod as its abstraction for confidential containerized workloads. This is a great choice from a user’s perspective, since it’s a well-known and well-supported paradigm to group resources and a user simply has to specify a specific runtimeClass for their Pod to launch it in a confidential TEE (at least that’s the premise).
For the implementers of such a solution, this choice comes with a few challenges. The most prominent one is the dynamic nature of a Pod. In OCI terms, a Pod is a Sandbox, in which one or more containers can be created, deleted, and updated imperatively via RPC calls to the container runtime. So, instead of a concrete piece of software that acts in reasonably predictable ways, we are giving guarantees about something that is inherently dynamic.
This would be the sequence of RPC calls issued to the Kata agent in the Guest VM (for brevity we'll refer to it as the Agent below) when we launch a simple Nginx Pod. Two containers are launched, because a Pod includes the implicit pause container:
create_sandbox
get_guest_details
copy_file
create_container
start_container
wait_process
copy_file
…
copy_file
create_container
stats_container
start_container
stats_container
Furthermore, a Pod is a Kubernetes resource which adds a few hard-to-predict dynamic properties, examples would be SERVICE_* environment variables or admission controllers that modify a Pod spec before it’s launched. The former is maybe tolerable (although it’s not hard to come up with a scenario in which the injection of a malicious environment variable would undermine confidentiality), the latter is definitely problematic. If we assume a Pod spec to express the intent of the user to launch a given workload, we can’t blindly trust the Kubernetes Control Plane to respect that intent when deploying a CoCo Pod.
Restricting the Container Environment
There are a few options to address this:
Option 1: A locked-down Kubernetes Control Plane that only allows a specific set of operations on a Pod. This is tough and implementation-heavy, since the Kubernetes API is very expressive and it's hard to predict all the ways in which a Pod spec can be modified to launch unintended workloads, but there is active research in this area. It could be combined with a secure channel between the user and the runtime in the TEE that allows users to perform certain administrative tasks (e.g. view logs) from which the k8s control plane is locked out.
Option 2: A log of all the changes that are applied to the sandbox. We can record RPC calls and their request payloads into the runtime registers of a hardware TEE (e.g. TDX RTMRs or TPM PCRs), which are included in the hardware evidence. This would allow us to replay the sequence of events that led to the current state of the sandbox and verify that it's in line with the user's intent before we release a confidential secret to the workload. However, not all TEEs provide facilities for such runtime measurements, and as pointed out above: the sequence of RPC calls might be predictable, but the payloads are determined by a Kubernetes environment that cannot easily be predicted.
Option 3: A combination of the two approaches above. A policy can describe a set of invariants that we expect to hold true for a Pod (e.g. a specific image layer digest), relax certain dynamic properties that are deemed acceptable (e.g. the SERVICE_* environment variables), or flat-out reject calls to a problematic RPC endpoint (e.g. exec in container). The policy is enforced by the container runtime in the TEE on every RPC invocation.
This is elegant, as such a policy engine and core policy fragments can be developed alongside the Agent's API, unburdening the user from understanding its intricacies. To be effective, an event log as described in option #2 would need to cover not just the API but also the underlying semantics of this API.
Kata-Containers currently features an implementation of a policy engine using the popular Rego language. Convenience tooling can assist and automate aspects of authoring a policy for a workload. The following would be an example policy (hand-crafted for brevity, real policy bodies would be larger) in which we allow the launch of specific OCI images, the execution of certain commands, Kata management endpoints, but disallow pretty much everything else during runtime:
package agent_policy
import future.keywords.in
import future.keywords.if
import future.keywords.every
default CopyFileRequest := true
default DestroySandboxRequest := true
default CreateSandboxRequest := true
default GuestDetailsRequest := true
default ReadStreamRequest := true
default RemoveContainerRequest := true
default SignalProcessRequest := true
default StartContainerRequest := true
default StatsContainerRequest := true
default WaitProcessRequest := true
default CreateContainerRequest := false
default ExecProcessRequest := false
CreateContainerRequest if {
    every storage in input.storages {
        some allowed_image in policy_data.allowed_images
        storage.source == allowed_image
    }
}
ExecProcessRequest if {
    input_command := concat(" ", input.process.Args)
    some allowed_command in policy_data.allowed_commands
    input_command == allowed_command
}
policy_data := {
    "allowed_commands": [
        "whoami",
        "false",
        "curl -s http://127.0.0.1:8006/aa/token?token_type=kbs",
    ],
    "allowed_images": [
        "pause",
        "docker.io/library/nginx@sha256:e56797eab4a5300158cc015296229e13a390f82bfc88803f45b08912fd5e3348",
    ],
}
Policies are in many cases dynamic and specific to the workload. Kata ships the genpolicy tool that will generate a reasonable default policy based on a given k8s manifest, which can be further refined by the user. A dynamic policy cannot be bundled in the rootfs, at least not fully, since it needs to be tailored to the workload. This implies we need to provide the Guest VM with the policy at launch time, in a way that allows us to trust the policy to be genuine and unaltered. In the next section we’ll discuss how we can achieve this.
Init-Data
Confidential Containers need to access configuration data that for practical reasons cannot be baked into the OS image. This data includes URIs and certificates required to access Attestation and Key Broker Services, as well as the policy that is supposed to be enforced by the policy engine. This data is not secret, but maintaining its integrity is crucial for the confidentiality of the workload.
In the CoCo project this data is referred to as Init-Data. Init-Data is specified as file/content dictionary in the TOML language, optimized for easy authoring and human readability. Below is a (shortened) example of a typical Init-Data block, containing some pieces of metadata, configuration for CoCo guest components and a policy in Rego language:
algorithm = "sha256"
version = "0.1.0"
[data]
"aa.toml" = '''
[token_configs]
[token_configs.kbs]
url = 'http://my-as:8080'
cert = """
-----BEGIN CERTIFICATE-----
MIIDEjCCAfqgAwIBAgIUZYcKIJD3QB/LG0FnacDyR1KhoikwDQYJKoZIhvcNAQEL
...
4La0LJGguzEN7y9P59TS4b3E9xFyTg==
-----END CERTIFICATE-----
"""
'''
"cdh.toml" = '''
socket = 'unix:///run/confidential-containers/cdh.sock'
credentials = []
...
'''
"policy.rego" = '''
package agent_policy
...
A user is supposed to specify Init-Data to a Confidential Guest in the form of a base64-encoded string in a specific Pod annotation. The Kata Containers runtime will then pass this data on to the Agent in the Guest VM, which will decode the Init-Data and use it to configure the runtime environment of the workload. Crucially, since Init-Data is not trusted at launch we need a way to establish that the policy has not been tampered with in the process.
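As a rough sketch of this encoding step (Python for illustration; the annotation key matches the one used in the deployment example in this article, while the TOML body here is a hypothetical minimal one):

```python
import base64

# Hypothetical minimal Init-Data body (real ones carry aa.toml,
# cdh.toml and a full policy.rego, as shown above)
init_data_toml = """\
algorithm = "sha256"
version = "0.1.0"

[data]
"policy.rego" = '''
package agent_policy
'''
"""

# Base64-encode the TOML body, as expected by the Pod annotation
init_data_b64 = base64.b64encode(init_data_toml.encode()).decode()

# The annotation the Kata runtime reads to pass Init-Data into the guest
annotations = {"io.katacontainers.config.runtime.cc_init_data": init_data_b64}
```

The Kata runtime decodes this annotation value and hands the TOML body to the Agent in the Guest VM.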
Integrity of Init-Data
The Init-Data body that was illustrated above contains a metadata header which specifies a hash algorithm that is supposed to be used to verify the integrity of the Init-Data. Establishing trust in provided Init-Data is not completely trivial.
Let’s start with a naive approach anyway: Upon retrieval and before applying Init-Data in the guest we can calculate a hash of that Init-Data body and stash the measurement away somewhere in encrypted and integrity protected memory. Later we could append it to the TEE’s hardware evidence as an additional fact about the environment. An Attestation-Service would take that additional fact into account and refuse to release a secret to a confidential workload if e.g. a too permissive policy was applied.
Misdemeanor in the Sandbox
We have to take a step back and look at the bigger picture to understand why this is problematic. In CoCo we are operating a sandbox, i.e. a rather liberal playground for all sorts of containers. This is by design, we want to allow users to migrate existing containerized workloads with as little friction as possible into a TEE. Now we have to assume that some of the provisioned workloads might be malicious and attempting to access secrets they should not have access to. Confidential Computing is also an effort in declaring explicit boundaries.
There are pretty strong claims that VM-based Confidential Computing is secure, because it builds on the proven isolation properties of hardware-based Virtual Machines. Those have been battle-tested in (hostile) multi-tenant environments for decades and the confidentiality boundary between a Host and Confidential Guest VM is defined along those lines.
Now, Kata Containers does provide an isolation mechanism. There is a jail for containers that employs all sorts of Linux technologies (seccomp, Namespaces, Cgroups, …) to prevent a container from breaking out of its confinement. However, containing containers is a hard problem, and new ways of containers escaping their jail are regularly discovered and exploited (adding VM-based isolation to containers is one of the defining features of Kata Containers, after all).
Circling back to the Init-Data Measurement
The measurement is a prerequisite for accessing a confidential secret. If we keep such a record in the memory of a CoCo management process within the Guest, this would have implications for the Trust Model: A Hardware Root-of-Trust module is indispensable for Confidential Computing. A key property of that module is the strong isolation from the Guest OS. Through clearly defined interfaces it can record measurements of the guest’s software stack. Those measurements are either static or extend-only. A process in the guest VM cannot alter them freely.
A measurement record in the guest VM’s software stack is not able to provide similar isolation. A process in the guest, like a malicious container, would be able to tamper with such a record and deceive a Relying Party in a Remote Attestation in order to get access to restricted secrets. A user would not only have to trust the CoCo stack to perform the correct measurements before launching a container, but they would also have to trust this stack to not be vulnerable to sandbox escapes. This is a pretty big ask.
Hence a pure software approach to establishing trust in Init-Data is not desirable. We want to move the trust boundary back to the TEE and link Init-Data measurements to the TEE's hardware evidence. There are generally two options to establish such a link; which one is chosen depends on the capabilities of the TEE:
Host-Data
Host-Data is a field in a TEE's evidence that is passed into a confidential Guest from its Host verbatim. It's not secret, but its integrity is guaranteed, as it's part of the TEE-signed evidence body. We are generalising the term Host-Data from SEV-SNP here; a similar concept exists in other TEEs under different names. Host-Data can hold a limited number of bytes, typically in the 32-64 byte range. This is enough to hold a hash of the Init-Data, calculated at the launch of the Guest. This hash can be used to verify the integrity of the Init-Data in the guest, by comparing the measurement (hash) of the Init-Data in the guest with the host-provided hash in the Host-Data field. If the hashes match, the Init-Data is considered to be intact.
Example: Producing a SHA256 digest of the Init-Data file
openssl dgst -sha256 -binary init-data.toml | xxd -p -c32
bdc9a7390bb371258fb7fb8be5a8de5ced6a07dd077d1ce04ec26e06eaf68f60
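A minimal Python sketch of that comparison (illustrative only; the function name and byte handling are our own, not part of the CoCo guest components):

```python
import hashlib

def init_data_matches_host_data(init_data: bytes, host_data: bytes) -> bool:
    """Compare the guest-side SHA256 of the Init-Data body against the
    host-provided digest carried in the TEE-signed Host-Data field."""
    measured = hashlib.sha256(init_data).digest()
    # Host-Data fields hold 32-64 bytes; a SHA256 digest occupies the first 32.
    return measured == host_data[:len(measured)]
```

If this check fails, the guest should refuse to apply the Init-Data at all.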
Runtime Measurements
Instead of seeding the Init-Data hash into a Host-Data field at launch, we can also extend the TEE evidence with a runtime measurement of the Init-Data directly, if the TEE allows for it. This measurement is then a part of the TEE’s evidence and can be verified as part of the TEE’s remote attestation process.
Example: Extending an empty SHA256 runtime measurement register with the digest of an Init-Data file
dd if=/dev/zero of=zeroes bs=32 count=1
openssl dgst -sha256 -binary init-data.toml > init-data.digest
openssl dgst -sha256 -binary <(cat zeroes init-data.digest) | xxd -p -c32
7aaf19294adabd752bf095e1f076baed85d4b088fa990cb575ad0f3e0569f292
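The same extend operation can be sketched in Python (illustrative; real registers are extended by the TEE or vTPM rather than application code, and the Init-Data body here is a hypothetical stand-in for init-data.toml):

```python
import hashlib

def extend_sha256(register: bytes, digest: bytes) -> bytes:
    """One extend step: new_register = SHA256(old_register || digest)."""
    return hashlib.sha256(register + digest).digest()

# Start from a zero-initialized SHA256-sized register
register = bytes(32)

# Digest of a hypothetical Init-Data body (stands in for init-data.toml)
init_data_digest = hashlib.sha256(b'algorithm = "sha256"\n').digest()

register = extend_sha256(register, init_data_digest)
```

Because extend is a chained hash, the resulting register value depends on both the starting state and the order of every digest extended into it.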
Glueing Things Together
Finally, in practice a workflow would look like the steps depicted below. Note that the concrete implementation of the individual steps might vary in future revisions of CoCo (as of this writing v0.10.0 has just been released), so this is not to be taken as a reference but merely to illustrate the concept. There are practical considerations, like limitations in the size of a Pod annotation, or how Init-Data can be provisioned into a guest that might alter details of the workflow in the future.
Creating a Manifest
kubectl's --dry-run option can be used to produce a JSON manifest for a Pod deployment, using the allow-listed image from the policy example above. We are using jq to specify a CoCo runtime class:
kubectl create deployment \
  --image="docker.io/library/nginx@sha256:e56797eab4a5300158cc015296229e13a390f82bfc88803f45b08912fd5e3348" \
  nginx-cc \
  --dry-run=client \
  -o json \
  | jq '.spec.template.spec.runtimeClassName = "kata-cc"' \
  > nginx-cc.json
An Init-Data file is authored, then encoded in base64 and added to the Pod annotation before the deployment is triggered:
vim init-data.toml
INIT_DATA_B64="$(cat "init-data.toml" | base64 -w0)"
cat nginx-cc.json | jq \
  --arg initdata "$INIT_DATA_B64" \
  '.spec.template.metadata.annotations = { "io.katacontainers.config.runtime.cc_init_data": $initdata }' \
  | kubectl apply -f -
Testing the Policy
If the Pod came up successfully, it passed the initial policy check for the image already.
kubectl get pod
NAME READY STATUS RESTARTS AGE
nginx-cc-694cc48b65-lklj7 1/1 Running 0 83s
According to the policy only certain commands are allowed to be executed in the container. Executing whoami should be fine, while ls should be rejected:
kubectl exec -it deploy/nginx-cc -- whoami
root
kubectl exec -it deploy/nginx-cc -- ls
error: Internal error occurred: error executing command in container: failed to exec in container: failed to start exec "e2d8bad68b64d6918e6bda08a43f457196b5f30d6616baa94a0be0f443238980": cannot enter container 914c589fe74d1fcac834d0dcfa3b6a45562996661278b4a8de5511366d6a4609, with err rpc error: code = PermissionDenied desc = "ExecProcessRequest is blocked by policy: ": unknown
In our example we tie the Init-Data measurement to the TEE evidence using a Runtime Measurement into PCR8 of a vTPM. Assuming a 0-initialized SHA256 register, we can calculate the expected value by extending the zeroes with the SHA256 digest of the Init-Data file:
dd if=/dev/zero of=zeroes bs=32 count=1
openssl dgst -sha256 -binary init-data.toml > init-data.digest
openssl dgst -sha256 -binary <(cat zeroes init-data.digest) | xxd -p -c32
765156eda5fe806552610f2b6e828509a8b898ad014c76ad8600261eb7c5e63f
As part of the policy we also allow-listed a specific command that can request a KBS token using an endpoint that is exposed to a container by a specific Guest Component. Note: This is not something a user would typically want to enable, since this token is used to retrieve confidential secrets and we would not want it to leak outside the Guest. We are using it here to illustrate that we could retrieve a secret in the container, since we passed remote attestation including the verification of the Init-Data digest.
kubectl exec deploy/nginx-cc -- curl -s http://127.0.0.1:8006/aa/token?token_type=kbs | jq -c 'keys'
["tee_keypair","token"]
Since this has been successful, we can inspect the logs of the Attestation Service (bundled into a KBS here) to confirm it has been considered in the appraisal. The first text block shows the claims from the (successfully verified) TEE evidence, the second block is displaying the acceptable reference values for a PCR8 measurement:
kubectl logs deploy/kbs -n coco-tenant | grep -C 2 765156eda5fe806552610f2b6e828509a8b898ad014c76ad8600261eb7c5e63f
…
"aztdxvtpm.tpm.pcr06": String("65f0a56c41416fa82d573df151746dc1d6af7bd8d4a503b2ab07664305d01e59"),
"aztdxvtpm.tpm.pcr07": String("124daf47b4d67179a77dc3c1bcca198ae1ee1d094a2a879974842e44ab98bb06"),
"aztdxvtpm.tpm.pcr08": String("765156eda5fe806552610f2b6e828509a8b898ad014c76ad8600261eb7c5e63f"),
"aztdxvtpm.tpm.pcr09": String("1b094d2504eb2f06edcc94e1ffad4e141a3cd5024b885f32179d1b3680f8d88a"),
"aztdxvtpm.tpm.pcr10": String("bb5dfdf978af8a473dc692f98ddfd6c6bb329caaa5447ac0b3bf46ef68803b17"),
--
"aztdxvtpm.tpm.pcr08": [
"7aaf19294adabd752bf095e1f076baed85d4b088fa990cb575ad0f3e0569f292",
"765156eda5fe806552610f2b6e828509a8b898ad014c76ad8600261eb7c5e63f",
],
"aztdxvtpm.tpm.pcr10": [],
Size Limitations
In practice there are limitations with regard to the size of Init-Data bodies. The policy sections of such a document in particular can reach considerable size for complex Pods and thus exceed the limits that currently exist for annotation values in a Kubernetes Pod. As of today, various options are being discussed to work around this limitation, ranging from simple text compression to more elaborate schemes.
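To make the size issue concrete, the simplest of the discussed workarounds, plain text compression, could look roughly like this (an illustrative Python sketch with a hypothetical policy body; not a mechanism CoCo has settled on):

```python
import base64
import zlib

# A repetitive stand-in for a large Rego policy body (hypothetical content)
policy = "default CreateContainerRequest := false\n" * 200

# Plain base64 of the policy, as it would go into the annotation today
plain_b64 = base64.b64encode(policy.encode()).decode()

# Compress first, then base64-encode; Rego policies compress well
compressed_b64 = base64.b64encode(zlib.compress(policy.encode())).decode()

# Decompressing restores the original body byte-for-byte
restored = zlib.decompress(base64.b64decode(compressed_b64)).decode()
```

The trade-off is that the guest side must then know to decompress the annotation value before verifying and applying it.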
Conclusion
In this article we discussed the challenges of ensuring the integrity of a Confidential Container workload at runtime. We’ve seen that the dynamic nature of a Pod and the Kubernetes Control Plane make it hard to predict exactly what will be executed in a TEE. We’ve discussed how a policy engine can be used to enforce invariants on such a dynamic workload and how Init-Data can be used to provision a policy into a Confidential Guest VM. Finally, we’ve seen how the integrity of Init-Data can be established by linking it to the TEE’s hardware evidence.
Save Time & Streamline Your Workflow with Microsoft Copilot’s Scheduled Prompts
Have you ever wanted to automate a Copilot prompt to run at a specific time or frequency? Well, here is some exciting news for all the Copilot productivity enthusiasts out there. Microsoft has rolled out a fantastic new feature for Copilot called Scheduled Prompts. This new feature allows you to automate Copilot prompts to run at specific times or intervals. Imagine having your daily summaries, reminders, or even complex workflows triggered automatically without lifting a finger. It's like having a personal assistant that's always on time! Scheduled prompts remove the need to remember these tasks, sharing them automatically in your chat history.
Let’s dive into how to configure this:
How to Schedule a Prompt
Run the Prompt: First, run the prompt you want to schedule to ensure everything is set up correctly.
Create a Scheduled Prompt: Hover over the prompt and select the “Schedule this prompt” icon. This opens a new window for setting up your schedule.
Fill in the Details: In the new window, fill in the fields for how often you want it to run, the time, and any other details. Note that you can only set the schedule to run up to five times, so schedule a reminder to restart the prompt if you need it to continue.
Save and Activate: Once you’ve filled in all the details, save and activate. Your prompt is now scheduled and will run automatically at the times you set. You can always change the settings by selecting the ellipses at the top and viewing your Scheduled Prompts.
To find the results, check the chat history in Teams—newly run prompts will appear bolded.
With scheduled prompts, you’re not just saving time—you’re making your life easier. Give it a try and start automating!
How do I report a wrong PhonePe transaction?
How do I get my money back from a wrong PhonePe transaction? You can reach the support team at the number (0866↑↑084↑↑7056} and register your complaint as soon as possible.
Read More
Can I file a complaint against PhonePe?
How do I get my money back from a wrong PhonePe transaction? You can reach the support team at the number (0866↑↑084↑↑7056} and register your complaint as soon as possible.
Read More