Process Monitor v4.0 and Sysmon 1.3.3 for Linux
The new column, Process Start, can be used to filter processes by their start times – for example to hide all processes that were running when this Process Monitor session started, or to only show those processes. In the Process Monitor Filter dialog, this column will have the timestamp of the current time as a pre-filled value in the drop-down. Copying and pasting a value from any of the timestamp columns in the main event list also works.
The user interface improvements in this version include a more native look to the dark theme, new interface icons, more consistent behaviors for the summary dialogs accessible through the Tools menu, better mouse and keyboard navigation, and template values autofilled to some of the filter columns. The summary dialogs now have the “Edit Filter” option, and the main event list supports a per-column “Count Occurrences” action.
We have fixed two Boot Logging bugs: one that incorrectly stopped the log after 428 seconds with profiling events enabled and one that incompletely initialized module symbol information with the /ConvertBootLog command line option.
Copying items to the clipboard from the main event list is faster and now displays the interruptible progress dialog used by other time-consuming operations throughout Procmon.
This release also includes a series of UI element alignment fixes and updates to the online search in the event properties dialog and to the dialogs’ geometry; we also enabled runtime checks and made several security improvements.
Microsoft Tech Community – Latest Blogs –Read More
How to achieve high HTTP scale with Azure Functions Flex Consumption
Taking Azure Functions from 0 to 32,000 RPS in 7 seconds
Consider a connected car platform that processes data from millions of cars. Or a national retailer running a pop-up campaign that processes pre-orders. Or a healthcare provider calculating big data analytics. All of these can have variable load requirements — from zero to tens of thousands of requests per second (RPS). The serverless model has grown rapidly as developers increasingly run event-triggered code as a service, pushing platform limits, and Azure Functions customers now want to orchestrate complex serverless solutions and expect high throughput.
This feedback led us to revamp the Azure Functions platform architecture to help ensure that it meets our customers’ most demanding performance requirements. As this article describes:
We have introduced the new Azure Functions Flex Consumption plan that you can use to achieve high-volume HTTP RPS while optimizing costs.
You can customize the per instance concurrency of HTTP-triggered functions and choose between instance memory sizes to fit your throughput and cost requirements.
We demonstrate achieving 32,000 RPS in 7 seconds with a sample retail customer flash sale case study, with a .NET HTTP triggered function app sending to Event Hubs through a VNet.
We demonstrate achieving 40,000 RPS with 1,000 instances in less than a minute with a Python app with per-instance concurrency of 1.
Understanding concurrency driven scaling
Per-instance concurrency is the number of parallel requests that each instance of your app can handle. In Azure Functions Flex Consumption, we’ve introduced deterministic concurrency for HTTP. All HTTP-triggered functions in your Flex Consumption app are grouped and scaled together in the same instances, and new instances are allocated based on the HTTP concurrency configured for your app. Per-instance concurrency is vital to great performance and it’s important to configure the maximum number of concurrent workloads that can be processed at the same time by a given instance. With higher concurrency, you can push more executions through and potentially pay less.
To show how this works using an example, imagine that 10 customers select the shopping cart at the same time on an e-commerce website, sending 10 requests to a function app. If concurrency is set to 1 and the app is scaled down to zero, the platform will scale the app to 10 instances and run 1 request on each instance. If you change concurrency to 2, the platform scales out to five instances, and each handles two requests.
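The scale-out arithmetic in this example can be sketched as a simple ceiling division. This is an illustrative estimate of the relationship between concurrency and instance count, not the platform's exact allocation algorithm:

```python
import math

def estimated_instances(concurrent_requests: int, per_instance_concurrency: int) -> int:
    """Rough estimate of how many instances are needed to serve a burst
    of simultaneous requests at a given per-instance concurrency."""
    return math.ceil(concurrent_requests / per_instance_concurrency)

# 10 simultaneous shopping-cart requests:
print(estimated_instances(10, 1))  # 10 instances, one request each
print(estimated_instances(10, 2))  # 5 instances, two requests each
```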
In general, you can trust the default values to work for most cases and let Azure Functions scale dynamically based on the number of incoming events: Flex Consumption already provides default values that make the best of each language’s capabilities. For Python apps, the default concurrency is 1 for all instance sizes. For other languages, the 2,048 MB instance size uses a default concurrency of 16 and the 4,096 MB size uses 32. In any case, you have the flexibility to choose the right per-instance settings for your workload.
You can change the HTTP concurrency using the Azure CLI’s trigger-type and perInstanceConcurrency parameters:
az functionapp scale config set -g <RESOURCE_GROUP> -n <FUNCTION_APP_NAME> --trigger-type http --trigger-settings perInstanceConcurrency=<CONCURRENCY>
This is also possible from the Azure Portal on the new Scale and Concurrency settings for Flex Consumption apps in the Concurrency per instance section:
Concurrency and instance memory sizes
Currently, Flex Consumption supports two instance memory sizes: 2,048 MB, and 4,096 MB, with more instance sizes to be added in the future. The default is 2,048 MB. Depending on your workload you can benefit from a larger instance size, which can potentially handle more concurrency or heavier workloads as well. To create your app with a different instance memory size, simply include the instance-memory parameter:
az functionapp create -g <RESOURCE_GROUP> -n <FUNCTION_APP_NAME> -s <STORAGE_ACCOUNT_NAME> --runtime <RUNTIME> --runtime-version <RUNTIME_VERSION> --flexconsumption-location "<AZURE_REGION>" --instance-memory <INSTANCE_MEMORY>
You can also change the instance memory size in the Azure Portal when creating the app, or from the same Scale and Concurrency settings mentioned above after the app is created.
Not all hosting providers support per-instance concurrency higher than 1, even though some workloads would benefit from it. If your function app doesn’t have compute-intensive operations, per-instance concurrency control may be very helpful: running four operations concurrently while paying the same is better than paying for one operation at a time.
Cold Start
It’s worth noting that when you set concurrency to a value higher than 1, you also reduce the cold start penalty for those concurrent executions. We recently wrote about the improvements in Azure Functions overall to reduce cold starts (Azure Functions cold start improvement). In Flex Consumption you can also help ensure that a minimum number of instances are always running and available. The new always ready feature keeps a select number of instances warm for your functions.
Protecting downstream components
In addition to concurrency and instance size, you need to consider whether a downstream component has limited throughput capacity, like a database or an API that your function calls. You can change the maximum number of instances that your Flex Consumption app scales to by modifying the maximum instance count setting. You can set it to a valid value between 40 (the lowest value for maximum instance count) and 1,000 (the maximum). For example, in Azure CLI:
az functionapp scale config set -g <RESOURCE_GROUP> -n <FUNCTION_APP_NAME> --maximum-instance-count <SCALE_LIMIT>
You can also change this in the Azure Portal from the same Scale and Concurrency settings mentioned above.
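One rough way to reason about this cap: the number of requests your app can have in flight at once is bounded by the maximum instance count multiplied by the per-instance concurrency. This is a simplified sketch for sizing intuition; the real limiter is usually your downstream component’s capacity:

```python
def max_in_flight(max_instance_count: int, per_instance_concurrency: int) -> int:
    """Upper bound on requests processed simultaneously across all instances."""
    return max_instance_count * per_instance_concurrency

# With the lowest allowed cap of 40 instances and the default concurrency of 16:
print(max_in_flight(40, 16))  # at most 640 concurrent requests reach downstream
```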
Case study: HTTP endpoint writing to Azure Event Hubs
A retail customer asked us to help with a project to handle a flash online promotion projected to receive a peak of 2 million HTTP requests per minute (approximately 35,000 RPS). A function app was used for the ingestion of contact information from interested buyers. The web component of this solution was hosted by a third party that could only forward a buyer’s contact information via HTTP. Our customer wanted to protect the incoming data using Azure managed identities and to forward it for downstream processing to Azure Event Hubs secured behind a virtual network.
We developed a sample that implements the basics of this scenario and ran it through a suite of performance tests. You can deploy and test the High scale HTTP function app to Event Hubs via VNet sample yourself.
Initial test setup
The application was deployed into Flex Consumption with the following settings:
Instance memory size set to 2,048 MB
Maximum instance count set to 100
HTTP concurrency set to the system assigned default of 16 concurrent requests per instance
1,000 concurrent clients calling the HTTPS endpoint with an HTTP POST, using Azure Load Testing, for three minutes
Results
The application achieved an average throughput of 15,630 requests per second.
The application handled almost 3 million requests in total during this three-minute test. Azure Load Testing reports the following latency distribution.
| Request count | Latency P50 | Latency P90 | Latency P99 | Latency P99.9 |
| --- | --- | --- | --- | --- |
| 2,969,090 | 50 ms | 96 ms | 166 ms | 2,188 ms |
We can analyze the scale-out behavior by looking at our logs in Application Insights. This query counts how many different instances emitted a log for each second of the test—the application was successfully executing across 80 instances within 10 seconds of the workload starting.
requests
| where timestamp >= datetime(2024-05-06T01:15:00.0000000Z) and timestamp <= datetime(2024-05-06T01:25:00.0000000Z)
| summarize dcount(cloud_RoleInstance) by bin(timestamp, 1s)
| render columnchart
Test variations
We then made some modifications to the setup to push the application performance higher, with a tradeoff on cost. We ran the same client load but with the following server configuration changes:
Updated maximum instance count to 500 (and regional subscription memory quota raised accordingly)
Separate test runs with HTTP per-instance concurrency set to 8 and 4
With Azure Load Testing, you can compare your runs, so we compared the concurrency values of 16, 8, and 4 directly.
As the chart shows, dropping the concurrency to 4 really made a difference for this workload, pushing the throughput well above 32,000 RPS. This result correlates to the reduced latency numbers—just under 6.6 million requests in three minutes with a P50 latency of 23 milliseconds.
Latency profile with HTTP Concurrency = 4
Here is the latency percentile breakdown of the HTTP concurrency = 4 run:
| Request count | Latency P50 | Latency P90 | Latency P99 | Latency P99.9 |
| --- | --- | --- | --- | --- |
| 6,596,600 | 23 ms | 39 ms | 88 ms | 172 ms |
With each instance handling fewer requests, we see a corresponding increase in the instance count with HTTP Concurrency of 4. This also translates into faster scaling, with the system scaling out to 250 instances within 7 seconds.
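The 250-instance figure lines up with a back-of-the-envelope estimate: 1,000 concurrent load-test clients divided by a per-instance concurrency of 4:

```python
import math

# 1,000 concurrent clients, HTTP concurrency of 4 per instance
print(math.ceil(1000 / 4))  # 250
```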
requests
| where timestamp >= datetime(2024-05-06T02:30:00.0000000Z) and timestamp <= datetime(2024-05-06T02:40:00.0000000Z)
| summarize dcount(cloud_RoleInstance) by bin(timestamp, 1s)
| render columnchart
Tuning for performance versus cost
The following table compares the overall performance and cost of these runs. Learn more about Flex Consumption billing meters.
| Concurrency configuration | Request count | RPS | GB-seconds | Cost in USD | GB-sec cost per 1 million requests |
| --- | --- | --- | --- | --- | --- |
| 16 (default) | 2,969,090 | 15,630 | 28,679 | $0.4588 | $0.1545 |
| 8 | 3,888,524 | 20,358 | 40,279 | $0.6444 | $0.16573 |
| 4 | 6,596,600 | 32,980 | 93,443 | $1.4951 | $0.2266 |
The total cost of these runs went up as we lowered the concurrency because it reduced the latency, allowing Azure Load Testing to send more requests during the three-minute interval. The last column shows the normalized cost per 1 million requests, indicating that the better performance from a lower concurrency value comes at a higher cost.
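The normalized figures in the last column can be reproduced from the other columns. A quick sketch using the table’s own numbers:

```python
def cost_per_million(total_cost_usd: float, request_count: int) -> float:
    """Normalize a test run's total cost to cost per 1 million requests."""
    return total_cost_usd / request_count * 1_000_000

# (concurrency, request count, cost in USD) from the table above
for concurrency, requests, cost in [(16, 2_969_090, 0.4588),
                                    (8, 3_888_524, 0.6444),
                                    (4, 6_596_600, 1.4951)]:
    print(concurrency, round(cost_per_million(cost, requests), 4))
```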
We recommend performing this type of analysis on your own workload to determine which configuration best suits your needs. As the results for higher concurrency demonstrate, using per-instance concurrency in Azure Functions workloads can really help you reduce costs. A great way to accomplish this is by taking advantage of the integration between Azure Functions and Azure Load Testing.
Scale to 1000 instances in less than a minute
The Event Hub case study above demonstrates the cost savings you can unlock if your workload can take advantage of concurrency – but what about workloads that cannot? We have made heavy investments in optimizing the system to work well when the per-instance concurrency is set to 1.
Test Setup
Python function app with concurrency set to 1, instance size set to 2,048 MB
Workload is a mix of I/O and CPU – an HTTP-triggered function that receives a 73 KB HTML document and then parses it
Updated maximum instance count to 1,000 and regional subscription memory quota raised accordingly
1,000 concurrent clients, calling the HTTPS endpoint with an HTTP post, using Azure Load Testing, for five minutes
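The CPU-bound half of this workload (parsing the posted HTML document) can be approximated with the standard library. The sketch below is a hypothetical stand-in for the test function’s body, not the actual app code:

```python
from html.parser import HTMLParser

class TagCounter(HTMLParser):
    """Counts start tags -- a cheap proxy for 'parse the posted document'."""
    def __init__(self):
        super().__init__()
        self.tags = 0

    def handle_starttag(self, tag, attrs):
        self.tags += 1

def parse_document(html: str) -> int:
    """Parse an HTML payload and return the number of start tags found."""
    parser = TagCounter()
    parser.feed(html)
    return parser.tags

print(parse_document("<html><body><p>hi</p></body></html>"))  # 3
```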
Results
The system stabilizes at 40K RPS in less than a minute. The following chart shows our 4 most recent runs at time of writing:
Latency profile
Here is the latency percentile breakdown of this HTTP concurrency = 1 run:
| Request count | Latency P50 | Latency P90 | Latency P99 | Latency P99.9 |
| --- | --- | --- | --- | --- |
| 12,567,955 | 20 ms | 34 ms | 59 ms | 251 ms |
The system achieves this performance by scaling up to ~975 instances within 1 minute. The only reason it did not reach exactly 1000 is that slightly more than 1000 concurrent clients are needed to push it that far, due to network travel time. Here is the first minute of scaling activity:
requests
| where timestamp between (todatetime('2024-06-06T00:57:00Z') .. 1m)
| summarize dcount(cloud_RoleInstance) by bin(timestamp, 1s)
| render columnchart
You will notice that the scaling is not linear – the system added instances much more rapidly during the first 10 seconds and then gradually decelerated. This behavior is by design – we believe this approach hits a sweet spot of delivering great burst scale performance while reducing the degree of unnecessary over-scaling. If you find that this scaling pattern does not work well for your workload, please let us know.
Compute injected into your virtual network, in milliseconds
We’ve introduced improved virtual network features to Azure Functions Flex Consumption. Your function app can reach services secured behind a virtual network (VNet) and can also be secured to your virtual network with service or private endpoints. But more importantly, your function apps can reach services that are restricted to a virtual network without sacrificing scale-out speed and with scale-to-zero.
Our customer scenario used a VNet to allow the function app to write to an event hub that had no public endpoint. You might be wondering whether this VNet injection comes at a performance cost in terms of startup latency. This latency matters not only when your application is fully idle and scaled to zero (cold start) but also when the application needs to scale out quickly.
To best answer the question for our customer, we ran a series of benchmarks on the simplest possible workload—an application with an HTTP endpoint that returns a static 200 response. We compared the startup performance with and without VNet integration and ran these tests with the following configurations in mind:
Significant load to force scale-out to many instances
Coverage across multiple language stacks (Python 3.11, Java 17, .NET 8, Node.js 20)
Coverage across six different regions
Thirty-two unique test pairs with the exact same configuration, except whether VNet injection was enabled
We collected just under 30,000 data points across a five-day period and measured the time taken to get the first response from each allocated instance:
| Configuration | Sample count | Latency P50 (ms) | Latency P90 (ms) | Latency P99 (ms) |
| --- | --- | --- | --- | --- |
| No VNet | 15,048 | 435 | 1,217 | 2,007 |
| VNet integrated | 14,408 | 472 | 1,357 | 3,307 |
Our test findings demonstrate that enabling VNet injection has a very low impact on your scale-out performance. We consider the added 37 ms at the 50th percentile a reasonable cost to pay for the security benefits of using virtual networks with Flex Consumption. These performance numbers for VNet injection are due to the deep investment we have made in the networking stack of Project Legion, the compute substrate for Flex Consumption.
Troubleshooting
We’ve touched on a few different configuration settings you need to keep in mind when running high throughput workloads on Flex Consumption, so here’s a checklist we suggest working through if you’re struggling to reach your performance goals:
Max instance count – verify that you’ve raised the maximum instance count to an appropriate value.
Regional subscription memory quota – if you have multiple Function Apps on the same subscription running in the same region, they share this quota. This means that one app might not scale out to the desired size if another app is already running at significant scale. If you need it raised, file a support ticket.
Monitor application insights for signs of downstream bottlenecks – during earlier iterations of the Event Hub case study we did not have the Event Hub scaled out sufficiently, and so we were encountering transient “the request was terminated because the entity is being throttled” errors, which were visible in the traces table in Application Insights.
Final thoughts
We’re proud of the performance enhancements in Azure Functions Flex Consumption. As one participant of our preview program said, “I’ve never seen any Azure service scaling like this! This is what we’ve been missing for a long time.”
To learn more and share feedback:
Learn more about Azure Functions Flex Consumption.
Deploy and run your own workloads to Flex Consumption, or try one of our samples.
Share your feedback about Azure Functions Flex Consumption scale. Your feedback and insights will be crucial in refining and enhancing this feature.
Formula for calculating data in a drop down menu
I have a spreadsheet with a column that has a dropdown list in each row to assign a project to a person’s name. (each row is a new project) I need a formula to calculate how many projects each person is assigned.
% Complete and % Work Complete do not update with Actual Start change.
Changing the date in Actual Start does not update the values in % Complete and % Work Complete columns, they stay at 0. If the Actual Finish date has any date at all, the percentages go to 100%.
Is Project supposed to be automatically calculating those % complete columns or am I missing steps?
Windows 11
Project Online Desktop Client v2405
Trying to format a drive without a drive letter
I have an 8TB drive that registers as a hard drive in “Manage->Disk Management” as drive 10, but I cannot assign a drive letter to it. I know how to assign letters to drives. In this case, all options are disabled except “Properties”. Is there a way to perform a low-level format of the drive and see if that resolves my problem?
SMB share inaccessible on Windows 11 24H2 build 26120.961
Hi,
I’ve got a share server and 2 clients
first client is a Windows 11 pro 23H2
second client is a Windows 11 pro 24H2 build 26120.961
With Windows 11 pro 23H2, no problems, i can access shares.
But with Windows 11 pro 24H2 it’s impossible; it says the network path does not work.
The Share server has securitySignature required, encryption required, SMB3 min etc…
Thanks for your help
Windows 11 pro 23H2 :
Windows 11 pro 24H2 :
Share server :
AD cmdlets not working
I’m trying to locate users in Active Directory using a function. When I run the code in either ISE or Visual Studio Code, the lookup works. When I run the script in a PowerShell window the lookup doesn’t work. Has anyone seen this before and know what the solution is?
Headers and footers in Excel
I have two machines, both with Office 365, but the headers and footers change size from one machine to the other even though the parameters are the same; the same thing happens if the files are saved as PDF.
Defender – Export or capture certificate expiry data
Hi There,
I am attempting to pull expired certificate information from Defender. My question is thus twofold:
Is it possible to create an email or alert based on certificates due to expire in 30 days? Is it possible to call an API for Defender for Endpoint?
Our current solution for alerts on expiring certificates in the domain is no longer sustainable and I am looking at redesigning the solution, however, before we can do a proper solution, we need to do something a little less manual and this will be our start.
Alert Rule
I can see that the certificate information is under the Inventories of the Vulnerabilities blade in Defender Endpoint which suggests that an expiring certificate should alert as a Vulnerability. Is this correct, if so how would I go about creating an alert to identify this?
API or Information passing
Is it possible to use an API to retrieve certificate information from Defender? Again, I have looked and found nothing. If APIs aren’t possible, I saw that I can ship the data to Event Hubs, which would be useful, but I need to know whether the certificate information is captured and passed on if I do this. Does anyone have this information?
Thanks,
Copilot features in the Viva Insights advanced app
Copilot in the Viva Insights advanced app simplifies the query building process for analysts by suggesting metrics, filters, and attributes that are relevant to the analysis.
With over 300 metric choices in the advanced app, picking the right ones for analysis can be time-consuming. Copilot streamlines the query building process. It is simply a matter of typing your question inside the “Describe the query” box, and Copilot will do the rest.
Not only does Copilot in Viva Insights expedite results, but it also flattens the learning curve for new analysts. Copilot does all the heavy lifting so that even the least experienced analyst can generate an impactful report as quickly and efficiently as the most seasoned analyst.
You can access Copilot in Person query, in the top right corner of the page. In the side panel, Copilot will display prompts to assist you with building your query. If you have a specific question in mind, type it into the description box at the bottom of the panel. Copilot will respond to your question with suggested metrics, filters, and attributes for your query.
For example, an analyst who wants to understand more about the impact of hybrid work on network composition could use natural language as an input to ask: “Are employees in the United States building social capital at work?” In response, Copilot in Viva Insights would suggest metrics to use in the query, such as internal network size, strong and diverse ties, and influence score, which indicate employees’ ability to build networks. Copilot also suggests filters such as “Area = US”. Analysts can refine or revise the scope of the analysis at any time during the query building process.
Copilot for custom Person queries saves users time and allows them to generate reports more efficiently. This new feature enhances the custom query building experience and guides the analyst toward an informed analysis.
The new Copilot experience is only available in private preview. If you are interested in first access to this Copilot query feature, please contact Madhura Bhat Hathoklu at madhura.h@microsoft.com.
Celebrating Pride with Excel
Join us as we celebrate Pride and the spirit of radical joy! “Radical joy” is an anthem to those who thrive against all odds. The joy that radiates and inspires. The joy that is not dependent on outside forces but is within us and within our control, and when expressed, can help make change for all and inspire others to do the same.
Discover everything Pride at unlocked.microsoft.com/pride.
Take action:
Embrace the Stories of joy and explore them in this Excel template
Show your Pride with this special theme for Mac and iOS
Radical joy is more than just a feeling. It’s an anthem to those who thrive against all odds. An ode to beauty. And a reminder that joy is both universal and individual—and we should unite in spreading it everywhere.
Enable Office Diagnostic Data to unlock the power of Microsoft 365 Apps health.
In the Microsoft 365 Apps admin center you can monitor your Microsoft 365 Apps health and leverage update validation during your monthly patching rollout. The prerequisite for seeing this data is enabling Office Diagnostic Data (ODD), which allows us to collect your usage and patching signals and return them to you in a way that lets you see what is happening in your environment from an Apps health perspective. Additionally, sending diagnostic data to Microsoft allows us to better track the quality of the builds we release, identify potential issues faster, and deliver a better product to you.
What is Office Diagnostic Data?
Office Diagnostic Data (ODD) is a set of diagnostic signals that are collected by the Microsoft 365 Apps on your device and include details of how the M365 Apps are functioning, including signals related to App reliability, App performance, and versioning information, among other things. Microsoft can use this data to keep our applications updated, safe, and working well. Microsoft gives customers control over what type of ODD is sent to Microsoft, including the ability to turn it off almost completely.
How can I tell if I am sending Office Diagnostic Data to Microsoft?
To ensure your devices in your tenant are sending diagnostic data you can check the tracker in the Apps Health section of the Apps admin center. Under the Overview page, in the Insights section, click on “See details” to show the flyout of how many devices are sending data. If you are not seeing the number of devices sending data you think you should, check to make sure you are configured to send data, your network is not blocking the traffic and that your devices are in support.
Why: Value of enabling ODD
We encourage customers to enable Office Diagnostic Data (ODD) to provide visibility into the health of their M365 Apps. With ODD enabled, both Microsoft and the customer’s tenant admins can proactively address issues that may impact the user experience. When Microsoft engineering receives detailed diagnostic signals from your devices, we are able to monitor for major issues and can take steps to mitigate reliability and performance problems—and even alert you to actions that you may need to take to resolve issues.
If ODD is disabled (set to the lowest level of “Neither”), Microsoft engineering has no visibility into your user’s experiences, and your tenant’s issues will not be considered when prioritizing bug fixes.
For customers, Microsoft has created a separate admin portal called the Apps Admin Center (config.office.com). If a tenant has ODD enabled, the admin can access M365 App health data and act on recommendations from Microsoft (through Config.office.com). Additionally, if Microsoft identifies an issue that is not within our control to resolve, we will often reach out to advise the customer on how to fix the problem.
To summarize, by sending ODD to Microsoft, you can benefit from the following:
Access health dashboards in the Microsoft 365 Apps admin center that show you the relevant app health data.
Influence the development and prioritization of new features and bug fixes.
Benefit from the latest security updates and patches that are intended to improve your user experience.
When relevant, receive proactive guidance and recommendations from Microsoft on how to improve the performance and reliability of your Microsoft 365 Apps.
Experience proactive resolution of issues and bugs.
We have customers from all over the world and from various industries, including government and security, who have enabled ODD. This allows us to collaborate more effectively and keep a proactive eye on their health and experience. Enabling ODD can help us partner better and improve our customers’ experiences.
What is collected?
The level of diagnostic data you choose determines what type of data is collected by ODD. To enable us to process your app health trends and provide proactive support, we recommend that you set ODD to at least the Required level. This level collects the minimum amount of data needed to identify and fix issues. If you set ODD to Neither, you are essentially disabling it and preventing Microsoft from proactively improving your user experience. This means you will be left to deal with your tenant issues and user escalations reactively, without any proactive help from Microsoft engineering. Please refer to the following table for more information on the different levels of ODD.
Required: The minimum data necessary to help keep Office secure, up to date, and performing as expected on the device it’s installed on. Includes the version of Office and information about crashes.
Optional: Everything in Required, plus additional data that helps us make product improvements and provides enhanced information to help us detect, diagnose, and remediate issues. Includes performance information, such as how long it takes to save a document.
Neither: No diagnostic data about the Office client software running on the user’s device is collected or sent to us. This option, however, significantly limits our ability to detect, diagnose, and remediate problems your users may encounter when using Office.
How to enable ODD?
There are a number of ways to configure Office Diagnostic Data on your devices; the method you use will depend on how you manage them. The following are the primary methods used by most customers.
Cloud Policy:
Use the “Configure the level of client software diagnostic data sent by Office to Microsoft” policy in the Apps Admin Center (config.office.com)
Levels:
Optional
Required*
Neither
If the policy is not configured, Optional diagnostic data is sent to Microsoft.
*Minimum recommended
Control the setting via GPO (for Windows):
Configure the level of client software diagnostic data sent by Office to Microsoft (admx.help)
Control privacy settings by editing the registry.
Use the following information to configure privacy settings directly in the registry
[HKEY_CURRENT_USER\Software\Policies\Microsoft\office\common\clienttelemetry]
“sendtelemetry”=dword:00000002
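For reference, the sendtelemetry value maps to the diagnostic data levels described earlier; per the Office privacy policy settings, 1 = Required, 2 = Optional, and 3 = Neither, so the dword:00000002 above configures the Optional level. A minimal sketch of that mapping (the function name is our own, for illustration):

```python
# Map the sendtelemetry registry dword to an Office diagnostic data level.
# Value meanings per the Office privacy policy settings:
#   1 = Required, 2 = Optional, 3 = Neither
SENDTELEMETRY_LEVELS = {1: "Required", 2: "Optional", 3: "Neither"}

def diagnostic_level(dword_value):
    """Return the ODD level name for a sendtelemetry value, or 'Not configured'."""
    return SENDTELEMETRY_LEVELS.get(dword_value, "Not configured")

print(diagnostic_level(0x00000002))  # Optional (the value in the .reg example above)
```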
How to view the data sent to Microsoft
Use the Diagnostic Data Viewer:
Go to Start, then select Settings > Office data settings
Turn on the toggle for Office diagnostic data viewing
Diagnostic Data Viewer Overview (Windows 10 and Windows 11) – Windows Privacy | Microsoft Learn
How can an admin monitor M365 Apps health?
Microsoft 365 Apps health takes the diagnostic data you send to Microsoft and gives it right back to you in an easy-to-read section of the M365 Apps admin center. Apps health tracks metrics such as load time, crash rates, and file open time to give you a holistic view of the health of the devices in your organization.
You can drill into each individual application, focus on a specific servicing channel or monitor the health of your add-ins all from Apps health inside the Apps admin center.
Privacy:
What is included in Office Diagnostic Data?
Depending on the level of control you have set up, the information sent to Microsoft will differ. For Microsoft to proactively help you keep your tenant healthy, it is sufficient to enable ODD at the “Required” level. For a full list of the Office diagnostic data sent to Microsoft when the “Required” level is configured, please refer to: Required diagnostic data for Office – Deploy Office | Microsoft Learn
How does Microsoft keep our data private?
Microsoft is the industry leader in protecting customer data and will only use your data to provide the services that you have purchased from Microsoft. Read more about how we protect & manage your data here.
How long is diagnostic data stored?
For commercial customers, our typical engineering practice is to retain diagnostic data from Microsoft 365 Apps for up to 18 months. If an enterprise subscription expires or is terminated, Microsoft retains the data for 90 days and then deletes it within the next 90 days, as outlined in the DPA.
Is user data collected?
With Office Diagnostic Data, no user content or personal information (such as usernames or email addresses) is collected. The data we receive is pseudonymized. Diagnostic data also does not include any file content or information about apps unrelated to Office.
Where is data stored?
Office Diagnostics Data for EU customers is now stored within the EU, aligning with our commitment to regional data residency. The rest of the data continues to be securely stored in the United States, ensuring comprehensive data management and privacy.
Network
Office Diagnostics Data is transmitted to Microsoft through Office 365 endpoints, utilizing the devices’ network. This process typically involves a low volume of data, which is unlikely to impact network performance. Additionally, the data is secured both during transit and while stored, ensuring its confidentiality and integrity.
What is the impact on my network?
The bandwidth consumed by Office Diagnostic Data varies as it depends on user interaction with Office Apps. While “Required” events typically upload once per session, user-driven events differ. To estimate data upload, enable diagnostics on a few devices and monitor connections to diagnostic endpoints via your firewall. For individual device data, use the Diagnostic Data Viewer on a ‘typical’ user’s device.
“New Teams” in Windows 2019 RDSH environment with FSLogix profiles
Recently Microsoft released FSLogix hotfix 4 to address critical issues with “New Teams” in RDSH environments. This was super important to those of us that are affected by the upcoming June 30th deadline to implement “New Teams”.
I’ve noted some real issues that have yet to be resolved, and I have an open support case where I have clearly documented the issues we are facing. However, the support folks at LTIMindtree have for the past 6 weeks been ‘escalating’ the issue and ‘waiting for a resource to be assigned’ to the ticket. I keep telling them that we have a June 30th deadline, but they don’t seem able to help despite the looming deadline.
I’m curious as to whether or not Microsoft has been made aware of the ongoing issues with deploying “New Teams” on Windows 2019 using FSLogix profiles?
Noting that we are battling:
Teams occasionally does not start automatically, and in that circumstance Teams can’t be started by the user. The only resolution is to sign out and then sign in again. This has become noticeably better with FSLogix hotfix 4, but it is not 100% yet; we are still affected, although much less frequently than before hotfix 4 was released.
Some user preferences are not captured and stored in the user’s FSLogix profile container. One specific example is the “Links open preference”: if a user selects a specific browser, that setting is not captured in the profile and therefore reverts to the default at the next session login. This is just a single example to demonstrate the issue, not a complete list.
I’m wondering whether or not anyone in the Microsoft Teams group is aware of the fact that there are MSP’s out there that have real, legit technical issues and are facing a Microsoft imposed deadline without any real ‘fixes’ or solutions.
At this point, we’ve been forced to communicate to our customers basically saying “Sorry, but Microsoft is forcing this upgrade and hasn’t yet addressed these issues that we are aware of”.
Is there anyone else in the same boat?
HTTP 404 errors when accessing Purview portal (new and old)
Hi all,
I have the “contributor” role in a subscription, and I created a new purview account.
However, when I try accessing the Purview portal (both new and old) there are many missing buttons/functionality. I see there are several errors like the following one:
{
  "error": {
    "code": "ResourceNotFound",
    "message": "Authorization error: [ResourceNotFound: Response content: {"Message":"The Gls FFO Tenant Region was not found for the given tenant: …………."}, full response: StatusCode: 404, ReasonPhrase: Not Found, Version: 1.1, Content: System.Net.Http.HttpConnectionResponseContent, Headers:
    {
      Cache-Control: no-cache
      Pragma: no-cache
      Server: Microsoft-HTTPAPI/2.0
      X-BEServer: ……
      X-NanoProxy: 1
      X-AspNet-Version: 4.0.30319
      Request-Id: ……
      Restrict-Access-Confirm: 1
      X-BEPartition: …..
      X-CalculatedBETarget: ……….OUTLOOK.COM
      X-BackEndHttpStatus: 404
      X-BeSku: WCS7
      X-End2EndLatencyMs: 76
      x-ms-appId: …..
      X-Proxy-BackendServerStatus: 404
      X-Proxy-RoutingCorrectness: 1
      X-FEServer: …..
      X-FirstHopCafeEFZ: PHX
      Alt-Svc: h3=":443"; ma=2592000
      Alt-Svc: h3-29=":443"; ma=2592000
      Strict-Transport-Security: max-age=31536000; includeSubDomains
      MS-CV: ……1
      Date: Mon, 17 Jun 2024 16:24:25 GMT
      Content-Length: 112
      Content-Type: application/json; charset=utf-8
      Expires: -1
    }]"
  }
}
These 404 errors are returned when the UI makes requests like account?api-version=2023-10-01-preview, /accounts/features, /datagovernance/quality/business-domains/%7B%7BbusinessDomain%7D%7D/report, /catalog/api/atlas/v2/types/typedefs, etc.
The same 404 response is returned for a request to /api/gateway/actions/collections/me with a list of roles in the request body such as [“Microsoft.Purview/accounts/data/read”,”Microsoft.Purview/accounts/data/write”,”Microsoft.Purview/accounts/source/read”,…….
I have the contributor role for the whole subscription and I am the one that created the purview account.
Am I missing any privileges?
Inability for External Presenters to Manage Breakouts is hurting our business in the L&D industry
I’ve raised this topic in various forums and at numerous conferences over the years, but I’m hoping we can initiate a productive dialogue here.
We operate in the L&D space, offering corporate training support for global clients across over a dozen platforms, and have been doing so since the days of Skype. Whether it’s WebEx, Adobe Connect, Zoom, or another platform, the process is generally straightforward. We receive host rights, and we’re set to go.
However, Microsoft Teams is an exception. From the beginning, conducting training sessions as an external vendor on Teams has been extremely challenging. Despite some improvements over the years, one critical piece of functionality is still missing: external users cannot manage breakout rooms.
This oversight is severely impacting our business, as well as many other small businesses like ours, as our corporate clients transition to Teams. Many organizations move to the Microsoft ecosystem partly due to their InfoSec policies, making the issuance of internal organization credentials difficult and often necessitating additional hardware.
It’s not just external vendors who are frustrated. Many of our clients, including the L&D departments of Fortune 500 companies, have actively opposed their companies’ transition to Teams.
We want to embrace these transitions and even endorse and encourage them. Is this functionality on the roadmap? Who do we need to speak to, or what actions are required to fully address this issue? We are eager to discuss the friction points of using Teams for training from the perspective of a company that has facilitated and supported hundreds of thousands of trainings across every major platform over the last decade.
Your thoughts and insights would be greatly appreciated.
Intel Ethernet Controller (3) I225-V issue
Hello,
I had a Windows Server 2019 Standard machine running. I added an SSD and did a fresh install of Windows Server 2022 Standard. After installation, drivers were missing, so I exported them from the running 2019 server. All drivers installed except one network card: the Intel Ethernet Controller (3) I225-V. I tried everything I could think of but had no luck. The 2022 server is not yet keyed, but I guess it’s good for the next 180 days. What could be the reason?
Any help ?
Struggling with Vlookup / Index & Match
I need to match 2 columns of data to another sheet, then return another column of data only if they both match. I will attach a file in the reply.
Thank you,
Ambrosia
64-bit operating system, x64-based processor
Windows 10 Enterprise
Microsoft 365 Apps for enterprise
Version 2308
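A common Excel pattern for the two-column match described above is an array INDEX/MATCH such as =INDEX(return_col, MATCH(1, (col1=key1)*(col2=key2), 0)), or XLOOKUP with concatenated keys. Since the workbook wasn’t attached here, the underlying logic can only be sketched against hypothetical sample data:

```python
# Two-column match: return a value only when BOTH key columns match,
# mirroring Excel's array INDEX/MATCH with multiplied conditions, e.g.
#   =INDEX(Sheet2!C:C, MATCH(1, (Sheet2!A:A=A2)*(Sheet2!B:B=B2), 0))
# The rows below are hypothetical sample data, not from the original workbook.

lookup_rows = [
    # (key1, key2, value_to_return)
    ("east", "2023", 100),
    ("east", "2024", 120),
    ("west", "2023", 90),
]

def two_key_lookup(key1, key2, rows):
    """Return the value whose first two columns both match, else None."""
    for k1, k2, value in rows:
        if k1 == key1 and k2 == key2:
            return value
    return None  # Excel would show #N/A here

print(two_key_lookup("east", "2024", lookup_rows))  # 120
print(two_key_lookup("east", "2025", lookup_rows))  # None
```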
Having SharePoint Search Bar search a specific site
Hi,
We are setting up a SharePoint site and would like the SharePoint search on that site to search both that site collection and one other specific site collection. I know it’s possible to change a site’s search behavior to cover that site, the entire tenant, or its hub, but we’d prefer to have it search only that one additional site if possible.
Thanks.
Audio module soft keys don’t work in Teams
Hi
I’m using this docking station from HP, which I think is great:
The only problem I’m having is that the soft keys on the top, specifically:
mute
answer call
end call
don’t work with Teams, in either the old or the new version.
Now I have seen this old discussion:
https://www.reddit.com/r/MicrosoftTeams/comments/k618e6/hp_thunderbolt_dock_integration/
where people were speculating that a firmware update from HP is needed to get the keys working with Teams, and that HP is never going to release it. Frankly, I don’t know, but isn’t there a way to get them working without a firmware update?
Please note that these soft keys worked with the old Skype for Business, so I guess Teams changed some sort of protocol. Any hope of getting them working again?
It’s disappointing that the device is otherwise great and running, but neither HP nor Microsoft seems willing to make the very minor update needed to fix this.
Formula Help: Count Unique Domains per Account ID
Hello Community!
I have an Excel file with two columns: Account ID and Domain Name. A single Account ID can have multiple Domain Names associated with it. In other words, the relationship between Account ID and Domain Name is one-to-many (1:N). What’s the best way to pull the number of UNIQUE domains associated with a specific account ID? Here is what the sample data might look like:
Account ID  Domain Name
1234        f.com
1234        e.com
1234        g.com
4321        a.com
4321        b.com
4321        b.com
5678        z.com
5678        y.com
For Account ID 1234, there are 3 unique domains. Similarly, there are 2 unique domains for account IDs 4321 and 5678. Can anyone please assist.
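In Microsoft 365 Excel, one option is =ROWS(UNIQUE(FILTER(B:B, A:A=id))); older versions typically use a SUMPRODUCT/COUNTIFS construction. The counting logic, using the sample data from the question, can be sketched in Python:

```python
# Count UNIQUE domains per Account ID (one-to-many data),
# using the sample rows from the question above.

rows = [
    (1234, "f.com"), (1234, "e.com"), (1234, "g.com"),
    (4321, "a.com"), (4321, "b.com"), (4321, "b.com"),  # b.com listed twice
    (5678, "z.com"), (5678, "y.com"),
]

def unique_domains_per_account(rows):
    """Map each Account ID to the number of distinct domains it has."""
    domains = {}
    for account_id, domain in rows:
        domains.setdefault(account_id, set()).add(domain)
    return {acct: len(ds) for acct, ds in domains.items()}

print(unique_domains_per_account(rows))
# {1234: 3, 4321: 2, 5678: 2}
```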