Category Archives: Microsoft
Monitor your Virtual Machines and Arc servers’ workloads with Azure Monitor
Motivation
Azure Monitor is an amazing suite of technologies that lets you collect, visualize, and act on data from your Azure resources. You can use metrics and logs to monitor the health and performance of any Azure resource. Microsoft offers tailored experiences for specific workloads, such as Virtual Machine Insights. Some of these experiences also include alerts and modern dashboards (Grafana) to help you act and troubleshoot issues. However, for server-based workloads, such as IIS, Print Servers, DNS, and others, there was no native cloud solution for monitoring. Until now.
The Azure Monitor Starter Packs (or “MonStar” packs) are a set of pre-configured components that provide monitoring configuration for multiple Azure resources without the need to create rules, alerts, or dashboards yourself. The monitoring features are ready for assignment and consumption as soon as they are deployed.
Each pack contains the data collection rules (DCRs) required to collect the pertinent information, the alert rules that raise notifications on observed conditions, and Grafana dashboards to visualize the data.
Deployment
The deployment process is as simple as clicking the deployment button in the repository.
For details on how to deploy, please refer to the documentation here. Once deployed, frontend and backend components work together to provide the management experience, collect data, and alert on the specified thresholds.
The main workbook is the front end of the solution; it lets you enable or disable monitoring and check the status of the monitored resources.
In the backend, a Logic App working in conjunction with a Function App makes the actions from the workbook possible. Operations such as enabling or disabling monitoring and enabling or disabling alerts can be triggered from the workbook.
Coverage
Currently, there are packs for the following technologies:
IIS 2012 and 2016
DNS 2016
Print Server 2016
Nginx
Windows OS and Linux OS (using VM Insights).
Collaboration
New packs can be built by leveraging the existing modules and creating bicep files based on a template. Details on how to build a pack can be found here.
If you find any issues, please report them in the repository.
Start using the Azure Monitor Starter Packs today!
MVP’s Favorite Content: AI, Microsoft Copilot, SQL, Azure
In this blog series dedicated to Microsoft’s technical articles, we’ll highlight our MVPs’ favorite articles along with their personal insights.
Liji Thomas, AI MVP, United States
AI Engineer Career Path – Microsoft Learn Official Collection | Microsoft Learn
“The role of AI engineers is increasingly pivotal in our technology-driven era, especially with the democratization of AI. The ability to apply AI through accessible APIs and SDKs has opened new horizons, making AI more reachable than ever before. AI engineers are not mere coders; they are problem solvers who possess a robust blend of skills in programming, machine learning, and understanding of the possibilities. They leverage AI to address complex real-world challenges.
For those keen on entering or advancing in this rapidly evolving field, the Microsoft Learn collection is an excellent resource. It offers an extensive curriculum that not only facilitates the development of fundamental skills but also prepares learners for the AI-102 certification, a benchmark in the industry. Significantly, 10-15% of the skills covered in this certification pertain to implementing generative AI solutions, reflecting the latest trends in the industry.
Besides serving as a one-stop solution for anyone eager to dive into the world of AI engineering, a couple of things about the collection stand out to me –
- It democratizes learning by making high-quality, industry-relevant education accessible to a broader audience.
- It caters not only to beginners but also to experienced professionals seeking to update their skills in line with current industry standards. You will find it invaluable for renewing your AI-102 certification.
- The collection underscores the importance of continuous learning and adaptation in a field like AI that is constantly evolving. Its content is regularly updated, providing access to the most current and pertinent information.”
Tomoharu Misawa, M365 MVP, Japan
How to get ready for Copilot for Microsoft 365 (youtube.com)
“This is a video that provides an at-a-glance understanding of what is necessary before using Microsoft Copilot for Microsoft 365. By watching this video, you can gain the knowledge to prepare for the implementation of Microsoft Copilot.”
(In Japanese: Microsoft Copilot for Microsoft 365 を利用する前に必要なことがひと目でわかるビデオです。このビデオを見れば Microsoft Copilot を導入する前の準備を整えるための知識を得ることができます。)
*Relevant Blog: Microsoft 365 Copilot に向けて権限を整理してきましょう – ()のブログ (hatenablog.com)
Sergio Govoni, Data Platform MVP, Italy
Azure SQL Database Elastic Jobs preview refresh – Microsoft Community Hub
“Azure SQL Database does not have a native scheduling service comparable to the SQL Server Agent available in on-premises instances. When database solutions are implemented in Azure SQL, after the design phase of the DB schema, you have to work out how to perform database maintenance activities such as integrity checks, index rebuilds, and so on. Backups, by contrast, already have an excellent default configuration provided by the Microsoft Azure platform. Azure SQL Database Elastic Jobs (preview refresh, November 2023) is able to manage a wide variety of tasks such as database maintenance and more!”
*Relevant Blog:
<English>
– Azure SQL Database Maintenance tasks | Medium
– Automation of maintenance activities in Azure SQL Database | Medium
<Italiano>
– Automazione delle attività di manutenzione in Azure SQL Database – UGISS
– Automazione delle attività di manutenzione in Azure SQL Database (2 Parte) – UGISS
Takashi Takebayashi, Microsoft Azure MVP, Japan
Tenancy models for a multitenant solution – Azure Architecture Center | Microsoft Learn
“It is a very informative document that elaborates in detail on the most crucial point to consider when designing a multi-tenant architecture, which is the level of separation required for each tenant. In particular, I myself have experience working with several companies that offer multi-tenant SaaS, and I feel that if I had seen this document at that time, I could have made a better design. Therefore, I think it is content that many people should know about.”
(In Japanese: マルチテナント アーキテクチャを設計する上で最も考慮しなくてはならないポイントであるテナントごとに必要な分離レベルについて詳細に記述されており、大変参考になる資料だと思います。特に私自身がマルチテナント SaaS を提供する複数の企業に所属した経験があり、もしその際にこの資料を見ていたらもっとよい設計にできたのにと感じるものです。そのため、多くの人に知ってもらいたいコンテンツです。)
Storing OPC UA Information Models in Azure Data Explorer
Most Azure users deploy Azure Data Explorer (ADX) for storing and analyzing OPC UA PubSub telemetry data sent from industrial sites via a cloud broker. For the last several years, customers have also added OPC UA PubSub metadata to ADX as documented here (https://www.linkedin.com/pulse/using-azure-data-explorer-opc-ua-erich-barnstedt).
However, many customers are unaware that they can store entire OPC UA Information Models in ADX, imported from the UA Cloud Library (https://uacloudlibrary.opcfoundation.org).
This has several advantages:
OPC UA PubSub metadata only describes the semantics of the associated OPC UA PubSub telemetry data, but not the entire OPC UA Information Model where the data originally came from. However, customers want to have all semantic information in one location, ideally in the cloud for global access.
OPC UA PubSub metadata only includes a subset of the rich OPC UA semantics. For example, OPC UA complex type definitions or references to other data within the Information Model are not included but needed for deeper analysis of the telemetry data.
Customers want to be able to see what other telemetry data is available from their sites for potential publishing to the cloud and need the entire OPC UA Information Model to make a selection.
To get started with importing OPC UA Information Models into ADX, you first need an instance of ADX in your Azure subscription as well as a login to the UA Cloud Library, hosted by the OPC Foundation. You can register for free access to the UA Cloud Library here: https://uacloudlibrary.opcfoundation.org/Identity/Account/Register
Once you have registered, you can browse the OPC UA Information Models you are interested in via the built-in browser accessible from here: https://uacloudlibrary.opcfoundation.org/Explorer
To get the unique ID of the OPC UA Information Models you are interested in, you can simply execute the “namespaces” REST API from here: https://uacloudlibrary.opcfoundation.org/infomodel/namespaces. For example, the “Robotics” Information Model has the unique ID 4172981173.
Configure an Azure Data Explorer callout policy for the UA Cloud Library by running the following query on your ADX cluster (make sure you are an ADX cluster administrator, configurable under Permissions in the ADX tab in the Azure Portal):
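The callout policy command itself is not reproduced here; as a sketch, a cluster admin would typically allow web API callouts to the UA Cloud Library host with something along these lines (the URI pattern is an assumption, adapt it to your environment):

// Allow the ADX cluster to make web API callouts to the UA Cloud Library
.alter cluster policy callout @'[{"CalloutType": "webapi", "CalloutUriRegex": "uacloudlibrary.opcfoundation.org", "CanCall": true}]'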
Then, from the Azure Portal UI of your ADX instance, simply run the following query to import the OPC UA Information Model into ADX:
let uri='https://uacloudlibrary.opcfoundation.org/infomodel/download/<insert information model identifier from cloud library here>';
let headers=dynamic({'accept':'text/plain'});
let options=dynamic({'Authorization':'Basic <insert your cloud library credentials hash here>'});
evaluate http_request(uri, headers, options)
| project title = tostring(ResponseBody.['title']), contributor = tostring(ResponseBody.contributor.name), nodeset = parse_xml(tostring(ResponseBody.nodeset.nodesetXml))
| mv-expand UAVariable=nodeset.UANodeSet.UAVariable
| project-away nodeset
| extend NodeId = UAVariable.['@NodeId'], DisplayName = tostring(UAVariable.DisplayName.['#text']), BrowseName = tostring(UAVariable.['@BrowseName']), DataType = tostring(UAVariable.['@DataType'])
| project-away UAVariable
| take 10000
You need to provide two things in the query above:
The Information Model’s unique ID from the UA Cloud Library, which goes into the <insert information model identifier from cloud library here> placeholder of the ADX query.
The Basic authorization header hash of your UA Cloud Library credentials (generated during registration), which goes into the <insert your cloud library credentials hash here> placeholder of the ADX query. Use tools like https://www.debugbear.com/basic-auth-header-generator to generate this.
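Alternatively, a short Python sketch can generate the same Basic authorization hash locally (the username and password below are placeholders for your UA Cloud Library credentials):

import base64

# Placeholders for the UA Cloud Library credentials created during registration
username = "your-username"
password = "your-password"

# Base64-encode "username:password"; the result is the value that follows "Basic " in the header
token = base64.b64encode(f"{username}:{password}".encode("utf-8")).decode("ascii")
print(f"Authorization: Basic {token}")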
And voila! You have just imported an entire OPC UA Information Model into a temporary table in Azure Data Explorer which you can then use in your queries!
AI in Operations (Part 2 of 2)
We went through the use cases of AI in development in part 1, and in this blog we will cover the final four stages of the DevOps lifecycle and look at how AI can be used in operations at scale.
Release: The release stage in DevOps refers to the phase in the Software Development Lifecycle (SDLC) where a new version or iteration of a product is cut and made available to the end users. Here are two examples of where AI can help in this stage:
Automated release note generation:
Natural Language Processing (NLP) and Generative AI for release notes: Writing release notes can be quite an arduous task, and it is imperative to get them right. Using NLP and generative AI, you can analyse code changes and automatically generate release notes in natural language for end users. This ensures that the release documentation is comprehensive, up to date, and easily understandable for your user base.
Deployment risk assessment:
Machine Learning for risk prediction: Releasing a new iteration of a product is always exciting, but it comes with inherent risk. By implementing machine learning models that assess the risk associated with a release using historical data, you help the team surface insights and potential risks, and put mitigation contingencies in place ahead of time.
Deploy: The deploy stage in DevOps refers to the process of deploying the tested software or infrastructure changes from a development / test / pre-production environment to a production environment. Here are two examples of how AI can assist in this stage:
Dynamic rollback strategies:
AI-Driven rollbacks: At one point or another, you will need to roll back a recently deployed environment. Mistakes happen, and that is OK. The hard part is that rollbacks are not always taken care of automatically, and the “what” and “why” are not always clear either. Here you can utilise AI models to analyse real-time performance metrics during deployment. If anomalies or performance issues are detected post-deployment, the model can autonomously decide whether to initiate a rollback, ensuring a quick response to potential issues.
Deployment Optimisation:
Using AI for optimal traffic routing: Several deployment methods are widely used, including canary, all-at-once, shadow deployments, and more. Blue-green is one of the most common in production systems, but that does not always mean it yields the expected results. By utilising AI tools, you can dynamically optimise the traffic distribution between the blue and green environments in a blue-green deployment, potentially better than a regular load balancer can. This ensures that the new version receives sufficient traffic for testing and validation without impacting the user experience.
Operate: In this stage the focus is on maintaining and managing the production environment. Here is where you will triage and address any incidents that occur and yes, AI can help!
Cognitive incident analysis:
Cognitive AI for incident triage: When an incident occurs, as a DevOps engineer (or whatever your title may be) you need to be able to categorise and explain it in plain language to report it and help other team members understand. This can be a hard and time-consuming task, especially when there is pressure involved. This is a good time to implement cognitive AI tooling that can understand and categorise incidents based on natural language descriptions, such as application logs. Doing this assists in faster and reasonably accurate incident triage, allowing the team to prioritise and address critical issues promptly.
Monitor: In this stage of DevOps you are continuously checking the health, performance, and even the behaviour of the service. This can be time consuming and costly. You can do it in a few ways, from cherry-picking and analysing logs to reading user feedback and calculating costs. Here is how AI can help you:
Predictive cost analysis:
Cost prediction and optimisation: Building on cloud infrastructure comes with a sense of anxiety that you may be charged unknowingly for a service or tool you are not even aware you are using. With AI integrated into your monitoring tools, you can predict future resource allocation and the associated costs without needing to work through a manual cost calculator. This enables proactive cost management and optimisation with very little lift from you, the end user.
Sentiment analysis of user feedback:
AI-Based sentiment analysis: User feedback is imperative to improving the product or service you are providing, and this is the stage in the DevOps lifecycle where you review it and begin to plan actionable items into the next sprint. By applying sentiment analysis to user feedback and logs, you can get an overall picture of how the product is being perceived and how it is behaving at any given time. This in turn shortens the feedback loop and helps you prioritise feature improvements, bug fixes, or infrastructure changes.
By incorporating AI into your DevOps and Software Development Lifecycle, you will be able to speed up and improve your delivery of services in several ways, as shown above in this blog and in part 1. When using AI tools there must always be human interaction and oversight to ensure what is being changed, provided, or reported by the models is correct.
To fully immerse yourself in the different AI tools available to help at these different stages of operations, I would suggest visiting the Microsoft AI website.
Deploying Flask Apps to Azure Web App via Docker Hub
This tutorial will explore a step-by-step approach to deploying a Python-based Flask web application to Azure Web App using Docker Hub. Our project, a book recommendation system called “BookBuddy”, represents a collaborative effort among myself, Haliunna Munkhuu, and Lilly Grella. We will guide you through each phase of the project: from the initial development of the recommendation logic in Python to building the Flask application, packaging it into a Docker image, and finally deploying it to Azure.
Development with Flask on a Local IDE
Before we start, ensure that Python and pip are installed on our system. Then, install Flask using pip:
pip install Flask
Create a file named `flask_app.py` for our application “BookBuddy”. This code initializes the Flask app and sets up a simple route:
from flask import Flask

app = Flask(__name__)

@app.route('/')
def index():
    return "Welcome to BookBuddy!"

if __name__ == "__main__":
    app.run(host="0.0.0.0", port=5000)  # listen on all interfaces so the app is reachable in a container
To ensure our Flask app is functioning correctly, we write a simple unit test using Python’s built-in `unittest` framework:
import unittest

class TestBookRecommendation(unittest.TestCase):
    def setUp(self):
        # Set up any variables you need for your tests
        self.test_genre = "Cookbooks"

    def test_default_genre(self):
        # A minimal example assertion; real tests would exercise the recommendation logic
        self.assertEqual(self.test_genre, "Cookbooks")
Containerization with Docker
Next, we need to create a Dockerfile to specify the environment of our Flask application:
# Dockerfile content
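The Dockerfile contents are not included in the post; a minimal sketch for an app like BookBuddy could look as follows (it assumes the code lives in flask_app.py and the app listens on port 5000):

# Minimal Dockerfile sketch for the BookBuddy Flask app (assumptions noted above)
FROM python:3.11-slim
WORKDIR /app
# Install Flask; a real project would pin dependency versions in a requirements.txt
RUN pip install --no-cache-dir Flask
# Copy the application code into the image
COPY flask_app.py .
EXPOSE 5000
# Start the app; flask_app.py should listen on 0.0.0.0 so the container port can be mapped
CMD ["python", "flask_app.py"]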
With Docker Desktop installed, build the Docker image with the following command:
docker build -t bookbuddy:latest .
After signing up and logging into Docker Hub, create a repository named `bookbuddy`, and ensure the tag name is lowercase:
docker login
docker tag bookbuddy:latest <your-docker-hub-username>/bookbuddy:latest
docker push <your-docker-hub-username>/bookbuddy:latest
Infrastructure as Code with Terraform
Before we deploy our Flask application to Azure, we need to set up the necessary infrastructure. We use Terraform, an open-source infrastructure-as-code tool, to write, plan, and create infrastructure efficiently.
To install Terraform on our local machine, we can refer to the official Terraform website.
Create a new directory for our Terraform configuration files. Within this directory, run:
terraform init
This command initializes a new Terraform project and sets up the necessary plugins.
Define our Azure resources in a file named `main.tf`. This file should include our Azure provider, resource groups, and App Service resources.
# main.tf content with Azure provider and resources
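The main.tf contents are also not shown; a rough sketch using the azurerm provider might look like the following (resource names, location, and SKU are illustrative, and the container image wiring is omitted because its attribute names vary by provider version):

terraform {
  required_providers {
    azurerm = {
      source = "hashicorp/azurerm"
    }
  }
}

provider "azurerm" {
  features {}
}

resource "azurerm_resource_group" "bookbuddy" {
  name     = "rg-bookbuddy"
  location = "eastus"
}

resource "azurerm_service_plan" "bookbuddy" {
  name                = "plan-bookbuddy"
  resource_group_name = azurerm_resource_group.bookbuddy.name
  location            = azurerm_resource_group.bookbuddy.location
  os_type             = "Linux"
  sku_name            = "B1"
}

resource "azurerm_linux_web_app" "bookbuddy" {
  name                = "bookbuddy-web"
  resource_group_name = azurerm_resource_group.bookbuddy.name
  location            = azurerm_resource_group.bookbuddy.location
  service_plan_id     = azurerm_service_plan.bookbuddy.id

  # The Docker Hub image is configured in site_config; exact attributes depend on the azurerm version
  site_config {}
}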
Execute the following command to see the execution plan, which shows what Terraform will do when we apply our configuration:
terraform plan
If the plan looks correct, deploy our infrastructure with:
terraform apply
Confirm the action when prompted, and Terraform will begin creating the resources. Once Terraform has finished applying the changes, check the Azure portal to see our new resources.
Azure Web App Deployment
First, we need to install the Azure CLI. This can be done from the official website or via package managers:
# Azure CLI installation commands
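As examples, the commonly documented package-manager commands look like this (pick the one that matches your platform):

# Debian/Ubuntu: official one-line installer
curl -sL https://aka.ms/InstallAzureCLIDeb | sudo bash

# macOS: Homebrew
brew install azure-cli

# Windows: winget
winget install --exact --id Microsoft.AzureCLI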
Log into our Azure portal. From the Azure services dashboard, click on “App Services” to start creating a new Web App.
In the App Services section, click on “Create” and select “Web App”.
Fill out the “Instance Details” section with our web app’s name, publish settings, and operating system. For example, choose “Docker Container” and the region closest to us for optimal performance.
Review our settings and click “Create” to provision the Web App with our configurations.
Once the Web App is provisioned, we will be directed to the deployment overview page. This page will indicate that our deployment is in progress.
After a short while, we will receive a notification that the deployment is complete. Click “Go to resource” to manage the deployed Web App.
In the Web App management section, we can view our Web App’s details, such as the default domain, status, and location. Here, we can also manage domain settings, scaling, and deployment slots.
Navigate to the default domain provided by Azure to view our running Flask application. We should see the landing page of our “BookBuddy” application.
Conclusion
We’ve now successfully deployed a Flask app to Azure Web App using a Docker container. Our book recommendation system is live and accessible. Toggle “On” for continuous deployment in Azure to allow automatic redeployment whenever the Docker image is updated on Docker Hub. For any updates, push the new Docker image to Docker Hub and Azure will handle the rest.
Cumulative Update #11 for SQL Server 2022 RTM
The 11th cumulative update release for SQL Server 2022 RTM is now available for download at the Microsoft Downloads site. Please note that registration is no longer required to download cumulative updates.
To learn more about the release or servicing model, please visit:
CU11 KB Article: https://learn.microsoft.com/troubleshoot/sql/releases/sqlserver-2022/cumulativeupdate11
Starting with SQL Server 2017, we adopted a new modern servicing model. Please refer to our blog post on the Modern Servicing Model for SQL Server for more details.
Microsoft® SQL Server® 2022 RTM Latest Cumulative Update: https://www.microsoft.com/download/details.aspx?id=105013
Update Center for Microsoft SQL Server: https://learn.microsoft.com/en-us/troubleshoot/sql/releases/download-and-install-latest-updates
Removal of several Microsoft Graph Beta APIs for Intune device configuration reports
In February 2024, the following Microsoft Graph Beta APIs that leverage the old Intune reporting framework for device configuration policy reports will stop working:
Device configuration report:
https://graph.microsoft.com/beta/deviceManagement/managedDevices('device_id')/deviceConfigurationStates
Device status:
https://graph.microsoft.com/beta/deviceConfiguration/StatelessDeviceConfigurationFEService/deviceManagement/deviceConfigurations('policy_id')/deviceStatuses
If you’re impacted by this change, look for MC688107 in the Message center. If you’re using automation or scripts to retrieve reporting data from the Graph Beta APIs listed above, we recommend moving to the newer Intune reporting framework by making POST requests to the corresponding endpoint for each report:
Device configuration report: getConfigurationPoliciesReportForDevice
Device and user check-in status report: getConfigurationPolicyDevicesReport
Device assignment status report: getCachedReport
For more information on the updated reporting experience, read Announcing updated policy reporting experience in Microsoft Intune.
Example: Device configuration report
POST: https://graph.microsoft.com/beta/deviceManagement/reports/getConfigurationPoliciesReportForDevice
Payload:
{
  "select": [
    "IntuneDeviceId",
    "PolicyBaseTypeName",
    "PolicyId",
    "PolicyStatus",
    "UPN",
    "UserId",
    "PspdpuLastModifiedTimeUtc",
    "PolicyName",
    "UnifiedPolicyType"
  ],
  "filter": "((PolicyBaseTypeName eq 'Microsoft.Management.Services.Api.DeviceConfiguration') or (PolicyBaseTypeName eq 'DeviceManagementConfigurationPolicy') or (PolicyBaseTypeName eq 'DeviceConfigurationAdmxPolicy') or (PolicyBaseTypeName eq 'Microsoft.Management.Services.Api.DeviceManagementIntent')) and (IntuneDeviceId eq 'adce2b4a-0000-0000-0000-0000000000')",
  "skip": 0,
  "top": 50,
  "orderBy": [
    "PolicyName"
  ]
}
Response:
{
  "TotalRowCount": 2,
  "Schema": [
    { "Column": "IntuneDeviceId", "PropertyType": "String" },
    { "Column": "PolicyBaseTypeName", "PropertyType": "String" },
    { "Column": "PolicyId", "PropertyType": "String" },
    { "Column": "PolicyName", "PropertyType": "String" },
    { "Column": "PolicyStatus", "PropertyType": "Int32" },
    { "Column": "PspdpuLastModifiedTimeUtc", "PropertyType": "DateTime" },
    { "Column": "UnifiedPolicyType", "PropertyType": "String" },
    { "Column": "UnifiedPolicyType_loc", "PropertyType": "String" },
    { "Column": "UPN", "PropertyType": "String" },
    { "Column": "UserId", "PropertyType": "String" }
  ],
  "Values": [
    ["adce2b4a-0000-0000-0000-0000000000", "DeviceManagementConfigurationPolicy", "fdb08003-0000-0000-0000-00000000000", "ASR Rules 02", 2, "2023-08-13T01:51:46", "SettingsCatalog", "Settings Catalog", "admin@xxx.net", "132aa545-0000-0000-0000-00000000000"],
    ["adce2b4a-0000-0000-0000-00000000000", "DeviceManagementConfigurationPolicy", "09e3c028-0000-0000-0000-00000000000", "Intent Policy with AF", 6, "2023-08-10T01:53:20", "MicrosoftDefenderAntivirus", "Microsoft Defender Antivirus", "admin@xxxx.net", "132aa545-0000-0000-0000-00000000000"]
  ],
  "SessionId": ""
}
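For reference, a minimal Python sketch of calling the new endpoint from a script might look like this (it assumes you have already acquired a Microsoft Graph access token with the appropriate Intune reporting permissions, for example via MSAL; token acquisition is out of scope here):

import requests

# Assumes a valid Microsoft Graph access token and a target Intune device ID (placeholders below)
ACCESS_TOKEN = "<insert access token here>"
DEVICE_ID = "<insert Intune device id here>"

url = "https://graph.microsoft.com/beta/deviceManagement/reports/getConfigurationPoliciesReportForDevice"
payload = {
    "select": ["IntuneDeviceId", "PolicyId", "PolicyName", "PolicyStatus", "UPN"],
    "filter": f"(IntuneDeviceId eq '{DEVICE_ID}')",
    "skip": 0,
    "top": 50,
    "orderBy": ["PolicyName"],
}

response = requests.post(
    url,
    headers={"Authorization": f"Bearer {ACCESS_TOKEN}", "Content-Type": "application/json"},
    json=payload,
)
response.raise_for_status()

# The report comes back as Schema (column definitions) plus Values (rows)
report = response.json()
for row in report.get("Values", []):
    print(row)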
If you have any questions, leave a comment below or reach out to us on X @IntuneSuppTeam!
Exciting news: Teams Essentials plus Teams Phone promotion extended!
PROMO EXTENDED!
We are excited to share that the Teams Essentials plus Teams Phone bundle promotion (available in the USA, UK, Puerto Rico, and Canada) has been extended through July 1st, 2024. Teams Phone is one of the biggest bets with the highest potential for partners in the new fiscal year and can help partners increase profitability.
Teams Phone enables customers to always be connected through a cloud-based phone solution. They will benefit from intelligent phone features such as automatic transcription of voicemail messages, smart call controls such as scheduled call queues, screen pop, and more.
We are providing CSP partners with the following tools to support Teams Phone enablement and sales guidance:
SMB Masters Program on-demand trainings | Microsoft Teams Phone Learning Path
New Teams Phone SMB 1:many customer workshop | book for partners as a part of the SMB workshop motion
With the promo, we launched some Teams Phone discounts; see the Teams Essentials plus Phone System Promo FAQ for details.
Call to Action
Download and share the SMB 1:Many Customer Workshops to help partners grow their practices’ TE+PS business: Modern Work for Partners – SMB Briefings (microsoft.com)
Evangelize new resources on https://aka.ms/TeamsEssentialsPartner
Resources
Teams Phone Partner Portal
Teams Phone SMB partner opportunity deck & Teams Phone SMB pitch deck
Teams Essentials plus Phone System Promo FAQ
CSP Masters Program readiness: https://aka.ms/M365MastersProgram