Month: October 2024
API fetching data from GIT loading to Storage Account in Parquet
I have an API that I am calling via ADF.
The data I am bringing in covers 28 days, and I want to build up historical data.
The incoming data has a column called "day" which holds the date.
I want to reference that column and make the ADF pipeline write incrementally.
What would be the approach?
View SharePoint Properties in Word for Browser?
Can you view SharePoint Properties inside Word for the browser, like you can with Word for Desktop (under View -> Properties)?
MetricsQueryClient returning different results based on timespan
I’m using the Python MetricsQueryClient to list how many tokens were used on certain days via the APIM policy “azure-openai-emit-token-metric”. The problem is that when I call the query_resource() function with “timespan” set to the entire month of October, I get a different token count for today’s date than when I set the “timespan” to just the last 48 hours. For example, with the timespan set from 10/20/2024 to 10/22/2024, I see 34 prompt tokens for today’s date. But with the timespan set from 10/1/2024 to 11/1/2024, I see 0 prompt tokens for today’s date.
Is this a known issue? Is it documented somewhere?
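For reference, a minimal sketch of the comparison being described, assuming the azure-monitor-query and azure-identity packages; the resource URI and metric name here are placeholder assumptions, not the poster's actual values:

from datetime import datetime, timedelta, timezone
from azure.identity import DefaultAzureCredential
from azure.monitor.query import MetricsQueryClient, MetricAggregationType

client = MetricsQueryClient(DefaultAzureCredential())
resource_uri = "/subscriptions/<sub-id>/resourceGroups/<rg>/providers/Microsoft.ApiManagement/service/<apim>"

def daily_prompt_tokens(start, end):
    # Query the custom metric emitted by the azure-openai-emit-token-metric policy.
    # The metric name is an assumption; check what your policy actually emits.
    response = client.query_resource(
        resource_uri,
        metric_names=["Prompt Tokens"],
        timespan=(start, end),
        granularity=timedelta(days=1),
        aggregations=[MetricAggregationType.TOTAL],
    )
    for metric in response.metrics:
        for ts in metric.timeseries:
            for point in ts.data:
                print(metric.name, point.timestamp, point.total)

now = datetime.now(timezone.utc)
daily_prompt_tokens(now - timedelta(hours=48), now)  # 48-hour window
daily_prompt_tokens(now.replace(day=1), now)         # month-to-date window

Running both windows and comparing the totals for the same day reproduces the discrepancy described above.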
Recovering Windows 10 password
While attempting to remove an antivirus program that was slowing down my Windows, it prompted me to restart. The restart took a long time, but the processing indicator kept spinning. After a while, when Windows restarted, it was stuck on the boot screen, and I had to force a shutdown. I tried a couple of times with the same result. Windows Recovery was activated and showed options to recover, but it prompted for the password, which I had long forgotten because I used to sign in with a PIN. And the password was local, not associated with a Microsoft account.
So I’m locked out and I can’t lose the data. Can anyone help me?
Defect age
Need help with calculating how much time a defect sits in a particular status. If a defect is submitted as New on 10/2, doesn’t get picked up for review until 10/10, then moves to development on 10/20, then to coding on 11/2, then to UAT on 11/15—basically I need a way to track the number of days the defect spends in each status (statuses are selected from a drop-down list). I’ve provided a sample below. Do I need to track additional data in order to calculate this? Thanks
Defect  Description            Status                  Created on Date
1234    Something is broken    New                     9/3/2024
4321    Something broke again  Implementation Review   3/21/2024
4322    Something broke again  In Development          3/22/2024
Replacing PowerShell 5 with PowerShell 7
I just downloaded version 7, but version 5 still appears.
What should I do?
Candidly Copilot Episode 2
Welcome to the Candidly Copilot podcast, episode 2. In this edition of Candidly Copilot we discuss getting your data house in order before, and concurrently with, rolling out Microsoft 365 Copilot. Additionally, Michael Gannotti answers questions about what he is most excited about in the realm of Microsoft 365 Copilot, as well as Agents. Finally, Michael shares the prompt of the week, which you can find in the resources below.
Resources:
Check out ALL upcoming, and previously recorded, Candidly Copilot podcasts here
Learn about Microsoft Purview | Microsoft Learn
Restricted SharePoint Search – SharePoint in Microsoft 365 | Microsoft Learn
Microsoft 365 Copilot – Microsoft Adoption
Configure a communication compliance policy to detect for Copilot interactions | Microsoft Learn
Microsoft Copilot for Security in Microsoft Purview | Microsoft Learn
Thanks for visiting!
Complex Data Extraction using Document Intelligence and RAG
Section 1: Introduction
Historically, data extraction from unstructured documents was a manual and tedious process. It required constant human involvement or relied on tools that were limited by the variety of document formats, an inability to recognize font colors and styles, and substandard document data quality.
These methods were time-consuming and often resulted in errors due to the complexities involved in understanding and interpreting unstructured data. The process also lacked scalability, making it difficult to process large volumes of documents efficiently.
With the advancements of LLMs, the ability to extract data from unstructured documents has significantly improved and offers users a more adaptable solution based on their individual needs.
This guide will show an approach to building a solution for complex entity extraction using Document Intelligence with RAG.
Section 2: Architecture: Complex Entity Extraction using Document Intelligence
Azure Document Intelligence (DI) is great for extracting structured data from unstructured documents in most scenarios. However, in our case, dealing with tax documents that have thousands of different templates makes it challenging for DI to capture specific tax information across different tax forms. The wide variety of templates comes from the diverse nature of tax documents: each jurisdiction, city, country, and state creates its own unique taxation form, with different tax document types (e.g., invoices, withholding forms, declarations, and reports). This makes it impractical to train a DI model for each unique tax form.
To manage unknown structures, we use LLMs to reason through unstructured documents. We utilize DI to extract layout and style information, which is provided to the LLM for extracting required details in a process called Doc2Schema. To query the document, we prompted GPT-4o to leverage DI information, effectively extracting the updated tax information, including but not limited to newly applied tax rates and applied tax locations, all structured according to the specified schema.
Generally, document querying with LLMs is carried out using Retrieval-Augmented Generation (RAG) models. Following the extraction of layout information through Document Intelligence (DI), semantic chunking is applied to maintain related entities such as tables and paragraphs within a single chunk. These chunks are subsequently embedded into a vector index, which can later be searched using tools like Azure AI Search. By employing prompt engineering, we can then query the document to retrieve targeted information.
Information can be dispersed throughout the document. For instance, a targeted record could be listed in a table row or appear in the introduction, footers, or headers. Searching through these scattered chunks might lead to information loss. To address this problem, we extend the chunk size to the maximum context length of the underlying LLM (128K tokens for GPT-4o). Given the type of documents we handle, one chunk can usually hold an entire document. However, this method also scales to larger documents by either using smaller chunks and increasing the number of retrieval results (the k-parameter) or by storing essential chunk information in memory.
Figure 2: Document Intelligence + RAG
Section 3: Implementation
Component 1: Azure AI Document Intelligence
For the first component, we leveraged Microsoft Document Intelligence to analyze document layouts efficiently. The process begins by specifying the document file path and configuring the Document Intelligence resource with the resource endpoint and key. Notably, the styles feature is added to the feature list exclusively for PDF documents, as the current capabilities of Document Intelligence do not support styles for HTML documents.
Once configured, the document is analyzed using the “prebuilt-layout” model. This model meticulously examines the document, identifying sections such as paragraphs, figures, tables, and styles, along with markdown textual content. The function then returns the output response from Document Intelligence in a structured “markdown” format, displaying the detected document sections for easy reference and further processing.
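A minimal sketch of this step, assuming the azure-ai-documentintelligence package; exact parameter names can vary between SDK versions, and the endpoint, key, and file name are placeholders:

from azure.ai.documentintelligence import DocumentIntelligenceClient
from azure.ai.documentintelligence.models import DocumentAnalysisFeature
from azure.core.credentials import AzureKeyCredential

client = DocumentIntelligenceClient(
    endpoint="<DI-endpoint>", credential=AzureKeyCredential("<DI-key>")
)

with open("sample_tax_notice.pdf", "rb") as f:
    poller = client.begin_analyze_document(
        "prebuilt-layout",
        f,
        output_content_format="markdown",               # return content as markdown
        features=[DocumentAnalysisFeature.STYLE_FONT],  # styles add-on, PDF only
    )
result = poller.result()
print(result.content)  # markdown: paragraphs, tables, headings
print(result.styles)   # style spans: color, font weight, handwritten flag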
When we examine the main keys in Document Intelligence’s output response, we observe the following output dictionary keys.
The “content” key contains the document’s markdown content, which includes tables, figures, and headlines formatted in markdown. The sample document below (Figure 4) demonstrates this markdown content:
Figure 4: Sample Document
George TBD rate change notice (wa.gov)
The “styles” key, on the other hand, captures the document’s visual characteristics, such as font weight (normal or bold), font color, whether the text is handwritten, and background color. It groups all spans with the same characteristic into a list. This key maps each characteristic to the “content” using span information, indicating the start and end offsets marked with the specific characteristic.
Component 2: Azure AI Document Intelligence (Styles Feature)
Continuing with the Azure AI Document Intelligence component, we aimed to add visual characteristics to the document contents so the LLM can identify updates that are highlighted in a specific font style. We leveraged the “content” key to parse the textual content of documents in markdown format, which serves as the context for the LLM prompt. By extracting styles, we can identify changes highlighted through visual elements such as bold font weight, specific font colors, or background colors. To ensure the LLM recognizes these highlighted changes, we utilized the “styles” key and appended the style information to the markdown “content” in the form of tags. For example, in the sample document above, <color: blue> New tax rate is 0.02 </color> indicates that the sentence enclosed within the tags is in blue. We then merge all consecutive spans sharing similar styles into the same tag to optimize the context length.
Similarly, we appended the grounding information by considering the span offsets associated with the “color” key. This ensured that all text locations within the document were included in the context, as every piece of text, regardless of its color (even black), should have a specified color attribute as a style.
A sample document’s markdown content after adding the styles information looks like the following:
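A minimal sketch of the tagging step, reusing the result object from the sketch above and assuming each style entry carries a color plus span offsets; merging consecutive same-color spans is omitted for brevity:

def apply_color_tags(content: str, styles: list) -> str:
    # Collect (offset, length, color) for every colored span, then wrap
    # each span in <color: ...> ... </color> tags, matching the example above.
    spans = []
    for style in styles:
        if getattr(style, "color", None):
            for span in style.spans:
                spans.append((span.offset, span.length, style.color))
    # Apply from the end of the document so earlier offsets stay valid.
    for offset, length, color in sorted(spans, reverse=True):
        segment = content[offset:offset + length]
        content = (content[:offset]
                   + f"<color: {color}>" + segment + "</color>"
                   + content[offset + length:])
    return content

tagged = apply_color_tags(result.content, result.styles or [])
# e.g. ... <color: blue>New tax rate is 0.02</color> ...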
Component 3: Semantic Chunking
Moving to the third component, semantic chunking: this advanced and effective method manages sequence length based on the document layout. It addresses the challenge of information loss by preserving a full, meaningful section per chunk. However, for certain use cases, the complexity and structure of documents can make semantic chunking an impractical approach.
For example, in the sample document displayed above, essential notes, such as how a change is highlighted or specific notations important for the extraction task, are often mentioned in the header or footer of the document. Consequently, if the document is split into multiple chunks, the LLM could fail to identify the records with changes because the chunk being processed is not adjacent to these notes.
To address this challenge, we maximized the number of tokens per chunk to match the maximum sequence length of the GPT-4o model, which is 128K tokens, with the token overlap set to 64K tokens.
Initially, we used Byte Pair Encoding (BPE) for accurate token counting. The TokenEstimator class is designed to estimate the number of tokens in a given text using the GPT-4 tokenizer. It provides an estimate_tokens function, which encodes the text and returns the number of tokens.
The number of resulting chunks for the sample document can be computed as follows:
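A minimal sketch of the token estimation and chunk count, assuming the tiktoken package for the BPE tokenizer and reusing the tagged content from the sketch above; the 128K/64K figures come from the text:

import tiktoken

class TokenEstimator:
    def __init__(self, model: str = "gpt-4o"):
        self.encoder = tiktoken.encoding_for_model(model)  # BPE tokenizer

    def estimate_tokens(self, text: str) -> int:
        return len(self.encoder.encode(text))

def chunk_by_tokens(text, estimator, max_tokens=128_000, overlap=64_000):
    # Fixed-size token windows with overlap; a stand-in for the semantic
    # chunker when one chunk can hold the whole document.
    tokens = estimator.encoder.encode(text)
    step = max_tokens - overlap
    chunks = []
    for i in range(0, max(len(tokens), 1), step):
        chunks.append(estimator.encoder.decode(tokens[i:i + max_tokens]))
        if i + max_tokens >= len(tokens):
            break
    return chunks

estimator = TokenEstimator()
chunks = chunk_by_tokens(tagged, estimator)
print(f"{len(chunks)} chunk(s), {estimator.estimate_tokens(tagged)} tokens")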
Component 4: Azure OpenAI – LLM Prompting
The fourth component focuses on the LLM call. In our experiment, we utilized the “GPT-4o” model deployed in Azure OpenAI Studio, providing the necessary resource credentials and the API version of the deployment. We read the prompt file to manage prompt versioning within the pipeline, then called the chat completion API, passing the prompt template along with the context. We implemented retry mechanisms for the API call to handle potential failures due to quota management or incomplete JSON responses from the OpenAI API. Additionally, we added the “force_json” option to ensure the output response is serialized in JSON format.
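A minimal sketch of this call, assuming the openai package’s AzureOpenAI client and tenacity for the retry mechanism; the endpoint, key, API version, and deployment name are placeholders:

import json
from openai import AzureOpenAI
from tenacity import retry, stop_after_attempt, wait_exponential

client = AzureOpenAI(
    azure_endpoint="<aoai-endpoint>",
    api_key="<aoai-key>",
    api_version="2024-06-01",
)

@retry(stop=stop_after_attempt(3), wait=wait_exponential(min=2, max=30))
def extract(prompt_template: str, context: str) -> dict:
    response = client.chat.completions.create(
        model="gpt-4o",  # deployment name in Azure OpenAI
        messages=[
            {"role": "system", "content": prompt_template},
            {"role": "user", "content": context},
        ],
        response_format={"type": "json_object"},  # the "force_json" option
        temperature=0,
    )
    # json.loads raises on an incomplete JSON response, triggering a retry.
    return json.loads(response.choices[0].message.content)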
Now let’s examine how the prompt was crafted to achieve this extraction use case.
We divided the task into three key steps, providing detailed guidance for the LLM at each stage. First, the LLM needed to identify how tax rate change records are highlighted within the document, supported by possible notations such as “*”, bolded, or colored text.
Next, we defined the information to be extracted as fields, with a description for each, including specific formatting requirements, such as for dates. Finally, the extracted fields were formatted into a predefined JSON schema, creating a list of JSON objects where each object represents a tax rate change record for a particular tax type.
Throughout the process, grounding information for each extracted field was preserved. This list of objects was then placed under a dummy key, “results,” allowing all extracted objects to be captured when the response_format is set to enforce JSON output.
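A condensed illustration of such a prompt; the wording and field names here are illustrative assumptions, not the exact production prompt:

PROMPT_TEMPLATE = """
You are extracting tax rate changes from a tax notice.
Step 1: Identify how changed records are highlighted (e.g., "*", bold, or
colored text), using the <color: ...> tags and style notes in the document.
Step 2: For each change, extract: location, tax_type, new_rate, and
effective_date (formatted as YYYY-MM-DD).
Step 3: Return a JSON object with a single key "results" holding a list of
records. For every field, include start/end character offsets from the
original document content as grounding.
"""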
The final output for the chosen sample document looks like the following:
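A hypothetical output in that shape; the field names follow the illustration above, and the values are made up rather than taken from the actual sample document:

{
  "results": [
    {
      "location": {"value": "City of George", "start": 112, "end": 126},
      "tax_type": {"value": "Transportation Benefit District", "start": 201, "end": 233},
      "new_rate": {"value": 0.02, "start": 310, "end": 314},
      "effective_date": {"value": "2024-10-01", "start": 58, "end": 68}
    }
  ]
}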
As shown above, each JSON object represents a changed tax rate applied within a specified location on a specific tax rate level. Additionally, each field incorporates its location within the document (the start and end offsets are calculated by the model based on the above prompt from the original document content) to smoothly identify its source in the original document.
In the next section, we will show insights into this experiment’s overall evaluation.
Section 4: Evaluation & Metrics
Using ground truth and prediction data, we measure precision, recall, and F1 score as our metrics. For a prediction to count as correct, all fields in the ground-truth record must match exactly with those in the predicted record.
Metric      Value
Precision   56.39
Recall      37.08
F1 Score    44.74
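A minimal sketch of this exact-match evaluation, assuming records are flat dicts with hashable field values:

def evaluate(ground_truth: list[dict], predictions: list[dict]):
    # A prediction counts as correct only if every field matches a
    # ground-truth record exactly.
    def key(record):
        return tuple(sorted(record.items()))
    gt = {key(r) for r in ground_truth}
    pred = {key(r) for r in predictions}
    tp = len(gt & pred)  # true positives: exact record matches
    precision = tp / len(pred) if pred else 0.0
    recall = tp / len(gt) if gt else 0.0
    f1 = (2 * precision * recall / (precision + recall)
          if precision + recall else 0.0)
    return precision, recall, f1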
To conduct a thorough analysis of our results, we performed an ablation study to determine which entities impact the extraction process. The figure below shows the metrics derived by progressively adding one entity to the record. A decrease in metrics upon including an entity suggests an issue with that entity.
We also analyzed documents to find further issues. Sometimes, GPT-4o’s predictions are more accurate than the ground truth. For instance, the model can identify a parent city from the title that the ground truth misses. Despite being correct, the record is marked wrong due to a parent_city mismatch. This shows the model may surpass human annotations, particularly with scattered information, highlighting the need for multiple reviews of the ground truth, assisted by GPT.
Section 5: Supporting Documentation
Want to try on your own? Access the Notebook here:
Azure/complex_data_extraction_with_llms: Complex Data Extraction with LLMs (github.com)
Notebook Contents:
Resources Settings and Credentials.
Document Layout Analysis using Document Intelligence (DI) Block.
Storing DI Outputs Block (Optional).
Applying Styles Tags Block.
Semantic Chunking Block.
Indexing Block (Optional).
Azure OpenAI LLM Model Call Block.
LLM Response Cleaning Block.
Storing Predictions Vs. Ground Truth Block.
Calculating Evaluation Metrics Block.
Reorder Extracted Entities Block (Optional).
Announcing AzAPI 2.0
The AzAPI provider, designed to expedite the integration of new Azure services with HashiCorp Terraform, has now released version 2.0. This update marks a significant step in our goal of providing launch-day support for Azure services using Terraform.
What is the AzAPI Provider?
The AzAPI provider functions as a lightweight layer atop the Azure ARM REST APIs. It is a first-class provider experience alongside the AzureRM provider. Azure resources that might not yet be, or may never be, supported in AzureRM can be accessed through this provider, including private/public preview services and features.
Key Features of the AzAPI Provider Include:
Resource-specific versioning, allowing users to switch to a new API version without altering provider versions.
Special functions like `azapi_update_resource` and `azapi_resource_action`.
Immediate Day 0 support for new services.
Ready to see the new updates? Let’s take a look!
No More JSON!
All resource properties, outputs, and state representation are now handled with HashiCorp Configuration Language (HCL) instead of JSON. This change allows the use of all native Terraform HCL functionalities. For more info on scenarios on usage, check out our initial announcement.
Clarity with Outputs
Outputs are now customizable through the `response_export_values` property, which can function as either a list or a map.
For instance, to export response values for an Azure container registry:
If I set the value to a list, i.e. response_export_values = `["properties.loginServer", "properties.policies.quarantinePolicy.status"]`, I would get the following output:
{
  properties = {
    loginServer = "registry1.azurecr.io"
    policies = {
      quarantinePolicy = {
        status = "disabled"
      }
    }
  }
}
If I instead set the value to a map using JMESPath querying, i.e. response_export_values = `{"login_server": "properties.loginServer", "quarantine_status": "properties.policies.quarantinePolicy.status"}`, I would get the following output:
{
  "login_server" = "registry1.azurecr.io"
  "quarantine_status" = "disabled"
}
The map form uses a key-value configuration, making it easier to name exactly the output values you want.
retry Block
User-defined retriable errors via the retry block let the provider tolerate errors when they are expected. For example, if a resource may run into a create timeout issue, the following block of code may help:
resource "azapi_resource" "example" {
  # usual properties

  retry {
    interval_seconds     = 5
    randomization_factor = 0.5 # adds randomization to the retry pattern
    multiplier           = 2   # if a try fails, multiplies the wait before the next try
    error_message_regex  = ["ResourceNotFound"]
  }

  timeouts {
    create = "10m"
  }
}
Preflight Support
Preflight validation, enabled by a feature flag, will identify errors without deploying resources, providing a quicker feedback loop. For example, in a config with several resources, an invalid network addressPrefix definition will be caught quickly:
provider "azapi" {
  enable_preflight = true
}

resource "azapi_resource" "vnet" {
  type      = "Microsoft.Network/virtualNetworks@2024-01-01"
  parent_id = azapi_resource.resourceGroup.id
  name      = "example-vnet"
  location  = "westus"
  body = {
    properties = {
      addressSpace = {
        addressPrefixes = [
          "10.0.0.0/160", # preflight will throw an error here
        ]
      }
    }
  }
}
Resource Replacement Triggers
Customize specific methods of replacing your resource.
replace_triggers_external_values: Replaces if specified external values change.
replace_triggers_refs: Triggers a resource replacement based on changes in specified paths.
Resource Discovery
Discover resources under a parent ID such as a subscription, virtual network, or resource group using the new `azapi_resource_list` data source. You can also filter using query parameters as shown below:
data "azapi_client_config" "current" {}

data "azapi_resource_list" "listPolicyDefinitionsBySubscription" {
  type      = "Microsoft.Authorization/policyDefinitions@2021-06-01"
  parent_id = "/subscriptions/${data.azapi_client_config.current.subscription_id}"
  query_parameters = {
    "$filter" = ["policyType eq 'BuiltIn'"]
  }
  response_export_values = ["*"]
}

output "o1" {
  value = data.azapi_resource_list.listPolicyDefinitionsBySubscription.output
}
AzAPI Provider Functions
AzAPI now supports several Terraform provider functions:
build_resource_id: Constructs an Azure resource ID.
parse_resource_id: Breaks down an Azure resource ID into its components.
subscription_resource_id: Constructs an Azure subscription scope resource ID.
tenant_resource_id: Builds an Azure tenant scope resource ID.
management_group_resource_id: Creates an Azure management group scope resource ID.
resource_group_resource_id: Forms an Azure resource group scope resource ID.
extension_resource_id: Generates an Azure extension resource ID with additional names.
To check out the references and examples, visit the Terraform registry.
AzAPI VSCode Extension Improvements
The release coincides with updates to the VSCode extension:
Code Samples: quickly insert code samples from our auto-generation pipeline.
Paste as AzAPI: convert JSON or ARM templates directly into HCL.
Conclusion
AzAPI 2.0 brings numerous enhancements, promising a better Terraform experience on Azure. With these features, we believe you can use AzAPI as a standalone provider to meet any of your infrastructure needs. Stay tuned for an upcoming blog post with suggestions on when to use each provider. Be sure to explore the new features; we’re confident you’ll enjoy them!
If you haven’t yet, check out the provider: https://registry.terraform.io/providers/Azure/azapi/latest/docs
New E-book: Building a Comprehensive API Security Strategy
APIs are everywhere. They are proliferating at a rapid pace, making them a prime target for attackers. Having a plan to protect your APIs as part of your overall cybersecurity strategy is therefore critical for protecting your business, as well as sensitive user data.
We are excited to share our newest e-book: Building a Comprehensive API Security Strategy
This e-book is filled with valuable best practices and contains the basic building blocks you need to get started on an integrated and layered approach to API security, encompassing a variety of different elements, tools, and principles that can be applied to your program. The chapters include:
Introduction: Understand how APIs work and the concept of API security.
API security protects your business-critical data and operations: Take a look into today’s threat landscape to learn how to think critically about your API security strategy.
Build confidence in your API inventory: Learn about API discovery and why it’s an important first step in building your strategy.
Actively manage your dynamic API inventory: Learn what’s needed to apply organizational policies and security controls to stay on top of your inventory.
Fortify your APIs with advanced security solutions: After management, it’s time to secure your APIs further. Learn the common API risks and the advantages of integrating a cloud native application protection platform (CNAPP) into your game plan.
Build layered defenses to maximize your API security: Take it to the next level by incorporating complementary tools and services to help strengthen your overall security posture and protect against threats.
Want to get started with securing your APIs? Download the free e-book here to learn about the building blocks of a good API security strategy.
Business User to manage an Application’s users in Entra External ID
Hi all,
In my company, we are using Microsoft Entra External ID as the CIAM for one of our applications. Users are external to the company (i.e., ‘consumers’). Users are initially created by IT, as the app is not open to the general public.
Everything works fine so far and, in addition to authentication, we are using Entra External ID for authorization as well. For that, we use regular Entra groups that travel to the app in OIDC claims, so once the user has successfully authenticated, the app gets the group membership(s) as well.
Here comes the question:
We now want a non-IT, business user to manage authorizations (i.e., group memberships). The options we are considering are:
1) Provide the business user access to the Entra External ID console, with a heavily restricted role that will only allow them to manage users of a certain app (in general, a limited collection of apps).
2) Create a (web) application that handles user authorization management. It would basically show the list of users and the group membership of each, and allow making modifications to them.
For option 2), we would like to keep it “CIAM agnostic”, meaning we don’t want to solve it via something like the MS Graph API, for instance. Instead, we would like (if possible) a solution based on standards such as OIDC. We are open to using any other standard protocol, such as SAML.
We don’t know if either option is actually feasible, or if there is a better approach that should be considered. Any ideas about how we can handle this?
Thank you all in advance for your help.
Drag and Drop on Favorites Bar causes canary to crash 132.0.2909.0
You cannot rearrange favorites using drag and drop on the Favorites Bar; it causes an instant crash.
Not sure when it started, because I don’t rearrange things that often.
Excel data segregation
Hi All,
I have orders data in Sheet 1, and it needs to be segregated into different sheets based on the segment in Column H (e.g., Consumer, Corporate, or Home Office). Further data analysis will be conducted in the individual sheets by adding additional columns. All data should appear in the respective individual sheets. Since the data in Sheet 1 is dynamic, any changes made there should automatically reflect in the respective sheets. What would be the best approach to achieve this without using a macro?
NCrypt Usage
Are there any official restrictions on NCrypt Usage other than not calling from service main? For example, is it OK to use NCrypt from a service outside of service main?
More new languages supported in Microsoft 365 Copilot
This month we rolled out support for an additional 12 languages in Microsoft 365 Copilot: Bulgarian, Croatian, Estonian, Greek, Indonesian, Latvian, Lithuanian, Romanian, Serbian (Latin), Slovak, Slovenian, and Vietnamese. Microsoft 365 Copilot now supports a total of 42 languages.
There are a few noteworthy items in this latest set of languages. Very early in October, we already introduced support for Welsh and Catalan. It’s also important to note that the rollout of Indonesian and Serbian, which began in mid-October, will not reach all customers until early November. Finally, users working in Serbian will see Teams meeting transcripts in Cyrillic rather than Latin script. This is an issue we’re working to resolve, and we will provide customers with updates on progress toward providing Teams meeting transcripts for Serbian in Latin script as appropriate. Learn more about supported languages for Microsoft Copilot here.
We are always improving and refining Copilot’s language capabilities. We are also continuing to expand the list of supported languages, with plans to offer support for even more languages in the coming months.
Feature request – note field for AAGUID
Dear Microsoft Team,
I am writing to request a feature enhancement for MS Entra. Specifically, it would be highly beneficial to have a note field associated with each enabled AAGUID. Currently, it is challenging to identify the device corresponding to each AAGUID.
Adding this feature would greatly improve the usability and management of devices within MS Entra.
Thank you for considering this request. I look forward to your response.
Best regards,
Martin
SharePoint: Couldn’t resolve user…
I have a fresh SharePoint setup (E5 Developer License). Both the organisation settings within the Microsoft 365 admin portal and the site settings within the SharePoint portal allow sharing with anyone. However, I cannot share a file with anyone external; I just receive the following error: “Couldn’t resolve user <external address>”. Pretty frustrating, as this seems so simple, but I cannot for the life of me determine what’s going on.
Any help would be appreciated
AI Activity Explorer Not Showing Content
I am getting an error indicating “Additional permissions required. Your role can’t view AI Visits or user risk levels. For permission, ask an administrator to change your role.”
I am currently an Entra Global Admin, Entra Compliance Admin, and Purview Compliance Admin, and have other roles. Based on the dashboard graph, I should be seeing data. What other roles may be necessary, or what other configurations may be missing?
Hidden Worksheets
I am trying to write a macro in Excel to manipulate Hidden and Visible worksheets and am running into a problem. The Help on this subject reads as follows:
Notes: To select multiple sheets do either of these:
Press and hold CTRL, then click the items to select them.
Press and hold SHIFT, then use the up and down arrow keys to adjust your selection.
Neither of these options works.
My goal here is to select all Hidden sheets and make them visible so I can work with them. Can anyone shed some light on this?
Outlook allowed a message I had read to be recalled with no notification
Today I opened an email and read it. It suddenly disappeared and was nowhere to be found in my Deleted Items. I checked my deleted folder, searched everything, and it was gone. I checked with my colleague who sent it, and she confirmed she recalled it. But I had received no notification that it was recalled or that a recall was attempted. I checked everywhere, and there was no notification of a recall.