Tag Archives: microsoft
June 2024 update on Azure AD Graph API retirement
One year ago, we shared an update on the completion of a three-year notice period for the deprecation of the Azure AD Graph API service. This service is now in the retirement cycle and retirement (shutdown) will occur in incremental stages. In the first stage of this retirement cycle, newly created applications will receive an error (HTTP 403) for any requests to Azure AD Graph APIs. We’re revising the date for this first stage from June 30 to August 31, and only applications created after August 31, 2024 will be impacted. After January 31, 2025, all applications – both new and existing – will receive an error when making requests to Azure AD Graph APIs, unless they’re configured to allow extended Azure AD Graph access.
We understand that some apps may not have fully completed migration to Microsoft Graph. We’re providing an optional configuration through the authenticationBehaviors property, which will allow an application to use Azure AD Graph APIs through June 30, 2025. Azure AD Graph will be fully retired after June 30, 2025, and no API requests will function at this point, regardless of the application’s configuration.
If you develop or distribute software that still uses Azure AD Graph APIs, you must act now to avoid interruption. You’ll either need to migrate your applications to Microsoft Graph (highly recommended) or configure the application for an extension, as described below, and ensure that your customers are prepared for the change. If you’re using applications supplied by a vendor that use Azure AD Graph APIs, work with the software vendor to update to a version that has migrated to Microsoft Graph APIs.
How do I find Applications in my tenant using Azure AD Graph APIs?
The Microsoft Entra recommendations feature provides recommendations to ensure your tenant is in a secure and healthy state, while also helping you maximize the value of the features available in Entra ID.
We’ve provided two Entra recommendations that show information about applications and service principals that are actively using Azure AD Graph APIs in your tenant. These new recommendations can support your efforts to identify and migrate the impacted applications and service principals to Microsoft Graph.
For more information, reference Recommendation to migrate to Microsoft Graph API.
Configuring an application for an extension of Azure AD Graph access
To give a newly created application extended access to Azure AD Graph APIs through June 30, 2025, you must make a configuration change on the application after it’s created. This change is made through the authenticationBehaviors property. By setting the blockAzureADGraphAccess flag to false, the newly created application can continue to use Azure AD Graph APIs until later in the retirement cycle.
Note: In this first stage, only Applications created after August 31, 2024 will be impacted. Existing applications will be able to continue to use Azure AD Graph APIs even if the authenticationBehaviors property is not configured. Once this change is rolled out, you may also choose to set blockAzureADGraphAccess to true for testing or to prevent an existing application from using Azure AD Graph APIs.
Microsoft Graph REST API examples
Read the authenticationBehaviors property for a single application:
GET https://graph.microsoft.com/beta/applications/afe88638-df6f-4d2a-905e-40f2a2d451bf/authenticationBehaviors
Set the authenticationBehaviors property to allow extended Azure AD Graph access for a new Application:
PATCH https://graph.microsoft.com/beta/applications/afe88638-df6f-4d2a-905e-40f2a2d451bf/authenticationBehaviors
Content-Type: application/json
{
"blockAzureADGraphAccess": false
}
Microsoft Graph PowerShell examples
Read the authenticationBehaviors property for a single application:
Import-Module Microsoft.Graph.Beta.Applications
Connect-MgGraph -Scopes "Application.Read.All"
Get-MgBetaApplication -ApplicationId afe88638-df6f-4d2a-905e-40f2a2d451bf -Property "id,displayName,appId,authenticationBehaviors"
Set the authenticationBehaviors property to allow extended Azure AD Graph access for a new Application:
Import-Module Microsoft.Graph.Beta.Applications
Connect-MgGraph -Scopes "Application.ReadWrite.All"
$params = @{
authenticationBehaviors = @{
blockAzureADGraphAccess = $false
}
}
Update-MgBetaApplication -ApplicationId afe88638-df6f-4d2a-905e-40f2a2d451bf -BodyParameter $params
What happens to applications using Azure AD Graph after August 31, 2024?
Any existing applications that use Azure AD Graph APIs and were created before this date will not be impacted at this stage of the retirement cycle.
Any applications created after August 31, 2024 will encounter errors when making requests to Azure AD Graph APIs, unless the blockAzureADGraphAccess attribute has been set to false in the authenticationBehaviors configuration for the application.
What happens to applications using Azure AD Graph after January 31, 2025?
After January 31, 2025, all applications – new and existing – will encounter errors when making requests to Azure AD Graph APIs, unless the blockAzureADGraphAccess attribute has been set to false in the authenticationBehaviors property for the application.
What happens to applications using Azure AD Graph after June 30, 2025?
Azure AD Graph APIs will no longer be available to any applications after this point, and any requests to Azure AD Graph APIs will receive an error, regardless of the authenticationBehaviors configuration for the application.
Current support for Azure AD Graph
Azure AD Graph APIs are in the retirement cycle and have no SLA or maintenance commitment beyond security-related fixes.
About Microsoft Graph
Microsoft Graph represents our best-in-breed API surface. It offers a single unified endpoint to access Entra and Microsoft 365 services such as Microsoft Teams and Microsoft Intune. All new functionalities will only be available through Microsoft Graph. Microsoft Graph is also more secure and resilient than Azure AD Graph.
Microsoft Graph has all the capabilities that have been available in Azure AD Graph and new APIs like identity protection and authentication methods. Its client libraries offer built-in support for features like retry handling, secure redirects, transparent authentication, and payload compression.
What about Azure AD and Microsoft Online PowerShell modules?
As of March 30, 2024, the AzureAD, AzureAD-Preview, and Microsoft Online (MSOL) PowerShell modules are deprecated and will only be supported for security fixes. These modules will be retired and stop working after March 30, 2025. You should migrate any scripts that use them to Microsoft Graph PowerShell. Please reference this update for more information.
Available tools
Migrate from Azure Active Directory (Azure AD) Graph to Microsoft Graph
Azure AD Graph app migration planning checklist
Azure AD Graph to Microsoft Graph migration FAQ
Kristopher Bash
Product Manager, Microsoft Graph
Learn more about Microsoft Entra
Prevent identity attacks, ensure least privilege access, unify access controls, and improve the experience for users with comprehensive identity and network access solutions across on-premises and clouds.
Microsoft Entra News and Insights | Microsoft Security Blog
Microsoft Entra blog | Tech Community
Microsoft Entra documentation | Microsoft Learn
Microsoft Entra discussions | Microsoft Community
Microsoft Tech Community – Latest Blogs –Read More
Windows 365 Cross-region Disaster Recovery generally available
Disaster recovery is a critical consideration for any IT desktop strategy. When it comes to remote desktops, the majority of organizations consider disaster recovery a primary objective. Since its introduction, Windows 365 has provided robust business continuity and disaster recovery options. Whether for compliance requirements, natural disasters, technical failure, or human error, putting greater distance between your primary and backup environments can add an extra sense of security and peace of mind to any IT desktop strategy.
We are excited to introduce Windows 365 Cross-region Disaster Recovery, a Windows 365 add-on feature that creates “snapshots” of Cloud PCs. These snapshots are placed in customer-defined, geographically distant locations, and they can be recovered to Cloud PCs running in the selected location during a disaster recovery event.
Windows 365 Cross-region Disaster Recovery is especially relevant for industries and organizations that are highly regulated, or that have users or workflows that require geographic distance between primary and backup locations.
Configuration and use
Unlike many traditional disaster recovery solutions, Windows 365 Cross-region Disaster Recovery was designed to be configured and used with minimal—or even no—prior disaster recovery experience. Configuration can be completed in a few minutes. In the event of an outage, recovery may be activated with just a few clicks and typically in less than five minutes.
In addition to configuration and activation, Windows 365 Cross-region Disaster Recovery has been integrated into various reports and flows. Reports alert administrators if an outage has taken place and provide full context of the configuration and status of each Cloud PC using Windows 365 Cross-region Disaster Recovery. After the outage is resolved, administrators are notified and can deactivate Cross-region Disaster Recovery in minutes.
How do I get the Windows 365 Cross-region Disaster Recovery add-on?
Windows 365 Cross-region Disaster Recovery is provided as an add-on license to Windows 365 Enterprise SKUs. It is not currently available for any other Windows 365 SKU.
In the United States, pricing for the Windows 365 Cross-region Disaster Recovery add-on is $5 per user, per month. It can be applied to the Enterprise Cloud PCs that the user is licensed to use. Please contact sales for pricing in other regions.
FAQ
Q: Are the geographies and regions available for Windows 365 Cross-region Disaster Recovery limited?
A: In general, no: any geography or region where Windows 365 is available may be used as a backup region, and the administrator can select any of those areas. Administrators should carefully consider the location of Cloud PC users, as well as data sovereignty, when selecting backup regions.
Q: If a user has multiple Cloud PCs, can each device have a different Windows 365 Cross-region Disaster Recovery configuration?
A: No. At this time, all Cloud PCs associated with a user will have the same Windows 365 Cross-region Disaster Recovery configuration.
Q: What are the recovery time objective (RTO) and recovery point objective (RPO) for Windows 365 Cross-region Disaster Recovery?
A: The RPO is defined by the cadence of point-in-time restore snapshots. The RTO is four hours for tenants with up to 50,000 Cloud PCs in a region, and Cross-region Disaster Recovery is expected to scale with larger deployments to maintain that four-hour RTO.
Next Steps
Learn more about:
Windows 365 Cross-region Disaster Recovery
Point-in-time restore for Windows 365 Enterprise
Windows 365 and Azure network connections
Azure regions and zones
Continue the conversation. Find best practices. Bookmark the Windows Tech Community, then follow us @MSWindowsITPro on X and on LinkedIn. Looking for support? Visit Windows on Microsoft Q&A.
Connection Reliability in Azure Virtual Desktop Insights
We are thrilled to announce that the Connection Reliability tab in Azure Virtual Desktop Insights is now generally available. IT administrators can now monitor the connection resilience between users and Azure Virtual Desktop host pools. This gives administrators a simpler experience when it comes to understanding disconnection events and correlations between errors that affect their end users.
The Connection Reliability tab provides two primary visuals.
The first is a graph that analyzes and plots the number of disconnections over the concurrent connections during a given time range. This allows administrators to easily detect clusters of disconnects that are impacting connection reliability. Administrators can also analyze connection errors by different pivots—for example client version and IP range—to determine the root cause of disconnects and improve connection reliability.
The second visual provides a table of the top 20 disconnection events and lists the top 20 specific time intervals where the most disconnections occurred. Administrators can select a row in the table to highlight specific segments of the chart to view the disconnections that occurred during those time segments.
To experience the benefits of the Azure Virtual Desktop Insights Connection Reliability tab, sign in to Azure Virtual Desktop Insights and navigate to the Connection Reliability tab. More information can be found here.
Our team is dedicated to enhancing Azure Virtual Desktop Insights and expanding its capabilities to address the evolving needs of our users. We encourage you to explore the features of the Connection Reliability tab and share your experiences to help us guide future development of this and other Azure Virtual Desktop Insights features.
Stay up to date! Bookmark the Azure Virtual Desktop Tech Community.
How to remove the credential for legacy Threat Detection feature from Azure SQL Database
(Written on May 30th, 2024)
If you come across a credential named something like 'https://xxyyzz.blob.core.windows.net/sqldbtdlogs' in the sys.database_scoped_credentials view of your Azure SQL Database and are unsure of its purpose, it is likely related to the legacy Threat Detection feature. This feature monitored and detected threats to your Azure SQL Database, generating reports stored in the sqldbtdlogs container in the storage account xxyyzz.
You can further verify this by checking the container for a folder named like ‘SqlDbThreatDetection_Audit_xxxxx’.
Previously, this credential was automatically added to the sys.database_scoped_credentials view when Threat Detection was enabled and removed when it was disabled. However, Threat Detection has been deprecated and replaced by Microsoft Defender for Azure SQL, which offers more extensive and holistic monitoring and threat detection capabilities.
If you find this credential still present in your Azure SQL Database, it might have been missed during the transition from Threat Detection to Microsoft Defender for Azure SQL. If you confirm it is no longer in use and want to remove it, note that you cannot simply use the DROP DATABASE SCOPED CREDENTIAL command, as it will result in an error.
This design likely prevents the unintended removal of the credential, which would cause Threat Detection to fail. The credential should automatically be dropped once Threat Detection is disabled.
Since Threat Detection can no longer be enabled or disabled through the Azure portal due to its deprecation, you can use the Azure CLI command az sql db threat-policy to disable it.
Here’s a demonstration:
1. Confirm the Credential Exists:
2. Check Threat Detection Status:
(If it shows ‘Disabled’, but the credential is present, you can still proceed to the next step to disable the feature again to drop the credential.)
3. Run the command to disable the feature to drop the credential:
4. Confirm the credential is no longer present:
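The screenshots from the original demonstration aren’t reproduced here, but the four steps can be sketched with T-SQL and the Azure CLI. The server, database, and resource group names below are placeholders; substitute your own:

```shell
# 1. Confirm the credential exists (T-SQL, run here via sqlcmd with Entra ID auth)
sqlcmd -S myserver.database.windows.net -d mydb -G \
  -Q "SELECT name FROM sys.database_scoped_credentials;"

# 2. Check the Threat Detection status
az sql db threat-policy show \
  --resource-group MyResourceGroup --server myserver --name mydb

# 3. Disable the feature, which drops the credential
az sql db threat-policy update \
  --resource-group MyResourceGroup --server myserver --name mydb \
  --state Disabled

# 4. Confirm the credential is no longer present
sqlcmd -S myserver.database.windows.net -d mydb -G \
  -Q "SELECT name FROM sys.database_scoped_credentials;"
```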
(The end of this post)
Location data when exporting from MS Lists to Excel
Hi Everyone
I am just working with Lists and I need some advice on formatting the location information so it looks more presentable.
This is an example of what is exporting:
{“EntityType”:”LocalBusiness”,”LocationSource”:”Bing”,”LocationUri”:”https://www.bingapis.com/api/v6/localbusinesses/YN1029x7555097221641136189“,”UniqueId”:”https://www.bingapis.com/api/v6/localbusinesses/YN1029x7555097221641136189“,”IsPreviouslyUsed”:fal
Does anyone have any advice or a solution to this?
Thanks in advance !
Attack surface reduction – check trigger if possible
Hello,
I configured ASR rules and now reviewing exceptions.
Is it possible to find out what triggers “sc.exe” or “conhost.exe” without checking event viewer on the specific machine? Or we can just exclude paths that we actually see as exceptions and that’s it?
That way we could define the exception more precisely instead of putting “sc.exe” or “conhost.exe” as exception.
Here are 2 paths blocked by the same rule:
C:\Windows\System32\conhost.exe
Block process creations originating from PSExec and WMI commands
C:\Windows\System32\sc.exe
Block process creations originating from PSExec and WMI commands
Thank you!
Project for the web – actual start and finish dates
Hello,
I have started using MS Project for the web / the new Planner for a development project for a team of 10 people. I know there already are the fields start and finish date. However, I would like to add fields, or find a way, to track and report the actual start date and the actual finish date. Ideally, when the user sets a task to “in progress” it would automatically record the actual start date, and when the user marks it as “completed”, it would also record the actual finish date.
Thank you!
Focused Inbox for Contacts Only
I have been searching for an idea listing to have the option of configuring the focused inbox to show only email from contacts, with Other showing everything else. I get an enormous number of unsolicited emails in my focused inbox. I have spent hours trying to “always move them to other”, but that is not effective, efficient, or user friendly. I’d rather add to my contacts the senders whose emails I have agreed to receive. My important emails are getting lost in all the clutter.
Saving custom prompts to copilot lab
I’ve asked both Copilot and ChatGPT for instructions on how to save a custom prompt to Copilot lab. (We have Copilot for Microsoft 365). I am not an admin for our account so I cannot see how your account is set up but I don’t know if I’m not seeing the option to do this because it’s not available or it’s not set up in our account. Can anyone confirm if this is even possible?
How To Revert System Permissions to Default After Locking Myself Out?
I tried to delete a file by taking ownership & got carried away with all the settings. Now it seems I’m locked out of my own drive & my system doesn’t operate properly: I can’t manage my screen brightness, download apps, or uninstall apps, & only a handful of settings are working currently. I have gone through a dozen restarts hoping it would go back to normal & nothing. So how do I restore the system permissions & ownership settings?
Booking 30-minute meetings only on the hour.
Hello,
I work for a company that has a centralized hiring system so we complete many online interviews. Currently, I use a personal booking page but am switching to shared booking page because I want the option to add required questions for my applicants to fill out prior to booking an interview.
However, I am running into an issue with scheduling with the shared booking page. My interview slots are 30 minutes, but on my personal booking page I had it set to limit start times to 1-hr intervals, so it would ONLY book at the top of the hour, and give me 30 minutes in between each interview to prep for the next. I have my Outlook calendar specifically set up so people can only book at 9am, 10am, 11am, 2pm, 3pm, and 4pm. I do not want Bookings to give them the option to book at 9:30, 10:30, etc., because then it becomes too much. Even if I set the time increments to 1 hr while keeping the 30-minute interview time, Bookings still gives the option to schedule on the half hour. Is there a way to get the best of both worlds with the required questions and 1-hr intervals?
Sentinel Region migration
Looking for any documentation to describe Sentinel Region migration. Like from one azure region to another azure region.
Is there a possibility to move Log Analytics workspace data to the new region?
Will that data be usable (can we query it)?
What other major factors should we consider while migrating? Which things can we automate, and which do we need to do manually, when migrating analytics rules, Logic Apps, workbooks, automation rules, watchlists, data connectors, parsers, etc.?
Sync is down on edge canary android
Good morning,
Why hasn’t syncing been working on Edge Canary for Android for a while now? The first step was the inability to enable password synchronization; the second (dated today) completely blocks all synchronization options, disabled by default, and it’s impossible to enable them again. Please explain the reasons for this choice, or deploy a fix quickly. Thank you very much, Microsoft Edge team!
Licensing for guest users in Entra ID
Hi,
We have Active Directory Premium P1 licenses in our tenant and I’d like to know how licensing works for guest users in Entra ID. We are pushing MFA through Conditional Access and I’m trying to figure out whether guest users will need this license for MFA enforcement. I know that if there’s a subscription attached to Entra ID, the licensing is based on MAU, but there is no subscription in our Entra ID yet. I’m a GA in our tenant and I don’t see any subscriptions here.
Any advice would be appreciated.
Building the Ultimate Nerdland Podcast Chatbot with RAG and LLM: Step-by-Step Guide
Large Language Models (LLMs) have become a hot topic in the tech world. For those of us in Belgium (or the Netherlands) with a passion for technology and science, one of the go-to resources is the monthly podcast “Nerdland,” which delves into a variety of subjects from bioscience and space exploration to robotics and artificial intelligence.
Recognizing the wealth of knowledge contained in over 100 episodes of “Nerdland,” a simple thought came to our minds: why not develop a chatbot tailored for “Nerdland” enthusiasts? This chatbot would leverage the content of the podcasts to engage and inform users. Want to know more about the project with Nerdland? Visit aka.ms/nerdland
This chatbot enables the Nerdland community to interact with the Nerdland content at another level. On top of that, it democratizes the wealth of information in all these podcasts. Since the podcast is in Dutch, its audience around the world is quite limited. Now we can expose this information in dozens of languages, because these LLMs are capable of multi-language conversations out of the box.
In this blog post, I’ll explore the technical details and architecture of this exciting project. I’ll discuss the LLMs utilized, the essential components, the process of integrating podcast content into the chatbot, and the deployment strategy used to ensure the solution is scalable, secure, and meets enterprise standards on Azure.
RAG Principles
Before delving into this article, it’s likely you’ve experimented with ChatGPT or other Generative AI (GenAI) models. You may have observed two notable aspects:
Large Language Models (LLMs) excel at crafting responses that are both articulate and persuasive.
However, the content provided by an LLM might be entirely fictional, a phenomenon known as “hallucinations.”
LLMs are often employed for data retrieval, offering a user-friendly interface through natural language. To mitigate the issue of hallucinations, it’s crucial to ground the LLM’s responses in a specific dataset, such as one you privately own. This grounding ensures that the LLM’s outputs are based on factual information.
To utilize LLMs with your proprietary data, a method called Retrieval Augmented Generation (RAG) is used. RAG combines the natural language prowess of LLMs with data they haven’t been explicitly trained on.
Several components are essential for this process:
Indexing: Your data must be organized in a way that allows for easy retrieval. Indexing structures your data into “documents,” making key data points readily searchable.
Depending on your data size, you may need to split it into smaller pieces before indexing, because LLMs typically only allow a certain number of input tokens (for reference: GPT-3.5 Turbo allows 4,096 tokens; the newest GPT-4o allows up to 128,000 input tokens). Given that LLMs have a finite context window and tokens have a cost, chunking optimizes token usage. Typically, data is chunked in fixed increments (e.g., 1,024 characters), although the optimal size may vary depending on your data.
Intent Recognition: The user’s query is processed by an LLM to extract the “intent,” a condensed version of the original question. Querying the index with this intent often produces more relevant results than using the full prompt. The index is then searched using the intent, yielding the top n documents.
Once the relevant documents are identified, they are fed into a Large Language Model. The LLM then crafts a response in natural language, drawing upon the information from these documents. It ensures that the answer is not only coherent but also traces back to the original data sources indexed, maintaining a connection to the factual basis of the information.
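The chunking step described above can be sketched in a few lines of Python. This is a minimal fixed-size chunker using a 1,024-character chunk and an arbitrary 128-character overlap (assumed values; production pipelines often split on token or sentence boundaries instead):

```python
def chunk_text(text: str, chunk_size: int = 1024, overlap: int = 128) -> list[str]:
    """Split text into fixed-size chunks; consecutive chunks share `overlap`
    characters so a sentence cut at a boundary survives intact in one of them."""
    if chunk_size <= overlap:
        raise ValueError("chunk_size must be larger than overlap")
    chunks, step = [], chunk_size - overlap
    for start in range(0, len(text), step):
        chunks.append(text[start:start + chunk_size])
        if start + chunk_size >= len(text):
            break
    return chunks

transcript = "word " * 1000  # stand-in for a 5,000-character podcast transcript
chunks = chunk_text(transcript)
print(len(chunks), max(len(c) for c in chunks))
```

The overlap means no information is lost at chunk boundaries: concatenating the chunks while dropping each one’s first 128 characters reconstructs the original text.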
Keyword matching is a fundamental aspect of data retrieval, yet it has its limitations. For instance, a search for “car” might yield numerous references to “cars” but overlook related terms like “vehicles” due to the lack of direct word correlation. To enhance search capabilities, it’s not just exact keyword matches that are important, but also the identification of words with similar meanings.
This is where vector space models come into play. By mapping words into a vector space, “car” and “vehicle” can be positioned in close proximity, indicating their semantic similarity, while “grass” would be positioned far from both. Such vector representations significantly refine the search results by considering semantically related terms.
Embedding models are the tools that facilitate the translation of words into their vector counterparts. These pretrained models, such as the ADA model, encode words into vectors.
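To make the “car”/“vehicle”/“grass” intuition concrete, here is cosine similarity over hand-made 3-dimensional vectors. These toy vectors are illustrative only; a real embedding model such as ADA produces 1,536-dimensional vectors:

```python
import math

def cosine_similarity(a: list[float], b: list[float]) -> float:
    # Cosine similarity: dot product divided by the product of the norms.
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

# Hand-made vectors standing in for real embeddings.
vectors = {
    "car":     [0.90, 0.80, 0.10],
    "vehicle": [0.85, 0.82, 0.15],
    "grass":   [0.05, 0.10, 0.95],
}
print(cosine_similarity(vectors["car"], vectors["vehicle"]))  # high: semantically close
print(cosine_similarity(vectors["car"], vectors["grass"]))    # low: semantically distant
```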
Integrating vector search with the Retrieval Augmented Generation (RAG) model introduces additional steps:
Initially, our index consists solely of textual documents. To enable vector-based searching, our data must also be vectorized using the pretrained embedding models. These vectors are then indexed, transforming our textual index into a vector database.
Incoming queries are likewise converted into vectors through an embedding model. This conversion allows for a dual search approach within our vector database, leveraging both keyword and vector similarities.
Finally, the top ‘n’ documents from the index are processed by the LLM, which synthesizes the information to generate a coherent, natural language response.
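The dual keyword-plus-vector search can be sketched as follows. The letter-frequency “embedding”, the 0.7/0.3 weighting, and the sample documents are all stand-ins for illustration, not the real pipeline (which uses a proper embedding model and a vector database):

```python
import math
from collections import Counter

def toy_embed(text: str) -> list[float]:
    # Stand-in for a real embedding model such as ADA: a letter-frequency vector.
    counts = Counter(ch for ch in text.lower() if ch.isalpha())
    return [float(counts.get(chr(ord("a") + i), 0)) for i in range(26)]

def cosine(a: list[float], b: list[float]) -> float:
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb) if na and nb else 0.0

def keyword_score(query: str, doc: str) -> float:
    # Fraction of query words that appear verbatim in the document.
    q, d = set(query.lower().split()), set(doc.lower().split())
    return len(q & d) / max(len(q), 1)

def hybrid_search(query: str, docs: list[str], top_n: int = 2) -> list[str]:
    # Combine keyword and vector similarity, then return the top-n documents.
    qv = toy_embed(query)
    scored = [(0.7 * keyword_score(query, d) + 0.3 * cosine(qv, toy_embed(d)), d)
              for d in docs]
    return [d for _, d in sorted(scored, key=lambda s: s[0], reverse=True)[:top_n]]

docs = [
    "car battery range and charging",
    "the future of public transport",
    "mowing the grass in summer",
]
print(hybrid_search("car battery technology", docs))
```

In a real RAG system, the top-n documents returned here would then be passed to the LLM as grounding context.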
RAG for Nerdland Assistant
Our Nerdland Assistant is a Retrieval Augmented Generation (RAG) chatbot, uniquely crafted to harness the rich content from the podcast archives. To achieve this, we’ve combined a suite of Azure components, each serving a distinct purpose in the chatbot’s architecture:
Container Apps: These are utilized to host custom logic in the form of containers in a serverless way, ensuring both cost-effectiveness and scalability.
Logic Apps: These facilitate the configuration of workflows, streamlining the process with ease and efficiency.
Azure OpenAI: This serves as a versatile API endpoint, granting access to a range of OpenAI models including ChatGPT4, ADA, and others.
AI Search: At the core of our chatbot is an index/vector database, which enables the sophisticated retrieval capabilities necessary for the RAG model.
Storage Accounts: A robust storage solution is essential for housing our extensive podcast library, ensuring that the data remains accessible and secure.
The journey of each episode through the Nerdland Assistant begins when a new MP3 file is uploaded to an Azure storage account, triggering a series of automated workflows.
A Logic App is triggered, instructing a Container App to convert the stereo MP3 to mono format, a requirement for the subsequent speech-to-text conversion.
Another Logic App initiates the OpenAI Whisper transcription batch API, which processes the MP3s with configurations such as language selection and punctuation preferences.
A third Logic App monitors the transcription progress and, upon completion, stores the results back in the storage account.
A fourth Logic App calls upon a Python Scrapy framework-based Container App to scrape additional references from the podcast’s shownotes page.
The final Logic App sets off our custom indexer, hosted in a Container App, which segments the podcast transcripts into smaller chunks and uploads them to Azure AI Search.
Each chunk is then crafted into an index document, enriched with details like the podcast title, episode sponsor, and scraped data.
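The chunking step can be sketched as follows. The chunk size, overlap, and the metadata fields shown (episode id, title) are illustrative placeholders, not the values used in the actual indexer:

```python
def chunk_transcript(text, chunk_size=500, overlap=50):
    """Split a transcript into overlapping chunks (sizes are illustrative)."""
    chunks = []
    start = 0
    while start < len(text):
        chunks.append(text[start:start + chunk_size])
        # Overlap keeps sentences that straddle a boundary searchable.
        start += chunk_size - overlap
    return chunks

# Each chunk becomes one index document, enriched with episode metadata.
documents = [
    {"id": f"ep42-chunk-{i}", "content": c, "title": "Nerdland Episode 42"}
    for i, c in enumerate(chunk_transcript("..."))
]
```

Overlapping chunks cost some index space but avoid losing context at chunk boundaries, which matters when a retrieved chunk is all the LLM gets to see.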
These documents are uploaded to Azure AI Search, which employs the ADA model to convert the text into vector embeddings, effectively transforming our index into a vector database.
While Azure AI Search is our chosen platform, alternatives like Azure Cosmos DB, Azure PostgreSQL, or Elastic could also serve this purpose.
With our vector database now established atop our podcast episodes, we’re ready to implement Retrieval Augmented Generation (RAG). There are several approaches to this, such as manual implementation, using libraries like LangChain or Semantic Kernel, or leveraging Azure OpenAI APIs and Microsoft OpenAI SDKs.
The choice of method depends on the complexity of your project. For more complex systems involving agents, plugins, or multimodal solutions, LangChain or Semantic Kernel might be more suitable. However, for straightforward applications like ours, the Azure OpenAI APIs are an excellent match.
We’ve crafted our own RAG backend using the Azure OpenAI APIs, which simplifies the process by handling all configurations in a single stateless request. This API abstracts much of the RAG’s complexity and requires the following parameters:
LLM Selection: For the Nerdland Copilot, we’re currently utilizing GPT-4o.
Embedding Model: Such as ADA, to vectorize the input.
Parameters: These include settings like temperature (to adjust the creativity of the LLM’s responses) and strictness (to limit responses to the indexed data).
Vector Database: In our case, this is the AI Search, which contains our indexed data.
Document Retrieval: The number of documents the vector database should return in response to a query.
System Prompt: This provides additional instructions to the LLM on the desired tone and behavior, such as “answer informally and humorously, act like a geeky chatbot.”
User Prompt: The original question posed by the user.
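Taken together, these parameters make up a single stateless request. The sketch below assembles such a request body in the shape of the Azure OpenAI "use your own data" chat completions extension; the resource names are hypothetical placeholders, and the exact field names should be verified against the current API version before use:

```python
import json

# Hypothetical resource names; field names follow the Azure OpenAI
# "use your own data" extension and may differ per API version.
payload = {
    "messages": [
        # System Prompt: tone and behavior instructions for the LLM.
        {"role": "system",
         "content": "Answer informally and humorously, act like a geeky chatbot."},
        # User Prompt: the original question posed by the user.
        {"role": "user",
         "content": "What did the podcast say about black holes?"},
    ],
    "temperature": 0.7,  # creativity of the LLM's responses
    "data_sources": [
        {
            "type": "azure_search",  # Vector Database: our AI Search index
            "parameters": {
                "endpoint": "https://<search-resource>.search.windows.net",
                "index_name": "nerdland-episodes",
                "strictness": 3,        # limit responses to the indexed data
                "top_n_documents": 5,   # Document Retrieval count
                "embedding_dependency": {
                    "type": "deployment_name",
                    "deployment_name": "ada-embeddings",  # Embedding Model
                },
            },
        }
    ],
}

body = json.dumps(payload, indent=2)
```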
The backend we created in step 9, deployed on a Container App, is abstracted by an API Management layer which enhances security, controls the data flow, and offers potential enhancements like smart caching and load balancing for OpenAI.
To maintain a record of chat interactions, we’ve integrated a Redis cache that captures chat history via session states, effectively archiving the chat history of the Large Language Model (LLM). This server-side implementation ensures that the system prompts remain secure from any end-user modifications.
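A minimal sketch of this server-side history, with a plain dict standing in for the Redis cache (function and key names are illustrative, not the actual implementation):

```python
import json

# In production this dict is a Redis cache keyed by session id.
store = {}

SYSTEM_PROMPT = {"role": "system", "content": "Act like a geeky chatbot."}

def get_history(session_id):
    """Load the chat history for a session, seeding it with the system prompt."""
    raw = store.get(session_id)
    return json.loads(raw) if raw else [SYSTEM_PROMPT]

def append_turn(session_id, role, content):
    """Append one chat turn and persist it (a Redis SET in production)."""
    history = get_history(session_id)
    history.append({"role": role, "content": content})
    store[session_id] = json.dumps(history)

append_turn("abc", "user", "Hi!")
```

Because the system prompt is injected server-side and never round-trips through the client, end users cannot tamper with it.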
The final touch to our backend is its presentation through a React frontend hosted on an Azure Static Web App. This interface not only provides a seamless user experience but also offers the functionality for users to view and interact with the sources referenced in each LLM-generated response.
This entire setup is fully scripted as Infrastructure as Code. We utilize Bicep and the Azure Developer CLI to template the architecture, ensuring that our solution is both robust and easily replicable.
LLM Configuration
The quality of the LLM's answers is significantly shaped by several factors: the system prompt, the LLM parameters (such as temperature and max tokens), the chunk size, and the robustness of the indexing method.
Each of these parameters strongly influences the outcome. The only way to improve the results is to adjust the parameters and systematically assess the effect of each change. To structure this process, you can use PromptFlow, which lets you repeat this tuning cycle while tracking the quality of the results per configuration.
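In its simplest form, that tuning cycle is a sweep over configurations scored against a fixed test set. The sketch below uses placeholder `ask_llm` and `score_answer` functions standing in for a real call to the RAG backend and a real quality metric (such as a PromptFlow evaluation flow):

```python
from itertools import product

# Placeholders: a real harness would call the RAG backend and a proper
# evaluation metric here.
def ask_llm(question, temperature, chunk_size):
    return f"answer({question}, t={temperature}, c={chunk_size})"

def score_answer(answer, expected):
    return 1.0 if "t=0.2" in answer else 0.5  # toy scoring for illustration

test_set = [("Who hosts Nerdland?", "Lieven Scheire")]

# Sweep every combination of the parameters under test.
results = []
for temperature, chunk_size in product([0.2, 0.7], [256, 512]):
    scores = [score_answer(ask_llm(q, temperature, chunk_size), expected)
              for q, expected in test_set]
    results.append({"temperature": temperature, "chunk_size": chunk_size,
                    "avg_score": sum(scores) / len(scores)})

best = max(results, key=lambda r: r["avg_score"])
```

PromptFlow automates exactly this loop: it records each configuration alongside its scores, so you can compare runs instead of eyeballing individual answers.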
Responsible AI
When deploying an application that makes use of generative AI, adhering to Responsible AI principles is crucial. These Microsoft principles guide the ethical and safe use of AI technologies: https://learn.microsoft.com/en-us/legal/cognitive-services/openai/overview
A key advantage of utilizing Azure OpenAI endpoints is the built-in safety content filters provided by Microsoft. These filters function by processing both the input prompts and the generated outputs through a sophisticated array of classification models. The goal is to identify and mitigate the risk of producing any content that could be deemed harmful.
Future of the project and GenAI
Triggered by the above? Feel free to explore it yourself at github.com/azure/nerdland-copilot.
The journey of developing this custom Assistant was a time-constrained endeavor that laid the groundwork for a basic yet functional system. The possibilities for expansion are endless, with potential future enhancements including:
Integration of models like GPT-4o: Enabling speech-based interactions with the bot, offering a more dynamic and accessible user experience.
Data enrichment: Incorporating a broader spectrum of (external) data to enrich the chatbot’s knowledge base and response accuracy.
Quality optimization: Embedding LLMOps (for example with: PromptFlow) into the application’s core to fine-tune the LLM’s performance, coupled with leveraging real-time user feedback for continuous improvement.
Incorporating graph libraries would enable the AI to present answers that are not only informative but also visually compelling, particularly for responses that involve statistical analysis. This would make data interpretation more intuitive for users.
Embracing the adage that “a picture is worth a thousand words,” integrating the ability for the AI to communicate through images and videos could improve the way we interact with the technology.
The concept of creating a podcast from scratch by combining various topics is an exciting prospect. It would allow for the generation of unique and diverse content, tailored to the interests and preferences of the individual. A possible way of achieving this is with "agents": each agent is an LLM dedicated to one specific task (a specific model, prompt, and so on), and multiple agents work together in a coordinated setup.
Want to know more? Contact us!
Other sources
https://aka.ms/nerdland
https://learn.microsoft.com/en-us/azure/ai-services/openai/concepts/use-your-data?tabs=ai-search
https://github.com/Azure-Samples/azure-search-openai-demo
https://github.com/Azure/Vector-Search-AI-Assistant another example, with AKS and CosmosDB
Microsoft Tech Community – Latest Blogs –Read More
Logic Apps Aviators Newsletter – July 2024
In this issue:
Ace Aviator of the Month
Customer Corner
News from our product group
News from our community
Ace Aviator of the Month
July’s Ace Aviator: Diogo Formosinho
What is your role and title? What are your responsibilities associated with your position?
I work as an Integration Developer at DevScope. My primary responsibility is to develop solutions based on client requirements. This involves analyzing client needs, designing and implementing integration solutions, and ensuring that data flows smoothly between different systems. My role also emphasizes collaboration with cross-functional teams to ensure that our solutions are aligned with overall business objectives.
Can you provide some insights into your day-to-day activities and what a typical day in your role looks like?
As a developer, my day-to-day activities vary depending on the clients I’m working with. Recently, I’ve been focused on a client based in Canada, which has shaped my daily routine. My morning is dedicated to development and working on active projects. This is when I’m most focused and productive. Whether it’s creating logic apps or testing, the morning hours are crucial for making significant progress on the project. I often start by reviewing my tasks and setting goals for what I want to achieve by lunchtime.
The afternoon is reserved for meetings and collaboration. Working with a Canadian client means I need to sync my schedule to accommodate time zone differences. These meetings are essential for aligning and discussing project updates, and addressing any issues that arise. After meetings, I use the remaining time to make necessary changes based on the feedback received. This can involve tweaking, fixing bugs, or refining what is necessary. The late afternoon is a good time for this kind of work as it allows me to address immediate concerns and ensure that the project stays on track.
By balancing focused development time with collaborative meetings, I ensure that I stay productive and responsive.
What motivates and inspires you to be an active member of the Aviators/Microsoft community?
My motivation to share with the community comes from the potential of technology. The collaborative environment, where knowledge is shared and collective problem-solving thrives, constantly inspires me. Being part of a community that values growth, learning, and mutual support motivates me to keep sharing things that may help other professionals in their work.
Looking back, what advice do you wish you would have been told earlier on that you would give to individuals looking to become involved in STEM/technology?
The advice I wish I had received earlier is to have an open mind. It’s crucial to view failures and setbacks as learning opportunities rather than endpoints. Maintaining curiosity, continuously expanding your knowledge, and asking questions can greatly accelerate your learning curve.
What are some of the most important lessons you’ve learned throughout your career that surprised you?
One surprising lesson is the critical role of documentation. Initially, I believed that having a deep understanding of Logic Apps and their capabilities was sufficient. However, I quickly learned that clear, detailed documentation is indispensable. Good documentation not only helps in maintaining and scaling applications but also aids in troubleshooting and onboarding new team members. It ensures that the logic behind each app is transparent and accessible, which is crucial for long-term project sustainability and team collaboration.
Imagine you had a magic wand that could create a feature in Logic Apps. What would this feature be and why?
If I had a magic wand, I would create a feature in Logic Apps that enables integration with a wider range of AI and machine learning models. As someone with a master’s degree in artificial intelligence engineering, I understand the value of incorporating advanced analytics and predictive capabilities into workflows. This feature would allow users to easily integrate AI-powered insights into their applications without needing extensive data science expertise. By democratizing access to these insights, users could unlock new levels of efficiency, innovation, and decision-making.
Customer Corner:
SPAR NL readies for the future of retail with Azure Integration Services
Check out this customer success story with SPAR, a leading retail innovator, and how they’re revolutionizing their operations with Microsoft Azure Integration Services. Azure Logic Apps plays a crucial role in SPAR’s digital transformation journey by automating complex workflows and facilitating real-time data exchange between internal systems and external partners. Read more about how this integration has not only enhanced operational efficiency but also improved agility, enabling SPAR to respond swiftly to market demands and customer needs.
News from our product group:
Azure Logic Apps Community Standup – June 2024
Missed June’s Community Standup live last week? Catch up here in this recording and mark your calendar for July’s standup on the 26th.
Event Grid Trigger: Validation handshake Failed on Event subscription deployment
Seeing event subscription deployments fail on the validation handshake in workflows with Event Grid triggers? Check out this article for a solution to handling the validation request.
Retrieve a Consumption Logic App workflow definition from deletion
Learn more about a recovery method for Consumption where you can retrieve the definition after deletion.
Logic App Standard Storage issues investigation using Slots
Having issues with inaccessible storage? We might have an option to help using Slots.
Azure Logic Apps PeekLock caching and Service Bus queue Lockduration
Check out this article about optimizations when integrating the "When messages are available in a queue (peek-lock)" Logic App trigger with an Azure Service Bus queue.
Announcing: Public Preview of Resubmit from an Action in Logic Apps Consumption Workflows
We are excited to introduce Resubmit Action in the Consumption SKU. Read more about this long-awaited feature in this article.
Announcing: General Availability of Azure API Center extension for Visual Studio Code
Read more about our exciting news about Azure API Center extension for Visual Studio Code now being generally available.
Announcement: Introducing .NET C# Inline Action for Azure Logic Apps (Standard) – Preview
Check out this article about our new capability that allows developers to write .NET C# script right within the Logic Apps designer in Azure Portal.
Templates for Azure Logic Apps Standard: Seeking Your Feedback on UI Wireframes
Wanting to preview the new Templates for Azure Logic Apps? We’re looking for your feedback!
Announcement!! Azure OpenAI and Azure AI Search connectors are now Generally Available (GA)
We are thrilled to announce the general availability of Azure OpenAI and AI Search connectors for Logic Apps. Read more here.
Announcing the Public Preview of the Azure Logic Apps Rules Engine!
Learn how to effectively implement mission-critical solutions with the new Azure Logic Apps Rules Engine.
Integration Environment Update: Introducing Unified Monitoring and Business Process Tracking Update
Check out this article about our exciting new capability in Integration Environment that allows you to monitor Azure Integration Services.
Announcement: Introducing .NET 8 Custom Code support for Azure Logic Apps (Standard) – Preview
We are excited to announce that we now support .NET 8 for custom code in Logic App Standard.
Logic Apps Standard – New Hybrid Deployment Model (Preview)
Read about our exciting new Hybrid Deployment Model for Logic Apps Standard that allows you to run Logic Apps workloads on customer managed infrastructure.
Integrate GPT4o (Azure Open AI) in Teams channel via Logic App with image supportability
Read this post to learn how to upgrade to GPT-4o with capability for image processing.
Advanced Scenarios with the 3270 Design Tool: Arrays and Screen collection with Robert Beardsworth
Watch this video with Harold Campos and Robert Beardsworth as they discuss the 3270 Design tool and demonstrate advanced scenarios to handle Arrays and Screens collection.
News from our community:
Remove Wasteful Processing in Logic Apps
Video by Mike Stephenson
Watch Mike discuss a scenario with Logic Apps where you can have wasteful processing of unchanged records. Learn how to optimize the cost of the Logic App to keep it efficient.
Friday Fact: New Logic App Designer (in GA) Enables Copy and Paste Actions
Post/Video by Luís Rigueira
Read this post or watch the video by Luis about how to use the copy and paste actions with the new generally available Designer in Logic Apps.
Generative AI Capabilities for Logic Apps Standard with Azure OpenAI and AI Search Connectors
Post by Steef-Jan Wiggers
Read this article from Steef-Jan discussing the general availability of Azure OpenAI and Azure AI Search connectors for Logic Apps Standard.
Post/Video by Diogo Formosinho
Learn from this month’s Ace Aviator Diogo about how to manage rate limits when it comes to External APIs.
Friday Fact: Consistency in Logic App Trigger Names Ensures Successful Resubmissions
Post/Video by Luís Rigueira
Read or watch this simple yet important tip and trick from Luis when it comes to testing your Logic Apps on Azure Portal.
Post/Video by Sandro Pereira
Implementing advanced routing scenarios? Learn about this trick from Sandro when it comes to using Conditional Split with Custom Expressions inside Logic Apps.
Get Started with Azure Logic Apps Standard using Visual Studio Code | Local Development with VS Code
Video by Sri Gunnala
Learn how to kickstart with Azure Logic Apps Standard in VS Code including prerequisites, a simple example workflow, and deployment to Azure in this getting started video from Sri.
Integration Insider: Integration Modernization with AIS – From Lift & Shift to Full Modernization
Video by Derek Marley and Tim Bieber
Watch another edition of Integration Insider where Derek and Tim reveal a new Integration Anti-Pattern emerging that enterprise organizations need to be aware of and how to combat it.
Post by Sandro Pereira
Read about a solution Sandro found for an error in a Logic Apps deployment related to misconfigured managed identities in API connections.
Can an ADF Pipeline trigger upon source table update?
Hi,
Is it possible for an Azure Data Factory Pipeline to be triggered each time the source table changes?
Let’s say I have a ‘copy data’ activity in a pipeline. The activity copies data from TableA to TableB. Can the pipeline be configured to execute whenever source TableA is updated (a record deleted, changed, a new record inserted, etc..)?
Thanks.
Why did the new Teams remove the feature that lets you know who has already joined a meeting?
In the classic Teams, you used to be able to see who had joined a meeting that already started. They’ve removed this in the new Teams.
Is there a workaround? Not sure why they’d remove helpful key features like this….
I’ve tried to use Teams classic, but it is now defunct 🙂
Copilot in Excel: Unlocking Insights from Data
You’ve seen how Copilot in Excel can help write complex formulas. Today, let’s delve into a dataset containing US birth data from 2000 – 2014 to learn how Copilot in Excel can help us format data, analyze data, and create visualizations.
1. First, we’d like to ask Copilot to format our data for better readability
We’d like to go from this:
Day of Week   Births
6             9083
7             8006

To this:

Day of Week   Births
Saturday      9,083
Sunday        8,006
Prompt: “Convert the days of week into words. For example, 1 is Monday. Additionally, add thousand separators into the birth column.”
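Outside Excel, the same clean-up can be sketched in a few lines of Python, using the day numbering assumed by the prompt (1 is Monday, so 6 and 7 are Saturday and Sunday):

```python
# Day numbering as stated in the prompt: 1 = Monday ... 7 = Sunday.
DAY_NAMES = {1: "Monday", 2: "Tuesday", 3: "Wednesday", 4: "Thursday",
             5: "Friday", 6: "Saturday", 7: "Sunday"}

rows = [(6, 9083), (7, 8006)]

# Convert day numbers to words and add thousand separators to the births.
formatted = [(DAY_NAMES[day], f"{births:,}") for day, births in rows]
```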
2. Next, let’s ask Copilot a question about our data
Prompt: “What are the top 10 days with the lowest birth rate and give a rationale.”
With this prompt, Copilot creates a table with the days with the lowest birth rate, and it gives us an explanation that December 25th is a major US holiday, which means hospitals may have limited staff and schedule fewer elective births.
3. Finally, we’d like Copilot to help us create a visualization of our data to help us uncover more insights
One of the most powerful ways to understand data is through visualization. Copilot makes it easy to create compelling visuals that can highlight trends and patterns in the birth data.
Prompt: “Create a line graph that graphs the number of births by year. Grouped by the days of the week.”
Here we see that Copilot creates a line graph for us, with each line representing a day of the week. We can see that weekdays have higher birth rates than weekends.
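The grouping behind that chart (one line per day of the week, indexed by year) can be sketched with toy records; the numbers below are illustrative, not the real 2000-2014 dataset:

```python
from collections import defaultdict

# Toy records: (year, day_of_week, births) -- illustrative values only.
records = [(2000, "Saturday", 9083), (2000, "Wednesday", 13000),
           (2001, "Saturday", 8900), (2001, "Wednesday", 13200)]

# One series per day of week, keyed by year: the shape a line graph needs.
series = defaultdict(dict)
for year, day, births in records:
    series[day][year] = births
```

Plotting each `series[day]` as its own line reproduces the weekday-vs-weekend pattern the chart makes visible.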
*Disclaimer: If you try these types of prompts and they do not work as expected, it is most likely due to our gradual feature rollout process. Please try again in a few weeks.
Your feedback helps shape the future of Excel. Please leave a comment below with thoughts or questions on content related to this blog post. Additionally, please let us know how you like a particular feature and what we can improve upon—“Give a compliment” or “Make a suggestion”. You can also submit new ideas or vote for other ideas via Microsoft Feedback.
Subscribe to our Excel Blog and the Insiders Blog to get the latest updates. Stay connected with us and other Excel fans around the world – join our Excel Community and follow us on X, formerly Twitter.
2024 Microsoft Nonprofit Partner of the Year Awards announced
Microsoft Partner of the Year Awards celebrates the outstanding achievements and innovation by partners. These partners play a crucial role in delivering transformative solutions that address complex challenges and drive success for customers around the globe.
The rigorous selection process focuses on partners who have demonstrated excellence in innovation and implementation of customer solutions based on Microsoft technology. The importance of their work cannot be overstated. They enhance the capabilities of technology to empower organizations by saving time and reaching farther.
Winner
Valorem Reply: Pioneering technology for social good
Valorem Reply is deeply committed to nonprofits. Their team of Microsoft-dedicated practitioners is uniquely equipped to understand and tackle the challenges faced by their customers, accelerating value from tech investments.
As part of the Global Reply network and a Microsoft Cloud Solutions Partner, Valorem Reply enables nonprofits with innovative technology. Their strategic focus on harnessing Microsoft technology to imagine and create cutting-edge solutions is a testament to their belief in using the best tools to forge a better future by empowering nonprofit organizations.
The collaboration between Valorem Reply and their customer CARE illustrates the transformative power of technology. CARE, a global humanitarian organization, collects and analyzes structured and unstructured survey data across 100+ countries to determine the preparedness of high-risk countries for upcoming emergencies.
By developing an OpenAI sentiment analysis application, Valorem Reply equipped CARE with a powerful tool to enhance emergency readiness and decision-making, reducing human error and delivery times, and providing timely insights for risk mitigation. The sentiment analysis application is a strategic tool for emergency preparedness, enabling CARE to make informed decisions that could make the difference between life and death.
The customer outcomes speak for themselves:
20% higher accuracy in AI-analyzed data, enabling informed decisions and enhancing business strategies.
40% reduction in qualitative survey data analysis time and resource requirements.
Our winner’s journey is one of passion, expertise, and unwavering dedication to making a positive impact. It’s a story of how technology, when aligned with humanitarian goals, can become a force for good, transforming lives and shaping a more equitable and safe future for all. This is the essence of Valorem Reply—a company that truly embodies the spirit of innovation for social good.
Finalists
Exigo Tech: Strong partnerships for mission-driven success
Exigo Tech champions the transformative power of technology for nonprofits, partnering closely with clients like Samaritans of Singapore (SOS) to revolutionize their information technology (IT) systems. Their commitment is to create strong partnerships with clients to produce successful technological outcomes.
SOS provides confidential emotional support to individuals considering suicide. SOS was overwhelmed with requests for emotional support across various channels, which delayed their ability to assist those in need. To address this, Exigo Tech helped SOS implement Microsoft Dynamics 365 to centralize and streamline communication, allowing SOS to quickly categorize and assign cases to the right mental health professionals. This efficient system ensured timely support, with Dynamics 365 omnichannel customer service module doubling the response rate and providing 24/7 virtual support, demonstrating the critical role of technology in mental health services.
KPMG: Powered Enterprise for nonprofit transformation
KPMG leverages their Powered Enterprise suite to thoughtfully transition clients to the cloud, focusing on transforming functions with technology as the key enabler. This preconfigured and tested model includes leading practices, detailed job descriptions optimized for Dynamics 365, and a wealth of artifacts to drive organizational change. It allows rapid delivery of Microsoft solutions with minimal configuration, ensuring speed to value, reduced risks, and future-proof integrity and security. Collaborating closely with Microsoft, KPMG enhances the Microsoft Cloud for Nonprofit, benefiting organizations worldwide.
One such organization is the Australian Red Cross (ARC). ARC needed to unify and update legacy systems that had created data silos and disparate platforms; Microsoft Cloud for Nonprofit enabled a unified system that allows for faster data insights and scalable growth. Using KPMG’s Powered Enterprise approach, the solution was implemented in just nine months and provided ARC with clarity, direction, and a de-risked approach at every step.
Wipfli: Amplifying impact through AI innovation
Wipfli is revolutionizing global change by harnessing Microsoft AI capabilities across Azure, Modern Work, Business Applications, and Power Platform. Their innovative approach includes AI algorithms in Fundraising and Engagement, Dynamics 365 Copilot, and Microsoft Fabric, which are pivotal in data strategy and AI-powered data aggregation and visualization. This enables organizations to leverage technology for impactful change and societal advancement.
Wipfli helped Junior Achievement (JA) lay AI foundations with Fabric. JA leveraged Microsoft technology to optimize its curriculum for 4.4 million students, streamline operations, and foster community engagement while ensuring data privacy for its one million volunteers. This scalable model has been successfully implemented by JA and can be adopted by other national organizations to improve efficiency in collaboration with local branches.
Wipfli brought together complex environments from 102 local offices, strategically scaling JA’s global impact by creating a unified organization. The firm also employed Power Pages, Nonprofit Common Data Model, Microsoft Cloud for Nonprofit, and Dynamics 365 Customer Insights (marketing module) to develop a new volunteer portal. This innovation has resulted in significant operational improvements for Junior Achievement, including:
The elimination of a three-month manual data validation process, now handled efficiently through the new system.
A drastic reduction in the time required to organize 100,000 annual events—from a multi-day manual process to digital registration that takes less than five minutes.
These advancements have not only enhanced JA’s operational capabilities but also set a precedent for other organizations seeking similar transformations. Wipfli’s use of Microsoft tools has proven to be a game-changer in the nonprofit sector, enabling organizations to focus more on their mission and less on administrative burdens.
The winners and finalists of the 2024 Nonprofit Partner of the Year Award are shining examples of how technology can be leveraged for social good, improving lives, and fostering a more equitable and sustainable world. Their dedication to the global nonprofit community and transformation using Microsoft tools is changing the world, one impactful solution at a time.
Congratulations to all the nominees, finalists, and winners of the Partner of the Year 2024 in the nonprofit category. Your commitment to excellence and innovation is truly inspiring and sets a benchmark for others in the industry.