Tag Archives: microsoft
How to Fix QBDBMGRN Not Running on This Computer Error?
Troubleshooting Solutions: QBDBMGRN Not Running on This Computer
Encountering the “QBDBMGRN Not Running on This Computer” error in QuickBooks can be frustrating, but there are several troubleshooting steps you can take to resolve it:
1. Restart QuickBooks Services: Begin by restarting the QuickBooks Database Server Manager and related services. Go to the Windows Start menu, type "Services" in the search bar, and press Enter. Locate the QuickBooksDBXX service (XX denotes the version of QuickBooks), right-click it, and select Restart. Repeat this process for related services such as QBCFMonitorService.
2. Update QuickBooks: Ensure that both QuickBooks and QuickBooks Database Server Manager are updated to the latest versions. Outdated software may run into compatibility issues that lead to the "QBDBMGRN Not Running on This Computer" error. Visit the official QuickBooks website or use the built-in update feature within the software to download and install the latest updates.
3. Check the QuickBooks Installation: Verify the integrity of the QuickBooks installation on the affected computer; corrupt or missing files can cause the QBDBMGRN error. Go to the Control Panel, select Programs and Features, find QuickBooks in the list of installed programs, right-click it, and choose Repair. Follow the prompts to repair the installation.
4. Restart the Computer: A simple restart of the computer can often resolve temporary glitches that cause the QBDBMGRN error. After restarting, check whether QuickBooks opens without encountering the error.
5. Check Firewall and Antivirus Settings: Adjust firewall and antivirus settings to ensure they are not blocking QuickBooks processes. Add exceptions for QuickBooks and its associated processes in your firewall and antivirus software so they can function properly; consult the documentation of your firewall or antivirus program for instructions on adding program exceptions.
6. Verify Network Connectivity: Ensure that the computer has proper network connectivity to reach QuickBooks Database Server Manager. Check network cables, routers, and switches for any signs of malfunction, and confirm that the workstation running QuickBooks has network access to the server hosting the company file.
7. Recreate the QuickBooks Network Data (.ND) File: If the issue persists, recreate the .ND file associated with the company file. Close QuickBooks on all workstations, navigate to the folder containing the company file, locate the corresponding .ND file, rename it, and reopen QuickBooks. QuickBooks will automatically recreate the .ND file, potentially resolving any corruption causing the error.
By following these troubleshooting steps, users can effectively address the “QBDBMGRN Not Running on This Computer” error in QuickBooks and resume normal operations without significant disruption. If the problem persists after attempting these solutions, it may be necessary to contact QuickBooks support for further assistance.
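The service restart and .ND rename steps above can also be scripted. Here is a minimal PowerShell sketch, run from an elevated prompt; the service name suffix (33 here) and file paths are placeholders you would replace with your own QuickBooks version and company file location:

```powershell
# Restart the QuickBooks database services (suffix varies by QuickBooks version).
Restart-Service -Name "QuickBooksDB33" -Force
Restart-Service -Name "QBCFMonitorService" -Force

# Rename the Network Data file so QuickBooks recreates it on next launch.
# Path is a placeholder - point it at the folder holding your company file.
$nd = "C:\CompanyFiles\MyCompany.qbw.nd"
if (Test-Path $nd) {
    Rename-Item -Path $nd -NewName "MyCompany.qbw.nd.old"
}
```

Close QuickBooks on all workstations before renaming the .ND file, as described in step 7.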
How to Fix QBDBMGRN Not Running After a Windows 10/11 Update?
Troubleshooting Solutions: QBDBMGRN Not Running
When encountering the “QBDBMGRN Not Running” issue in QuickBooks, it’s crucial to troubleshoot the problem systematically to resume normal operations efficiently. Here are several steps to resolve this issue:
1. Restart QuickBooks Database Server Manager (QBDBSM): Begin by restarting the QuickBooks Database Server Manager. Navigate to the Windows Start menu, type "Services" in the search bar, and press Enter. Locate the QuickBooksDBXX service (XX denotes the version of QuickBooks), right-click it, and select Restart. This action often resolves temporary glitches causing the "QBDBMGRN Not Running" error.
2. Check QuickBooks Database Server Manager Status: Verify whether the QuickBooks Database Server Manager is running properly. Repeat the previous steps to access the Services window, locate the QuickBooksDBXX service, right-click it, and choose Properties. Ensure that the Startup type is set to Automatic and the Service status shows "Running." If not, set the Startup type to Automatic and click Start to initiate the service.
3. Update QuickBooks: Ensure that both QuickBooks and QuickBooks Database Server Manager are updated to the latest versions. Outdated software may run into compatibility issues that lead to the "QBDBMGRN Not Running" error. Visit the official QuickBooks website or use the built-in update feature within the software to download and install the latest updates.
4. Verify Network Connectivity: Confirm that there are no network connectivity issues affecting QuickBooks Database Server Manager. Check network cables, routers, and switches for any signs of malfunction, and ensure that the workstation running QuickBooks has proper network access to the server hosting the company file. Troubleshoot any network-related problems to ensure seamless communication between devices.
5. Firewall and Antivirus Configuration: Adjust firewall and antivirus settings to allow QuickBooks Database Server Manager to function correctly. Add exceptions for QuickBooks and its associated processes in your firewall and antivirus software so necessary connections are not blocked; consult the respective firewall or antivirus documentation for guidance on adding program exceptions.
6. File Hosting Configuration: Verify that the company file is properly hosted on the server and accessible by QuickBooks Database Server Manager. Open QuickBooks on the host computer, navigate to the File menu, select Utilities, and then Host Multi-User Access. Follow the on-screen instructions to ensure proper hosting of the company file.
7. Recreate the Network Data File: If the issue persists, recreate the QuickBooks Network Data (.ND) file associated with the company file. Close QuickBooks on all workstations, navigate to the folder containing the company file, locate the corresponding .ND file, rename it, and reopen QuickBooks. QuickBooks will automatically recreate the .ND file, potentially resolving any corruption causing the error.
By following these troubleshooting steps, users can effectively address the “QBDBMGRN Not Running” issue in QuickBooks and resume normal operations without significant disruption. If the problem persists after attempting these solutions, it may be necessary to contact QuickBooks support for further assistance.
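Steps 2 and 5 above can likewise be sketched in PowerShell from an elevated prompt. The service name suffix and the program path are placeholders for your QuickBooks version and install location:

```powershell
# Make sure the database service starts automatically and is running.
Set-Service -Name "QuickBooksDB33" -StartupType Automatic
Start-Service -Name "QuickBooksDB33"
Get-Service -Name "QuickBooksDB33" | Select-Object Name, Status, StartType

# Add a Windows Firewall exception for the database manager executable.
# Path is a placeholder - adjust it to your actual install location.
New-NetFirewallRule -DisplayName "QuickBooks Database Manager" `
    -Direction Inbound -Action Allow `
    -Program "C:\Program Files\Intuit\QuickBooks\QBDBMgrN.exe"
```

Your antivirus product will have its own mechanism for program exceptions; check its documentation as noted in step 5.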
Guest user can’t add a comment to Planner task
Hello everyone,
as soon as a guest user wants to send a comment in a Teams Planner task, they get the error message “Your session has expired. Can you please log in again?…”.
General access to Teams Planner works for guest users (i.e., they can create and edit tasks, etc.); only sending comments does not work.
We have already checked the solution in the following link and deleted the “MailContact” entry for the user. But it still does not work.
https://learn.microsoft.com/en-us/office/troubleshoot/planner/guests-cannot-comment-assigned-tasks
Is there a suggested solution for this?
Many thanks in advance!
FASTTRACK VS HWM INCENTIVES
Good day,
I need to understand how FastTrack and HWM Incentives tie in together.
And where is the best place to see our tenants’ M365 usage?
This is rather urgent.
Thank you.
I need the best online video downloader for PC Windows 10
I’m currently looking for recommendations on the best online video downloader that is compatible with PC Windows 10. I often need to download videos from various platforms for both personal and professional use, and I’m searching for a safe tool that offers both flexibility and ease of use. Ideally, the downloader should support multiple video formats and provide options for resolution settings. I’d appreciate any suggestions on software or online video downloader services that are secure and efficient, along with any tips on what features to look for when choosing a video downloader. Thank you! Read More
ADF – CDC Creates File and Folder on the same name
Hi Team,
My requirement is simple: if any inserts/updates/deletes happen to one of our Azure SQL Server database tables, the CDC has to generate a .csv file for the increment in Azure Storage.
It is working as expected, but the CDC creates a folder and a file with the same target name.
Here is my CDC Source and Target.
When an insert happens in the database, the CDC generates the incremental file, but it also generates a file outside the folder, as below.
Inside the incr-mist-data/incoming/incr folder as below.
It should create the .csv file under the incr-mist-data/incoming/incr folder, but it also creates a 0 KB file in the incr-mist-data/incoming folder. Please let me know how to avoid this.
Thanks
Rajan
What’s Happening in New Outlook with Offline, Outbox, Sync, and IMAP?
I am sharing my insights about New Outlook features.
Offline Functionality Update
Get ready for enhanced offline productivity! In the Phase 1 rollout, scheduled for May, you’ll access core functions like email viewing and composing, calendar, and contacts, even without Wi-Fi. Phase 2 will introduce Offline Searching. Some add-in features will remain available only online. Watch the video for more details. Also, the date is subject to change. The original rollout was in January.
Outbox Functionality
In April/May, the Outbox gets a makeover. Unlike Classic, where scheduled outgoing messages reside in the Outbox, they will remain in the Drafts folder in New Outlook. Instead, the Outbox “may” become the holding place for Offline mode messages. Microsoft “may” change direction before this feature is completely released. Watch the video for the full explanation.
Sync Button Functionality
You can use the Sync button to manually synchronize your mailbox and ensure your messages match the server. No more drafts stuck in limbo! However, Microsoft plans to enhance the syncing experience. Stay tuned for updates!
IMAP Performance Improvements
Microsoft has intentionally withheld some features in New Outlook for third-party accounts like Gmail, Yahoo, and other IMAP accounts. However, Microsoft solicits your feedback while investigating potential roadblocks for certain features. In the video, I explained Microsoft’s reasons for this decision and directed users to a list of available features in New Outlook and Outlook.com for non-Microsoft accounts.
Delayed IMAP Issues Reported
Some users have reported delays with IMAP email delivery. Microsoft is on it! Watch the video for details.
Full YouTube Video @traccreations4e: https://youtu.be/Mo5FaFnQPQ8
/Teresa 04/28/2024
Microsoft Learn Achievement Code
Hi,
I registered in Microsoft Learn to generate the achievement codes, and I cannot yet see the button/URL to do that.
Could anyone help me?
Thanks in advance.
Michael
php upload files to onedrive, php tutorial
Hello
I am trying to upload files to my OneDrive with PHP.
I used some SDK libraries from GitHub and curl… nothing worked.
So I did the PHP tutorial from Microsoft.
It works; I get an access token.
But the tutorial has no script to upload files.
Does anybody know the script?
My script ends here:
https://learn.microsoft.com/en-us/graph/tutorials/php?tabs=aad&tutorial-step=7
Thank you
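For reference, the upload step the tutorial omits boils down to a single Microsoft Graph call: for files under 4 MB you can PUT the raw bytes to the item path. Sketched here in PowerShell to show the request shape (the token variable and file names are placeholders); the same call translates directly to PHP with curl or the Graph SDK:

```powershell
# Small-file upload (< 4 MB): PUT the raw bytes to the drive item path.
# $accessToken is the token the tutorial code already acquires; paths are placeholders.
$accessToken = "<your-access-token>"
$localFile   = "C:\temp\report.pdf"
$remotePath  = "Documents/report.pdf"

Invoke-RestMethod -Method Put `
    -Uri "https://graph.microsoft.com/v1.0/me/drive/root:/${remotePath}:/content" `
    -Headers @{ Authorization = "Bearer $accessToken" } `
    -InFile $localFile -ContentType "application/octet-stream"
```

Files larger than 4 MB require an upload session (the createUploadSession endpoint) instead of a single PUT.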
Enhancing Azure Connectivity: Sharing PaaS instance across customer tenants on Azure
I’ve come across a scenario where one of my customers using Azure SQL DB wanted to share their database with another customer who was also hosted on Azure. They were struggling to establish site-to-site connectivity so that Customer B could access Customer A’s network, enabling them to connect to the Azure SQL DB via the site-to-site tunnel. Although this can be achieved, there are better ways to share an Azure SQL DB, or any PaaS instance for that matter, with another customer who is using Azure. The same approach can also be used by customers who have multiple Azure AD tenants.
Solution: You may be aware of private endpoints for PaaS instances. They can be configured for multiple types of service: Azure SQL DB, Storage accounts, etc. You can have more than one private endpoint for any resource. For example, you can configure an Azure SQL DB private endpoint in the Contoso network, and similarly create another private endpoint for the same resource in Fabrikam’s VNET. When you configure a PE, you are essentially bringing the PaaS service into Fabrikam’s VNET.
There are multiple benefits to using this:
Private endpoints can be configured in any region:
It can happen that your Azure SQL DB resides in the Azure Central India region while you have created the PE in the South India region of Fabrikam’s VNET. That particular setup isn’t recommended because of latency, but the same private endpoint architecture can be leveraged for other PaaS instances where latency is not a challenge, or where the specific PaaS service is not available in your region. You’ll see a lot of similar deployments when you deploy the OpenAI service.
No peering is required, connectivity happens in the backend:
A common misconception is that you need to peer the VNETs of Contoso and Fabrikam for an app residing in Fabrikam to connect to a DB in the Contoso tenant. That is not the case; there need not be any connectivity between the VNETs at all. As soon as the private endpoint is created in Fabrikam and approved by Contoso, you can connect to that endpoint from any VNET hosted in Fabrikam, as long as it has line of sight to the private endpoint IP address. Even a Fabrikam on-premises location connected via S2S can reach the private endpoint IP address. All connections flow through the Azure backbone and reach the actual PaaS instance without needing VNET peering.
Can work cross tenant:
As mentioned in the architecture, this private endpoint and PaaS DB relationship can work across tenants. The PE can be created in any Entra ID tenant, and the actual PaaS instance can reside in any of the customer tenants. This makes PE connectivity flexible enough for a broad range of use cases.
Please note: not all services support cross-tenant private endpoints; Azure PostgreSQL is one I’ve come across.
Can also be used when customers have conflicting IP addresses:
One architecture pattern with private endpoints: even though Fabrikam has its own IP address space and Contoso might be using a conflicting IP schema, Fabrikam can still create a private endpoint in its own VNET with a unique IP that is limited to its own environment. It doesn’t matter to Contoso whether the PE was created with the same IP address as one in Contoso’s VNET. So you essentially eliminate the need for NAT and complex routing in this scenario.
PE with PaaS is just one example; I’ve seen architectures where we deployed a customer application behind a Standard Load Balancer and exposed it as a Private Link service. A PE pointing to a Private Link service can be created in any VNET, even one with conflicting IP addresses, because there is no VNET peering involved. So exposing your own application via PLS and PE eliminates the need for VNET peering and NAT when you have conflicting IP address spaces.
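For a concrete feel of the cross-tenant flow described above, here is a minimal Az PowerShell sketch from the Fabrikam side. All names, resource IDs, and the subnet choice are hypothetical placeholders; the key detail is the manual connection, which Contoso then approves on its side:

```powershell
# Fabrikam side: request a private endpoint to Contoso's SQL server.
# Resource names, IDs, and subnet are illustrative placeholders.
$sqlId = "/subscriptions/<contoso-sub>/resourceGroups/contoso-rg/providers/Microsoft.Sql/servers/contoso-sql"

$conn = New-AzPrivateLinkServiceConnection -Name "sql-pe-conn" `
    -PrivateLinkServiceId $sqlId -GroupId "sqlServer" `
    -RequestMessage "Access request from Fabrikam"

$subnet = (Get-AzVirtualNetwork -Name "fabrikam-vnet" -ResourceGroupName "fabrikam-rg").Subnets[0]

# -ManualPrivateLinkServiceConnection puts the connection into a pending
# state that the resource owner (Contoso) must approve.
New-AzPrivateEndpoint -Name "contoso-sql-pe" -ResourceGroupName "fabrikam-rg" `
    -Location "australiaeast" -Subnet $subnet `
    -ManualPrivateLinkServiceConnection $conn
```

On the Contoso side, the pending connection is then approved (for example with Approve-AzPrivateEndpointConnection or through the portal) before traffic can flow.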
I hope the scenarios above help you design more complex architectures leveraging private endpoint capabilities.
Happy Learning!
Aquib Qureshi
Technical Specialist
Visit my Blog: www.azuredoctor.com
Building Your Own Copilot for Credit Card Selection
The Scenario
Have you ever found yourself lost in the maze of credit card options, navigating through countless comparison websites, unsure of the accuracy and timeliness of their information? I certainly have. Recently, I embarked on a quest to find the perfect credit card, one that rewards my spending habits with frequent flyer points. However, relying solely on popular comparison platforms left me questioning the reliability of their data, often overshadowed by biased advertisements.
But then, a beacon of hope emerged: the realization that all bank product information is accessible through a common API. With this revelation, I set out to craft my own solution – a personalized Copilot to guide me through the sea of credit card offerings.
Building your own Copilot allows customization so that you can tailor it to your specific needs, industry, or domain, ensuring that it provides more relevant and accurate suggestions. In addition, it is great for data privacy because when using your own data, you maintain control over its privacy and security, avoiding concerns about sharing sensitive information with third-party platforms.
Furthermore, it provides improved performance. With access to your proprietary data, the Copilot can offer insights and suggestions that are more aligned with your organization’s unique challenges and objectives. Moreover, building your own Copilot can potentially provide cost savings in the long run compared to relying on external services, especially as your usage scales up.
Finally, developing your own Copilot allows you to innovate and differentiate your products or services, potentially giving you a competitive advantage in your market.
Now, you might wonder, why not utilize existing tools like Bing Chat or Gemini? While they offer convenience, their reliance on scraped data from comparison websites introduces the risk of inaccuracies and outdated information, defeating the purpose of informed decision-making.
This endeavor is not about dispensing financial advice. Instead, it’s a testament to the power of leveraging your own data to create your own Copilot. Consider it a journey of exploration, driven by curiosity and a desire to learn more about AI.
Important Concepts
Let me highlight some key concepts before we start.
RAG: Retrieval-augmented generation (RAG) enhances generative AI models by integrating facts from external sources, addressing limitations in LLMs. While LLMs excel at responding quickly to general prompts due to their parameterized knowledge, they lack depth for specific or current topics.
Vector Database: A Vector Database is a structured collection of vectors, which are mathematical representations of data points in a multi-dimensional space. This type of database is often used in machine learning and data analysis tasks, where data points need to be efficiently stored, accessed, and manipulated for various computational tasks.
Prompt: In the context of AI and natural language processing, a prompt refers to a specific input or query provided to an AI model to generate a desired output. Prompts can vary in complexity, from simple questions to more elaborate instructions, and they play a crucial role in directing the AI’s behaviour and generating meaningful responses.
LLM stands for Large Language Model. It refers to a class of AI models, such as OpenAI’s GPT (Generative Pre-trained Transformer) models, that are trained on vast amounts of text data to understand and generate human-like text. LLMs are capable of tasks like language translation, text completion, summarization, and more, making them versatile tools for natural language processing applications.
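To make the vector-database idea concrete: embeddings are just arrays of numbers, and "similar" means a high cosine similarity between them. Here is a tiny PowerShell sketch of the comparison a vector index performs at scale:

```powershell
# Cosine similarity between two embedding vectors:
# dot(A, B) / (|A| * |B|). Returns 1.0 for identical directions, 0.0 for orthogonal.
function Get-CosineSimilarity {
    param([double[]]$A, [double[]]$B)
    $dot = 0.0; $na = 0.0; $nb = 0.0
    for ($i = 0; $i -lt $A.Length; $i++) {
        $dot += $A[$i] * $B[$i]
        $na  += $A[$i] * $A[$i]
        $nb  += $B[$i] * $B[$i]
    }
    return $dot / ([math]::Sqrt($na) * [math]::Sqrt($nb))
}

# Toy 3-dimensional vectors; real embeddings have hundreds of dimensions.
Get-CosineSimilarity -A @(1.0, 0.5, 0.0) -B @(0.9, 0.6, 0.1)
```

A vector index like Azure AI Search does the same kind of comparison, but with an approximate nearest-neighbor structure so it never has to score every document.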
The High-Level Design
The following are the resources we will leverage for this scenario.
Logic Apps
Storage Account
Azure OpenAI
Azure AI Search
The following is a high-level diagram of what the solution looks like.
The Implementation
Now let’s break down how this can be accomplished. I will assume you already have all these resources deployed and will just go into the configuration details. However, you can easily deploy everything that is required with the following commands:
New-AzResourceGroup -Name OAIResourceGroup -Location AustraliaEast
New-AzStorageAccount -ResourceGroupName OAIResourceGroup -Name mystorageaccount -Location AustraliaEast -SkuName Standard_LRS -Kind StorageV2 -AllowBlobPublicAccess $false
New-AzLogicApp -ResourceGroupName OAIResourceGroup -Location AustraliaEast -Name MyLogicApp
New-AzSearchService -ResourceGroupName OAIResourceGroup -Name MySearchService -Sku Standard -Location AustraliaEast -PartitionCount 1 -ReplicaCount 1 -HostingMode Default
New-AzCognitiveServicesAccount -ResourceGroupName OAIResourceGroup -Name MyOpenAIResource -Type OpenAI -SkuName S0 -Location AustraliaEast
Note: You may have to install some Az modules if they are not already installed. Also ensure you choose unique names for your resources.
Collecting The Data
Let’s start looking at how we can do the retrieval of the bank product data and make it our own. This is the data that will be used to augment the response you will receive back from the LLM.
Before you start:
Make sure you enable System Managed Identity for your Logic App resource.
Make sure you give the Logic App managed identity the Storage Blob Data Contributor role in the Storage Account, and you create a container named products.
I have created a Logic App with a Recurrence Trigger of once a day.
I also have setup four parallel branches, one for each bank.
Let’s take a deeper look at the first branch. The other branches will be the same except for the URIs we are hitting.
ANZ: https://api.anz/cds-au/v1/banking/products
Westpac: https://digital-api.westpac.com.au/cds-au/v1/banking/products/
CBA: https://api.commbank.com.au/public/cds-au/v1/banking/products/
NAB: https://openbank.api.nab.com.au/cds-au/v1/banking/products
First, we do a GET request to the bank API. If you are replicating this, just enter the values as below: the minimum and maximum API versions and the maximum number of results to return.
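Outside Logic Apps, the same request can be reproduced in PowerShell to inspect the payload. The Consumer Data Standards APIs select the API version via the x-v and x-min-v headers; the header values shown here are illustrative, so check each bank's documentation for the versions it currently supports:

```powershell
# Fetch the public product list; x-v / x-min-v select the API version,
# page-size caps the number of results returned.
$headers = @{ "x-v" = "3"; "x-min-v" = "3" }
$resp = Invoke-RestMethod -Method Get `
    -Uri "https://api.commbank.com.au/public/cds-au/v1/banking/products?page-size=1000" `
    -Headers $headers

# Each product carries a productId we can use to fetch its details.
$resp.data.products | Select-Object productId, name, productCategory
```

This mirrors what the Logic App HTTP action does, and is handy for checking the JSON shape before writing the Parse JSON schema.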
Next, we parse the JSON using the body of the request as input. You can get the schema from the sample responses the banks provide in their API documentation; for example, CBA’s can be retrieved from here.
Now that we have all the bank products, we will iterate through each product, using the productid from the parsed JSON object to retrieve the details about each product. We will then do the same as we did before and parse the JSON. Again, the schema of a product can be retrieved from the bank API documentation.
Finally, we configure where we want those documents to be stored. We are going to use the Logic Apps managed identity to securely connect with the Storage Account and send those documents to a container named products there.
Once we establish that connection, we can specify that the workflow will create the blobs in the products container, name each blob with the product ID, and set the blob content to the product details we got from the bank.
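The naming rule is simple enough to sketch: each product detail document becomes one blob in the products container, keyed by its product ID.

```python
import json

def to_blob(product_detail):
    # Blob name = product ID; blob content = the product detail JSON.
    return product_detail["productId"], json.dumps(product_detail)

name, content = to_blob({"productId": "cc-rewards", "name": "Rewards Credit Card"})
```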
Once you trigger your Logic App workflow and it completes successfully, you will see several new blobs in the products container as per below.
Great, now we have the data to augment our responses!
Azure OpenAI Deployments
Before you start:
Ensure you enable Azure AI Search Managed Identity for your Azure AI Search resource.
Make sure you give the Azure AI Search managed identity the Cognitive Services OpenAI User role in the OpenAI resource.
Ensure you enable System Managed Identity for your Azure OpenAI resource.
Now let’s setup the Azure OpenAI with the models you need.
Open the Azure OpenAI Studio. Go to Deployments and Create two new Deployments. In the first one select GPT-4 and the second one select ADA002 as displayed below.
You are performing this step because these models will be used in the next sections.
The GPT-4 model is the LLM used to generate the responses based on our prompts and the ADA-002 model is the model used to generate the embeddings.
Import and Vectorize Data
Before you start:
Make sure you give the Azure AI Search managed identity the Storage Blob Data Reader role in the Storage Account.
Now you can move to Azure AI Search, where you import and vectorize the data. Azure AI Search takes vector embeddings and uses a nearest-neighbors algorithm to place similar vectors close together in an index. Internally, it creates vector indexes for each vector field.
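Conceptually, a nearest-neighbors lookup reduces to comparing the query embedding with every stored vector. Here is a minimal brute-force cosine-similarity sketch; Azure AI Search uses optimized approximate-nearest-neighbor indexes rather than this linear scan:

```python
import math

def cosine_similarity(a, b):
    # Cosine of the angle between two vectors: dot product over norms.
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

def nearest(query, index):
    # Return the key of the stored embedding most similar to the query.
    return max(index, key=lambda k: cosine_similarity(query, index[k]))

# Toy 2-dimensional "embeddings" (real ada-002 vectors have 1536 dimensions).
index = {"travel credit card": [0.9, 0.1], "home loan": [0.1, 0.9]}
```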
Doing this is very simple. Go to the Azure AI Search resource and click on Import and Vectorize Data.
Select your subscription, the storage account, container and make sure you tick the box to authenticate using the managed identity.
Next, select the Azure OpenAI Service, select the ADA002 model you deployed earlier and make sure authentication type is managed identity, semantic ranker is enabled, and schedule indexing is set to Daily.
The ADA002 model was specifically designed to create the embeddings based on semantic meaning. These embeddings are stored in a vector database which in this case is the Azure AI Search itself.
The Semantic Ranker adds the following:
A secondary ranking over an initial result set that was scored using BM25 or RRF. This secondary ranking uses multi-lingual, deep learning models adapted from Microsoft Bing to promote the most semantically relevant results.
It extracts and returns captions and answers in the response, which you can render on a search page to improve the user’s search experience.
Basically, semantic ranking improves search relevance by using language understanding to re-rank search results.
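In pseudocode terms, the re-ranking step keeps the candidate set from the initial lexical retrieval and just re-orders it. The scoring function below is a trivial stand-in for the deep-learning relevance model:

```python
def rerank(results, semantic_score):
    # results: list of (document, bm25_score) pairs from the initial retrieval.
    # The semantic ranker re-orders them by language-understanding relevance
    # instead of the lexical BM25 score.
    return [doc for doc, _ in
            sorted(results, key=lambda r: semantic_score(r[0]), reverse=True)]

# Trivial stand-in scorer: prefer documents mentioning credit cards.
score = lambda doc: 1.0 if "credit card" in doc else 0.0
ranked = rerank([("cheap flights guide", 2.0), ("travel credit card", 1.5)], score)
```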
You can verify everything worked as expected by checking your indexers in Azure AI Search. The figure below shows the indexer was successfully created with 226 documents indexed.
You can also visualize all vectors by selecting them and hitting the Search button. Notice that here it displays 2,245 documents because those 226 documents are broken down into chunks. Each vector is an array of 1536 floating-point values, which corresponds to the size of the embeddings produced by the ada-002 model.
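The chunking behaviour can be sketched as a sliding window. The chunk size and overlap below are illustrative, not the values Azure AI Search actually uses:

```python
def chunk(text, size=400, overlap=50):
    # Split a document into overlapping windows; each chunk is embedded
    # separately, which is why 226 documents can yield 2,245 vectors.
    chunks, start = [], 0
    while start < len(text):
        chunks.append(text[start:start + size])
        start += size - overlap
    return chunks

pieces = chunk("a" * 1000)
```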
The Flow
From the left:
Define your data sources.
Deploy the models to be used.
Transform your data into embeddings.
Store those embeddings into a vector store.
From the right:
Start with a user question or request (prompt).
That is transformed into embeddings so that we can do semantic and lexical search.
Send it to Azure AI Search to find relevant information.
Send the top ranked search results to the LLM.
Use the natural language understanding and reasoning capabilities of the LLM to generate a response to the initial prompt.
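Put together, the right-hand flow can be sketched as a single function. Here embed, search, and llm are placeholders standing in for the ada-002 deployment, Azure AI Search, and the GPT-4 deployment respectively:

```python
def rag_answer(prompt, embed, search, llm, top_k=3):
    # 1. Turn the user prompt into an embedding.
    query_vector = embed(prompt)
    # 2. Retrieve the top-ranked documents from the index.
    documents = search(query_vector, top_k)
    # 3. Ground the LLM response in the retrieved documents.
    grounded = ("Answer using only this context:\n"
                + "\n".join(documents)
                + f"\n\nQuestion: {prompt}")
    return llm(grounded)
```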
The Prompt
Once all the previous steps have been completed, you are now able to start prompting directly from the Azure AI Studio.
Now you should add your data source to the chat playground so that you ground the model in your data and provide extra context to the LLM. This will help you get more accurate answers back.
From the Chat Playground in Open AI Studio, select Add your data and select + Add a data source.
Select Azure AI Search as your data source, select the subscription where that is located, the Azure AI Search resource, your index which you have created in a previous step, tick the box to Add vector search to this resource and select the ADA002 model we deployed earlier.
Now, select Hybrid + semantic as the search type and an existing semantic configuration.
Great! Let’s ask our first question.
My prompt was that I fly a lot and I want to find out what credit cards are best for me.
Using my data, it replied with the 3 best credit cards, and it even provided references to the data. I clicked on one of the citations and could quickly see where the data was cited from.
Let’s ask BankCheck Copilot to provide a link to the first card.
Awesome, let’s do one more question before we wrap this up.
One thing to note is that by default we are limiting the responses to our own data content as you can see below.
Therefore, if we prompt it, for example, to write a calculator program in python, it will not do it as depicted below.
Conclusion
The post outlined the process of building a personalized AI Copilot using your own data, highlighting the benefits of customization, data privacy, improved performance, cost savings, and innovation. It introduced key concepts like Retrieval-augmented generation (RAG), Vector Database, AI prompts, and Large Language Models (LLMs).
I shared a specific scenario of searching for the best credit card, emphasizing the advantage of using direct bank API data over relying on potentially outdated comparison sites. The technical implementation involved using Logic Apps, Azure OpenAI, and Azure AI Search to collect, store, and analyse bank product data. The post detailed steps for data collection, vectorization, and embedding, followed by a demonstration of querying the system with specific prompts. The outcome was a Copilot that can provide accurate, personalized credit card recommendations, illustrating the power of leveraging proprietary data and AI for tailored solutions.
A special mention to Olaf Wrieden who reviewed and provided some good feedback before I released the final version of the article.
As always, I hope this was informative to you and thanks for reading.
Felipe Binotto, Cloud Solution Architect
References
Retrieval Augmented Generation (RAG)
Australia Consumer Data Standards
ChatGPT Over Your Data by LangChain
Disclaimer
The sample scripts are not supported under any Microsoft standard support program or service. The sample scripts are provided AS IS without warranty of any kind. Microsoft further disclaims all implied warranties including, without limitation, any implied warranties of merchantability or of fitness for a particular purpose. The entire risk arising out of the use or performance of the sample scripts and documentation remains with you. In no event shall Microsoft, its authors, or anyone else involved in the creation, production, or delivery of the scripts be liable for any damages whatsoever (including, without limitation, damages for loss of business profits, business interruption, loss of business information, or other pecuniary loss) arising out of the use of or inability to use the sample scripts or documentation, even if Microsoft has been advised of the possibility of such damages.
Microsoft Tech Community – Latest Blogs –Read More
Train a Simple Recommendation Engine using Azure Machine Learning Designer
Hi, everyone! I am Paschal Alaemezie, a Gold Microsoft Learn Student Ambassador. I am a student at the Federal University of Technology, Owerri (FUTO). I am interested in Artificial Intelligence, Software Engineering, and Emerging technologies, and how to apply the knowledge from these technologies in writing and building cool solutions to the challenges we face. Feel free to connect with me on LinkedIn and GitHub or follow me on X (Twitter).
Have you ever logged in to any of the popular online stores without buying anything there yet? Did you notice that most of the recommended items are similar to the ones you just viewed, or the ones matching your demographics? How about watching a video for a while on any of the popular video streaming platforms? Did you notice that videos similar to the one you just viewed were recommended to you? These are the wonders of recommendation engines that modern industries harness in making their platforms more interactive and achieving satisfying user experiences.
Items such as movies, restaurants, books, shoes, or songs are instances of what might be recommended to users. The user is an entity with item preferences such as a person, a group of persons, or any other type of entity you can imagine.
In this article, we will train a simple recommendation engine using the Azure Machine Learning designer, which is the graphical UI of Azure Machine Learning, and for this purpose, we will need an Azure subscription. In my next article series on AI, I will show you how you can build amazing solutions using the new Azure AI Studio.
If you are a student, you can use your university or school email to sign up for a free Azure for Students account and start building on the Azure cloud with a free $100 Azure credit.
Approaches to Building Recommendation Engines
These are the approaches to building relevant recommendation engines:
The content-based approach: Recommendations are based on the similarity of users or items. Users can be described by properties such as age or gender. Items can be described by properties such as the author or the manufacturer. Typical examples of content-based recommendation systems can be found in online stores.
The Collaborative filtering approach: This approach uses only identifiers of the users and the items. It is based on a matrix of ratings given by the users to the items. The main source of information about a user is the list of the items they have rated and the similarity with other users who have rated the same items. The SVD recommender module in Azure Machine Learning Designer is based on the Singular Value Decomposition algorithm. It uses identifiers of the users and the items, and a matrix of ratings given by the users to the items. It is a typical example of a collaboratively filtered recommender.
The Hybrid approach: This approach combines both the content- and the collaborative-filtering approaches to interact with both user ratings and cold-start users – who are users without ratings. The benefit of this approach is that it optimizes the capabilities of both recommender systems to create a combined recommendation. An example of a hybrid online recommendation engine is Azure AI Personalizer – which enables you to create optimized user experiences and add real-time relevance to product recommendations, with reinforcement learning-based capabilities.
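To make the collaborative idea concrete, here is a toy sketch: with only user IDs, item IDs, and ratings, we can already tell which users have similar tastes. The similarity function is a simple stand-in for measures such as cosine or Pearson similarity:

```python
# Toy ratings matrix: users -> {movie: rating}; missing entries are unrated.
ratings = {
    "alice": {"Matrix": 5, "Titanic": 1, "Inception": 5},
    "bob":   {"Matrix": 4, "Titanic": 2},
    "carol": {"Matrix": 1, "Titanic": 5, "Inception": 2},
}

def similarity(u, v):
    # Agreement on co-rated items (stand-in for cosine/Pearson similarity).
    common = set(ratings[u]) & set(ratings[v])
    if not common:
        return 0.0
    mean_diff = sum(abs(ratings[u][i] - ratings[v][i]) for i in common) / len(common)
    return 1.0 / (1.0 + mean_diff)
```

Because bob rates movies more like alice than like carol, a collaborative recommender would suggest Inception, which alice loved, to bob.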
Activities
We will make use of the Train SVD Recommender module available in Azure Machine Learning Designer to train a movie recommendation engine. We will adopt the collaborative filtering approach: the model learns from a collection of ratings made by users on a subset of a catalogue of movies. Two open datasets available in Azure Machine Learning Designer are used: the IMDB Movie Titles dataset, joined on the movie identifier with the Movie Ratings dataset.
We will both train the engine and score new data, to demonstrate the different modes in which a recommender can be used and evaluated. The trained model will predict what rating a user will give to unseen movies, so we will be able to recommend movies that the user is most likely to enjoy. This is a No-code approach to training recommendation engines using the Azure Machine Learning Designer.
Activity 1: Create a New Training Pipeline
Step 1: Setting up your Azure Machine Learning workspace
In the Azure portal, click on Create a resource.
Search for Azure Machine Learning and select it. In the Azure Machine Learning window, click on Create, and select New workspace.
Step 2: In the Basics section:
For the Resource details:
Select your Subscription from the drop-down menu.
Select your Resource group. If you have any existing resource group, select it from the drop-down menu. Otherwise, click on Create new to create a new resource group, and click OK after that.
For the Workspace details:
Workspace name: Provide any name of your choice, for example, Movie-recommender. The name you choose should be unique in the resource group.
Region: select any region of your choice.
Then, click on Review + create.
When your workspace passes the validation process, click on Create.
When your deployment is completed, click on Go to resource if you want to view your resource.
Step 3: Open Pipeline Authoring Editor
In the Azure portal, open the available machine learning workspace that you provisioned.
In the workspace, scroll down to where you can see the Launch studio button and click on it. It will open the Azure AI Machine Learning Studio in a new tab inside your web browser.
From the studio, select Designer from the navigation pane on the left-hand side. This will open the Designer environment where you can select a new pipeline if there is no existing pipeline.
In the Designer environment, select the Classic prebuilt component. Then click on the Create a new pipeline using classic prebuilt components. This will open a visual pipeline authoring editor.
Step 4: Add Sample Datasets
In the left navigation pane of the Authoring editor, click the Asset library and go to the Component section. Under Component, click on Sample data.
In the Sample data, scroll down to the Movie Ratings, and IMDB Movie Titles. Drag and drop the selected datasets onto the canvas.
Step 5: Join the two datasets on Movie ID
Close the Sample data drop-down menu. From the Data Transformation section in the left navigation, select the Join Data prebuilt module, and drag and drop the selected module onto the canvas.
Connect the output of the Movie Ratings module to the first input of the Join Data module.
Connect the output of the IMDB Movie Titles module to the second input of the Join Data module.
Select the Join Data module. Click the navigation button at the upper right of the canvas to open the Join Data module window.
Select the Edit column link to open the Join key columns for the left dataset editor. Select the MovieId column in the Enter column name field and click Save.
Select the Edit column link to open the Join key columns for the right dataset editor. Select the Movie ID column in the Enter column name field and click Save. Then, close the Join Data window.
Step 6: Select Columns UserId, Movie Name, and Rating using a Python script
From the Python Language section in the left navigation, select the Execute Python Script prebuilt module. Drag and drop the selected module onto the canvas. Then, connect the Join Data output to the input of the Execute Python Script module.
Select Edit code to open the Python script editor, clear the existing code and then enter the following lines of code to select the UserId, Movie Name, and Rating columns from the joined dataset. Ensure best practice by indenting only the second and third lines of your code.
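The referenced snippet does not survive in this copy of the article. In Azure Machine Learning Designer, the Execute Python Script module expects an azureml_main entry point that receives up to two pandas DataFrames and returns a tuple; a three-line sketch matching the description above (column names taken from the joined dataset) would be:

```python
import pandas as pd

# Sketch of the Execute Python Script body: keep only the three columns
# needed by the recommender and return them as a one-element tuple.
def azureml_main(dataframe1=None, dataframe2=None):
    df1 = dataframe1[["UserId", "Movie Name", "Rating"]]
    return df1,
```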
Step 7: Remove duplicate rows with the same Movie Name and UserId
From the Data Transformation section in the left navigation pane, select the Remove Duplicate Rows prebuilt module from the drop-down menu, and drag and drop the selected module onto the canvas.
Connect the first output of the Execute Python Script to the input of the Remove Duplicate Rows module.
Click the navigation button at the upper right of the canvas to open the Remove Duplicate Rows module window. Then select the Edit column link to open the Select columns editor.
Enter the following list of columns to be included in the output dataset: Movie Name, UserId. Then, click Save.
Step 8: Split the dataset into a training set (0.5) and a test set (0.5)
From the Data Transformation section in the left navigation, select the Split Data prebuilt module, drag and drop it onto the canvas, and connect the dataset to the Split Data module.
Click the navigation button at the upper right of the canvas to open the Split Data module window. Ensure that the Fraction of rows in the first output dataset is set to 0.5.
Step 9: Initialize Recommendation Module
From the Recommendation section in the left navigation pane, select the Train SVD Recommender prebuilt module and drag and drop the selected module onto the canvas. Then, connect the first output of the Split Data module to the input of the Train SVD Recommender module.
Click the navigation button at the upper right of the canvas to open the Train SVD Recommender module window. Set Number of factors: 200. This option specifies the number of factors to use with the recommender.
Number of recommendation algorithm iterations: 30. This number indicates how many times the algorithm should process the input data. The default value is 30.
For Learning rate: 0.001. The learning rate defines the step size for learning.
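To illustrate what these three hyperparameters control, here is a toy matrix-factorization trainer using plain stochastic gradient descent. This is not the module's actual implementation (the Train SVD Recommender is based on an SVD algorithm); it is only a sketch of factors, iterations, and learning rate at work:

```python
import random

def train_factors(ratings, n_factors=200, n_iters=30, lr=0.001):
    # ratings: list of (user, item, rating) triples. Learn latent vectors
    # P[user] and Q[item] so that their dot product approximates the rating.
    random.seed(0)
    P = {u: [random.gauss(0, 0.1) for _ in range(n_factors)] for u, _, _ in ratings}
    Q = {i: [random.gauss(0, 0.1) for _ in range(n_factors)] for _, i, _ in ratings}
    for _ in range(n_iters):                      # iterations: passes over the data
        for u, i, r in ratings:
            pred = sum(p * q for p, q in zip(P[u], Q[i]))
            err = r - pred
            for f in range(n_factors):            # factors: latent dimensions
                pu, qi = P[u][f], Q[i][f]
                P[u][f] += lr * err * qi          # learning rate: step size
                Q[i][f] += lr * err * pu
    return P, Q
```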
Step 10: Select Columns UserId, Movie Name from the test set
From the Data Transformation section in the left navigation pane, select the Select Columns in Dataset prebuilt module and drag and drop the selected module onto the canvas. Then, connect the Split Data second output to the input of the Select columns in Dataset module.
Click the navigation button at the upper right of the canvas to open the Select Columns in Dataset module window. Select the Edit column link to open the Select columns editor.
Enter the following list of columns to be included in the output dataset: UserId, Movie Name. Then click Save.
Step 11: Configure the Score SVD Recommender
From the Recommendation section in the left navigation pane, select the Score SVD Recommender prebuilt module and drag and drop the selected module onto the canvas.
Connect the output of the Train SVD Recommender module to the first input of the Score SVD Recommender module, which is the Trained SVD recommendation input.
Connect the output of the Select Columns in Dataset module to the second input of the Score SVD Recommender module, which is the Dataset to score input.
Open the Score SVD Recommender module on the canvas by clicking on the navigation button at the upper right of the canvas. Set the Recommender prediction kind: Rating Prediction. For this option, no other parameters are required.
Step 12: Setup Evaluate Recommender Module
From the Recommendation section in the left navigation pane, select the Evaluate Recommender prebuilt module and drag and drop the selected module onto the canvas.
Connect the Score SVD Recommender module to the second input of the Evaluate Recommender module, which is the Scored dataset input.
Connect the second output of the Split Data module (the test set) to the first input of the Evaluate Recommender module, which is the Test dataset input.
Activity 2: Submit Training Pipeline
In the Authoring editor, ensure that you have AutoSave enabled. Then click on Configure & Submit at the upper right-hand side of your screen.
For the Set up pipeline job window: In the Basics section, click the Create new button under the Experiment name. Type your new experiment name and click the Next button at the bottom of the screen.
In the Inputs & outputs section, click the Next button at the bottom of the screen.
In the Runtime settings section: skip the Default compute. Go to the select compute type and select Compute instance from the drop-down menu. Under the Select Azure ML compute instance, click on Create Azure ML compute instance. The Create compute instance will open in another environment.
In the Create compute instance window, type in your compute name under the Compute name tab. Then, select the CPU button under the Virtual machine type.
While authoring this article, I had to select my virtual machine first to enable the Compute name tab. You may or may not encounter this issue. I selected the Standard_D2_v2 virtual machine for this training. After that, click the Review + Create button at the end of the screen, to take you back to the Runtime settings window.
Back to the Runtime settings window. At the Select Azure ML compute instance, Select the compute instance that you have created. Here, I selected the movie instance from the drop-down menu. Note that your newly created compute instance will take some time to be provisioned and appear in your drop-down menu. Go to the Advanced settings and ensure that the Continue on step failure box is checked. Then, click the Review + Submit button at the end of the screen.
At the Review + Submit section, ensure that your provided details are correct. Then, click the Submit button at the end of the screen.
Activity 3: Visualize Scoring Results
Step 1: When your pipeline is submitted and your model training is completed, at the left navigation pane, go to Jobs under Asset and click on the name of your completed pipeline.
Step 2: Visualize the Scored dataset
Go to the Score SVD Recommender module on the canvas and right-click on it. Select Preview data and click on Scored dataset.
Observe the predicted values under the column Rating.
Step 3: Visualize the Evaluation Results
Go to the Evaluate Recommender module on the canvas and right-click on it. Select Preview data and click on Metric.
Evaluate the model performance by reviewing the various evaluation metrics, such as Mean Absolute Error, Root Mean Squared Error, etc.
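These two headline metrics are easy to compute by hand, which helps when sanity-checking the module's output:

```python
import math

def evaluate(actual, predicted):
    # Mean Absolute Error and Root Mean Squared Error over rating predictions.
    errors = [a - p for a, p in zip(actual, predicted)]
    mae = sum(abs(e) for e in errors) / len(errors)
    rmse = math.sqrt(sum(e * e for e in errors) / len(errors))
    return mae, rmse

mae, rmse = evaluate([4.0, 2.0, 5.0], [3.0, 2.0, 4.0])
```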
Conclusion
In our modern world, recommendation engines play a significant role in enhancing user experiences. These algorithms analyze user data to predict and suggest personalized content, from product recommendations in online stores to movie suggestions on streaming platforms. By adopting a data-driven approach and leveraging machine learning, businesses can create tailored experiences that resonate with users, ultimately driving engagement and retention.
Furthermore, training a recommendation engine with Azure Machine Learning Designer streamlines the process and opens up a world of possibilities for personalized user experiences. As we harness the power of Azure’s tools to refine our models, engaging with communities and resources that foster growth and innovation is equally important.
For enthusiasts and professionals alike, you can leverage these resources to stay informed and inspired as you embark on your AI journey:
Microsoft AI Discord Community is a dynamic space to discuss and share AI-related insights.
Global AI Community offers a platform to connect with peers worldwide.
Azure Samples provides practical code examples to enhance your projects.
Microsoft AI Show delivers the latest updates in AI technology.
If you seek deeper insights, “Mastering Azure Machine Learning” by Christoph Körner and Marcel Alsdorf provides valuable guidance on building robust recommendation systems within the Azure platform.
How to Backup Gmail Emails on External Hard Drive Manually?
To manually backup your Gmail emails on an external hard drive, you can use Google Takeout, a service provided by Google that allows you to export your data from Google services including Gmail. Here’s how to do it manually:
Manual Method Using Google Takeout
Go to Google Takeout: Visit takeout.google.com and log in with your Gmail account.
Select Your Data: By default, all data types are selected. Click on “Deselect all” and then scroll down to select only “Mail.” You can also choose to include all mail data or select specific labels.
Choose File Type and Destination: Choose the file type for your export (e.g., .zip) and how large you want the archive parts to be. Google can split your data into multiple files if it’s very large.
Create Archive: Click on “Next step” and then choose “Send download link via email” under the delivery method. Proceed to create the archive. Google will prepare it, which can take anywhere from a few minutes to several hours depending on the size of your mailbox.
Download to Your Computer: Once your archive is ready, Google will email you a link to download it. Download the archive to your computer.
Transfer to External Hard Drive: Connect your external hard drive to your computer and copy the downloaded .zip files to your drive.
For a quicker and more efficient backup with additional options like direct backup to various file formats, you can use the Advik Gmail Backup Tool.
How to Backup Gmail Emails on External Hard Drive?
Download and install the Advik Gmail Backup Tool.
Launch the tool and log in.
Choose mailbox folders.
Select the email format.
Select the destination and click Backup.
Initiate the backup process. The Advik Gmail Backup Tool will efficiently transfer all your selected Gmail data to the external hard drive. Using the Advik Gmail Backup Tool simplifies the process and reduces the time it takes to manually backup your emails via Google Takeout. Additionally, it provides more file format options and ensures your emails are backed up securely.
Outlook Email getting locked
My O365 email users are getting locked. I unlock them, but a short time later they get locked again.
Valid Client Certificate Policy Blocking Inconsistent
I have all Office365 traffic passing through Cloud Apps via a Conditional Access policy that targets all users, and I want to use valid client certificate to determine whether a device is managed or unmanaged. I tried ‘Hybrid AD Joined’ but no devices that perform a download action are tagged as such.
I’ve created a session policy to block downloading sensitive labelled files via the web browser from Exchange/SharePoint/OneDrive. If I open a test labelled document in Word Online, click Save As and ‘Download a Copy’, I get the block message. If I navigate to OneDrive/My Files in the web browser, click on the 3 dots next to the same test file and click download, the file successfully downloads.
I’ve tried testing on an unmanaged device with Firefox and a managed device with Edge, with the same results.
Can anyone explain why I am getting different outcomes for what is effectively the same action?
Thanks.
Excel: Lookup ‘1’ and return multiple values
This seems rather simple, but I cannot currently find a solution for it.
I am currently in a spreadsheet, which has a column that is returning a binary value on the basis of random sampling.
I need to lookup the value ‘1’, in this column and return all matching values from another ‘item number’ column in the same sheet.
I have tried XLOOKUP and Index Match, but they seem to just be returning the first value in the item number column, where I need each value returned in its own row.
Thanks in advance!
New and improved network topology experience in Network Watcher and Azure Monitor Network Insights
Azure Network Watcher provides network monitoring and troubleshooting capabilities to increase observability and actionable insights. Network Watcher supports four main scenarios: connectivity monitoring detects packet loss and latency; built-in health metrics and topology visualization help locate issues; traffic monitoring tracks network communication patterns; and a diagnostics suite enables troubleshooting.
Efficient management and monitoring of cloud networks is crucial for peak performance, security, and reliability. The blog explains how the new topology experience can help you manage and monitor your cloud network infrastructure with enhanced visualization, simplified monitoring, valuable insights and contextual issue localization capabilities.
What is network topology and why is it important?
Topology has been a much used and appreciated feature of Network Watcher and Azure Monitor Network Insights. This upgrade empowers users to create a unified, interconnected representation of their network deployment across subscriptions, regions, and resource groups, including networking resources, Virtual Machines (VMs), and Virtual Machine Scale Sets (VMSS), along with insights into connectivity and traffic.
Topology helps users understand resource allocation, system context, and enables faster problem solving. Topology becomes a valuable resource for network administrators to understand large scale network architecture for inventory management and easy troubleshooting. It also aids application administrators and DevOps engineers in understanding the application’s network structure and the interconnections among its components and resources.
What’s new in the network topology experience?
The following table compares the capabilities of the classic topology experience with those of the new one.
| Capability | Classic Topology | New Topology |
| --- | --- | --- |
| Available at | Network Watcher | Network Watcher, Network Insights, Virtual Networks |
| Available by default | Yes, no configuration needed | Yes, no configuration needed |
| Cross-region support | ❌ | ✅ |
| Cross-subscription support | ❌ | ✅ |
| Cross-resource-group support | ❌ | ✅ |
| Resource coverage | Limited resources onboarded (VMs, Virtual Networks, Subnets, Network Interfaces, Network Security Groups) | Comprehensive support for Azure networking resources plus VMs and VMSS. See the full list. |
| Resource health and metrics | ❌ | ✅ |
| Resource cross-connectedness | Yes, with limited information | Overlaid with extensive connectivity, traffic, and resource health metrics and insights |
| Loss, latency, and path insights from Network Watcher Connection Monitor | ❌ | ✅ |
| Traffic insights from Network Watcher Traffic Analytics | ❌ | ✅ |
| Troubleshooting capability with network diagnostic tools | ❌ | ✅ |
| Drill down to smaller scoped views such as Virtual Networks, Subnets, and resources | ❌ | ✅ |
| Contextual search for resources | ❌ | ✅ |
How to use the network topology experience?
With the new topology, you can get deep insights into your environment and explore your resources at different levels, such as regions, virtual networks, and subnets, and drill down to in-depth topologies of resources, even complex ones like Azure Virtual Network Manager.
When you select a resource in the topology, that resource and all the others linked to it by edges are highlighted. These edges show the connections between regions and resources, which can be established through virtual network peering, virtual network gateways, and so on. The side pane displays detailed information and properties for the node or resource you have selected.
Out-of-box signals, health, and resource-specific metrics help you identify an affected resource. Comprehensive connectivity insights such as packet loss and latency from Connection Monitor, along with bandwidth usage insights from Traffic Analytics, help users see the whole picture of their environment. Diagnostic tools like Packet Capture, Connection Troubleshoot, and Next Hop are placed in context, so you can diagnose an issue without switching between tools.
What are some use cases for the network topology experience?
Inventory Management
Manage inventory across multiple subscriptions, regions, and resource groups.
Support for Azure networking resources along with VM and VMSS.
Visualization support for Azure Virtual Network Manager pre-deployment security configurations is available.
Actionable Insights
Monitoring metrics and signals for all supported resources are included.
Loss/latency connectivity insights from Connection Monitor available within the topology.
Bandwidth usage and traffic flow information with Traffic Analytics integration.
Issue Localization
Integrated diagnostics tools like Packet Capture, Connection Troubleshoot, Next Hop within the visualization context.
Navigating across the hierarchy, users start at the global view and can drill down to the resource view (which lets you picture even the most complex resource configurations).
Locate impacted resources easily using smart in-context search function within the topology.
How to access the network topology experience?
You can access the new topology experience by navigating to the following locations on the Azure portal:
Network Watcher: Access the new topology experience by opening Topology from the Network Watcher table of contents (TOC) in the Azure portal.
Azure Monitor Network Insights: The refreshed experience is also available at Network Insights under Azure Monitor.
Virtual networks: This experience can also be accessed at the topology tab on the virtual network overview as well as the diagram TOC.
We are excited to offer you the new network topology experience and hope it helps you to manage and monitor your cloud network infrastructure. We appreciate your feedback and suggestions to make this feature better. Please let us know what you think and ask any questions in the comments below or on the Azure Feedback Forum.
How do I download video from any websites on Windows 11?
I’m trying to find a way to download videos from various websites on my Windows 11 system. I’ve noticed that while browsing, there are several videos I come across on different platforms that I would like to save for offline viewing. However, I’m unsure about the tools or methods that are effective and safe to use for this purpose. Could someone recommend any reliable software or a step-by-step method that can assist in downloading videos from a range of websites without compromising on the video quality or the security of my computer? Any help or guidance would be greatly appreciated.
twinapi.appcore.dll causes program crash
When I attempt to use a program (Mushroom Identification), it always stops after a few seconds. The problem seems to be caused by twinapi.appcore.dll. Is there any way to fix this error?
Faulting application name: Insect Identification.exe, version: 1.0.0.0, time stamp: 0x5c63d3f9
Faulting module name: twinapi.appcore.dll, version: 10.0.22621.3527, time stamp: 0xcfee12e9
Exception code: 0xc000027b
Fault offset: 0x00000000000c9a83
Faulting process ID: 0x3A54
Faulting application start time: 0x1DA99DE4F4F995A
Faulting application path: C:\Program Files\WindowsApps\21437Happimoji.MushroomIdentification_1.0.0.0_x64__crpefj3q18kjy\Insect Identification.exe
Faulting module path: C:\WINDOWS\SYSTEM32\twinapi.appcore.dll
Report ID: c953fa9d-72d9-4cb0-b1c9-ae5c2e00de8a
Faulting package full name: 21437Happimoji.MushroomIdentification_1.0.0.0_x64__crpefj3q18kjy
Automating PDF File Generation from Excel Online Using Microsoft Forms Data
How can I include an automatic step in my Microsoft Power Automate flow to generate separate PDF files for each response collected via Microsoft Forms and consolidated in Excel Online? Is there a built-in functionality in Microsoft Forms or Excel Online, or should I integrate a third-party tool to accomplish this?
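Neither Forms nor Excel Online has a built-in per-response PDF export; the usual Power Automate pattern is a Forms "When a new response is submitted" trigger, a step that renders the response into a file (for example "Populate a Microsoft Word template" or "Create file" with HTML), followed by the OneDrive for Business "Convert file" action to produce the PDF. The per-response fan-out itself can be sketched in plain Python; the field names and the CSV export below are hypothetical, and HTML files stand in for the PDF-rendering step:

```python
import csv
import io
import pathlib
import tempfile

# Hypothetical export of the Forms-backed workbook: one row per response.
responses_csv = """ResponseId,Name,Score
1,Alice,90
2,Bob,75
"""

out_dir = pathlib.Path(tempfile.mkdtemp())

# One output file per response -- the same fan-out a flow performs
# with an "Apply to each" loop over the table rows.
for row in csv.DictReader(io.StringIO(responses_csv)):
    body = (f"<h1>Response {row['ResponseId']}</h1>"
            f"<p>{row['Name']}: {row['Score']}</p>")
    (out_dir / f"response_{row['ResponseId']}.html").write_text(body)

created = sorted(p.name for p in out_dir.iterdir())
print(created)  # ['response_1.html', 'response_2.html']
```

In the actual flow, each rendered file would then be passed to the conversion action to obtain the PDF, so no third-party tool is strictly required.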