Category: Microsoft
Blank out and protect columns via worksheet commands
I am 81 years old and my lady is sickly. Among other things she has type 1 diabetes and heart problems.
I am building a worksheet containing foods and their calories and carbs.
I want to blank out her input column when she has completed her food menu and is done inputting. How can I do that?
I also want to protect all columns in the worksheet except the one for her input.
I am using Office Professional Plus 2013 on a PC running Windows 11, connected via wireless.
Can anyone help me? I would be happy to attach a copy of the worksheet, but I don’t see how!
Thanks in advance!
How to Fix QuickBooks Error PS038 after update?
I’m encountering a frustrating issue with QuickBooks Desktop when trying to update my tax tables. Every time I attempt to download the latest updates, I’m getting Error PS038. This problem has halted my processing, and I’m seeking guidance on how to resolve it.
Understanding OKR permissions and privacy: not obvious parent-child logic!
Hi,
Wanted to share our experience and evolving understanding of OKR permissions and privacy to see if it matches with what others see and need.
Step 1 – Viva Goals provides a feature to make Objectives or Key Results “private” by setting the permissions to “Only selected people can view and align”. Given we have some Key Results that expose sensitive financial results, for those KR we set the permissions to just a limited set of people. This all feels good as a little “lock” appears beside the KR.
Step 2 – Some of those private KRs are grouped under a parent Objective that we also make “Private” as everything under the Objective is private. However, some of the private KRs are grouped under Objectives that also have KRs that are public, so those parent objectives are kept “public”.
Outcome: we assumed that all the KRs we made private are indeed private. But to our surprise, this does not seem to be the case. When a private KR is a child of a public Objective, people who are NOT in the permission list can still see the KR and its result. Exactly what we wanted to avoid!
In the case of private KRs that are children of a private Objective, the general public can see the private Objective but cannot open it or see any of its children, which is good! (Even though we thought they would not even see the private Objective.)
So our conclusion is that to make KRs truly private they NEED to be grouped under a private Objective, not a public one. And it needs to be clear that this private Objective is still visible to everyone. This is a relatively acceptable workaround once one is aware of it. However, the Viva Goals user interface is misleading in that it lets the KR creator think the KR is private when it actually is not.
Have others used the OKR privacy settings and seen similar outcomes? Any other experiences or recommendations on how to make sure private OKRs are truly private?
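The visibility rules described above can be captured in a small toy model (a Python sketch of the behaviour we observed, not Viva Goals code; names like `can_view` are our own):

```python
from dataclasses import dataclass, field

@dataclass
class Objective:
    private: bool
    allowed: set = field(default_factory=set)  # viewers allowed when private

@dataclass
class KeyResult:
    parent: Objective
    private: bool
    allowed: set = field(default_factory=set)

def can_view(user: str, kr: KeyResult) -> bool:
    """Visibility as we observed it, not as the lock icon suggests."""
    if kr.parent.private:
        # Children of a private Objective are hidden from non-listed users.
        return user in kr.parent.allowed
    # The surprise: under a public Objective, a "private" KR is still visible.
    return True

finance = Objective(private=False)  # public parent
secret_kr = KeyResult(parent=finance, private=True, allowed={"cfo"})
print(can_view("anyone", secret_kr))  # True - the KR leaks despite its lock
```

In this model, the only way `can_view` returns False for outsiders is when the parent Objective itself is private, which matches the workaround we settled on.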
Three New Capabilities to Modernize your SQL Server Anywhere with Azure Arc | Data Exposed
With the latest enhancements in SQL Server enabled by Azure Arc, you can modernize your SQL Servers outside Azure and also assess options for end-of-support SQL Server 2012 and 2014. On this episode of Data Exposed with Anna Hoffman and Dhananjay Mahajan, we cover:
• Migration Assessment preview to identify the SQL Servers that are ready to migrate to Azure
• Physical core model for unlimited virtualization to save on costs
• Extended Security Updates with a flexible monthly billing model to extend Azure operations and management of your SQL Server
• Azure Copilot, now in pre-preview for Arc-enabled SQL Server
Resources:
View/share our latest episodes on Microsoft Learn and YouTube!
Microsoft Tech Community – Latest Blogs
New Blog | Portal extension for Azure Firewall with DDoS protection
By Saleem Bseeu
Introduction
In the ever-evolving landscape of network security, Azure Firewall has emerged as a key player. As a managed, cloud-based network security service, it provides essential protection for your Azure Virtual Network resources. As cyber threats grow increasingly sophisticated and frequent, the importance of robust security measures like Distributed Denial of Service (DDoS) protection cannot be overstated. DDoS attacks can cripple services, making them unavailable to users, which can have significant business implications. One of the motivations for integrating DDoS protection into the Azure Firewall creation flow is to simplify the process for users. Many users who deploy Azure Firewall also enable DDoS protection to protect their network resources. However, for those who may not be aware of the importance of DDoS protection or prefer a more straightforward setup process, the new creation flow makes it easier to enable this feature. By integrating DDoS protection into the Firewall creation process, users can activate this essential security measure with just a few clicks, enhancing the overall security of their network environment.
The New Azure Firewall Flow Creation (Integrating DDoS Protection)
The new Azure Firewall flow creation process represents a significant advancement in network security management. This process is designed to be user-friendly, providing a more streamlined experience for setting up and managing firewalls. These improvements not only enhance the user experience but also contribute to a more secure network environment.
The new creation process is notable for its integration of DDoS protection, allowing users to activate this feature seamlessly during setup. This integration streamlines the process of enabling DDoS protection on Azure Firewall public IPs, making it easily accessible to users of all skill levels with just a few clicks. When customers activate DDoS Protection, they can enroll in DDoS IP Protection or DDoS Network Protection SKUs. These SKUs provide value-added features and capabilities, beyond the basic platform-level DDoS protection that safeguards Azure’s infrastructure and services. DDoS attacks targeting your applications and resources are mitigated with a profile that is automatically adjusted to your expected traffic volume, along with attack alert notifications, logging and monitoring, cost protection, and DDoS Rapid Response (included with DDoS Network Protection). This ensures that, even in the event of a DDoS attack, services remain available and secure, which is vital in today’s digital environment where service availability can have a direct impact on business operations.
Note: This new creation flow is now available in preview. To access it, use the URL preview.portal.azure.com.
Exploring the New Service Creation Flow
Let’s delve into the new service creation flow and learn how to navigate it. Start by accessing the Firewall service in your Azure portal and initiate the creation of a new Firewall.
This initial step mirrors the process used in the past to create your Firewall. You’ll need to select the resource, name, region, and availability zones that suit your needs. When it comes to Firewall SKU, you’re presented with three options: Standard, Premium, and Basic. To gain a better understanding of which Firewall SKU aligns with your requirements, refer to Choose the right Azure Firewall SKU to meet your needs | Microsoft Learn.
Read the full post here: Portal extension for Azure Firewall with DDoS protection
Error code 0x800ccc1a
Hello,
I have a problem with Outlook. I use Windows 7, and when I start Outlook I get error code 0x800ccc1a and cannot send or receive email.
Thanks for helping,
Best wishes
Laurenz
Delegate
We are trying to use PowerShell to set the delegation of a user’s calendar to another user’s account.
Add-MailboxFolderPermission -Identity [EMAIL_ADDRESS1]:Calendar -User [EMAIL_ADDRESS2] -AccessRights Editor -SharingPermissionFlags Delegate,CanViewPrivateItems
We ran the script; however, it only set the delegation to “Delegate Only”, and we wanted to set it to “Both my delegate and me” so that both people can receive and respond to the meeting invite.
Any way to achieve that?
Save Big on Hosting Your Fine-Tuned Models on Azure OpenAI Service
We’ve heard your feedback loud and clear: folks want to fine-tune their models, but the pricing can make experimentation too expensive. Following our update last month to switch to token-based billing for training, we’re reducing the hosting charges for many of your favorite models!
Starting July 1, we have reduced the hosting charges for many Azure OpenAI Service fine-tuned models, including our most popular models – the GPT-35-Turbo family. For folks less familiar with our service, models need to be deployed before they can be used for inferencing – and when deployed, we charge an hourly rate for hosting them. Don’t need to use your model right away? We store up to 100 non-deployed fine-tuned models per resource, for free!
The new prices are published on the Azure OpenAI Service Pricing page, and listed below:
Base Model           Previous Price   New Price (Effective July 1, 2024)
Babbage-002          $1.70 / hour     $1.70 / hour
Davinci-002          $2.00 / hour     $1.70 / hour (15% off)
GPT-35-Turbo (4K)    $3.00 / hour     $1.70 / hour (43% off)
GPT-35-Turbo (16K)   $3.00 / hour     $1.70 / hour (43% off)
Why do we charge for hosting? When you deploy a fine-tuned model, you’re covered by the same Azure OpenAI SLAs as our base models, with 99.9% uptime, and the model is hosted continuously on Azure infrastructure rather than being loaded on demand. This means that once your model is deployed, there’s no wait for inferencing. And, because you’re paying for your deployment, we charge a relatively low price for inferencing (the same as the equivalent base model).
When comparing different services, you can consider the tradeoff between a fixed price for hosting and a higher per-token rate for inferencing. Because Azure OpenAI has a fixed hosting cost and low inferencing charges, for heavier inferencing workloads it may be much cheaper compared to services that just charge a premium on tokens. For example, if we assume a standard 8:1 ratio for input to output tokens and compare the costs of using a fine-tuned GPT-35-Turbo model, when your workload surpasses ~700K tokens / hour (~12K TPM), Azure OpenAI becomes the cheaper option.
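The break-even arithmetic above can be sketched in a few lines. Note that the per-token premium used below is an illustrative assumption for a hypothetical competing service, not a published price:

```python
def break_even_tokens_per_hour(hosting_per_hour: float,
                               premium_per_1k_tokens: float) -> float:
    """Tokens/hour at which a fixed hourly hosting fee beats a per-token premium.

    Above this volume, paying the fixed hosting fee (plus base-model token
    rates on both sides) is cheaper than paying the premium on every token.
    """
    return hosting_per_hour / premium_per_1k_tokens * 1000

# With the new $1.70/hour hosting fee and a hypothetical premium of
# $0.0024 per 1K tokens, break-even lands near the ~700K tokens/hour
# (~12K TPM) figure quoted above.
tokens = break_even_tokens_per_hour(1.70, 0.0024)
print(f"{tokens:,.0f} tokens/hour")  # roughly 708,333
```

Plugging in your own workload's token volume against a real competing price list tells you which side of the break-even point you sit on.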
We hope this will make it easier for you to use these models and explore their capabilities. Thank you for choosing Azure OpenAI Service. Happy fine tuning!
send messages to a team’s chat using graph api on python
Hi!
I’d like to receive some help.
I want to know if it’s possible to send a message to a chat group. The goal is to write a Python script which, among other things, sends a message to a specific group on Teams (just a chat group, not a channel). I have all the required permissions, but so far I haven’t accomplished what I want.
If what I want is possible, do I have to authenticate every time I run the code, or is there some way to auto-authenticate?
I’ve tried using this guide https://learn.microsoft.com/es-es/graph/api/chatmessage-get?view=graph-rest-1.0&tabs=python
but could not get anything working.
If someone could help me with a code example I’d be grateful.
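For reference, a minimal sketch of the Graph request is below, using only the Python standard library. It assumes you already hold a delegated access token with the ChatMessage.Send permission; obtaining and caching that token (for example with the `msal` package and its token cache, which lets a script refresh silently instead of signing in on every run) is outside this sketch, and the chat ID and token are placeholders:

```python
import json
import urllib.request

GRAPH_BASE = "https://graph.microsoft.com/v1.0"

def chat_message_url(chat_id: str) -> str:
    # POST /chats/{chat-id}/messages targets a group or 1:1 chat
    # (channel messages use a different endpoint).
    return f"{GRAPH_BASE}/chats/{chat_id}/messages"

def build_chat_message(text: str) -> dict:
    # Minimal chatMessage body for a plain-text message.
    return {"body": {"contentType": "text", "content": text}}

def send_chat_message(access_token: str, chat_id: str, text: str) -> bytes:
    payload = json.dumps(build_chat_message(text)).encode("utf-8")
    req = urllib.request.Request(
        chat_message_url(chat_id),
        data=payload,
        headers={
            "Authorization": f"Bearer {access_token}",
            "Content-Type": "application/json",
        },
        method="POST",
    )
    with urllib.request.urlopen(req) as resp:  # network call
        return resp.read()

# Usage (with a real cached token and a real chat ID):
#   send_chat_message(token, "19:...@thread.v2", "Hello from Python!")
```

Sending chat messages generally requires a delegated (user) token rather than an app-only one, which is why a token cache rather than a client secret is the usual answer to the "auto-authenticate" question.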
General Availability: Customer Key Onboarding Service
We are thrilled to announce the General Availability of the Customer Key Onboarding Automation Service, a game-changer for organizations in highly regulated industries. This feature has been designed to streamline and simplify your onboarding process for Customer Key, delivering significant time savings and efficiency improvements.
Key Benefits:
Faster Onboarding: The average onboarding process duration has been reduced from 1.5 weeks to just one hour. You can now seamlessly register your subscriptions, automatically verify the configuration of your Azure Key Vault and subscription resources, and onboard your tenant without needing email communication with Microsoft.
Enhanced Feedback: The service now provides detailed configurations needed for each resource. Intuitive feedback includes specific error messages and guidance on what needs to be fixed, making the process even smoother.
Unsupported Scenarios:
Government tenants
Tenants using managed HSMs
These scenarios still require manual onboarding; however, we plan to expand the service’s capabilities to include them in the future.
For more information on how to use the Customer Key Onboarding Automation Service, please visit Onboard using the Customer Key Onboarding Service.
We are excited about the improvements this service brings and look forward to continuing to enhance your experience with Microsoft 365.
Sincerely,
The M365 Data-at-Rest Encryption Team
Children’s Hospital of Philadelphia transforms fundraising with Moore
Experts at Children’s Hospital of Philadelphia (CHOP) have delivered many firsts in pediatrics—from the first bilateral transplant to the first fetal heart surgery, and the breakthroughs for children continue to happen every day. They developed a new tool to better study genetic variants linked to childhood cancer and other diseases. And they are advancing an in-utero cure for sickle cell disease, which affects one in every 375 African Americans.
“We provide some of the world’s leading pediatric care and research, pioneering approaches that help kids grow up healthier,” says Jon Thompson, Associate Vice President of Philanthropic Strategy and Technology at CHOP. As the nation’s first pediatric hospital, CHOP serves patients from around the world, consults on the most difficult cases at other hospitals, and invents life-saving strategies used across the globe.
To fund this critical work, CHOP sought to maximize its fundraising through deeper constituent relationships. The children’s hospital partnered with Moore, the constituent experience management company that leverages data and predictive modeling to advance nonprofits’ fundraising goals. Moore is also part of the Microsoft Tech for Social Impact (TSI) Digital Natives Partner Program, which accelerates the impact of cloud-first software providers through technical, AI-focused expertise and go-to-market support.
Moore developed a novel constituent identity solution and pipeline for CHOP using Microsoft Azure. “With Moore, we have built something that the industry hasn’t seen before—a data-powered, constituent-first marketing operation that links people and causes,” Thompson says. “Microsoft, specifically the Azure platform, allows us to scale that. This technology ultimately drives empathy and human connection.”
Data disk is full
Hi,
My database disk is full. I want to extend the disk. Are there any recommendations to follow before extending?
regards
New Nonprofit Community Goals for FY25
We launched the Nonprofit Community at the Summit this year as a new initiative for nonprofit listening, storytelling, and peer-to-peer community building. We are grateful for the support from our partners. Thank you!
Now we’d like to share important changes to how we are evolving the Nonprofit Community in FY25:
The Tech Community platform will be re-focused as a platform for post-sales tech Q&A with nonprofit customers, providing direct tech solutions and discussion spaces for nonprofits.
The nonprofitcommunity.microsoft.com address and aka.ms/nonprofitcommunity will be redirected directly to Tech Community.
We encourage you to continue to engage in the community and to be available to ask/answer questions constructively as that will help build a useful community.
As part of this realignment, the blog and events calendar on Nonprofit Community will sunset, effective July (existing articles and posts will remain live).
LinkedIn will become our new focus for announcements, stories, and nonprofit community building.
Stay up to date with the Nonprofit Community
Issues with MS Teams Connectors
Hi,
We recently started having issues adding new connectors to MS Teams channels for message posting. It was working fine a few weeks ago, when we were able to search for connectors and add them to channels. Connectors can no longer be found in the lists shown in the attached image. We tried to access the connector developer portal at https://outlook.office.com/connectors/publish but the page is down. This has prevented us from testing some of the recent changes we made to our integration.
Also, we are now having issues with messages getting to our MS Teams walls in production. No changes have been made recently to our production Bot connector, but it is now throwing 403 errors when posting messages.
Kindly provide any details on what we need to do, as our MS Teams connector integration now seems to be broken and is affecting our clients.
How to effectively use EXCEL when tracking Credit Card payments.
I would like to be able to track the use of my credit card for specific entities.
Can someone help me set it up? I am not that familiar with Excel.
How to Implement OAuth for a Bot-Based Message Extension App in Microsoft Teams for Graph API?
I have created a bot-based message extension app using the Teams Toolkit and need to call the Microsoft Graph API, which requires implementing OAuth. So far, I have created the app in the Teams Developer Portal, registered the app in Azure App registration, and registered the bot in the Bot Framework Developer Portal (dev.botframework.com). However, I am unclear about the OAuth flow and the specific configurations required.
Can someone provide a detailed guide on how the OAuth flow works for a bot-based message extension in Microsoft Teams, the specific configurations needed in the Azure app registration, how to configure permissions and consent for accessing the Microsoft Graph API, and any additional settings required in the Teams Developer Portal or Bot Framework Developer Portal?
Any guidance, code examples, or references to detailed documentation would be highly beneficial.
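As one small illustration of a piece of that puzzle, here is a hedged Python sketch of the OAuth2 client-credentials token request that a bot's backend can make against Microsoft Entra ID for application-permission Graph calls. The tenant, client ID, and secret are placeholders, and the full bot SSO / OAuth connection setup in the Bot Framework involves considerably more than this single request:

```python
import urllib.parse

def build_token_request(tenant_id: str, client_id: str, client_secret: str):
    """Return (url, form-encoded body) for the OAuth2 client-credentials grant.

    The /.default scope requests whatever application permissions were
    granted (with admin consent) to the app registration.
    """
    url = f"https://login.microsoftonline.com/{tenant_id}/oauth2/v2.0/token"
    body = urllib.parse.urlencode({
        "grant_type": "client_credentials",
        "client_id": client_id,
        "client_secret": client_secret,
        "scope": "https://graph.microsoft.com/.default",
    })
    return url, body

url, body = build_token_request("<tenant-id>", "<app-id>", "<secret>")
print(url)
```

The body is POSTed with Content-Type application/x-www-form-urlencoded, and the JSON response contains the `access_token` used as a Bearer token on Graph requests. Delegated-permission scenarios (acting as the signed-in Teams user) instead use the bot's OAuth connection and token service rather than this grant.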
Part of Chart Cut off
I have created a chart in Excel, but for some reason the first data point is cut off. It seems like the Y axis is overlapping the plot area, but adjusting the width of the Y axis does not fix the issue. Is there some way to offset the plot area of the chart further to the right?
Configuring a Disaster Recovery Solution for Azure Service Bus with Basic Tier
Introduction
Disaster recovery (DR) is crucial for ensuring business continuity and minimizing downtime. While the Azure Service Bus Basic tier doesn’t support advanced Geo-disaster recovery (Geo-DR) or Geo-Replication (Public Preview) features like the Premium tier does, you can still implement a custom DR strategy. This guide will walk you through setting up a disaster recovery solution for Azure Service Bus using the Basic tier.
Prerequisites
Before starting, make sure you have:
An Azure subscription.
Two Azure Service Bus namespaces (one primary and one secondary) in different regions.
Access to the Azure portal.
Familiarity with Azure CLI or PowerShell for automation purposes.
Step-by-Step Guide
Step 1: Create Primary and Secondary Namespaces
Create the Primary Namespace:
1. Go to the Azure portal.
2. Search for “Service Bus” and select “Create Service Bus namespace”.
3. Enter a name for the namespace (e.g., primary-ns-basic), choose the Basic tier, and select the primary region.
4. Click “Review + create” and then “Create”.
Create the Secondary Namespace:
Repeat the steps to create a secondary namespace in a different region (e.g., secondary-ns-basic).
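If you prefer scripting, both namespaces can also be provisioned with the Azure CLI mentioned in the prerequisites. A minimal sketch follows; the resource group name (sb-dr-rg), regions, and queue name (orders) are illustrative placeholders, so substitute your own values:

```shell
# Create a resource group and the primary Basic-tier namespace
az group create --name sb-dr-rg --location eastus
az servicebus namespace create \
  --resource-group sb-dr-rg \
  --name primary-ns-basic \
  --location eastus \
  --sku Basic

# Create the secondary namespace in a different region
az servicebus namespace create \
  --resource-group sb-dr-rg \
  --name secondary-ns-basic \
  --location westus \
  --sku Basic

# Create the same queue in both namespaces so messages can be mirrored
az servicebus queue create --resource-group sb-dr-rg \
  --namespace-name primary-ns-basic --name orders
az servicebus queue create --resource-group sb-dr-rg \
  --namespace-name secondary-ns-basic --name orders
```

Keeping the queue names identical in both namespaces simplifies the synchronisation and failover steps below, since applications only need to swap connection strings.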
Step 2: Synchronise Messages Between Namespaces
Since the Basic tier does not support Geo-DR, you’ll need to manually synchronise messages between the primary and secondary namespaces. This can be achieved through custom code or third-party tools.
Implement Message Synchronisation:
- Create an application that listens for messages on the primary namespace and republishes them to the secondary namespace.
- Use Azure Functions or a similar service to trigger this application whenever a new message arrives.
- Ensure the application handles potential issues such as message duplication and ordering.
Sample synchronisation code (Azure Functions with C#):
using System;
using System.Threading.Tasks;
using Microsoft.Azure.ServiceBus;
using Microsoft.Azure.WebJobs;
using Microsoft.Extensions.Logging;

public static class MessageSynchroniser
{
    // Replace the placeholders below with your own values. The queue name
    // must be a const because it is used in an attribute argument.
    private const string QueueName = "<QueueName>";
    private static readonly string secondaryConnectionString = "<SecondaryNamespaceConnectionString>";

    // Reuse a single client across invocations instead of opening and
    // closing a connection for every message.
    private static readonly IQueueClient secondaryQueueClient =
        new QueueClient(secondaryConnectionString, QueueName);

    // "PrimaryConnection" is the name of an app setting that holds the
    // primary namespace connection string.
    [FunctionName("MessageSynchroniser")]
    public static async Task Run(
        [ServiceBusTrigger(QueueName, Connection = "PrimaryConnection")] Message message,
        ILogger log)
    {
        try
        {
            // Message.Body is already a byte array, so it can be copied directly.
            var secondaryMessage = new Message(message.Body)
            {
                ContentType = message.ContentType,
                Label = message.Label,
                MessageId = message.MessageId,
                CorrelationId = message.CorrelationId
            };

            // UserProperties is read-only, so copy its entries individually.
            foreach (var property in message.UserProperties)
            {
                secondaryMessage.UserProperties.Add(property.Key, property.Value);
            }

            await secondaryQueueClient.SendAsync(secondaryMessage);
            log.LogInformation($"Message synchronised to secondary namespace: {message.MessageId}");
        }
        catch (Exception ex)
        {
            log.LogError($"Error synchronising message: {ex.Message}");
        }
    }
}
Step 3: Failover Procedure
In the event of a disaster, you will need to manually fail over to the secondary namespace.
Update Connection Strings:
- Modify your application configuration to point to the secondary namespace’s connection string.
- Restart your applications to ensure they connect to the secondary namespace.
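If your consumers run in Azure, the connection-string swap can be scripted with the Azure CLI rather than edited by hand. A minimal sketch follows; the resource group (sb-dr-rg), Function App name (my-consumer-app), and app setting name (ServiceBusConnection) are illustrative assumptions, so adjust them to match your environment:

```shell
# Fetch the secondary namespace connection string
SECONDARY_CONN=$(az servicebus namespace authorization-rule keys list \
  --resource-group sb-dr-rg \
  --namespace-name secondary-ns-basic \
  --name RootManageSharedAccessKey \
  --query primaryConnectionString -o tsv)

# Point the app at the secondary namespace and restart it
az functionapp config appsettings set \
  --resource-group sb-dr-rg \
  --name my-consumer-app \
  --settings "ServiceBusConnection=$SECONDARY_CONN"
az functionapp restart --resource-group sb-dr-rg --name my-consumer-app
```

Scripting the swap keeps the failover repeatable and documentable; the same script with the primary namespace substituted also serves as the failback step.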
Communicate the Change:
- Notify your team and stakeholders about the failover.
- Monitor the secondary namespace to ensure it is handling the load appropriately.
Step 4: Failback to Primary Namespace
Once the primary region is operational again, you can switch back to the primary namespace.
Resynchronise Messages:
- Ensure that any messages in the secondary namespace are synchronised back to the primary namespace.
- Use the same message synchronisation approach as before, but in reverse.
Update Connection Strings:
- Change your application configuration back to the primary namespace’s connection string.
- Restart your applications to point back to the primary namespace.
Best Practices
- Regular testing: Periodically test your disaster recovery plan to ensure it works as expected.
- Automation: Automate as much of the DR process as possible to minimise downtime and human error.
- Monitoring: Set up monitoring and alerts for both primary and secondary namespaces to detect issues early.
- Documentation: Keep detailed documentation of your DR processes and ensure your team is familiar with them.
Conclusion
While the Azure Service Bus Basic tier lacks built-in Geo-DR capabilities, you can still create a robust disaster recovery solution through custom synchronization and failover procedures. By following the steps outlined in this guide, you can ensure your messaging infrastructure is resilient and prepared for any disruptions. Regular testing and monitoring will help maintain the effectiveness of your DR strategy.
Feel free to reach out if you have any questions or need further assistance. Happy configuring!
— Santosh Patkar
Azure Maps Route Matrix
I’m trying to migrate items from Bing Maps to Azure Maps. I can get the latitude/longitude from an address with the following URL:
https://atlas.microsoft.com/geocode?api-version=2023-06-01&&addressLine=15127%20NE%2024th%20Street%20Redmond%20WA%2098052&subscription-key={subscription-key}
However, when I try to get a synchronous route matrix, I get an HTTP 405 error stating the page isn’t working.
https://atlas.microsoft.com/route/matrix/sync/json?api-version=1.0&subscription-key={subscription-key}
What’s the proper way to get the route matrix through a GET and return results in JSON? Can any of this be done with Azure.ResourceManager.Maps, and is there a good walkthrough discussing this? I haven’t found one yet.
Pull a report of Integrated Apps in Office365/Microsoft365
Hi,
I know that I can navigate into the admin center and look at the integrated apps that are available in my tenant. However, I want to know whether there is a PowerShell way to generate a report of these integrated apps.