Month: July 2024
General Availability: Customer Key Onboarding Service
We are thrilled to announce the General Availability of the Customer Key Onboarding Automation Service, a game-changer for organizations in highly regulated industries. This feature has been designed to streamline and simplify your onboarding process for Customer Key, delivering significant time savings and efficiency improvements.
Key Benefits:
Faster Onboarding: The average onboarding process duration has been reduced from 1.5 weeks to just one hour. You can now seamlessly register your subscriptions, automatically verify the configuration of your Azure Key Vault and subscription resources, and onboard your tenant without needing email communication with Microsoft.
Enhanced Feedback: The service now provides detailed configurations needed for each resource. Intuitive feedback includes specific error messages and guidance on what needs to be fixed, making the process even smoother.
Unsupported Scenarios:
Government tenants
Tenants using managed HSMs
These scenarios will still require manual onboarding; however, we plan to expand the service’s capabilities to include them in the future.
For more information on how to use the Customer Key Onboarding Automation Service, please visit Onboard using the Customer Key Onboarding Service.
We are excited about the improvements this service brings and look forward to continuing to enhance your experience with Microsoft 365.
Sincerely,
The M365 Data-at-Rest Encryption Team
Children’s Hospital of Philadelphia transforms fundraising with Moore
Experts at Children’s Hospital of Philadelphia (CHOP) have delivered many firsts in pediatrics—from the first bilateral transplant to the first fetal heart surgery, and the breakthroughs for children continue to happen every day. They developed a new tool to better study genetic variants linked to childhood cancer and other diseases. And they are advancing an in-utero cure for sickle cell disease, which affects one in every 375 African Americans.
“We provide some of the world’s leading pediatric care and research, pioneering approaches that help kids grow up healthier,” says Jon Thompson, Associate Vice President of Philanthropic Strategy and Technology at CHOP. As the nation’s first pediatric hospital, CHOP serves patients from around the world, consults on the most difficult cases at other hospitals, and invents life-saving strategies used across the globe.
To fund this critical work, CHOP sought to maximize its fundraising through deeper constituent relationships. The children’s hospital partnered with Moore, the constituent experience management company that leverages data and predictive modeling to advance nonprofits’ fundraising goals. Moore is also part of the Microsoft Tech for Social Impact (TSI) Digital Natives Partner Program, which accelerates the impact of cloud-first software providers through technical, AI-focused expertise and go-to-market support.
Moore developed a novel constituent identity solution and pipeline for CHOP using Microsoft Azure. “With Moore, we have built something that the industry hasn’t seen before—a data-powered, constituent-first marketing operation that links people and causes,” Thompson says. “Microsoft, specifically the Azure platform, allows us to scale that. This technology ultimately drives empathy and human connection.”
Data disk is full
Hi
My database disk is full and I want to extend the disk. Is there any recommendation before extending?
Regards
New Nonprofit Community Goals for FY25
We launched the Nonprofit Community at the Summit this year as a new initiative for nonprofit listening, storytelling, and peer-to-peer community building. We are grateful for the support from our partners. Thank you!
Now we’d like to share some important changes as we evolve the Nonprofit Community in FY25:
The Tech Community platform will be re-focused as a platform for post-sales tech Q&A with nonprofit customers, providing direct tech solutions and discussion spaces for nonprofits.
The nonprofitcommunity.microsoft.com address and aka.ms/nonprofitcommunity will redirect to Tech Community.
We encourage you to continue engaging in the community and to ask and answer questions constructively, as that will help build a useful community.
As part of this realignment, the blog and events calendar on Nonprofit Community will sunset, effective July (existing articles and posts will remain live).
LinkedIn will become our new focus for announcements, stories, and nonprofit community building.
Stay up to date with the Nonprofit Community
Issues with MS Teams Connectors
Hi,
We recently started having issues adding new connectors to MS Teams channels for message posting. It was working a few weeks ago, as we were able to search for connectors and add them to channels. Connectors can no longer be found in the lists shown in the attached image. We tried to access the connector developer portal at https://outlook.office.com/connectors/publish, but the page is down. This has prevented us from testing some of the recent changes we made to our integration.
Also, we are now having issues with messages getting to our MS Teams walls in production. No changes have been made recently to our production Bot connector, but it is now throwing 403 errors when posting messages.
Kindly provide us with any details on what we need to do, as it now seems our MS Teams connector integration is broken and affecting our clients.
With a ribbon plot, how to make the ribbons go along each matrix row instead of each column?
In the ribbon command, the ribbons run along each column of the matrix. The Y vector must be of length equal to the number of rows in the matrix. How do I get the ribbons to run along each row of the matrix? I tried many combinations of flipud, fliplr, transpose, etc., and either the axis is going in the wrong direction, or the labels are wrong.
Z=[0,1,2,2; 1,2,3,3; 2,3,4,4; 3,4,5,5]
Y=[0;1;2;3]
ribbon(Y,Z,0.1);
Here the ribbons run in the y direction, but I want them to run in the x direction. Nothing else, e.g., the axis numbering, should change.
ribbon plot orientation MATLAB Answers — New Questions
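One approach worth trying (a sketch, not verified against every MATLAB release): plot the transposed matrix so the rows become the ribbons, then swap each ribbon surface's XData and YData so the ribbons run along x while the axis numbering stays put:

```matlab
Z = [0,1,2,2; 1,2,3,3; 2,3,4,4; 3,4,5,5];
Y = [0;1;2;3];
h = ribbon(Y, Z.', 0.1);    % rows of Z now become the ribbons
for k = 1:numel(h)          % swap coordinates so the ribbons run along x
    tmp = h(k).XData;
    h(k).XData = h(k).YData;
    h(k).YData = tmp;
end
```

Because only the surfaces' coordinate data is exchanged, the axis directions and tick labels are left alone; check the result against a small known matrix before relying on it.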
Loading satellite TLE data into MATLAB using satellite function
Using TLE data from an online database, MATLAB throws the error "The specified initial conditions will cause the orbit of ‘STARLINK-2438’ to intersect the Earth’s surface." when using this function:
clc
clearvars
close all
format long
G = 3.986004418e14;
% Create a satellite scenario and add ground stations from latitudes and longitudes.
startTime = datetime(2024,7,18,13,00,41);
stopTime = startTime + days(1);
sampleTime = 60*1;
sc = satelliteScenario(startTime,stopTime,sampleTime);
sat = satellite(sc,'test_tle.txt')
The TLE data in "test_tle.txt" is as follows:
STARLINK-2438
1 48103U 21027M 24199.52200852 .27610719 12343-4 29456-2 0 9995
2 48103 53.0228 234.3624 0004658 334.7729 174.1970 16.36724891183121
When manually importing the values into the function, I don't receive the error:
% Mean Motion Line2 Field 8
semiMajorAxis = (G)^(1/3)/((2*pi*(16.36724891183121)/86400)^(2/3)); % ref : https://space.stackexchange.com/questions/18289/how-to-get-semi-major-axis-from-tle
% Eccentricity Line2 Field 5
eccentricity = 0.0004658;
% Inclination Line2 Field 3
inclination = 53.0228;
% Right Ascension of the ascending node Line2 Field 4
rightAscensionOfAscendingNode = 234.3624;
% Argument of perigee Line2 Field 6
argumentOfPeriapsis = 334.7729;
% Mean Anomaly Line2 Field 7
trueAnomaly = 174.1970;
sat = satellite(sc,semiMajorAxis,eccentricity,inclination, …
rightAscensionOfAscendingNode,argumentOfPeriapsis,trueAnomaly);
satellite, tle MATLAB Answers — New Questions
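The unit handling in the semi-major-axis line above is easy to get wrong, so it can be checked numerically (a standalone sketch; mu and the revs/day value are taken from the question's TLE):

```python
import math

mu = 3.986004418e14               # Earth's gravitational parameter, m^3/s^2 (named G in the question)
revs_per_day = 16.36724891183121  # TLE Line 2, Field 8 (mean motion)

# Convert mean motion to rad/s, then apply Kepler's third law: a = (mu / n^2)^(1/3)
n = 2 * math.pi * revs_per_day / 86400
a = (mu / n ** 2) ** (1 / 3)

print(a)  # roughly 6.55e6 m, i.e. only ~180 km above Earth's mean radius
```

This is algebraically the same as the question's `mu^(1/3) / n^(2/3)` form, and the low resulting altitude is consistent with a decaying orbit.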
How store evolution of x over the iterations of lsqnonlin?
Hi,
I’m solving a nonlinear system of equations with lsqnonlin and want to save the evolution of values of X.
The output of the system is defined by a function file 'error.m', which takes X with size 4×1 and returns a vector of residuals with size 8×1.
I run a script with the structure below that sets some fixed parameters for error.m, then calls the lsqnonlin solver and error.m for n cases.
clear all,clc,close all
load measure;
%initialization: set some constant inputs for ‘error.m’ and options for the
%solver
% solve nCases
for i=1:nCases
inputs.measured=measure(i);
X=lsqnonlin(@(x)error(inputs,x),x0,lb,ub,options)
save X X;
end
I want to store the values of X over the iterations, not only the solution. There are these examples in Optimization Solver Output Functions, but they use nested functions and I don’t know how to adapt them to my script structure.
function stop = myoutput(x,optimvalues,state);
stop = false;
if isequal(state,'iter')
history = [history; x];
end
end
How will the output function ‘myoutput’ access the variable X inside the lsqnonlin?
I've tried writing myoutput at the end of the main script, predefining history = [] inside the for loop i=1:nCases, and setting options = optimset('OutputFcn', @myoutput);
However, the function ‘myoutput’ doesn’t find the history variable to append. Why is that happening?
lsqnonlin, output, iteration, optimization MATLAB Answers — New Questions
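The nested-function examples can be adapted by wrapping the solver call in a function, so that history lives in an enclosing workspace the output function shares (a sketch; function names are placeholders):

```matlab
function [X, history] = solveWithHistory(inputs, x0, lb, ub)
    history = [];   % shared with the nested output function below
    options = optimset('OutputFcn', @myoutput);
    X = lsqnonlin(@(x) error(inputs, x), x0, lb, ub, options);

    function stop = myoutput(x, optimvalues, state)
        stop = false;
        if isequal(state, 'iter')
            history = [history; x(:).'];   % one row per iteration
        end
    end
end
```

Because myoutput is nested inside solveWithHistory, it can read and write history directly; a function placed at the end of a plain script does not share the script's workspace, which is why the original attempt fails. (Separately, naming the residual file error.m shadows MATLAB's built-in error function, so renaming it is advisable.)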
How to verify/debug LDAP authentication?
I have enabled LDAP authentication for my MATLAB Web App Server. The server can start successfully but I couldn’t log in. What is wrong? MATLAB Answers — New Questions
How to effectively use EXCEL when tracking Credit Card payments.
I would like to be able to track the use of my credit card for specific entities.
Can someone help me set it up? I am not that familiar with Excel.
How to Implement OAuth for a Bot-Based Message Extension App in Microsoft Teams for Graph API?
I have created a bot-based message extension app using the Teams Toolkit and need to call the Microsoft Graph API, which requires OAuth implementation. So far, I have created the app in the Teams Developer Portal, registered the app in Azure App registration, and registered the bot in the Bot Framework Developer Portal (dev.botframework.com). However, I am unclear about the OAuth flow and the specific configurations required.
Can someone provide a detailed guide on how the OAuth flow works for a bot-based message extension in Microsoft Teams, the specific configurations needed in the Azure app registration, how to configure permissions and consent for accessing the Microsoft Graph API, and any additional settings required in the Teams Developer Portal or Bot Framework Developer Portal?
Any guidance, code examples, or references to detailed documentation would be highly beneficial.
Part of Chart Cut off
I have created a chart in Excel but for some reason the first data point is cut off. It seems like the Y axis is overlapping the plot area but adjusting the width of the y axis does not fix the issue. Is there some way to offset the Plot area of the chart further to the right?
Configuring a Disaster Recovery Solution for Azure Service Bus with Basic Tier
Introduction
Disaster recovery (DR) is crucial for ensuring business continuity and minimising downtime. While the Azure Service Bus Basic tier doesn’t support the advanced Geo-disaster recovery (Geo-DR) or Geo-Replication (Public Preview) features of the Premium tier, you can still implement a custom DR strategy. This guide will walk you through setting up a disaster recovery solution for Azure Service Bus using the Basic tier.
Prerequisites
Before starting, make sure you have:
An Azure subscription.
Two Azure Service Bus namespaces (one primary and one secondary) in different regions.
Access to the Azure portal.
Familiarity with Azure CLI or PowerShell for automation purposes.
Step-by-Step Guide
Step 1: Create Primary and Secondary Namespaces
Create the Primary Namespace:
Go to the Azure portal.
Search for “Service Bus” and select “Create Service Bus namespace”.
Enter a name for the namespace (e.g., primary-ns-basic), choose the Basic tier, and select the primary region.
Click “Review + create” and then “Create”.
Create the Secondary Namespace:
Repeat the steps to create a secondary namespace in a different region (e.g., secondary-ns-basic).
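If you prefer scripting over the portal, the same two namespaces can be created with the Azure CLI (a sketch; the resource-group and region names are examples):

```azurecli
az servicebus namespace create --resource-group my-rg --name primary-ns-basic --location eastus --sku Basic
az servicebus namespace create --resource-group my-dr-rg --name secondary-ns-basic --location westus --sku Basic
```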
Step 2: Synchronise Messages Between Namespaces
Since the Basic tier does not support Geo-DR, you’ll need to manually synchronise messages between the primary and secondary namespaces. This can be achieved through custom code or third-party tools.
Implement Message Synchronisation:
Create an application that listens to messages on the primary namespace and republishes them to the secondary namespace.
Use Azure Functions or a similar service to trigger this application whenever a new message arrives.
Ensure the application handles any potential issues, such as message duplication or ordering.
Sample Synchronization Code (Azure Functions with C#):
using System;
using System.Text;
using System.Threading.Tasks;
using Microsoft.Azure.ServiceBus;
using Microsoft.Azure.WebJobs;
using Microsoft.Extensions.Logging;
public static class MessageSynchroniser
{
    // ServiceBusTrigger attribute arguments must be compile-time constants.
    private const string QueueName = "<QueueName>";
    // "primaryConnectionString" in the trigger below refers to an app setting
    // of that name; the secondary connection string is read directly here.
    private static string secondaryConnectionString = "<SecondaryNamespaceConnectionString>";
    [FunctionName("MessageSynchroniser")]
    public static async Task Run([ServiceBusTrigger(QueueName, Connection = "primaryConnectionString")] Message message, ILogger log)
    {
        var secondaryQueueClient = new QueueClient(secondaryConnectionString, QueueName);
        try
        {
            // Message.Body is already a byte array, so it can be copied directly.
            var secondaryMessage = new Message(message.Body)
            {
                ContentType = message.ContentType,
                Label = message.Label,
                MessageId = message.MessageId,
                CorrelationId = message.CorrelationId
            };
            // UserProperties is read-only, so entries are copied one by one.
            foreach (var property in message.UserProperties)
            {
                secondaryMessage.UserProperties.Add(property.Key, property.Value);
            }
            await secondaryQueueClient.SendAsync(secondaryMessage);
            log.LogInformation($"Message synchronised to secondary namespace: {message.MessageId}");
        }
        catch (Exception ex)
        {
            log.LogError($"Error synchronising message: {ex.Message}");
        }
        finally
        {
            await secondaryQueueClient.CloseAsync();
        }
    }
}
Step 3: Failover Procedure
In the event of a disaster, you will need to fail over to the secondary namespace manually.
Update Connection Strings:
Modify your application configuration to point to the secondary namespace’s connection string.
Restart your applications to ensure they connect to the secondary namespace.
Communicate the Change:
Notify your team and stakeholders about the failover.
Monitor the secondary namespace to ensure it is handling the load appropriately.
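The connection-string switch at the heart of the failover is simple enough to capture in code. The sketch below (Python for brevity; the function name and flag are hypothetical, and a real implementation would live in your application's configuration layer) shows the selection logic:

```python
def active_connection_string(primary: str, secondary: str, failover: bool) -> str:
    """Return the Service Bus connection string the application should use.

    `failover` would typically come from configuration (an app setting or
    feature flag), so it can be flipped without redeploying the application.
    """
    return secondary if failover else primary


# Normal operation uses the primary namespace...
print(active_connection_string("Endpoint=sb://primary/...", "Endpoint=sb://secondary/...", False))
# ...and flipping the flag re-points the application at the secondary one.
print(active_connection_string("Endpoint=sb://primary/...", "Endpoint=sb://secondary/...", True))
```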
Step 4: Failback to Primary Namespace
Once the primary region is operational again, you can switch back to the primary namespace.
Resynchronise Messages:
Ensure that any messages in the secondary namespace are synchronised back to the primary namespace.
Use the same message synchronisation approach as before, but in reverse.
Update Connection Strings:
Change your application configuration back to the primary namespace’s connection string.
Restart your applications to point back to the primary namespace.
Best Practices
Regular Testing: Periodically test your disaster recovery plan to ensure it works as expected.
Automation: Automate as much of the DR process as possible to minimise downtime and human error.
Monitoring: Set up monitoring and alerts for both primary and secondary namespaces to detect issues early.
Documentation: Keep detailed documentation of your DR processes and ensure your team is familiar with them.
Conclusion
While the Azure Service Bus Basic tier lacks built-in Geo-DR capabilities, you can still create a robust disaster recovery solution through custom synchronization and failover procedures. By following the steps outlined in this guide, you can ensure your messaging infrastructure is resilient and prepared for any disruptions. Regular testing and monitoring will help maintain the effectiveness of your DR strategy.
Feel free to reach out if you have any questions or need further assistance. Happy configuring!
— Santosh Patkar
Azure Maps Route Matrix
I’m trying to migrate items from Bing Maps to Azure Maps. I can get the latitude/longitude from an address with the following URL:
https://atlas.microsoft.com/geocode?api-version=2023-06-01&addressLine=15127%20NE%2024th%20Street%20Redmond%20WA%2098052&subscription-key={subscription-key}
However, when I try to get a synchronous route matrix, I get an HTTP 405 error stating the page isn’t working:
https://atlas.microsoft.com/route/matrix/sync/json?api-version=1.0&subscription-key={subscription-key}
What’s the proper way to get the route matrix through a GET and return results in JSON? Can any of this be done with Azure.ResourceManager.Maps, and is there a good walkthrough discussing this? I haven’t found one yet.
Pull a report of Integrated Apps in Office365/Microsoft365
Hi,
I know that I can navigate into the admin center and look at the Integrated apps that are available in my tenant. However, I want to know whether there is a PowerShell way to generate a report of these integrated apps.
Renew RDS CAL Licenses for 120 Days
Important Note
Before proceeding, please be aware that this method is a temporary workaround and should not replace proper license management practices. Always ensure that you comply with Microsoft’s licensing terms and conditions.
Prerequisites
Administrator access to the RDS server.
A backup of the registry (to revert any changes if needed).
Familiarity with the Windows Registry Editor.
Step-by-Step Guide
Step 1: Backup the Registry
Press Win + R, type regedit, and press Enter to open the Registry Editor.
In the Registry Editor, click File and then Export.
Choose a location to save the backup and provide a name for the backup file.
Click Save to create the backup.
Step 2: Stop the Remote Desktop Licensing Service
Press Win + R, type services.msc, and press Enter to open the Services Manager.
Scroll down to find Remote Desktop Licensing or Remote Desktop Services.
Right-click the service and select Stop.
Step 3: Grant your ID access to the GracePeriod key, then delete the key in the Registry Editor:
Computer\HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\Terminal Server\RCM\GracePeriod
Step 4: Restart the Remote Desktop Licensing Service
Go back to the Services Manager.
Right-click Remote Desktop Licensing or Remote Desktop Services.
Select Start to restart the service.
Step 5: Reboot the Server
Save any open files and close all applications.
Reboot the server to apply the changes.
Step 6: Verify the Renewal
After the server restarts, open a command prompt as an administrator.
Type the following command and press Enter:
wmic /namespace:\\root\CIMV2\TerminalServices PATH Win32_TerminalServiceSetting WHERE (__CLASS != "") CALL GetGracePeriodDays
Conclusion
Renewing RDS CAL licenses by deleting registry entries is a temporary workaround that can extend your licensing period by 120 days. Remember, this is not a permanent fix, and you should plan to acquire proper licenses to ensure compliance with Microsoft’s licensing policies. Regularly check your licensing status and renew your CALs accordingly to avoid any disruptions in your Remote Desktop Services.
If you have any questions or need further assistance, please feel free to reach out.
Stay compliant and ensure your business operations continue smoothly!
— Santosh Patkar
New video and image experiences in Viva Engage will reach general availability in August 2024
A new way to view pictures and videos
With this new experience, a user can create a new post on their storyline, attach an image or video, and add text. After clicking Post, the user can choose the layout that best suits their intent and preview the layout.
[Above] A storyline post with a single video attached and some text. Click “Post” to choose the layout.
[Above] Select and preview the layout.
The default (pre-selected) layout is based on the length of the post:
When the text is 200 characters or less, the video or image will be above the post by default.
When the text is more than 200 characters, the text will appear above the video or image by default.
If there is no text in the post, the video or image will be displayed with the new experience, and the creator will not be prompted to choose a layout.
You can also choose to change the preselected layout.
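The default-layout rule above can be sketched as a small helper (hypothetical names, not Viva Engage code):

```python
# Sketch of the default-layout rule: media appears above the text for
# short posts, below it for longer ones; media-only posts skip the prompt.

def default_layout(text: str) -> str:
    """Return the pre-selected layout for a storyline post with media."""
    if not text:
        return "media-only"        # no layout prompt is shown
    if len(text) <= 200:
        return "media-above-text"  # 200 characters or less
    return "text-above-media"      # more than 200 characters
```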
[Above] Storyline post with a video featured above the text.
Once media is attached, the post offers the same functionality as before, meaning that users can react, reply, and share the post to other storylines or communities.
Immersive view
The new experience can also display the video or image in an immersive view, where people can enjoy and engage with the media full screen. To open the immersive view, simply click the image or the text of the post.
[Above] Immersive view is available for threads with images above the text.
NOTES FOR ADMINISTRATORS
The new experience is supported only for users with storyline enabled. Storyline is enabled by default but can be disabled or assigned to specific users in the Viva Engage admin center; see the admin documentation for details. This new media experience is only available for:
Storyline posts. We will consider adding support for communities in future releases.
Discussion posts. We will consider adding support for questions, polls and praise in future releases.
Posts with one image or video. We will consider adding support for multiple media items in future releases.
Posts with media attached. We will consider adding support for links in future releases.
As we move this feature from public preview to general availability, we will be removing the toggle to Enable stories from the Viva Engage Admin Center.
There’s more coming to Viva Engage storylines and communities soon, so stay tuned!
Microsoft Tech Community – Latest Blogs
optimizing neural network with NSGA-II
Hello,
I have a neural network model of my data, and I want to perform multiobjective optimization, for example with NSGA-II. How can I connect my ANN to NSGA-II?
I tried the following code:
[x_ga1,fval_ga1,~,gaoutput1] = gamultiobj(fun,nvar,A,b,Aeq,beq,lb,ub,opts_ga);
where "fun" is my ANN function, but it throws an error!
ann, nsga-ii, multiobjective optimization — MATLAB Answers, New Questions
A Solution To Understanding Microsoft Speak
Hello,
I am trying to figure out a solution to understanding Microsoft Speak. The acronyms being thrown around are really hard to follow.
Is there a guide that lists all of the acronyms Microsoft employees use, along with their definitions?
I might then develop an auto-complete script for Microsoft employees that replaces acronyms with the actual words.
I am not joking in this post; I really need a guide to help me.
Replace SharePoint sites with custom sites?
Is there a way to build websites inside of a Microsoft 365 environment? I want to replace SharePoint sites with custom-built websites, including document management. Any ideas?