Month: June 2024
Pharma Sales Trainer Enablement with Copilot – Copilot for Microsoft 365 Starter Series
Are you looking for a way to reduce the time and effort required to complete your tasks in Microsoft 365? Do you want to optimize your workflows and processes to achieve more with fewer resources? Do you want to see how Copilot for Microsoft 365 can help you solve real-world challenges frequently encountered in the pharmaceutical industry today?
In this recorded webinar, learn how Pharma Sales Trainers can leverage the power of Microsoft Copilot in their role to reduce repetitive tasks.
Resources:
Get started with Copilot for Microsoft 365 – Training | Microsoft Learn
Microsoft Copilot for Microsoft 365—Features and Plans | Microsoft 365
Copilot Lab (cloud.microsoft)
Learn about Copilot prompts – Microsoft Support
Data, Privacy, and Security for Microsoft Copilot for Microsoft 365 | Microsoft Learn
Microsoft Copilot for Microsoft 365 documentation | Microsoft Learn
Copilot for Microsoft 365 – Microsoft Adoption
To see the entire series, click here.
Thanks for visiting – Michael Gannotti LinkedIn
Microsoft Tech Community – Latest Blogs –Read More
Copilot Use with Clinical Trials Manager-Researcher – Copilot for Microsoft 365 Starter Series
Are you looking for a way to reduce the time and effort required to complete your tasks in Microsoft 365? Do you want to optimize your workflows and processes to achieve more with fewer resources? Do you want to see how Copilot for Microsoft 365 can help you solve real-world challenges frequently encountered in the pharmaceutical industry today?
In this recorded webinar, learn how Clinical Trials Managers and Researchers can leverage the power of Microsoft Copilot in their role to reduce repetitive tasks.
Resources:
Get started with Copilot for Microsoft 365 – Training | Microsoft Learn
Microsoft Copilot for Microsoft 365—Features and Plans | Microsoft 365
Copilot Lab (cloud.microsoft)
Learn about Copilot prompts – Microsoft Support
Data, Privacy, and Security for Microsoft Copilot for Microsoft 365 | Microsoft Learn
Microsoft Copilot for Microsoft 365 documentation | Microsoft Learn
Copilot for Microsoft 365 – Microsoft Adoption
To see the entire series, click here.
Thanks for visiting – Michael Gannotti LinkedIn
Enhancing Performance in Azure Container Apps
Azure Container Apps is a powerful platform for running serverless, containerized applications and microservices. As part of our ongoing commitment to improving performance, we’ve recently made significant changes that make the scaling and load balancing behavior of Azure Container Apps more intuitive and better aligned with customer expectations.
Hopefully, some of the insights from our experience in working with Envoy and KEDA will be helpful to you.
Load Balancing Algorithm Update
In the past, Azure Container Apps relied on the ring hash load balancing algorithm to distribute incoming requests across containers. Ring hash generates a hash from request properties to match each request with a stable upstream instance, which minimizes redistribution when instances are added or removed. As a result, however, some instances receive an uneven share of requests. This is especially apparent during load tests with a small number of clients, and it can lead to bottlenecks.
Switching to Round Robin
To address this issue, we transitioned Azure Container Apps to use the round robin load balancing algorithm when session affinity is not enabled for an app. These are some of the benefits you can expect to see:
Uniform Request Distribution: Round robin evenly distributes requests among containers, reducing the likelihood that one replica gets overloaded and helping utilize all resources effectively.
Improved Scalability: With a balanced request load, Azure Container Apps can scale more effectively.
Predictable Behavior: Developers can now rely on more consistent behavior across containers, simplifying troubleshooting and monitoring.
We’ve observed a significant improvement in overall system performance since implementing this change, and our customers can expect better resource utilization. When session affinity is enabled, Azure Container Apps still uses the ring hash algorithm so that sequential requests from the same client reach a consistent upstream instance.
Below is an example of an app running 20 instances handling requests from 1,000 clients. The graphs show how traffic is assigned under each approach. Both scenarios have session affinity disabled:
Figure 1. Azure Container Apps running with a ring hashing load balancing algorithm.
Figure 2. Azure Container Apps running with a round robin load balancing algorithm.
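The skew is easy to reproduce with a toy simulation (a sketch only: Envoy's real ring hash places many virtual nodes on a hash ring, but hashing each client straight to an instance shows the same effect):

```python
import hashlib
from collections import Counter

def ring_hash_assign(client_ids, num_instances):
    """Toy stand-in for ring hash: each client id hashes to one stable
    instance, so per-instance load depends on how the hashes fall."""
    counts = Counter()
    for cid in client_ids:
        h = int(hashlib.md5(str(cid).encode()).hexdigest(), 16)
        counts[h % num_instances] += 1
    return counts

def round_robin_assign(num_requests, num_instances):
    """Round robin: request i goes to instance i mod N, so load is even."""
    counts = Counter()
    for i in range(num_requests):
        counts[i % num_instances] += 1
    return counts

NUM_INSTANCES = 20      # 20 replicas, as in the figures above
clients = range(1000)   # 1,000 clients, one request each

rh = ring_hash_assign(clients, NUM_INSTANCES)
rr = round_robin_assign(1000, NUM_INSTANCES)

print("ring hash   min/max per instance:", min(rh.values()), max(rh.values()))
print("round robin min/max per instance:", min(rr.values()), max(rr.values()))
```

Round robin lands exactly 50 requests on each of the 20 instances, while the hashed assignment leaves some instances well above and others well below that average.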
Horizontal Pod Autoscale Thresholds
Azure Container Apps uses KEDA and Kubernetes’s Horizontal Pod Autoscaler (HPA) to handle scaling of replicas. Customers can set up custom scale rules to determine when their application will scale out. A common rule is the CPU utilization threshold. For example, with a threshold of 80%, the app scales out when the average CPU utilization across replicas for your container apps in an environment crosses 80%.
One challenge some Azure Container Apps customers faced was that apps did not scale as expected when these CPU utilization thresholds were met, because HPA has a built-in 10% tolerance by default. Due to this tolerance, an app set to scale at 80% CPU utilization would only scale once utilization crossed 88%.
Adjusting Tolerance Levels
To address this issue, we fine-tuned the HPA configuration and added an offset to the default 10% tolerance to make Azure Container Apps scale as customers expect. For the previous scenario, this means when a container app has a scale rule of 80% CPU utilization, it will scale when the average CPU utilization crosses the 80% threshold as expected instead of at 88%. This change ensures that Azure Container Apps responds more promptly to increased demand and scales out as expected.
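The tolerance check behind this behavior can be sketched as follows (simplified: the real HPA computes a ratio of current to desired metric value and ignores changes inside a tolerance band around 1.0):

```python
def hpa_should_scale(current_utilization, target, tolerance=0.10):
    """Simplified HPA trigger: react only when the current/target ratio
    leaves the [1 - tolerance, 1 + tolerance] band."""
    ratio = current_utilization / target
    return abs(ratio - 1.0) > tolerance

# With the default 10% tolerance, an 80% target ignores 81% utilization...
print(hpa_should_scale(81, 80))
# ...and only fires once utilization passes 88% (80% * 1.1)
print(hpa_should_scale(89, 80))
# With a tighter tolerance, 81% already triggers a scale-out
print(hpa_should_scale(81, 80, tolerance=0.01))
```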
Conclusion
The Azure Container Apps team is constantly investing in improving performance. By switching to round-robin load balancing and fine-tuning HPA thresholds, we’ve made Azure Container Apps more reliable, efficient, and responsive. Please let us know what you think of these changes and what other performance improvements we should make: Azure Container Apps GitHub.
Thank you for being part of our journey toward a more performant Azure Container Apps!
Connecting the 3 phase inverter output to the inport of the induction machine(squirrel cage) block
As you can see in the attached image, I am trying to connect the 3 phase output AC signal from the DC-AC inverter to the inport of the squirrel cage induction motor block. It seems that the signal should be converted to some form to be connected to the inport.
Any advice?
Thank you
Daryll Davis
induction machine, squirrel cage induction motor, 3 phase inverter, expandable 3 phase MATLAB Answers — New Questions
How to use rlocfind in root locus.
How do I use the rlocfind command to find the range of values of K (feedback gain) for which the closed-loop system is stable? For example, for the system below:
In the tutorial it is hinted that rlocfind is the correct approach:
Many thanks in advance,
root locus, rlocfind MATLAB Answers — New Questions
Finding Transfer Function from Step Response
A bit of context:
I’m in a student project at my university. We’re designing and building a mechanical ventilator for COVID-19 patients. I’m having some trouble with the control loop of the system.
The output of the control loop drives a proportional valve. The goal is to reach the desired pressure (setpoint) in less than 3 seconds. I’m using PID control, but it’s not tuned correctly. That’s why I decided to go back to basics and get the transfer function from a step response, to then simulate the different constants in MATLAB.
The following step response was achieved by opening the proportional valve fully and waiting until the setpoint was reached. There’s 1 bar of relative pressure in the system. Any pressure above 25 mbar flows out through a PEEP valve. The problem is that we can’t overshoot, because we cannot get rid of this excess pressure while the patient is inhaling.
If someone could help me calculate the transfer function from the following step response, I would be really grateful.
Thanks in advance.
control_loop, pid, transfer function, step response MATLAB Answers — New Questions
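If the measured curve looks roughly first-order (no overshoot), a classic hand method works: the steady-state value gives the gain and the 63.2% rise time gives the time constant. Below is a Python sketch on synthetic data; the K = 25 mbar and tau = 0.5 s values are made up for illustration, not taken from the actual ventilator:

```python
import math

def estimate_first_order(t, y, u_step=1.0):
    """Estimate gain K and time constant tau of G(s) = K / (tau*s + 1)
    from step-response samples: K is the final value over the step size,
    tau is the time at which y reaches 63.2% of its final value."""
    y_final = y[-1]
    K = y_final / u_step
    target = 0.632 * y_final
    tau = next(ti for ti, yi in zip(t, y) if yi >= target)
    return K, tau

# Synthetic step response of a known first-order system (K=25, tau=0.5 s)
t = [i * 0.01 for i in range(600)]
y = [25 * (1 - math.exp(-ti / 0.5)) for ti in t]

K, tau = estimate_first_order(t, y)
print(K, tau)
```

With real, noisy data, MATLAB's tfest (System Identification Toolbox) fits a transfer function directly from recorded input/output signals and is usually more robust than reading points off the plot.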
Defender 365 admin console – Disabling the “Connected to a custom indicator” and “Connected to an unsanctioned blocked app” alerts
Issue:
I want to know how I can disable the following two alerts:
Connected to a custom indicator
Connected to an unsanctioned blocked app
These alert types should be possible to enable or disable on demand, like the other alert types.
Why’s that :
Description of the workload: when we block (unsanction) an application through Defender for Cloud Apps, it automatically creates indicators in Defender XDR. When someone clicks or browses to a URL related to the application, the above alerts are triggered. When an indicator is created automatically this way, the “generate alert when the indicator is triggered” box is checked. We would like to automatically uncheck that box or disable the alerts described.
Is it possible to disable the custom alert in settings?
No.
Why ?
Explanation: you cannot suppress “custom detection” alerts. However, they are categorized as “Informational”, and you can suppress alerts by severity.
Solutions :
Note: If you want to customize which alerts are closed automatically, you can create a playbook for it. Or, see Option 2 for a simpler approach.
Option 1:
I found a quick workaround that is working well for me right now. There are different ways to do it; here is the solution:
Note: you need the relevant licenses and the Security Admin role for Microsoft Sentinel and the related Defender solutions.
Steps to Automate Alert Management
Create an NRT (Near-Real-Time) Rule in Sentinel:
Configure a detection rule that runs in near real time to detect the specific alerts you want to manage.
Create an Automation Rule:
Define an automation rule in Microsoft Sentinel that triggers when an alert matching your NRT rule is generated. You can also create an incident to group alerts if needed.
Trigger a Logic App Playbook:
Set this automation rule to run a Logic App playbook when the alerts are generated. This playbook can be configured to perform various actions on the alerts.
Configuring the Logic App Playbook
Retrieve Alerts:
Use an action in the playbook to call the Sentinel API and retrieve the details of the alerts triggered by the automation rule.
Change Alert Status:
Add an action in the playbook to update the status of the retrieved alerts to “Resolved”. This can be done using either the Microsoft Sentinel API or the Microsoft Defender for Endpoint (WindowsDefenderATP) API.
API Integration Options
Microsoft Sentinel API:
Use built-in Sentinel actions in Logic Apps to interact directly with alerts and incidents in Sentinel.
Microsoft Defender for Endpoint (WindowsDefenderATP) API:
You can also use this API to manage alerts. Refer to the documentation for details on the necessary API calls: Microsoft Defender for Endpoint API.
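For the “Change Alert Status” step, the underlying Defender for Endpoint call is a PATCH against the alerts endpoint. Below is a minimal Python sketch that only builds the request so the endpoint shape is visible; the alert ID and token are placeholders, and in a Logic App this is typically an HTTP action rather than code:

```python
import json
import urllib.request

def build_resolve_request(alert_id, token):
    """Build (but do not send) the Defender for Endpoint 'update alert'
    request that the playbook's status-change step performs: a PATCH to
    /api/alerts/{id} setting the status to Resolved. Acquiring the bearer
    token (via an Entra ID app registration) is out of scope here."""
    url = f"https://api.securitycenter.microsoft.com/api/alerts/{alert_id}"
    body = json.dumps({"status": "Resolved"}).encode()
    return urllib.request.Request(
        url,
        data=body,
        method="PATCH",
        headers={
            "Authorization": f"Bearer {token}",
            "Content-Type": "application/json",
        },
    )

# alert_id and token are placeholders, not real values
req = build_resolve_request("alert-id-placeholder", "<token>")
print(req.get_method(), req.full_url)
```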
Summary of Actions
Automate Closing Alerts: Create an automated playbook in Sentinel to automatically close alerts.
Bidirectional Management: With SIEM integration in the Defender portal, you can manage incidents and alerts in both directions (from Sentinel to Defender and vice versa).
Option 2:
Note: This removes all types of informational alerts. You can still filter by source type to reduce irrelevant items.
In the Defender XDR settings -> Alert tuning ->
Regards
Error code: STATUS_BREAKPOINT when accessing Amazon.com
I’m hoping someone will be able to get this issue resolved for me.
I recently switched from Chrome to Edge. Love Edge. My account is a company account.
When I want to go to www.amazon.com, the website pops up, and a second or two later, I receive an error.
This page is having a problem.
Try coming back to it later.
You could also: Open a new tab or refresh this page
Error code: STATUS_BREAKPOINT
I have cleared the cache. Uninstalled Edge. I reinstalled Edge and am still receiving this error.
I removed extensions and added them back, but I still get errors.
I’m wondering if it is a setting that I am not aware of or do not fully understand.
Congratulations and felicitaciones to the first Innovation Challenge winners!
The judges’ decisions are in for our first Innovation Challenge hackathon! This diversity and inclusion program featured training events and Azure certifications, and culminated in an invite-only hackathon with 113 developers competing for over $27,000 in prizes. There were many very strong projects building solutions to real-world AI use cases. Here are the projects from the top teams.
First prize
Dandelion Hub, a data management platform designed to foster collaboration and knowledge sharing among government agencies
Second place
TerraGuard, an AI-driven web application designed to empower users to predict and visualize sinkhole hazards
Connect+, helping women in technology to network, learn, and grow together
Third place
BrightPath, empowering the visually impaired with seamless, multilingual assistance
GovQuery Search, designed to use public data sources to ingest and process information and provide an insightful knowledge management system for federal and state agencies
GhostWise, eliminating the ghosting of job candidates through efficient recruitment, providing warm, transparent, and effective communication between companies and candidates
A big thanks to the organizations who made this experience possible: BITE-CON, Black Women In Artificial Intelligence, Blacks in Technology, Código Facilito, GenSpark, and Women in Cloud.
How are the starting points for surrogate optimization chosen?
I have a question regarding the "starting points" or the random points chosen to construct the surrogate. Is it possible to access the function that determines how the surrogate selects its random points?
I am testing different algorithms in MATLAB for a global optimization problem, and I always perform 50 different trials of 100 iterations each. The surrogate algorithm consistently produces very similar starting points and performs exceptionally well with them. The documentation describes this process as a quasirandom, low-discrepancy sequence (https://en.wikipedia.org/wiki/Low-discrepancy_sequence), but in my opinion, this does not explain the similarity of the points shown on the cost function graph, especially since no random seed is set.
The graph displays the mean, minimum, and maximum of the cost function for the surrogate algorithm with random starts (over the initial X condition in the options of the surrogate) called Surrogate RNG and the surrogate algorithm. It is interesting to note how close the minimum and maximum of the cost function of the surrogate are.
Is it possible to have more options for surrogate optimization (such as those available in https://github.com/Piiloblondie/MATSuMoTo)? Additionally, is it possible to access the function for determining the starting points so that I can try it with other algorithms?
What could be the reason for the starting points to be so close together even when no random seed is chosen?
For additional context, I use 8-13 different variables, all of which are doubles. I also use one linear constraint that can be implemented in the function.
surrogate, global optimization toolbox MATLAB Answers — New Questions
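The similarity across trials is consistent with how low-discrepancy sequences behave: they are deterministic constructions, not seeded random draws, so repeated runs produce the same well-spread points. The Halton sequence below is one common construction, shown in Python for illustration; whether surrogateopt uses exactly this sequence is not stated in the documentation:

```python
def halton(index, base):
    """Radical-inverse (van der Corput) value of `index` in `base`;
    one coordinate of a Halton low-discrepancy point."""
    result, f = 0.0, 1.0
    while index > 0:
        f /= base
        result += f * (index % base)
        index //= base
    return result

def halton_points(n, bases=(2, 3)):
    """First n points of a 2-D Halton sequence. Deterministic: calling
    this twice yields identical 'quasi-random' start points, with or
    without setting a random seed."""
    return [tuple(halton(i, b) for b in bases) for i in range(1, n + 1)]

print(halton_points(4))
```

This determinism would also explain why the min and max cost curves hug each other: every trial starts the surrogate from the same design unless you perturb the initial points yourself (e.g. via the InitialPoints option of surrogateopt).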
Parsing and editing txt file line by line
Hello,
How can I automatically transform a txt file in this form, removing the strings “start” and “end”:
Onset,Annotation
+234.3428079,start
+244.1317829,end
+255.1007751,start
+263.0000000,end
to this form:
+234.3428079,+244.1317829
+255.1007751,+263.0000000
Regards
parsing txt files MATLAB Answers — New Questions
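Since the rows alternate start/end, one approach is to pair each start time with the end time that follows it. The sketch below is Python for illustration; the same logic ports directly to MATLAB (e.g. read the lines with readlines or readtable, then write the paired rows with writematrix):

```python
def pair_onsets(lines):
    """Collapse alternating 'time,start' / 'time,end' rows into
    'start_time,end_time' rows; the header line is skipped."""
    starts, paired = [], []
    for line in lines:
        line = line.strip()
        if not line or line.startswith("Onset"):
            continue
        time, label = line.split(",")
        if label == "start":
            starts.append(time)
        elif label == "end" and starts:
            paired.append(f"{starts.pop(0)},{time}")
    return paired

raw = """Onset,Annotation
+234.3428079,start
+244.1317829,end
+255.1007751,start
+263.0000000,end"""

for row in pair_onsets(raw.splitlines()):
    print(row)
```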
Need ability to set separate sounds for Outlook web inbox alerts and Calendar (web version) alerts.
I use the web versions of both Outlook and Calendar and keep them open as pinned tabs throughout the workday. It would be ideal to have the option for separate notification sounds so I know which alert type I’m hearing. It’s annoying to think I’m getting an email only to find it is a calendar alert for a blocked chunk of time. I’d prefer not to use the desktop versions. If anyone knows of a way, or if there is a better place to post this suggestion, please advise – thanks!
Footnotes in Chicago Style
I am writing a paper whose publisher demands that footnotes be in full Chicago style, for example:
Edward Gibbon, The Decline and Fall of the Roman Empire (Chicago, IL: Encyclopaedia Britannica, Inc. 1952), 16.90-91.
However, when I insert a citation in a footnote from an item in the Manage Sources tool, the best I can get is:
(Gibbon, 1952)
How can I modify the citation format to produce a full Chicago style footnote?
Windows Server Datacenter: Azure Edition preview build 26236 now available in Azure
Hello Windows Server Insiders!
We welcome you to try Windows Server 2025 Datacenter: Azure Edition preview build 26236 in both Desktop experience and Core version on the Microsoft Server Operating Systems Preview offer in Azure. Azure Edition is optimized for operation in the Azure environment. For additional information, see Preview: Windows Server VNext Datacenter (Azure Edition) for Azure Automanage on Microsoft Docs. For more information about this build, see Announcing Windows Server Preview Build 26236 – Microsoft Community Hub.
Disable New outlook switch from new outlook app
I just want to know how I can disable the new Outlook switch (toggle button) in the new Outlook app so that no one can go back to the older version of Outlook. Thanks.
Enhance productivity with devices certified for Microsoft Teams
Microsoft Teams is the hub for teamwork, enabling effortless communication and collaboration. By using devices certified for Microsoft Teams, you can elevate your meeting and calling experience. These devices are carefully tested and certified to ensure they complement the Teams environment and make every interaction more engaging and productive.
Why use devices certified for Teams?
Devices certified for Teams are specifically designed to enhance your Teams experience. Let’s explore some of the benefits below:
Quality and Compatibility: These devices undergo thorough testing and certification to ensure they meet the highest standards of quality and reliability, delivering high-fidelity audio and HD video to ensure clear and effective communication. You can easily get started without any configuration required for these devices to work with Teams.
Firmware Updates: All devices support firmware updates to ensure you have access to the latest features and performance improvements.
Easily access Teams features: Personal peripheral devices are equipped with the Microsoft Teams button, which is designed to streamline your workflow by providing quick access to essential Teams functions. Let’s explore the functionality below:
Bring up the Teams App.
Join a Meeting.
Raise Your Hand within a meeting.
Optimized performance and reliable calling with phone devices certified for Teams
Certified phone devices for Teams deliver reliable and high-quality calling experiences with Teams, making it easy to make and receive calls. We’re committed to supporting reliable experiences on Teams phone devices and have made the following improvements to support uninterrupted experiences for our users. See the full list of updates here.
Simplified user experience
We continue to invest in new capabilities that create easy to use and consistent experiences for Teams phone devices users. The features below are only a few of the investments we’ve made to help users enjoy a unified experience that makes communication and collaboration easier.
Enhanced user experience: We have made updates to the user interface of the Calls app and the Dialpad to make it easier and faster for you to navigate and access the features you need. You can now switch between the Calls app and the home screen with ease, and enjoy a Dialpad-only view in both portrait and landscape modes to avoid typing errors.
New call handling capabilities: We’ve introduced several new capabilities and improvements to help you manage your calls in fewer clicks. You can now set up call forwarding from the phone home screen, send incoming calls to voicemail, and update your caller ID to make a call on behalf of a call queue phone number.
Performance, reliability, and stability enhancements
We recognize the critical importance of device performance and reliability for our customers using certified phone devices for Teams. We are dedicated to delivering calling and meeting experiences that work when you need them and have made several investments to ensure reliable and consistent communications for our customers.
Improved performance and reliability: We’re continuously monitoring reliability incidents and have addressed the top issues based on customer feedback. We have made improvements to the Teams app by organizing and updating its building blocks and resources. These updates have noticeably improved app performance, making the app faster to use and load.
OS upgrade: In collaboration with our OEM partners, we are advancing support for Android OS 12 on phone devices, to ensure users have the latest security updates available.
While Microsoft Teams phone devices offer the most immersive Teams experience, we understand that numerous customers have prior investments in SIP devices. SIP Gateway allows these customers to utilize their existing telephony equipment as they transition to Teams Phone, ensuring that the fundamental calling features of Teams are accessible. Learn more about SIP Gateway and see the full list of supported SIP devices here.
Learn more
Explore the comprehensive portfolio of devices certified for Teams here. Easily find and buy certified Teams devices through the Teams admin center or within the new device store in the Teams app.
Stay up to date on the latest feature announcements for certified peripherals and phone devices.
Microsoft Tech Community – Latest Blogs –Read More
A/B Testing, Session Affinity & Regional Rules for Multi-region AKS clusters with Azure Front Door
In this article we will explore how A/B testing can be performed in multi-region environments by leveraging Front Door session affinity and an ingress controller to ensure consistent user pools as we scale up our traffic. We will also explore how origin group rewrite rules on existing paths can be used to route traffic for specific user sets to specific locations.
Azure Front Door Rulesets and Session Affinity
Azure Front Door is a content delivery network (CDN) that provides fast, reliable, and secure access between users and applications using edge locations across the globe. Front Door, in this instance, is used to route traffic between the two regionally isolated AKS clusters. Front Door also supports a Web Application Firewall, custom domains, rewrite rules, and more.
Rewrite rules can be thought of as rule sets. These rule sets can evaluate and perform actions on any request according to certain properties or criteria. For example, we could create an evaluation on the address of a request, such as a “geomatch”, and pair that with one or multiple actions. In Front Door we have multiple actions available, including modifying request headers, modifying response headers, redirects, rewrites, and route overrides. In this case, for example, we may want to use a route configuration action to ensure that every request originating from a UK location is routed to the UK origin group.
Front Door has a number of routing methods available that are set at the origin group level. Most people are familiar with latency-based routing, which routes the incoming request to the origin with the lowest latency, usually the origin in closest proximity to the user. Azure Front Door also supports weighted traffic routing at the origin group level, which is perfect for A/B testing. In the weighted traffic routing method, traffic is distributed with a round-robin mechanism using the ratio of the weights specified. It is important to note that this still honours the “acceptable” latency sensitivity set by the user. If the latency sensitivity is set to 0, weighted routing will not take effect unless both origins have the exact same latency.
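To make the weight ratio concrete, here is a small sketch of weighted random selection between origins. This is illustrative only; it is not Front Door’s actual implementation, and it ignores the latency sensitivity described above:

```python
import random

def pick_origin(origins):
    """Pick an origin name from [(name, weight), ...] in proportion to weight.

    Illustrative sketch only: Front Door's real weighted routing also honours
    latency sensitivity, which this function ignores.
    """
    total = sum(weight for _, weight in origins)
    point = random.uniform(0, total)
    running = 0.0
    for name, weight in origins:
        running += weight
        if point <= running:
            return name
    return origins[-1][0]  # guard against floating-point edge cases

# With a 99/1 split, roughly 99% of requests land on the first origin.
random.seed(0)
counts = {"us-east": 0, "uk-south": 0}
for _ in range(10_000):
    counts[pick_origin([("us-east", 99), ("uk-south", 1)])] += 1
```

Running this, the counts come out at approximately a 99:1 ratio, which mirrors how adjusting origin weights shifts the share of new traffic.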
Although Front Door offers multiple traffic routing methods, when rolling out A/B testing we may want to be more granular about which users or requests land on our test origin. Let’s say, for example, we initially route only internal customers to a certain app version on a specific cluster based on the request IP, or route only a certain request protocol to a specific version of our API on a cluster. In these cases, rule sets can be implemented to give us granular control of the users or requests being sent to our test application.
Using rewrite rules will involve multiple origin groups. We could create an origin group per region that holds routes specific to the applications that are regional, as well as a shared services group that contains both regions’ origins for services that can be accessed regardless of the user’s location. There are some benefits to this group split.
Resiliency – By splitting our origin groups up in such a way, we maintain multi-region resiliency for the services that support it. If the East US clusters go down, only regional services are affected. While DR takes place for shared services, users can still access the UK South cluster.
Data Protection – For stateful services that have stringent data requirements we can ensure that users are not routed to a service that is not suitable even when using weighted routing as we can apply our rulesets.
Limitation of multiple routes for one path – Front Door does not allow multiple identical route paths, and a path can be associated with only one origin group. If we take the example of a route “/blue” that exists across both clusters, it can be associated only with the “services-shared” origin group; however, using rewrite rules we can reroute the request to an origin group of our choice, such as “services-uksouth”.
It is worth being aware that when creating origin groups the hard limit is 200 origin groups per Front Door profile. If you surpass 200 origin groups it is advised to create an additional Front Door profile.
One of the challenges when performing A/B testing is that, as we change the weight or expand the ruleset we are evaluating, other global load balancers or CDNs will often reset the user pools. With Front Door we can avoid this by enforcing session affinity on our origin group. Without session affinity, Front Door may route a single user’s requests to multiple origins. Once enabled, Azure Front Door adds a cookie to the user’s session; the cookies are called ASLBSA and ASLBSACORS. Cookie-based session affinity allows Front Door to identify different users even if they are behind the same IP address. This allows us to dynamically adjust the weighting of our A/B testing without disrupting the existing user pool on either our A or B cluster.
Before we take a look at the example, let’s first look at how we set up session affinity when using Front Door and AKS.
AKS & Reverse Proxies
When using sticky sessions with most Azure PaaS services, no additional setup is required. For AKS, since in most cases we use a reverse proxy to expose our services, we need to take an additional step to ensure that our sessions remain sticky. As mentioned, Front Door uses a session affinity cookie, and it will not set that cookie on a cacheable response, because a cached cookie would disrupt the cookies of every other client requesting the same response. As a result, if Front Door receives a cacheable response from the origin, a session cookie will not be set.
To ensure our responses are not cacheable, we need to add a cache-control header to our responses. We have multiple options to do this. Below are two examples, one for NGINX and one for Traefik.
NGINX
NGINX supports an annotation called configuration snippet. We can use it to set headers:
nginx.ingress.kubernetes.io/configuration-snippet: |
  more_set_headers "Cache-Control: no-store";
Traefik
Traefik does not support configuration snippets so on Traefik we can use the following custom-request-headers annotation:
ingress.kubernetes.io/custom-request-headers: "Cache-Control: no-store"
It’s important to note here we are talking about session affinity at the node level. For pod affinity please review the specific guidance for your selected ingress controller. This will be used in conjunction with Front Door session affinity.
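For context, here is a minimal sketch of how the NGINX annotation above might sit on a full Ingress resource. The names used here (servicedemo, services-shared.example.com) are hypothetical placeholders, not from the demo:

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: servicedemo                      # hypothetical name
  annotations:
    # Mark responses non-cacheable so Front Door sets its affinity cookie
    nginx.ingress.kubernetes.io/configuration-snippet: |
      more_set_headers "Cache-Control: no-store";
spec:
  ingressClassName: nginx
  rules:
    - host: services-shared.example.com  # hypothetical host
      http:
        paths:
          - path: /blue
            pathType: Prefix
            backend:
              service:
                name: servicedemo
                port:
                  number: 80
```

The configuration-snippet annotation relies on the headers-more module that ships with the ingress-nginx controller.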
Example – Session Affinity for A/B Testing
I will admit this is not the most thrilling demo to see as text and images but it does show how this can be validated. We use a container image that provides node and pod information to understand what pod/version of our application we have landed on. This is a public image and can be pulled here (scubakiz/servicedemo:1.0). This application is running on the same path across two clusters in the services-shared origin group. Front Door has session affinity enabled and the headers are set on both ingress paths. It is important to note that this application refreshes the browser every 10 seconds. Without session affinity you would notice your pod changing.
We initially set the US origin within the origin group to have 99% of the incoming traffic and when we access the web application we can see we are routed to a US deployment of our application. We can see that this pod exists in our US cluster.
When we adjust the weighting to be 99% to the UK cluster and open a new incognito tab we can see that we are now routed to our UK deployments. This weighting change takes about 5 minutes to take effect.
As mentioned this application refreshes every 10 seconds. This means that we are able to observe our original US user pool remaining on that cluster while new users are now directed to the UK user pool. We can see that by comparing the new pod details incognito window on the right to our UK pods. We can see in the bottom left that our constantly refreshing US Session is still connected.
Although this is an extreme example, if we think of the UK pool as our B testing pool, under the original weightings we could slowly increase the percentage of traffic from 1% to onboard more users without interrupting other users. Similarly, at the point we wanted to go to 100% on a shared services cluster, we could flip the traffic with assurance that users on the old version will not suddenly be moved onto a new version.
Microsoft Tech Community – Latest Blogs –Read More
Connect from Azure SQL database to Storage account using Private Endpoint
We have cases where our customers want to access an Azure Storage Account (SA) from Azure SQL Database using a Private Endpoint (PE).
For additional information on how you can configure a PE for your storage account, please visit the following link: Tutorial: Connect to a storage account using an Azure Private Endpoint. The process involves configuring the private endpoint for the storage account to allow secure and private communication between the Azure resources and your storage account.
I would like to clarify that a private endpoint is a connection from a VNET to a resource. However, Azure SQL DB is not VNET-integrated and, as a result, it is not possible to access a storage account from Azure SQL Database via a private endpoint.
The PE can still exist for other resources that can connect to the SA using PE, as example Azure SQL MI or Virtual Machines, but Azure SQL DB can’t use it.
Our customers need to use at least the Selected Networks option (public, but restricted) together with the Trusted services option: specify the trusted server, ensure the server’s managed identity has RBAC on the storage account, and use managed identity (not SAS) for the database credential.
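As a sketch of that last step, the database credential might look like the following. The credential, account, and container names here are hypothetical:

```sql
-- Sketch: assumes a database master key already exists, and that the
-- server's managed identity has the Storage Blob Data Reader role on
-- the (hypothetical) storage account below.
CREATE DATABASE SCOPED CREDENTIAL msi_cred
WITH IDENTITY = 'Managed Identity';

CREATE EXTERNAL DATA SOURCE MyAzureBlobStorage
WITH (
    TYPE = BLOB_STORAGE,
    LOCATION = 'https://mystorageaccount.blob.core.windows.net/mycontainer',
    CREDENTIAL = msi_cred
);

-- Example usage:
-- BULK INSERT dbo.MyTable FROM 'data.csv'
-- WITH (DATA_SOURCE = 'MyAzureBlobStorage');
```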
Microsoft Tech Community – Latest Blogs –Read More
MATLAB not indexing table with correct data type, how to specify data type when indexing table?
I wrote a script that takes in an excel table using the "readtable" command.
inputTable = readtable(completeTableFilePath,'Sheet',sheetChoiceFileName,'TextType','string');
This, to my knowledge, should import all the cells of the excel file as strings. One part of the excel file is a column that has hex numbers (they could be just "92" or "13C" etc…).
I had a really long excel table (around 300 lines) that had this column of hex numbers. I tried the program with a smaller table, maybe only 8 lines, and now it is having issues, particularly in the hex column.
The code:
% I just added these for testing (the idName)
idName = upper(inputTable{currentRowNumber,'ID'});
idName
if upper(inputTable{currentRowNumber,'ID'}) ~= "TBD"
    if upper(inputTable{currentRowNumber,'ID'}) ~= recurringID
        recurringID = upper(inputTable{currentRowNumber,'ID'});
        messageIDDecimal = hex2dec(recurringID);
    end
end
As I said, I changed nothing about this between runs; it works perfectly with the large table and it does not work with the smaller table. When I try to run it with the smaller table (which is just the larger table with a lot of the rows chopped off), I get idName as a "double" data type.
I figured I could fix this by not relying on MATLAB to have the correct data type (which it should have anyway, because I specified so earlier!) and instead forcing the data type to string using string(…).
% idName for testing
idName = upper(string(inputTable{currentRowNumber,'ID'}));
idName
if upper(string(inputTable{currentRowNumber,'ID'})) ~= "TBD"
    if upper(string(inputTable{currentRowNumber,'ID'})) ~= recurringID
        recurringID = upper(string(inputTable{currentRowNumber,'ID'}));
        messageIDDecimal = hex2dec(recurringID);
    end
end
I ran the above code on the smaller table and got the error "<missing> string element not supported" on the line with hex2dec. idName shows up as <missing>.
I changed the code again to show the raw table indexing:
% idName for testing
idName = upper(string(inputTable{currentRowNumber,'ID'}));
idName
idTest = inputTable{currentRowNumber,'ID'};
idTest
if upper(string(inputTable{currentRowNumber,'ID'})) ~= "TBD"
    if upper(string(inputTable{currentRowNumber,'ID'})) ~= recurringID
        recurringID = upper(string(inputTable{currentRowNumber,'ID'}));
        messageIDDecimal = hex2dec(recurringID);
    end
end
idTest was coming up as the double data type, until the final go. idName was <missing> and idTest was "NaN". To my knowledge, "NaN" is "Not a Number". It is giving an error because it's trying to read a hex number as a regular double data type. I cannot find a way to fix this. I already specified that the table is to be imported as strings. I cannot cast my way out of this, because the variable is not merely holding the data in the wrong type; it is throwing an error and not holding the data at all.
There is nothing I can do, unless there is some way to make MATLAB import only as a specific data type. I am having a lot of issues because MATLAB assumes data types. Maybe coming from C just makes me think differently. I can see how it can be useful, but the fact that there is no way around it (that I know of) makes it not useful.
I ran the script with "idName" and "idTest" on the larger excel table. It properly imported them as strings. I was able to index, and both idTest and idName showed as strings, even the hex numbers that only had the regular 0-9 numbers. So it is not the code. It is just MATLAB sometimes deciding to import as strings and sometimes not.
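One way to take the guesswork out of the import (a sketch, assuming the column is named ID and using the hypothetical variable names from the question) is to set the variable types explicitly with detectImportOptions and setvartype instead of relying on readtable's defaults:

```matlab
% Sketch: force the ID column to be imported as string regardless of its
% contents, so short all-numeric tables are not auto-detected as double.
opts = detectImportOptions(completeTableFilePath, 'Sheet', sheetChoiceFileName);
opts = setvartype(opts, 'ID', 'string');   % import ID as string, not double
inputTable = readtable(completeTableFilePath, opts);

recurringID = upper(string(inputTable{currentRowNumber, 'ID'}));
messageIDDecimal = hex2dec(recurringID);   % e.g. "13C" -> 316
```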
table, importing excel data, data import, data types MATLAB Answers — New Questions
3D Figure from Excel (x,y,z)
I have been trying to create a 3D representation of my data for an ellipsoid-shaped measurement. I have x (length), y (width), and z (intensity) data and have been able to create plots in the past for a variation of this data (y was time). The data is 300×3 (100×1 for each value). For this project, I am struggling to create a 3D figure for my data. I have tried using surf, ellipsoid, fimplicit3, and a bunch of different functions. I’m attaching an image of the desired shape.
Does anyone have any ideas for what code or function I should try instead? 3d plots, importing excel data, matlab function, surf, ellipsoid, mesh, 3d MATLAB Answers — New Questions