Tag Archives: microsoft
Disable New outlook switch from new outlook app
I just want to know how I can disable the new Outlook switch (toggle button) in the new Outlook app so that no one can go back to the older version of Outlook. Thanks.
Enhance productivity with devices certified for Microsoft Teams
Microsoft Teams is the hub for teamwork, enabling effortless communication and collaboration. By using devices certified for Microsoft Teams, you can elevate your meeting and calling experience. These devices are carefully tested and certified to ensure they complement the Teams environment and make every interaction more engaging and productive.
Why use devices certified for Teams?
Devices certified for Teams are specifically designed to enhance your Teams experience. Let’s explore some of the benefits below:
Quality and Compatibility: These devices undergo thorough testing and certification to ensure they meet the highest standards of quality and reliability, delivering high-fidelity audio and HD video to ensure clear and effective communication. You can easily get started without any configuration required for these devices to work with Teams.
Firmware Updates: All devices support firmware updates to ensure you have access to the latest features and performance improvements.
Easily access Teams features: Personal peripheral devices are equipped with the Microsoft Teams button, which is designed to streamline your workflow by providing quick access to essential Teams functions. Let’s explore the functionality below:
Bring up the Teams App.
Join a Meeting.
Raise Your Hand within a meeting.
Optimized performance and reliable calling with phone devices certified for Teams
Certified phone devices for Teams deliver reliable and high-quality calling experiences with Teams, making it easy to make and receive calls. We’re committed to supporting reliable experiences on Teams phone devices and have made the following improvements to support uninterrupted experiences for our users. See the full list of updates here.
Simplified user experience
We continue to invest in new capabilities that create easy to use and consistent experiences for Teams phone devices users. The features below are only a few of the investments we’ve made to help users enjoy a unified experience that makes communication and collaboration easier.
Enhanced user experience: We have made updates to the user interface of the Calls app and the Dialpad to make it easier and faster for you to navigate and access the features you need. You can now switch between the Calls app and the home screen with ease, and enjoy a Dialpad-only view in both portrait and landscape modes to avoid typing errors.
New call handling capabilities: We’ve introduced several new capabilities and improvements to help you manage your calls in fewer clicks. You can now set up call forwarding from the phone home screen, send incoming calls to voicemail, and update your caller ID to make a call on behalf of a call queue phone number.
Performance, reliability, and stability enhancements
We recognize the critical importance of device performance and reliability for our customers using certified phone devices for Teams. We are dedicated to delivering calling and meeting experiences that work when you need them and have made several investments to ensure reliable and consistent communications for our customers.
Improved performance and reliability: We’re continuously monitoring reliability incidents and have addressed the top issues based on customer feedback. We have made improvements to the Teams app by organizing and updating its building blocks and resources. These updates have noticeably improved app performance, making the app faster to use and load.
OS upgrade: In collaboration with our OEM partners, we are advancing support for Android OS 12 on phone devices, to ensure users have the latest security updates available.
While Microsoft Teams phone devices offer the most immersive Teams experience, we understand that numerous customers have prior investments in SIP devices. SIP Gateway allows these customers to utilize their existing telephony equipment as they transition to Teams Phone, ensuring that the fundamental calling features of Teams are accessible. Learn more about SIP Gateway and see the full list of supported SIP devices here.
Learn more
Explore the comprehensive portfolio of devices certified for Teams here. Easily find and buy certified Teams devices through the Teams admin center or within the new device store in the Teams app.
Stay up to date on the latest feature announcements for certified peripherals and phone devices.
Microsoft Tech Community – Latest Blogs –Read More
A/B Testing, Session Affinity & Regional Rules for Multi-region AKS clusters with Azure Front Door
In this article we will explore how A/B testing in multi-region environments can be performed by leveraging Front Door session affinity and an ingress controller to ensure consistent user pools as we scale up our traffic. We will also explore how we can use origin group rewrite rules on existing paths to ensure traffic for specific user sets is routed to specific locations.
Azure Front Door Rulesets and Session Affinity
Azure Front Door is a content delivery network (CDN) that provides fast, reliable, and secure access between users and applications using edge locations across the globe. Front Door, in this instance, is used to route traffic across the globe between the two regionally isolated AKS clusters. Front Door also supports a Web Application Firewall, custom domains, rewrite rules, and more.
Rewrite rules can be thought of as rule sets; these rule sets can evaluate and perform actions on any request according to certain properties or criteria. For example, we could create an evaluation on the address of a request, such as a “geomatch”, and pair that with one or multiple actions. In Front Door we have multiple actions that can be used, including modifying request headers, modifying response headers, redirects, rewrites, and route overrides. In this case, for example, we may want to use a route configuration action to ensure that every request originating from a UK location is routed to the UK origin group.
Front Door has a number of routing methods available that are set at the origin group level. Most people are familiar with latency-based routing, which routes the incoming request to the origin with the lowest latency, usually the origin in closest proximity to the user. Azure Front Door also supports weighted traffic routing at the origin group level, which is perfect for A/B testing. In the weighted traffic routing method, traffic is distributed with a round-robin mechanism using the ratio of the weights specified. It is important to note that this still honours the “acceptable” latency sensitivity set by the user. If the latency sensitivity is set to 0, weighted routing will not take effect unless both origins have the exact same latency.
Although Front Door offers multiple traffic routing methods when rolling out A/B testing we may want to be more granular with which users or requests are landing on our test origin. Let’s say for example we initially only route internal customers to a certain app version on a specific cluster based on the request IP or perhaps only a certain request protocol to a specific version of our API on a cluster. In these cases rule sets can be implemented to give us granular controls of the users or requests that are being sent to our test application.
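As a sketch of what such granular control could look like, the following Bicep fragment defines a rule that matches requests geolocated to the United Kingdom and overrides the route to a UK-specific origin group. The profile name, rule set name, origin group name, and API version are assumptions for illustration; the rule set must also be associated with a route to take effect.

```bicep
// Assumed: an existing Front Door Standard/Premium profile named 'afd-profile'
// and an origin group named 'services-uksouth'.
resource profile 'Microsoft.Cdn/profiles@2023-05-01' existing = {
  name: 'afd-profile'
}

resource ruleSet 'Microsoft.Cdn/profiles/ruleSets@2023-05-01' = {
  parent: profile
  name: 'regionalrules'
}

resource ukGeoRule 'Microsoft.Cdn/profiles/ruleSets/rules@2023-05-01' = {
  parent: ruleSet
  name: 'routeukusers'
  properties: {
    order: 1
    conditions: [
      {
        // Match requests whose remote address geolocates to the UK.
        name: 'RemoteAddress'
        parameters: {
          typeName: 'DeliveryRuleRemoteAddressConditionParameters'
          operator: 'GeoMatch'
          matchValues: [ 'GB' ]
        }
      }
    ]
    actions: [
      {
        // Override the route so matching requests land on the UK origin group.
        name: 'RouteConfigurationOverride'
        parameters: {
          typeName: 'DeliveryRuleRouteConfigurationOverrideActionParameters'
          originGroupOverride: {
            originGroup: {
              id: resourceId('Microsoft.Cdn/profiles/originGroups', 'afd-profile', 'services-uksouth')
            }
            forwardingProtocol: 'MatchRequest'
          }
        }
      }
    ]
  }
}
```

The same rule shape works for other criteria, such as a request-header match for internal users or a request-protocol condition, by swapping the condition block.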
Using rewrite rules will involve multiple origin groups. We could create an origin group per region that holds routes specific to that region’s applications, as well as a shared-services group containing both regions’ origins for services that can be accessed regardless of the user’s location. There are some benefits to this group split.
Resiliency – By splitting our origin groups up in this way we maintain multi-region resiliency for the services that support it. If the East US cluster(s) go down, only regional services are affected. While DR takes place for shared services, users can still access the UK South cluster.
Data Protection – For stateful services that have stringent data requirements, we can ensure that users are not routed to an unsuitable service, even when using weighted routing, because we can apply our rule sets.
Limitation of multiple routes for one path – Front Door does not allow multiple identical route paths, and a path can be associated with only one origin group. If we take the example of a route “/blue” that exists across both clusters, it will have to be associated only with the “services-shared” origin group; however, using rewrite rules we can reroute the request to an origin group of our choice, such as “services-uksouth”.
It is worth being aware that when creating origin groups the hard limit is 200 origin groups per Front Door profile. If you surpass 200 origin groups it is advised to create an additional Front Door profile.
One of the challenges when performing A/B testing is that, as we change the weight or expand the rule set we are evaluating, other global load balancers or CDNs will often reset the user pools. With Front Door we can avoid this by enforcing session affinity on our origin group. Without session affinity, Front Door may route a single user’s requests to multiple origins. Once enabled, Azure Front Door adds a cookie to the user’s session; the cookies are called ASLBSA and ASLBSACORS. Cookie-based session affinity allows Front Door to identify different users even if they are behind the same IP address. This allows us to dynamically adjust the weighting of our A/B testing without disrupting the existing user pool on either our A or B cluster.
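To make this concrete, the following Bicep fragment sketches an origin group with session affinity enabled and two weighted origins in a roughly 99/1 split for the A and B pools. The profile name, origin group name, hostnames, and API version are assumptions for illustration.

```bicep
// Assumed: an existing Front Door Standard/Premium profile named 'afd-profile'.
resource profile 'Microsoft.Cdn/profiles@2023-05-01' existing = {
  name: 'afd-profile'
}

resource sharedServices 'Microsoft.Cdn/profiles/originGroups@2023-05-01' = {
  parent: profile
  name: 'services-shared'
  properties: {
    sessionAffinityState: 'Enabled' // pin each user to one origin via the ASLBSA cookie
    loadBalancingSettings: {
      sampleSize: 4
      successfulSamplesRequired: 3
      // Latency sensitivity: origins within this window are eligible for
      // weighted round-robin. At 0, weighting only applies if latencies match.
      additionalLatencyInMilliseconds: 1000
    }
    healthProbeSettings: {
      probePath: '/'
      probeProtocol: 'Https'
      probeRequestType: 'HEAD'
      probeIntervalInSeconds: 60
    }
  }
}

resource usOrigin 'Microsoft.Cdn/profiles/originGroups/origins@2023-05-01' = {
  parent: sharedServices
  name: 'aks-eastus'
  properties: {
    hostName: 'eastus.contoso.example' // placeholder hostname
    weight: 990 // ~99% of traffic (weights are relative, 1-1000)
    priority: 1
    enabledState: 'Enabled'
  }
}

resource ukOrigin 'Microsoft.Cdn/profiles/originGroups/origins@2023-05-01' = {
  parent: sharedServices
  name: 'aks-uksouth'
  properties: {
    hostName: 'uksouth.contoso.example' // placeholder hostname
    weight: 10 // ~1% of traffic for the B pool
    priority: 1
    enabledState: 'Enabled'
  }
}
```

Ramping the test is then a matter of adjusting the two `weight` values; affinity cookies keep existing users on the origin they first landed on.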
Before we take a look at the example, let’s first look at how we set up session affinity when using Front Door and AKS.
AKS & Reverse Proxies
When using sticky sessions with most Azure PaaS services, no additional setup is required. For AKS, because in most cases we use a reverse proxy to expose our services, we need to take an additional step to ensure that our sessions remain sticky. This is because, as mentioned, Front Door uses a session affinity cookie; if the response is cacheable, the cookie will not be set, as doing so would disrupt the cookies of every other client requesting the same response. As a result, if Front Door receives a cacheable response from the origin, a session cookie will not be set.
To ensure our responses are not cacheable, we need to add a cache-control header to our responses. We have multiple options to do this. Below are two examples: one for NGINX and one for Traefik.
NGINX
NGINX supports an annotation called configuration snippet. We can use it to set headers:
nginx.ingress.kubernetes.io/configuration-snippet: |
  more_set_headers "Cache-Control: no-store";
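For context, a minimal Ingress manifest using this annotation might look like the following. The hostname, service name, and path are placeholders, and note that recent ingress-nginx versions require the cluster administrator to allow snippet annotations (`allow-snippet-annotations: "true"` in the controller ConfigMap).

```yaml
# Sketch: an NGINX Ingress that marks responses non-cacheable so Front Door
# sets its session affinity cookie. Names and hosts are placeholders.
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: servicedemo
  annotations:
    nginx.ingress.kubernetes.io/configuration-snippet: |
      more_set_headers "Cache-Control: no-store";
spec:
  ingressClassName: nginx
  rules:
    - host: eastus.contoso.example
      http:
        paths:
          - path: /blue
            pathType: Prefix
            backend:
              service:
                name: servicedemo
                port:
                  number: 80
```

The same manifest, applied on each regional cluster with its own host, keeps the header consistent across every ingress path behind the origin group.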
Traefik
Traefik does not support configuration snippets so on Traefik we can use the following custom-request-headers annotation:
ingress.kubernetes.io/custom-request-headers: "Cache-Control: no-store"
It’s important to note here we are talking about session affinity at the node level. For pod affinity please review the specific guidance for your selected ingress controller. This will be used in conjunction with Front Door session affinity.
Example – Session Affinity for A/B Testing
I will admit this is not the most thrilling demo to experience as text and images, but it does show how this can be validated. We use a container image that provides node and pod information so we can tell which pod and version of our application we have landed on. This is a public image and can be pulled here (scubakiz/servicedemo:1.0). The application runs on the same path across two clusters in the services-shared origin group. Front Door has session affinity enabled, and the headers are set on both ingress paths. It is important to note that this application refreshes the browser every 10 seconds; without session affinity you would notice your pod changing.
We initially set the US origin within the origin group to receive 99% of the incoming traffic, and when we access the web application we can see we are routed to a US deployment of our application. We can see that this pod exists in our US cluster.
When we adjust the weighting so that 99% goes to the UK cluster and open a new incognito tab, we can see that we are now routed to our UK deployments. This weighting change takes about 5 minutes to take effect.
As mentioned, this application refreshes every 10 seconds. This means we are able to observe our original US user pool remaining on that cluster while new users are directed to the UK pool. We can see this by comparing the pod details in the new incognito window on the right to our UK pods, and in the bottom left we can see that our constantly refreshing US session is still connected.
Although this is an extreme example, if we think of the UK pool as our B testing pool under the original weightings, we could slowly increase the percentage of traffic from 1% to onboard more users without interrupting others. Similarly, at the point we wanted to go to 100% on a shared-services cluster, we could flip the traffic with assurance that users on the old version will not suddenly be moved onto a new version.
Connect from Azure SQL database to Storage account using Private Endpoint
We have cases where our customers want to access an Azure Storage Account (SA) from Azure SQL Database using a Private Endpoint (PE).
For additional information on how to configure a PE for your storage account, please visit the following link: Tutorial: Connect to a storage account using an Azure Private Endpoint. The process involves configuring the private endpoint for the storage account to allow secure and private communication between the Azure resources and your storage account.
I would like to clarify that a private endpoint is a connection from a VNET to a resource. However, Azure SQL DB is not VNET integrated and, as a result, it is not possible to access a storage account from Azure SQL Database via a private endpoint.
The PE can still exist for other resources that can connect to the SA using a PE, for example Azure SQL MI or virtual machines, but Azure SQL DB can’t use it.
Customers need to at least use Selected networks (public, but restricted) with the trusted-services exception, specify the trusted server, ensure the server’s managed identity has an appropriate RBAC role on the storage account, and use managed identity (not SAS) for the database scoped credential.
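A minimal T-SQL sketch of this pattern follows. The credential, data source, storage account, container, and table names are placeholders; it assumes the logical server’s managed identity has been granted a role such as Storage Blob Data Reader on the account.

```sql
-- Credential backed by the logical server's managed identity (no SAS token).
CREATE DATABASE SCOPED CREDENTIAL [mi_cred]
WITH IDENTITY = 'Managed Identity';

-- External data source pointing at the storage account's public (but
-- network-restricted) blob endpoint.
CREATE EXTERNAL DATA SOURCE [mydata]
WITH (
    TYPE = BLOB_STORAGE,
    LOCATION = 'https://mystorageaccount.blob.core.windows.net/mycontainer',
    CREDENTIAL = [mi_cred]
);

-- Example use: bulk load a CSV from the storage account.
BULK INSERT dbo.MyTable
FROM 'data.csv'
WITH (DATA_SOURCE = 'mydata', FORMAT = 'CSV', FIRSTROW = 2);
```

Because the request arrives over the trusted-services path with the server’s identity, it passes the Selected networks restriction even though no private endpoint is involved.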
Subform updating table, but not when you open form individually to input data
I have an employee form connected to the employee table. I have a contacts form connected to the Contacts table. When I drop the Contacts form into the Employee form and type a few notes, it updates in the table and links the IDs together. If I create a button just to open the contacts form to modify that employee’s record, it updates the table, but with no linked ID now.
Outlook Mobile App – only showing like 24 hours worth of email
Has anyone come across the issue where you only see about 24 hours’ worth of emails in the Outlook mobile app? It’s happening on Android and some Apple devices.
ADF throwing error while connecting through SFTP
Hi there,
There are files coming from partners in CSV format. They upload them to FTP, and a MOVEit job moves them to the SFTP location. When ADF uses the SFTP linked service to read the file, it errors out with the following error. This file does not have any data issues.
However if I use the Azure blob storage and upload the file there and read it using ADF’s Azure blob storage linked service it gets processed perfectly.
Could you please help me understand why I am getting the error only when processing the file using SFTP?
Error while using SFTP –
ErrorCode=DelimitedTextMoreColumnsThanDefined,’Type=Microsoft.DataTransfer.Common.Shared.HybridDeliveryException,Message=Error found when processing ‘Csv/Tsv Format Text’ source ‘TodaysFile_06_18_2024.csv’ with row number 88186: found more columns than expected column count 25.,Source=Microsoft.DataTransfer.Common,’
cancelled booking
Hello – if a user cancels a booking, where do i find the name/email of the user who cancelled?
Name change & profile picture sync takes weeks
Hello,
When we change a user’s display name or profile picture, it takes weeks for the change to fully synchronize before it shows the same correct information in Teams, Outlook, and SharePoint.
During the week after the change has taken place, the new information can be viewed in Teams but not in SharePoint; the day after it can be vice versa, the day after that you might not be able to view the new information anywhere, and the day after that it could come back to being visible only in Teams. And on it goes.
But after a couple of weeks or a month or more, everything comes in order and the new display name or profile picture is correctly visible everywhere.
Why is this happening, and is there some kind of manual sync I could run to get the change to pull through at once?
Best regards,
Linus
Nonprofit CRM Donorfy builds the future of fundraising with Microsoft
Donorfy’s cloud-based nonprofit CRM platform provides fundraisers with the tools to spend less time on busywork—and focus on generating the revenue that drives impact. With Donorfy, charity startups and established giants alike save time on recurring fundraising tasks, deepen relationships with new and ongoing supporters, and increase revenue to fund their important work. “We deliver simplicity,” explains Ben Brett, CTO and co-founder of Donorfy. “Our platform takes care of the nuts and bolts, and it removes tedious manual steps, so our customers can grow.”
To expand its reach and accelerate the development of advanced features, Donorfy joined the Microsoft Tech for Social Impact (TSI) Digital Natives Partner Program. The program brings on cloud-first SaaS companies and independent software vendors (ISVs) to serve nonprofits through innovative technology solutions in line with Microsoft technology offerings.
“The Digital Natives partnership helps Donorfy, but ultimately it helps our customers do more,” says Ben Twyman, Chief Commercial Officer at Donorfy. “Working with Microsoft—probably the most advanced company in the world working on AI and related specialties—means we can bring that expertise to our customer base, and Digital Natives helps us shout out how we’re helping customers reach their mission faster.”
“We are proud to partner with Donorfy to help charities in the UK and beyond achieve their mission,” says Craig Parker, Global SaaS Partnerships Lead for the Digital Natives Partner Program at Microsoft Tech for Social Impact. “Donorfy, as a leading fundraising CRM platform, embraces innovation and shares our vision of how AI can accelerate social good.”
As a cloud-native company, Donorfy has always been built in Microsoft Azure. “By using a single supplier—Microsoft—we have an end-to-end chain that enables us to build a great solution,” Brett explains. “The scalability in Azure saves a massive amount of time and allows us to get on with our jobs of supporting charities.”
The ongoing relationship with Microsoft made joining Digital Natives a clear next step. Digital Natives supports nonprofit-focused businesses like Donorfy to reach more customers and develop new solutions that empower mission-driven organizations.
“There’s a good synergy between Microsoft and Donorfy, in that we’re both making technology simple, accessible, and smart enough to make the world better and fairer,” Twyman says. “With this partnership, both companies win and achieve.”
Read the full case study
Microsoft Entra ID Governance licensing clarifications
In the past few weeks, we’ve announced the general availability of Microsoft Entra External ID and Microsoft Entra ID multi-tenant collaboration. We’ve received requests for more detail from some of you regarding licensing, so I’d like to provide additional clarity for both of these scenarios.
One person, one license
In the first announcement of more multi-tenant organization (MTO) features to enhance collaboration between users, we stated that only one Microsoft Entra ID P1 license is required per employee per multi-tenant organization. Expanding on that, the term “multi-tenant organization” has two meanings: an organization that owns and operates more than one tenant, and a set of features that enhance the collaboration experience for users across those tenants. However, your organization doesn’t have to deploy those capabilities to take advantage of the one person, one license philosophy. An organization that owns and operates multiple tenants only needs one Entra ID license per employee across those tenants. The same philosophy applies to Entra ID Governance: the organization only needs one license per person to govern the identities of these users across these tenants.
To illustrate this scenario, let’s consider an organization called Contoso, which owns ZT Tires and Tailspin Toys. Mallory is hired by Contoso, which uses Lifecycle Workflows in Entra ID Governance to onboard her user account and grant her access to the resources she needs for her job. Her account receives an access package with an entitlement to ZT Tires’ ERP app, and she requests access to Tailspin Toys inventory management app. Because Mallory has an Entra ID Governance license in the Contoso tenant, her identity can be governed in the ZT Tires and Tailspin Toys tenants with no additional governance licenses – one person, one license.
Entra ID Governance in Microsoft Entra External ID
The other announcement covered Entra External ID, Microsoft’s solution to secure customer and business collaborator access to applications. In November, I blogged about the licensing model to govern the identities of business guests in the B2B scenario for Entra External ID and shared that pricing would be $0.75 per actively governed identity per month. Because metered, usage-based pricing to govern the identities of business guests is a different model than the existing, license-based pricing model to govern the identities of employees, I’d like to share more detail.
A business guest identity in Entra External ID will accrue a single $0.75 charge in any month in which that identity is actively governed, no matter how many governance actions are taken on that identity. For example:
A Contoso employee named Gerhart collaborates with Pradeep of Woodgrove Bank to produce Contoso’s quarterly financial statements. Contoso has deployed Entra External ID for its business partners such as Woodgrove Bank. In April, Pradeep accesses Contoso’s Microsoft Teams, where Gerhart stores his quarterly reporting documents, but his identity in Entra External ID has no identity governance actions taken on it, so it doesn’t accrue any charges.
In May, Pradeep receives an access package with an entitlement to Contoso’s accounting system, and Gerhart reviews Pradeep’s existing access to Contoso’s inventory management database, as well as to the Teams with the quarterly reporting documents. Because Pradeep’s identity in Entra External ID had identity governance actions taken on it, Contoso will accrue a $0.75 charge. Note that the charge is applied once, even though there were three identity governance actions taken during the month. Once that Entra External ID identity was governed in May, additional identity governance actions do not generate additional charges for that identity in May.
To learn more about Microsoft Entra ID Governance licensing, visit the Licensing Fundamentals page.
Read more on this topic
Entra ID multi-tenant collaboration
Microsoft Entra External ID general availability
Learn more about Microsoft Entra
Prevent identity attacks, ensure least privilege access, unify access controls, and improve the experience for users with comprehensive identity and network access solutions across on-premises and clouds.
Microsoft Entra News and Insights | Microsoft Security Blog
Microsoft Entra blog | Tech Community
Microsoft Entra documentation | Microsoft Learn
Microsoft Entra discussions | Microsoft Community
I need to apply a $25 credit to our Microsoft 365 subscription
Our subscription to Microsoft 365 will automatically renew in July. The subscription is under my wife’s account (we share the app). I have a $25 credit in my Microsoft account. How can I apply this to the subscription charge? Thank you.
How would Azure create IoT systems for FM with focus on sustainability?
How would Azure develop IT solutions for FM that are transparent on their impact on sustainability? How could we demonstrate to clients the efficiency savings and sustainability impact?
Is there a place to post .NET jobs?
My company is looking to hire a strong developer, and I was wondering if there is a place within this community where I could post the job.
Octo Tempest: Hybrid identity compromise recovery
Have you ever gone toe to toe with the threat actor known as Octo Tempest? This increasingly aggressive threat actor group has evolved their targeting, outcomes, and monetization over the past two years to become a dominant force in the world of cybercrime. But what exactly defines this entity, and why should we proceed with caution when encountering them?
Octo Tempest (formerly DEV-0875) is a group known for employing social engineering, intimidation, and other human-centric tactics to gain initial access to an environment, granting themselves privileged access to cloud and on-premises resources before exfiltrating data and unleashing ransomware across the environment. Their ability to penetrate and move around identity systems with relative ease encapsulates the essence of Octo Tempest and is the focus of this blog post. Their activities have been closely associated with:
SIM swapping scams: Seize control of a victim’s phone number to circumvent multifactor authentication.
Identity compromise: Initiate password spray attacks or phishing campaigns to gain initial access and create federated backdoors to ensure persistence.
Data breaches: Infiltrate the networks of organizations to exfiltrate confidential data.
Ransomware attacks: Encrypt a victim’s data and demand primary, secondary, or tertiary ransom fees in exchange for not disclosing stolen information or for releasing the decryption key to enable recovery.
Figure 1: The evolution of Octo Tempest’s targeting, actions, outcomes, and monetization.
Some key considerations to keep in mind for Octo Tempest are:
Language fluency: Octo Tempest purportedly operates predominantly in native English, heightening the risk for unsuspecting targets.
Dynamic: Known to pivot quickly and change their tactics depending on the target organization’s response.
Broad attack scope: They target diverse businesses ranging from telecommunications to technology enterprises.
Collaborative ventures: Octo Tempest may forge alliances with other cybercrime cohorts, such as ransomware syndicates, amplifying the impact of their assaults.
As our adversaries adapt their tactics to match the changing defense landscape, it’s essential for us to continually define and refine our response strategies. This requires us to promptly utilize forensic evidence and efficiently establish administrative control over our identity and access management services. In pursuit of this goal, Microsoft Incident Response has developed a response playbook that has proven effective in real-world situations. Below, we present this playbook to empower you to tackle the challenges posed by Octo Tempest, ensuring the smooth restoration of critical business services such as Microsoft Entra ID and Active Directory Domain Services.
Cloud eviction
We begin with the cloud eviction process. If any actor takes control of the identity plane in Microsoft Entra ID, a set of steps should be followed to hit reset and take back administrative control of the environment. Here are some tactical measures employed by the Microsoft Incident Response team to ensure the security of the cloud identity plane:
Figure 2: Cloud response playbook.
Break glass accounts
Emergency scenarios require emergency access. For this purpose, one or two administrative accounts should be established. These accounts should be exempted from Conditional Access policies to ensure access in critical situations, monitored to verify their non-use, and passwords should be securely stored offline whenever feasible.
More information on emergency access accounts can be found here: Manage emergency access admin accounts – Microsoft Entra ID | Microsoft Learn.
Federation
Octo Tempest leverages cloud-born federation features to take control of a victim’s environment, allowing for the impersonation of any user inside the environment, even if multifactor authentication (MFA) is enabled. While this is a damaging technique, it is relatively simple to mitigate by logging in via the Microsoft Graph PowerShell module and setting the domain back from Federated to Managed. Doing so breaks the relationship and prevents the threat actor from minting further tokens.
Connect to your Microsoft Entra tenant by running the following PowerShell cmdlet and signing in with Global Administrator credentials (the Domain.ReadWrite.All scope is required to update domain authentication settings):
Connect-MgGraph -Scopes "Domain.ReadWrite.All"
Change the domain’s authentication type from Federated to Managed by running this cmdlet:
Update-MgDomain -DomainId "test.contoso.com" -BodyParameter @{AuthenticationType="Managed"}
Service principals
Service principals have their own identities, credentials, roles, and permissions, and can be used to access resources or perform actions on behalf of the applications or services they represent. These have been used by Octo Tempest for persistence in compromised environments. Microsoft Incident Response recommends reviewing all service principals and removing or reducing permissions as needed.
Conditional Access policies
These policies govern how an application or identity can access Microsoft Entra ID or your organization’s resources, and configuring them appropriately ensures that only authorized users are accessing company data and services. Microsoft provides template policies that are simple to implement. Microsoft Incident Response recommends using the following set of policies to secure any environment.
Note: Any administrative account used to make a policy will be automatically excluded from it. These accounts should be removed from exclusions and replaced with a break glass account.
Figure 3: Conditional Access policy templates.
Conditional Access policy: Require multifactor authentication for all users
This policy is used to enhance the security of an organization’s data and applications by ensuring that only authorized users can access them. Octo Tempest is often seen performing SIM swapping and social engineering attacks, and MFA is now more of a speed bump than a roadblock to many threat actors. This step is essential.
Conditional Access policy: Require phishing-resistant multifactor authentication for administrators
This policy is used to safeguard access to portals and admin accounts. It is recommended to use a modern phishing-resistant MFA type which requires an interaction between the authentication method and the sign-in surface such as a passkey, Windows Hello for Business, or certificate-based authentication.
Note: Exclude the Entra ID Sync account. This account is essential for the synchronization process to function properly.
Conditional Access policy: Block legacy authentication
Implementing a Conditional Access policy to block legacy access prohibits users from signing in to Microsoft Entra ID using vulnerable protocols. Keep in mind that this could block valid connections to your environment. To avoid disruption, follow the steps in this guide.
Conditional Access policy: Require password change for high-risk users
By implementing a user risk Conditional Access policy, administrators can tailor access permissions or security protocols based on the assessed risk level of each user. Read more about user risk here.
Conditional Access policy: Require multifactor authentication for risky sign-ins
This policy can be used to block or challenge suspicious sign-ins and prevent unauthorized access to resources.
Segregate Cloud admin accounts
Administrative accounts should always be segregated to ensure proper isolation of privileged credentials. This is particularly true for cloud admin accounts to prevent the vertical movement of privileged identities between on-premises Active Directory and Microsoft Entra ID.
In addition to the enforced controls provided by Microsoft Entra ID for privileged accounts, organizations should establish process controls to restrict password resets and manipulation of MFA mechanisms to only authorized individuals.
During a tactical takeback, it’s essential to revoke permissions from old admin accounts, create entirely new accounts, and ensure that the new accounts are secured with modern MFA methods, such as device-bound passkeys managed in the Microsoft Authenticator app.
Review Azure resources
Octo Tempest has a history of manipulating resources such as Network Security Groups (NSGs), Azure Firewall, and granting themselves privileged roles within Azure Management Groups and Subscriptions using the ‘Elevate Access’ option in Microsoft Entra ID.
It’s imperative to conduct regular, thorough reviews of these services and carefully evaluate all changes in order to effectively remove Octo Tempest from a cloud environment.
Of particular importance are the Azure SQL Server local admin accounts and the corresponding firewall rules. These areas warrant special attention to mitigate any potential risks posed by Octo Tempest.
Intune Multi-Administrator Approval (MAA)
Intune access policies can be used to implement two-person control of key changes to prevent a compromised admin account from maliciously using Intune, causing additional damage to the environment while mitigation is in progress.
Access policies are supported by the following resources:
Apps – Applies to app deployments but doesn’t apply to app protection policies.
Scripts – Applies to deployment of scripts to devices that run Windows.
Octo Tempest has been known to leverage Intune to deploy ransomware at scale. This risk can be mitigated by enabling the MAA functionality.
Review of MFA registrations
Octo Tempest has a history of registering MFA devices on behalf of standard users and administrators, enabling account persistence. As a precautionary measure, review all MFA registrations during the suspected compromise window and prepare for the potential re-registration of affected users.
On-premises eviction
Additional containment efforts include the on-premises identity systems. There are tried and tested procedures for rebuilding and recovering on-premises Active Directory, post-ransomware, and these same techniques apply to an Octo Tempest intrusion.
Figure 5: On-premises recovery playbook.
Active Directory Forest Recovery
If a threat actor has taken administrative control of an Active Directory environment, complete compromise of all identities in Active Directory, and their credentials, should be assumed. In this scenario, on-premises recovery follows this Microsoft Learn article on full forest recovery:
Active Directory Forest Recovery – Procedures | Microsoft Learn
If there are good backups of at least one Domain Controller for each domain in the compromised forest, these should be restored. If this option is not available, there are other methods to isolate Domain Controllers for recovery. This can be accomplished with snapshots or by moving one good Domain Controller from each domain into an isolated network so that Active Directory sanitization can begin in a protective bubble.
Once this has been achieved, domain recovery can begin. The steps are identical for every domain in the forest:
Metadata cleanup of all other Domain Controllers
Seizing the Flexible Single Master Operations (FSMO) roles
Raising the available RID pool and invalidating the current RID pool
Resetting the Domain Controller computer account password
Resetting the password of KRBTGT twice
Resetting the built-in Administrator password twice
If Read-Only Domain Controllers existed, removing their instance of krbtgt_xxxxx
Resetting inter-domain trust account (ITA) passwords on each side of the parent/child trust
Removing external trusts
Performing an authoritative restore of the SYSVOL content
Cleaning up DNS records for the Domain Controllers removed during metadata cleanup
Resetting the Directory Services Restore Mode (DSRM) password
Removing Global Catalog and promoting to Global Catalog
When these actions have been completed, new Domain Controllers can be built in the isolated environment. Once replication is healthy, the original systems restored from backup can be demoted.
Octo Tempest is known for targeting Key Vaults and Secret Servers. Special attention will need to be paid to these secrets to determine if they were accessed and, if so, to sanitize the credentials contained within.
Tiering model
Restricting privilege escalation is critical to containing any attack, since it limits the scope and damage. Identity systems in control of privileged access, and the critical systems that identity administrators log on to, both fall under the scope of protection.
Microsoft’s official documentation guides customers towards implementing the enterprise access model (EAM) that supersedes the “legacy AD tier model.” The EAM serves as an all-encompassing means of addressing where and how privileged access is used. It includes controls for cloud administration, and even network policy controls to protect legacy systems that lack accounts entirely.
However, the EAM has several limitations. First, it can take months, or even years, for an organization’s architects to map out and implement. Secondly, it spans disjointed controls and operating systems. Lastly, not all of it is relevant to the immediate concern of mitigating Pass-the-Hash (PtH) as outlined here.
Our customers with on-premises systems are often looking to implement PtH mitigations yesterday. The AD tiering model is a good starting point for domain-joined services to satisfy this requirement. It is:
Easier to conceptualize
Backed by practical implementation guidance
Possible to roll out with partial automation
The EAM is still a valuable strategy to work towards in an organization’s journey to security; but this is a better goal for after the fires and smoldering embers have been extinguished.
Figure 6: Securing privileged access Enterprise access model – Privileged access | Microsoft Learn.
Segregated privileged accounts
Accounts should be created for each tier of access, and processes should be put in place to ensure that these remain correctly isolated within their tiers.
Control plane isolation
Identify all systems that fall under the control plane. The key rule to follow is that anything that accesses or can manipulate an asset must be treated at the same level as the assets that they manipulate. At this stage of eviction, the control plane is the key focus area. As an example, SCCM being used to patch Domain Controllers must be treated as a control plane asset.
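The rule above can be illustrated with a short sketch (illustrative Python, not a Microsoft tool; the asset names are hypothetical): given an "administers" graph, anything that can directly or transitively manage a control plane asset must itself be treated as control plane.

```python
# Illustrative sketch: propagate control-plane classification through an
# "administers" graph. Asset names are hypothetical examples.
from collections import deque

def control_plane_closure(admin_edges, control_plane):
    """Return every asset that must be treated as control plane (Tier 0).

    admin_edges maps each asset to the assets it can manage; anything that
    can (directly or transitively) manage a control-plane asset inherits
    that classification.
    """
    # Invert the graph: for each asset, record who can manage it.
    managers = {}
    for src, targets in admin_edges.items():
        for dst in targets:
            managers.setdefault(dst, set()).add(src)

    closure = set(control_plane)
    queue = deque(control_plane)
    while queue:
        asset = queue.popleft()
        for mgr in managers.get(asset, ()):
            if mgr not in closure:
                closure.add(mgr)
                queue.append(mgr)
    return closure

# Example: SCCM patches Domain Controllers, and the backup system backs up
# SCCM, so both must be protected at the same level as the DCs.
edges = {
    "SCCM": {"DomainControllers"},
    "BackupSystem": {"SCCM", "FileServer"},
    "HelpdeskTool": {"Workstations"},
}
tier0 = control_plane_closure(edges, {"DomainControllers"})
```

This is only a way to reason about scoping; the actual classification exercise is a manual architectural review.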
Backup accounts are particularly sensitive targets and must be managed appropriately.
Account disposition
The next phase of on-premises recovery and containment consists of a procedure known as account disposition in which all privileged or sensitive groups are emptied except for the account that is performing the actions. These groups include, but are not limited to:
Built-In Administrators
Domain Admins
Enterprise Admins
Schema Admins
Account Operators
Server Operators
DNS Admins
Group Policy Creator Owners
Any identity that gets removed from these groups goes through the following steps:
Password is reset twice
Account is disabled
Account is marked with Smart card is required for interactive logon
Access control lists (ACLs) are reset to the default values and the adminCount attribute is cleared
Once this is done, build new accounts as per the tiering model. Create new Tier 0 identities for only the few staff that require this level of access, with a complex password and marked with the Account is sensitive and cannot be delegated flag.
Access Control List (ACL) review
Microsoft Incident Response has found a plethora of overly permissive access control entries (ACEs) within critical areas of Active Directory in many environments. These ACEs may be at the root of the domain, on AdminSDHolder, or on Organizational Units that hold critical services. A review of all the ACEs in the access control lists (ACLs) of these sensitive areas within Active Directory is performed, and unnecessary permissions are removed.
Mass password reset
In the event of a domain compromise, a mass password reset will need to be conducted to ensure that Octo Tempest does not have access to valid credentials. The method by which a mass password reset occurs will vary based on the needs of the organization and acceptable administrative overhead. If we simply write a script that gets all user accounts (other than the person executing the code) and resets each password twice to a random value, no one will know their own password and will therefore open tickets with the helpdesk. This could lead to a very busy day for the members of the helpdesk (who also don’t know their own passwords).
Some examples of mass password reset methods, that we have seen in the field, include but are not limited to:
All at once: Get every single user (other than the newly created tier 0 accounts) and reset the password twice to a random password. Have enough helpdesk staff to be able to handle the administrative burden.
Phased reset by OU, geographic location, department, etc.: This method targets a community of individuals in a more phased approach, which is less of an initial hit to the helpdesk.
Service account password resets first, humans second: Some organizations start with the service account passwords first and then move to the human user accounts in the next phase.
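The "reset twice to a random password" step described above can be sketched as follows (illustrative Python; a real implementation would call the directory itself, for example via an AD cmdlet, and the account names here are hypothetical):

```python
# Illustrative sketch of a double reset to random passwords. The directory
# call is stubbed out; in production this would be an AD cmdlet or API call.
import secrets
import string

ALPHABET = string.ascii_letters + string.digits + string.punctuation

def random_password(length=32):
    """Generate a cryptographically random password."""
    return "".join(secrets.choice(ALPHABET) for _ in range(length))

def double_reset(accounts, exclusions, set_password):
    """Reset each account's password twice so the old secret (and the most
    recent password history entry an attacker may know) is invalidated."""
    for account in accounts:
        if account in exclusions:
            continue  # e.g. the operator's own newly created Tier 0 account
        set_password(account, random_password())
        set_password(account, random_password())

# Example with a stub in place of the real directory call:
log = []
double_reset(
    accounts=["alice", "bob", "t0-operator"],
    exclusions={"t0-operator"},
    set_password=lambda user, pwd: log.append(user),
)
```

The exclusion set is what distinguishes the "all at once" and phased methods above: a phased rollout simply feeds the function a different account batch each wave.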
Whichever method you choose for your mass password resets, ensure that you have an attestation mechanism in place to accurately confirm that the person calling the helpdesk to get their new password (or to enable Self-Service Password Reset) can prove they are who they say they are. An example of attestation would be a video conference call between the end user and the helpdesk, with the user showing some form of identification (for instance, a work badge) on screen.
It is recommended to also deploy and leverage Microsoft Entra ID Password Protection to prevent users from choosing weak or insecure passwords during this event.
Conclusion
The battle against Octo Tempest underscores the importance of a multi-faceted and proactive approach to cybersecurity. By understanding a threat actor’s tactics, techniques, and procedures, and by implementing the outlined incident response strategies, organizations can safeguard their identity infrastructure against this adversary and ensure all pervasive traces are eliminated. Incident response is a continuous process of learning, adapting, and securing environments against ever-evolving threats.
Microsoft Tech Community – Latest Blogs –Read More
Announcing the General Availability of Change Actor
Change Analysis
Identifying who made a change to your Azure resources and how the change was made just became easier! With Change Analysis, you can now see who initiated the change and with which client that change was made, for changes across all your tenants and subscriptions.
Audit, troubleshoot, and govern at scale
Changes should be available in under five minutes and are queryable for fourteen days. In addition, this support includes the ability to craft charts and pin results to Azure dashboards based on specific change queries.
What’s new: Actor Functionality
Who made the change
This can be either the AppId of a client or Azure service, or the email ID of the user
changedBy: elizabeth@contoso.com
With which client the change was made
clientType: portal
What operation was called
Azure resource provider operations | Microsoft Learn
Try it out
You can try it out by querying the “resourcechanges” or “resourcecontainerchanges” tables in Azure Resource Graph.
Sample Queries
Here is documentation on how to query resourcechanges and resourcecontainerchanges in Azure Resource Graph. Get resource changes – Azure Resource Graph | Microsoft Learn
The following queries all show changes made within the last 7 days.
Summarization of who and which client were used to make resource changes in the last 7 days ordered by the number of changes
resourcechanges
| extend changeTime = todatetime(properties.changeAttributes.timestamp),
targetResourceId = tostring(properties.targetResourceId),
changeType = tostring(properties.changeType), changedBy = tostring(properties.changeAttributes.changedBy),
changedByType = properties.changeAttributes.changedByType,
clientType = tostring(properties.changeAttributes.clientType)
| where changeTime > ago(7d)
| project changeType, changedBy, changedByType, clientType
| summarize count() by changedBy, changeType, clientType
| order by count_ desc
Summarization of who and what operations were used to make resource changes ordered by the number of changes
resourcechanges
| extend changeTime = todatetime(properties.changeAttributes.timestamp),
targetResourceId = tostring(properties.targetResourceId),
operation = tostring(properties.changeAttributes.operation),
changeType = tostring(properties.changeType), changedBy = tostring(properties.changeAttributes.changedBy),
changedByType = properties.changeAttributes.changedByType,
clientType = tostring(properties.changeAttributes.clientType)
| where changeTime > ago(7d)
| project changeType, changedBy, operation
| summarize count() by changedBy, operation
| order by count_ desc
List resource container (resource group, subscription, and management group) changes: who made the change, which client was used, and which operation was called, ordered by the time of the change
resourcecontainerchanges
| extend changeTime = todatetime(properties.changeAttributes.timestamp),
targetResourceId = tostring(properties.targetResourceId),
operation = tostring(properties.changeAttributes.operation),
changeType = tostring(properties.changeType), changedBy = tostring(properties.changeAttributes.changedBy),
changedByType = properties.changeAttributes.changedByType,
clientType = tostring(properties.changeAttributes.clientType)
| where changeTime > ago(7d)
| project changeTime, changeType, changedBy, changedByType, clientType, operation, targetResourceId
| order by changeTime desc
FAQ
How do I use Change Analysis?
Change Analysis can be used by querying the resourcechanges or resourcecontainerchanges tables in Azure Resource Graph, such as with Azure Resource Graph Explorer in the Azure Portal or through the Azure Resource Graph APIs.
More information can be found here: Get resource changes – Azure Resource Graph | Microsoft Learn.
What does unknown mean?
Unknown is displayed when the change happened on a client that is unrecognized. Clients are recognized based on the user agent and client application id associated with the original change request.
What does System mean?
System is displayed as a changedBy value when a background change occurred that wasn’t correlated with any direct user action.
What resources are included?
Changes to resources are surfaced in the resourcechanges table, and changes to resource containers (resource groups, subscriptions, and management groups) are surfaced in the resourcecontainerchanges table.
Questions and Feedback
If you have any other questions or input, you can reach out to the team at argchange@microsoft.com.
Share product feedback and ideas with us at Azure Governance · Community.
For more information about Change Analysis, see Get resource changes – Azure Resource Graph | Microsoft Learn.
Breaking the Speed Limit with WEKA: The World’s Fastest File System on top of Azure Hot Blob
Abstract
Azure Blob Storage is engineered to manage immense volumes of unstructured data efficiently. While utilizing Blob Storage for High-Performance Computing (HPC) tasks presents numerous benefits, including scalability and cost-effectiveness, it also introduces specific challenges. Key among these challenges are data access latency and the potential for performance decline in workloads, particularly noticeable in compute-intensive or real-time applications when accessing data stored in Blob. In this article, we will examine how WEKA’s patented filesystem, WekaFS™, and its parallel processing algorithms accelerate Blob storage performance.
About WEKA
The WEKA® Data Platform was purpose-built to seamlessly and sustainably deliver speed, simplicity, and scale that meets the needs of modern enterprises and research organizations without compromise. Its advanced, software-defined architecture supports next-generation workloads in virtually any location with cloud simplicity and on-premises performance.
At the heart of the WEKA® Data Platform is a modern, fully distributed parallel file system, WekaFS™, which can span thousands of NVMe SSDs across multiple hosts and seamlessly extend itself over compatible object storage.
WEKA in Azure
Many organizations are leveraging Microsoft Azure to run their High-Performance Computing (HPC) applications at scale. As cloud infrastructure becomes integral, users expect the same performance as on-premises deployments. WEKA delivers unbeatable performance for your most demanding applications running in Microsoft Azure, supporting high I/O, low latency, small files, and mixed workloads with zero tuning and automatic storage rebalancing.
WEKA software is deployed on a cluster of Microsoft Azure LSv3 VMs with local NVMe SSDs to create a high-performance storage layer. WEKA can also take advantage of Azure Blob Storage to scale your namespace at the lowest cost. You can automate your WEKA deployment through HashiCorp Terraform templates for fast, easy installation. Data stored in your WEKA environment is accessible to applications through multiple protocols, including NFS, SMB, POSIX, and S3.
Kent has written an excellent article on WEKA’s SMB performance for HPC Windows Grid Integration. For more, please see:
WEKA Architecture
WEKA is a fully distributed, parallel file system that was written entirely from the ground up to deliver the highest-performance file services, designed for NVMe SSD. Unlike traditional parallel file systems, which require extensive file system knowledge to deploy and manage, WEKA’s zero-tuning approach to storage allows for easy management from tens of terabytes to hundreds of petabytes in scale.
WEKA’s unique architecture in Microsoft Azure, as shown in Figure 1, provides parallel file access via POSIX, NFS, SMB and AKS. It provides a rich enterprise feature set, including but not limited to local and remote snapshots, snap clones, automatic data tiering, dynamic cluster rebalancing, backup, encryption, and quotas (advisory, soft, and hard).
Figure 1 – WekaFS combines NVMe flash with cloud object storage in a single global namespace
Key components to WEKA Data Platform in Azure include:
The infrastructure is deployed directly into a customer’s subscription of choice
WEKA software is deployed across 6 or more Azure LSv3 VMs. The LSv3 VMs are clustered to act as one single device.
The WekaFS™ namespace is extended onto Azure Hot Blob
WekaFS Scale Up and Scale down functions are driven by Azure Logic Apps and Function Apps
All client secrets are kept in Azure Key Vault
Deployment is fully automated using Terraform WEKA Templates
WEKA and Data Tiering
WEKA’s tiering capabilities in Azure integrate seamlessly with Azure Blob Storage. This integration leverages WEKA’s distributed parallel file system, WekaFS™, to extend from local NVMe SSDs on LSv3 VMs (performance tier) to lower-cost Azure Blob Storage (capacity tier). WEKA writes incoming data in 4K blocks (commonly referred to as chunks) aligned to the NVMe SSD block size, packages them into 1MB extents, and distributes the writes across multiple storage nodes in the cluster (in Azure, a storage node is represented as an LSv3 VM). WEKA then packages the 1MB extents into 64MB objects. Each object can contain data blocks from multiple files. Multiple files smaller than 1 MB are consolidated into a single 64 MB object. For larger files, their parts are distributed across multiple objects.
Figure 2 – WekaFS Tiering to HOT BLOB
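As a rough sketch of that layout math (illustrative Python using the block, extent, and object sizes quoted above; the real WekaFS placement logic is internal and more sophisticated):

```python
# Back-of-the-envelope layout math using the sizes quoted above:
# 4 KiB blocks packed into 1 MiB extents, packed into 64 MiB objects.
KIB, MIB = 1024, 1024 * 1024
BLOCK, EXTENT, OBJ = 4 * KIB, 1 * MIB, 64 * MIB

def extents_for(file_size):
    """Number of 1 MiB extents a file occupies (ceiling division)."""
    return -(-file_size // EXTENT)

def objects_spanned(file_size):
    """Upper bound on the 64 MiB objects a single file's extents can span.

    A file smaller than one extent still lands in (part of) one object,
    consolidated with other small files; a larger file has its extents
    distributed across multiple objects.
    """
    return -(-file_size // OBJ)

# A 500 MB file (the size used per fio job later in this article) would
# occupy 500 extents spread over roughly 8 objects.
size = 500 * MIB
```

This only illustrates the packing ratios; which backend node owns which extent is decided by WEKA’s internal distribution.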
How do you retrieve data that is cold? What are the options?
Tiered data is always accessible and is treated as if it was part of the primary file system. Moreover, while data may be tiered, the metadata is always maintained on the SSDs. This allows traversing files and directories without impacting performance.
Consider a scenario where an HPC job has run and outputs are written to WekaFS. In time, the output file data will be tiered to Azure Blob (capacity tier) to free up the WekaFS flash (performance tier) to run new jobs. At some later date the data is required again for processing. What are the options?
Cache Tier: When file data is tiered to Blob, the file metadata always remains locally on the flash tier, so all files are available to the applications. WEKA maintains the cache tier (stored in NVMe SSD) within its distributed file system architecture. When file data is rehydrated from Azure Blob Storage, WEKA stores the data in “read cache” for improved subsequent read performance.
Pre-Fetch: WEKA provides a pre-fetch API to instruct the WEKA system to fetch all of the data back from Blob (capacity tier) to NVMe (performance tier). For further details please refer to this link: https://docs.Weka.io/fs/tiering/pre-fetching-from-object-store
Cold Read: Read the data directly from Blob. The client still accesses the data through the WEKA mount, but the data is not cached by WekaFS and is sent directly to the client.
It is option #3 that had me intrigued. WEKA claims to parallelize reads, so would it be possible to read directly from Blob at a “WEKA Accelerated Rate”?
Testing Methodology
The testing infrastructure consisted of:
6 x Standard_D64_v5 Azure VMs used for clients
20 x L8s_v3 VM instances used for the NVMe WEKA layer
Hot Zone Redundant Storage (ZRS) enabled Blob
For the test, a 2 TB file system was used on the NVME layer (for metadata) and 20 TB was configured on the HOT BLOB layer.
Figure 3 – WekaFS testing Design.
A 20 TB Filesystem was created on WEKA:
Figure 4 – Sizing the WekaFS
We chose an object store direct mount (note the obs_direct option):
pdsh mount -t wekafs -o net=eth1,obs_direct [weka backend IP]/archive /mnt/archive
To simulate load, we used fio to write random data to the object store with a 1M block size.
pdsh 'fio --name=$HOSTNAME-fio --directory=/mnt/archive --numjobs=200 --size=500M --direct=1 --verify=0 --iodepth=1 --rw=write --bs=1M'
Once the write workload completes, notice that only 2.46 GB of data resides on the SSD tier (this is all metadata), and 631.6 GB resides on BLOB storage.
Figure 5 – SSD Tier used for Metadata only
Double-checking the file system using the weka fs command, the used SSD capacity remains at 2.46 GB, which is the size of our metadata.
Figure 6 – SSD Tier used for Metadata only.
Now that all the data resides on Blob, let’s measure how quickly it can be accessed.
We’ll benchmark our performance with FIO. We’ll run load testing across all six of our clients. Each client will be reading in 1MB block sizes.
pdsh 'fio --name=$HOSTNAME-fio --directory=/mnt/archive --numjobs=200 --size=500M --direct=1 --verify=0 --iodepth=1 --rw=read --bs=1M --time_based --runtime=90'
The command is configured to run for 90 seconds so we can capture the sustained bandwidth from the hot blob tier of the WEKA data platform.
From the screenshot below (Figure 7), observe that we are reading data from Azure Blob at speeds up to 20 GB/s.
Figure 7 – 19.63 GB/s 100% reads coming directly from BLOB
How does WEKA do it?
Simple answer: even load distribution across all nodes in the cluster. Each WEKA compute process establishes 64 threads to run GET operations from the Blob container. Each WEKA backend is responsible for an equal portion of the namespace, and each performs the appropriate API operations against Azure Blob.
Thus, multiple nodes working together, each processing 64 threads, adds up to a term I will call the “WEKA Accelerated Hot Blob Tier”.
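That distribution can be sketched as follows (illustrative Python; the object keys and fetch function are hypothetical stand-ins for WEKA’s internal GET workers, not WEKA code):

```python
# Illustrative sketch: shard object keys evenly across backend nodes, each
# running up to 64 concurrent GET workers. fetch() is a hypothetical stand-in.
from concurrent.futures import ThreadPoolExecutor

THREADS_PER_BACKEND = 64

def shard(keys, num_backends):
    """Assign each object key to a backend round-robin, so every node
    services an equal portion of the namespace."""
    shards = [[] for _ in range(num_backends)]
    for i, key in enumerate(keys):
        shards[i % num_backends].append(key)
    return shards

def drain_backend(keys, fetch):
    """One backend: run GETs for its shard with up to 64 threads."""
    with ThreadPoolExecutor(max_workers=THREADS_PER_BACKEND) as pool:
        return list(pool.map(fetch, keys))

# Example: 1,000 objects over 20 backends -> 50 keys per node, each node
# then issuing its GETs with 64-way concurrency.
keys = [f"obj-{i:04d}" for i in range(1000)]
shards = shard(keys, 20)
```

The point of the sketch is the aggregate concurrency: 20 backends x 64 threads gives up to 1,280 in-flight GETs, which is how the Blob tier sustains the bandwidth shown below.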
Looking at the stats on the command line while the test was running (Figure 8), you can observe that the work of servicing the tiered data is fully balanced across all the WEKA nodes in the cluster. This balance helps WEKA achieve its optimal performance from Azure Blob.
Figure 8 – Balanced backend nodes with 64 threads each for GET operations from BLOB
What real world problems can we solve with this feature?
1 – When one needs to ingest large volumes of data into the WEKA Azure platform at once. If the end user does not know which files will be “hot”, the data can all reside directly on Blob storage so that it doesn’t force any currently active data out of the flash tier.
2 – Running workloads that need to sequentially read large volumes of data infrequently. For example, an HPC job where the data is only used once a month or once a quarter. If each compute node reads a different subset of the data, there is no value to be gained from rehydrating the data into the flash tier / displacing data that is used repeatedly.
3 – Running read-intensive workloads where WEKA-accelerated Blob cold-read performance is satisfactory. Clients can mount the file system in obs_direct mode.
Conclusion
WEKA in Azure delivers exceptional performance for data-intensive workloads by leveraging parallelism, scalability, flash optimization, data tiering, and caching features. This enables organizations to achieve high throughput, low latency, and optimal resource utilization for their most demanding applications and use cases.
You can also add low-latency, high-throughput reads directly from Hot Blob Storage as another use case. To quote Kent one last time:
…..As the digital landscape continues to evolve, embracing the WEKA Data Platform is not just a smart choice; it’s a strategic advantage that empowers you to harness the full potential of your HPC Grid.
Reference:
Defender for Server on RDP Session Host
Hi all,
I have a simple question that arose while working on a security concept. Is it enough to secure an RDP Session Host with Microsoft Defender for Servers Plan 2? Or do I have to secure the individual RDP sessions as well?
The question behind that is: is Defender for Servers able to secure the interactions and all the app data (e.g. mails) for all users?
Thanks!