Category Archives: Microsoft
Is there a place to post .NET jobs?
My company is looking to hire a strong developer, and I was wondering if there is a place within this community where I could post the job.
Subform updating table, but not when open form
I have an employee form connected to the employee table and a contacts form connected to the Contacts table. When I drop the Contacts form into the Employee form as a subform and type a few notes, it updates the table and links the IDs together. However, if I create a button just to open the Contacts form to modify that employee’s record, it updates the table, but the linked ID is now missing.
Octo Tempest: Hybrid identity compromise recovery
Have you ever gone toe to toe with the threat actor known as Octo Tempest? This increasingly aggressive threat actor group has evolved their targeting, outcomes, and monetization over the past two years to become a dominant force in the world of cybercrime. But what exactly defines this entity, and why should we proceed with caution when encountering them?
Octo Tempest (formerly DEV-0875) is a group known for employing social engineering, intimidation, and other human-centric tactics to gain initial access into an environment, granting themselves privilege to cloud and on-premises resources before exfiltrating data and unleashing ransomware across an environment. Their ability to penetrate and move around identity systems with relative ease encapsulates the essence of Octo Tempest and is the purpose of this blog post. Their activities have been closely associated with:
SIM swapping scams: Seize control of a victim’s phone number to circumvent multifactor authentication.
Identity compromise: Initiate password spray attacks or phishing campaigns to gain initial access and create federated backdoors to ensure persistence.
Data breaches: Infiltrate the networks of organizations to exfiltrate confidential data.
Ransomware attacks: Encrypt a victim’s data and demand primary, secondary or tertiary ransom fees to refrain from disclosing any information or release the decryption key to enable recovery.
Figure 1: The evolution of Octo Tempest’s targeting, actions, outcomes, and monetization.
Some key considerations to keep in mind for Octo Tempest are:
Language fluency: Octo Tempest purportedly operates predominantly in native English, heightening the risk for unsuspecting targets.
Dynamic: They have been known to pivot quickly and change their tactics depending on the target organization’s response.
Broad attack scope: They target diverse businesses ranging from telecommunications to technology enterprises.
Collaborative ventures: Octo Tempest may forge alliances with other cybercrime cohorts, such as ransomware syndicates, amplifying the impact of their assaults.
As our adversaries adapt their tactics to match the changing defense landscape, it’s essential for us to continually define and refine our response strategies. This requires us to promptly utilize forensic evidence and efficiently establish administrative control over our identity and access management services. In pursuit of this goal, Microsoft Incident Response has developed a response playbook that has proven effective in real-world situations. Below, we present this playbook to empower you to tackle the challenges posed by Octo Tempest, ensuring the smooth restoration of critical business services such as Microsoft Entra ID and Active Directory Domain Services.
Cloud eviction
We begin with the cloud eviction process. If any actor takes control of the identity plane in Microsoft Entra ID, a set of steps should be followed to hit reset and take back administrative control of the environment. Here are some tactical measures employed by the Microsoft Incident Response team to ensure the security of the cloud identity plane:
Figure 2: Cloud response playbook.
Break glass accounts
Emergency scenarios require emergency access. For this purpose, one or two administrative accounts should be established. These accounts should be exempted from Conditional Access policies to ensure access in critical situations, monitored to verify their non-use, and passwords should be securely stored offline whenever feasible.
More information on emergency access accounts can be found here: Manage emergency access admin accounts – Microsoft Entra ID | Microsoft Learn.
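As a minimal sketch (assuming the Microsoft Graph PowerShell SDK with appropriate consent; the account name, placeholder password, and role assignment approach below are illustrative, not prescriptive), an emergency access account can be created and added to the Global Administrator role like this:

# Hedged sketch: create a cloud-only emergency access ("break glass") account.
Connect-MgGraph -Scopes "User.ReadWrite.All","RoleManagement.ReadWrite.Directory"

$user = New-MgUser -DisplayName "Emergency Access 01" `
    -UserPrincipalName "breakglass01@contoso.onmicrosoft.com" `
    -MailNickname "breakglass01" `
    -AccountEnabled `
    -PasswordProfile @{ Password = "<long randomly generated password>"; ForceChangePasswordNextSignIn = $false }

# Global Administrator is activated in every tenant, so it can be looked up by display name
$role = Get-MgDirectoryRole -Filter "displayName eq 'Global Administrator'"
New-MgDirectoryRoleMemberByRef -DirectoryRoleId $role.Id `
    -BodyParameter @{ "@odata.id" = "https://graph.microsoft.com/v1.0/directoryObjects/$($user.Id)" }

# Remember to exclude this account from Conditional Access policies and to alert on any use.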
Federation
Octo Tempest leverages cloud-born federation features to take control of a victim’s environment, allowing for the impersonation of any user inside the environment, even if multifactor authentication (MFA) is enabled. While this is a damaging technique, it is relatively simple to mitigate by logging in via the Microsoft Graph PowerShell module and setting the domain back from Federated to Managed. Doing so breaks the relationship and prevents the threat actor from minting further tokens.
Connect to your Azure/Office 365 tenant by running the following PowerShell cmdlet and entering your Global Admin Credentials:
Connect-MgGraph -Scopes "Domain.ReadWrite.All"
Change the domain authentication from Federated to Managed by running this cmdlet:
Update-MgDomain -DomainId "test.contoso.com" -BodyParameter @{AuthenticationType="Managed"}
Service principals
Service principals have their own identities, credentials, roles, and permissions, and can be used to access resources or perform actions on behalf of the applications or services they represent. These have been used by Octo Tempest for persistence in compromised environments. Microsoft Incident Response recommends reviewing all service principals and removing or reducing permissions as needed.
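As a hedged starting point for that review (assuming the Microsoft Graph PowerShell SDK with Application.Read.All and Directory.Read.All consent; the output columns are illustrative), each service principal can be listed together with the credentials and app role assignments that warrant scrutiny:

# Hedged sketch: inventory service principals so stale credentials and over-permissioned
# identities can be reviewed and reduced.
Connect-MgGraph -Scopes "Application.Read.All","Directory.Read.All"

Get-MgServicePrincipal -All | ForEach-Object {
    [pscustomobject]@{
        DisplayName   = $_.DisplayName
        AppId         = $_.AppId
        ClientSecrets = $_.PasswordCredentials.Count   # secrets that may have been added for persistence
        Certificates  = $_.KeyCredentials.Count
        AppRoleGrants = (Get-MgServicePrincipalAppRoleAssignment -ServicePrincipalId $_.Id).Count
    }
} | Sort-Object ClientSecrets -Descending | Format-Table -AutoSize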
Conditional Access policies
These policies govern how an application or identity can access Microsoft Entra ID or your organization resources and configuring these appropriately ensures that only authorized users are accessing company data and services. Microsoft provides template policies that are simple to implement. Microsoft Incident Response recommends using the following set of policies to secure any environment.
Note: Any administrative account used to make a policy will be automatically excluded from it. These accounts should be removed from exclusions and replaced with a break glass account.
Figure 3: Conditional Access policy templates.
Conditional Access policy: Require multifactor authentication for all users
This policy is used to enhance the security of an organization’s data and applications by ensuring that only authorized users can access them. Octo Tempest is often seen performing SIM swapping and social engineering attacks, and MFA is now more of a speed bump than a roadblock to many threat actors. This step is essential.
Conditional Access policy: Require phishing-resistant multifactor authentication for administrators
This policy is used to safeguard access to portals and admin accounts. It is recommended to use a modern phishing-resistant MFA type which requires an interaction between the authentication method and the sign-in surface such as a passkey, Windows Hello for Business, or certificate-based authentication.
Note: Exclude the Entra ID Sync account. This account is essential for the synchronization process to function properly.
Conditional Access policy: Block legacy authentication
Implementing a Conditional Access policy to block legacy access prohibits users from signing in to Microsoft Entra ID using vulnerable protocols. Keep in mind that this could block valid connections to your environment. To avoid disruption, follow the steps in this guide.
Conditional Access policy: Require password change for high-risk users
By implementing a user risk Conditional Access policy, administrators can tailor access permissions or security protocols based on the assessed risk level of each user. Read more about user risk here.
Conditional Access policy: Require multifactor authentication for risky sign-ins
This policy can be used to block or challenge suspicious sign-ins and prevent unauthorized access to resources.
Segregate Cloud admin accounts
Administrative accounts should always be segregated to ensure proper isolation of privileged credentials. This is particularly true for cloud admin accounts to prevent the vertical movement of privileged identities between on-premises Active Directory and Microsoft Entra ID.
In addition to the enforced controls provided by Microsoft Entra ID for privileged accounts, organizations should establish process controls to restrict password resets and manipulation of MFA mechanisms to only authorized individuals.
During a tactical takeback, it’s essential to revoke permissions from old admin accounts, create entirely new accounts, and ensure that the new accounts are secured with modern MFA methods, such as device-bound passkeys managed in the Microsoft Authenticator app.
Review Azure resources
Octo Tempest has a history of manipulating resources such as Network Security Groups (NSGs), Azure Firewall, and granting themselves privileged roles within Azure Management Groups and Subscriptions using the ‘Elevate Access’ option in Microsoft Entra ID.
It’s imperative to conduct regular and thorough reviews of these services, carefully evaluating all changes, to effectively remove Octo Tempest from a cloud environment.
Of particular importance are the Azure SQL Server local admin accounts and the corresponding firewall rules. These areas warrant special attention to mitigate any potential risks posed by Octo Tempest.
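As a hedged example (assuming the Az PowerShell module and sufficient read permissions), the role assignments created at the tenant root scope by the Elevate Access option can be surfaced like this:

# Hedged sketch: list role assignments at the tenant root scope ("/"), which is where the
# Elevate Access option in Microsoft Entra ID places the User Access Administrator grant.
Connect-AzAccount

Get-AzRoleAssignment |
    Where-Object { $_.Scope -eq '/' } |
    Select-Object DisplayName, SignInName, RoleDefinitionName, Scope |
    Format-Table -AutoSize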
Intune Multi-Administrator Approval (MAA)
Intune access policies can be used to implement two-person control of key changes to prevent a compromised admin account from maliciously using Intune, causing additional damage to the environment while mitigation is in progress.
Access policies are supported by the following resources:
Apps – Applies to app deployments but doesn’t apply to app protection policies.
Scripts – Applies to deployment of scripts to devices that run Windows.
Octo Tempest has been known to leverage Intune to deploy ransomware at scale. This risk can be mitigated by enabling the MAA functionality.
Review of MFA registrations
Octo Tempest has a history of registering MFA devices on behalf of standard users and administrators, enabling account persistence. As a precautionary measure, review all MFA registrations during the suspected compromise window and prepare for the potential re-registration of affected users.
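A minimal sketch of that review (assuming the Microsoft Graph PowerShell SDK with UserAuthenticationMethod.Read.All and AuditLog.Read.All consent) might look like this:

# Hedged sketch: dump the registered authentication methods per user so that anything
# registered during the suspected compromise window can be spotted and re-registered.
Connect-MgGraph -Scopes "UserAuthenticationMethod.Read.All","AuditLog.Read.All"

Get-MgReportAuthenticationMethodUserRegistrationDetail -All |
    Select-Object UserPrincipalName, IsMfaRegistered, MethodsRegistered |
    Sort-Object UserPrincipalName |
    Format-Table -AutoSize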
On-premises eviction
Additional containment efforts include the on-premises identity systems. There are tried and tested procedures for rebuilding and recovering on-premises Active Directory, post-ransomware, and these same techniques apply to an Octo Tempest intrusion.
Figure 5: On-premises recovery playbook.
Active Directory Forest Recovery
If a threat actor has taken administrative control of an Active Directory environment, complete compromise of all identities in Active Directory, and their credentials, should be assumed. In this scenario, on-premises recovery follows this Microsoft Learn article on full forest recovery:
Active Directory Forest Recovery – Procedures | Microsoft Learn
If there are good backups of at least one Domain Controller for each domain in the compromised forest, these should be restored. If this option is not available, there are other methods to isolate Domain Controllers for recovery. This can be accomplished with snapshots or by moving one good Domain Controller from each domain into an isolated network so that Active Directory sanitization can begin in a protective bubble.
Once this has been achieved, domain recovery can begin. The steps are identical for every domain in the forest (a sketch of the FSMO seizure and credential resets follows the list):
Metadata cleanup of all other Domain Controllers
Seizing the Flexible Single Master Operations (FSMO) roles
Raising the RID Pool and invalidating the RID Pool
Resetting the Domain Controller computer account password
Resetting the password of KRBTGT twice
Resetting the built-in Administrator password twice
If Read-Only Domain Controllers existed, removing their instance of krbtgt_xxxxx
Resetting inter-domain trust account (ITA) passwords on each side of the parent/child trust
Removing external trusts
Performing an authoritative restore of the SYSVOL content
Cleaning up DNS records for metadata cleaned up Domain Controllers
Resetting the Directory Services Restore Mode (DSRM) password
Removing Global Catalog and promoting to Global Catalog
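A minimal sketch of a few of the steps above, the FSMO seizure and the double credential resets (assuming the ActiveDirectory PowerShell module on the recovery Domain Controller; the server name and password helper are illustrative):

# Hedged sketch: FSMO seizure and double password resets during isolated forest recovery.
Import-Module ActiveDirectory

# Seize all five FSMO roles onto the recovery Domain Controller (name is illustrative)
Move-ADDirectoryServerOperationMasterRole -Identity 'RECOVERY-DC01' -Force `
    -OperationMasterRole SchemaMaster, DomainNamingMaster, PDCEmulator, RIDMaster, InfrastructureMaster

function New-RandomPassword {
    # Simple 32-character random password; replace with your organization's generator
    -join ((33..126) | Get-Random -Count 32 | ForEach-Object { [char]$_ })
}

# Reset krbtgt and the built-in Administrator twice each to invalidate existing Kerberos
# tickets and any stolen credential material
foreach ($account in 'krbtgt', 'Administrator') {
    for ($i = 0; $i -lt 2; $i++) {
        Set-ADAccountPassword -Identity $account -Reset `
            -NewPassword (ConvertTo-SecureString (New-RandomPassword) -AsPlainText -Force)
    }
}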
When these actions have been completed, new Domain Controllers can be built in the isolated environment. Once replication is healthy, the original systems restored from backup can be demoted.
Octo Tempest is known for targeting Key Vaults and Secret Servers. Special attention will need to be paid to these secrets to determine if they were accessed and, if so, to sanitize the credentials contained within.
Tiering model
Restricting privilege escalation is critical to containing any attack since it limits the scope and damage. Identity systems in control of privileged access, and critical systems that identity administrators log onto, are both within the scope of protection.
Microsoft’s official documentation guides customers towards implementing the enterprise access model (EAM) that supersedes the “legacy AD tier model.” The EAM serves as an all-encompassing means of addressing where and how privileged access is used. It includes controls for cloud administration, and even network policy controls to protect legacy systems that lack accounts entirely.
However, the EAM has several limitations. First, it can take months, or even years, for an organization’s architects to map out and implement. Secondly, it spans disjointed controls and operating systems. Lastly, not all of it is relevant to the immediate concern of mitigating Pass-the-Hash (PtH) as outlined here.
Our customers with on-premises systems are often looking to implement PtH mitigations yesterday. The AD tiering model is a good starting point for domain-joined services to satisfy this requirement. It is:
Easier to conceptualize
Backed by practical implementation guidance
Possible to roll out with partial automation
The EAM is still a valuable strategy to work towards in an organization’s journey to security; but this is a better goal for after the fires and smoldering embers have been extinguished.
Figure 6: Securing privileged access Enterprise access model – Privileged access | Microsoft Learn.
Segregated privileged accounts
Accounts should be created for each tier of access, and processes should be put in place to ensure that these remain correctly isolated within their tiers.
Control plane isolation
Identify all systems that fall under the control plane. The key rule to follow is that anything that accesses or can manipulate an asset must be treated at the same level as the asset it manipulates. At this stage of eviction, the control plane is the key focus area. As an example, SCCM being used to patch Domain Controllers must be treated as a control plane asset.
Backup accounts are particularly sensitive targets and must be managed appropriately.
Account disposition
The next phase of on-premises recovery and containment consists of a procedure known as account disposition in which all privileged or sensitive groups are emptied except for the account that is performing the actions. These groups include, but are not limited to:
Built-In Administrators
Domain Admins
Enterprise Admins
Schema Admins
Account Operators
Server Operators
DNS Admins
Group Policy Creator Owners
Any identity that gets removed from these groups goes through the following steps (a sketch follows the list):
Password is reset twice
Account is disabled
Account is marked with Smart card is required for interactive logon
Access control lists (ACLs) are reset to the default values and the adminCount attribute is cleared
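A minimal sketch of these disposition steps (assuming the ActiveDirectory module; $PrivilegedUsers is a hypothetical collection of the accounts just removed from the groups, and the ACL reset to default values is typically handled separately, for example with dsacls):

# Hedged sketch: disposition of identities removed from privileged groups.
Import-Module ActiveDirectory

function New-RandomPassword {
    -join ((33..126) | Get-Random -Count 32 | ForEach-Object { [char]$_ })
}

foreach ($sam in $PrivilegedUsers) {   # $PrivilegedUsers is a hypothetical list of sAMAccountNames
    # Reset the password twice so any stolen password hashes are invalidated
    for ($i = 0; $i -lt 2; $i++) {
        Set-ADAccountPassword -Identity $sam -Reset `
            -NewPassword (ConvertTo-SecureString (New-RandomPassword) -AsPlainText -Force)
    }
    Disable-ADAccount -Identity $sam
    # "Smart card is required for interactive logon" also scrambles the account's NT hash
    Set-ADUser -Identity $sam -SmartcardLogonRequired $true
    # Clear adminCount so the object is no longer flagged as SDProp-protected
    Set-ADUser -Identity $sam -Clear adminCount
}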
Once this is done, build new accounts as per the tiering model. Create new Tier 0 identities for only the few staff that require this level of access, with a complex password and marked with the Account is sensitive and cannot be delegated flag.
Access Control List (ACL) review
Microsoft Incident Response has found a plethora of overly permissive access control entries (ACEs) within critical areas of Active Directory in many environments. These ACEs may be at the root of the domain, on AdminSDHolder, or on Organizational Units that hold critical services. A review of all the ACEs in the access control lists (ACLs) of these sensitive areas within Active Directory is performed, and unnecessary permissions are removed.
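A hedged sketch of that review (assuming the ActiveDirectory module, which exposes the AD: drive) pulls the non-inherited, write-level ACEs from the domain root and AdminSDHolder for inspection:

# Hedged sketch: surface explicit (non-inherited) ACEs with high-impact rights on sensitive
# containers so unexpected entries can be investigated and removed.
Import-Module ActiveDirectory
$domainDN = (Get-ADDomain).DistinguishedName

foreach ($dn in @($domainDN, "CN=AdminSDHolder,CN=System,$domainDN")) {
    "== $dn =="
    (Get-Acl -Path "AD:$dn").Access |
        Where-Object { -not $_.IsInherited -and
                       $_.ActiveDirectoryRights -match 'GenericAll|GenericWrite|WriteDacl|WriteOwner|WriteProperty' } |
        Select-Object IdentityReference, ActiveDirectoryRights, AccessControlType |
        Format-Table -AutoSize
}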
Mass password reset
In the event of a domain compromise, a mass password reset will need to be conducted to ensure that Octo Tempest does not have access to valid credentials. The method in which a mass password reset occurs will vary based on the needs of the organization and acceptable administrative overhead. If we simply write a script that gets all user accounts (other than the person executing the code) and resets the password twice to a random password, no one will know their own password and, therefore, will open tickets with the helpdesk. This could lead to a very busy day for those members of the helpdesk (who also don’t know their own password).
Some examples of mass password reset methods, that we have seen in the field, include but are not limited to:
All at once: Get every single user (other than the newly created tier 0 accounts) and reset the password twice to a random password. Have enough helpdesk staff to be able to handle the administrative burden (a sketch of this option follows the list).
Phased reset by OU, geographic location, department, etc.: This method targets a community of individuals in a more phased approach, which is less of an initial hit to the helpdesk.
Service account password resets first, humans second: Some organizations start with the service account passwords first and then move to the human user accounts in the next phase.
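As an illustration of the “all at once” option (assuming the ActiveDirectory module; the Tier0-NewAdmins group name and password generation are hypothetical):

# Hedged sketch: reset every enabled user's password twice, excluding the newly created
# Tier 0 accounts. Expect a very busy helpdesk afterwards.
Import-Module ActiveDirectory

function New-RandomPassword {
    -join ((33..126) | Get-Random -Count 32 | ForEach-Object { [char]$_ })
}

$exclude = Get-ADGroupMember -Identity 'Tier0-NewAdmins' -Recursive |
    Select-Object -ExpandProperty SamAccountName

$users = Get-ADUser -Filter 'Enabled -eq $true' |
    Where-Object { $exclude -notcontains $_.SamAccountName }

foreach ($user in $users) {
    for ($i = 0; $i -lt 2; $i++) {
        Set-ADAccountPassword -Identity $user -Reset `
            -NewPassword (ConvertTo-SecureString (New-RandomPassword) -AsPlainText -Force)
    }
}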
Whichever method you choose for your mass password resets, ensure that you have an attestation mechanism in place to accurately confirm that the person calling the helpdesk to get their new password (or to enable Self-Service Password Reset) can prove they are who they say they are. An example of attestation would be a video conference call between the end user and the helpdesk, with the user showing some form of identification (for instance, a work badge) on screen.
It is recommended to also deploy and leverage Microsoft Entra ID Password Protection to prevent users from choosing weak or insecure passwords during this event.
Conclusion
The battle against Octo Tempest underscores the importance of a multi-faceted and proactive approach to cybersecurity. By understanding a threat actor’s tactics, techniques, and procedures, and by implementing the outlined incident response strategies, organizations can safeguard their identity infrastructure against this adversary and ensure all pervasive traces are eliminated. Incident response is a continuous process of learning, adapting, and securing environments against ever-evolving threats.
Announcing the General Availability of Change Actor
Change Analysis
Identifying who made a change to your Azure resources and how the change was made just became easier! With Change Analysis, you can now see who initiated the change and with which client that change was made, for changes across all your tenants and subscriptions.
Audit, troubleshoot, and govern at scale
Changes should be available in under five minutes and are queryable for fourteen days. In addition, this support includes the ability to craft charts and pin results to Azure dashboards based on specific change queries.
What’s new: Actor Functionality
Who made the change: either an ‘AppId’ (a client or Azure service) or the email ID of the user, for example changedBy: elizabeth@contoso.com
With which client the change was made, for example clientType: portal
What operation was called, as listed in Azure resource provider operations | Microsoft Learn
Try it out
You can try it out by querying the “resourcechanges” or “resourcecontainerchanges” tables in Azure Resource Graph.
Sample Queries
Here is documentation on how to query resourcechanges and resourcecontainerchanges in Azure Resource Graph. Get resource changes – Azure Resource Graph | Microsoft Learn
The following queries all show changes made within the last 7 days.
Summarization of who and which client were used to make resource changes in the last 7 days ordered by the number of changes
resourcechanges
| extend changeTime = todatetime(properties.changeAttributes.timestamp),
targetResourceId = tostring(properties.targetResourceId),
changeType = tostring(properties.changeType), changedBy = tostring(properties.changeAttributes.changedBy),
changedByType = properties.changeAttributes.changedByType,
clientType = tostring(properties.changeAttributes.clientType)
| where changeTime > ago(7d)
| project changeType, changedBy, changedByType, clientType
| summarize count() by changedBy, changeType, clientType
| order by count_ desc
Summarization of who and what operations were used to make resource changes ordered by the number of changes
resourcechanges
| extend changeTime = todatetime(properties.changeAttributes.timestamp),
targetResourceId = tostring(properties.targetResourceId),
operation = tostring(properties.changeAttributes.operation),
changeType = tostring(properties.changeType), changedBy = tostring(properties.changeAttributes.changedBy),
changedByType = properties.changeAttributes.changedByType,
clientType = tostring(properties.changeAttributes.clientType)
| where changeTime > ago(7d)
| project changeType, changedBy, operation
| summarize count() by changedBy, operation
| order by count_ desc
List resource container (resource group, subscription, and management group) changes: who made the change, what client was used, and which operation was called, ordered by the time of the change
resourcecontainerchanges
| extend changeTime = todatetime(properties.changeAttributes.timestamp),
targetResourceId = tostring(properties.targetResourceId),
operation=tostring(properties.changeAttributes.operation),
changeType = tostring(properties.changeType), changedBy = tostring(properties.changeAttributes.changedBy),
changedByType = properties.changeAttributes.changedByType,
clientType = tostring(properties.changeAttributes.clientType)
| where changeTime > ago(7d)
| project changeTime, changeType, changedBy, changedByType, clientType, operation, targetResourceId
| order by changeTime desc
FAQ
How do I use Change Analysis?
Change Analysis can be used by querying the resourcechanges or resourcecontainerchanges tables in Azure Resource Graph, such as with Azure Resource Graph Explorer in the Azure Portal or through the Azure Resource Graph APIs.
More information can be found here: Get resource changes – Azure Resource Graph | Microsoft Learn.
What does unknown mean?
Unknown is displayed when the change happened on a client that is unrecognized. Clients are recognized based on the user agent and client application id associated with the original change request.
What does System mean?
System is displayed as a changedBy value when a background change occurred that wasn’t correlated with any direct user action.
What resources are included?
Changes to resources appear in the resourcechanges table, and changes to resource containers (resource groups, subscriptions, and management groups) appear in the resourcecontainerchanges table in Azure Resource Graph.
Questions and Feedback
If you have any other questions or input, you can reach out to the team at argchange@microsoft.com
Share Product feedback and ideas with us at Azure Governance · Community
For more information about Change Analysis Get resource changes – Azure Resource Graph | Microsoft Learn
Breaking the Speed Limit with WEKA: The World’s Fastest File System on top of Azure Hot Blob
Abstract
Azure Blob Storage is engineered to manage immense volumes of unstructured data efficiently. While utilizing Blob Storage for High-Performance Computing (HPC) tasks presents numerous benefits, including scalability and cost-effectiveness, it also introduces specific challenges. Key among these challenges are data access latency and the potential for performance decline in workloads, particularly noticeable in compute-intensive or real-time applications when accessing data stored in Blob. In this article, we will examine how WEKA’s patented filesystem, WekaFS™, and its parallel processing algorithms accelerate Blob storage performance.
About WEKA
The WEKA® Data Platform was purpose-built to seamlessly and sustainably deliver speed, simplicity, and scale that meets the needs of modern enterprises and research organizations without compromise. Its advanced, software-defined architecture supports next-generation workloads in virtually any location with cloud simplicity and on-premises performance.
At the heart of the WEKA® Data Platform is a modern, fully distributed parallel filesystem, WekaFS™, which can span thousands of NVMe SSDs spread across multiple hosts and seamlessly extend itself over compatible object storage.
WEKA in Azure
Many organizations are leveraging Microsoft Azure to run their High-Performance Computing (HPC) applications at scale. As cloud infrastructure becomes integral, users expect the same performance as on-premises deployments. WEKA delivers unbeatable performance for your most demanding applications running in Microsoft Azure supporting high I/O, low latency, small files, and mixed workloads with zero tuning and automatic storage rebalancing.
WEKA software is deployed on a cluster of Microsoft Azure LSv3 VMs with local NVMe SSD to create a high-performance storage layer. WEKA can also take advantage of Azure Blob Storage to scale your namespace at the lowest cost. You can automate your WEKA deployment through HashiCorp Terraform templates for fast easy installation. Data stored with your WEKA environment is accessible to applications in your environment through multiple protocols, including NFS, SMB, POSIX, and S3-compliant applications.
Kent has written an excellent article on WEKA’s SMB performance for HPC Windows Grid Integration. For more, please see:
WEKA Architecture
WEKA is a fully distributed, parallel file system that was written entirely from the ground up to deliver the highest-performance file services, designed for NVMe SSD. Unlike traditional parallel file systems, which require extensive file system knowledge to deploy and manage, WEKA’s zero-tuning approach to storage allows for easy management from tens of terabytes to hundreds of petabytes in scale.
WEKA’s unique architecture in Microsoft Azure, as shown in Figure 1, provides parallel file access via POSIX, NFS, SMB and AKS. It provides a rich enterprise feature set, including but not limited to local and remote snapshots, snap clones, automatic data tiering, dynamic cluster rebalancing, backup, encryption, and quotas (advisory, soft, and hard).
Figure 1 – WekaFS combines NVMe flash with cloud object storage in a single global namespace
Key components to WEKA Data Platform in Azure include:
The infrastructure is deployed directly into a customer’s subscription of choice
WEKA software is deployed across 6 or more Azure LSv3 VMs. The LSv3 VMs are clustered to act as one single device.
The WekaFS™ namespace is extended onto Azure Hot Blob
WekaFS Scale Up and Scale down functions are driven by Azure Logic Apps and Function Apps
All client secrets are kept in Azure Key Vault
Deployment is fully automated using Terraform WEKA Templates
WEKA and Data Tiering
WEKA’s tiering capabilities in Azure integrate seamlessly with Azure Blob Storage. This integration leverages WEKA’s distributed parallel file system, WekaFS™, to extend from local NVMe SSDs on LSv3 VMs (performance tier) to lower-cost Azure Blob Storage (capacity tier). WEKA writes incoming data in 4K blocks (commonly referred to as chunks), aligning to the NVMe SSD block size, packages them into 1MB extents, and distributes the writes across multiple storage nodes in the cluster (in Azure, a storage node is represented as an LSv3 VM). WEKA then packages the 1MB extents into 64MB objects. Each object can contain data blocks from multiple files. Files smaller than 1 MB are consolidated into a single 64 MB object. For larger files, their parts are distributed across multiple objects.
Figure 2 – WekaFS Tiering to HOT BLOB
How do you retrieve data that is cold? What are the options?
Tiered data is always accessible and is treated as if it was part of the primary file system. Moreover, while data may be tiered, the metadata is always maintained on the SSDs. This allows traversing files and directories without impacting performance.
Consider a scenario where an HPC job has run and outputs are written to WekaFS. In time, the output file data will be tiered to Azure Blob (capacity tier) to free up WekaFS (performance tier) to run new jobs. At some later date the data is required again for processing. What are the options?
Cache Tier: When file data is tiered to Blob, the file metadata always remains locally on the flash tier, so all files are available to the applications. WEKA maintains the cache tier (stored in NVMe SSD) within its distributed file system architecture. When file data is rehydrated from Azure Blob Storage, WEKA stores the data in “read cache” for improved subsequent read performance.
Pre-Fetch: WEKA provides a pre-fetch API to instruct the WEKA system to fetch all of the data back from Blob (capacity tier) to NVMe (performance tier). For further details please refer to this link: https://docs.Weka.io/fs/tiering/pre-fetching-from-object-store
Cold read: Read the data directly from Blob. The client still accesses the data through the WEKA mount, but the data is not cached by WekaFS and is sent directly to the client.
It is option #3, cold reads, that had me intrigued. WEKA claims to parallelize reads, so would it be possible to read directly from Blob at a “WEKA Accelerated Rate”?
Testing Methodology:
The test design.
The testing infrastructure consisted of:
6 x Standard_D64_v5 Azure VMs used for clients
20 x L8s_v3 VM instances that were used for the NVME WEKA layer
Hot Zone Redundant Storage (ZRS) enabled Blob
For the test, a 2 TB file system was used on the NVME layer (for metadata) and 20 TB was configured on the HOT BLOB layer.
Figure 3 – WekaFS testing Design.
A 20 TB Filesystem was created on WEKA:
Figure 4 – Sizing the WekaFS
We chose an Object Store direct mount (see the obs_direct option).
pdsh mount -t wekafs -o net=eth1,obs_direct [weka backend IP]/archive /mnt/archive
To simulate load, we used fio to write random data to the object store with a 1M block size.
pdsh 'fio --name=$HOSTNAME-fio --directory=/mnt/archive --numjobs=200 --size=500M --direct=1 --verify=0 --iodepth=1 --rw=write --bs=1M'
Once the write workload completes, notice that only 2.46 GB of data resides on the SSD tier (this is all metadata), and 631.6 GB resides on BLOB storage.
Figure 5 – SSD Tier used for Metadata only
Double-checking the file system with the weka fs command, the used SSD capacity remains at 2.46 GB, which is the size of our metadata.
Figure 6 – SSD Tier used for Metadata only.
Now that all the data resides on BLOB, let’s measure how quickly it can be accessed.
We’ll benchmark our performance with FIO. We’ll run load testing across all six of our clients. Each client will be reading in 1MB block sizes.
pdsh 'fio --name=$HOSTNAME-fio --directory=/mnt/archive --numjobs=200 --size=500M --direct=1 --verify=0 --iodepth=1 --rw=read --bs=1M --time_based --runtime=90'
The command is configured to run for 90 seconds so we can capture the sustained bandwidth from the hot blob tier of the WEKA data platform.
From the screenshot below (Figure 7), observe that we are reading data from Azure Blob at speeds up to 20 GB/s.
Figure 7 – 19.63 GB/s 100% reads coming directly from BLOB
How does WEKA do it?
Simple answer…even load distribution across all nodes in the cluster. Each WEKA compute process establishes 64 threads to run GET operations from the Blob container. Each WEKA backend is responsible for an equal portion of the namespace, and each will perform the appropriate API operation from the Azure Blob.
Thus, multiple nodes working together, each running 64 threads, add up to what I will call the “WEKA Accelerated HOT BLOB Tier”.
Looking at the stats on the command line while the test was running (Figure 8), you can observe the distribution of servicing the tiered data is fully balanced across all the WEKA nodes in the cluster. This balance helps WEKA achieve its optimal performance from Azure Blob.
Figure 8 – Balanced backend nodes with 64 threads each for GET operations from BLOB
What real world problems can we solve with this feature?
1 – When one needs to ingest large volumes of data at once into the WEKA Azure platform. If the end user does not know what files will be “hot”, they can have it all reside directly on BLOB storage so that it doesn’t force any currently active data out of the flash tier.
2 – Running workloads that need to sequentially read large volumes of data infrequently. For example, an HPC job where the data is only used once a month or once a quarter. If each compute node reads a different subset of the data, there is no value to be gained from rehydrating the data into the flash tier / displacing data that is used repeatedly.
3 – Running read-intensive workloads where WEKA-accelerated BLOB cold-read performance is satisfactory. Clients can mount the file system in obs_direct mode.
Conclusion
WEKA in Azure delivers exceptional performance for data-intensive workloads by leveraging parallelism, scalability, flash optimization, data tiering, & caching features. This enables organizations to achieve high throughput, low latency, and optimal resource utilization for their most demanding applications and use cases.
You can also add low-latency, high-throughput reads directly from Hot Blob Storage as another use case. To quote from Kent one last time:
…..As the digital landscape continues to evolve, embracing the WEKA Data Platform is not just a smart choice; it’s a strategic advantage that empowers you to harness the full potential of your HPC Grid.
Reference:
Defender for Servers on RDP Session Host
Hi all,
I have a simple question that arises from considerations about a security concept. Is it enough to secure an RDP Session Host with Microsoft Defender for Servers Plan 2? Or do I have to secure the individual RDP sessions as well?
The question behind that is: is Defender for Servers able to secure the interactions and all the app data (e.g., mail) for all users?
Thanks!
Entra External ID (Azure): Set up SaaS B2B Multi Tenancy Scenario
Dear Community,
I want to test a scenario and have already created an external client.
Scenario B2B Company (SaaS):
-3 applications
-Many (100+) corporate customers, each with 2-3 employees, who usually only use 1-2 of the applications
-Many of the corporate customers want to connect their own IdPs for authentication
The idea: Hierarchically
-> connect 3 applications to Entra ID, create an external tenant for each of the corporate customers, only allow them access to the apps used
-> when logging in: forward domain-specifically to the IdP of the respective tenant or to Azure in general (if no extra IdP)
Is this possible?
So far I have only been able to connect an application to the external tenant and theoretically also an IdP. But how do I get this higher-level logic to work? Any ideas?
Kind regards and thank you very much
Jen
conversion of time to a text string
Need some help on how to convert “2140:03:00”, as shown in a cell formatted as [t]:mm;ss, into a text string showing “2140:03:00”.
Virtual Machine issue
Hi everyone, does anyone know what the issue is? I was using the VM normally, but recently I was unable to access it, and then this error message showed up.
Beginner – VBA Copying data from sheet & range from several workbooks to a master via Teams
Hi experts!
I have read several forum posts on using Macros to copy a specific range from one workbook to another. However nothing I have read seems to match my requirements.
The scenario is I have 14 workbooks all identical in structure hosted in different MS Teams Channels.
I have one master workbook located on my OneDrive.
I want to be able to extract into my master workbook a specific sheet’s data in a range. Although the 14 workbooks have the exact same structure, the names are different, i.e. Planner_100.xls, Planner_200.xls etc.
To be more specific, in each of the 14 workbooks I wish to copy Sheet “Data”, range A1:C10 to my master workbook sheet “Summary” as a continuous list.
What is the best approach? Can this even be done when hosting the 14 workbooks in different MS Teams channels?
Should the macro be hosted in the master workbook? Or does each of the 14 workbooks host a macro?
Thank you for your advice
Any risks to enabling Password Writeback ?
Hi Everyone,
I’ve been trying to configure password writeback in Entra ID so as to enable Azure SSPR. Would enabling writeback on Entra ID Connect (currently it’s hash sync) introduce any service disruption or risk, either in Azure or on-prem?
How to render column charts in Logic Apps using Run query and visualize results connector
Team,
I have been trying to run a KQL query and render the results in a column chart, but I couldn’t, as currently the “Run query and visualize results” connector supports only bar, line and pie charts. Is there any solution for this?
5 ways to dramatically speed up your cloud application teams
Working with application teams and partners developing cloud native apps on Azure, you quickly learn that developer time is valuable and that enthusiasm and flow state are critically important.
Whenever an application team has to wait for an environment, for access, for a ServiceNow ticket, a support case, or admin access to install tooling, productivity is dramatically affected; projects can take 2x longer and be of lower quality.
Equally, it’s important to have a designed, governed, secure environment when using the public cloud, so your workload teams start right, and stay right! This covers all the normal design pillars of a well-architected solution: reliability, security, cost and performance.
Application teams work best when they can select their preferred platform services, tooling, languages and libraries, and, most importantly, reduce their dependencies on external requests and constraints that limit their selection of services. To this end, platform teams’ number one priority should be to work towards a self-service model, removing themselves from the process and constantly unblocking friction points.
Where Platform teams should focus
You don’t need to have everything automated from day 1, nor have tooling for everything, but focusing on these 5 crucial elements will result in more impact for everyone in your organization, while avoiding unnecessary tickets/cases and bottlenecks, something I have seen depressingly often.
1. Environment Provisioning
When an application team works on a new product, timely access to an environment is important while enthusiasm is high. You should be targeting giving teams access to an environment they can use within 30 minutes of the initial request: vend a resource group to the application team with all the access they need to immediately start deploying their solution designs (more on this later). The resource group naming, tagging, the subscription sharing model and level of access can all be determined based on the environment requested.
Subscriptions in Azure can now support hundreds of developers: subscriptions have granular role-based controls and mature cost-tracking services, and you can now track subscription limits and usage very effectively. We recently had 300 developers across 35 resource groups, deploying resources across the globe, all working happily in a single sandbox subscription.
During the vending process, it will be important to capture:
Workload type (e.g. Production vs Sandbox): this drives the policies that are applied to control what can be deployed and the levels of access, and keeps the separation between these environments. Separation of Production and Sandbox environments should be kept at the subscription level.
Required networking (e.g. Connected or Non-connected): this determines whether private IP connectivity is required by the workload, or whether ingress/egress to the workload needs to be privately routed.
The first, simplest, most unconstrained environment you should offer is a Non-connected Sandbox; this allows the application teams the most flexibility to experiment with multiple services, with full access to the environment in the portal so the team can rapidly get ideas to a POC stage. Here there are typically few or no restrictions on access or on the resources that can be provisioned. The most constrained and complex environment will be a Connected Production subscription; this will have policies to ensure production guardrails are followed, and networking to allow private IP connectivity and ingress/egress routing controls (if needed).
The new Subscription Vending Bicep Verified Module is an excellent starting point for vending these environments, from the simplest to the most complex, with a module and parameter driven approach. You can collect the required information from the application team, then call the vending module directly from the az cli to start with, or create a pipeline/action in your favourite DevOps tool, maybe triggering a GitHub workflow from an Issue template. A sketch using Azure PowerShell follows:
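This is a hedged sketch only: the management group, location, wrapper template sub-vending.bicep, and its parameter names are all illustrative, standing in for whatever your wrapper around the vending module exposes.

# Hedged sketch: deploy a hypothetical wrapper around the subscription vending Bicep module
# at management-group scope, passing the details captured from the application team.
New-AzManagementGroupDeployment `
    -ManagementGroupId 'contoso-landing-zones' `
    -Location 'westeurope' `
    -TemplateFile './sub-vending.bicep' `
    -TemplateParameterObject @{
        subscriptionDisplayName = 'sandbox-team-fabrikam'   # illustrative parameter names
        workloadType            = 'Sandbox'                 # drives policy assignments
        networking              = 'NonConnected'            # no private IP connectivity needed
    }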
Hot Take: I’d recommend Bicep over Terraform when automating environment provisioning or application deployment on Azure, even if you are multi-cloud. It’s a simple, powerful, performant first-class experience, without the complexities of a state file: the state is whatever is deployed in Azure, and templates can be re-run with only the changes being deployed.
2. Environment Permissions
So you have vended an environment. The app team tries to provision their first internally authenticated web app that calls a gpt-4o model using identity-based access, deployed using GitHub Actions… error, error, error: 4 tickets in 5 minutes. Now the team are googling for workarounds, not delivering their project, wasting valuable time and enthusiasm. What’s the problem?
No permissions to create a Role Assignment on the webapp managed identity
OpenAI resource not registered in subscription
Cannot create Application Registration in EntraID
Require Admin consent for application permissions.
Cannot create federated access from github to deploy to Azure
When building cloud native apps, managed identity and role based access is a crucial part of the application architecture, and 100% the best and most secure way of creating cloud native applications.
Platform teams must provide the appropriate level of access to the application team to allow these solution architectures. I’ve seen this be the single thing that wastes tens or hundreds of hours of skilled people’s time.
Recommendation #1
When assigning roles to the application team, Contributor is not enough to create identity-based solution architectures! Consider providing the team Contributor and Role Based Access Control Administrator; the latter role can be scoped to the resource group and can be further limited to only assign selected roles to selected principals (a sketch follows).
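A hedged sketch of that assignment (assuming the Az PowerShell module; the group name, scope, and the condition, which here only permits handing out Storage Blob Data Contributor, are illustrative):

# Hedged sketch: grant the app team Contributor plus a constrained
# Role Based Access Control Administrator assignment on their resource group.
$scope = '/subscriptions/<subscription-id>/resourceGroups/rg-team-fabrikam'   # illustrative
$group = (Get-AzADGroup -DisplayName 'team-fabrikam-developers').Id           # illustrative

New-AzRoleAssignment -ObjectId $group -RoleDefinitionName 'Contributor' -Scope $scope

# Condition limits which role definitions the team may assign (GUID = Storage Blob Data Contributor)
$condition = "((!(ActionMatches{'Microsoft.Authorization/roleAssignments/write'})) OR " +
             "(@Request[Microsoft.Authorization/roleAssignments:RoleDefinitionId] " +
             "ForAnyOfAnyValues:GuidEquals {ba92f5b4-2d11-453d-a403-e96b0029c9fe}))"

New-AzRoleAssignment -ObjectId $group -RoleDefinitionName 'Role Based Access Control Administrator' `
    -Scope $scope -Condition $condition -ConditionVersion '2.0'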
Recommendation #2
Ensure resource provider registrations have been done as part of the vending process and are not blocking the application teams from creating their resources.
Recommendation #3
Many applications will need users to authenticate, and the best way of doing that is with Entra ID. These apps need application registrations within Entra ID. If your organization blocks the self-service creation of new application registrations and/or has restrictive consent granting, ensure the team knows the process for requesting a new application registration. Also, unless you want a new ServiceNow ticket every time the app team wants to add a new callback URI, make them an owner of the app registration in the process.
Recommendation #4
Lastly, yes, ensure the use of identity and automation for deployments in Production environments, but don’t take away access to the portal from your application teams! Grant your application team’s corporate identities roles in the environment. Not granting this access, especially in the lower environments, will make it much harder for the teams.
3. A little less documentation & a little more sample repos
Our environment provisioning provides the application teams a blank slate at this stage; it doesn’t make any assumptions about the application team’s solution architecture. This allows the application team to select the optimal services for their use cases, which could be a microservices app, an integration workflow, or a simple static web app. Selecting the appropriate service for the use case will make the best use of the public cloud, optimizing your public cloud costs while minimizing the operations required to support your application. Equally, it doesn’t assume the structure or number of repos that the application team will use.
However, we should be providing the application teams more support than just a blank canvas; we should be looking to share successful architecture patterns and example applications that have already been approved for use within your organization.
Rather than documents, start to foster an innersource repo of samples that can be easily provisioned into the vended environment, to show what a static web app, a simple microservices app, an event-driven process, or an integration workflow could look like. This can provide new teams a starting point with built-in approved patterns to accelerate their journey to production. These examples, with good READMEs, can also inform the teams how to structure their application repos with the infrastructure-as-code and automated deployment workflows.
Look at the Azure Developer CLI templates as a good example of this. I’m not saying you should use this tool, but azd template list shows a list of sample application patterns with well-documented, structured repos. You can start to create a curated list of getting-started repos in each of the application solution categories for your organisation, even starting with some of these samples where relevant.
Infrastructure-as-code modules
Another thing to notice and adopt: in these sample repos’ /infra folders, the main Bicep file is just composing a number of modules. These modules represent the ‘right’ way of configuring each service for your organization, for example pre-configured with private endpoints, RBAC-based access, and so on. You can look to build a repo of these modules, approved for use in your organization, to again accelerate your application teams. You can also get started by using Azure Verified Modules, or build your own Bicep module library, inner-sourcing it and sharing it between the application teams.
Kubernetes namespace vending
4. Tooling / Local loop development
Application teams experiment in the portal, develop locally, and provision from their local machine, then add the automation and the managed identity to perform automated deployments via source control in the later environments.
Ensure the teams can install and configure VS Code, VS Code extensions, command-line tools, and Docker locally, and that they have connectivity from these tools to the public cloud APIs they need.
The Azure identity libraries (such as @azure/identity) are now brilliant! For many dependencies, there is no longer any need to use API keys or credentials that need to be stored in key vaults and rotated periodically; just use your Entra ID corporate identity or a managed identity with RBAC. Using these identity libraries, if the developer wants to run their code locally and connect to a database or message service in Azure, the locally running app will operate with the developer’s corporate identity (obtained through az login), and as long as the dev has the appropriate RBAC on the database, all good. If they deploy their app to an Azure PaaS service, without any code changes, the code will access the database using the service’s managed identity. This makes the apps secure and resilient, and they can be promoted up to production securely.
Without these tools and access, the application teams will not be coding their app in the most secure way.
5. Track Metrics
Track anything that causes friction. Any time the application team is waiting on something (a case, access to a service, a bug being resolved), track it, dashboard it, and constantly prioritize removing that friction securely. Promote the creation of issues on the platform team’s repo, keep a prioritized backlog, and hold monthly feedback sessions.
If application teams are held up, they will try to work around issues to ship their product; this can mean using the wrong environment or a less-than-ideal service or configuration. Removing friction will result in better, more secure use of the public cloud.
Wrapup
Let me know what you think of these recommendations. If you are in a platform team supporting Azure, I’d love to hear your experiences. If you are in an application team deploying to Azure, have a chat with the team providing you the environment, show them this blog, and set up a regular call; it’s important that the teams collaborate to get your company’s products out the door securely, reliably, and on time.
Create a hyperlink to a website using a specific cell in a spreadsheet as what to look for at the website
I want to create a link to a website that will search within the website using a specific cell in a workbook as the basis for the search selection. For example, I want to use a stock symbol that is inserted into a cell on a page in a workbook that can be linked to a website like https://finance.yahoo.com that will search within yahoo.com to show a specific page on the yahoo website related to the stock symbol inserted into the specific cell in the workbook. I think the hyperlink looks something like https://finance.yahoo.com/quote//analysis where the cell with the inserted stock symbol is somehow inserted between the 2 forward slashes, i.e. //. Please help if you know how to create this link.
Windows 2K16 Standard recognized as Essential/SBS
Hello,
I would like to ask for some advice on an issue: we have 2 Windows Server servers:
SRV1 , Windows Server 2016 as PDC
SRV2, Windows Server 2016 as data server .
For a few weeks now, SRV2 (not the PDC) just sleeps every 7 days. When I analyzed the logs, I saw the problem:
But I’m in trouble; could someone explain this to me please?
Why does Windows detect itself as an SBS/Essentials version? (This log is typical.) Why does only SRV2 have this log/issue?
SCCM Bitlocker – will not start encryption
Good morning, all.
I’ve run through the following setup guides and both are giving the same results.
- https://msendpointmgr.com/2020/04/02/goodbye-mbam-bitlocker-management-in-configuration-manager-part-1/
- https://www.systemcenterdudes.com/sccm-mbam-integration/
We are on version 2403
I’m specifically getting the error
Unable to connect to the MBAM recovery and hardware service
Error Code -2147024809
Details : the parameter is incorrect
Looking at Microsoft’s documentation here: https://learn.microsoft.com/en-us/mem/configmgr/protect/tech-ref/bitlocker/client-event-logs#18-coreservicedown
This error occurs if the website isn’t HTTPS, or the client doesn’t have a PKI cert.
We do not have a PKI infrastructure, MECM is eHTTP, and the website is HTTPS-enabled, as I can get to the site on the computer that is throwing this error.
– I’ve verified the laptop is in an OU with absolutely no bitlocker policies enabled
– checked RSOP to verify there is nothing rogue
– opened the firewall completely up for this machine
– nothing glaring in either bitlocker logs under the CCM logs folder
Unsure where else to check; I’ve been googling for the last day and cannot come across much on this specific error message when HTTPS is enabled.
LAB VM Hardening losing connectivity
Hi, I need some help here,
I am working on a project on an AzureLab to automate the installation of a Privileged Access Management solution (CyberArk). The problem I am encountering is that the Vault VM (containing passwords) needs a drastic hardening.
Everything works until I restart the VM from Azure Lab after the hardening process (I am able to restart it from Windows without a problem). The start button spinner never finishes, and after 10 minutes the VM is disconnected. However, during those 10 minutes I am still able to use it as if it worked completely fine.
My only clue here: I suppose that Azure Lab uses a specific utility behind the scenes to check whether the VM has actually started, which is blocked by my hardening?
Has anyone already encountered a similar problem?
Any help would be appreciated, thanks.
New Project – Regex on Project Name – Limit Special Characters
Greetings,
I haven’t been able to find any reference to this anywhere online…
When creating a new Project in PoL/PWA, is there a way to apply RegEx to the ‘Name’ (Project Name) field?
Basically: I would like to limit it to Alpha-Numeric, Spaces and Dashes…and definitely prevent ‘&’ and brackets ‘()[]’. Either make it required or prevent ‘finish’ button (Create project) from functioning until it is valid.
What might be a strategy to go about this validation?
(Note: I am somewhat surprised this has never been discussed before…especially considering that PWA creates a sub-site and uses the ‘Name’ field as the URL. Frankly, it is surprising that PWA even allows ‘&’.)
Much appreciated,
-TR