Build your Web Apps faster with the Azure Cache for Redis: Quick Start Template
Are you a developer looking to quickly and securely spin up a web app with a database and cache? Look no further than the Azure Cache for Redis Quick Start Template, now available in the Azure Marketplace. This template allows developers to work across various databases and languages of their choice, making it easier than ever to get started.
The Quick Start Template is compatible with several popular programming languages, including **Java, .NET, Python, Go, PHP, and Node.js**. It also works with a variety of data services, such as **Azure SQL, Azure PostgreSQL, Azure MySQL, and Azure Cosmos DB for MongoDB**.
Azure Cache for Redis is a first-party, fully managed caching service based on open-source Redis, an in-memory data store that is commonly used for caching, message brokering, session management, and real-time data processing. The Quick Start Template lets you quickly and securely deploy an Azure Cache for Redis instance and connect it with your web app.
But that’s not all – the Azure Cache for Redis also offers several advanced features, such as data persistence, clustering, load balancing, and geo-replication, which make it easy to scale and deploy Redis-based solutions across multiple regions and data centers. Azure Cache for Redis can be used effectively with other Azure services, such as Azure App Service, Azure Kubernetes Service, Azure Functions, and Azure Logic Apps, enabling developers to easily incorporate caching into their applications without having to manage infrastructure or worry about scalability and availability.
And if you’re interested in learning more about the Azure Cache for Redis Quick Start Template, be sure to check out the upcoming Open at Microsoft episode dedicated to it. The episode is hosted by Ricky Diep, Product Marketing Manager, and Catherine Wang, Senior Product Manager, and it’s a great opportunity to see the template in action and learn from the experts. In the episode, Catherine demos how she quickly spun up a Python web app for restaurant reviews using Azure Cache for Redis with PostgreSQL.
In addition to its ease of use and advanced features, the Azure Cache for Redis Quick Start Template has several practical use cases that can enhance the performance and functionality of your web applications. For example:
Session Store: Storing user session data in Azure Cache for Redis can enhance web application performance by reducing the database server load, which is particularly beneficial for high-traffic sites or complex session data.
Message Broker: Azure Cache for Redis can also serve as a message queue broker for background processing tasks, allowing web applications to offload time-consuming tasks to background workers while maintaining reliability and scalability.
Leaderboards and Counters: Implementing leaderboards, counters, or other real-time ranking systems can be achieved efficiently using Azure Cache for Redis, ensuring fast and accurate updates.
Content Cache: For web applications serving dynamic content, you can cache HTML fragments, page components, or entire rendered pages in Azure Cache for Redis. This approach can help offload the web server and reduce the generation time of dynamic content.
Cache-Aside: The Cache-Aside pattern is a caching strategy in which the application itself is responsible for managing the cache. When a request for data arrives, the application first checks the cache; on a miss, it retrieves the data from the primary data store, such as a database, and stores it in the cache for future use. This pattern is ideal for frequently accessed data like product catalogs, user profiles, or configuration settings. Caching this data in Azure Cache for Redis reduces the latency associated with repeated database queries while helping maintain consistency between data held in the cache and data in the underlying data store.
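As a minimal sketch of the cache-aside flow, the logic can be illustrated with an in-memory dictionary standing in for the Redis client (the `fetch_user` data-store function and the TTL handling are illustrative assumptions, not part of the template itself):

```python
import time

class CacheAside:
    """Minimal cache-aside sketch: check the cache first, fall back to the data store."""

    def __init__(self, fetch_from_db, ttl_seconds=60):
        self._cache = {}            # stand-in for a Redis client
        self._fetch = fetch_from_db
        self._ttl = ttl_seconds

    def get(self, key):
        entry = self._cache.get(key)
        if entry is not None:
            value, expires_at = entry
            if time.monotonic() < expires_at:
                return value        # cache hit
            del self._cache[key]    # expired entry
        value = self._fetch(key)    # cache miss: query the primary data store
        self._cache[key] = (value, time.monotonic() + self._ttl)
        return value

# Hypothetical data-store lookup; `calls` tracks how often the "database" is hit.
calls = []
def fetch_user(key):
    calls.append(key)
    return {"id": key, "name": f"user-{key}"}

cache = CacheAside(fetch_user, ttl_seconds=60)
first = cache.get("42")    # miss: hits the data store
second = cache.get("42")   # hit: served from cache, no second query
```

In a real deployment the dictionary would be replaced by calls to a Redis client (for example `GET` on lookup and `SETEX` on a miss), but the control flow stays the same.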
So why wait? Head over to the Azure Marketplace and try out the Azure Cache for Redis Quick Start Template today!
TRY NOW
Resources
Learn More on Quick Deploy Template
Learn More on Azure Cache for Redis
Watch the Open at Microsoft Video
Microsoft Tech Community – Latest Blogs –Read More
Released: SCOM Management Packs for SQL Server, RS, AS (7.4.0.0)
Updates to SQL Server, SQL Server Dashboards, Reporting Services, and Analysis Services Management Packs are available (7.4.0.0). You can download the MPs from the links below. The majority of the changes are based on your direct feedback. Thank you.
Download Microsoft System Center Management Pack for SQL Server
Download Microsoft System Center Management Pack for SQL Server Dashboards
Download Microsoft System Center Management Pack for SQL Server Analysis Services
Download Microsoft System Center Management Pack for SQL Server Reporting Services
There are a lot of new features as well as some bug fixes in these MPs. You can find the full list by following the links below. Some of the bigger additions are:
For SQL MP
Added support for custom management server resource pools for agentless monitoring mode
Added new “SQL Connection Encryption Certificate Status” monitor for SQL Server on Linux, which targets DB Engine and checks if the server’s TLS certificate is valid
For AS MP
Added nine new performance collection rules
Improved the memory-related instance monitoring workflows
For RS MP
Added new “Securables Configuration Status” monitors
Improved accessibility for the Summary Dashboard view and Monitoring Wizard template
Updated the “Product Version Compliance” monitor with the most recent version of public updates for SQL Server
The operations guides for all SQL Server family of management packs live on learn.microsoft.com. The link to the operation guide for each MP can be found on the MP download page. Here are the links that show what’s new in these MPs:
Features and Enhancements in Management Pack for SQL Server
Features and Enhancements in Management Pack for SQL Server Dashboards
Features and Enhancements in Management Pack for SQL Server Analysis Services
Features and Enhancements in Management Pack for SQL Server Reporting Services
Defender Experts’ recommendations for impactful security posture management
Introduction
The Microsoft Defender Experts for XDR service provides value to customers from both a proactive and reactive perspective. Proactively, we provide guidance to customers on overall security posture improvements and perform threat hunting to surface malicious activity in their environments. Simultaneously, our team reactively investigates and responds to incidents that occur in customer environments on their behalf. Working with both sides of the security equation, Defender Experts for XDR is uniquely positioned to understand the value of security controls and configurations in terms of their impact on the rate and severity of actual customer incidents.
While the basics of security hygiene, such as patching, inventory, security baselining, and least privilege delegations are undeniably important, once those bases are covered there are many more specific controls that receive less attention but can be critical in mitigating the frequency and impact of future incidents. Leveraging our experience helping customers protect themselves, we’re thrilled to share some of the security controls and configurations we find most impactful in the real world.
Top Configuration Recommendations
Listed below, in no particular order, are the top configuration recommendations from Defender Experts for XDR.
Microsoft Defender for Office 365
——————————————————————————————————–
Restrict user ability to release emails from quarantine
The Exchange Online Protection (EOP) quarantine is leveraged widely to prevent suspicious emails from being delivered to user inboxes without entirely deleting them. Emails that match the anti-malware, anti-phishing, and anti-spam policies configured within a given tenant will most often be sent to quarantine. This protection is significantly curtailed when end users have the capability to indiscriminately release their own emails from quarantine. Our team has investigated an unfortunate number of incidents resulting from users searching out phishing emails that were quarantined, releasing them, and promptly compromising their own account. A full access permissions group in a quarantine policy permits this to happen and is strongly discouraged.
Fortunately, regardless of the quarantine policy applied, users can’t release their own messages that were quarantined as malware or high confidence phishing – they can only request their release. But for all other emails detected as phishing, one of the following permissions groups must be applied in order to prevent unrestricted quarantine release.
Limited access permissions group
This is the recommended permissions group for most environments that are not highly restricted. Limited access permits the user to preview quarantined messages (with hyperlinks disabled), view their headers, and request their release (in addition to deleting the email or blocking the sender).
No access permissions group
No access is the most restrictive permissions group that can be applied to a quarantine policy. The default quarantine policy AdminOnlyAccessPolicy uses this permissions group. When this is configured, the most that a user could do with a quarantined message is view the email headers.
Implementation
Within the Microsoft Defender portal under Quarantine policy, create a new policy leveraging Limited access, No access, or Specific access with the action “Allow recipients to request a message to be released from quarantine.” Then apply this quarantine policy to your anti-phishing, anti-spam, and anti-malware policies.
Quarantine policies | Step 1 Create quarantine policies | Microsoft Learn
Quarantine policies | Anatomy of a quarantine policy | Microsoft Learn
Microsoft Defender for Endpoint
——————————————————————————————————-
Enable tamper protection
Tamper protection is a critical feature of Defender for Endpoint that protects security settings from being changed. When enabled, tamper protection prevents other key components of Defender for Endpoint, including virus and threat protection, antivirus (AV), real-time protection, automatic remediation, and tamper protection itself, from being disabled. If these security features can be disabled by an attacker, then their value is nullified. Once an attacker has compromised a device, it is commonly part of their attack chain to disable any security services running on the device, thereby enabling more severe and destructive follow-on actions. This activity has been observed in Cypherpunk, DarkSide, and Ryuk ransomware operations among many others. Every supported device onboarded with Defender for Endpoint should have tamper protection enabled. It is also advisable to seriously investigate any incidents involving attempted tampering, as they often point to ongoing compromise.
Implementation
Enable tamper protection via the Defender Portal, Intune, or Configuration Manager.
Protect security settings with tamper protection | Microsoft Learn
Enable network protection in block mode
Network protection is a Defender for Endpoint feature that leverages and extends Microsoft Edge SmartScreen to protect Windows, Linux, and macOS devices. SmartScreen, when in block mode, prevents network connections from the Edge browser to known malicious websites. When network protection is enabled in block mode, these malicious connections will also be blocked from all other supported browsers (Chrome, Firefox, Brave, Opera, etc.) and non-browser applications. The default blocklist leverages Microsoft’s extensive threat intelligence resources to protect users across all customer environments from unintentionally visiting malicious websites. Furthermore, custom indicators can be configured within a given tenant to block network connections to additional undesired domains, IPs, and URLs.
If network protection is not enabled, or not in block mode, users are vulnerable to visiting websites that are known to be malicious. This is a very common occurrence in Defender Experts for XDR investigations, resulting in malware infections, credential compromise, or other malicious activity. The Microsoft Threat Intelligence community has already done the work to provide the threat intel, so why not leverage it to protect your organization?
Implementation
Network protection can be enabled via PowerShell, MDM, Group Policy, or Microsoft Configuration Manager.
Turn on network protection | Microsoft Learn
Block untrusted and unsigned processes that run from USB
This is an Attack Surface Reduction (ASR) rule that is prebuilt within Microsoft Defender Antivirus to help prevent USB malware. When enabled in block mode, this rule prevents the execution of unsigned or untrusted executables (.exe, .dll, .scr, .ps1, .vbs, .js, etc.) that are either present on mounted removable media (e.g., USB or SD card) or that were copied to disk from removable media. For some organizations, USB malware is quite rare. But for organizations with a large, distributed set of end users, or organizations with a large quantity of bring your own device (BYOD) users, this can become a constant challenge. China-based nation-state group Twill Typhoon is known to utilize removable devices containing malicious executables to infect victims, and the LemonDuck and LemonCat mining malware also spread using this technique, among others. Enabling this rule in block mode can be very effective at preventing these types of damaging USB malware.
Implementation
Ensure that Microsoft Defender Antivirus is turned on and Real-Time Protection and Tamper Protection are enabled. Then, enable the rule via Defender for Endpoint security settings management, MEM, Group Policy, or MDM.
Block untrusted and unsigned processes that run from USB | Microsoft Learn
Block JavaScript or VBScript from launching downloaded executable content
This ASR rule detects attempts by JavaScript or VBScript to launch executables downloaded from the internet and blocks them from executing if enabled in block mode. This prevents a pattern of activity known to be utilized by multiple common types of malware. The FakeUpdates/SocGholish malware in particular leverages a JavaScript backdoor to download and/or launch its payload. FakeUpdates remains relatively prevalent (Manatee Tempest – from FakeUpdates to ransomware), infecting devices via drive-by downloads from malvertising (malicious advertising), SEO poisoning, and more. Russian state-sponsored threat actor Midnight Blizzard has also been observed utilizing phishing emails containing HTML attachments embedded with the EnvyScout JS dropper to compromise victims.
Some organizations may utilize legitimate line-of-business applications that exhibit this same behavior, so it is recommended to test this rule in audit mode prior to fully enabling in block mode. Refer to the Demystifying attack surface reduction rules blog series for more information on the transition from auditing to blocking.
Implementation
Ensure that Microsoft Defender Antivirus is turned on and Real-Time Protection and Tamper Protection are enabled. Then, enable the rule via Defender for Endpoint security settings management, MEM, Group Policy, or MDM.
Block JavaScript or VBScript from launching downloaded executable content | Microsoft Learn
Block Office applications from creating executable content
This ASR rule detects attempts by Office applications (Word, Excel, and PowerPoint) to execute files written to disk, and execution of untrusted files saved by Office macros. In block mode, this rule prevents these executions. Office files have long been utilized to deliver and/or run malicious code, and unfortunately this remains a successful initial access vector into many organizations with insufficient protections. Emotet, Trickbot, Hancitor, and ZLoader malware are all frequently delivered via phishing emails that either directly attach or link to these types of malicious Office files. Individual threat actors including Iran-based nation-state group Mint Sandstorm, China-based nation-state group Canary Typhoon, and Vietnam-based nation-state group Canvas Cyclone, among others, have been known to utilize these methods as well.
Implementation
Ensure that Microsoft Defender Antivirus is turned on and Real-Time Protection and Tamper Protection are enabled. Then, enable the rule via Defender for Endpoint security settings management, MEM, Group Policy, or MDM.
Block Office applications from creating executable content | Microsoft Learn
Block executable content from email client and webmail
This ASR rule detects executable files and scripts attempting to run directly from Microsoft Outlook, outlook.com, or other common webmail services. When enabled in block mode, these executions will be prevented. More sophisticated threat actors and Phishing-as-a-Service (PhaaS) providers have pivoted away from this technique, but this control provides valuable protection against the low-sophistication phishing attacks that can be just as damaging. Given that phishing is one of the most prevalent initial access vectors we see today, any controls that can be applied to reduce the frequency or severity of successful phishing, without disrupting business, should be.
Implementation
Ensure that Microsoft Defender Antivirus is turned on and Real-Time Protection and Tamper Protection are enabled. Then, enable the rule via Defender for Endpoint security settings management, MEM, Group Policy, or MDM.
Block executable content from email client and webmail | Microsoft Learn
Microsoft Entra ID
——————————————————————————————————-
Ensure multifactor authentication (MFA) is enabled for all users in administrative roles in Entra ID
For a long time, MFA was heralded as the ultimate impenetrable line of defense against account compromise. While we know now that there are many ways to bypass it such as cookie/token theft, SIM swapping, social engineering, etc., MFA remains a valuable control for defense in depth. All administrative user accounts should require MFA, but there are a few critical roles in particular that should be prioritized for this control:
The global admin role has the most powerful overall permissions within a tenant and should be protected accordingly.
The power of the billing admin is less widely known, but it can in fact take over a tenant from anyone, including the global admin! With the power to move subscriptions to an associated billing tenant, the billing admin could transfer subscriptions to a tenant where they hold global admin, giving them complete control.
Implementation
Within Entra ID, create a Conditional Access policy that targets administrative roles and requires MFA for all cloud applications.
Require MFA for administrators with Conditional Access – Microsoft Entra ID | Microsoft Learn
Require MFA for self-service password reset (SSPR)
Self-service password reset enables users to reset their own password without needing to go through a help desk. When performing a password reset, users should be required to robustly verify their identity in order to prevent potential account takeover. SSPR permits four types of authentication methods, which include email and mobile phone. A determined attacker can typically gain access to one of these methods with relative ease. Octo Tempest has been known to take over accounts via SSPR using access to user phones acquired through SIM swapping, among other methods. Requiring two authentication methods in order to complete SSPR might not stop every attacker, but it does introduce an additional defensive layer to the process that could make all the difference.
Implementation
Within Entra ID under password reset, set the number of authentication methods required to reset to two.
Select authentication methods and registration options – Microsoft Entra ID | Microsoft Learn
Microsoft Defender for Identity
——————————————————————————————————-
Set a honeytoken account
A honeytoken account works like a security alarm; it is a dormant account with no legitimate business purpose, so any activity that occurs on the account generates an alert. This facilitates the identification of attacker activity that may otherwise have gone unnoticed. A honeytoken is a very simple and effective detective control, and can be leveraged in multiple different ways as described in Deceptive defense: best practices for identity based honeytokens in Microsoft Defender for Identity. While attack prevention is preferable to retroactive detection, these days it is not reasonable to expect that an organization will avoid being breached. It is vital to be prepared to detect attacks that get past the outer layer of defense in order to mitigate their impact.
Implementation
Create or repurpose an account with no business purpose, and ensure its privileges are removed. Tag this account as a honeytoken within the Defender portal under Settings > Identities > Honeytoken.
Entity tags in Microsoft Defender for Identity – Microsoft Defender for Identity | Microsoft Learn
Conclusion
Every organization can take actions to improve their security posture, but the sheer volume of control recommendations can sometimes overwhelm organizations into inaction. Through this blog post, the Defender Experts for XDR team has aimed to provide a discrete list of configurations and controls that we have observed to be impactful through our daily work with Microsoft customers. We hope that these recommendations will be implemented, or at least considered, for the protection of your organization as well.
If you’re interested in learning more about Defender Experts for XDR, visit the Microsoft Defender Experts for XDR web page or the Defender Experts for XDR docs page.
Corporate Comms Discussion – January
What a great discussion with Wendy Sherwood about what was done at CBRE with Viva Connections and Viva Engage. Lots of great stuff from creating goals to thinking about the hybrid workforce. Take a listen.
Azure SQL DB license-free standby replica | Data Exposed
Learn more about the License-free HADR replica in Azure SQL DB in this episode of Data Exposed with Anna Hoffman and Rajesh Setlem.
Resources:
To learn more and get started: https://aka.ms/sqldbstandby
View/share our latest episodes on Microsoft Learn and YouTube!
Securing the Clouds: Navigating Multi-Cloud Security with Advanced SIEM Strategies
Note: this is the first of a four-part blog series that explores the complexities of securing multiple clouds and the limitations of traditional Security Information and Event Management (SIEM) tools.
This first article is by a team of Microsoft experts who share their insights and experiences in establishing a comprehensive security posture in a multi-cloud environment. It explores strategies for achieving a unified security stance, implementing Microsoft’s security solutions, and realizing the benefits and greater insights of a multi-cloud SOC. It also explores how a threat-based approach is beneficial for helping organizations stay ahead of adversaries in this modern AI world.
Multi-cloud challenges and SIEM limitations
The era of cloud computing has revolutionized the way businesses operate, providing flexibility, scalability, and efficiency. However, the transition to and implementation of multi-cloud environments comes with a unique set of security challenges. These include disparate data formats, varying security protocols, and the sheer volume and velocity of data traffic that traditional SIEM tools were not originally designed to handle. Organizations that take proactive measures, leverage a modern SIEM strategy with the correct balance of tools (including moving from best of breed to best of platform), and work toward reducing complexity will be less vulnerable to attacks and better positioned to thrive.
Diverse data and inconsistent protocols
Significant complexity arises from the need to manage and secure disparate data types across different cloud platforms. Each cloud service provider (CSP) has its own set of tools and services, with varying logging formats and protocols. Traditional SIEM solutions, often designed with a single on-premises infrastructure in mind, struggle to integrate this diverse data. Their architecture and capabilities were not built for the complexity, scale, and variety of data sources that exist in today’s hybrid and cloud-based infrastructures, making it challenging to effectively ingest, process, and analyze the wide array of data these environments generate. This, in turn, can lead to gaps in monitoring and analysis.
Volume and velocity
The volume of data generated by cloud services can be staggering. Most traditional SIEMs are not built to scale rapidly or cost-effectively with the exponential growth of log data, which can result in performance bottlenecks and increased costs. Moreover, the velocity at which this data is generated and needs to be analyzed is another challenge. This requires SIEMs to have high processing capabilities and advanced analytics to provide timely insights into security events.
Evolving threat landscape
Cloud services are continuously evolving, with frequent updates and new features. This constant change means that security monitoring tools must be equally agile. Traditional SIEM systems may not update as quickly, leading to outdated security measures that cannot protect against the latest threats or leverage the newest cloud security services.
Integration and correlation issues
Integrating multiple SIEM solutions across multiple clouds can lead to increased complexity in data correlation and analysis. With data silos, security teams often find it challenging to correlate events across different platforms, which is crucial for detecting sophisticated attacks. These SIEM systems may require custom configurations and extensive manual effort to achieve a unified view, consuming valuable time and resources.
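To illustrate why cross-cloud correlation requires this manual effort, consider normalizing two differently shaped audit events into one common schema before correlating them. The field names and event shapes below are invented for the sketch; real CSP log formats vary per provider and per service:

```python
from datetime import datetime, timezone

# Hypothetical raw events from two different cloud providers.
cloud_a_event = {"eventTime": "2024-01-15T10:32:00Z", "callerIp": "203.0.113.7",
                 "operationName": "DeleteStorageBucket", "status": "Succeeded"}
cloud_b_event = {"ts": 1705314720, "src_ip": "203.0.113.7",
                 "action": "storage.buckets.delete", "outcome": "success"}

def normalize_a(e):
    """Map provider A's schema onto a common event shape."""
    return {
        "timestamp": datetime.fromisoformat(e["eventTime"].replace("Z", "+00:00")),
        "source_ip": e["callerIp"],
        "operation": e["operationName"],
        "success": e["status"] == "Succeeded",
    }

def normalize_b(e):
    """Map provider B's schema (epoch seconds, different keys) onto the same shape."""
    return {
        "timestamp": datetime.fromtimestamp(e["ts"], tz=timezone.utc),
        "source_ip": e["src_ip"],
        "operation": e["action"],
        "success": e["outcome"] == "success",
    }

# Only after normalization can events from both clouds be correlated,
# e.g. grouping by source IP to spot one actor acting across providers.
events = [normalize_a(cloud_a_event), normalize_b(cloud_b_event)]
by_ip = {}
for ev in events:
    by_ip.setdefault(ev["source_ip"], []).append(ev)
```

Every new log source needs its own mapping like the ones above, which is exactly the kind of custom configuration and manual effort that multiplies when multiple SIEMs sit in front of multiple clouds.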
Limitations in cloud-specific threat detection
Traditional SIEM tools are often limited in their ability to detect cloud-specific threats and vulnerabilities. They might lack the context or specialized detection capabilities needed to identify and respond to incidents that are unique to cloud environments, such as misconfigured storage buckets, excessive permissions, or unsecured serverless computing resources.
Cost and resource constraints
The cost implications of operating multiple SIEMs are not trivial. Licensing, infrastructure, and operational costs can skyrocket, particularly as data volumes grow and retention periods must extend to meet new and changing regulatory requirements. Additionally, the expertise required to manage and maintain multiple SIEMs can strain already limited cybersecurity personnel resources.
Inflexible and cumbersome upgrades
Traditional SIEM tools may also be inflexible, requiring significant downtime for upgrades and maintenance, which can be at odds with the always-on nature of cloud services. This inflexibility can hinder a business’s ability to adapt quickly to new security requirements or operational demands.
The limitations of traditional SIEM tools in the context of multi-cloud security can lead to increased risk and decreased visibility into threats. Therefore, organizations must look towards next-generation SIEM solutions that are built for modern cloud capabilities, offering the scalability, flexibility, and advanced analytics needed to secure their cloud and on-premises environments effectively.
Conclusion
Multi-cloud security is a complex and evolving challenge that requires a modern and agile approach. Traditional SIEM tools are not designed to cope with the scale, diversity, and dynamism of cloud-based environments, resulting in reduced visibility, increased risk, and inefficient operations. To overcome these limitations, organizations need to adopt next-generation SIEM solutions that are cloud-native, scalable, flexible, and intelligent.
Future posts in this series will cover the following topics:
How Microsoft has applied a threat-driven approach to enrich use-case development: a proactive and strategic way of managing cybersecurity risks that focuses on the threats themselves rather than only the controls and vulnerabilities mandated by compliance requirements.
How Microsoft has implemented its security solutions across Azure, Oracle, AWS, and on-premises environments, enabling a unified and comprehensive defense against threats for any enterprise.
Key benefits and outcome examples for some of our multi-cloud security projects, including improved detection capabilities, enhanced visibility across enterprise, efficiency, and cost savings.
Modernizing Azure Automation: A 2023 Retrospective and Future outlook
Most organizations are at different stages of their cloud adoption journey as they navigate public clouds, private clouds, and on-premises data centers. Their IT landscape is often characterized by multiple applications and services that are spread across diverse environments. Managing this complex landscape manually, or with multiple orchestration services, can be daunting and inefficient. Whether organizations are completely on-premises, exploring cloud solutions for the first time, or born in the cloud, all share a common goal: to enhance efficiency and agility. Orchestration has become indispensable for streamlining management tasks effectively, reducing cost, and allowing the business to focus on its core priorities.
Azure Automation has emerged as a pivotal service for managing complex hybrid environments by delivering a consistent user experience across multiple cloud platforms. Customers utilize Azure Automation for a variety of tasks, such as resource lifecycle management, mission-critical jobs that often require manual intervention, guest management at scale, and other common enterprise IT operations such as periodic maintenance. It targets orchestration across a wide array of resources, such as Virtual Machines, Arc-enabled Servers, Databases, Storage, Azure Active Directory, and Mailboxes, along with complex workflows involving many resources. Azure Automation provides a complete end-to-end solution that facilitates authoring of PowerShell and Python scripts, offers a serverless platform for executing those scripts, provides the flexibility to execute them on-premises or in a customer’s local environment, and monitors those executions comprehensively.
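As a sketch of what a simple Python runbook for a periodic-maintenance task can look like: Azure Automation captures a runbook's standard output as the job output stream, so a runbook can just print its findings. The hard-coded inventory and the 30-day staleness rule below are illustrative assumptions; a real runbook would query an Azure SDK or Azure CLI for the resource list:

```python
from datetime import datetime, timedelta, timezone

# Hypothetical inventory; in a real runbook this would come from an Azure SDK
# or Azure CLI call rather than a hard-coded list.
resources = [
    {"name": "vm-web-01", "last_used": datetime.now(timezone.utc) - timedelta(days=3)},
    {"name": "vm-batch-02", "last_used": datetime.now(timezone.utc) - timedelta(days=45)},
]

STALE_AFTER = timedelta(days=30)  # illustrative policy threshold

def find_stale(resources, now=None):
    """Return names of resources unused for longer than STALE_AFTER."""
    now = now or datetime.now(timezone.utc)
    return [r["name"] for r in resources if now - r["last_used"] > STALE_AFTER]

# Anything printed to stdout becomes part of the Automation job output stream.
for name in find_stale(resources):
    print(f"Stale resource candidate: {name}")
```

The same structure applies whether the runbook runs on the serverless platform or on a Hybrid Worker; only where it executes changes, not how it reports.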
A 2023 Retrospective
Azure Automation has made substantial investments in modernizing its platform and significantly improving the user experience over the past year, and it promises to continue delivering value to its customers in the years to come. Here is a summary of the key enhancements so far that have laid the foundation for even greater benefits in the future:
New runtime languages: PowerShell 7.2 and Python 3.8 runbooks are generally available. This enables developers and IT administrators to execute runbooks in the most popular scripting languages. Customers are adopting Azure Automation to consolidate their scripts distributed on-premises and across multiple clouds, gaining operational efficiency by managing their Azure and Arc-enabled resources through a consistent experience.
Support for Azure CLI commands: Now Azure CLI commands can be invoked in Azure Automation runbooks (preview). The rich command set of Azure CLI expands capabilities of runbooks even further, allowing you to reap combined benefits of both to automate and streamline resource management on Azure.
Advanced script authoring experience: Azure Automation extension for Visual Studio Code is Generally Available. It offers an advanced authoring and editing experience for PowerShell and Python scripts. The extension leverages GitHub Copilot for intelligent code completion that provides suggestions directly within the editor, thereby making the coding process faster and simpler.
Granular control through Runtime environment: Module management and runbook updates have never been so hassle-free! Runtime environment (preview) allows complete configuration of the job execution environment without worrying about mixing different module versions in a single Automation account. You can upgrade runbooks to newer language versions with minimal effort to stay secure and take advantage of the latest functionality. It is strongly recommended to use Runtime environment to update runbooks still on PowerShell 7.1 and Python 2.7, since both runtimes have reached end of support upstream.
Unified experience across diverse platforms: Hybrid Worker extension is Generally Available and supports Azure VMs, off-Azure servers registered as Arc-enabled servers, Arc-enabled SCVMM, and Arc-enabled VMware VMs. This empowers organizations to orchestrate their entire hybrid environment at scale through a single interface. You can directly install the extension on Azure or Arc-enabled servers and execute runbooks for a variety of scenarios. These include in-guest VM management, private access to other services from an Azure Virtual Network, and satisfying organizational restrictions on keeping data in the cloud.
State-of-the-art backend platform: Azure Automation has redesigned its platform, and the majority of runbooks now execute successfully on secure, modern Hyper-V containers. With this move, and additional measures taken to minimize infrastructure failures, the service has further hardened its security and improved reliability. These enhancements have established the groundwork for faster release of innovative features in the coming months. If your runbooks depend on the old platform and you observe unexpected job failures, take a look at the known issues and workarounds here.
Future outlook
Azure Automation is continuously evolving and enhancing its capabilities, striving to become the best-in-class platform for resource management in an adaptive cloud. It is providing organizations with more efficient and reliable ways to navigate across different services and applications residing in multiple clouds (on-premises data centers, private clouds, and public clouds). In addition to its ongoing commitments to strengthen security, reliability, resiliency and scale, Azure Automation is building critical features to further improve customer experience. Here are some of the improvements currently under development and expected to be released soon:
Aligning Runbook support with latest Runtime releases: Azure Automation is working actively to reduce the time gap between release of new PowerShell and Python language versions and their support in runbooks. Stay tuned for upcoming announcements on PowerShell 7.4!
Source control integration for new runtimes: You will soon be able to keep runbooks in sync with scripts in a GitHub or Azure DevOps source control repository. This feature simplifies promoting code that has been tested in the development environment to the production Automation account.
Native integration with Azure services: Azure Automation is already being used for creating runbooks that orchestrate across multiple resources. Keep an eye out for deeper integrations with more Azure resources for ease of management and to improve efficiency.
Richer Gallery of Runbooks: Improvements are planned in Runbook Gallery to help you search runbooks effortlessly for common scenarios and boost your productivity. Contribute to the community by sharing your scripts here.
Reminder for upcoming Retirements
Be sure to transition to the supported services/features prior to the retirement dates:
AzureRM PowerShell module will retire on 29 February 2024 and will be replaced by Az PowerShell module. Update your outdated runbooks immediately.
With the retirement of the Log Analytics agent, the following dependent services/features will retire on 31 August 2024. It is strongly recommended to migrate to the supported services before the retirement date:
Log Analytics agent-based Hybrid Runbook Worker will be retired in favor of extension-based Hybrid Runbook Worker. Learn more.
Azure Automation Update Management will be retired in favor of Azure Update Manager. Learn more.
Azure Automation Change Tracking & Inventory will be retired in favor of Change Tracking & Inventory with AMA. Learn more.
For any questions or feedback, please reach out to askazureautomation@microsoft.com
Microsoft Tech Community – Latest Blogs – Read More
Wired for Hybrid – What’s New in Azure Networking – January 2024 edition
Hello Folks,
Azure Networking is the foundation of your infrastructure in Azure. Each month we bring you an update on What’s new in Azure Networking.
In this blog post, we'll cover what's new with Azure Networking in January 2024, including the following announcements and how they can help you.
Standard and High-Performance VPN Gateway SKUs will be retired
Migration of Azure Virtual Network injected Azure Data Explorer cluster to Private Endpoints
Security Update for Azure Front Door and Application Gateway WAF
Prohibiting Domain Fronting with Azure Front Door and Azure CDN Standard from Microsoft
Simplified management of Listeners TLS certificates
Public preview: Private subnet
Enjoy!
Standard and High-Performance VPN Gateway SKUs will be retired
On 30 September 2025, Basic SKU public IP addresses will be retired in Azure. You can continue to use your existing Basic SKU public IP addresses until then; however, you will no longer be able to create new ones after 31 March 2025.
Standard SKU public IP addresses offer significant improvements, including:
Access to a variety of other Azure products, including Standard Load Balancer, Azure Firewall, and NAT Gateway.
Security by default—closed to inbound flows unless allowed by a network security group.
Zone-redundant and zonal front ends for inbound and outbound traffic.
If you have any Basic SKU public IP addresses deployed in Azure Cloud Services (extended support), those deployments will not be affected by this retirement, and you do not need to take any action for them. Because the Standard and High-Performance VPN Gateway SKUs accept only Basic SKU public IPs, those gateway SKUs will be retired along with them on 30 September 2025. Starting 1 December 2023, you will no longer be able to create a new gateway with these SKUs.
Recommended action: After December 2024, you will be able to upgrade your Standard/High-Performance gateway SKU to one of the other VPN Gateway SKUs available.
If you do not upgrade your gateway by August 2025, your gateway will be automatically upgraded to VPNGw1AZ (Standard) or VPNGw2AZ (High-Performance) after 30 September 2025.
Migration of Azure Virtual Network injected Azure Data Explorer cluster to Private Endpoints
An Azure Virtual Network injected Azure Data Explorer cluster is a cluster that is deployed into a subnet in your Virtual Network (VNet). This enables you to access the cluster privately from your Azure virtual network or on-premises, access resources such as Event Hubs and Azure Storage inside your virtual network and restrict inbound and outbound traffic.
Private Endpoint is a network interface that connects your ADX cluster to a private IP address within your VNet. Private endpoints enable you to connect to your ADX cluster using a private IP address within your VNet, without the need for public IP addresses.
Microsoft Azure has released a preview feature that allows users to migrate their VNet injected ADX cluster to Private Endpoints with minimal downtime and disruption. This migration is recommended as VNet injection has some limitations and drawbacks, such as increased complexity, reduced scalability, and dependency on public IP addresses.
The migration process is simple and can be done using the Azure portal, an ARM template, or any code that uses the ADX SDK. For more information on the migration process, prerequisites, and steps to follow, please refer to the detailed documentation article.
Resources:
Azure Data Explorer documentation
Migrate a Virtual Network injected cluster to private endpoints (Preview)
Microsoft Azure Data Fundamentals: Explore relational data in Azure
Data analysis in Azure Data Explorer with Kusto Query Language
Create dashboards in Azure Data Explorer
Security Update for Azure Front Door and Application Gateway WAF
Front Door and Application Gateway web application firewall (WAF) protects web applications from common vulnerabilities and exploits.
Azure-managed rule sets provide an easy way to deploy protection against a common set of security threats. Since such rule sets are managed by Azure, the rules are updated as needed to protect against new attack signatures.
Default rule set also includes the Microsoft Threat Intelligence Collection rules that are written in partnership with the Microsoft Intelligence team to provide increased coverage, patches for specific vulnerabilities, and better false positive reduction.
Customers also have the option of using rules defined in the Open Worldwide Application Security Project (OWASP) core rule sets 3.2, 3.1, 3.0, or 2.2.9.
At the end of December, we updated our Default Rule Set (DRS), and OWASP updated the Core Rule Set (CRS), to address the security vulnerability CVE-2023-50164 (an attacker can manipulate file upload parameters to enable path traversal, which under some circumstances can lead to uploading a malicious file used to perform remote code execution).
Prohibiting Domain Fronting with Azure Front Door and Azure CDN Standard from Microsoft
Domain fronting is a network technique that enables an attacker to conceal the actual destination of a request by sending traffic to a different domain in HTTP host header than the one used in the TLS/SSL handshake.
Azure Front Door and Azure CDN Standard from Microsoft (classic) protects against domain fronting occurring on domains hosted across different Azure subscriptions. The Server Name Indication (SNI) in TLS/SSL handshake and HTTP host header, whether they are the same or different, must be configured under the same Azure subscription.
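As an illustration of this policy only (not Azure Front Door's actual implementation), a hypothetical check would compare the subscription behind the SNI hostname with the one behind the Host header; the hostnames and subscription mapping below are invented for the example:

```python
# Illustrative sketch -- NOT Azure Front Door's real enforcement logic.
# Assumes a hypothetical mapping from hostname to the Azure subscription
# under which that hostname's endpoint is configured.

def is_domain_fronting(sni: str, host_header: str, subscription_of: dict) -> bool:
    """Flag a request whose TLS SNI and HTTP Host header belong to
    endpoints configured under different subscriptions."""
    if sni == host_header:
        return False  # identical hostnames can never front
    sub_sni = subscription_of.get(sni)
    sub_host = subscription_of.get(host_header)
    # Unknown hostnames are treated conservatively as a mismatch.
    return sub_sni is None or sub_host is None or sub_sni != sub_host

# Hypothetical configuration: two endpoints in one subscription, one in another.
endpoints = {
    "app.contoso.com": "sub-A",
    "api.contoso.com": "sub-A",
    "cdn.fabrikam.com": "sub-B",
}

print(is_domain_fronting("app.contoso.com", "api.contoso.com", endpoints))  # False: same subscription
print(is_domain_fronting("app.contoso.com", "cdn.fabrikam.com", endpoints))  # True: cross-subscription
```

The key point the blocking rule enforces is the second case: mismatched SNI and Host header hostnames are allowed only when both are configured under the same subscription.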
Starting from January 22, 2024, all existing Azure Front Door and Azure CDN Standard from Microsoft (classic) resources will block any HTTP request that exhibits domain fronting behavior. The enforcement of blocking changes may require up to two weeks to propagate to the global PoPs (points of presence).
To help identify if an Azure Front Door or Azure CDN from Microsoft (classic) resources display domain fronting behavior, two new log fields will be available on December 25, 2023.
Resources:
Prohibiting Domain Fronting with Azure Front Door and Azure CDN
Azure Networking Blog – Microsoft Community Hub.
Simplified management of Listeners TLS certificates
If you use Application Gateway, you know that TLS termination for HTTPS traffic can be done on the gateway to take the burden off the backend resources. Given you may have a large number of backend resources with different hostnames (FQDNs), this can be challenging to manage. Traditionally, it could only be done with Azure PowerShell or Azure CLI.
Now you can manage all your TLS certificates for Application Gateway through the Azure portal:
Key Features include:
Quick listing
Certificate information
Bulk Operations
Resources:
Simplified management of Listeners TLS certificates
Public preview: Private subnet
Now customers will be able to create custom private subnets in Azure for their resources.
Currently, when virtual machines are created in a virtual network without any explicit outbound connectivity, they are assigned a default outbound public IP address. These implicit IPs are subject to change, not associated with a subscription, difficult to troubleshoot, and do not follow Azure’s model of “secure by default,” which ensures customers have strong security without additional steps needed. (The deprecation of this type of implicit connectivity was recently announced and is scheduled for September 2025.)
The private subnet feature will let you prevent this insecure implicit connectivity for any newly created subnets by setting the “default outbound access” parameter to false. You can then pick your preferred method for explicit outbound connectivity to the internet.
How do you implement a private subnet and turn off default outbound access?
Utilize Private Subnet parameter
Add the Private subnet feature at creation
Add an explicit outbound connectivity method
NAT Gateway
Standard LB
Standard Public IP
Use Flexible orchestration mode for Virtual Machine Scale Sets
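As a sketch of the first step above, the private subnet setting corresponds to the subnet's `defaultOutboundAccess` ARM property. The `apiVersion` and surrounding values here are assumptions for illustration; verify them against the "Default outbound access in Azure" documentation linked below before use:

```json
{
  "type": "Microsoft.Network/virtualNetworks/subnets",
  "apiVersion": "2023-09-01",
  "name": "myVNet/myPrivateSubnet",
  "properties": {
    "addressPrefix": "10.0.1.0/24",
    "defaultOutboundAccess": false
  }
}
```

With this in place, VMs in the subnet get no implicit outbound IP, so you must attach one of the explicit methods listed above (NAT Gateway, Standard Load Balancer, or a Standard Public IP).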
Resources:
Default outbound access in Azure
How can I transition to an explicit method of public connectivity (and disable default outbound access)?
That’s it for this month. Happy 2024! (it’s January… I can still say that. Right?!?)
Cheers
Pierre
ICYMI | Great article on Azure Cognitive Services & Azure Machine Learning Cost Analysis
Azure Cognitive Services & Azure Machine Learning Cost Analysis
This document serves as an essential guide for Independent Software Vendors (ISVs) to navigate the complexities of cost management associated with Azure Cognitive Services, focusing on Azure OpenAI and Azure Machine Learning. It adopts a structured approach, examining costs across different project phases—Development, Testing, and Production—to provide a comprehensive view of financial implications at each stage. More than just listing prices, this research explains them, linking to official Azure documentation for accuracy, and offering practical tips and strategies for cost optimization. It’s crafted to assist both developers and CTOs in making informed decisions, balancing technological innovation with budget constraints. This is your go-to resource for understanding and managing the costs of Azure’s advanced cognitive services.
Automatic Image Creation using Azure VM Image Builder is now generally available!
We’re happy to announce automatic image creation using Azure Image Builder is now generally available. This feature improves your speed and efficiency by allowing you the ability to start image builds for new base images automatically.
Automatic image creation is critical for keeping your images up-to-date and secure. It also minimizes the manual steps required for managing individual security and image update requirements.
You no longer have to manually update images that have been patched. Instead, you can create ‘triggers’ for the images you wish to update automatically and allow the Azure Image Builder service to perform the build for you.
Getting started
You can get started using the auto image creation feature by following the instructions provided in the documentation: How to use Azure Image Builder triggers to set up an automatic image build.
Feedback
If you have questions or feedback, please reach out to me at kofiforson@microsoft.com.
ProcDump 3.1 for Linux
Microsoft Credentials roundup: In-demand news for in-demand skills
At Microsoft Learn, we’re inspired every day to empower our learners on their skill-building journeys, whether they’re discovering how to use the latest technology, earning Microsoft Credentials, making a career move—or all of the above. To support and guide your changing skilling needs, we’re introducing a series of blog posts that highlight our credentials portfolio updates. We invite you to follow this series over the coming months for ongoing news as we evolve our credentials offerings. Our goal is to provide you with the technical skills necessary to excel in your training and career endeavors.
In this article
Validate your tech skills with the latest Microsoft Credentials
Highlight your abilities with Microsoft Applied Skills
Explore new scenarios
Discover new language offerings
Make the most of Microsoft Cloud Skills Challenges
Prove you’re ready for in-demand job roles with Microsoft Certifications
Earn new certifications with beta exams for Fabric and Dynamics 365 Business Central
Find out how certification and exam retirements make way for new opportunities
Take charge of your career with Microsoft Credentials
Validate your tech skills with the latest Microsoft Credentials
As emerging technologies like AI rapidly evolve to meet business needs, more organizations are turning to a skills-first approach for finding the right talent—both in-house and externally. Microsoft Credentials, including our new Applied Skills and industry-recognized Microsoft Certifications, support that approach.
Highlight your abilities with Microsoft Applied Skills
Many learners have already taken the opportunity to earn Applied Skills. Because these credentials validate skills related to real-world technical scenarios, they’re also proving to be very popular with employers. Customers have told us that task-oriented skill-building and accreditation are effective for quickly applying competencies aimed at the solution components in their projects. For the latest offerings and details:
Read Announcing Microsoft Applied Skills, the new credentials to verify in-demand technical skills.
Watch Explore Microsoft Applied Skills.
Explore new scenarios
Released on January 17, 2024
We recently released the following Applied Skills:
Deploy cloud-native apps using Azure Container Apps
Develop generative AI solutions with Azure OpenAI Service
Train and deploy a machine learning model with Azure Machine Learning
Build collaborative apps for Microsoft Teams
Create and manage model-driven apps with Power Apps and Dataverse
Coming soon
We look forward to offering new scenarios for implementing data lakehouses, data warehouses, and real-time analytics solutions with Microsoft Fabric.
To see the complete portfolio, check out our Applied Skills credentials poster.
Discover new language offerings
In other Applied Skills news, if your preferred language is Brazilian Portuguese, Simplified Chinese, English, French, German, Japanese, or Spanish, we’re pleased to share that the following credentials are now available in those languages:
Build a natural language processing solution with Azure AI Language
Build an Azure AI Vision solution
Configure secure access to your workloads using Azure networking
Configure SIEM security operations using Microsoft Sentinel
Create an intelligent document processing solution with Azure AI Document Intelligence
Create and manage automated processes by using Power Automate
Create and manage canvas apps with Power Apps
Deploy and configure Azure Monitor
Deploy containers by using Azure Kubernetes Service
Develop an ASP.NET Core web app that consumes an API
Migrate SQL Server workloads to Azure SQL Database
Secure Azure services and workloads with Microsoft Defender for Cloud regulatory compliance controls
Secure storage for Azure Files and Azure Blob Storage
Available in multiple languages as of January 24, 2024
Build collaborative apps for Microsoft Teams
Create and manage model-driven apps with Power Apps and Dataverse
Deploy cloud-native apps using Azure Container Apps
Implement security through a pipeline using Azure DevOps
If the language set in your browser is one of those listed, your assessment will be presented in that language.
Make the most of Microsoft Cloud Skills Challenges
Complete a Microsoft Cloud Skills Challenge with 30 Days to Learn It, which provides an engaging experience to help you prepare for an Applied Skills assessment or certification exam. Check out the challenges for:
Azure AI Document Intelligence
Azure AI Language
Azure AI Vision
Create Power Platform Solutions with AI and Copilot
Generative AI with Azure OpenAI
After earning your Microsoft-verified credential, you can elevate your profile across your professional network by sharing the news of your new credentials on LinkedIn, leaving little doubt about your skills and expertise.
Prove you’re ready for in-demand job roles with Microsoft Certifications
Microsoft Certifications validate technical proficiency for in-demand job roles in infrastructure, data and AI, digital apps and innovation, Modern Work, business applications, and security. For all the latest offerings and details:
Watch Explore Microsoft Certifications.
Check out our Microsoft Certifications poster.
Earn new certifications with beta exams for Fabric and Dynamics 365 Business Central
The new Microsoft Certified: Fabric Analytics Engineer Associate certification validates that you have the broad technical expertise to transform data into reusable analytics assets by using Microsoft Fabric components. And it proves your expertise in designing, creating, and deploying enterprise-scale data analytics solutions. To earn this certification, pass Exam DP-600: Implementing Analytics Solutions Using Microsoft Fabric, currently in beta. For more details, read Validate your skills with our new certification for Microsoft Fabric Analytics Engineers and then take the beta exam.
The new Microsoft Certified: Microsoft Dynamics 365 Business Central Developer Associate certification offers you the opportunity to prove your skills in designing, developing, testing, and maintaining solutions, along with your ability to integrate Business Central with other applications, such as Microsoft Power Platform apps. To earn this certification, pass Exam MB-820: Microsoft Dynamics 365 Business Central Developer, currently in beta. For specifics, read Validate your skills: New certification for Dynamics 365 Business Central Developers and then take the beta exam.
Find out how certification and exam retirements make way for new opportunities
Microsoft Fabric—the all-in-one analytics solution that covers everything from data movement to data science—has enabled the role of enterprise data analyst to evolve into that of analytics engineer. As a result, effective April 30, 2024, we’ll retire the Microsoft Certified: Azure Enterprise Data Analyst Associate certification and Exam DP-500: Designing and Implementing Enterprise-Scale Analytics Solutions Using Microsoft Azure and Microsoft Power BI. Enterprise data analysts can now earn the Fabric Analytics Engineer Associate certification by passing Exam DP-600.
In other news, Microsoft Power Platform app makers have new opportunities to demonstrate skills in specific scenarios relevant to the work that they do every day, such as automating business processes with Power Automate and creating apps with Power Apps, with our new Applied Skills credentials. As a result, effective June 30, 2024, we’ll retire the Microsoft Certified: Power Platform App Maker Associate certification and Exam PL-100: Microsoft Power Platform App Maker.
Take charge of your career with Microsoft Credentials
You don’t have to choose between Microsoft Certification and Applied Skills. In fact, combining both types of Microsoft Credentials can help you maximize the potential to achieve your goals. For example, if you want to validate your skills for specific projects that you’re working on related to Microsoft Fabric, like implementing a data lakehouse, a data warehouse, or real-time analytics, or if you’re preparing for the exam, you can start by earning the Applied Skills that cover these topics that are coming soon.
Alternatively, after you’ve earned the certification, you can demonstrate that you have skills needed for specific projects related to Fabric by earning one of the related Applied Skills, when available.
If you’re trying to decide which type of credential suits your current needs, career goals, skill set, and experience, check out Choose your Microsoft Credential.
We hope that this Microsoft Credentials roundup has inspired you to continue your learning journey and to pursue credentials—whether Microsoft Certifications for broader validation of your ability to fill particular job roles or Applied Skills for scenario-based validation of your specific tech skills. In today’s ever-changing business environment, both can help you succeed in your chosen profession. These complementary credentials can help you take charge of your career and give you the tools you need to become indispensable.
Follow us on X and LinkedIn, and make sure you’re subscribed to The Spark, our LinkedIn newsletter.
Azure Cognitive Services & Azure Machine Learning Cost Analysis
Azure Cognitive Services & Azure Machine Learning Cost Analysis
This document serves as an essential guide for Independent Software Vendors (ISVs) to navigate the complexities of cost management associated with Azure Cognitive Services, focusing on Azure OpenAI and Azure Machine Learning. It adopts a structured approach, examining costs across different project phases—Development, Testing, and Production—to provide a comprehensive view of financial implications at each stage. More than just listing prices, this research explains them, linking to official Azure documentation for accuracy, and offering practical tips and strategies for cost optimization. It’s crafted to assist both developers and CTOs in making informed decisions, balancing technological innovation with budget constraints. This is your go-to resource for understanding and managing the costs of Azure’s advanced cognitive services.
Read on for a detailed exploration of Azure Cognitive Services costs and how to smartly navigate them.
Introduction
Overview of the Research Objective
Empowering ISV Developers and CTOs: This research is designed to equip developers and Chief Technology Officers (CTOs) in the Independent Software Vendor (ISV) sector with a deep understanding of the cost structures associated with Azure Cognitive Services. The focus is specifically on Azure OpenAI (including models like Ada, GPT, and DALL-E) and Azure Machine Learning.
Understanding Cost Calculations: We aim to clarify how costs are calculated for various Azure OpenAI models and Azure Machine Learning. This will include an examination of factors that influence costs, usage patterns, and the implications of scaling.
Interpreting Pricing Information: Rather than merely presenting pricing details, our goal is to interpret and explain these aspects, providing actionable insights for effective budgeting and cost planning. This includes linking to official documentation for accuracy and offering resources for a more comprehensive understanding.
Facilitating Informed Decision Making: The ultimate aim is to demystify the cost aspects of these Azure services, thereby enabling ISV professionals to make informed decisions, plan budgets efficiently, and conduct thorough audits of their investments in Azure Cognitive Services.
Importance for Customers
Understanding Costs is Key: For software companies like ISVs, knowing how much they will spend on Azure Cognitive Services, including OpenAI and Machine Learning, is very important. This helps them use these services wisely without overspending.
Real-Life Scenarios and Keeping a Balance: For example, a company might build a small PoC project with Azure OpenAI’s GPT model and find it works well. However, if they’re not clear about the costs for larger-scale use, they might end up spending more than planned. This research guides them in understanding these costs and maintaining a balance between using new technologies and staying within budget. The aim is to understand these costs as part of preparing the project for production.
Planning for Growth: As companies grow and use more Azure services, their costs can go up. This research helps them see how costs change with growth, allowing them to plan better.
Making Smart Decisions: This research provides ISVs with essential cost information. This helps them make wise choices about using Azure Cognitive Services, balancing their business needs with their budget.
Azure OpenAI Pricing
Azure OpenAI charges are primarily based on token usage, with variations depending on the model and service used. A token is roughly equivalent to 4 characters or ¾ of a word, meaning 1,000 tokens represent approximately 750 words. This token-based billing applies to both the input (prompt) and output (response) of the models.
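As a back-of-the-envelope sketch of this billing model, per-call cost can be estimated from the 4-characters-per-token rule of thumb. The prices used below are placeholders, not actual Azure rates; always check the Azure OpenAI Service pricing page for current numbers:

```python
# Rough token-cost estimator based on the rule of thumb above
# (1 token ~ 4 characters, so 1,000 tokens ~ 750 words).
# The per-1K-token prices are PLACEHOLDERS, not current Azure rates.

def estimate_tokens(text: str) -> int:
    """Approximate token count from character length (4 chars per token)."""
    return max(1, round(len(text) / 4))

def estimate_cost(prompt: str, response: str,
                  price_in_per_1k: float, price_out_per_1k: float) -> float:
    """Estimate one call's cost: input and output tokens are billed separately."""
    cost_in = estimate_tokens(prompt) / 1000 * price_in_per_1k
    cost_out = estimate_tokens(response) / 1000 * price_out_per_1k
    return cost_in + cost_out

# Example with placeholder prices of $0.001/1K input and $0.002/1K output tokens:
prompt = "Summarize our Q4 restaurant reviews."
response = "x" * 4000  # stand-in for a ~1,000-token response
print(f"~${estimate_cost(prompt, response, 0.001, 0.002):.6f} for this call")
```

Multiplying the per-call estimate by expected monthly call volume gives a first-order budget figure for each model under consideration.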
Language Models
Models: GPT-3.5-Turbo 4K, GPT-3.5-Turbo 16K, GPT-4 8K, GPT-4 32K.
Charging Mechanism: Per 1,000 tokens.
Base Models
Models: Babbage-002, Davinci-002.
Charging Mechanism: Per 1,000 tokens.
Fine-tuning Models
In Azure OpenAI, fine-tuning allows customers to tailor models (such as Babbage-002, Davinci-002, GPT-3.5-Turbo) to their specific needs by training them on a custom dataset. The cost structure for fine-tuning models is multi-faceted:
Models: Babbage-002, Davinci-002, GPT-3.5-Turbo.
Charging Mechanism: Costs are incurred in three main areas:
Training: Billed per compute hour during the training of the model on custom data.
Hosting: Charged per hour for hosting the fine-tuned model. It’s important to note that hosting costs accrue continuously, regardless of whether the model is actively processing requests or not. This can result in significant expenses, especially if the model is hosted but not used frequently.
Token Usage: Billed per 1,000 tokens for both input and output. This is similar to other Azure OpenAI services.
A critical aspect to consider with fine-tuning models is the hosting cost. Even if there are no calls to the model, the hosting charges continue, which can add up quickly. Additionally, deploying a fine-tuned model often requires a minimum number of nodes, leading to a baseline cost that is incurred regardless of usage intensity. This aspect makes it crucial for customers to carefully plan and manage their usage, ensuring that the model is hosted only when necessary and is optimally scaled according to the demand.
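To see why idle hosting can dominate the bill, here is a hedged sketch with hypothetical rates (the hosting and token prices are invented for illustration; consult the Azure OpenAI pricing page for real figures):

```python
# Illustrative monthly cost sketch for a fine-tuned model deployment.
# All rates are hypothetical placeholders. The key point: hosting is
# billed per hour whether or not the model is ever called.

HOURS_PER_MONTH = 730  # average hours in a month

def monthly_fine_tune_cost(hosting_rate_per_hour: float,
                           calls_per_month: int,
                           avg_tokens_per_call: int,
                           token_price_per_1k: float) -> dict:
    hosting = hosting_rate_per_hour * HOURS_PER_MONTH
    tokens = calls_per_month * avg_tokens_per_call / 1000 * token_price_per_1k
    return {"hosting": hosting, "tokens": tokens, "total": hosting + tokens}

# A lightly used deployment: ~$1,241 of hosting vs ~$1 of token charges,
# so nearly all of the spend is for idle hosting.
light = monthly_fine_tune_cost(hosting_rate_per_hour=1.70,
                               calls_per_month=500,
                               avg_tokens_per_call=1000,
                               token_price_per_1k=0.002)
print(f"hosting ${light['hosting']:.0f}, tokens ${light['tokens']:.0f}")
```

This is why the guidance above stresses hosting the model only when necessary and scaling the deployment to actual demand.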
Image and Embedding Models
DALL-E and Ada: Charged per 100 images and per 1,000 tokens, respectively.
Speech Models
Whisper: Charged per hour, irrespective of audio length processed.
For detailed pricing information, visit the Azure OpenAI Service Pricing (Opens in new window or tab) page.
Azure Machine Learning Pricing: General Costs
Services
Azure Container Registry: Manages and stores private Docker container images.
Block Blob Storage: Stores large amounts of unstructured data, such as datasets.
Key Vault: Securely stores and accesses secrets like keys and tokens.
Application Insights: Provides analytics and telemetry for application performance monitoring.
Compute Instances
Purpose: Tailored for development and testing in Azure Machine Learning.
Billing: Charged for the duration the VM is running. Can be started and stopped as needed.
Specialization: Designed specifically for machine learning workloads and integrated into the AML workspace.
VMs and Other Resources
General-Purpose VMs
Billing: Charged on an hourly basis. Billing is continuous as long as the VM is operational, irrespective of the level of activity or workload running on it.
Usage: Essential for running machine learning models, training algorithms, or hosting applications. The choice of VM size and capacity should align with the computational needs of the specific machine learning tasks to optimize cost-efficiency.
Load Balancers
Billing: Load Balancers in Azure are typically billed based on the number of configured rules and the amount of data processed. The first five rules are charged at a fixed rate per hour, with additional rules incurring extra charges. Note that a partial hour of usage is billed as a full hour.
Function: Crucial for distributing incoming network traffic across multiple servers or VMs. This ensures high availability and reliability by spreading the load, which is particularly important in scenarios where machine learning applications require high uptime and consistent performance.
Data Processing Charges: The cost also includes the amount of data processed, both inbound and outbound, which is an important factor to consider for machine learning applications that may process large volumes of data.
For more detailed and up-to-date pricing information, refer to the Azure Load Balancer Pricing page.
Note
“Compute Instances” are specialized for machine learning tasks and are integrated into the AML workspace, billed based on usage. “VMs and Other Resources” encompass a broader range of VMs and additional services like Load Balancers, each with their specific billing models.
Cost Analysis in Development & Test Phase
Azure offers a free tier for Cognitive Services, beneficial for experimenting during the development phase (Azure Free Tier Information).
Effective cost management is crucial, with tools like Azure Pricing Calculator and Azure Cost Analysis helping monitor and plan pricing needs (Cost Management Strategies).
Optimizing resource usage involves strategies such as managing separate resources for individual Cognitive Services components for granular cost tracking and control (Resource Management Tips).
Azure Dev Test Subscriptions offer discounted rates on services for development and testing (Azure Dev Test Subscriptions).
Implementing strategies like auto shutdown/startup during off-hours and autoscaling resources based on usage patterns can lead to significant cost savings (Right Sizing and Shutdowns).
In the testing phase, consider using mock data or simulations for cost-effective testing, stress testing and performance monitoring to understand service performance under different loads, and utilizing separate environments or Azure’s sandbox features to test services (Testing Strategies for Azure Services).
Various payment options for VMs, such as pay-as-you-go and reserved instances, offer flexibility in managing costs to suit different workload requirements and budgets (Cost Control Options).
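To make the auto-shutdown point concrete, a quick back-of-the-envelope calculation (with illustrative schedules, in Python) shows how much of a VM's monthly compute hours a business-hours-only schedule avoids:

```python
def monthly_vm_hours(hours_per_day: float, days_per_week: int) -> float:
    """Approximate billable VM hours per month (~4.345 weeks/month)."""
    return hours_per_day * days_per_week * 4.345

# Hypothetical schedules: always-on vs. 12 hours/day on weekdays only
always_on = monthly_vm_hours(24, 7)        # ~730 hours
business_hours = monthly_vm_hours(12, 5)   # ~261 hours
savings = 1 - business_hours / always_on
print(f"Auto-shutdown saves ~{savings:.0%} of compute hours")
```

Under these assumed schedules the shutdown policy avoids roughly two thirds of the always-on hours, which translates directly into cost for hourly-billed VMs.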
Cost Management in Production Phase
In the production phase, ISVs can leverage insights and strategies developed in earlier phases for effective cost management:
Leverage Forecasting Insights: Utilize usage forecasts developed during the development and testing phases to anticipate and plan for scaling needs and associated costs.
Optimize Based on Testing Data: Apply performance and cost optimization strategies identified during testing to enhance efficiency in the production environment.
Continuous Monitoring and Adjustment: Implement ongoing cost monitoring and optimization strategies, using tools such as Azure Cost Management, to adjust resources and strategies in response to actual usage and performance data.
Utilize Azure Reserved Instances: For predictable and steady workloads identified through earlier analysis, consider Azure Reserved Instances for cost savings.
Implement Cost Allocation and Tagging: Extend cost allocation and tagging practices from earlier phases to maintain granular control over expenses and facilitate detailed reporting in production.
These strategies help in transitioning smoothly from development and testing to a cost-effective production environment.
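As a sketch of the reserved-versus-pay-as-you-go tradeoff mentioned above, the following Python snippet estimates the utilization level at which a reservation breaks even; both prices are hypothetical placeholders, not actual Azure rates:

```python
def breakeven_utilization(payg_hourly: float, reserved_monthly: float,
                          hours_in_month: float = 730) -> float:
    """Fraction of the month a VM must run for a flat monthly reservation
    to cost less than pay-as-you-go at the given hourly rate."""
    return reserved_monthly / (payg_hourly * hours_in_month)

# Hypothetical: $0.40/hr pay-as-you-go vs. $175/month reserved
frac = breakeven_utilization(0.40, 175.0)
print(f"Reservation pays off above ~{frac:.0%} utilization")
```

Workloads identified as steady during testing will typically sit well above such a breakeven point, which is the case where reservations save money.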
Conclusion
Summary of Findings
This analysis breaks down the costs of Azure Cognitive Services and Azure Machine Learning, offering ISVs a clear guide to managing the financial aspects of these services. Key findings are:
Cost Structures Across Phases: The document elaborates on the different cost structures during Development, Testing, and Production phases, offering a thorough understanding of financial implications at each stage.
Target Audience: Specifically designed for ISVs, including developers and Chief Technology Officers (CTOs), the guide offers deep insights into Azure OpenAI and Azure Machine Learning’s pricing models and cost calculation methods.
Practical and Actionable Insights: Beyond presenting raw pricing details, the document interprets and explains these aspects, thus providing ISVs with actionable insights for effective budgeting and cost planning.
Importance of Cost Management: It underscores the significance of cost management for ISVs, especially in balancing the use of innovative technologies like Azure Cognitive Services with budget limitations.
Final Recommendations
Based on the findings, the following recommendations are made to ISVs:
Informed Decision-Making: Utilize the insights provided in this guide to make informed decisions about investments in Azure Cognitive Services and Azure Machine Learning. Understanding the nuances of cost calculations and pricing models is crucial for effective financial planning.
Optimization Strategies: Implement the cost optimization strategies outlined in this document. This includes leveraging Azure’s pricing calculator, employing cost management tools, and optimizing resource usage based on the project phase.
Balancing Innovation and Cost: Maintain a balance between adopting technological innovations and adhering to budget constraints. This balance is essential for the sustainable growth and competitiveness of ISVs in the technology sector.
Continuous Monitoring and Adjustment: Engage in ongoing monitoring and adjustment of strategies, using tools like Azure Cost Management. This will help in adapting to changing requirements and optimizing costs in real-time.
In conclusion, ISVs are encouraged to actively apply the insights and recommendations from this analysis to manage their investments in Azure services effectively, ensuring that their technological advancements are both impactful and financially viable.
References and Resources
Azure OpenAI Service Detailed Pricing
Azure Load Balancer Pricing Information
Information on Azure Free Tier
Strategies for Managing Azure Cognitive Services Costs
Tips for Managing Resources in Azure Cognitive Services
Azure Dev Test Subscriptions and Cost Savings
Guidance on Right Sizing and Shutdowns in Azure
Testing Strategies for Azure Services Documentation
Options for Cost Control in Azure Services
Microsoft Tech Community – Latest Blogs – Read More
Rehosting On-Prem Process Automation when migrating to Azure
Many enterprises seek to migrate on-premises IT infrastructure to the cloud for cost optimization, scalability, and enhanced reliability. A key aspect of this modernization is transitioning automated processes from on-premises environments, where tasks are typically automated with scripts (PowerShell or Python) and tools such as Windows Task Scheduler or System Center Service Management Automation (SMA).
This blog showcases successful transitions of customers’ automated processes to the cloud with Azure Automation, emphasizing script re-use and modernization through smart integrations with complementary Azure products. Using runbooks written in PowerShell or Python, the platform supports PowerShell 5.1 and PowerShell 7.2. To learn more, see the Azure Automation documentation.
Additionally, Azure Automation provides managed identity authentication, eliminating the need to manage certificates and credentials while rehosting. Azure Automation safeguards keys and passwords by wrapping the encryption key with a customer-managed key associated with Key Vault. Integration with Azure Monitor, coupled with Automation’s native job logs, equips customers with advanced monitoring and error/failure management. The platform efficiently manages long-running scripts in the cloud or on-premises, with resource-limit options via the Hybrid Runbook Worker. The Hybrid Runbook Worker also lets you automate workloads outside Azure while retaining the benefits of Azure Automation runbooks.
Rehosting on-premises operations with minimal effort covers the scenarios listed below; additional effort involves modernizing scripts for cloud-native management of secrets, certificates, logging, and monitoring.
State configuration management – Monitor state changes in the infrastructure and generate insights/alerts for subsequent actions.
Build, deploy and manage resources – Deploy virtual machines across a hybrid environment using runbooks. This is not entirely serverless and requires relatively higher manual effort in rehosting.
Periodic maintenance – execute tasks that need to run at set intervals, such as:
Purging stale data or reindexing a SQL database
Checking for orphaned computers and users in Active Directory
Sending Windows Update notifications
Respond to alerts – Orchestrate a response when cost-based (e.g. VM cost consumption), system-based, service-based, and/or resource utilization alerts are generated.
Specifically, here are some scenarios of managing state configuration of the M365 suite, where our customer rehosted on-premises PowerShell scripts to the cloud with Azure Automation.
Scenarios for State Configuration Management of M365 Suite
User Permission & access control management
Mailbox alerts configuration
Configuring SharePoint sites availability
Synchronizing Office 365 with internal applications
Example: Rehosting User Permission & access control management in M365 mailboxes
Here is how one customer rehosted a heavy, monolithic PowerShell script to Azure. The objective of the job was to identify:
List of shared mailboxes –> list of permissions existing for these mailboxes –> users & groups mapped to the mailboxes –> list of permissions granted (& modified over time) to these users/groups –> final output with a view of Mailbox Id, Groups, Users, Permissions provided, Permissions modified (with timestamps).
1. Get shared mailboxes
###########################################
# Get Shared Mailboxes
###########################################
$forSharedMailboxes = @{
Properties = "GrantSendOnBehalfTo"
RecipientTypeDetails = "SharedMailbox"
ResultSize = "Unlimited"
}
$sharedMailboxes = Get-EXOMailbox @forSharedMailboxes
2. Obtain shared Mailbox permissions
###########################################
# Get Shared Mailbox Permissions
###########################################
$sharedMailboxesPermissions = foreach ($sharedMailbox in $sharedMailboxes) {
# ——————————————————————————————————-
# Get Send As Permissions
# ——————————————————————————————————-
try {
$forTheSharedMailbox = @{
Identity = $sharedMailbox.Identity
ResultSize = "Unlimited"
}
$recipientPermissions = @(Get-EXORecipientPermission @forTheSharedMailbox)
$recipientPermissions = $recipientPermissions.Where({ $_.Trustee -ne "NT AUTHORITY\SELF" })
$recipientPermissions = $recipientPermissions.Where({ $_.Trustee -notlike "S-1-5-21*" })
if ($recipientPermissions) {
foreach ($recipientPermission in $recipientPermissions) {
[SharedMailboxPermission]@{
MailboxDisplayName = $sharedMailbox.DisplayName
MailboxEmailAddresses = $sharedMailbox.EmailAddresses
MailboxId = $sharedMailbox.Id
MailboxUserPrincipalName = $sharedMailbox.UserPrincipalName
Permission = $recipientPermission.AccessRights
PermissionExchangeObject = $recipientPermission.Trustee
}
}
}
}
catch {
Write-Warning "Failed to get Send As permissions for $($sharedMailbox.Identity)."
continue
}
3. User & groups mapped to the mailboxes
###########################################
# Get Entra and Exchange User Objects
###########################################
$forEntraAndExchangeUserObjects = @{
Connection = $forTheSharedMailboxGovernanceSite
Identity = $entraAndExchangeUserObjectListRelativeUrl
}
$userObjectsList = Get-PnPList @forEntraAndExchangeUserObjects
$fromTheEntraAndExchangeUserObjectsList = @{
Connection = $forTheSharedMailboxGovernanceSite
List = $userObjectsList
PageSize = 5000
}
$userObjectsListItems = (Get-PnPListItem @fromTheEntraAndExchangeUserObjectsList).FieldValues
###########################################
# Get Entra and Exchange Group Objects
###########################################
$forEntraAndExchangeGroupObjects = @{
Connection = $forTheSharedMailboxGovernanceSite
Identity = $entraAndExchangeGroupObjectListRelativeUrl
}
$groupObjectsList = Get-PnPList @forEntraAndExchangeGroupObjects
$fromTheEntraAndExchangeGroupObjectsList = @{
Connection = $forTheSharedMailboxGovernanceSite
List = $groupObjectsList
PageSize = 5000
}
$groupObjectsListItems = (Get-PnPListItem @fromTheEntraAndExchangeGroupObjectsList).FieldValues
4. List of permissions granted (& modified over time) to these users/groups
# ——————————————————————————————————-
# Get Full Access Permissions
# ——————————————————————————————————-
try {
$forTheSharedMailbox = @{
Identity = $sharedMailbox.Identity
ResultSize = "Unlimited"
}
$mailboxPermissions = @(Get-EXOMailboxPermission @forTheSharedMailbox)
$mailboxPermissions = $mailboxPermissions.Where({ $_.User -ne "NT AUTHORITY\SELF" })
$mailboxPermissions = $mailboxPermissions.Where({ $_.User -notlike "S-1-5-21*" })
if ($mailboxPermissions) {
foreach ($mailboxPermission in $mailboxPermissions) {
[SharedMailboxPermission]@{
MailboxDisplayName = $sharedMailbox.DisplayName
MailboxEmailAddresses = $sharedMailbox.EmailAddresses
MailboxId = $sharedMailbox.Id
MailboxUserPrincipalName = $sharedMailbox.UserPrincipalName
Permission = $mailboxPermission.AccessRights
PermissionExchangeObject = $mailboxPermission.User
}
}
}
}
catch {
Write-Warning "Failed to get Full Access permissions for $($sharedMailbox.Identity)."
continue
}
# ——————————————————————————————————-
# Get Send On Behalf Of Permissions
# ——————————————————————————————————-
$grantSendOnBehalfToPermissions = @($sharedMailbox.GrantSendOnBehalfTo)
$grantSendOnBehalfToPermissions = $grantSendOnBehalfToPermissions.Where({ $_ -notlike "S-1-5-21*" })
if ($grantSendOnBehalfToPermissions) {
foreach ($grantSendOnBehalfToPermission in $grantSendOnBehalfToPermissions) {
[SharedMailboxPermission]@{
MailboxDisplayName = $sharedMailbox.DisplayName
MailboxEmailAddresses = $sharedMailbox.EmailAddresses
MailboxId = $sharedMailbox.Id
MailboxUserPrincipalName = $sharedMailbox.UserPrincipalName
Permission = "SendOnBehalfOf"
PermissionExchangeObject = $grantSendOnBehalfToPermission
}
}
}
}
As the customer modernized from on-premises to Azure via Azure Automation, the following list captures the aspects that had to be updated. The changes were mostly improvements in experience, with Azure Automation leveraging smart integrations with other Azure capabilities and little to no reliance on custom scripts.
Set up logging & monitoring – In the on-premises setup, customers authored custom scripts for logging, which is no longer needed with Azure Automation. Customers used the in-portal Azure Monitor integration to forward logs to Azure Monitor, query logs, and set up alerts for insights.
Handling certificate authentication – Managed identity-based authentication removes the need to embed credentials in code and rotate them manually. Azure Automation supports configuring a managed identity both via PowerShell script and through the built-in portal experience.
Storing passwords and security keys – Key Vault integration with Azure Automation helped customers transition this on-premises experience seamlessly. The sample PowerShell script below enables Key Vault integration.
# Install and import the SecretManagement and Az.KeyVault modules from the PowerShell Gallery
Install-Module -Name Microsoft.PowerShell.SecretManagement -Repository PSGallery -Force
Install-Module Az.KeyVault -Repository PSGallery -Force
Import-Module Microsoft.PowerShell.SecretManagement
Import-Module Az.KeyVault
# Register the Azure Key Vault as a secret vault for the Automation account
$VaultParameters = @{
AZKVaultName = $vaultName
SubscriptionId = $subID
}
Register-SecretVault -Module Az.KeyVault -Name AzKV -VaultParameters $VaultParameters
If you are currently utilizing Azure Automation to rehost such lightweight, environment-agnostic operations from on-premises to the cloud, or want to know more details, please reach out to us at askazureautomation@microsoft.com.
Partner Blog| Revisit expert insights from Ultimate Partner LIVE
Business leaders and decision-makers increasingly grasp the vast potential of AI and the need to invest in this technology to remain competitive. As their trusted advisor, you are who customers look to for guidance on estimating the time to value of their AI investments and initiating their AI journey. Partners who embrace the economic opportunity to drive software innovation on the Microsoft platform and Copilot ecosystem will create real value for their customers.
Maximizing this opportunity was top of mind for many partners attending the recent Ultimate Partner LIVE: The Americas Summit, a two-day event showcasing real-world insights, best practices, and key information to enable software and services solutions partners and their ecosystems to learn how to align their business with Microsoft.
Topics at the event ranged from the Microsoft commercial marketplace vision to Small, Medium & Corporate (SMC) co-sell opportunities. Below are some highlights:
Continue reading here