Tag Archives: microsoft
Lesson Learned #470: Resolving ‘EXECUTE Permission Denied’ Error on sp_send_dbmail in Azure SQL MI
We worked on a service request in which our customer encountered the error message “Executed as user: user1. The EXECUTE permission was denied on the object ‘sp_send_dbmail’, database ‘msdb’, schema ‘dbo’. [SQLSTATE 42000] (Error 229). The step failed.” I would like to share how we resolved this specific error.
Understanding the Error
The error message explicitly points to a permission issue. The user (in this case, ‘user1’) does not have the necessary permission to execute the sp_send_dbmail stored procedure located in the msdb database. This procedure is essential for sending emails from Azure SQL Managed Instance, and lacking execute permissions will prevent the Database Mail feature from functioning correctly.
In this situation, we identified that user1 was not a member of the DatabaseMailUserRole role in the msdb database. Membership in this role is a prerequisite for using Database Mail. The following statements add the user to the role:
USE msdb;
ALTER ROLE DatabaseMailUserRole ADD MEMBER [user1];
Once this permission was granted, user1 was able to send emails successfully through Database Mail in Azure SQL Managed Instance.
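As a quick sanity check before re-running the job, you can confirm the role membership with a short query; here, user1 is a placeholder for the affected msdb user:
USE msdb;
-- Returns 1 when the user is a member of DatabaseMailUserRole
SELECT IS_ROLEMEMBER('DatabaseMailUserRole', 'user1') AS IsDatabaseMailUser;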
Lesson Learned #469: Implementing a Linked Server Alternative with Azure SQL Database and C#
In scenarios where direct Linked Server connections are not feasible, such as between Azure SQL Database and an on-premise SQL Server, developers often seek alternative solutions. This blog post introduces a C# implementation that simulates the functionality of a Linked Server for data transfer between Azure SQL Database and SQL Server, providing a flexible and efficient way to exchange data.
Overview of the Solution
The proposed solution involves a C# class, ClsRead, designed to manage the data transfer process. The class connects to both the source (SQL Server) and the target (Azure SQL Database), retrieves data from the source, and inserts it into the target database.
Key Features
Connection Management: ClsRead maintains separate connection strings for the source and target databases, allowing for flexible connections to different SQL Server and Azure SQL Database instances.
Data Transfer Control: The class includes methods to execute a SQL query on the source database, retrieve the results into a DataTable, and then use SqlBulkCopy to efficiently insert the data into the target Azure SQL Database.
Error Handling: Robust error handling is implemented within each method, ensuring that any issues during the connection, data retrieval, or insertion processes are appropriately logged and can be managed or escalated.
Implementation Details
Class Properties
SourceConnectionString: Connection string to the source SQL Server.
TargetConnectionString: Connection string to the target Azure SQL Database.
SQLToExecuteFromSource: SQL query to be executed on the source database.
TargetTable: Name of the target table in Azure SQL Database where data will be inserted.
Methods
TransferData(): Coordinates the data transfer process, including validation of property values.
GetDataFromSource(): Executes the SQL query on the source database and retrieves the results.
InsertDataIntoAzureSql(DataTable TempData): Inserts the data into the target Azure SQL Database using SqlBulkCopy.
Error Handling
The methods include try..catch blocks to handle any exceptions, ensuring that errors are logged, and the process can be halted or adjusted as needed.
Usage Scenario
A typical use case involves setting up the ClsRead class with appropriate connection strings, specifying the SQL query and the target table, and then invoking TransferData(). This process can be used to synchronize data between different databases, migrate data, or consolidate data for reporting purposes.
For example, suppose our on-premise server has a table named PerformanceVarcharNVarchar, and we only need its top 2000 rows so that we can compare them with the PerformanceVarcharNVarchar table in our Azure SQL Database.
The first step is to create a global temporary table in the Azure SQL Database (a regular table would work just as well):
DROP TABLE IF EXISTS [##__MyTable__]
CREATE TABLE [##__MyTable__] (ID INT PRIMARY KEY)
Once the table has been created, we call our ClsRead class with the following parameters:
static void Main(string[] args)
{
ClsRead oClsRead = new ClsRead();
oClsRead.SourceConnectionString = "Server=OnPremiseServer;User Id=userName;Password=Pwd1!;Initial Catalog=DbSource;Connection Timeout=30;Pooling=true;Max Pool size=100;Min Pool Size=1;ConnectRetryCount=3;ConnectRetryInterval=10;Application Name=ConnTest";
oClsRead.TargetConnectionString = "Server=tcp:servername.database.windows.net,1433;User Id=username1;Password=pwd2;Initial Catalog=DBName;Persist Security Info=False;MultipleActiveResultSets=False;Encrypt=True;TrustServerCertificate=False;Connection Timeout=30;Pooling=true;Max Pool size=100;Min Pool Size=1;ConnectRetryCount=3;ConnectRetryInterval=10;Application Name=ConnTest";
oClsRead.SQLToExecuteFromSource = "Select TOP 2000 ID from dbo.PerformanceVarcharNVarchar";
oClsRead.TargetTable = "[##__MyTable__]";
oClsRead.TransferData();
}
If everything executes correctly, we can run queries like this one:
select * from [##__MyTable__] A
INNER JOIN PerformanceVarcharNVarchar B
ON A.ID = B.ID
Conclusion
While a direct Linked Server connection is not possible from Azure SQL Database, the ClsRead class provides a viable alternative with flexibility and robust error handling. This approach is particularly useful in cloud-based and hybrid environments where Azure SQL Database is used in conjunction with on-premise SQL Server instances.
using System;
using System.Collections.Generic;
using System.Data;
using System.Text;
using Microsoft.Data.SqlClient;
namespace LinkedServer
{
class ClsRead
{
private string _sSourceConnectionString = "";
private string _sTargetConnectionString = "";
private string _sSQLToReadFromSource = "";
private string _sTargetTable = "";
public string SourceConnectionString
{
get
{
return _sSourceConnectionString;
}
set
{
_sSourceConnectionString = value;
}
}
public string TargetConnectionString
{
get
{
return _sTargetConnectionString;
}
set
{
_sTargetConnectionString = value;
}
}
public string SQLToExecuteFromSource
{
get
{
return _sSQLToReadFromSource;
}
set
{
_sSQLToReadFromSource = value;
}
}
public string TargetTable
{
get
{
return _sTargetTable;
}
set
{
_sTargetTable = value;
}
}
// Default constructor
public ClsRead() { }
public void TransferData()
{
// Check that all properties are set
if (string.IsNullOrEmpty(SourceConnectionString) ||
string.IsNullOrEmpty(TargetConnectionString) ||
string.IsNullOrEmpty(SQLToExecuteFromSource) ||
string.IsNullOrEmpty(TargetTable))
{
throw new InvalidOperationException("All properties must be set.");
}
try
{
DataTable TempData = GetDataFromSource();
InsertDataIntoAzureSql(TempData);
}
catch (Exception ex)
{
// Handle the exception as necessary
Console.WriteLine("Error during data transfer: " + ex.Message);
// You can rethrow the exception or handle it according to your application's needs
throw;
}
}
private DataTable GetDataFromSource()
{
DataTable dataTable = new DataTable();
try
{
using (SqlConnection connection = new SqlConnection(SourceConnectionString))
{
using (SqlCommand command = new SqlCommand(SQLToExecuteFromSource, connection))
{
connection.Open();
using (SqlDataReader reader = command.ExecuteReader())
{
dataTable.Load(reader);
}
}
}
}
catch (Exception ex)
{
// Handle the exception as necessary
Console.WriteLine("General Error: Obtaining data from Source.." + ex.Message);
// You can rethrow the exception or handle it according to your application's needs
throw;
}
return dataTable;
}
private void InsertDataIntoAzureSql(DataTable TempData)
{
try
{
using (SqlConnection connection = new SqlConnection(TargetConnectionString))
{
connection.Open();
using (SqlBulkCopy bulkCopy = new SqlBulkCopy(connection))
{
bulkCopy.DestinationTableName = TargetTable;
bulkCopy.BatchSize = 1000;
bulkCopy.BulkCopyTimeout = 50;
bulkCopy.WriteToServer(TempData);
}
}
}
catch (Exception ex)
{
// Handle the exception as necessary
Console.WriteLine("General Error: Saving data into target.." + ex.Message);
// You can rethrow the exception or handle it according to your application's needs
throw;
}
}
}
}
Monthly news – January 2024
Microsoft Defender for Cloud
Monthly news
January 2024 Edition
This is our monthly “What’s new” blog post, summarizing product updates and various new assets we released over the past month. In this edition, we are looking at all the goodness from December 2023.
Microsoft Defender for Cloud
It is now possible to manage Defender for Servers on specific resources within your subscription, giving you full control over your protection strategy. With this capability, you can configure specific resources with custom configurations that differ from the settings configured at the subscription level.
Learn more about enabling Defender for Servers at the resource level.
The Coverage workbook allows you to keep track of which Defender for Cloud plans are active on which parts of your environments. This workbook can help you to ensure that your environments and subscriptions are fully protected. By having access to detailed coverage information, you can also identify any areas that might need other protection and take action to address those areas.
Learn more about the Coverage workbook.
As the landscape of DevOps continues to expand and confront increasingly sophisticated security threats, the need for proactive attack surface reduction measures has never been more critical. To enhance DevOps security and prevent attacks, Defender for Cloud, a Cloud Native Application Protection Platform (CNAPP), is enabling customers with new capabilities: DevOps Environment Posture Management, Code to Cloud Mapping for Service Principals, and new DevOps Attack Paths.
In this blog we dive deep into how these features represent a strategic shift towards a more integrated and holistic approach to cloud native application security throughout the entire development lifecycle.
The classic multicloud connector experience is retired and data is no longer streamed to connectors created through that mechanism. These classic connectors were used to connect AWS Security Hub and GCP Security Command Center recommendations to Defender for Cloud and onboard AWS EC2s to Defender for Servers.
The full value of these connectors has been replaced with the native multicloud security connectors experience, which has been Generally Available for AWS and GCP since March 2022 at no extra cost.
The new native connectors are included in your plan and offer an automated onboarding experience with options to onboard single accounts, multiple accounts (with Terraform), and organizational onboarding with auto provisioning for the following Defender plans: free foundational CSPM capabilities, Defender Cloud Security Posture Management (CSPM), Defender for Servers, Defender for SQL, and Defender for Containers.
Over the past three years, a notable shift has unfolded in the realm of cloud security. Increasingly, security vendors are introducing agentless scanning solutions to enhance the protection of their customers. These solutions empower users with visibility into their security posture and the ability to detect threats — all achieved without the need to install any additional software, commonly referred to as an agent, onto their workloads.
This transformative phase in cloud security, embracing the agentless approach, owes its development to the robust suite of management APIs offered by cloud service providers. In this blog post, our focus will center on the technical aspects of agentless scanning applicable to virtual machines operating in the cloud. Whether it be an Azure Virtual Machine, an AWS EC2 instance, or a Google Cloud Compute instance, for simplicity’s sake, we will term them as cloud-native virtual machines (VMs).
In this article we share the technical details of our agentless scanning platform.
PostgreSQL Flexible Server support in the Microsoft Defender for open-source relational databases plan is now generally available. Microsoft Defender for open-source relational databases provides advanced threat protection to PostgreSQL Flexible Servers, by detecting anomalous activities and generating security alerts.
Learn how to Enable Microsoft Defender for open-source relational databases.
Watch new episodes of the Defender for Cloud in the Field show to learn about the Agentless secret scanning for VMs, Native integration with ServiceNow, Defender for APIs General Availability and updates from Microsoft Ignite 2023.
The Microsoft Defender for Cloud Labs have been updated and now include several new, detailed step-by-step guides on how to enable, configure, and test Defender for Cloud capabilities.
Discover how other organizations successfully use Microsoft Defender for Cloud to protect their cloud workloads. This month we are featuring Rabobank – a Dutch multinational banking and financial services company headquartered in Utrecht, Netherlands – that uses Microsoft security solutions, including Defender for Cloud, to secure their environment.
Join our experts in the upcoming webinars to learn what we are doing to secure your workloads running in Azure and other clouds.
Note: If you want to stay current with Defender for Cloud and receive updates in your inbox, please consider subscribing to our monthly newsletter: https://aka.ms/MDCNewsSubscribe
Empower Azure Video Indexer Insights with your own models
Overview
Azure Video Indexer (AVI) offers a comprehensive suite of models that extract diverse insights from the audio, transcript, and visuals of videos. Recognizing the boundless potential of AI models and the unique requirements of different domains, AVI now enables integration of custom models. This enhances video analysis, providing a seamless experience both in the user interface and through API integrations.
The Bring Your Own (BYO) capability enables the process of integrating custom models. Users can provide AVI with the API for calling their model, define the input via an Azure Function, and specify the integration type. Detailed instructions are available here.
Demonstrating this functionality, a specific example involves the automotive industry: users with numerous car videos can now detect various car types more effectively. Building on AVI’s Object Detection insight, particularly the Car class, the system has been expanded to recognize new sub-classes: Jeep and Family Car. This enhancement employs a model developed in Azure AI Vision Studio using Florence, based on a few-shot learning technique. This method, leveraging the foundational Florence Vision model, enables training for new classes with a minimal set of examples – approximately 15 images per class.
The BYO capability in AVI allows users to efficiently and accurately generate new insights by building on and expanding existing insights such as object detection and tracking. Instead of starting from scratch, users can begin with a well-established list of cars that have already been detected and tracked along the video, each with a representative image. Users then need only a small number of requests to the new Florence-based model to differentiate between the cars according to their type.
Note: This article is accompanied by a step-by-step code-based tutorial. Please visit the official Azure Video Indexer “Bring Your Own” Sample under the Video Indexer Samples Github Repository.
High Level Design and Flow
To demonstrate how to build a customized AI pipeline, we will use the following flow, which leverages several key Video Indexer components and integrations:
1. Users employ their existing Azure Video Indexer account on Azure to index a video, either through the Azure Video Indexer Portal or the Azure Video Indexer API.
2. The Video Indexer account integrates with a Log Analytics workspace, enabling the publication of Audit and Events Data into a selected stream. For additional details on video index collection options, refer to: Monitor Azure Video Indexer | Microsoft Learn.
3. Indexing operation events (such as “Video Uploaded,” “Video Indexed,” and “Video Re-Indexed”) are streamed to Azure Event Hubs. Azure Event Hubs enhances the reliability and persistence of event processing and supports multiple consumers through “Consumer Groups.”
4. A dedicated Azure Function, created within the customer’s Azure Subscription, activates upon receiving events from the EventHub. This function specifically waits for the “Indexing-Complete” event to process video frames based on criteria like object detection, cropped images, and insights. The compute layer then forwards selected frames to the custom model via Cognitive Services Vision API and receives the classification results. In this example it sends the crops of the representative image for each tracked car in the video.
Note: The integration process involves strategic selection of video frames for analysis, leveraging AVI’s car detection and tracking capabilities, to only process representative cropped images of each tracked car in the custom model.
5. The compute layer (Azure Function) then transmits the aggregated results from the custom model back to the Azure Video Indexer API, updating the existing index data using the Update Video Index API call.
6. The enriched insights are subsequently displayed on the Video Indexer Portal. The ID in the custom model matches the ID in the original insights JSON.
Note: for a more in-depth, step-by-step tutorial accompanied by a code sample, please consult the official Azure Video Indexer GitHub sample under the “Bring Your Own” section.
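To make steps 4 and 5 more tangible, here is a minimal C# sketch of an Event Hub–triggered Azure Function (in-process model). It is an illustration only: the hub name, connection setting, event property names, and the three helper methods are hypothetical placeholders for the logic described above; the complete, working implementation is in the official “Bring Your Own” sample in the Video Indexer Samples GitHub repository.
using System.Collections.Generic;
using System.Text.Json;
using System.Threading.Tasks;
using Microsoft.Azure.WebJobs;
using Microsoft.Extensions.Logging;
namespace ViByoSample
{
    public class CarCrop
    {
        public string TrackId { get; set; }
        public string ImageUrl { get; set; }
        public string SubClass { get; set; }
    }
    public static class VideoIndexedHandler
    {
        // Fires for each batch of events streamed from the Video Indexer account's diagnostic settings.
        [FunctionName("VideoIndexedHandler")]
        public static async Task Run(
            [EventHubTrigger("vi-events", Connection = "EventHubConnection")] string[] events,
            ILogger log)
        {
            foreach (string body in events)
            {
                using JsonDocument doc = JsonDocument.Parse(body);
                // The payload shape below is simplified for illustration; check the actual
                // Audit/Events schema emitted by your account for the real property names.
                if (!doc.RootElement.TryGetProperty("operationName", out var op) ||
                    op.GetString() != "IndexingFinished") // only react to the "indexing complete" event
                {
                    continue;
                }
                string videoId = doc.RootElement.GetProperty("videoId").GetString();
                log.LogInformation("Indexing finished for video {VideoId}, enriching insights", videoId);
                // For each car tracked by the Object Detection insight, classify one representative
                // crop with the custom Florence-based model, then write the results back.
                List<CarCrop> crops = await GetRepresentativeCarCropsAsync(videoId);
                foreach (CarCrop crop in crops)
                {
                    crop.SubClass = await ClassifyCarCropAsync(crop.ImageUrl); // e.g. "Jeep" or "Family Car"
                }
                await UpdateVideoIndexAsync(videoId, crops);
            }
        }
        // Placeholder: read the existing insights JSON and collect one cropped image per tracked car.
        private static Task<List<CarCrop>> GetRepresentativeCarCropsAsync(string videoId) =>
            Task.FromResult(new List<CarCrop>());
        // Placeholder: call the custom model published in Azure AI Vision Studio.
        private static Task<string> ClassifyCarCropAsync(string imageUrl) =>
            Task.FromResult("Family Car");
        // Placeholder: call the Video Indexer "Update Video Index" API with the aggregated results.
        private static Task UpdateVideoIndexAsync(string videoId, List<CarCrop> crops) =>
            Task.CompletedTask;
    }
}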
Result Analysis
The result is a new insight displayed in the user interface, showing the output of the custom model. This application allowed for the detection of a new subclass of objects, enhancing the video with additional, user-specific insights. In the examples provided below, each car is distinctly classified: for instance, the white car is identified as a family car (Figure 3), whereas the red car is categorized as a jeep (Figure 4).
Conclusions
With only a handful of API calls to the bespoke model, the system effectively conducts a thorough analysis of every car featured in the video. This method, which involves the selective use of certain images for the custom model combined with insights from AVI, not only reduces expenses but also boosts overall efficiency. It delivers a holistic analysis tool to users, paving the way for endless customization and AI integration opportunities.
Check This Out! (CTO!) Guide (December 2023)
Hi everyone! Brandon Wilson here once again with this month’s “Check This Out!” (CTO!) guide.
These posts are only intended to be your guide, to lead you to some content of interest, and are just a way we are trying to help our readers a bit more, whether that is learning, troubleshooting, or just finding new content sources! We will give you a bit of a taste of the blog content itself, provide you a way to get to the source content directly, and help to introduce you to some other blogs you may not be aware of that you might find helpful.
From all of us on the Core Infrastructure and Security Tech Community blog team, thanks for your continued reading and support!
Title: Renew Certificate Authority Certificates on Windows Server Core. No Problem!
Source: Ask the Directory Services Team
Author: Robert Greene
Publication Date: 12/18/23
Content excerpt:
Today’s blog strives to clearly elucidate an administrative procedure that comes along more frequently with PKI Hierarchies being deployed to Windows Server Core operating systems.
Title: Keep your Azure optimization on the right track with Azure patterns and practices
Source: Azure Architecture
Author: Ben Brauer
Publication Date: 12/13/23
Content excerpt:
Businesses are at a pivotal juncture in their cloud migration journeys, as the question is no longer “Should we do this?”, but “What’s the best way to do this?” With questions of cost, reliability, and security looming over any migration plans, Microsoft is driven to fortify your organization for a successful transformation to Azure. That’s why we offer two complementary frameworks that together provide a comprehensive approach to cloud adoption and optimization. With best-practice guidance and checklists to keep your cloud modernization on track, our goal is to help your organization avoid costly mistakes and save time by leveraging proven strategies. The Microsoft Cloud Adoption Framework (CAF) and Well-Architected Framework (WAF) are resources that businesses can leverage to confidently transform their operations into being cloud-centric and build/manage cloud-hosted applications securely and cost-effectively. In this blog we’ll take you through the purpose of each framework and how you can start applying them to your cloud migration today.
Title: How to use Azure Front Door with Azure Kubernetes Service (Tips and Tricks)
Source: Azure Architecture
Author: Pranab Paul
Publication Date: 12/26/23
Content excerpt:
As its definition says – “Azure Front Door is a global, scalable, and secure entry point for fast delivery of your web applications. It offers dynamic site acceleration, SSL offloading, domain and certificate management, application firewall, and URL-based routing”. We can consider this as an Application Gateway at global scale with CDN profile thrown in to spice it up. AGIC or Application Gateway as Ingress Controller is already available and widely used. I received this question recently, asking whether Azure Front Door can be used in the same way. I didn’t have to reinvent the wheel as so many blog posts and YouTube videos are already there on this topic. In this article, I will only discuss different options to implement Azure Front Door with AKS and will add some critical tips you should be aware of.
Title: Public Preview Announcement: Azure VM Regional to Zonal Move
Source: Azure Compute
Author: Kaza Sriram
Publication Date: 12/12/23
Content excerpt:
We are excited to announce the public preview of single instance VM regional to zonal move, a new feature that allows you to move an existing VM in a regional configuration (deployed without any infrastructure redundancy) to a zonal configuration (deployed into specific Azure availability zone) within the same region. This feature announcement continues the momentum with our earlier announced VMSS Zonal expansion features and reinforces the Azure wide zonal strategy, that enables you to take advantage of higher availability with Azure availability zones and make them an integral part of your comprehensive business continuity and resiliency strategy.
This feature is intended for single instance VMs in regional configurations only and not for VMs already in availability zones, or VMs part of an availability set (AvSet) or Virtual Machine Scale Sets (VMSS).
Title: Interconnected guidance for an optimized cloud journey
Source: Azure Governance and Management
Author: Antonio Ortoll
Publication Date: 12/11/23
Content excerpt:
The cost of cloud computing can add up quickly, especially for businesses with a high volume of data, high traffic or mission-critical applications. As organizations increasingly put cloud capabilities to work, they are constantly looking for ways to trim costs and focus their cloud spend to align to the right business priorities. Cost optimization is key to making that happen. But how do you know when there are opportunities to optimize?
To make it easier for you to identify cost optimization opportunities during every step of your Azure journey, we provide resources, tools and guidance to help you evaluate your costs, identify efficiencies, and set you up for success. From building your business case to optimizing new workloads, you’ll find interconnected guidance and assessments designed to continually increase the value of your Azure investments and enable you to invest in projects that drive ongoing business growth and innovation. Whether you’re migrating to the cloud for the first time or already have Azure workloads in place, these cost management, governance and monitoring tools can help you visualize your costs and gain insights.
Let’s take a closer look at each of these tools and how you can use them to understand and forecast your bill, optimize workload costs, and control your spending.
Title: Azure Firewall: New Embedded Workbooks
Source: Azure Network Security
Author: Eliran Azulai
Publication Date: 12/4/23
Content excerpt:
After our previous announcement in August 2023, we want to delve deeper into the enhanced capabilities of the new embedded workbooks. Within Azure, Workbooks serve as a versatile canvas for conducting data analysis and generating visually compelling reports directly within the Azure portal. They empower users to access diverse data sources across Azure, amalgamating them into cohesive, interactive experiences. Workbooks enable the amalgamation of various visualizations and analyses, making them ideal for unrestricted exploration.
Notably, the Azure Firewall Portal has now incorporated embedded workbooks functionality, offering customers a seamless means to analyze Azure Firewall traffic. This feature facilitates the creation of sophisticated visual reports within the Azure portal, allowing users to leverage data from multiple Firewalls deployed across Azure and unify them into interactive, cohesive experiences.
Title: Azure Firewall’s Auto Learn SNAT Routes: A Guide to Dynamic Routing and SNAT Configuration
Source: Azure Network Security
Author: David Frazee
Publication Date: 12/21/23
Content excerpt:
Azure Firewall is a cloud-native network security service that protects your Azure virtual network resources. It is a fully stateful firewall as a service with built-in high availability and unrestricted cloud scalability. However, some Azure Firewall customers may face challenges when they need to configure non-RFC-1918 address spaces to not SNAT through the Azure Firewall. This can cause issues with routing, connectivity, and performance. To address this problem, Azure Firewall has introduced a new feature that allows customers to specify which address spaces should not be SNATed by the firewall. This feature can help customers reduce the overhead of managing custom routes and NAT rules and improve the efficiency and reliability of their network traffic. In this blog, we will explain how the feature works, what Azure Route Server is, and how to enable it. We will also provide a QuickStart guide and some examples to help you get started with this feature.
Title: Securely uploading blob files to Azure Storage from API Management
Source: Azure PaaS
Author: Una Chen
Publication Date: 12/26/23
Content excerpt:
This article will provide a demonstration on how to utilize either SAS token authentication or managed identity from API Management to make requests to Azure Storage. Furthermore, it will explore and compare the differences between these two options.
Title: The Twelve Days of Blog-mas: No.4 – Sync Cloud Groups from AAD/Entra ID back to Active Directory
Source: Core Infrastructure and Security
Author: Michael Hildebrand
Publication Date: 12/1/23
Content excerpt:
For a loooong time, you and I have been waiting for the ability to sync ‘cloud-born-and-managed’ security groups (and their memberships) back into on-premises AD. This takes us further on our journey of moving “the management plane” from on-prem AD to the cloud – and provides you the ability to create/manage groups in the cloud to manage resource access in Active Directory.
Title: The Twelve Days of Blog-mas: No.5 – The Endpoint Management Jigsaw
Source: Core Infrastructure and Security
Author: Michael Hildebrand
Publication Date: 12/5/23
Content excerpt:
Most orgs (hopefully) have a well-developed ‘practice’ around Endpoint management, combining people, process and technology to deploy, configure, operate and support a fleet of devices that adhere to corporate policy. This has been a main-stay of endpoint IT Pros for decades.
As IT Pros, whether we like it or not, we’re continually expanding our knowledge and skills to account for the ever-growing scope that we’re accountable for and the winds of change in technology. The cloud, mobile devices, BYO, VDI and other flavors of endpoints – as well as a global pandemic – have all pushed or pulled (or dragged) us to where we are “today.”
Title: Switch to the New Defender for Resource Manager Pricing Plan
Source: Core Infrastructure and Security
Author: Felipe Binotto
Publication Date: 12/5/23
Content excerpt:
In case you missed it, a new pricing plan has been announced for Microsoft Defender for Resource Manager.
The legacy pricing plan (per-API call) is priced at $4 per 1M API Calls, which can become a bit expensive if there is a lot going on in your subscriptions.
The new pricing plan (per-subscription) is priced at $5 per subscription per month.
We have made available a workbook which provides a cost estimation for all the Defender plans across all your subscriptions.
Title: The Twelve Days of Blog-mas: No. 6 – The Reporting Edition
Source: Core Infrastructure and Security
Author: Michael Hildebrand
Publication Date: 12/6/23
Content excerpt:
Good morning, Internet! At first glance, this post may appear a weeee bit thin … but sometimes, less is more. Who doesn’t need/want more reporting/visualizations and tracking of what’s going on within an environment?
I think it’s safe to say that when it comes to “Reporting,” it often feels like less actually is ‘less’ (and sometimes, ‘more’ is even less ‘less,’ or ‘more less?’ How should one say that?). Reporting is never ‘enough’ or ‘done’ but we steadily expand and improve that aspect of our services – and we’re constantly doing more.
Title: The Twelve Days of Blog-mas: No. 7 – Architecture Visuals – for Your Reference or Your Own Docs
Source: Core Infrastructure and Security
Author: Michael Hildebrand
Publication Date: 12/7/23
Content excerpt:
A softball for #7 … enjoy!
Title: The Twelve Days of Blog-mas: No. 8 – The Evolution of Windows Server Management
Source: Core Infrastructure and Security
Author: Michael Hildebrand
Publication Date: 12/8/23
Content excerpt:
As was discussed previously, our Endpoint Management modernization story is compelling. The server team overheard that good news and is curious – but the Server Management discipline is quite different than Endpoint management.
Server teams manage/operate systems that are usually locked away in datacenters – either their own and/or a cloud provider. They’re usually not exposed to physical loss or theft, nor people shoulder-surfing at a coffee shop. They’re usually only accessible via remote management capabilities. They usually have much more stringent change control and update processes – and often extreme business sensitivity to reboots (especially unplanned, but planned ones, too).
So, what is our Server Management story then, circa ‘Holidays 2023?’
Well, I’m glad you asked – and I get this question a lot these days.
Title: Introduction to Network Trace Analysis 4: DNS (it’s always DNS)
Source: Core Infrastructure and Security
Author: Will Aftring
Publication Date: 12/11/23
Content excerpt:
Howdy everyone! I’m back to talk about one of my favorite causes of heartache, the domain name system (DNS). This will be our first foray into an application layer protocol. The concept of DNS is simple enough, but it can lead to some confusing situations if you don’t keep its function in mind. No time to waste, let’s get going!
Title: The Twelve Days of Blog-mas: No.9 – It’s a Multi-Tenant and Cross-Platform World: Part I
Source: Core Infrastructure and Security
Author: Michael Hildebrand
Publication Date: 12/12/23
Content excerpt:
Greetings! Before the cloud, when on-prem Active Directory was the hub of many enterprise architectures, business needs often drove the requirement to expand single-domain AD forests into multi-domain AD forests. Even in the NT days, one might have ‘Account Domains’ and ‘Resource Domains’ – connected via one-ways trusts. As was often the case, multiple existing NT 4.0 domains were ‘upgraded’ into a single AD forest, as additional domains. These days, a single-domain AD Forest is pretty rare for main-stream use.
Title: The Twelve Days of Blog-mas: No.10 – It’s a Multi-Tenant and Cross-Platform World: Part II
Source: Core Infrastructure and Security
Author: Michael Hildebrand
Publication Date: 12/13/23
Content excerpt:
In Part I of this mini-series, I discussed some of the new hotness around multi-tenant capabilities in our Entra ID space. In Part II, I’ll cover cross-platform support across several of our cloud services. The cloud era ushered in mainstream cross-platform support from many Microsoft services. Like the title of this post says, anymore, it’s a cross-platform world.
Title: The Twelve Days of Blog-mas: No.11 – The Kitchen Sink
Source: Core Infrastructure and Security
Author: Michael Hildebrand
Publication Date: 12/14/23
Content excerpt:
I am running out of days for my “Twelve Days” timeframe, so I’m dropping a pile of topics here that I feel are important/helpful but less-known.
Apologies in advance for the brevity and link-breadcrumbs.
Title: The Twelve Days of Blog-mas: No.12 – Copilot(s) – Your AI Assistant(s)
Source: Core Infrastructure and Security
Author: Michael Hildebrand
Publication Date: 12/15/23
Content excerpt:
Now, you didn’t really think I would go for 12 without one about Copilot, did you?
Our AI/ML efforts have been on-going for a long time, but very recently, they’ve gone mainstream – and SUCH a cool logo/icon. Be aware, though, for now, this space changes frequently, varies by region/market and software version (Windows, Office apps, Edge, etc.). Docs, product names, major and minor functionality are all moving very fast. Do your brain a favor and make some peace with that – but then, jump into the pool!
Title: Designing Cloud Architecture: Creating Professional Azure Diagrams with PowerPoint
Source: Core Infrastructure and Security
Author: Werner Rall
Publication Date: 12/17/23
Content excerpt:
In the fast-evolving landscape of cloud computing, the ability to visually represent complex architectures is not just a skill but a necessity. Among the myriad of tools and platforms, Microsoft Azure stands as a titan, offering a vast array of services that cater to diverse computing needs. However, the true challenge lies in effectively communicating the structure and functionality of Azure-based solutions. This is where the power of visualization comes into play, and surprisingly, a tool as familiar as PowerPoint emerges as an unlikely ally.
Title: Windows 365 deployment checklist
Source: FastTrack
Author: Josh Gutierrez
Publication Date: 12/22/23
Content excerpt:
We’re excited to announce that we’ve just released an updated Windows 365 deployment checklist in the Microsoft 365 admin center (MAC).
Title: Known Issue: Some management settings become permanent on Android 14
Source: Intune Customer Success
Author: Intune Support Team
Publication Date: 12/18/23
Content excerpt:
Google recently identified two issues in Android 14 that make some management policies permanent on non-Samsung devices. When a device is upgraded from Android 13 to Android 14, certain settings are made permanent on the device. Additionally, when devices that have been upgraded to Android 14 are rebooted, other settings are made permanent on the device.
Title: Transforming the iOS/iPadOS ADE experience in Microsoft Intune
Source: Intune Customer Success
Author: Intune Support Team
Publication Date: 12/19/23
Content excerpt:
In July of 2021, we announced that Running the Company Portal in Single App Mode until authentication is not a supported flow by Apple for iOS/iPadOS automated device enrollment (ADE). Since then, we’ve been hard at work to improve the ADE experience through the release of Setup Assistant with modern authentication, Just in Time (JIT) registration and compliance remediation, and the “Await until configuration” setting.
Title: Wired for Hybrid – What’s New in Azure Networking December 2023 edition
Source: ITOps Talk
Author: Pierre Roman
Publication Date: 12/20/23
Content excerpt:
Azure Networking is the foundation of your infrastructure in Azure. Each month we bring you an update on What’s new in Azure Networking.
In this blog post, we’ll cover what’s new with Azure Networking in December 2023. In this blog post, we will cover the following announcements and how they can help you.
Enjoy!
Title: Deploy secret-less Conditional Access policies with Microsoft Entra ID Workload Identity Federation
Source: Microsoft Entra (Azure AD)
Author: Claus Jespersen
Publication Date: 12/4/23
Content excerpt:
Many customers face challenges in managing their Conditional Access (CA) policies. Over time, they accumulate more and more policies that are created ad-hoc to solve specific business scenarios, resulting in a loss of overview and increased troubleshooting efforts. Microsoft has provided guidance on how to structure your Conditional Access policies in a way that follows the Zero Trust principles, using a persona-based approach. The guidance includes a set of Conditional Access policies that can serve as a starting point. These CA policies can be automated from a CI/CD pipeline using various tools. One such tool is Microsoft365DSC, an open-source tool developed by members of the Microsoft Graph Product Group, who are still actively involved in its maintenance.
Title: Enhancements to Microsoft Entra certificate-based authentication
Source: Microsoft Entra (Azure AD)
Author: Alex Weinert; Vimala Ranganathan
Publication Date: 12/13/23
Content excerpt:
At Ignite 2022, we announced the general availability of Microsoft Entra certificate-based authentication (CBA) as part of Microsoft’s commitment to Executive Order 14028, Improving the Nation’s Cybersecurity. Based on our experience working with government customers, PIV/CAC cards are the most common authentication method used within the federal government. While valuable for all customers, the ability to use X.509 certificate for authentication directly against Entra ID is particularly critical for federal government organizations using PIV/CAC cards and looking to easily comply with the Executive Order 14028 requirements as well as customers who want to migrate from a federated server like Active Directory Federated Server to Entra ID for CBA.
Since then, we’ve added many new features and enhancements, which made CBA available on all platforms, including mobile, with support for certificates on devices as well as external security keys like YubiKeys. Customers now have more control and flexibility to tailor authentication policies by certificate and resource type, as well as user group and select certificate strength for different users, use CBA with other methods for multi-factor or step-up authentication, and set high affinity (strong) binding for either the entire tenant or by user group.
Vimala Ranganathan, Product Manager on Microsoft Entra, will now talk about how these new features will help in your journey toward phishing-resistant MFA.
Title: Introducing New Features of Microsoft Entra Permissions Management
Source: Microsoft Entra (Azure AD)
Author: Joseph Dadzie
Publication Date: 12/14/23
Content excerpt:
Microsoft Entra Permissions Management is a Cloud Infrastructure Entitlement Management (CIEM) solution that helps organizations manage the permissions of any identity across organizations’ multicloud infrastructure. With Permissions Management, organizations can assess, manage, and monitor identities and their permissions continuously and right-size them based on past activity.
Today, we’re thrilled to unveil the details of our Ignite announcement and introduce new features and APIs for Permissions Management, enhancing your overall permissions management experience.
Title: Advancing Cybersecurity: The Latest enhancement in Phishing-Resistant Authentication
Source: Microsoft Entra (Azure AD)
Author: Alex Weinert
Publication Date: 12/15/23
Content excerpt:
Today, I’m excited to share with you several new developments in the journey towards phishing-resistant authentication for all users! This isn’t just essential for compliance with Executive Order 14028 on Improving the Nation’s Cybersecurity but is increasingly critical for the safety of all the orgs and users who bet on digital identity.
Title: Strengthening identity protection in the face of highly sophisticated attacks
Source: Security, Compliance, and Identity
Author: Alex Weinert
Publication Date: 12/12/23
Content excerpt:
When it comes to security at Microsoft, we’re customer zero as our Chief Security Advisor and CVP Bret Arsenault often emphasizes. That means we think a lot about how we build security into everything we do—not only for our customers—but for ourselves. We continuously work to improve the built-in security of our products and platforms. With the unparalleled breadth of our digital landscape and the integral role we play in our customers’ businesses, we feel a unique responsibility to take a leadership role in securing the future for our customers, ourselves, and our community.
To that end, on November 2nd, 2023, we launched the Secure Future Initiative (SFI). It’s a multi-year commitment to advance the way we design, build, test, and operate our technology to ensure we deliver solutions that meet the highest possible standards of security.
Title: A new, modern, and secure print experience from Windows
Source: Security, Compliance, and Identity
Author: Johnathan Norman
Publication Date: 12/13/23
Content excerpt:
Over the past year, the MORSE team has been working in collaboration with the Windows Print team to modernize the Windows Print System. This new design represents one of the largest changes to the Windows Print stack in more than 20 years. The goal was to build a more modern and secure print system that maximizes compatibility and puts users first. We are calling this new platform Windows Protected Print Mode (WPP). We believe users should be Secure-by-Default which is why WPP will eventually be on by default in Windows.
Title: Plan for Windows 10 EOS with Windows 11, Windows 365, and ESU
Source: Windows IT Pro
Author: Jason Leznek
Publication Date: 12/5/23
Content excerpt:
Windows 10 will reach end of support (EOS) on October 14, 2025. While two years may seem like a long runway, ensuring a modernized infrastructure will help keep your organization productive and its data secure. We’re encouraged to see organizations realizing the benefits of Windows 11 by upgrading eligible devices to Windows 11 well ahead of the EOS date. Consider joining organizations like Westpac who recently leveraged Microsoft Intune, Windows Autopatch, and App Assure to efficiently move 40,000 employees to Windows 11, while also incorporating new Windows 11 devices as part of a regular hardware refresh cycle.
In this post, learn about the various options you have to smoothly transition to Windows 11, including extended protection for those needing more time.
Title: Upcoming changes to Windows Single Sign-On
Source: Windows IT Pro
Author: Adam Steenwyk
Publication Date: 12/14/23
Content excerpt:
Microsoft has been working to ensure compliance with the Digital Markets Act (DMA) in the European Economic Area (EEA). As part of this ongoing commitment to provide your organization with solutions that comply with global regulations like the DMA, we will be changing the ways Windows works. Signing in to apps on Windows is one area where we will be making such changes.
Title: Skilling snack: Network security basics for endpoints
Source: Windows IT Pro
Author: Clay Taylor
Publication Date: 12/14/23
Content excerpt:
Why is network security important? In the chip-to-cloud environment, every component adds a layer of protection. It’s the Zero Trust approach to Windows security. We’ve already covered the basics of endpoint, identity, and data security in Skilling snack: Windows security fundamentals. You can also dig into another layer with Skilling snack: Windows application security. Today, let’s bake in a high-level overview of network security capabilities and options.
Previous CTO! Guides:
CIS Tech Community-Check This Out! (CTO!) Guides
Additional resources:
Azure documentation
Azure pricing calculator (VERY handy!)
Microsoft Azure Well-Architected Framework
Microsoft Cloud Adoption Framework
Windows Server documentation
Windows client documentation for IT Pros
PowerShell documentation
Core Infrastructure and Security blog
Microsoft Tech Community blogs
Microsoft technical documentation (Microsoft Docs)
Sysinternals blog
Microsoft Learn
Microsoft Support (Knowledge Base)
Microsoft Archived Content (MSDN/TechNet blogs, MSDN Magazine, MSDN Newsletter, TechNet Newsletter)
Domain and certificate bindings for IDN hostnames in Azure App Service
Overview
When it comes to website security, one important step is to add a custom domain and connect it with a TLS/SSL certificate. This not only enhances the trust and safety of your website but also ensures that your visitors’ information is encrypted and protected. Azure App Service provides TLS bindings for the most common custom domains. This blog discusses the special domain and certificate binding situations in Azure App Service for IDN hostnames.
What is an IDN hostname?
An IDN hostname is a domain name that includes characters used in the local representation of languages not written with the basic Latin alphabet “a-z”. These characters can be Arabic, Hebrew, Chinese, Cyrillic, Tamil, Hindi, and more.
What is Punycode?
Punycode is an ASCII-compatible encoding (defined in RFC 3492) that represents the Unicode characters of an IDN label using only the basic Latin letters, digits, and hyphens allowed in DNS. Encoded labels carry the “xn--” prefix; for example, the label “bücher” becomes “xn--bcher-kva”.
Domain bindings for IDN hostnames in Azure App Service
There are several common ways to bind a domain to Azure App Service, such as the Azure portal, Azure CLI/PowerShell, and ARM templates. Adding IDN hostnames through the portal is not yet supported; the portal currently only accepts domains containing alphanumeric characters (A-Z, a-z, 0-9), periods (‘.’), dashes (‘-’), and asterisks (‘*’).
With Azure CLI, PowerShell, or an ARM template, however, this validation can currently be bypassed, and hostnames containing Punycode-encoded characters can be added successfully. For details, see this blog: Create and bind the custom domain contains special Unicode character in App Service Using Azure CLI – Microsoft Community Hub.
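For example, with Azure CLI the Punycode-encoded form of the hostname can be added directly. This is a minimal sketch; the app name, resource group, and hostname below are placeholders:
az webapp config hostname add --webapp-name MyIdnApp --resource-group MyResourceGroup --hostname xn--bcher-kva.contoso.com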
Certificate validations for IDN hostnames in Azure App Service
To enable secure communication between the App Service and the client, a TLS/SSL certificate is necessary. There are two types of certificates to secure your domain: wildcard certificates and standard certificates. A wildcard certificate secures multiple subdomains under a single domain, while a standard certificate is specific to a single domain. In most use cases, a wildcard cert will be used to secure different subdomains as this is more manageable.
When binding a certificate to an IDN hostname, however, a wildcard certificate is not recommended, as it will run into unexpected errors.
When the binding is attempted with a wildcard certificate, an error is returned in the Azure portal, and the same error appears when using PowerShell command lines.
Workaround:
The backend validation splits the wildcard certificate’s name, which is why the hostname mismatch error occurs. The quickest workaround for now is to request a standard certificate issued specifically for this hostname.
Summary
Overall, Azure App Service does let you configure domain bindings for IDN hostnames, provided you add them with command-line tools rather than the portal. Additionally, you can manage the certificate bindings for these domains with standard certificates, ensuring security.
Decoding the Dynamics: Dapr vs. Service Meshes
Dapr and Service Meshes are more and more usual suspects in Cloud native architectures. However, I noticed that there is still some confusion about their purpose, especially because of some overlapping features. People sometimes wonder how to choose between Dapr and a Service Mesh or even if both should be enabled at the same time.
The purpose of this post is to highlight the differences, especially on the way they handle mTLS, as well as the impact on the application code itself. You can already find a summary about how Dapr and Service Meshes differ on the Dapr web site but the explanations are not deep enough to really understand the differences. This blog post is an attempt to dive deeper and give you a real clue on what’s going on behind the scenes. Let me first start with what Dapr and Service Meshes have in common.
Things that Dapr and Service Meshes have in common
Secure service-to-service communication with mTLS encryption
Service-to-service metric collection
Service-to-service distributed tracing
Resiliency through retries
Yes, this is the exact same list as the one documented on the Dapr web site! However, I will later focus on the mTLS bits because you might think that these are equivalent, overlapping features but the way Dapr and Service Meshes enforce mTLS is not the same. I’ll show some concrete examples with Dapr and the Linkerd Service Mesh to illustrate the use cases.
On top of the above list, I’d add:
They both leverage the sidecar pattern, although the Istio Service Mesh is exploring the Ambient Mesh, which is sidecar free, but the sidecar approach is still mainstream today. Here again, the role of the sidecars and what happens during the injection is completely different between Dapr and Service Meshes.
They both allow you to define fine-grained authorization policies
They both help deal with distributed architectures
Before diving into the meat of it, let us see how they totally differ.
Differences between Dapr and Service Meshes
Applications are Mesh-agnostic, while they must explicitly be Dapr-aware to leverage the Dapr capabilities. Dapr infuses the application code. Being Dapr-aware does not mean that you must use a specific SDK. Every programming language that has an HTTP and/or gRPC client can benefit from the great Dapr features. However, the application must comply with some Dapr prerequisites, as it must expose an API to initialize Dapr's app channel.
Meshes can deal with both layer-4 (TCP) and layer-7 traffic, while Dapr is focused on layer-7 only protocols such as HTTP, gRPC, AMQP, etc.
Meshes serve infrastructure purposes while Dapr serves application purposes
Meshes typically have smart load balancing algorithms
Meshes typically let you define dynamic routes across multiple versions of a given web site/API
Some meshes ship with extra OAuth validation features
Some meshes let you stress your applications through Chaos Engineering techniques, by injecting faults, artificial latency, etc.
Meshes typically incur a steep learning curve, while Dapr is much smoother to learn. In fact, Dapr even eases the development of distributed architectures.
Dapr provides true service discovery; meshes do not.
Dapr is designed from the ground up to deal with distributed and microservice architectures, while meshes can help with any architecture style, but prove to be a good ally for microservices.
Demo material
I will reuse a demo app that I developed 4 years ago (time flies): a Linkerd Calculator. The figure below illustrates it:
It is a handful of services talking to each other: MathFanBoy, a console app, randomly calls the arithmetic operations, while the percentage operation itself calls multiplication and division. The goal of this app was to generate traffic and show how Linkerd helps us see, in near real time, what's going on. I also purposely introduced exceptions by performing divisions by zero, to demo how Linkerd (or any other mesh) helps spot errors. Feel free to clone the repo and try it out on your end if you want to test what is described later in this post. I have now created the exact same app using Dapr, which is made available here. Let us now dive into the technical details.
Diving into the technical differences
Invisible to the application code vs code awareness
As stated earlier, an application is agnostic to the fact that it is injected or not by a Service Mesh. If you look at the application code of the Linkerd Calculator, you won’t find anything related to Linkerd. The magic happens at deployment time where we annotate our K8s deployment to make sure the application gets injected by the Mesh. On the other hand, the application code of the Dapr calculator is directly impacted in multiple ways:
– While I could use a mere .NET console app for the Linkerd Calculator, I had to turn MathFanBoy into a web host to comply with the Dapr app initialization channel. However, because MathFanBoy generates activity by calling random operations, I could not simply turn it into an API, so I had to run different tasks in parallel. Here are the most important bits:
class Program
{
static string[] endpoints = null;
static string[] apis = new string[5] { "addition", "division", "multiplication", "substraction", "percentage" };
static string[] operations = new string[5] { "addition/add", "division/divide", "multiplication/multiply", "substraction/substract", "percentage/percentage" };
static async Task Main(string[] args)
{
var host = CreateHostBuilder(args).Build();
var runHostTask = host.RunAsync();
var loopTask = Task.Run(async () =>
{
while (true)
{
var pos = new Random().Next(0, 5);
using var client = new DaprClientBuilder().Build();
var operation = new Operation { op1 = 10, op2 = 2 };
try
{
var response = await client.InvokeMethodAsync<object, object>(
apis[pos], // The name of the Dapr application
operations[pos], // The method to invoke
operation); // The request payload
Console.WriteLine(response);
}
catch(Exception ex) {
Console.WriteLine(ex.ToString());
}
await Task.Delay(5000);
}
});
await Task.WhenAll(runHostTask, loopTask);
}
public static IHostBuilder CreateHostBuilder(string[] args) =>
Host.CreateDefaultBuilder(args)
.ConfigureWebHostDefaults(webBuilder =>
{
webBuilder.UseStartup<Startup>();
});
}
The first lines of Main build and run the web host. The loop task then generates random calls to the operations, but here again we have another difference: the application uses the Dapr client's InvokeMethodAsync to perform the calls. As you might have noticed, the application does not need to know the URL of these services. Dapr will discover where the services are located, thanks to its Service Discovery feature. The only thing we need to provide is the App ID and the operation that we want to call. With the Linkerd calculator, I had to know the endpoints of the target services, so they were injected through environment variables during the deployment. The same principles apply to the percentage operation, which is a true API. I had to inject the Dapr client through Dependency Injection:
public void ConfigureServices(IServiceCollection services)
{
services.AddControllers().AddDapr();
}
In order to get an instance through the controller's constructor:
public PercentageController(ILogger<PercentageController> logger, DaprClient dapr)
{
_logger = logger;
_dapr = dapr;
}
and use that instance to call the division and multiplication operations from within another controller operation, again using the invoke method as in MathFanBoy. As you can see, the application code explicitly uses Dapr and must comply with some Dapr requirements. Dapr has many features beyond Service Discovery, but I'll stick to that one since the point is made: a Dapr-injected application must be Dapr-aware, while it is completely agnostic of a Service Mesh.
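For illustration, such a controller action could look roughly like the sketch below (based on the Dapr .NET SDK's InvokeMethodAsync; the route names, the Operation type, and the response shape are assumptions borrowed from the demo app rather than its exact code):
[HttpPost("percentage")]
public async Task<IActionResult> Percentage(Operation operation)
{
    // Invoke the other Dapr apps by app ID + method name; the daprd sidecar
    // resolves where they run, no URL or environment variable needed.
    var multiplied = await _dapr.InvokeMethodAsync<Operation, object>(
        "multiplication",           // target app ID
        "multiplication/multiply",  // method to invoke
        operation);                 // request payload

    var divided = await _dapr.InvokeMethodAsync<Operation, object>(
        "division",
        "division/divide",
        operation);

    return Ok(new { multiplied, divided });
}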
mTLS
Now things will get a bit more complicated. While both Service Meshes and Dapr implement mTLS as well as fine-grained authorization policies based on the client certificate presented by the caller to the callee, the level of protection of Dapr-injected services is not quite the same as the one from Mesh-injected services.
Roughly, you might think that you end up with something like this:
A very comparable way of working between Dapr and Linkerd. This is correct, but only to some extent. If we take the happy path, meaning every pod is injected by Linkerd or Dapr, we end up in the above situation. However, in a K8s cluster, not every pod is injected by Dapr or Linkerd. The typical reason you enable mTLS is to make sure injected services are protected from the outside world. By outside world, I mean anything that is neither Dapr-injected nor Mesh-injected. However, with Dapr, nothing prevents the following situation:
The blue path takes the Dapr route and is both encrypted and authenticated using mTLS. However, the green paths, from both a Dapr-injected pod and a non-Dapr pod, still go through in plain text and anonymously. How is that possible?
For the blue path, the application goes through the Dapr route (http://localhost:3500/, the port that the daprd sidecar listens on). In that case, the sidecar will find out the location of the target and will talk to the target service's sidecar. However, because Dapr does not intercept network calls, nothing prevents you from taking a direct route, from both a Dapr-injected pod and a non-Dapr one (green paths). So, you might end up in a situation where you enforce a strict authorization policy as shown below:
apiVersion: dapr.io/v1alpha1
kind: Configuration
metadata:
  name: multiplication
  namespace: dapr-calculator
spec:
  accessControl:
    defaultAction: deny
    trustDomain: "public"
    policies:
    - appId: mathfanboy
      defaultAction: allow
      trustDomain: 'public'
      namespace: "dapr-calculator"
    - appId: percentage
      defaultAction: allow
      trustDomain: 'public'
      namespace: "dapr-calculator"
where you only allow MathFanBoy and Percentage to call the multiplication operation, and yet have other pods bypass the Dapr sidecar, which ultimately defeats the policy itself. Make no mistake, the reason why we define such policies is to enforce a certain behavior and I don’t have peace of mind if I know that other routes are still possible.
So, in summary, Dapr’s mTLS and policies are only effective if you take the Dapr route but nothing prevents you from taking another route.
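To make the two routes concrete, here is a small sketch (the app ID, method name, and pod IP are placeholders): the first call goes through the local daprd sidecar, so mTLS and the access control policy apply; the second call targets the pod directly and never touches Dapr.
using System.Net.Http;
using System.Net.Http.Json;

var http = new HttpClient();

// 1. The Dapr route: talk to the local sidecar, which resolves the target,
//    performs mTLS with the target's sidecar and evaluates the access policy.
var viaDapr = await http.PostAsJsonAsync(
    "http://localhost:3500/v1.0/invoke/multiplication/method/multiplication/multiply",
    new { op1 = 10, op2 = 2 });

// 2. The direct route: call the pod (or service) IP directly. Nothing intercepts
//    this call, so it travels in plain text and bypasses the Dapr policy entirely.
var direct = await http.PostAsJsonAsync(
    "http://10.244.1.17/multiplication/multiply",  // placeholder pod IP
    new { op1 = 10, op2 = 2 });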
Let us see how this works with Linkerd. As stated on their web site, Linkerd also does not enforce mTLS by default and has added this to their backlog. However, with Linkerd (same and even easier with Istio), we can make sure that only authorized services can talk to meshed ones. So, with Linkerd, we would not end up in the same situation:
The first thing to notice is that we simply use the service name to contact our target, because there is no Dapr-style route in this case, nor any service discovery feature. Linkerd leverages the ambassador pattern, which intercepts all network calls entering and leaving a pod. Therefore, when the application container of a Linkerd-injected pod tries to connect to another service, Linkerd's sidecar performs the call to the target, and the call lands on the other sidecar (provided the target is a Linkerd-injected service, of course). In this case, no issue. As with Dapr, nothing prevents us from directly calling the pod IP of the target. Yet, from an injected pod, the Linkerd sidecar will intercept that call. From a non-injected pod, there is no such outbound sidecar, but our target's sidecar will still handle inbound calls, so you can't bypass it. By default, because Linkerd does not enforce mTLS, it will let the call through, unless you define fine-grained authorizations as shown below:
apiVersion: policy.linkerd.io/v1beta1
kind: Server
metadata:
  namespace: rest-calculator
  name: multiplication
spec:
  podSelector:
    matchLabels:
      app: multiplication
  port: 80
  proxyProtocol: HTTP/1
---
apiVersion: policy.linkerd.io/v1beta1
kind: ServerAuthorization
metadata:
  namespace: rest-calculator
  name: multiplication-from-mathfanboy
spec:
  server:
    name: multiplication
  client:
    meshTLS:
      identities:
      - mathfanboy
      - percentage
In this case, only MathFanBoy and Percentage will be allowed to call the multiplication operation. In other words, Linkerd allows us to enforce mTLS, whatever route is taken. With Istio, it's even easier since you can simply enforce mTLS through the global mesh config. You do not even need to specify explicit authorization policies (although it is a best practice). Just to illustrate the above diagrams, here are some screenshots showing these routes in action:
I’m first calling the multiplication operation from the addition pod, while we told Dapr that only MathFanboy and Percentage could call multiplication. As you can see, the Dapr policy kicks in and forbids the call as expected.
but while this policy is defined, I can still call the multiplication using a direct route (pod IP):
and the same applies to non-injected pods of course.
With the Linkerd policy in place, however, there is no way to call multiplication other than from MathFanBoy and Percentage. For the sake of brevity, I won't show the screenshots, but trust me, you will be blocked if you try.
Let us now focus on the injection process which will clarify what is going on behind the scenes.
Injection process Dapr vs Service Mesh
Both Dapr and Service Meshes will inject application pods according to annotations. They both have controllers in charge of injecting their victims. However, when looking at the lifecycle of a Dapr-injected pod as well as a Linkerd-injected pod, we can see noticeable differences.
When injecting Linkerd to an application, in plain Kubenet (not using the CNI plugin), we notice that Linkerd injects not only the sidecar but also an Init Container:
When looking more closely at the init container, we can see that it requires a few capabilities such as NET_ADMIN and NET_RAW, and that is because the init container rewrites the iptables rules to make sure network traffic entering and leaving the pod is captured by Linkerd's sidecar. When using Linkerd together with a CNI plugin, the same principle applies, but the iptables rules are not rewritten by the init container (the CNI plugin takes care of it). No matter how you use Linkerd, all traffic is redirected to its sidecar. This means that the sidecar cannot be bypassed.
When injecting Dapr, we see that there is no Init Container and only the daprd container (sidecar) is injected:
There is no rewrite of any iptables rules, meaning that the sidecar can be bypassed without any problem, thus bypassing Dapr routes and Dapr policies. In other words, we can easily escape the Dapr world.
Wrapping up
As stated initially, I mostly focused on the impact of Dapr or a Service Mesh on the application itself and how the overall protection given by mTLS varies according to whether you use Dapr or a Service Mesh. I hope it is clear by now that Dapr is definitely an application framework that infuses the application code, while a Service Mesh is completely transparent to the application. Note that the latter is only true when using a decent Service Mesh. By decent, I mean something stable, performant, and reliable. I was recently confronted with a Mesh that I will not name here, but it was a true nightmare for the application and kept breaking it.
Although Dapr & Service Meshes seem to have overlapping features, they are not equally covering the workloads. With regards to the initial question about when to use Dapr or a Service Mesh, I would take the following elements into account:
– For distributed architectures that are also heavily event-driven, Dapr is a no-brainer because Dapr brings many features to the table to interact with message and event brokers, as well as state stores. Yet, Service Meshes could still help measure performance, spot issues, and load balance traffic by understanding protocols such as HTTP/2, gRPC, etc. Meshes would also help in the release process of the different services, splitting traffic across versions, etc.
– For heterogeneous workloads, with a mix of APIs, self-hosted databases, self-hosted message brokers (such as Rabbit MQ), etc., I would go for Service Meshes.
– If the trigger of choosing a solution is more security-centric, I would go for a Service Mesh
– If you need to satisfy all of the above, I would combine Dapr and a Service Mesh for microservices, while using Service Mesh only for the other types of workloads. However, when combining, you must consider the following aspects:
– Disable Dapr's mTLS and let the Service Mesh manage this, including fine-grained authorization policies. Beware that by doing so, you would lose some Dapr functionality, such as defining ACLs on the components.
– Evaluate the impact on the overall performance as you would have two sidecars instead of one. From that perspective, I would not mix Istio & Dapr together, unless Istio’s performance dramatically improves over time.
– Evaluate the impact on the running costs because each sidecar will consume a certain amount of CPU and memory, which you will have to pay for.
– Assess whether your Mesh goes well with Dapr. While an application is agnostic to a mesh, Dapr is not, because Dapr also manipulates K8s objects such as K8s services, ports, etc. There might be conflicts between what the mesh is doing and what Dapr is doing. I have seen Dapr and Linkerd be used together without any issues, but I’ve also seen some Istio features being broken because of Dapr naming its ports dapr-http instead of http. I reported this problem to the Dapr team 2 years ago but they didn’t change anything.
Microsoft Tech Community – Latest Blogs –Read More
Microsoft Viva Glint Monthly Newsletter – January 2024
Happy New Year from our Microsoft Viva Glint family!
Welcome to the January edition of our Viva Glint newsletter! This communication is full of information that will help you get the most from your Viva Glint programs.
Our next feature release
Viva Glint’s next feature release is scheduled for January 13, 2024. Your dashboard will provide date and timing details two or three days before the release.
In your Viva Glint programs
Customize a message to your survey takers – Within General Settings, enter any org-specific guidance that you’d like added to your existing privacy statement. This message will apply to all new and scheduled surveys and can be translated into additional languages. Within a specific survey, go ahead and edit that statement in Program Setup, as needed.
Customize your logo and survey email content, too! Your customization capabilities are enhanced! By following in-platform guidance, we’re empowering you to take the reins and deliver customized email communications that meet Microsoft compliance requirements. Use the Microsoft Admin Center to set a custom logo and sending domain to create customized survey emails in Viva Glint that resonate with your organization.
We are just a few weeks away from the Copilot in Viva Glint Private Preview! This innovative new tool within the Viva Glint platform is designed to help organizational leaders and HR analysts easily understand, interpret, and act on employee feedback. Say “goodbye” to the tedious task of sifting through thousands of comments – Microsoft Copilot in Viva Glint provides short, natural language summaries that accurately represent the feedback you need to see.
Changes to how you'll set up your employee attributes – As an admin, the changes we're rolling out will allow you to view and edit your original schema after its initial setup, incorporate user time zones, set up survey and dashboard language fields, and set up personal email fields for surveying exiting employees. We've updated tenure buckets, too. Read about the new attribute setup experience.
News from Viva People Science
The Microsoft Viva People Science team has been busy hosting events and authoring blogs on current tips and trends to empower you to improve your business. Check out our most recent content:
• People Science Predictions: The impact of AI on the Employee Experience – Read our blog from the Viva People Science team, who has been busy making predictions about how AI is likely to impact employees and organizations. Read the 12 Predictions blog.
Connect and learn with Glint
Join us for our first Viva Glint: Ask the Experts session! Use this early registration link to join our new series to have questions answered about your Viva Glint programs.
We have platform trainings for Viva Glint admins and managers on Microsoft Learn! Use step-by-step guides to understand our dashboards, reports, and how to have quality team conversations.
All Viva Glint users can benefit from our new Navigate and Share Your Viva Glint Results module on Microsoft Learn. Use these step-by-step guides to understand our dashboards, reports, and how to have quality team conversations.
Thanks to all our Viva Glint Learning Circles first-time joiners! The Viva Glint Learning Circles program is open to all customers who want to connect with other like-minded talent professionals to share knowledge, experiences, and challenges related to employee experience. Watch for news of our next sign-up period in this monthly newsletter.
How are we doing?
Please share feedback with your Customer Experience Program Manager (CxPM) if you have one, or by emailing us here. Also, if you do not want to receive these emails in the future, please let us know and you will be removed from the distribution list. Conversely, if there are people on your teams that should be receiving this monthly update, send us those emails and we’ll be sure they are added.
Microsoft Tech Community – Latest Blogs –Read More
Partner Blog | Gain AI and cloud technical skills with Microsoft Depth Workshops
Amid the increasing integration of AI into various business applications, cloud and AI skilling is essential for partners to reach their full potential. Microsoft is committed to helping address these skilling needs so that partners can help customers enable new services and products that use these quickly advancing technologies. Our partners have asked for our help in giving their employees Microsoft Azure Depth Enablement—in-depth knowledge specifically focused on Azure solutions for the Microsoft platforms and technology they use every day.
As part of our training efforts, Microsoft is now offering partner employees the opportunity to elevate their technical skills in specific lines of technology related to Microsoft AI and the Microsoft Cloud. These multi-day Microsoft Depth Workshops focus on practical technical aspects, including architecture and implementation considerations.
We encourage all partner technical learners to register for the skilling events relevant to their business to enhance architecting, deployment, and implementation skills and help unlock the capabilities of AI and cloud technology and applications.
Hone your technical skills in Microsoft Azure, Business Applications, and Security solutions
Microsoft Depth Workshops embody their titles by offering deeper, hands-on training that builds on advanced certifications. Their goal is to equip partner employees with the knowledge to help customers confidently adopt and optimize Microsoft AI and cloud products and services.
Continue reading here
Microsoft Tech Community – Latest Blogs –Read More
Microsoft Learn launches new generative AI content for innovators
Generative AI for innovators
The modules are as follows:
Using generative AI for ideation – In this challenge project, you will use Bing Chat to run a brainstorming session and create a one-slide summary of an idea, ready for implementation. This would be a great challenge to complete at the start of a hackathon.
Challenge: Using generative AI for prototyping and MVP (Minimum Viable Product) – In this module, Bing Chat will guide you on how to create prototypes or mock-ups for your idea and how to implement the project.
Challenge: Use generative AI to create a business model for your startup – In this module, you are a Chief Strategy Officer (CSO) tasked with creating a business model and strategy using the Business Model Canvas Template guide. But you won't do it alone: you will co-create this vision with artificial intelligence, ideating, researching, and preparing everything to set your startup up for success.
Complete one of these challenge projects to earn a digital certificate on Microsoft Learn today!
Get ready for Imagine Cup 2024!
Complete these modules to turn your brilliant ideas into startup projects and get ready for the Imagine Cup 2024 student competition. The modules can also help you prepare for your next hackathon by helping you create high-quality materials for an upcoming hackathon idea and improve your chances of winning.
Finally, help us improve this content for you.
After completing these learning modules, comment below on where we can improve and what additional content you would find useful as you start your journey as an AI entrepreneur.
Microsoft Tech Community – Latest Blogs –Read More
Watch the newly released Surface videos for device repair
The engineers at Surface have created new instructional videos demonstrating how to disassemble the newly available Surface devices, along with a high-level overview of how to replace the components. The latest videos are for Surface Laptop Studio 2 and Surface Go 4.
Use these videos as a companion to the Surface Service Guides documentation.
Surface Laptop Studio 2
Surface Go 4
Surface Laptop Go 2 & Surface Laptop Go 3
Surface Laptop 3 & Surface Laptop 4
Surface Pro 9 with 5G
Surface Pro 8
Surface Pro 7+
See also:
Hands-on videos for Surface device repair (Part 1)
Surface Laptop Studio 2
Contents
Introduction
Removing feet and cover
Removing SSD
Removing display module
Removing Surface Connect port and audio jack
Removing micro SD port
Removing USB ports
Removing fans
Removing subwoofer speakers
Removing motherboard
Removing tweeters
Surface Go 4
Contents
Introduction
Removing kickstand
Debonding and removal of the display
Removing hinges
Removing antennae deck
Removing SD connector
Removing blade connector
Removing camera modules
Removing motherboard
Removing speakers
Surface Laptop Go 2 & Laptop Go 3
Surface Laptop 3 & Surface Laptop 4
Surface Pro 9 with 5G
Surface Pro 8
Surface Pro 7+
Learn more
Hands-on videos for Surface device repair (Part 1)
Full playlist of Surface repair videos
Surface for Business service and repair
Microsoft Tech Community – Latest Blogs –Read More
Armchair Architects: Artificial Intelligence, Large Language Models, and Architects (Part 1 of 2)
Welcome back to the fourth season of Armchair Architects! You asked for more, and we’re here to deliver. This season, we’re diving deep into the world of Artificial Intelligence (AI), specifically focusing on large language models (LLMs) with our host David Blank-Edelman and our armchair architects Uli Homann and Eric Charran.
Our conversation kicks off with Eric and Uli, two seasoned architects, discussing their experiences with ChatGPT and Bard. The topic of discussion? Large Language Models (LLMs), a term you’ll hear a lot throughout the season.
Eric shares his disruptive experience with these hosted foundational models, like ChatGPT, which have changed our lives in unexpected and delightful ways. The most impactful effect he has seen on his work is in supporting his role as an architect.
The Architect’s New Assistant
As architects, understanding the product features, prioritized requirements, and non-functional requirements is crucial. Traditionally, this would involve extensive research and application of various patterns like the bulkhead pattern and the orchestrator pattern.
However, the advent of generative AI has revolutionized this process. Eric shares an instance where he plugged some requirements into ChatGPT, suggested the orchestrator model’s relevance, and asked for its opinion. The result? A cogent response on how to meet the requirements, understand all the features (both functional and non-functional), adhere to the architectural patterns, and even get recommendations on other potentially relevant patterns.
This process, which Eric refers to as ‘prompt engineering’, has transformed what used to be a manual activity into an automated one. Now, architects have a research assistant, through AI, that can perform architectural jobs. However, it’s important to note that the architect still needs to be the arbiter of whether the AI’s suggestions are correct to avoid dealing with ‘hallucinations’ or false information generated by the AI, but it’s a great starting point for what used to be a manual activity.
Unpacking the Jargon
During their discussion, Eric mentioned some interesting terms like ‘prompt engineering’ and ‘hallucinations’. They also take a moment to define what a large language model is for those unfamiliar with the term.
In essence, a large language model is a continuation of two technologies that have been growing bigger and bigger: neural networks, an outcome of the broader AI work dating back to the 90s, and deep learning, which Google helped push forward around 2015.
The Power of Deep Learning
If you're a Dune fan, you might liken the process of deep learning to space folding. It's about folding the neural network to allow for greater depth, hence the term 'deep learning'. The OpenAI folks, in collaboration with the Azure AI infrastructure, have managed to push this to a size of trillions of parameters, creating a large language model.
These large language models focus on human language. It's not just about speech or words, but also images, code, and other forms of human expression. Essentially, large language models are communication models. This is evident in the work done by OpenAI, Bard, and the Llama models from Meta.
Prompt Engineering: Steering the Model
Prompt engineering is about utilizing human expertise within a specific domain to steer the model to produce productive outputs. A large language model uses its vast training corpus of information to predict the next most likely cogent word in a sequence of words. Prompt engineering structures a query so that the most accurate output is achieved based on the results of the input question.
For instance, instead of asking the model for great patterns to create a microservice, which might result in a dump of information, prompt engineering refines the question. It constructs a prompt so that it specifically outputs the information in a way that can be used effectively.
The Hallucination Check
Of course, there’s the hallucination check. This is a crucial step to ensure the accuracy of the model’s output. But before we delve into hallucinations, it’s important to understand that prompt engineering is not just about direction, but also about constraining.
The corpus that the system has access to is incredibly wide, encompassing human knowledge acquired over thousands of years. Prompt engineering effectively tells the model to constrain what it’s looking at. One of the niftiest tricks in prompt engineering is asking the model to take on a persona. For example, asking the model to assume the role of a software architect looking for patterns for microservices implementations. This allows the model to switch its perspective and provide better and deeper outputs.
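As a rough illustration of persona-based prompting, here is a sketch using the Azure.AI.OpenAI prerelease .NET SDK (the endpoint, key, and deployment name are placeholders, and the exact types can differ between SDK versions):
using Azure;
using Azure.AI.OpenAI;

var client = new OpenAIClient(
    new Uri("https://<your-resource>.openai.azure.com/"),  // placeholder endpoint
    new AzureKeyCredential("<your-api-key>"));             // placeholder key

var options = new ChatCompletionsOptions
{
    Messages =
    {
        // The persona constrains the slice of the corpus the model draws from.
        new ChatMessage(ChatRole.System,
            "You are a software architect specializing in microservices. " +
            "Recommend architectural patterns only, and cite each pattern by name."),
        new ChatMessage(ChatRole.User,
            "Given a payment API with strict latency and isolation requirements, " +
            "which patterns (for example bulkhead or orchestrator) would you apply, and why?")
    }
};

Response<ChatCompletions> response =
    await client.GetChatCompletionsAsync("<your-deployment-name>", options);
Console.WriteLine(response.Value.Choices[0].Message.Content);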
As we wrap up Part 1 of this episode, we’re about to head in a slightly different direction. Join us for Part 2 as we continue our exploration of AI and large language models.
Recommended Next Steps
If you’d like to learn more about the general principles prescribed by Microsoft, we recommend Microsoft Cloud Adoption Framework for platform and environment-level guidance and Azure Well-Architected Framework. You can also register for an upcoming workshop led by Azure partners on cloud migration and adoption topics and incorporate click-through labs to ensure effective, pragmatic training.
You can view the whole video below and check out more videos from the Azure Enablement Show.
Microsoft Tech Community – Latest Blogs –Read More
Armchair Architects: Artificial Intelligence, Large Language Models, and Architects (Part 2 of 2)
Large Language Models: A Deep Dive
Welcome to the second part of our exploration into large language models. In this episode, we delve deeper into the intricacies of these models, discussing everything from the formulation of effective prompts to the phenomenon of hallucinations with our host David Blank-Edelman and our armchair architects Uli Homann and Eric Charran.
Crafting Effective Prompts
One of the key aspects of working with large language models is the ability to craft effective prompts. These prompts need to be suitably constrained to elicit useful responses. For instance, an architect might want to ask for a solution architecture perspective response that meets specific functional and non-functional requirements.
Eric used an example where he prompted “from a software architect perspective come up and recommend a solution architecture that accomplishes all of these functional and non-functional requirements and then write it as if I’m creating a specification for a developer. The architecture requires investments from the organization in terms of CapEx and OpEx, new services, new cloud subscriptions.” In another prompt, he took the output and prompted “Then take this and write it as an e-mail to the CIO.”
It took the outputs, raised them up a level, and created a good foundation. It wasn't perfect, but it was a good foundation for executive messaging as to why the CIO should lobby the CFO to invest in these particular technologies.
Then Eric asked it to switch personas. “Assume that I’m an SRE or platform engineering team lead and I need to support this thing that I just created. Write me a quick spec for the SRE or platform engineering team lead who will support the architecture.”
This process involves a form of ‘code switching’, where the language and level of detail are adjusted based on the audience. It provided a great starting point for refinement.
Understanding Hallucinations
As we delve deeper into the workings of large language models, we encounter the phenomenon of ‘hallucinations’. These occur when the model makes assumptions based on the patterns it has seen so far. For example, if the model sees the sequence 1, 2, 3, it might assume that 4 should naturally follow.
While this extrapolation can work in many scenarios, it can also lead to inaccurate or even dangerous assumptions, especially in sensitive domains like healthcare. It’s crucial to remember that these models cannot make assumptions or extrapolations when dealing with diagnostic information.
Hallucinations were quite prevalent in large language models at the beginning of the year. However, thanks to the concerted efforts of the research community, their occurrence has decreased dramatically. Techniques like fine-tuning allow users to constrain and limit the number of hallucinations.
The Mystery of AI Outputs
In the world of artificial intelligence, large language models have emerged as a fascinating area of study. However, their workings often remain a mystery to the users, leading to a myriad of questions and concerns.
One of the intriguing aspects of these models is the generation of outputs. Users often find themselves puzzled by the responses they receive, unsure of the rationale behind them. This lack of understanding can be problematic, especially for professionals like architects who rely on these models for their work.
The key here is to understand how these models function. While reading the response, it’s crucial to fact-check the information to ensure its accuracy. There’s got to be a voice in the back of your head saying, “all right, let me just factually check this thing to make sure it just didn’t make this up because it wants to.” The models are designed to link concepts together and generate a response based on the input. However, they might sometimes fabricate links between concepts, leading to inaccurate outputs.
Understand how the process works, and then, as you're reading the output, quality check it to make sure it makes sense before you proffer it as the answer.
The Role of the User
Large language models are tools designed to assist users in creating artifacts more efficiently and in greater depth. They provide proposals based on the input given by the user. It’s important to remember that these proposals need to be validated by the user before they can be accepted as the final output.
The user plays a vital role in this process. They need to understand what they're asking the model to do and validate the output once it's produced; only then does it become their proposal. The responsibility for the final output lies with the user, not the AI. The user cannot simply blame the AI if something goes wrong.
The World Beyond Natural Language
While much of the discussion around large language models revolves around natural language text, these models are capable of much more. They can understand and generate code, making them useful for tasks beyond generating human language text.
For instance, OpenAI has three model families: GPT for language, DALL-E for images, and Codex for code. These models can express anything that can be represented in code, including schemas. This capability opens up a whole new realm of possibilities for users, allowing them to leverage these models in a variety of ways.
Type Chat: Prompt Engineering with JSON
In the realm of artificial intelligence, large language models have emerged as powerful tools capable of generating a wide array of outputs. From creating JSON schemas to documenting legacy code, these models are revolutionizing the way we approach problem-solving.
One innovative application of large language models is TypeChat, developed by Anders Hejlsberg and his colleagues. TypeChat leverages the power of prompt engineering to generate outputs that conform to a JSON schema.
Users can instruct the large language model to generate a specific output and format it as JSON. By providing a JSON schema as part of the prompt, the system can automatically respond with JSON that matches the schema. This approach offers an elegant way to create programmable outputs, as parsing JSON is much easier than parsing free text.
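Here is a rough sketch of the idea rather than TypeChat's actual API: the JSON schema is embedded in the prompt, and the reply is parsed as JSON instead of free text (the schema, question, and example reply are purely illustrative):
using System.Text.Json;

// Illustrative schema describing the shape we want the model to answer in.
const string schema = """
{
  "type": "object",
  "properties": {
    "city":        { "type": "string" },
    "temperature": { "type": "number" },
    "unit":        { "type": "string", "enum": ["C", "F"] }
  },
  "required": ["city", "temperature", "unit"]
}
""";

// Embed the schema in the prompt so the model answers in that shape.
string prompt =
    "Answer the question below and respond ONLY with JSON that validates against this schema:\n"
    + schema
    + "\n\nQuestion: What is a typical summer afternoon temperature in Seattle?";

// ... send `prompt` to the model of your choice; suppose `reply` is what comes back:
string reply = """{ "city": "Seattle", "temperature": 24.5, "unit": "C" }"""; // example reply only

// Parsing structured JSON is far easier (and safer) than parsing free text.
using JsonDocument doc = JsonDocument.Parse(reply);
JsonElement root = doc.RootElement;
string city = root.GetProperty("city").GetString()!;
double temperature = root.GetProperty("temperature").GetDouble();
string unit = root.GetProperty("unit").GetString()!;
Console.WriteLine($"{city}: {temperature} {unit}");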
Large language models can also generate code in various forms. One area where this capability has proven particularly useful is in the documentation of legacy code bases.
For instance, many COBOL code bases are quite old and lack proper documentation. Generative AI can be used to document these code bases and explain what the code does. This is especially useful when the original purpose or functionality of the code is no longer known.
Knocking Their Socks Off: Impressive Prompts for Architects
When it comes to impressing architects with the capabilities of large language models, one effective approach is to use these models to analyze problematic code. For example, a piece of legacy code or code that’s leaking memory can be plugged into ChatGPT with the prompt “find the memory leak” or “find another way to write this optimally”.
However, it’s important to remember that the outputs generated by these models need to be quality checked to ensure their correctness. It’s also crucial to ensure that your organization is comfortable with the data being fed into the model, especially when using the consumer version of ChatGPT.
ChatGPT Enterprise and Bing Chat Enterprise
For those concerned about data privacy, there are private versions of ChatGPT available, such as ChatGPT Enterprise and Bing Chat Enterprise. These versions ensure that the data fed into them stays within your organizational boundaries, offering an added layer of security.
Persona-Based Modeling and Prompt Engineering
Another effective strategy when working with large language models is persona-based modeling. This involves framing prompts as if the model is a specific persona, such as a software architect or a support person for a specific technology. This approach helps the model better understand the problem scenario and generate more relevant responses.
As we continue to explore the capabilities of large language models, it’s clear that these tools offer immense potential in a variety of fields. From prompt engineering to code generation, these models are paving the way for innovative solutions to complex problems. Stay tuned for more insights into the fascinating world of AI in our upcoming discussions.
Recommended Next Steps
If you’d like to learn more about the general principles prescribed by Microsoft, we recommend Microsoft Cloud Adoption Framework for platform and environment-level guidance and Azure Well-Architected Framework. You can also register for an upcoming workshop led by Azure partners on cloud migration and adoption topics and incorporate click-through labs to ensure effective, pragmatic training.
If you started with Part 2, you can go back and read Part 1 of this blog.
You can view the whole video below and check out more videos from the Azure Enablement Show.
Microsoft Tech Community – Latest Blogs –Read More
Lesson Learned #468: Understanding and Resolving the “Could not find prepared statement with handle”
Introduction:
In the realm of SQL Server, encountering errors is a part of the development process. One such common error is “Could not find prepared statement with handle“. In this article, we’ll explore what this error means, why it occurs, and how to resolve it.
Understanding the Error:
The error message “Could not find prepared statement with handle” occurs in SQL Server when there’s an attempt to execute a prepared statement with a handle that is unrecognized or unavailable. A handle in SQL Server is an identifier used to execute or deallocate a prepared statement.
A Functional Script Example: Let’s consider a functional script example:
DECLARE @P1 INT;
EXEC sp_prepare @P1 OUTPUT,
    N'@P1 NVARCHAR(128)',
    N'SELECT state_desc FROM sys.databases WHERE name=@P1';
EXEC sp_execute @P1, N'testdb';
EXEC sp_unprepare @P1;
In this script, sp_prepare prepares a statement and assigns it a handle (@P1). Then, sp_execute executes the prepared statement using this handle. Finally, sp_unprepare deallocates the prepared statement.
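On the client side, most drivers manage the handle for you. As a minimal illustration with Microsoft.Data.SqlClient (the connection string is a placeholder), the provider prepares the statement, executes it with the handle it obtained, and releases it when the command is disposed; the error above typically surfaces when an application or driver reuses a handle that is no longer valid:
using Microsoft.Data.SqlClient;

const string connectionString = "<your-connection-string>"; // placeholder

using var connection = new SqlConnection(connectionString);
connection.Open();

using var command = new SqlCommand(
    "SELECT state_desc FROM sys.databases WHERE name = @name", connection);
command.Parameters.Add("@name", System.Data.SqlDbType.NVarChar, 128).Value = "testdb";

// Prepare() asks the provider to prepare the statement server-side and keep the handle.
command.Prepare();

// The provider executes with the handle it obtained; disposing the command releases it.
Console.WriteLine(command.ExecuteScalar());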
Common Causes of the Error: This error commonly occurs due to:
Incorrect or modified handle used between preparation and execution.
The prepared statement is unprepared before execution.
Client-server application synchronization issues, leading to lost or altered handles.
Solutions and Best Practices: To avoid this error, consider the following practices:
Handle Verification: Always ensure the handle used in sp_execute matches the one generated by sp_prepare.
Order of Operations: Check to make sure that sp_unprepare isn’t called before sp_execute.
Error Handling in Applications: Implement robust error handling in your client applications to manage unforeseen errors effectively.
Conclusion:
Understanding the “Could not find prepared statement with handle” error in SQL Server is crucial for database management and application development. By recognizing the common causes and adopting best practices, developers can efficiently navigate and resolve this error, leading to more stable and reliable SQL applications.
Remember that, depending on the driver or application language you are using, the implementation could be different; normally, this error needs to be managed by the developer, who should review why the handle has been lost or is incorrect.
Enjoy!
Microsoft Tech Community – Latest Blogs –Read More
Let AI ideate AI use cases for your customer
Generative AI can be put to good use in ideation because, as the name suggests, it is good at coming up with things, as long as it is given enough context. So why not try using AI as a copilot for selling AI as well? Microsoft offers its partners (and why not its customers, too) an easy-to-use AI Use Cases service that generates AI use cases based on the public information on an organization's website. The service has 13 categories for which it first tries to find information; this forms the context from which it creates suggestions for how AI could be put to use in that organization. Simple!
The AI Use Cases service is available at: https://azureopenaiusecases.azurewebsites.net/
The user enters the website address, or any page beneath it, for example https://www.mustigroup.com/fi/tietoa-meista/, and clicks Extract Profile. Azure OpenAI then starts gathering information from the page and builds an organization profile.
Next, click the Generate Use Cases button, and the AI generates a ready-made customer letter template containing a few Azure OpenAI use cases.
You can copy this template and edit it into a suitable email, or simply use it as a starting point for further ideation.
Microsoft Tech Community – Latest Blogs –Read More
New on Azure Marketplace: December 15-21, 2023
We continue to expand the Azure Marketplace ecosystem. For this volume, 151 new offers successfully met the onboarding criteria and went live. See details of the new offers below:
Get it now in our marketplace
CellTrust SL2 Enterprise Capture: SL2 Enterprise Capture by CellTrust helps organizations in highly regulated industries capture and archive electronic communications on mobile devices for compliance and eDiscovery. It separates personal and work data for BYOD, CYOD, and COPE, and is integrated with Microsoft Endpoint Manager, BlackBerry UEM, Ivanti Neurons, and AppConfig.
Ivanti Neurons for ITAM: Ivanti Neurons for ITAM consolidates IT asset data, allowing for tracking, configuration, optimization, and strategic management throughout the asset lifecycle. The solution offers a mobile app for managing assets and enables quick acquisition of financial and contractual information for optimized asset purchases.
Go further with workshops, proofs of concept, and implementations
AI Center of Excellence: 6-Month Workshop: Xoriant’s AI CoE offers support for Microsoft Azure usage and consumption at all levels, with a focus on improving productivity and scaling AI capabilities. Its solutions optimize the use of Azure stacks and AI tools while aligning with business priorities. The framework guides customers from ideation to final product development.
Application Design and Development: Sii helps businesses digitalize internal processes, analyze and design systems, and redesign existing systems with new functionalities using Microsoft Azure. It offers flexible deployment models, reliable working environments, and benefits such as performance, scalability, data collection and analysis, monitoring tools, and security.
Azure Arc: 3-Day Jump Start Workshop: The Azure Arc Workshop by Noventiq offers a cloud-first approach to managing complex and distributed environments, from private and public clouds to data centers and the edge. The workshop provides a comprehensive range of cloud services from top-tier cloud providers and services for workload modernization, management, security, and transformation.
Azure Cloud Native Application Development: 4-Week Engagement: The App of the Future (AOTF) workshop by Optimus streamlines the process of envisioning and prototyping applications on Microsoft Azure. It offers a comprehensive solution by delivering detailed architecture designs and tailored Azure cost estimates, empowering businesses to bridge the gap between concept and execution.
Azure Landing Zone with Telefonica: 2-Week Implementation: Telefonica offers a cloud adoption framework for Microsoft Azure that includes a foundational landing zone with network architecture, security compliance, and automation solutions. With experienced architects and a validated framework, Telefonica can quickly deploy a reliable and scalable cloud environment.
Azure Migration with Telefonica: Telefonica offers a seamless migration process to Microsoft Azure with minimal impact on business operations. The process uses continuous replication techniques to ensure business logic remains active until the cutover window. The estimated cost is based on 15 servers and migration outside of business hours.
Azure Migration (CSP): 8-Week Implementation: Optimus Information offers an 8-week migration services package for Microsoft Azure that aligns business objectives with a tailored cloud adoption plan. The package includes a workshop to understand requirements and goals, in-depth analysis of existing workload, and a detailed proposal with timelines and costs.
Azure Migration: 8-Week Implementation: Optimus Information offers a comprehensive 8-week solution for organizations to smoothly transition to Microsoft Azure. The service includes workshops, analysis, migration plans, and cost proposals to ensure a streamlined and value-driven adoption of cloud technology.
Azure Modernization: 2 Hour Workshop: This workshop from Cloud Direct offers a 1:1 session to understand cloud-first and migration strategies. It focuses on building a Microsoft Azure modernization road map and bridging the gap between current and target state. The deliverables include expert guidance, business context discussion, clarity on next steps, and building blocks for the business case.
Azure OpenAI Service: 2-Week Proof of Concept: Optimus Information offers a 2-week proof of concept that combines the enterprise-grade capabilities of Microsoft Azure with OpenAI’s generative AI model. The solution includes pre-trained generative AI models, customization of AI models with business data, built-in tools for data security, and enterprise-grade security with role-based access control.
Azure Optimization: 2-Hour Workshop: This optimization workshop offers a 1:1 session on the Microsoft Azure Well-Architected Framework and cloud strategy. It provides guidance on building an optimization road map and bridging the gap between current and targeted state. The workshop includes insight from a cloud specialist, two-way discussion, clarity on next steps, and building blocks for a business case.
Azure Solution Assessment: 1-Day Workshop: This service from TwinCap First offers a tailored system for businesses, including a comprehensive assessment and data-driven recommendations. It also provides a technical deep dive into Microsoft Azure solutions, helping businesses define goals, identify risks, assess software requirements, and create an implementation road map.
Azure Virtual Desktop: 5-Day Proof of Concept: This service from Cisilion provides a proof-of-value deployment for organizations to test and understand the benefits of Microsoft Azure Virtual Desktop. The service includes setup and follow-up sessions to review feedback and agree on next steps.
Build and Modernize AI Apps: Lantern’s Build and Modernize AI Apps consulting services help organizations improve customer and employee experiences by integrating Microsoft Azure AI into their applications. It offers services for all stages of the digital innovation lifecycle, including advisory, strategy, envisioning, build and launch, and enhance, support, and optimize.
Cloud Centre of Excellence (CCoE): 2 Hour Workshop: The Cloud Centre of Excellence workshop from Cloud Direct helps build cloud maturity and aligns with security, compliance, and management policies. The workshop includes a two-way discussion, clarity on next steps, and building blocks for a business case.
Cloud Virtual Machine Service Azure Virtual Desktop: 2-Month Proof of Concept: T-Systems’ Bundle Quickstart offers Microsoft Azure Virtual Desktop configuration services, including onboarding workshop, network setup, resource groups, host pool, app group, and workspace implementation.
Data and AI: 2-hour Workshop: The Data and AI Workshop from Cloud Direct offers a 1:1 session on Microsoft Azure data management services, including machine learning, Data Factory, Databricks, and Data Lake. It provides insight from a cloud specialist, two-way discussion, clarity on next steps, and building blocks for a business case.
Design Thinking for AI: 8-Hour Workshop: Intellias’ workshop guides teams toward developing user-centric AI solutions with a focus on Microsoft Azure integration. The workshop is structured into distinct stages, including empathic exploration, ideation and brainstorming, solution drafting, and feedback and refinement.
Fabric: 4-Week Proof of Concept: This proof of concept from iLink Systems offers a rapid experimentation bench to showcase modern reporting infrastructure using OneLake with Microsoft Fabric and Power BI. It includes a 4-week implementation plan for creating data pipelines, semantic models, and data visualization, resulting in high-performance reports and adherence to BI best practices.
Fabric Data Modernization: This service from iLink Systems offers a three-phased approach to migrate and unify datasets from varied on-premises or other cloud systems for further processing using Microsoft Fabric. It ensures seamless communication between different data and analytics solutions, faster migration, and cost optimization.
LLMOps: 12-Week Implementation: Spyglass MTG offers a 12-week program for efficient management and performance measurement of generative AI prompts. It includes monitoring, accuracy assessment, stability analysis, customizable alerts, and performance metrics. The program also provides a comprehensive operations review, LLM use case and usage review, LLMOps strategy, design, and setup and configuration of performance and prompt evaluation tools.
Managed XDR for Financial Services: FIS uses Microsoft Extended Detection and Response (XDR) technology to protect financial data from diverse threats. The solution aggregates security data from various sources and expedites incident responses. FIS’ cybersecurity expertise ensures financial institutions benefit from advanced technology and security proficiency, safeguarding digital assets and sensitive financial data.
Market AI: 8-Week Proof of Concept: LTIMindtree offers a customizable solution for identifying and calculating the share of a target brand’s SKUs visibility on the shelf in a store against the competition. It provides prebuilt algorithms, scalable MLOps and APIs, and prescriptive actionable alerts.
Microsoft 365 Azure Tenant and Entra ID Management: 2-Month Proof of Concept: T-Systems’ Bundle Quickstart offers Microsoft Azure tenant support with minimal configuration of Entra ID for testing requirements. Services include onboarding workshop, Entra ID user and group creation, license linking, and portal branding.
Microsoft 365 Azure Universal Print Management: 2-Month Proof of Concept: T-Systems supports customers in configuring Microsoft Azure Universal Print for self-service testing. The bundle includes an onboarding workshop, implementation of one print queue and share, and one package for automatic deployment.
Microsoft Fabric: 1-Week Proof of Concept: Microsoft Fabric is an all-in-one analytics platform for businesses that covers everything from data movement to data science, real-time analytics, and business intelligence. This proof of concept from InSpark will show you how Microsoft Fabric can transform your unstructured data from various sources into real business value.
Microsoft Sentinel: 5-Week Workshop: Advens offers a 5-week workshop to help organizations understand and protect against the risks associated with cloud usage. The workshop includes threat monitoring, analysis, and improvement planning using Microsoft Sentinel. Available in French or English.
NSEIT SQLake Framework: SQLake is a serverless, SQL-based framework that provides end-to-end data lake solutions for businesses. It addresses challenges such as data fragmentation, operational inefficiencies, security vulnerabilities, and scalability hurdles. The framework offers unified data management, enhanced security protocols, efficient error handling, transparent audit mechanisms, and adaptable code base.
NSEIT_ChurnWise (US): ChurnWise is a customer churn propensity ML model that analyzes and interprets customer churn by harnessing the power of demographic and behavioral attributes. It computes churn propensity scores and categorizes customers into high, medium, and low segments based on churn probability, offering directional insights into the importance of model attributes.
NSEIT_ChurnWise: ChurnWise is a customer churn propensity ML model that analyzes and interprets customer churn by harnessing the power of demographic and behavioral attributes. It computes churn propensity scores and categorizes customers into high, medium, and low segments based on churn probability, offering directional insights into the importance of model attributes.
Secure Your Microsoft Azure Multi-Cloud Environments: 5-Week Workshop: Hitachi Solutions offers an Azure Multi-Cloud Security Workshop that helps organizations identify threats and vulnerabilities in their hybrid and multi-cloud environments and develop a plan to improve their security posture using Microsoft Security solutions.
Security Audit: 2-Hour Workshop: This workshop from Cloud Direct offers expert guidance from a cloud evangelist to enhance cloud security measures. It provides customized Microsoft solutions tailored to the organization’s unique challenges and actionable insights for immediate security improvements.
Skygrade Application Modernization: 10-Week Implementation: Cognizant Skygrade for Microsoft Azure helps enterprises modernize apps and infrastructure at a rapid pace with cloud optimization built into the process. It uses Azure at the core of a multi-cloud architecture to accelerate modernization and reduce risk.
Sunshine Migrate: 6-Week Implementation: Sunshine Migrate from LTIMindtree accelerates cloud migration on Microsoft Azure Synapse Analytics, reducing manual efforts and risks. It automates source discovery, schema conversion, and data migration using native capabilities or Azure Data Factory. The tool supports object, data, and script migration, and offers an automated validation toolkit.
Zoi Cloud Native Foundations Service: Zoi offers cloud native foundation services to accelerate your company’s cloud adoption journey. Its approach focuses on delivering value by providing reliable Microsoft Azure infrastructure, migrating and modernizing applications, and collaborating with Azure experts. The services include cloud native architecture, migration, security, and automation.
Contact our partners
Apache Spark and TensorFlow on CentOS Stream 9 with Finance-Related Python packages
Apache Web Server on Ubuntu 20.04
Apache Web Server on Ubuntu 22.04
Application Modernization: 2-Week Assessment
AutomationEdge Hyperautomation Platform
Azure File and Backup Implementation Service
Azure Management Assessment by CBTS
Azure Virtual Desktop (AVD) Deployment
Azure Well-Architected Framework (AWAF) Assessment
CIS Hardened Images on Oracle Linux
Cloud Adoption with Telefonica: 6-Week Assessment
Cloud Security Operation Center by glueckkanja
Control Room for Power BI by BI Samurai
Data Discovery: 1-Day Assessment
Data Governance with Microsoft Purview: 3-Day Assessment
DataGenie – Your Business Smart Watch
Digital Twin Consulting Services: 1-Hour Briefing
Eigen – Intelligent Document Processing and Data Extraction
Enlighten Custom Visual License: Dev Environment
Enterprise Data Platforms in Azure: 1-Hour Briefing
eXperts Hybrid Project Management Service
EY Digital Identity Solution Supported by Microsoft Entra
GitHub Copilot with Cloud Intel: 4-Week Assessment
GlobalRapide for Endpoint Management for Teams Rooms
Greenfield Landing Zone Deployment
HARC Assessment Service: 4-Week Evaluation
HAWK: AI Transaction and Customer Monitoring
HiddenLayer Machine Learning Detection and Response (MLDR)
Ivanti Neurons for Secure Access
Ivanti Neurons for Zero Trust Access
Jenkins on Windows Server 2016 Powered by Globalsolutions
Kanboard Server on Debian 10 Minimal
Kanboard Server on Debian 11 Minimal
Kanboard Server on Ubuntu 18.04 Minimal
Kanboard Server on Ubuntu 20.04 Minimal
LimeSurvey on Windows Server 2016 Powered by Globalsolutions
LimeSurvey on Windows Server 2019 Powered by Globalsolutions
Multifactor and Passwordless Authentication (MFA)
ODBC for Azure Synapse Analytics
OmniAnalytics for Dynamics 365 Business Central
ONNXRT – Ampere Optimized Framework on Ubuntu
OpenVPN Server on Oracle Linux 8.6
Python Connector for Dynamics 365
PyTorch – Ampere Optimized Framework on Ubuntu
Red Hat Enterprise Linux 8.6 Minimal with Trac System Server
Red Hat Advanced Cluster Security and Management for Kubernetes Subscriptions on OpenShift (US)
Red Hat Advanced Cluster Security and Management for Kubernetes Subscriptions on OpenShift
Rocky Linux 8.9 LVM-partitioned
Rocky Linux 9.3 LVM-partitioned
RustDesk Server on Debian 10 Minimal
RustDesk Server on Debian 11 Minimal
RustDesk Server on Ubuntu 18.04 Minimal
RustDesk Server on Ubuntu 20.04 Minimal
Security and Compliance Assessment
SharePoint Metadata Sync with Dynamics 365 Using Dataverse
SmartDocumentor Cloud (with Azure AI Document Intelligence)
Studio In a Box: 1-Hour Briefing
Tampnet Offshore Private Mobile Network (4G/5G)
Techila Distributed Computing Engine
TensorFlow – Ampere Optimized Framework on Ubuntu
Ubuntu 18.04 Minimal with Trac System Server
Videospace Video Search as a Service (VSaaS)
Wipro GenAI Investor Onboarding
Wipro Live Workspace Cognitive Automation
This content was generated by Microsoft Azure OpenAI and then revised by human editors.
Microsoft Tech Community – Latest Blogs –Read More
Change Azure Policy assignment’s system assigned managed identity location
When Azure Policy starts a template deployment while evaluating deployIfNotExists policies, or modifies a resource while evaluating modify policies, it does so using a managed identity associated with the policy assignment. Policy assignments use managed identities for Azure resource authorization. You can use either a system-assigned managed identity created by the policy service or a user-assigned identity provided by the user.
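The identity and its region are set when the assignment is created. As a minimal, hedged sketch (the policy definition name, assignment name, and scope below are hypothetical placeholders, not values from this article), creating an assignment with a system assigned managed identity looks like this:
# Sketch only: create a policy assignment with a system assigned managed identity.
# "deploy-diagnostics", "diag-assignment", and the subscription scope are hypothetical placeholders.
$definition = Get-AzPolicyDefinition -Name "deploy-diagnostics"
New-AzPolicyAssignment -Name "diag-assignment" -DisplayName "Deploy diagnostics (sketch)" -PolicyDefinition $definition -Scope "/subscriptions/00000000-0000-0000-0000-000000000000" -IdentityType SystemAssigned -Location "westeurope"
# The -Location value determines the region in which the system assigned identity is created.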
Each Azure Policy assignment can be associated with only one managed identity and, after a managed identity has been added to a policy assignment, only some of its identity-related settings can be edited. For instance, the type of managed identity can be switched between system assigned and user assigned, but if a system assigned managed identity was previously selected and created, its location can’t be changed.
The Azure Portal, CLI, and PowerShell rely on the resource providers’ REST APIs, in this case on the Policy Assignments – Update REST API (Azure Policy), which allows updating the identity type but does not allow changing the location of an existing system assigned managed identity.
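For reference, the kind of change the Update API does permit can be sketched with Invoke-AzRestMethod; the scope, assignment name, user assigned identity resource ID, and API version below are assumptions chosen for illustration only:
# Sketch only: switch the assignment's identity type via the Update (PATCH) API.
# Every ID below is a hypothetical placeholder.
$scope = "/subscriptions/00000000-0000-0000-0000-000000000000"
$assignmentName = "diag-assignment"
$uaIdentityId = "/subscriptions/00000000-0000-0000-0000-000000000000/resourceGroups/rg-identities/providers/Microsoft.ManagedIdentity/userAssignedIdentities/policy-mi"
$body = @{ identity = @{ type = "UserAssigned"; userAssignedIdentities = @{ $uaIdentityId = @{} } } } | ConvertTo-Json -Depth 5
$path = "$scope/providers/Microsoft.Authorization/policyAssignments/$assignmentName" + "?api-version=2022-06-01"
Invoke-AzRestMethod -Method PATCH -Path $path -Payload $body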
Therefore, to change the system assigned managed identity location, a new policy assignment has to be created. One option is to duplicate the policy assignment in the Azure Portal and specify the new system assigned managed identity location in the remediation section (which triggers the creation of a new system assigned managed identity). Alternatively, a custom script (using CLI or PowerShell, for instance) can read the existing policy assignment’s properties and create a new policy assignment with the same values except for the system assigned managed identity location. E.g.:
<# Disclaimer: This script is not supported under any Microsoft standard support program or service. This script is provided AS IS without warranty of any kind. Microsoft further disclaims all implied warranties including, without limitation, any implied warranties of merchantability or of fitness for a particular purpose. The entire risk arising out of the use or performance of the script and documentation remains with you. In no event shall Microsoft, its authors, or anyone else involved in the creation, production, or delivery of the script be liable for any damages whatsoever (including, without limitation, damages for loss of business profits, business interruption, loss of business information, or other pecuniary loss) arising out of the use of or inability to use the current script or documentation, even if Microsoft has been advised of the possibility of such damages.
Additional note: The following script generates a new policy assignment id. Please take into consideration that any existing solution(s) built around the policy assignment id might be impacted.#>
$subscriptionId = "subscriptionId"
$policyAssignmentId = "policyAssignmentId"
$newSystemAssignedManagedIdentityLocation = "location"
Connect-AzAccount
Set-AzContext -Subscription $subscriptionId
# Check module requirements
# Check if Az.Accounts module version 2.13.2 (or higher) is installed
if (-not((Get-Module -ListAvailable -Name Az.Accounts | Select-Object -ExpandProperty Version -First 1) -ge ([System.Version]"2.13.2"))) {
# Install Az.Accounts module if not installed
Install-Module -Name Az.Accounts -Force
}
# Check if Az.Resources module version 6.12.1 (or higher) is installed
if (-not((Get-Module -ListAvailable -Name Az.Resources | Select-Object -ExpandProperty Version -First 1) -ge ([System.Version]"6.12.1"))) {
# Install Az.Resources module if not installed
Install-Module -Name Az.Resources -Force
}
Import-Module -Name Az.Accounts -MinimumVersion ([System.Version]"2.13.2")
Import-Module -Name Az.Resources -MinimumVersion ([System.Version]"6.12.1")
# Get policy assignment
$policyAssignment = Get-AzPolicyAssignment -Id $policyAssignmentId
# Get policy assignment’s policy definition
$policyDefinition = Get-AzPolicyDefinition -Id $policyAssignment.Properties.PolicyDefinitionId
# Get policy assignment’s managed identity
$policyIdentity = $policyAssignment.Identity
# Get policy’s managed identity role assignments
$policyIdentityRoleAssignments = Get-AzRoleAssignment -ObjectId $policyIdentity.PrincipalId
# Create new policy assignment’s parameters from previous assignment
$newPolicyAssignmentParameters = @{}
$policyAssignmentParametersObject = $policyAssignment.Properties.Parameters.psobject.Properties | Select-Object -ExpandProperty Value -Property Name
$policyAssignmentParametersObject | ForEach-Object { $newPolicyAssignmentParameters[$_.Name] = $_.Value }
# Generate a 24 character long alphanumeric string to be used on the new policy assignment as id
$newPolicyAssignmentName = -join ((48..57) + (97..122) | Get-Random -Count 24 | % {[char]$_})
# Create new policy assignment
$newPolicyAssignment = New-AzPolicyAssignment -Name $newPolicyAssignmentName -DisplayName $policyAssignment.Properties.DisplayName -PolicyDefinition $policyDefinition -Scope $policyAssignment.Properties.Scope -PolicyParameterObject $newPolicyAssignmentParameters -IdentityType SystemAssigned -Location $newSystemAssignedManagedIdentityLocation
# Get new policy assignment’s managed identity
$newPolicyIdentity = $newPolicyAssignment.Identity
if($newPolicyAssignment -ne $null) {
# Create new policy’s managed identity role assignments
foreach ($roleAssignment in $policyIdentityRoleAssignments) {
New-AzRoleAssignment -ObjectId $newPolicyIdentity.PrincipalId -ObjectType "ServicePrincipal" -Scope $roleAssignment.Scope -RoleDefinitionName $roleAssignment.RoleDefinitionName
}
# Delete previous policy assignment
Remove-AzPolicyAssignment -InputObject $policyAssignment
}
In the case of a policy initiative assignment, it is not possible to duplicate the policy assignment from the Azure Portal. Again, a custom script can be used that gets the existing policy assignment’s properties and creates a new policy assignment with the same properties’ values except for the system assigned managed identity location. E.g.:
<# Disclaimer: This script is not supported under any Microsoft standard support program or service. This script is provided AS IS without warranty of any kind. Microsoft further disclaims all implied warranties including, without limitation, any implied warranties of merchantability or of fitness for a particular purpose. The entire risk arising out of the use or performance of the script and documentation remains with you. In no event shall Microsoft, its authors, or anyone else involved in the creation, production, or delivery of the script be liable for any damages whatsoever (including, without limitation, damages for loss of business profits, business interruption, loss of business information, or other pecuniary loss) arising out of the use of or inability to use the current script or documentation, even if Microsoft has been advised of the possibility of such damages.
Additional note: The following script generates a new policy assignment id. Please take into consideration that any existing solution(s) built around the policy assignment id might be impacted.#>
$subscriptionId = "subscriptionId"
$policyAssignmentId = "policyAssignmentId"
$newSystemAssignedManagedIdentityLocation = "location"
Connect-AzAccount
Set-AzContext -Subscription $subscriptionId
# Check module requirements
# Check if Az.Accounts module version 2.13.2 (or higher) is installed
if (-not((Get-Module -ListAvailable -Name Az.Accounts | Select-Object -ExpandProperty Version -First 1) -ge ([System.Version]"2.13.2"))) {
# Install Az.Accounts module if not installed
Install-Module -Name Az.Accounts -Force
}
# Check if Az.Resources module version 6.12.1 (or higher) is installed
if (-not((Get-Module -ListAvailable -Name Az.Resources | Select-Object -ExpandProperty Version -First 1) -ge ([System.Version]"6.12.1"))) {
# Install Az.Resources module if not installed
Install-Module -Name Az.Resources -Force
}
Import-Module -Name Az.Accounts -MinimumVersion ([System.Version]"2.13.2")
Import-Module -Name Az.Resources -MinimumVersion ([System.Version]"6.12.1")
# Get policy assignment
$policyAssignment = Get-AzPolicyAssignment -Id $policyAssignmentId
# Get policy assignment's policy set definition (initiative)
$policySetDefinition = Get-AzPolicySetDefinition -Id $policyAssignment.Properties.PolicyDefinitionId
# Get policy assignment’s managed identity
$policyIdentity = $policyAssignment.Identity
# Get policy’s managed identity role assignments
$policyIdentityRoleAssignments = Get-AzRoleAssignment -ObjectId $policyIdentity.PrincipalId
# Create new policy assignment’s parameters from previous assignment
$newPolicyAssignmentParameters = @{}
$policyAssignmentParametersObject = $policyAssignment.Properties.Parameters.psobject.Properties | Select-Object -ExpandProperty Value -Property Name
$policyAssignmentParametersObject | ForEach-Object { $newPolicyAssignmentParameters[$_.Name] = $_.Value }
# Generate a 24 character long lower case alphanumeric string to be used on the new policy assignment as id
$newPolicyAssignmentName = -join ((48..57) + (97..122) | Get-Random -Count 24 | % {[char]$_})
# Create new policy assignment
$newPolicyAssignment = New-AzPolicyAssignment -Name $newPolicyAssignmentName -DisplayName $policyAssignment.Properties.DisplayName -PolicySetDefinition $policySetDefinition -Scope $policyAssignment.Properties.Scope -PolicyParameterObject $newPolicyAssignmentParameters -IdentityType SystemAssigned -Location $newSystemAssignedManagedIdentityLocation
# Get new policy assignment's managed identity
$newPolicyIdentity = $newPolicyAssignment.Identity
if($newPolicyAssignment -ne $null) {
# Create new policy's managed identity role assignments
foreach ($roleAssignment in $policyIdentityRoleAssignments) {
New-AzRoleAssignment -ObjectId $newPolicyIdentity.PrincipalId -ObjectType "ServicePrincipal" -Scope $roleAssignment.Scope -RoleDefinitionName $roleAssignment.RoleDefinitionName
}
# Delete previous policy assignment
Remove-AzPolicyAssignment -InputObject $policyAssignment
}
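Either version of the script can be verified afterwards by reading the new assignment back. A minimal check, assuming the variables from the script above are still in scope:
# Sketch only: confirm the new assignment and its system assigned identity location.
$check = Get-AzPolicyAssignment -Name $newPolicyAssignmentName -Scope $policyAssignment.Properties.Scope
$check.Location    # expected: the value of $newSystemAssignedManagedIdentityLocation
$check.Identity    # principal details of the newly created system assigned identity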
Microsoft Tech Community – Latest Blogs –Read More
Practice mode is now available in Microsoft Forms
We’re excited to announce that Forms now supports practice mode, enhancing students’ learning process by offering a new way to review, test, and reinforce their knowledge. Practice mode is only available for quizzes. You can also try out practice mode from this template.
Instant feedback after answering each question
In practice mode, questions will be displayed one at a time. Students will promptly receive feedback after answering each question, indicating whether their answer is right or wrong.
Try multiple times for the correct answer
If students provide an incorrect answer, they are given the opportunity to reconsider and make another attempt until they arrive at the correct one, allowing for immediate re-learning and strengthening their grasp of the material.
Encouragement and autonomy during practice
Whether students answer a question correctly or not, they will receive an encouraging message, giving them a positive practice experience. They also have the autonomy to learn at their own pace: if they answer a question incorrectly, they can choose to retry, view the correct answer, or skip the question.
Recap questions
Once students finish the practice, they can recap all the questions, along with the correct answers, providing a comprehensive overview to help gauge their overall performance.
Enter practice mode
Practice mode is only available for quizzes. You can turn it on from the “…” menu in the upper-right corner. Once you distribute the quiz, recipients will automatically enter practice mode. Try out practice mode from this template now!
Microsoft Tech Community – Latest Blogs –Read More