How Augury’s AI-powered IoT monitoring solution in Azure Marketplace can deliver 3x-10x ROI
In this guest blog post, Sara Aldworth, Content Marketing Manager for Augury, discusses the benefits of manufacturing IoT monitoring. For more than 20 years, Sara has worked with tech start-ups and established SaaS companies, communicating the positive impact of their business solutions. Her most recent work has focused on the intersection of AI and IoT and its power to improve the health of humans and the health of machines in heavy industry.
Manufacturers added $2.9 trillion of value to the U.S. economy in 2023. With that much money at stake, manufacturers must keep their production high and their downtime low. Monitoring the real-time health of their rotating machines is necessary to ensure the consistent, reliable delivery of products promised to customers.
Benefits of AI-powered IoT machine health monitoring
Thanks to the internet of things (IoT) and artificial intelligence (AI), maintenance teams can leverage predictive, prescriptive condition-based monitoring to get ahead of machine failure. Benefits include, but are not limited to, cost savings by:
preventing unexpected breakdowns and allowing teams to plan their downtime for minimal impact on the business;
optimizing asset care to extend the lifespan of expensive machinery and reduce the need for costly repairs/replacements;
maximizing yield and capacity by fine-tuning processes and improving overall performance;
and reducing waste, loss, and emissions by improving energy usage to meet sustainability goals.
These benefits are achievable through AI-driven machine health solutions. Microsoft partner Augury’s Machine Health, available on Azure Marketplace, is one such example and has been adopted by some of the most innovative companies in the world:
A leading global chemical manufacturer realized 7x ROI after implementing Augury at several pilot sites, with significant wins in preventing unplanned downtime and driving increased productivity.
Similarly, an industry leader in the U.S. pet food manufacturing vertical averted 700+ hours of downtime using Augury Machine Health to monitor their equipment and achieved an estimated $1M+ in cost savings.
Gaining 3x-10x ROI through data-driven decisions
Maintenance and reliability teams use Augury Machine Health to glean real-time insights into machine performance and to make data-driven decisions on caring for their equipment. Augury Machine Health combines advanced IoT sensors, AI expertise, and change management resources to create tangible business results, with most customers seeing 3x-10x ROI in a matter of months.
Augury’s AI has 99.9%+ diagnostic accuracy and has been trained on more than 450 million recorded machine hours, with a vast and growing database of unique machine signatures across over 65 distinct machine types.
Additionally, Augury’s team of Cat III & IV vibration analysts are on standby to assist maintenance and reliability teams as they create actionable plans to restore their equipment to optimal health.
How to gain additional benefits through Azure Marketplace
In early 2022, Augury migrated its cloud services to Microsoft Azure. Azure provides storage for Augury and enables our AI and machine learning algorithms to analyze the data collected from our customers. Augury has worked hand-in-hand with Microsoft sellers to transact multiple deals through the Azure Marketplace.
Subscribing to the Augury solution through Azure Marketplace has multiple benefits for customers. Those who have completed multimillion-dollar transactions with Augury through the marketplace experienced:
Microsoft Azure Consumption Commitment (MACC) contract decrement;
consolidation of multiple purchase orders (POs);
access to other budgets allocated for Microsoft;
streamlined procurement;
and security and governance through Azure.
Take, for example, one customer that had been trying to finalize its MACC contract for months. By purchasing Augury’s Machine Health through the marketplace, the customer was able to:
increase and meet their required Azure usage goals;
sign a larger MACC contract to receive more benefits and funding from Microsoft;
and decrement their MACC contract instantaneously with the Augury purchase.
Given the large number of manufacturers who leverage Azure for their internal services, the partnership between Microsoft and Augury creates great synergies for customers, starting with the simple, seamless purchasing process.
To learn more about Augury or request a demo, visit augury.com.
Easily Manage Privileged Role Assignments in Microsoft Entra ID Using Audit Logs
One of the best practices for securing your organization’s data is to follow the principle of least privilege, which means granting users the minimum level of permissions they need to perform their tasks. Microsoft Entra ID helps you apply this principle by offering a wide range of built-in roles as well as allowing you to create custom roles and assign them to users or groups based on their responsibilities and access needs. You can also use Entra ID to review and revoke any role assignments that are no longer needed or appropriate.
It can be easy to lose track of role assignments if admin activities are not carefully audited and monitored. Routinely reviewing role assignments and generating alerts on new ones are effective ways to track and manage privileged role assignments.
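As a simple illustration (assuming audit logs are already flowing to a Log Analytics Workspace or Sentinel, as covered later in this post), a query along these lines can surface new role assignments to alert on. Note that the exact operation names can vary, for example when Privileged Identity Management is in use:
AuditLogs
| where TimeGenerated > ago(1d)
| where OperationName in ("Add member to role", "Add eligible member to role")
| project TimeGenerated, OperationName, InitiatedBy, TargetResources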
Chances are that when a user with privileged roles is asked about them, they’ll say they need the role. This may be true; however, users often believe they need those permissions to carry out certain tasks when a role with fewer permissions would suffice. For example, a Global Administrator can reset user passwords, but so can several roles with far fewer permissions.
Defining privileged permissions
Privileged permissions in Entra ID can be defined as "permissions that can be used to delegate management of directory resources to other users, modify credentials, authentication or authorization policies, or access restricted data." Each Entra ID role has a defined list of permissions; when an identity is granted a role, it inherits the permissions defined in that role.
It’s important to check the permissions of these roles. The permissions defined in all built-in roles can be found here. For example, a few permissions differ between the Privileged Authentication Administrator role and the Authentication Administrator role, giving the former more reach in Entra ID. The differences between the authentication roles can be viewed here.
The end-user administration roles are another example of similar roles with different permissions. The differences and nuances between these roles are outlined in detail here.
Auditing activity
To decide if a user really needs a role, it’s crucial to monitor their activities and find the role with the least privilege that allows them to carry out their work. You’ll need Entra ID audit logs for this. Entra ID audit logs can either be sent to a Log Analytics Workspace or connected to a Sentinel instance.
There are two methods for retrieving the events carried out by admin accounts. The first makes use of the IdentityInfo table, which is only available in Sentinel after enabling User and Entity Behavior Analytics (UEBA). If you aren’t using UEBA in Sentinel, or if you’re querying a Log Analytics Workspace, use the second method described under the “Using Log Analytics Workspace” heading below.
Using Microsoft Sentinel
To ingest Entra ID audit logs into Microsoft Sentinel, the Microsoft Entra ID data connector must be enabled, and the Audit Logs must be ticked as seen below.
Figure 1 Entra ID data connector in Sentinel with Audit logs enabled
The IdentityInfo table stores user information gathered by UEBA. Therefore, it also includes the Entra ID roles a user has been assigned. This makes it very simple to get a list of accounts that have been assigned privileged roles.
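For example, a minimal query such as this sketch (the same IdentityInfo pattern used in the full queries below) lists the accounts that currently hold any Entra ID role:
IdentityInfo
| where TimeGenerated > ago(7d)
| where strlen(tostring(AssignedRoles)) > 2 // non-empty role array
| summarize arg_max(TimeGenerated, *) by AccountUPN
| project AccountUPN, AccountObjectId, AssignedRoles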
The query below will give a unique list of activities an account has taken, as well as which roles the account has been assigned:
AuditLogs
| where TimeGenerated > ago(90d)
| extend ActorName = iif(
    isnotempty(tostring(InitiatedBy["user"])),
    tostring(InitiatedBy["user"]["userPrincipalName"]),
    tostring(InitiatedBy["app"]["displayName"])
)
| extend ActorID = iif(
    isnotempty(tostring(InitiatedBy["user"])),
    tostring(InitiatedBy["user"]["id"]),
    tostring(InitiatedBy["app"]["id"])
)
| where isnotempty(ActorName)
| join (IdentityInfo
    | where TimeGenerated > ago(7d)
    | where strlen(tostring(AssignedRoles)) > 2
    | summarize arg_max(TimeGenerated, *) by AccountUPN
    | project AccountObjectId, AssignedRoles)
    on $left.ActorID == $right.AccountObjectId
| summarize Operations = make_set(OperationName) by ActorName, ActorID, Identity, tostring(AssignedRoles)
| extend OperationsCount = array_length(Operations)
| project ActorName, AssignedRoles, Operations, OperationsCount, ActorID, Identity
| sort by OperationsCount desc
This will return results for all accounts that carried out tasks in Entra ID and may include many operations that are not privileged. To filter for specific Entra ID roles, run the following query, where the roles are defined in a list. Three roles have been added as examples, but this list can and should be expanded to include more roles:
let PrivilegedRoles = dynamic(["Global Administrator",
    "Security Administrator",
    "Compliance Administrator"
]);
AuditLogs
| where TimeGenerated > ago(90d)
| extend ActorName = iif(
    isnotempty(tostring(InitiatedBy["user"])),
    tostring(InitiatedBy["user"]["userPrincipalName"]),
    tostring(InitiatedBy["app"]["displayName"])
)
| extend ActorID = iif(
    isnotempty(tostring(InitiatedBy["user"])),
    tostring(InitiatedBy["user"]["id"]),
    tostring(InitiatedBy["app"]["id"])
)
| where isnotempty(ActorName)
| join (IdentityInfo
    | where TimeGenerated > ago(7d)
    | where strlen(tostring(AssignedRoles)) > 2
    | summarize arg_max(TimeGenerated, *) by AccountUPN
    | project AccountObjectId, AssignedRoles)
    on $left.ActorID == $right.AccountObjectId
| where AssignedRoles has_any (PrivilegedRoles)
| summarize Operations = make_set(OperationName) by ActorName, ActorID, Identity, tostring(AssignedRoles)
| extend OperationsCount = array_length(Operations)
| project ActorName, AssignedRoles, Operations, OperationsCount, ActorID, Identity
| sort by OperationsCount desc
Once the query is run, the results will give insights into the activities performed in your Entra ID tenant and what roles those accounts have. In the example below, the top two results don’t pose any problems. However, the third row contains a user that has the Global Administrator role and has created a service principal. The permissions needed to create a service principal can be found in roles less privileged than the Global Administrator role. Therefore, this user can be given a less privileged role. To find out which role can be granted, check this list, which contains the least privileged role required to carry out specific tasks in Entra ID.
Figure 2 Actions taken by users in Entra ID
Using Log Analytics Workspace
Figure 3 Configuring the forwarding of Entra ID Audit logs to a Log Analytics Workspace
To ingest Entra ID audit logs into a Log Analytics Workspace, follow these steps.
Because there is no table that contains the roles an identity has been granted, you’ll need to add the list of users to the query and filter them. There are multiple ways to get a list of users who have been assigned a specific Entra ID role. A quick way to do this is to go to Entra ID and then select Roles and administrators. From there, select the role and export the identities that have been assigned to it. It’s important to have the User Principal Names (UPNs) of the privileged users. You’ll need to add these UPNs, along with the roles the user has, to the query. Some examples have been given in the query itself. If the user has more than one role, then all roles must be added to the query.
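As one illustrative approach (assuming the Microsoft.Graph PowerShell module is installed and you can consent to the RoleManagement.Read.Directory scope; the role name below is just an example), the member list can also be exported programmatically:
Connect-MgGraph -Scopes "RoleManagement.Read.Directory"
# Look up the activated directory role by display name
$role = Get-MgDirectoryRole -Filter "displayName eq 'Global Administrator'"
# Print the UPN of each member
Get-MgDirectoryRoleMember -DirectoryRoleId $role.Id |
    ForEach-Object { $_.AdditionalProperties.userPrincipalName }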
datatable(UserPrincipalName:string, Roles:dynamic) [
    "admin@contoso.com", dynamic(["Global Administrator"]),
    "admin2@contoso.com", dynamic(["Global Administrator", "Security Administrator"]),
    "admin3@contoso.com", dynamic(["Compliance Administrator"])
]
| join (AuditLogs
    | where TimeGenerated > ago(90d)
    | extend ActorName = iif(
        isnotempty(tostring(InitiatedBy["user"])),
        tostring(InitiatedBy["user"]["userPrincipalName"]),
        tostring(InitiatedBy["app"]["displayName"])
    )
    | extend ActorID = iif(
        isnotempty(tostring(InitiatedBy["user"])),
        tostring(InitiatedBy["user"]["id"]),
        tostring(InitiatedBy["app"]["id"])
    )
    | where isnotempty(ActorName) ) on $left.UserPrincipalName == $right.ActorName
| summarize Operations = make_set(OperationName) by ActorName, ActorID, tostring(Roles)
| extend OperationsCount = array_length(Operations)
| project ActorName, Operations, OperationsCount, Roles, ActorID
| sort by OperationsCount desc
Once you run the query, the results will give insights into the activities performed in your Entra ID tenant by the users you have filtered for. In the example below, the top two results can cause problems. Both have the Global Administrator role, but their operations don’t require that role. The permissions needed for these operations can be found in roles less privileged than the Global Administrator role. Therefore, these users can be given a less privileged role. To find out which role can be granted, check this list, which contains the least privileged role required to carry out specific tasks in Entra ID.
Figure 4 Actions taken by users in Entra ID
If one of these users still requires the Global Administrator role, then their Security Administrator role becomes redundant, since Global Administrator already includes all of its permissions.
Conclusion
Keeping accounts with privileges they don’t require leaves your attack surface larger than it needs to be. By ingesting Entra ID audit logs, you can query for and identify users who hold unnecessary, over-privileged roles, and then find a suitable alternative role for them.
Timur Engin
Learn more about Microsoft Entra:
See recent Microsoft Entra blogs
Dive into Microsoft Entra technical documentation
Learn more at Azure Active Directory (Azure AD) rename to Microsoft Entra ID
Join the conversation on the Microsoft Entra discussion space and Twitter
Learn more about Microsoft Security
Logic Apps Aviators Newsletter – January 2024
In this issue:
Ace Aviator of the Month
Customer Corner
News from our product group
News from our community
Ace Aviator of the Month
January’s Ace Aviator: Mark Brimble
What is your role and title? What are your responsibilities associated with your position?
My job title is integration architect at Bidone. I was responsible for moving their EDI solutions from BizTalk Server to Azure, which is now done. I am now developing the long-term strategy for Bidone’s EDI cloud architecture.
Can you provide some insights into your day-to-day activities and what a typical day in your role looks like?
My day at Bidone varies; it can include assessing requests for new EDI interfaces, reviewing existing architecture against the latest Azure functionality, Azure cost assessments, reviewing our Azure security, implementing interfaces, and third-level Azure support.
What motivates and inspires you to be an active member of the Aviators/Microsoft community?
I have always written about what I do for three reasons: it clears my mind of stressful situations, it is a record of where I have been, and what I write might help someone else.
Looking back, what advice do you wish you would have been told earlier on that you would give to individuals looking to become involved in STEM/technology?
My advice to a younger self would be join or start a startup. Take some risks because you will learn from any failures.
What has helped you grow professionally?
I think presenting is a good way to learn because you don’t truly understand anything until you have to teach someone else about it.
Imagine you had a magic wand that could create a feature in Logic Apps. What would this feature be and why?
I would have a universal mapping tool that creates XSLT and liquid templates. (I know you are halfway there but would like it for consumption too).
Customer Corner:
Check out this customer success story about Microsoft helping KPMG Netherlands increase operation speed and create opportunity for future enhancements. With the use of Azure API Management and Logic Apps, KPMG Netherlands reduced integration time from days to nearly instantaneous. Read more in this article about how Azure Integration Services benefit KPMG Netherlands, leading to plans for expansion to other branches.
News from our product group:
Use Logic Apps to build intelligent OpenAI applications
Read more about leveraging Logic Apps and OpenAI to develop intelligent workflows and applications.
Azure Integration Services year in review: An exciting innovative 2023!
Hello 2024 – catch up and review Azure Integration Services’ remarkable year.
Business Process Tracking – Frequently Asked Questions (microsoft.com)
Have any questions about the Business Process Tracking Public Preview? Check out this video with answers for some questions from the community.
Utilize legacy Web Service code in Logic App Standard
In this article, read about how to automatically create local function app classes that can be called by Logic Apps.
Passing Complex Objects to .NET Framework Custom Code in Azure Logic Apps (Standard)
Read more on how to pass a complex object to Custom Code in this post by Kent.
Author API Management policies using Microsoft Copilot for Azure
Microsoft Copilot for Azure introduces policy authoring capabilities for Azure API Management. Read more on how to leverage AI assistance to seamlessly author, maintain, and understand API Management policies.
BizTalk Server 2020 Cumulative Update 5
Read up on the CU5 for BizTalk Server 2020, now available for download.
Optimizing Service Bus message processing concurrency using Logic Apps Stateless flow
In this article, read how to utilize host configuration for Service Bus trigger in Logic App Standard.
Https Trace Tool For Logic Apps Standard
Learn more about enhancing Logic Apps Debugging with the HttpTraceForLogicApps tool.
News from our community:
Post by Massimo Crippa
In this article by Massimo, learn how to approach the APIM migration in a practical sample use case.
Azure Logic Apps: securing HTTP Triggers with Microsoft Entra ID authentication
Post by Stefano Demiliani
Read more on how to ensure safer data processing and access control for Azure Logic Apps’ HTTP triggers by implementing Microsoft Entra ID authentication.
Should I use a Function App or Inline Functions?
Post by Mike Stephenson
In this video, Mike answers the question of Function App or Inline Functions with the new release of inline .NET functions for Logic App Standard.
Azure Function to Apply XSLT Transformations
Post by Sandro Pereira
In this article, read up on how to use the ApplyXSLTTransformation function by setting up an Azure Storage Account and a container to store the XSLT files.
A Quick Introduction to Azure SQL Trigger for Functions
Post by Sri Gunnala
Watch this quick introduction and step-by-step walk-through of setting up and configuring SQL Trigger Azure Functions.
Azure API Management | Logic App (Standard) Backend
Post by Andrew Wilson
In this post, Andrew covers a configurable and secure method to set up front-to-backend routing for a Logic App (Standard) workflow exposed as an API in API Management.
Lesson Learned #470: Resolving ‘EXECUTE Permission Denied’ Error on sp_send_dbmail in Azure SQL MI
We worked on a service request where our customer encountered the error message "Executed as user: user1. The EXECUTE permission was denied on the object 'sp_send_dbmail', database 'msdb', schema 'dbo'. [SQLSTATE 42000] (Error 229). The step failed." I would like to share how we resolved this specific error.
Understanding the Error
The error message explicitly points to a permission issue. The user (in this case, ‘user1’) does not have the necessary permission to execute the sp_send_dbmail stored procedure located in the msdb database. This procedure is essential for sending emails from Azure SQL Managed Instance, and lacking execute permissions will prevent the Database Mail feature from functioning correctly.
In this situation, we identified that user1 was not a member of the DatabaseMailUserRole role in the msdb database. Membership in this role is a prerequisite for using Database Mail.
USE msdb;
ALTER ROLE DatabaseMailUserRole ADD MEMBER [user1];
Once the role membership was granted, user1 was able to send emails successfully through Database Mail in Azure SQL Managed Instance.
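If you want to verify the membership afterward, a quick check against the standard catalog views in msdb lists the role’s members:
USE msdb;
SELECT r.name AS role_name, m.name AS member_name
FROM sys.database_role_members AS drm
JOIN sys.database_principals AS r ON drm.role_principal_id = r.principal_id
JOIN sys.database_principals AS m ON drm.member_principal_id = m.principal_id
WHERE r.name = 'DatabaseMailUserRole';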
Lesson Learned #469: Implementing a Linked Server Alternative with Azure SQL Database and C#
In scenarios where direct Linked Server connections are not feasible, such as between Azure SQL Database and an on-premises SQL Server, developers often seek alternative solutions. This blog post introduces a C# implementation that simulates the functionality of a Linked Server for data transfer between Azure SQL Database and SQL Server, providing a flexible and efficient way to exchange data.
Overview of the Solution
The proposed solution involves a C# class, ClsRead, designed to manage the data transfer process. The class connects to both the source (SQL Server) and the target (Azure SQL Database), retrieves data from the source, and inserts it into the target database.
Key Features
Connection Management: ClsRead maintains separate connection strings for the source and target databases, allowing for flexible connections to different SQL Server and Azure SQL Database instances.
Data Transfer Control: The class includes methods to execute a SQL query on the source database, retrieve the results into a DataTable, and then use SqlBulkCopy to efficiently insert the data into the target Azure SQL Database.
Error Handling: Robust error handling is implemented within each method, ensuring that any issues during the connection, data retrieval, or insertion processes are appropriately logged and can be managed or escalated.
Implementation Details
Class Properties
SourceConnectionString: Connection string to the source SQL Server.
TargetConnectionString: Connection string to the target Azure SQL Database.
SQLToExecuteFromSource: SQL query to be executed on the source database.
TargetTable: Name of the target table in Azure SQL Database where data will be inserted.
Methods
TransferData(): Coordinates the data transfer process, including validation of property values.
GetDataFromSource(): Executes the SQL query on the source database and retrieves the results.
InsertDataIntoAzureSql(DataTable TempData): Inserts the data into the target Azure SQL Database using SqlBulkCopy.
Error Handling
The methods include try..catch blocks to handle any exceptions, ensuring that errors are logged, and the process can be halted or adjusted as needed.
Usage Scenario
A typical use case involves setting up the ClsRead class with appropriate connection strings, specifying the SQL query and the target table, and then invoking TransferData(). This process can be used to synchronize data between different databases, migrate data, or consolidate data for reporting purposes.
For example, suppose our on-premises server has a table named PerformanceVarcharNVarchar, from which we need only the top 2,000 rows to compare against the PerformanceVarcharNVarchar table in our Azure SQL Database.
The first step is to create the temporary table (we could, of course, create a regular table instead):
DROP TABLE IF EXISTS [##__MyTable__];
CREATE TABLE [##__MyTable__] (ID INT PRIMARY KEY);
Once we have created the table, we call our ClsRead class with the following parameters:
static void Main(string[] args)
{
ClsRead oClsRead = new ClsRead();
oClsRead.SourceConnectionString = "Server=OnPremiseServer;User Id=userName;Password=Pwd1!;Initial Catalog=DbSource;Connection Timeout=30;Pooling=true;Max Pool size=100;Min Pool Size=1;ConnectRetryCount=3;ConnectRetryInterval=10;Application Name=ConnTest";
oClsRead.TargetConnectionString = "Server=tcp:servername.database.windows.net,1433;User Id=username1;Password=pwd2;Initial Catalog=DBName;Persist Security Info=False;MultipleActiveResultSets=False;Encrypt=True;TrustServerCertificate=False;Connection Timeout=30;Pooling=true;Max Pool size=100;Min Pool Size=1;ConnectRetryCount=3;ConnectRetryInterval=10;Application Name=ConnTest";
oClsRead.SQLToExecuteFromSource = "Select TOP 2000 ID from dbo.PerformanceVarcharNVarchar";
oClsRead.TargetTable = "[##__MyTable__]";
oClsRead.TransferData();
}
If everything has executed correctly, we can run queries like this one:
select * from [##__MyTable__] A
INNER JOIN PerformanceVarcharNVarchar B
ON A.ID = B.ID
Conclusion
While a direct Linked Server connection is not possible from Azure SQL Database, the ClsRead class provides a viable alternative with flexibility and robust error handling. This approach is particularly useful in cloud-based and hybrid environments where Azure SQL Database is used in conjunction with on-premise SQL Server instances.
using System;
using System.Collections.Generic;
using System.Data;
using System.Text;
using Microsoft.Data.SqlClient;
namespace LinkedServer
{
class ClsRead
{
// Connection strings, source query, and target table for the transfer
public string SourceConnectionString { get; set; } = "";
public string TargetConnectionString { get; set; } = "";
public string SQLToExecuteFromSource { get; set; } = "";
public string TargetTable { get; set; } = "";
// Default constructor
public ClsRead() { }
public void TransferData()
{
// Check that all properties are set
if (string.IsNullOrEmpty(SourceConnectionString) ||
string.IsNullOrEmpty(TargetConnectionString) ||
string.IsNullOrEmpty(SQLToExecuteFromSource) ||
string.IsNullOrEmpty(TargetTable))
{
throw new InvalidOperationException("All properties must be set.");
}
try
{
DataTable TempData = GetDataFromSource();
InsertDataIntoAzureSql(TempData);
}
catch (Exception ex)
{
// Handle the exception as necessary
Console.WriteLine("Error during data transfer: " + ex.Message);
// You can rethrow the exception or handle it according to your application’s needs
throw;
}
}
private DataTable GetDataFromSource()
{
DataTable dataTable = new DataTable();
try
{
using (SqlConnection connection = new SqlConnection(SourceConnectionString))
{
using (SqlCommand command = new SqlCommand(SQLToExecuteFromSource, connection))
{
connection.Open();
using (SqlDataReader reader = command.ExecuteReader())
{
dataTable.Load(reader);
}
}
}
}
catch (Exception ex)
{
// Handle the exception as necessary
Console.WriteLine("General Error: Obtaining data from source: " + ex.Message);
// You can rethrow the exception or handle it according to your application’s needs
throw;
}
return dataTable;
}
private void InsertDataIntoAzureSql(DataTable TempData)
{
try
{
using (SqlConnection connection = new SqlConnection(TargetConnectionString))
{
connection.Open();
using (SqlBulkCopy bulkCopy = new SqlBulkCopy(connection))
{
bulkCopy.DestinationTableName = TargetTable;
bulkCopy.BatchSize = 1000;
bulkCopy.BulkCopyTimeout = 50;
bulkCopy.WriteToServer(TempData);
}
}
}
catch (Exception ex)
{
// Handle the exception as necessary
Console.WriteLine("General Error: Saving data into target: " + ex.Message);
// You can rethrow the exception or handle it according to your application’s needs
throw;
}
}
}
}
Monthly news – January 2024
Microsoft Defender for Cloud
Monthly news
January 2024 Edition
This is our monthly “What’s new” blog post, summarizing product updates and various new assets we released over the past month. In this edition, we are looking at all the goodness from December 2023.
Microsoft Defender for Cloud
It is now possible to manage Defender for Servers on specific resources within your subscription, giving you full control over your protection strategy. With this capability, you can configure specific resources with custom configurations that differ from the settings configured at the subscription level.
Learn more about enabling Defender for Servers at the resource level.
The Coverage workbook allows you to keep track of which Defender for Cloud plans are active on which parts of your environments. This workbook can help you to ensure that your environments and subscriptions are fully protected. By having access to detailed coverage information, you can also identify any areas that might need other protection and take action to address those areas.
Learn more about the Coverage workbook.
As the landscape of DevOps continues to expand and confront increasingly sophisticated security threats, the need for proactive attack surface reduction measures has never been more critical. To enhance DevOps security and prevent attacks, Defender for Cloud, a Cloud Native Application Protection Platform (CNAPP), is enabling customers with new capabilities: DevOps Environment Posture Management, Code to Cloud Mapping for Service Principals, and new DevOps Attack Paths.
In this blog we dive deep into how these features represent a strategic shift towards a more integrated and holistic approach to cloud native application security throughout the entire development lifecycle.
The classic multicloud connector experience is retired and data is no longer streamed to connectors created through that mechanism. These classic connectors were used to connect AWS Security Hub and GCP Security Command Center recommendations to Defender for Cloud and onboard AWS EC2s to Defender for Servers.
The full value of these connectors has been replaced with the native multicloud security connectors experience, which has been Generally Available for AWS and GCP since March 2022 at no extra cost.
The new native connectors are included in your plan and offer an automated onboarding experience with options to onboard single accounts, multiple accounts (with Terraform), and organizational onboarding with auto provisioning for the following Defender plans: free foundational CSPM capabilities, Defender Cloud Security Posture Management (CSPM), Defender for Servers, Defender for SQL, and Defender for Containers.
Over the past three years, a notable shift has unfolded in the realm of cloud security. Increasingly, security vendors are introducing agentless scanning solutions to enhance the protection of their customers. These solutions empower users with visibility into their security posture and the ability to detect threats — all achieved without the need to install any additional software, commonly referred to as an agent, onto their workloads.
This transformative phase in cloud security, embracing the agentless approach, owes its development to the robust suite of management APIs offered by cloud service providers. In this blog post, our focus will center on the technical aspects of agentless scanning applicable to virtual machines operating in the cloud. Whether it be an Azure Virtual Machine, an AWS EC2 instance, or a Google Cloud Compute instance, for simplicity’s sake, we will term them as cloud-native virtual machines (VMs).
In this article we share the technical details of our agentless scanning platform.
PostgreSQL Flexible Server support in the Microsoft Defender for open-source relational databases plan is now generally available. Microsoft Defender for open-source relational databases provides advanced threat protection to PostgreSQL Flexible Servers, by detecting anomalous activities and generating security alerts.
Learn how to Enable Microsoft Defender for open-source relational databases.
Watch new episodes of the Defender for Cloud in the Field show to learn about the Agentless secret scanning for VMs, Native integration with ServiceNow, Defender for APIs General Availability and updates from Microsoft Ignite 2023.
Microsoft Defender for Cloud Labs have been updated and now include several new detailed step by step guidance on how to enable, configure and test the Defender for Cloud capabilities.
Discover how other organizations successfully use Microsoft Defender for Cloud to protect their cloud workloads. This month we are featuring Rabobank – a Dutch multinational banking and financial services company headquartered in Utrecht, Netherlands – that uses Microsoft security solutions, including Defender for Cloud, to secure their environment.
Join our experts in the upcoming webinars to learn what we are doing to secure your workloads running in Azure and other clouds.
Note: If you want to stay current with Defender for Cloud and receive updates in your inbox, please consider subscribing to our monthly newsletter: https://aka.ms/MDCNewsSubscribe
Empower Azure Video Indexer Insights with your own models
Overview
Azure Video Indexer (AVI) offers a comprehensive suite of models that extract diverse insights from the audio, transcript, and visuals of videos. Recognizing the boundless potential of AI models and the unique requirements of different domains, AVI now enables integration of custom models. This enhances video analysis, providing a seamless experience both in the user interface and through API integrations.
The Bring Your Own (BYO) capability streamlines this integration: users provide AVI with the API for calling their model, define the input via an Azure Function, and specify the integration type. Detailed instructions are available here.
To demonstrate this functionality, consider a specific example from the automotive industry: users with numerous car videos can now detect various car types more effectively. Utilizing AVI’s Object Detection insight, particularly the Car class, the system has been expanded to recognize two new sub-classes: Jeep and Family Car. This enhancement employs a model developed in Azure AI Vision Studio using Florence, based on a few-shot learning technique. Leveraging the foundational Florence vision model, this method enables training for new classes with a minimal set of examples, approximately 15 images per class.
The BYO capability in AVI allows users to efficiently and accurately generate new insights by building on and expanding existing insights such as object detection and tracking. Instead of starting from scratch, users can begin with a well-established list of cars that have already been detected and tracked throughout the video, each with a representative image. Users then need only a small number of requests to the new Florence-based model to differentiate between the cars according to their type.
Note: This article is accompanied by a step-by-step code-based tutorial. Please visit the official Azure Video Indexer “Bring Your Own” sample under the Video Indexer Samples GitHub repository.
High Level Design and Flow
To demonstrate how to build a customized AI pipeline, we will use the following flow, which leverages several key Video Indexer components and integrations:
1. Users employ their existing Azure Video Indexer account on Azure to index a video, either through the Azure Video Indexer Portal or the Azure Video Indexer API.
2. The Video Indexer account integrates with a Log Analytics workspace, enabling the publication of Audit and Events Data into a selected stream. For additional details on video index collection options, refer to: Monitor Azure Video Indexer | Microsoft Learn.
3. Indexing operation events (such as “Video Uploaded,” “Video Indexed,” and “Video Re-Indexed”) are streamed to Azure Event Hubs. Azure Event Hubs enhances the reliability and persistence of event processing and supports multiple consumers through “Consumer Groups.”
4. A dedicated Azure Function, created within the customer’s Azure Subscription, activates upon receiving events from the Event Hub. This function specifically waits for the “Indexing-Complete” event to process video frames based on criteria like object detection, cropped images, and insights. The compute layer then forwards selected frames to the custom model via the Cognitive Services Vision API and receives the classification results. In this example, it sends the cropped representative image for each tracked car in the video.
Note: The integration process involves strategic selection of video frames for analysis, leveraging AVI’s car detection and tracking capabilities, so that only representative cropped images of each tracked car are processed by the custom model.
5. The compute layer (Azure Function) then transmits the aggregated results from the custom model back to the Azure Video Indexer API to update the existing indexing data using the Update Video Index API call (see the sketch below).
6. The enriched insights are subsequently displayed on the Video Indexer Portal. The ID in the custom model matches the ID in the original insights JSON.
Figure 2: New Insight widget in AVI for the custom model results
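To make step 5 concrete, here is a minimal C# sketch of what the update call from the compute layer might look like. The endpoint shape follows the public Update Video Index API, but the class name, method, and parameters are illustrative; consult the official “Bring Your Own” sample for the exact flow.
using System.Net.Http;
using System.Text;
using System.Threading.Tasks;

class IndexUpdater
{
    private static readonly HttpClient Http = new HttpClient();

    // Pushes the locally modified index JSON back to Azure Video Indexer.
    public static async Task UpdateIndexAsync(
        string location, string accountId, string videoId,
        string accessToken, string updatedIndexJson)
    {
        var url = $"https://api.videoindexer.ai/{location}/Accounts/{accountId}" +
                  $"/Videos/{videoId}/Index?accessToken={accessToken}";
        var content = new StringContent(updatedIndexJson, Encoding.UTF8, "application/json");
        var response = await Http.PutAsync(url, content);
        response.EnsureSuccessStatusCode(); // throws if the update was rejected
    }
}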
Note: for a more in-depth, step-by-step tutorial accompanied by a code sample, please consult the official Azure Video Indexer GitHub sample under the “Bring-Your-Own” section.
Result Analysis
The result is a new insight displayed in the user interface, showing the output of the custom model. This application allowed for the detection of a new subclass of objects, enhancing the video with additional, user-specific insights. In the examples provided below, each car is distinctly classified: for instance, the white car is identified as a family car (Figure 3), whereas the red car is categorized as a jeep (Figure 4).
Figure 3: Azure Video Indexer with the new custom insight for the white car classified as family car.
Figure 4: Azure Video Indexer with the new custom insight for the red car classified as family jeep.
Conclusions
With only a handful of API calls to the bespoke model, the system effectively conducts a thorough analysis of every car featured in the video. This method, which involves the selective use of certain images for the custom model combined with insights from AVI, not only reduces expenses but also boosts overall efficiency. It delivers a holistic analysis tool to users, paving the way for endless customization and AI integration opportunities.
Check This Out! (CTO!) Guide (December 2023)
Hi everyone! Brandon Wilson here once again with this month’s “Check This Out!” (CTO!) guide.
These posts are only intended to be your guide, to lead you to some content of interest, and are just a way we are trying to help our readers a bit more, whether that is learning, troubleshooting, or just finding new content sources! We will give you a bit of a taste of the blog content itself, provide you a way to get to the source content directly, and help to introduce you to some other blogs you may not be aware of that you might find helpful.
From all of us on the Core Infrastructure and Security Tech Community blog team, thanks for your continued reading and support!
Title: Renew Certificate Authority Certificates on Windows Server Core. No Problem!
Source: Ask the Directory Services Team
Author: Robert Greene
Publication Date: 12/18/23
Content excerpt:
Today’s blog strives to clearly elucidate an administrative procedure that comes along more frequently with PKI Hierarchies being deployed to Windows Server Core operating systems.
Title: Keep your Azure optimization on the right track with Azure patterns and practices
Source: Azure Architecture
Author: Ben Brauer
Publication Date: 12/13/23
Content excerpt:
Businesses are at a pivotal juncture in their cloud migration journeys, as the question is no longer “Should we do this?”, but “What’s the best way to do this?” With questions of cost, reliability, and security looming over any migration plans, Microsoft is driven to fortify your organization for a successful transformation to Azure. That’s why we offer two complementary frameworks that together provide a comprehensive approach to cloud adoption and optimization. With best-practice guidance and checklists to keep your cloud modernization on track, our goal is to help your organization avoid costly mistakes and save time by leveraging proven strategies. The Microsoft Cloud Adoption Framework (CAF) and Well-Architected Framework (WAF) are resources that businesses can leverage to confidently transform their operations into being cloud-centric and build/manage cloud-hosted applications securely and cost-effectively. In this blog we’ll take you through the purpose of each framework and how you can start applying them to your cloud migration today.
Title: How to use Azure Front Door with Azure Kubernetes Service (Tips and Tricks)
Source: Azure Architecture
Author: Pranab Paul
Publication Date: 12/26/23
Content excerpt:
As its definition says – “Azure Front Door is a global, scalable, and secure entry point for fast delivery of your web applications. It offers dynamic site acceleration, SSL offloading, domain and certificate management, application firewall, and URL-based routing”. We can consider this as an Application Gateway at global scale with CDN profile thrown in to spice it up. AGIC or Application Gateway as Ingress Controller is already available and widely used. I received this question recently, asking whether Azure Front Door can be used in the same way. I didn’t have to reinvent the wheel as so many blog posts and YouTube videos are already there on this topic. In this article, I will only discuss different options to implement Azure Front Door with AKS and will add some critical tips you should be aware of.
Title: Public Preview Announcement: Azure VM Regional to Zonal Move
Source: Azure Compute
Author: Kaza Sriram
Publication Date: 12/12/23
Content excerpt:
We are excited to announce the public preview of single instance VM regional to zonal move, a new feature that allows you to move an existing VM in a regional configuration (deployed without any infrastructure redundancy) to a zonal configuration (deployed into specific Azure availability zone) within the same region. This feature announcement continues the momentum with our earlier announced VMSS Zonal expansion features and reinforces the Azure wide zonal strategy, that enables you to take advantage of higher availability with Azure availability zones and make them an integral part of your comprehensive business continuity and resiliency strategy.
This feature is intended for single instance VMs in regional configurations only and not for VMs already in availability zones, or VMs part of an availability set (AvSet) or Virtual Machine Scale Sets (VMSS).
Title: Interconnected guidance for an optimized cloud journey
Source: Azure Governance and Management
Author: Antonio Ortoll
Publication Date: 12/11/23
Content excerpt:
The cost of cloud computing can add up quickly, especially for businesses with a high volume of data, high traffic or mission-critical applications. As organizations increasingly put cloud capabilities to work, they are constantly looking for ways to trim costs and focus their cloud spend to align to the right business priorities. Cost optimization is key to making that happen. But how do you know when there are opportunities to optimize?
To make it easier for you to identify cost optimization opportunities during every step of your Azure journey, we provide resources, tools and guidance to help you evaluate your costs, identify efficiencies, and set you up for success. From building your business case to optimizing new workloads, you’ll find interconnected guidance and assessments designed to continually increase the value of your Azure investments and enable you to invest in projects that drive ongoing business growth and innovation. Whether you’re migrating to the cloud for the first time or already have Azure workloads in place, these cost management, governance and monitoring tools can help you visualize your costs and gain insights.
Let’s take a closer look at each of these tools and how you can use them to understand and forecast your bill, optimize workload costs, and control your spending.
Title: Azure Firewall: New Embedded Workbooks
Source: Azure Network Security
Author: Eliran Azulai
Publication Date: 12/4/23
Content excerpt:
After our previous announcement in August 2023, we want to delve deeper into the enhanced capabilities of the new embedded workbooks. Within Azure, Workbooks serve as a versatile canvas for conducting data analysis and generating visually compelling reports directly within the Azure portal. They empower users to access diverse data sources across Azure, amalgamating them into cohesive, interactive experiences. Workbooks enable the amalgamation of various visualizations and analyses, making them ideal for unrestricted exploration.
Notably, the Azure Firewall Portal has now incorporated embedded workbooks functionality, offering customers a seamless means to analyze Azure Firewall traffic. This feature facilitates the creation of sophisticated visual reports within the Azure portal, allowing users to leverage data from multiple Firewalls deployed across Azure and unify them into interactive, cohesive experiences.
Title: Azure Firewall’s Auto Learn SNAT Routes: A Guide to Dynamic Routing and SNAT Configuration
Source: Azure Network Security
Author: David Frazee
Publication Date: 12/21/23
Content excerpt:
Azure Firewall is a cloud-native network security service that protects your Azure virtual network resources. It is a fully stateful firewall as a service with built-in high availability and unrestricted cloud scalability. However, some Azure Firewall customers may face challenges when they need to configure non-RFC-1918 address spaces to not SNAT through the Azure Firewall. This can cause issues with routing, connectivity, and performance. To address this problem, Azure Firewall has introduced a new feature that allows customers to specify which address spaces should not be SNATed by the firewall. This feature can help customers reduce the overhead of managing custom routes and NAT rules and improve the efficiency and reliability of their network traffic. In this blog, we will explain how the feature works, what Azure Route Server is, and how to enable it. We will also provide a QuickStart guide and some examples to help you get started with this feature.
Title: Securely uploading blob files to Azure Storage from API Management
Source: Azure PaaS
Author: Una Chen
Publication Date: 12/26/23
Content excerpt:
This article will provide a demonstration on how to utilize either SAS token authentication or managed identity from API Management to make requests to Azure Storage. Furthermore, it will explore and compare the differences between these two options.
Title: The Twelve Days of Blog-mas: No.4 – Sync Cloud Groups from AAD/Entra ID back to Active Directory
Source: Core Infrastructure and Security
Author: Michael Hildebrand
Publication Date: 12/1/23
Content excerpt:
For a loooong time, you and I have been waiting for the ability to sync ‘cloud-born-and-managed’ security groups (and their memberships) back into on-premises AD. This takes us further on our journey of moving “the management plane” from on-prem AD to the cloud – and provides you the ability to create/manage groups in the cloud to manage resource access in Active Directory.
Title: The Twelve Days of Blog-mas: No.5 – The Endpoint Management Jigsaw
Source: Core Infrastructure and Security
Author: Michael Hildebrand
Publication Date: 12/5/23
Content excerpt:
Most orgs (hopefully) have a well-developed ‘practice’ around Endpoint management, combining people, process and technology to deploy, configure, operate and support a fleet of devices that adhere to corporate policy. This has been a main-stay of endpoint IT Pros for decades.
As IT Pros, whether we like it or not, we’re continually expanding our knowledge and skills to account for the ever-growing scope that we’re accountable for and the winds of change in technology. The cloud, mobile devices, BYO, VDI and other flavors of endpoints – as well as a global pandemic – have all pushed or pulled (or dragged) us to where we are “today.”
Title: Switch to the New Defender for Resource Manager Pricing Plan
Source: Core Infrastructure and Security
Author: Felipe Binotto
Publication Date: 12/5/23
Content excerpt:
In case you missed it, a new pricing plan has been announced for Microsoft Defender for Resource Manager.
The legacy pricing plan (per-API call) is priced at $4 per 1M API Calls, which can become a bit expensive if there is a lot going on in your subscriptions.
The new pricing plan (per-subscription) is priced at $5 per subscription per month.
We have made available a workbook which provides a cost estimation for all the Defender plans across all your subscriptions.
Title: The Twelve Days of Blog-mas: No. 6 – The Reporting Edition – Microsoft Community Hub
Source: Core Infrastructure and Security
Author: Michael Hildebrand
Publication Date: 12/6/23
Content excerpt:
Good morning, Internet! At first glance, this post may appear a weeee bit thin … but sometimes, less is more. Who doesn’t need/want more reporting/visualizations and tracking of what’s going on within an environment?
I think it’s safe to say that when it comes to “Reporting,” it often feels like less actually is ‘less’ (and sometimes, ‘more’ is even less ‘less,’ or ‘more less?’ How should one say that?). Reporting is never ‘enough’ or ‘done’ but we steadily expand and improve that aspect of our services – and we’re constantly doing more.
Title: The Twelve Days of Blog-mas: No. 7 – Architecture Visuals – for Your Reference or Your Own Docs
Source: Core Infrastructure and Security
Author: Michael Hildebrand
Publication Date: 12/7/23
Content excerpt:
A softball for #7 … enjoy!
Title: The Twelve Days of Blog-mas: No. 8 – The Evolution of Windows Server Management
Source: Core Infrastructure and Security
Author: Michael Hildebrand
Publication Date: 12/8/23
Content excerpt:
As was discussed previously, our Endpoint Management modernization story is compelling. The server team overheard that good news and is curious – but the Server Management discipline is quite different than Endpoint management.
Server teams manage/operate systems that are usually locked away in datacenters – either their own and/or a cloud provider. They’re usually not exposed to physical loss or theft, nor people shoulder-surfing at a coffee shop. They’re usually only accessible via remote management capabilities. They usually have much more stringent change control and update processes – and often extreme business sensitivity to reboots (especially unplanned, but planned ones, too).
So, what is our Server Management story then, circa ‘Holidays 2023?’
Well, I’m glad you asked – and I get this question a lot these days.
Title: Introduction to Network Trace Analysis 4: DNS (it’s always DNS)
Source: Core Infrastructure and Security
Author: Will Aftring
Publication Date: 12/11/23
Content excerpt:
Howdy everyone! I’m back to talk about one of my favorite causes of heartache, the domain name system (DNS). This will be our first foray into an application layer protocol. The concept of DNS is simple enough, but it can lead to some confusing situations if you don’t keep its function in mind. No time to waste, let’s get going!
Title: The Twelve Days of Blog-mas: No.9 – It’s a Multi-Tenant and Cross-Platform World: Part I
Source: Core Infrastructure and Security
Author: Michael Hildebrand
Publication Date: 12/12/23
Content excerpt:
Greetings! Before the cloud, when on-prem Active Directory was the hub of many enterprise architectures, business needs often drove the requirement to expand single-domain AD forests into multi-domain AD forests. Even in the NT days, one might have ‘Account Domains’ and ‘Resource Domains’ – connected via one-ways trusts. As was often the case, multiple existing NT 4.0 domains were ‘upgraded’ into a single AD forest, as additional domains. These days, a single-domain AD Forest is pretty rare for main-stream use.
Title: The Twelve Days of Blog-mas: No.10 – It’s a Multi-Tenant and Cross-Platform World: Part II
Source: Core Infrastructure and Security
Author: Michael Hildebrand
Publication Date: 12/13/23
Content excerpt:
In Part I of this mini-series, I discussed some of the new hotness around multi-tenant capabilities in our Entra ID space. In Part II, I’ll cover cross-platform support across several of our cloud services. The cloud era ushered in mainstream cross-platform support from many Microsoft services. Like the title of this post says, anymore, it’s a cross-platform world.
Title: The Twelve Days of Blog-mas: No.11 – The Kitchen Sink
Source: Core Infrastructure and Security
Author: Michael Hildebrand
Publication Date: 12/14/23
Content excerpt:
I am running out of days for my “Twelve Days” timeframe, so I’m dropping a pile of topics here that I feel are important/helpful but less-known.
Apologies in advance for the brevity and link-breadcrumbs.
Title: The Twelve Days of Blog-mas: No.12 – Copilot(s) – Your AI Assistant(s)
Source: Core Infrastructure and Security
Author: Michael Hildebrand
Publication Date: 12/15/23
Content excerpt:
Now, you didn’t really think I would go for 12 without one about Copilot, did you?
Our AI/ML efforts have been on-going for a long time, but very recently, they’ve gone mainstream -and SUCH a cool logo/icon. Be aware, though, for now, this space changes frequently, varies by region/market and software version (Windows, Office apps, Edge, etc.). Docs, product names, major and minor functionality are all moving very fast. Do your brain a favor and make some peace with that – but then, jump into the pool!
Title: Designing Cloud Architecture: Creating Professional Azure Diagrams with PowerPoint
Source: Core Infrastructure and Security
Author: Werner Rall
Publication Date: 12/17/23
Content excerpt:
In the fast-evolving landscape of cloud computing, the ability to visually represent complex architectures is not just a skill but a necessity. Among the myriad of tools and platforms, Microsoft Azure stands as a titan, offering a vast array of services that cater to diverse computing needs. However, the true challenge lies in effectively communicating the structure and functionality of Azure-based solutions. This is where the power of visualization comes into play, and surprisingly, a tool as familiar as PowerPoint emerges as an unlikely ally.
Title: Windows 365 deployment checklist
Source: FastTrack
Author: Josh Gutierrez
Publication Date: 12/22/23
Content excerpt:
We’re excited to announce that we’ve just released an updated Windows 365 deployment checklist in the Microsoft 365 admin center (MAC).
Title: Known Issue: Some management settings become permanent on Android 14
Source: Intune Customer Success
Author: Intune Support Team
Publication Date: 12/18/23
Content excerpt:
Google recently identified two issues in Android 14 that make some management policies permanent on non-Samsung devices. When a device is upgraded from Android 13 to Android 14, certain settings are made permanent on the device. Additionally, when devices that have been upgraded to Android 14 are rebooted, other settings are made permanent on the device.
Title: Transforming the iOS/iPadOS ADE experience in Microsoft Intune – Microsoft Community Hub
Source: Intune Customer Success
Author: Intune Support Team
Publication Date: 12/19/23
Content excerpt:
In July of 2021, we announced that Running the Company Portal in Single App Mode until authentication is not a supported flow by Apple for iOS/iPadOS automated device enrollment (ADE). Since then, we’ve been hard at work to improve the ADE experience through the release of Setup Assistant with modern authentication, Just in Time (JIT) registration and compliance remediation, and the “Await until configuration” setting.
Title: Wired for Hybrid – What’s New in Azure Networking December 2023 edition
Source: ITOps Talk
Author: Pierre Roman
Publication Date: 12/20/23
Content excerpt:
Azure Networking is the foundation of your infrastructure in Azure. Each month we bring you an update on What’s new in Azure Networking.
In this blog post, we’ll cover what’s new with Azure Networking in December 2023, including the following announcements and how they can help you.
Enjoy!
Title: Deploy secret-less Conditional Access policies with Microsoft Entra ID Workload Identity Federation
Source: Microsoft Entra (Azure AD)
Author: Claus Jespersen
Publication Date: 12/4/23
Content excerpt:
Many customers face challenges in managing their Conditional Access (CA) policies. Over time, they accumulate more and more policies that are created ad-hoc to solve specific business scenarios, resulting in a loss of overview and increased troubleshooting efforts. Microsoft has provided guidance on how to structure your Conditional Access policies in a way that follows the Zero Trust principles, using a persona-based approach. The guidance includes a set of Conditional Access policies that can serve as a starting point. These CA policies can be automated from a CI/CD pipeline using various tools. One such tool is Microsoft365DSC, an open-source tool developed by members of the Microsoft Graph Product Group, who are still actively involved in its maintenance.
Title: Enhancements to Microsoft Entra certificate-based authentication
Source: Microsoft Entra (Azure AD)
Author: Alex Weinert; Vimala Ranganathan
Publication Date: 12/13/23
Content excerpt:
At Ignite 2022, we announced the general availability of Microsoft Entra certificate-based authentication (CBA) as part of Microsoft’s commitment to Executive Order 14028, Improving the Nation’s Cybersecurity. Based on our experience working with government customers, PIV/CAC cards are the most common authentication method used within the federal government. While valuable for all customers, the ability to use X.509 certificates for authentication directly against Entra ID is particularly critical for federal government organizations using PIV/CAC cards and looking to easily comply with the Executive Order 14028 requirements, as well as for customers who want to migrate from a federation server like Active Directory Federation Services (AD FS) to Entra ID for CBA.
Since then, we’ve added many new features and enhancements, which made CBA available on all platforms, including mobile, with support for certificates on devices as well as external security keys like YubiKeys. Customers now have more control and flexibility to tailor authentication policies by certificate and resource type, as well as user group and select certificate strength for different users, use CBA with other methods for multi-factor or step-up authentication, and set high affinity (strong) binding for either the entire tenant or by user group.
Vimala Ranganathan, Product Manager on Microsoft Entra, will now talk about how these new features will help in your journey toward phishing-resistant MFA.
Title: Introducing New Features of Microsoft Entra Permissions Management
Source: Microsoft Entra (Azure AD)
Author: Joseph Dadzie
Publication Date: 12/14/23
Content excerpt:
Microsoft Entra Permissions Management is a Cloud Infrastructure Entitlement Management (CIEM) solution that helps organizations manage the permissions of any identity across organizations’ multicloud infrastructure. With Permissions Management, organizations can assess, manage, and monitor identities and their permissions continuously and right-size them based on past activity.
Today, we’re thrilled to unveil the details of our Ignite announcement and introduce new features and APIs for Permissions Management, enhancing your overall permissions management experience.
Title: Advancing Cybersecurity: The Latest enhancement in Phishing-Resistant Authentication
Source: Microsoft Entra (Azure AD)
Author: Alex Weinert
Publication Date: 12/15/23
Content excerpt:
Today, I’m excited to share with you several new developments in the journey towards phishing-resistant authentication for all users! This isn’t just essential for compliance with Executive Order 14028 on Improving the Nation’s Cybersecurity but is increasingly critical for the safety of all the orgs and users who bet on digital identity.
Title: Strengthening identity protection in the face of highly sophisticated attacks
Source: Security, Compliance, and Identity
Author: Alex Weinert
Publication Date: 12/12/23
Content excerpt:
When it comes to security at Microsoft, we’re customer zero as our Chief Security Advisor and CVP Bret Arsenault often emphasizes. That means we think a lot about how we build security into everything we do—not only for our customers—but for ourselves. We continuously work to improve the built-in security of our products and platforms. With the unparalleled breadth of our digital landscape and the integral role we play in our customers’ businesses, we feel a unique responsibility to take a leadership role in securing the future for our customers, ourselves, and our community.
To that end, on November 2nd, 2023, we launched the Secure Future Initiative (SFI). It’s a multi-year commitment to advance the way we design, build, test, and operate our technology to ensure we deliver solutions that meet the highest possible standards of security.
Title: A new, modern, and secure print experience from Windows
Source: Security, Compliance, and Identity
Author: Johnathan Norman
Publication Date: 12/13/23
Content excerpt:
Over the past year, the MORSE team has been working in collaboration with the Windows Print team to modernize the Windows Print System. This new design represents one of the largest changes to the Windows Print stack in more than 20 years. The goal was to build a more modern and secure print system that maximizes compatibility and puts users first. We are calling this new platform Windows Protected Print Mode (WPP). We believe users should be Secure-by-Default which is why WPP will eventually be on by default in Windows.
Title: Plan for Windows 10 EOS with Windows 11, Windows 365, and ESU
Source: Windows IT Pro
Author: Jason Leznek
Publication Date: 12/5/23
Content excerpt:
Windows 10 will reach end of support (EOS) on October 14, 2025. While two years may seem like a long runway, ensuring a modernized infrastructure will help keep your organization productive and its data secure. We’re encouraged to see organizations realizing the benefits of Windows 11 by upgrading eligible devices to Windows 11 well ahead of the EOS date. Consider joining organizations like Westpac who recently leveraged Microsoft Intune, Windows Autopatch, and App Assure to efficiently move 40,000 employees to Windows 11, while also incorporating new Windows 11 devices as part of a regular hardware refresh cycle.
In this post, learn about the various options you have to smoothly transition to Windows 11, including extended protection for those needing more time.
Title: Upcoming changes to Windows Single Sign-On
Source: Windows IT Pro
Author: Adam Steenwyk
Publication Date: 12/14/23
Content excerpt:
Microsoft has been working to ensure compliance with the Digital Markets Act (DMA) in the European Economic Area (EEA). As part of this ongoing commitment to provide your organization with solutions that comply with global regulations like the DMA, we will be changing the ways Windows works. Signing in to apps on Windows is one area where we will be making such changes.
Title: Skilling snack: Network security basics for endpoints
Source: Windows IT Pro
Author: Clay Taylor
Publication Date: 12/14/23
Content excerpt:
Why is network security important? In the chip-to-cloud environment, every component adds a layer of protection. It’s the Zero Trust approach to Windows security. We’ve already covered the basics of endpoint, identity, and data security in Skilling snack: Windows security fundamentals. You can also dig into another layer with Skilling snack: Windows application security. Today, let’s bake in a high-level overview of network security capabilities and options.
Previous CTO! Guides:
CIS Tech Community-Check This Out! (CTO!) Guides
Additional resources:
Azure documentation
Azure pricing calculator (VERY handy!)
Microsoft Azure Well-Architected Framework
Microsoft Cloud Adoption Framework
Windows Server documentation
Windows client documentation for IT Pros
PowerShell documentation
Core Infrastructure and Security blog
Microsoft Tech Community blogs
Microsoft technical documentation (Microsoft Docs)
Sysinternals blog
Microsoft Learn
Microsoft Support (Knowledge Base)
Microsoft Archived Content (MSDN/TechNet blogs, MSDN Magazine, MSDN Newsletter, TechNet Newsletter)
Microsoft Tech Community – Latest Blogs
Domain and certificate bindings for IDN hostnames in Azure App Service
Overview
When it comes to website security, one important step is to add a custom domain and connect it with a TLS/SSL certificate. This not only enhances the trust and safety of your website but also ensures that your visitors’ information is encrypted and protected. Azure App Service provides TLS bindings for the most common custom domains. This blog discusses the special domain and certificate binding situations in Azure App Service for IDN hostnames.
What is an IDN hostname?
An IDN hostname is a domain name that includes characters used in the local representation of languages not written with the basic Latin alphabet “a-z”. These characters can be Arabic, Hebrew, Chinese, Cyrillic, Tamil, Hindi, and more.
What is Punycode?
Punycode is an ASCII-compatible encoding (defined in RFC 3492) that represents the Unicode characters of an IDN label using only letters, digits, and hyphens. The encoded label is prefixed with “xn--”; for example, “bücher.example” becomes “xn--bcher-kva.example”. It is this Punycode form that is actually stored in DNS and used in hostname bindings.
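As a quick illustration (a minimal sketch; the hostname is just an example), .NET’s System.Globalization.IdnMapping class converts between the two forms. The resulting xn-- form is what you would pass to the binding commands discussed in the next section:

using System;
using System.Globalization;

class PunycodeDemo
{
    static void Main()
    {
        var idn = new IdnMapping();

        // Unicode (IDN) form -> ASCII/Punycode form used by DNS
        string ascii = idn.GetAscii("bücher.example");
        Console.WriteLine(ascii); // xn--bcher-kva.example

        // ...and back to the Unicode form
        Console.WriteLine(idn.GetUnicode(ascii)); // bücher.example
    }
}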
Domain bindings for IDN hostnames in Azure App Service
There are several common ways to bind a domain to an Azure App Service app, such as the Azure portal, Azure CLI/PowerShell, and ARM templates. Adding IDN hostnames through the portal is not yet supported; the portal only accepts domains made up of alphanumeric characters (A-Z, a-z, 0-9), periods (‘.’), dashes (‘-‘), and asterisks (‘*’).
For domain bindings with Azure CLI, PowerShell, or an ARM template, this validation can currently be bypassed, and the Punycode form of the hostname can be added successfully. For reference, see this blog: Create and bind the custom domain contains special Unicode character in App Service Using Azure CLI – Microsoft Community Hub
Certificate validations for IDN hostnames in Azure App Service
To enable secure communication between App Service and the client, a TLS/SSL certificate is necessary. There are two types of certificates to secure your domain: wildcard certificates and standard certificates. A wildcard certificate secures multiple subdomains under a single domain, while a standard certificate is specific to a single domain. In most cases, a wildcard certificate is used to secure the various subdomains, as it is easier to manage.
When binding a certificate to an IDN hostname, however, a wildcard certificate is not recommended, as it runs into unexpected errors.
(Error message and screenshots omitted: the binding fails with a certificate/hostname mismatch error, and the same error is returned from the PowerShell command line.)
Workaround:
The backend splits the wildcard certificate during validation, which is why the mismatch error occurs. The quickest workaround for now is to request a standard certificate specific to the hostname.
Summary
Overall, Azure App Service does let you configure domain bindings for IDN hostnames: you can associate an IDN hostname with your Azure App Service app from the command line. Additionally, you can manage the certificate bindings for these domains with standard certificates, ensuring security.
Microsoft Tech Community – Latest Blogs
Lesson Learned #469:Implementing a Linked Server Alternative with Azure SQL Database and C#
In scenarios where direct Linked Server connections are not feasible, such as between Azure SQL Database and an on-premises SQL Server, developers often seek alternative solutions. This blog post introduces a C# implementation that simulates the functionality of a Linked Server for data transfer between Azure SQL Database and SQL Server, providing a flexible and efficient way to exchange data.
Overview of the Solution
The proposed solution involves a C# class, ClsRead, designed to manage the data transfer process. The class connects to both the source (SQL Server) and the target (Azure SQL Database), retrieves data from the source, and inserts it into the target database.
Key Features
Connection Management: ClsRead maintains separate connection strings for the source and target databases, allowing for flexible connections to different SQL Server and Azure SQL Database instances.
Data Transfer Control: The class includes methods to execute a SQL query on the source database, retrieve the results into a DataTable, and then use SqlBulkCopy to efficiently insert the data into the target Azure SQL Database.
Error Handling: Robust error handling is implemented within each method, ensuring that any issues during the connection, data retrieval, or insertion processes are appropriately logged and can be managed or escalated.
Implementation Details
Class Properties
SourceConnectionString: Connection string to the source SQL Server.
TargetConnectionString: Connection string to the target Azure SQL Database.
SQLToExecuteFromSource: SQL query to be executed on the source database.
TargetTable: Name of the target table in Azure SQL Database where data will be inserted.
Methods
TransferData(): Coordinates the data transfer process, including validation of property values.
GetDataFromSource(): Executes the SQL query on the source database and retrieves the results.
InsertDataIntoAzureSql(DataTable TempData): Inserts the data into the target Azure SQL Database using SqlBulkCopy.
Error Handling
The methods include try..catch blocks to handle any exceptions, ensuring that errors are logged, and the process can be halted or adjusted as needed.
Usage Scenario
A typical use case involves setting up the ClsRead class with appropriate connection strings, specifying the SQL query and the target table, and then invoking TransferData(). This process can be used to synchronize data between different databases, migrate data, or consolidate data for reporting purposes.
For example, suppose our on-premises server has the table PerformanceVarcharNVarchar, from which we only need the top 2,000 rows to compare against the table PerformanceVarcharNVarchar in our Azure SQL Database.
The first thing we are going to do is create a global temporary table (it only lives while the creating session stays open); of course, we could create a regular table instead.
DROP TABLE IF EXISTS [##__MyTable__]
CREATE Table [##__MyTable__] (ID INT Primary Key)
Once we have created the table we are going to call our ClsRead with the following parameters:
static void Main(string[] args)
{
ClsRead oClsRead = new ClsRead();
oClsRead.SourceConnectionString = "Server=OnPremiseServer;User Id=userName;Password=Pwd1!;Initial Catalog=DbSource;Connection Timeout=30;Pooling=true;Max Pool size=100;Min Pool Size=1;ConnectRetryCount=3;ConnectRetryInterval=10;Application Name=ConnTest";
oClsRead.TargetConnectionString = "Server=tcp:servername.database.windows.net,1433;User Id=username1;Password=pwd2;Initial Catalog=DBName;Persist Security Info=False;MultipleActiveResultSets=False;Encrypt=True;TrustServerCertificate=False;Connection Timeout=30;Pooling=true;Max Pool size=100;Min Pool Size=1;ConnectRetryCount=3;ConnectRetryInterval=10;Application Name=ConnTest";
oClsRead.SQLToExecuteFromSource = "Select TOP 2000 ID from dbo.PerformanceVarcharNVarchar";
oClsRead.TargetTable = "[##__MyTable__]";
oClsRead.TransferData();
}
If everything executed correctly, we can run queries like this one:
select * from [##__MyTable__] A
INNER JOIN PerformanceVarcharNVarchar B
ON A.ID = B.ID
Conclusion
While a direct Linked Server connection is not possible from Azure SQL Database, the ClsRead class provides a viable alternative with flexibility and robust error handling. This approach is particularly useful in cloud-based and hybrid environments where Azure SQL Database is used in conjunction with on-premise SQL Server instances.
using System;
using System.Collections.Generic;
using System.Data;
using System.Text;
using Microsoft.Data.SqlClient;
namespace LinkedServer
{
class ClsRead
{
public string SourceConnectionString { get; set; } = "";
public string TargetConnectionString { get; set; } = "";
public string SQLToExecuteFromSource { get; set; } = "";
public string TargetTable { get; set; } = "";
// Default constructor
public ClsRead() { }
public void TransferData()
{
// Check that all properties are set
if (string.IsNullOrEmpty(SourceConnectionString) ||
string.IsNullOrEmpty(TargetConnectionString) ||
string.IsNullOrEmpty(SQLToExecuteFromSource) ||
string.IsNullOrEmpty(TargetTable))
{
throw new InvalidOperationException("All properties must be set.");
}
try
{
DataTable TempData = GetDataFromSource();
InsertDataIntoAzureSql(TempData);
}
catch (Exception ex)
{
// Handle the exception as necessary
Console.WriteLine("Error during data transfer: " + ex.Message);
// You can rethrow the exception or handle it according to your application’s needs
throw;
}
}
private DataTable GetDataFromSource()
{
DataTable dataTable = new DataTable();
try
{
using (SqlConnection connection = new SqlConnection(SourceConnectionString))
{
using (SqlCommand command = new SqlCommand(SQLToExecuteFromSource, connection))
{
connection.Open();
using (SqlDataReader reader = command.ExecuteReader())
{
dataTable.Load(reader);
}
}
}
}
catch (Exception ex)
{
// Handle the exception as necessary
Console.WriteLine("General Error: obtaining data from source. " + ex.Message);
// You can rethrow the exception or handle it according to your application’s needs
throw;
}
return dataTable;
}
private void InsertDataIntoAzureSql(DataTable TempData)
{
try
{
using (SqlConnection connection = new SqlConnection(TargetConnectionString))
{
connection.Open();
using (SqlBulkCopy bulkCopy = new SqlBulkCopy(connection))
{
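// Note: SqlBulkCopy maps columns by ordinal position by default.
// If the source query and the target table schemas differ, add explicit
// bulkCopy.ColumnMappings entries before calling WriteToServer.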
bulkCopy.DestinationTableName = TargetTable;
bulkCopy.BatchSize = 1000;
bulkCopy.BulkCopyTimeout = 50;
bulkCopy.WriteToServer(TempData);
}
}
}
catch (Exception ex)
{
// Handle the exception as necessary
Console.WriteLine("General Error: saving data into target. " + ex.Message);
// You can rethrow the exception or handle it according to your application’s needs
throw;
}
}
}
}
Microsoft Tech Community – Latest Blogs
Decoding the Dynamics: Dapr vs. Service Meshes
Dapr and Service Meshes are increasingly the usual suspects in cloud-native architectures. However, I’ve noticed that there is still some confusion about their respective purposes, especially because of some overlapping features. People sometimes wonder how to choose between Dapr and a Service Mesh, or even whether both should be enabled at the same time.
The purpose of this post is to highlight the differences, especially in the way they handle mTLS, as well as the impact on the application code itself. You can already find a summary of how Dapr and Service Meshes differ on the Dapr web site, but the explanations are not deep enough to really understand the differences. This blog post is an attempt to dive deeper and give you a real clue about what’s going on behind the scenes. Let me first start with what Dapr and Service Meshes have in common.
Things that Dapr and Service Meshes have in common
Secure service-to-service communication with mTLS encryption
Service-to-service metric collection
Service-to-service distributed tracing
Resiliency through retries
Yes, this is the exact same list as the one documented on the Dapr web site! However, I will later focus on the mTLS bits, because you might think these are equivalent, overlapping features, but the way Dapr and Service Meshes enforce mTLS is not the same. I’ll show some concrete examples with Dapr and the Linkerd Service Mesh to illustrate the use cases.
On top of the above list, I’d add:
They both leverage the sidecar pattern, although the Istio Service Mesh is exploring Ambient Mesh, which is sidecar-free; the sidecar approach is still mainstream today. Here again, the role of the sidecars and what happens during the injection is completely different between Dapr and Service Meshes.
They both allow you to define fine-grained authorization policies
They both help deal with distributed architectures
Before diving into the meat of it, let us see how they totally differ.
Differences between Dapr and Service Meshes
Applications are Mesh-agnostic, while they must explicitly be Dapr-aware to leverage the Dapr capabilities. Dapr infuses the application code. Being Dapr-aware does not mean that you must use a specific SDK: every programming language that has an HTTP and/or gRPC client can benefit from the great Dapr features. However, the application must comply with some Dapr prerequisites, as it must expose an API so that Dapr can initialize its app channel.
Meshes can deal with both layer-4 (TCP) and layer-7 traffic, while Dapr focuses on layer-7 protocols only, such as HTTP, gRPC, AMQP, etc.
Meshes serve infrastructure purposes while Dapr serves application purposes
Meshes typically have smart load balancing algorithms
Meshes typically let you define dynamic routes across multiple versions of a given web site/API
Some meshes ship with extra OAuth validation features
Some meshes let you stress your applications through Chaos Engineering techniques, by injecting faults, artificial latency, etc.
Meshes typically come with a steep learning curve, while Dapr is much smoother to learn; in fact, Dapr eases the development of distributed architectures.
Dapr provides true service discovery; meshes do not.
Dapr is designed from the ground up to deal with distributed and microservice architectures, while meshes can help with any architecture style, but prove to be a good ally for microservices.
Demo material
I will reuse one demo app that I developed 4 years ago (time flies), which is a Linkerd Calculator. The below figure illustrates it:
It shows a few services talking to each other: MathFanBoy, a console app, randomly calls the arithmetic operations, while the percentage operation itself calls multiplication and division. The goal of this app was to generate traffic and show how Linkerd helps us see, in near real time, what’s going on. I also purposely introduced exceptions by performing divisions by zero, to demo how Linkerd (or any other mesh) helps spot errors. Feel free to clone the repo and try it out on your end if you want to test what is described later in this post. I have now created the exact same app using Dapr, which is available here. Let us now dive into the technical details.
Diving into the technical differences
Invisible to the application code vs code awareness
As stated earlier, an application is agnostic to the fact that it is injected or not by a Service Mesh. If you look at the application code of the Linkerd Calculator, you won’t find anything related to Linkerd. The magic happens at deployment time where we annotate our K8s deployment to make sure the application gets injected by the Mesh. On the other hand, the application code of the Dapr calculator is directly impacted in multiple ways:
– While I could use a mere .NET console app for the Linkerd calculator, I had to turn MathFanBoy into a web host to comply with the Dapr app initialization channel. However, because MathFanBoy generates activity by calling random operations, I could not simply turn it into an API, so I had to run different tasks in parallel. Here are the most important bits:
class Program
{
static string[] endpoints = null;
static string[] apis = new string[5] { "addition", "division", "multiplication", "substraction", "percentage" };
static string[] operations = new string[5] { "addition/add", "division/divide", "multiplication/multiply", "substraction/substract", "percentage/percentage" };
static async Task Main(string[] args)
{
var host = CreateHostBuilder(args).Build();
var runHostTask = host.RunAsync();
var loopTask = Task.Run(async () =>
{
while (true)
{
var pos = new Random().Next(0, 5);
using var client = new DaprClientBuilder().Build();
var operation = new Operation { op1 = 10, op2 = 2 };
try
{
var response = await client.InvokeMethodAsync<object, object>(
apis[pos], // The name of the Dapr application
operations[pos], // The method to invoke
operation); // The request payload
Console.WriteLine(response);
}
catch(Exception ex) {
Console.WriteLine(ex.ToString());
}
await Task.Delay(5000);
}
});
await Task.WhenAll(runHostTask, loopTask);
}
public static IHostBuilder CreateHostBuilder(string[] args) =>
Host.CreateDefaultBuilder(args)
.ConfigureWebHostDefaults(webBuilder =>
{
webBuilder.UseStartup<Startup>();
});
}
The CreateHostBuilder(args).Build() and RunAsync() calls create and start the web host. The loop task then generates random calls to the operations, and here again we have another difference, as the application uses the Dapr client’s InvokeMethodAsync to perform the calls. As you might have noticed, the application does not need to know the URLs of these services: Dapr discovers where the services are located, thanks to its service discovery feature. The only thing we need to provide is the app ID and the operation that we want to call. With the Linkerd calculator, I had to know the endpoints of the target services, so they were injected through environment variables during the deployment. The same principles apply to the percentage operation, which is a true API. I had to inject the Dapr client through dependency injection:
public void ConfigureServices(IServiceCollection services)
{
services.AddControllers().AddDapr();
}
in order to get an instance through the controller’s constructor:
public PercentageController(ILogger<PercentageController> logger, DaprClient dapr)
{
_logger = logger;
_dapr = dapr;
}
and use that instance to call the division and multiplication operations from within another controller operation, again using the invoke method as in MathFanBoy. As you can see, the application code explicitly uses Dapr and must comply with some Dapr requirements. Dapr has many features other than service discovery, but I’ll stick to that one, since the point is made: a Dapr-injected application must be Dapr-aware, while it is completely agnostic of a Service Mesh.
mTLS
Now things get a bit more complicated. While both Service Meshes and Dapr implement mTLS, as well as fine-grained authorization policies based on the client certificate presented by the caller to the callee, the level of protection of Dapr-injected services is not quite the same as that of Mesh-injected services.
Roughly, you might think that you end up with something like this:
A very comparable way of working between Dapr and Linkerd. This is correct, but only to some extent. If we take the happy path, meaning every pod is injected by Linkerd or Dapr, we end up in the above situation. However, in a K8s cluster, not every pod is injected by Dapr or Linkerd. The typical reason why you enable mTLS is to make sure injected services are protected from the outside world, by which I mean anything that is neither Dapr-injected nor Mesh-injected. With Dapr, however, nothing prevents the following situation:
The blue path takes the Dapr route and is both encrypted and authenticated using mTLS. However, the green paths, from both a Dapr-injected pod and a non-Dapr pod, still go through in plain text and anonymously. How is that possible?
For the blue path, the application goes through the Dapr route (http://localhost:3500/, the port the daprd sidecar listens on by default). In that case, the sidecar finds the location of the target and talks to the target service’s sidecar. However, because Dapr does not intercept network calls, nothing prevents you from taking a direct route, from either a Dapr-injected pod or a non-Dapr one (the green paths). So you might end up in a situation where you enforce a strict authorization policy, as shown below:
apiVersion: dapr.io/v1alpha1
kind: Configuration
metadata:
  name: multiplication
  namespace: dapr-calculator
spec:
  accessControl:
    defaultAction: deny
    trustDomain: "public"
    policies:
    - appId: mathfanboy
      defaultAction: allow
      trustDomain: "public"
      namespace: "dapr-calculator"
    - appId: percentage
      defaultAction: allow
      trustDomain: "public"
      namespace: "dapr-calculator"
where you only allow MathFanBoy and Percentage to call the multiplication operation, and yet other pods can bypass the Dapr sidecar, which ultimately defeats the policy itself. Make no mistake: the reason why we define such policies is to enforce a certain behavior, and I don’t have peace of mind if I know that other routes are still possible.
So, in summary, Dapr’s mTLS and policies are only effective if you take the Dapr route, but nothing prevents you from taking another route, as the sketch below illustrates.
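Here is a minimal sketch of the two routes (the JSON payload, port, and pod IP are illustrative; the app ID and method path come from the calculator demo). The first call goes through the local daprd sidecar, so mTLS and the access-control policy apply; the second talks to the target directly and never touches Dapr:

using System;
using System.Net.Http;
using System.Text;
using System.Threading.Tasks;

class DaprRouteVsDirectRoute
{
    static async Task Main()
    {
        using var http = new HttpClient();
        StringContent Payload() =>
            new StringContent("{\"op1\":10,\"op2\":2}", Encoding.UTF8, "application/json");

        // Dapr route: the daprd sidecar (HTTP port 3500 by default) resolves the
        // "multiplication" app ID, establishes mTLS with the target's sidecar,
        // and evaluates the access-control policy.
        var viaDapr = await http.PostAsync(
            "http://localhost:3500/v1.0/invoke/multiplication/method/multiplication/multiply",
            Payload());
        Console.WriteLine($"Via Dapr: {viaDapr.StatusCode}");

        // Direct route: calling the pod/service directly (illustrative IP).
        // Dapr does not intercept network traffic, so this call is plain text,
        // anonymous, and invisible to the policy above.
        var direct = await http.PostAsync(
            "http://10.244.1.17/multiplication/multiply",
            Payload());
        Console.WriteLine($"Direct: {direct.StatusCode}");
    }
}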
Let us see how this works with Linkerd. As stated on their web site, Linkerd also does not enforce mTLS by default and has added this to their backlog. However, with Linkerd (same and even easier with Istio), we can make sure that only authorized services can talk to meshed ones. So, with Linkerd, we would not end up in the same situation:
First thing to notice: we simply use the service name to contact our target, because there is no Dapr route in this case, nor any service discovery feature. Linkerd leverages the Ambassador pattern, which intercepts all network calls coming into and going out of a pod. Therefore, when the application container of a Linkerd-injected pod tries to connect to another service, Linkerd’s sidecar performs the call to the target, which lands on the other sidecar (provided the target is indeed a Linkerd-injected service). In this case, no issue. Of course, as with Dapr, nothing prevents us from directly calling the pod IP of the target. Yet, from an injected pod, the Linkerd sidecar will intercept that call. From a non-injected pod, there is no such outbound sidecar, but our target’s sidecar will still handle inbound calls, so you can’t bypass it. By default, because Linkerd does not enforce mTLS, it will let the call through, unless you define fine-grained authorizations as shown below:
apiVersion: policy.linkerd.io/v1beta1
kind: Server
metadata:
  namespace: rest-calculator
  name: multiplication
spec:
  podSelector:
    matchLabels:
      app: multiplication
  port: 80
  proxyProtocol: HTTP/1
---
apiVersion: policy.linkerd.io/v1beta1
kind: ServerAuthorization
metadata:
  namespace: rest-calculator
  name: multiplication-from-mathfanboy
spec:
  server:
    name: multiplication
  client:
    meshTLS:
      identities:
      - mathfanboy
      - percentage
In this case, only MathFanBoy and Percentage will be allowed to call the multiplication operation. In other words, Linkerd allows us to enforce mTLS whatever route is taken. With Istio, it’s even easier, since you can simply enforce mTLS through the global mesh config; you do not even need to specify explicit authorization policies (although doing so is a best practice). To illustrate the above diagrams, here are some screenshots showing these routes in action:
I’m first calling the multiplication operation from the addition pod, even though we told Dapr that only MathFanBoy and Percentage could call multiplication. As you can see, the Dapr policy kicks in and forbids the call, as expected.
but while this policy is defined, I can still call the multiplication using a direct route (pod IP):
and the same applies to non-injected pods of course.
With the Linkerd policy in place, however, there is no way to call multiplication other than from MathFanBoy and Percentage. For the sake of brevity, I won’t show you the screenshots, but trust me, you will be blocked if you try.
Let us now focus on the injection process which will clarify what is going on behind the scenes.
Injection process: Dapr vs. Service Mesh
Both Dapr and Service Meshes will inject application pods according to annotations. They both have controllers in charge of injecting their victims. However, when looking at the lifecycle of a Dapr-injected pod as well as a Linkerd-injected pod, we can see noticeable differences.
When injecting Linkerd to an application, in plain Kubenet (not using the CNI plugin), we notice that Linkerd injects not only the sidecar but also an Init Container:
Looking more closely at the init container, we can see that it requires a few capabilities, such as NET_ADMIN and NET_RAW, because it rewrites the pod’s iptables rules to make sure network traffic entering and leaving the pod is captured by Linkerd’s sidecar. When using Linkerd together with a CNI plugin, the same principle applies, but the rules are not rewritten by the init container. No matter how you use Linkerd, all traffic is redirected to its sidecar, which means the sidecar cannot be bypassed.
When injecting Dapr, we see that there is no Init Container and only the daprd container (sidecar) is injected:
There is no rewrite of any iptables rules, meaning that the sidecar can be bypassed without any problem, thus bypassing Dapr routes and Dapr policies. In other words, we can easily escape the Dapr world.
Wrapping up
As stated initially, I mostly focused on the impact of Dapr or a Service Mesh on the application itself, and on how the overall protection given by mTLS varies according to whether you use Dapr or a Service Mesh. I hope it is clear by now that Dapr is definitely an application framework that infuses the application code, while a Service Mesh is completely transparent to the application. Note that the latter is only true when using a decent Service Mesh; by decent, I mean something stable, performant, and reliable. I was recently confronted with a mesh that I will not name here, and it was a true nightmare for the application: it kept breaking it.
Although Dapr and Service Meshes seem to have overlapping features, they do not cover workloads in the same way. With regard to the initial question about when to use Dapr or a Service Mesh, I would take the following elements into account:
– For distributed architectures that are also heavily event-driven, Dapr is a no-brainer, because Dapr brings many features to the table for interacting with message and event brokers, as well as state stores. Yet Service Meshes can still help measure performance, spot issues, and load balance traffic by understanding protocols such as HTTP/2, gRPC, etc. Meshes also help in the release process of the different services, splitting traffic across versions, etc.
– For heterogeneous workloads, with a mix of APIs, self-hosted databases, self-hosted message brokers (such as Rabbit MQ), etc., I would go for Service Meshes.
– If the trigger of choosing a solution is more security-centric, I would go for a Service Mesh
– If you need to satisfy all of the above, I would combine Dapr and a Service Mesh for microservices, while using Service Mesh only for the other types of workloads. However, when combining, you must consider the following aspects:
– Disable Dapr’s mTLS and let the Service Mesh manage this, including fine-grained authorization policies. Beware that in doing so, you lose some Dapr functionality, such as defining ACLs on the components.
– Evaluate the impact on the overall performance as you would have two sidecars instead of one. From that perspective, I would not mix Istio & Dapr together, unless Istio’s performance dramatically improves over time.
– Evaluate the impact on the running costs because each sidecar will consume a certain amount of CPU and memory, which you will have to pay for.
– Assess whether your mesh plays well with Dapr. While an application is agnostic to a mesh, Dapr is not, because Dapr also manipulates K8s objects such as services, ports, etc. There might be conflicts between what the mesh is doing and what Dapr is doing. I have seen Dapr and Linkerd used together without any issues, but I’ve also seen some Istio features break because Dapr names its ports dapr-http instead of http. I reported this problem to the Dapr team two years ago, but they haven’t changed it.
Microsoft Tech Community – Latest Blogs
Microsoft Viva Glint Monthly Newsletter – January 2024
Happy New Year from our Microsoft Viva Glint family!
Welcome to the January edition of our Viva Glint newsletter! This communication is full of information that will help you get the most from your Viva Glint programs.
Our next feature release
Viva Glint’s next feature release is scheduled for January 13, 2024. Your dashboard will provide date and timing details two or three days before the release.
In your Viva Glint programs
Customize a message to your survey takers – Within General Settings, enter any org-specific guidance that you’d like added to your existing privacy statement. This message will apply to all new and scheduled surveys and can be translated into additional languages. Within a specific survey, go ahead and edit that statement in Program Setup, as needed.
Customize your logo and survey email content, too! Your customization capabilities are enhanced! With in-platform guidance, we’re empowering you to take the reins and deliver customized email communications that meet Microsoft compliance requirements. Use the Microsoft Admin Center to set a custom logo and sending domain to create customized survey emails in Viva Glint that resonate with your organization.
We are just a few weeks away from the Copilot in Viva Glint Private Preview! This innovative new tool within the Viva Glint platform is designed to help organizational leaders and HR analysts easily understand, interpret, and act on employee feedback. Say “goodbye” to the tedious task of sifting through thousands of comments – Microsoft Copilot in Viva Glint provides short, natural language summaries that accurately represent the feedback you need to see.
Changes to how you’ll set up your employee attributes – As an admin, the changes we’re rolling out will allow you to view and edit your original schema after its initial setup, incorporate user time zones, set up survey and dashboard language fields, and set up personal email fields for surveying exiting employees; we’ve updated tenure buckets, too. Read about the new attribute setup experience.
News from Viva People Science
The Microsoft Viva People Science team has been busy hosting events and authoring blogs on current tips and trends to empower you to improve your business. Check out our most recent content:
• People Science Predictions: The impact of AI on the Employee Experience – Read our blog from the Viva People Science team, who has been busy making predictions about how AI is likely to impact employees and organizations. Read the 12 Predictions blog.
Connect and learn with Glint
Join us for our first Viva Glint: Ask the Experts session! Use this early registration link to join our new series to have questions answered about your Viva Glint programs.
We have platform trainings for Viva Glint admins and managers on Microsoft Learn! Use step-by-step guides to understand our dashboards, reports, and how to have quality team conversations.
All Viva Glint users can also benefit from our new Navigate and Share your Viva Glint Results module, located on Microsoft Learn.
Thanks to all our Viva Glint Learning Circles first-time joiners! The Viva Glint Learning Circles program is open to all customers who want to connect with other like-minded talent professionals to share knowledge, experiences, and challenges related to employee experience. Watch for news of our next sign-up period in this monthly newsletter.
How are we doing?
Please share feedback with your Customer Experience Program Manager (CxPM) if you have one, or by emailing us here. Also, if you do not want to receive these emails in the future, please let us know and you will be removed from the distribution list. Conversely, if there are people on your teams who should be receiving this monthly update, send us their email addresses and we’ll be sure they are added.
Microsoft Tech Community – Latest Blogs
Partner Blog | Gain AI and cloud technical skills with Microsoft Depth Workshops
Amid the increasing integration of AI into various business applications, cloud and AI skilling is essential for partners to reach their full potential. Microsoft is committed to helping address these skilling needs so that partners can help customers enable new services and products that use these quickly advancing technologies. Our partners have asked for our help in giving their employees Microsoft Azure Depth Enablement—in-depth knowledge specifically focused on Azure solutions for the Microsoft platforms and technology they use every day.
As part of our training efforts, Microsoft is now offering partner employees the opportunity to elevate their technical skills in specific lines of technology related to Microsoft AI and the Microsoft Cloud. These multi-day Microsoft Depth Workshops focus on practical technical aspects, including architecture and implementation considerations.
We encourage all partner technical learners to register for the skilling events relevant to their business to enhance architecting, deployment, and implementation skills and help unlock the capabilities of AI and cloud technology and applications.
Hone your technical skills in Microsoft Azure, Business Applications, and Security solutions
Microsoft Depth Workshops embody their titles by offering deeper, hands-on training that builds on advanced certifications. Their goal is to equip partner employees with the knowledge to help customers confidently adopt and optimize Microsoft AI and cloud products and services.
Continue reading here
Microsoft Tech Community – Latest Blogs
Microsoft Learn launches new generative AI content for innovators
Generative AI for innovators
The modules are as follows:
Using generative AI for ideation – In this challenge project, you will use Bing Chat to run a brainstorming session and create a one-slide summary of an idea, ready for implementation. This would be a great challenge to complete at the start of a hackathon.
Challenge: Using generative AI for prototyping and an MVP (Minimum Viable Product) – In this module, Bing Chat will guide you through creating prototypes or mockups for your idea and implementing the project.
Challenge: Use generative AI to create a business model for your startup – In this module, you are a Chief Strategy Officer (CSO) tasked with creating a business model/strategy using the Business Model Canvas Template guide. But you won’t do it alone: you will co-create this vision with artificial intelligence, ideating, researching, and preparing everything to set your startup up for success.
Complete one of these challenge projects to earn a digital certificate on Microsoft Learn today!
Get ready for Imagine Cup 2024!
Complete these modules to turn your brilliant ideas into startup projects and prepare for the Imagine Cup 2024 student competition. The modules can also help you prepare for your next hackathon, helping you create high-quality materials for an upcoming hackathon idea and improve your chances of winning.
Finally, help us improve this content for you.
After completing these learning modules, comment below on where we can improve and on any additional content you would find useful as you begin your journey as an AI entrepreneur.
Microsoft Tech Community – Latest Blogs
Watch the newly released Surface videos for device repair
The engineers at Surface have created new instructional videos demonstrating how to disassemble the newly available Surface devices, along with a high-level overview of how to replace the components. The latest videos are for Surface Laptop Studio 2 and Surface Go 4.
Use these videos as a companion to the Surface Service Guides documentation:
Surface Laptop Studio 2
Surface Go 4
Surface Laptop Go 2 & Surface Laptop Go 3
Surface Laptop 3 & Surface Laptop 4
Surface Pro 9 with 5G
Surface Pro 8
Surface Pro 7+
See also:
Hands-on videos for Surface device repair (Part 1)
Surface Laptop Studio 2
Contents
Introduction
Removing feet and cover
Removing SSD
Removing display module
Removing Surface Connect port and audio jack
Removing micro SD port
Removing USB ports
Removing fans
Removing subwoofer speakers
Removing motherboard
Removing tweeters
Surface Go 4
Contents
Introduction
Removing kickstand
Debonding and removal of the display
Removing hinges
Removing antennae deck
Removing SD connector
Removing blade connector
Removing camera modules
Removing motherboard
Removing speakers
Surface Laptop Go 2 & Laptop Go 3
Surface Laptop 3 & Surface Laptop 4
Surface Pro 9 with 5G
Surface Pro 8
Surface Pro 7+
Learn more
Hands-on videos for Surface device repair (Part 1)
Full playlist of Surface repair videos
Surface for Business service and repair
Microsoft Tech Community – Latest Blogs
Armchair Architects: Artificial Intelligence, Large Language Models, and Architects (Part 1 of 2)
Welcome back to the fourth season of Armchair Architects! You asked for more, and we’re here to deliver. This season, we’re diving deep into the world of Artificial Intelligence (AI), specifically focusing on large language models (LLMs) with our host David Blank-Edelman and our armchair architects Uli Homann and Eric Charran.
Our conversation kicks off with Eric and Uli, two seasoned architects, discussing their experiences with ChatGPT and Bard. The topic of discussion? Large Language Models (LLMs), a term you’ll hear a lot throughout the season.
Eric shares his disruptive experience with these hosted foundational models, like ChatGPT, which have changed our lives in unexpected and delightful ways. The most impactful change he’s seen is in how these models support his day-to-day work as an architect.
The Architect’s New Assistant
As architects, understanding the product features, prioritized requirements, and non-functional requirements is crucial. Traditionally, this would involve extensive research and application of various patterns like the bulkhead pattern and the orchestrator pattern.
However, the advent of generative AI has revolutionized this process. Eric shares an instance where he plugged some requirements into ChatGPT, suggested the orchestrator model’s relevance, and asked for its opinion. The result? A cogent response on how to meet the requirements, understand all the features (both functional and non-functional), adhere to the architectural patterns, and even get recommendations on other potentially relevant patterns.
This process, which Eric refers to as ‘prompt engineering’, has transformed what used to be a manual activity into an automated one. Architects now have an AI research assistant that can perform architectural jobs. However, the architect still needs to be the arbiter of whether the AI’s suggestions are correct, to avoid falling for ‘hallucinations’ (false information generated by the AI); still, it’s a great starting point for what used to be a manual activity.
Unpacking the Jargon
During their discussion, Eric mentioned some interesting terms like ‘prompt engineering’ and ‘hallucinations’. They also took a moment to define what a large language model is for those unfamiliar with the term.
In essence, a large language model is the continuation of two technologies that have been growing bigger and bigger: neural networks, an outcome of decades of AI research going back to the 1990s, and deep learning, which rose to prominence in the mid-2010s.
The Power of Deep Learning
If you’re a Dune fan, you might liken the process of deep learning to space folding: it’s about folding the neural network to allow for greater depth, hence the term ‘deep learning’. The OpenAI folks, in collaboration with the Azure AI infrastructure, have managed to push this to a size of trillions of parameters, creating a large language model.
These large language models focus on human language. It’s not just about speech or words, but also images, code, and other forms of human expression. Essentially, large language models are communication models. This is evident in the work done by OpenAI, Bard, and the Llama models for Meta.
Prompt Engineering: Steering the Model
Prompt engineering is about utilizing human expertise within a specific domain to steer the model to produce productive outputs. A large language model uses its vast training corpus of information to predict the next most likely cogent word in a sequence of words. Prompt engineering structures a query so that the most accurate output is achieved based on the results of the input question.
For instance, instead of asking the model for great patterns to create a microservice, which might result in a dump of information, prompt engineering refines the question. It constructs a prompt so that it specifically outputs the information in a way that can be used effectively.
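For example (an illustrative prompt, not one from the episode), instead of asking “What are good patterns for creating a microservice?”, a refined prompt might read: “Acting as a software architect, recommend three resilience patterns for a payment microservice that must tolerate downstream timeouts, state the trade-offs of each, and output the result as a bulleted specification a developer can implement.” The constraints on persona, scope, and output format are what turn a dump of information into something usable.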
The Hallucination Check
Of course, there’s the hallucination check. This is a crucial step to ensure the accuracy of the model’s output. But before we delve into hallucinations, it’s important to understand that prompt engineering is not just about direction, but also about constraining.
The corpus that the system has access to is incredibly wide, encompassing human knowledge acquired over thousands of years. Prompt engineering effectively tells the model to constrain what it’s looking at. One of the niftiest tricks in prompt engineering is asking the model to take on a persona. For example, asking the model to assume the role of a software architect looking for patterns for microservices implementations. This allows the model to switch its perspective and provide better and deeper outputs.
As we wrap up Part 1 of this episode, we’re about to head in a slightly different direction. Join us for Part 2 as we continue our exploration of AI and large language models.
Recommended Next Steps
If you’d like to learn more about the general principles prescribed by Microsoft, we recommend Microsoft Cloud Adoption Framework for platform and environment-level guidance and Azure Well-Architected Framework. You can also register for an upcoming workshop led by Azure partners on cloud migration and adoption topics and incorporate click-through labs to ensure effective, pragmatic training.
You can view the whole video below and check out more videos from the Azure Enablement Show.
Microsoft Tech Community – Latest Blogs
Armchair Architects: Artificial Intelligence, Large Language Models, and Architects (Part 2 of 2)
Large Language Models: A Deep Dive
Welcome to the second part of our exploration into large language models. In this episode, we delve deeper into the intricacies of these models, discussing everything from the formulation of effective prompts to the phenomenon of hallucinations with our host David Blank-Edelman and our armchair architects Uli Homann and Eric Charran.
Crafting Effective Prompts
One of the key aspects of working with large language models is the ability to craft effective prompts. These prompts need to be suitably constrained to elicit useful responses. For instance, an architect might want to ask for a solution architecture perspective response that meets specific functional and non-functional requirements.
Eric used an example where he prompted “from a software architect perspective come up and recommend a solution architecture that accomplishes all of these functional and non-functional requirements and then write it as if I’m creating a specification for a developer. The architecture requires investments from the organization in terms of CapEx and OpEx, new services, new cloud subscriptions.” In another prompt, he took the output and prompted “Then take this and write it as an e-mail to the CIO.”
It took the outputs, raised them up a level, and created a good foundation. It wasn’t perfect, but it was a good foundation for executive messaging as to why the CIO would lobby the CFO to invest in these particular technologies.
Then Eric asked it to switch personas. “Assume that I’m an SRE or platform engineering team lead and I need to support this thing that I just created. Write me a quick spec for the SRE or platform engineering team lead who will support the architecture.”
This process involves a form of ‘code switching’, where the language and level of detail are adjusted based on the audience. It provided a great starting point for refinement.
Understanding Hallucinations
As we delve deeper into the workings of large language models, we encounter the phenomenon of ‘hallucinations’. These occur when the model makes assumptions based on the patterns it has seen so far. For example, if the model sees the sequence 1, 2, 3, it might assume that 4 should naturally follow.
While this extrapolation can work in many scenarios, it can also lead to inaccurate or even dangerous assumptions, especially in sensitive domains like healthcare. It’s crucial to remember that these models cannot make assumptions or extrapolations when dealing with diagnostic information.
Hallucinations were quite prevalent in large language models at the beginning of the year. However, thanks to the concerted efforts of the research community, their occurrence has decreased dramatically. Techniques like fine-tuning allow users to constrain and limit the number of hallucinations.
The Mystery of AI Outputs
In the world of artificial intelligence, large language models have emerged as a fascinating area of study. However, their workings often remain a mystery to the users, leading to a myriad of questions and concerns.
One of the intriguing aspects of these models is the generation of outputs. Users often find themselves puzzled by the responses they receive, unsure of the rationale behind them. This lack of understanding can be problematic, especially for professionals like architects who rely on these models for their work.
The key here is to understand how these models function. While reading the response, it’s crucial to fact-check the information to ensure its accuracy. There’s got to be a voice in the back of your head saying, “all right, let me just factually check this thing to make sure it just didn’t make this up because it wants to.” The models are designed to link concepts together and generate a response based on the input. However, they might sometimes fabricate links between concepts, leading to inaccurate outputs.
Understand how the process works, and then, as you’re reading the output, quality-check it to make sure it makes sense before you proffer it as the answer.
The Role of the User
Large language models are tools designed to assist users in creating artifacts more efficiently and in greater depth. They provide proposals based on the input given by the user. It’s important to remember that these proposals need to be validated by the user before they can be accepted as the final output.
The user plays a vital role in this process. They need to understand what they’re asking the model to do and validate the output once it’s produced; only then does it become their proposal. Responsibility for the final output lies with the user, not the AI. The user cannot simply blame the AI if something goes wrong.
The World Beyond Natural Language
While much of the discussion around large language models revolves around natural language text, these models are capable of much more. They can understand and generate code, making them useful for tasks beyond generating human language text.
For instance, OpenAI has three model families: GPT for language, DALL-E for images, and Codex for code. These models can express anything that can be represented in code, including schemas. This capability opens up a whole new realm of possibilities for users, allowing them to leverage these models in a variety of ways.
TypeChat: Prompt Engineering with JSON
In the realm of artificial intelligence, large language models have emerged as powerful tools capable of generating a wide array of outputs. From creating JSON schemas to documenting legacy code, these models are revolutionizing the way we approach problem-solving.
One innovative application of large language models is ‘TypeChat’, a project developed by Anders Hejlsberg and his colleagues. TypeChat leverages the power of prompt engineering to generate outputs that conform to a JSON schema.
Users can instruct the large language model to generate a specific output and format it as JSON. By providing a JSON schema as part of the prompt, the system can respond with JSON that conforms to that schema. This approach offers an elegant way to create programmable outputs, as parsing JSON is much easier than parsing free-form text.
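As a rough illustration of the idea, here is a sketch of schema-constrained prompting in Python (not TypeChat itself); the schema, model name, and prompt are assumptions for the example.

import json
from openai import OpenAI

client = OpenAI()

# A hypothetical schema the response must conform to.
SCHEMA = {
    "type": "object",
    "properties": {
        "intent": {"type": "string"},
        "confidence": {"type": "number"},
    },
    "required": ["intent", "confidence"],
}

prompt = (
    "Classify the user's request. Respond ONLY with JSON that conforms to this schema:\n"
    f"{json.dumps(SCHEMA)}\n\n"
    "User request: 'Cancel my order from yesterday.'"
)

response = client.chat.completions.create(
    model="gpt-4",  # illustrative
    messages=[{"role": "user", "content": prompt}],
)

# Because the reply is JSON, it can be parsed and checked programmatically
# instead of being scraped out of free-form prose.
result = json.loads(response.choices[0].message.content)
print(result["intent"], result["confidence"])

In practice you would validate the parsed object against the schema and re-prompt on failure; automating that loop against TypeScript type definitions is, broadly speaking, what TypeChat does.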
Large language models can also generate code in various forms. One area where this capability has proven particularly useful is in the documentation of legacy code bases.
For instance, many COBOL code bases are quite old and lack proper documentation. Generative AI can be used to document these code bases and explain what the code does. This is especially useful when the original purpose or functionality of the code is no longer known.
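A sketch of that documentation workflow might look like the following, again assuming the OpenAI Python SDK; the COBOL fragment is a made-up placeholder.

from openai import OpenAI

client = OpenAI()

# Placeholder fragment of an undocumented legacy program.
legacy_code = """
       IDENTIFICATION DIVISION.
       PROGRAM-ID. PAYCALC.
       PROCEDURE DIVISION.
           COMPUTE WS-NET = WS-GROSS - (WS-GROSS * WS-TAX-RATE).
"""

response = client.chat.completions.create(
    model="gpt-4",  # illustrative
    messages=[{
        "role": "user",
        "content": "Explain what this COBOL code does and write developer "
                   f"documentation for it:\n\n{legacy_code}",
    }],
)
print(response.choices[0].message.content)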
Knocking Their Socks Off: Impressive Prompts for Architects
When it comes to impressing architects with the capabilities of large language models, one effective approach is to use these models to analyze problematic code. For example, a piece of legacy code or code that’s leaking memory can be plugged into ChatGPT with the prompt “find the memory leak” or “find another way to write this optimally”.
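For instance, here is a sketch of that kind of prompt, using a deliberately leaky Python function as the input; both the snippet and the prompt wording are illustrative.

from openai import OpenAI

client = OpenAI()

# Deliberately leaky example: the module-level cache grows without bound.
leaky_code = '''
_cache = {}

def render_report(user_id, data):
    _cache[(user_id, id(data))] = data  # entries are never evicted
    return f"report for {user_id}: {len(data)} rows"
'''

response = client.chat.completions.create(
    model="gpt-4",  # illustrative
    messages=[{
        "role": "user",
        "content": f"Find the memory leak in this code and suggest another "
                   f"way to write it optimally:\n\n{leaky_code}",
    }],
)
print(response.choices[0].message.content)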
However, it’s important to remember that the outputs generated by these models need to be quality checked to ensure their correctness. It’s also crucial to ensure that your organization is comfortable with the data being fed into the model, especially when using the consumer version of ChatGPT.
ChatGPT Enterprise and Bing Chat Enterprise
For those concerned about data privacy, there are private versions of ChatGPT available, such as ChatGPT Enterprise and Bing Chat Enterprise. These versions ensure that the data fed into them stays within your organizational boundaries, offering an added layer of security.
Persona-Based Modeling and Prompt Engineering
Another effective strategy when working with large language models is persona-based modeling. This involves framing prompts as if the model is a specific persona, such as a software architect or a support person for a specific technology. This approach helps the model better understand the problem scenario and generate more relevant responses.
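One common way to express a persona, sketched here with the OpenAI Python SDK, is to place it in the system message; the persona text, design question, and model name are illustrative.

from openai import OpenAI

client = OpenAI()

response = client.chat.completions.create(
    model="gpt-4",  # illustrative
    messages=[
        # The system message establishes the persona for the whole exchange.
        {"role": "system",
         "content": "You are a senior software architect specializing in "
                    "event-driven systems on Azure."},
        {"role": "user",
         "content": "Orders are written to a queue and a nightly batch job "
                    "drains it. What are the risks in this design?"},
    ],
)
print(response.choices[0].message.content)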
As we continue to explore the capabilities of large language models, it’s clear that these tools offer immense potential in a variety of fields. From prompt engineering to code generation, these models are paving the way for innovative solutions to complex problems. Stay tuned for more insights into the fascinating world of AI in our upcoming discussions.
Recommended Next Steps
If you’d like to learn more about the general principles prescribed by Microsoft, we recommend the Microsoft Cloud Adoption Framework for platform- and environment-level guidance, along with the Azure Well-Architected Framework. You can also register for an upcoming workshop led by Azure partners on cloud migration and adoption topics; these workshops incorporate click-through labs to ensure effective, pragmatic training.
If you’ve just read Part 2, you can go back and read Part 1 of this blog.
You can view the whole video below and check out more videos from the Azure Enablement Show.
Lesson Learned #468: Understanding and Resolving the “Could not find prepared statement with handle”
Introduction:
In the realm of SQL Server, encountering errors is a part of the development process. One such common error is “Could not find prepared statement with handle”. In this article, we’ll explore what this error means, why it occurs, and how to resolve it.
Understanding the Error:
The error message “Could not find prepared statement with handle” occurs in SQL Server when there’s an attempt to execute a prepared statement with a handle that is unrecognized or unavailable. A handle in SQL Server is an identifier used to execute or deallocate a prepared statement.
A Functional Script Example: Consider the following script:
DECLARE @P1 INT;
-- Prepare the statement; the handle is returned in @P1.
EXEC sp_prepare @P1 OUTPUT,
N'@P1 NVARCHAR(128)',
N'SELECT state_desc FROM sys.databases WHERE name=@P1';
-- Execute the prepared statement via its handle, binding N'testdb' to the parameter.
EXEC sp_execute @P1, N'testdb';
-- Deallocate the prepared statement.
EXEC sp_unprepare @P1;
In this script, sp_prepare prepares a statement and assigns it a handle (@P1). Then, sp_execute executes the prepared statement using this handle. Finally, sp_unprepare deallocates the prepared statement.
Common Causes of the Error: This error typically occurs due to:
Incorrect or modified handle used between preparation and execution.
The prepared statement was deallocated (via sp_unprepare) before execution.
Client-server application synchronization issues, leading to lost or altered handles.
Solutions and Best Practices: To avoid this error, consider the following practices:
Handle Verification: Always ensure the handle used in sp_execute matches the one generated by sp_prepare.
Order of Operations: Check to make sure that sp_unprepare isn’t called before sp_execute.
Error Handling in Applications: Implement robust error handling in your client applications to manage unforeseen errors effectively (a sketch follows below).
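As an illustration of that last point, here is a minimal sketch in Python with the pyodbc driver; the connection string and retry policy are assumptions, not a definitive implementation.

import pyodbc

# Hypothetical connection string -- adjust for your environment.
CONN_STR = (
    "DRIVER={ODBC Driver 18 for SQL Server};"
    "SERVER=myserver;DATABASE=master;Trusted_Connection=yes;"
)

def get_database_state(name):
    """Run a parameterized query, retrying once if the prepared-statement
    handle was lost (for example, after a pooled connection was reset)."""
    for attempt in range(2):
        conn = pyodbc.connect(CONN_STR)
        try:
            cursor = conn.cursor()
            # pyodbc prepares parameterized queries behind the scenes; the
            # server-side handle is tied to this connection.
            cursor.execute(
                "SELECT state_desc FROM sys.databases WHERE name = ?", name
            )
            row = cursor.fetchone()
            return row[0] if row else None
        except pyodbc.Error as exc:
            # A fresh connection gets a fresh handle; retry once, then re-raise.
            if "Could not find prepared statement" in str(exc) and attempt == 0:
                continue
            raise
        finally:
            conn.close()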
Conclusion:
Understanding the “Could not find prepared statement with handle” error in SQL Server is crucial for database management and application development. By recognizing the common causes and adopting best practices, developers can efficiently navigate and resolve this error, leading to more stable and reliable SQL applications.
Remember that, depending on the driver or application language you are using, the implementation may differ; normally, though, this error needs to be handled by the developer, who should review why the handle was lost or is incorrect.
Enjoy!
Let AI ideate AI use cases for your customer
Generative AI can be put to use for ideation in many ways because, true to its name, it is good at coming up with things, as long as it is given enough context. So why not try using AI as an assistant in selling AI itself? Microsoft offers its partners (and why not its customers, too) an easy-to-use AI Use Cases service that generates AI use cases based on the public information on an organization’s website. The service has 13 categories; it first tries to find information for each of them, which builds up the context used to generate suggestions for how AI could be applied in that organization. Simple!
The AI Use Cases service is available at: https://azureopenaiusecases.azurewebsites.net/
The user enters the address of a website, or of one of its subpages, such as https://www.mustigroup.com/fi/tietoa-meista/, and clicks Extract Profile, after which Azure OpenAI starts collecting information from the page and builds an organization profile.
Next, click the Generate Use Cases button, and the AI generates a ready-made customer letter template containing a few Azure OpenAI use cases.
You can copy this template and edit it into a suitable email, or simply use it as a starting point for ideation.