Category Archives: Microsoft
Storing OPC UA Information Models in Azure Data Explorer
Most Azure users deploy Azure Data Explorer (ADX) for storing and analyzing OPC UA PubSub telemetry data sent from industrial sites via a cloud broker. For the last several years, customers have also added OPC UA PubSub metadata to ADX as documented here (https://www.linkedin.com/pulse/using-azure-data-explorer-opc-ua-erich-barnstedt).
However, many customers are unaware that they can store entire OPC UA Information Models in ADX, imported from the UA Cloud Library (https://uacloudlibrary.opcfoundation.org).
This has several advantages:
OPC UA PubSub metadata only describes the semantics of the associated OPC UA PubSub telemetry data, but not the entire OPC UA Information Model where the data originally came from. However, customers want to have all semantic information in one location, ideally in the cloud for global access.
OPC UA PubSub metadata only includes a subset of the rich OPC UA semantics. For example, OPC UA complex type definitions or references to other data within the Information Model are not included but needed for deeper analysis of the telemetry data.
Customers want to be able to see what other telemetry data is available from their sites for potential publishing to the cloud and need the entire OPC UA Information Model to make a selection.
To get started with importing OPC UA Information Models into ADX, you first need an instance of ADX in your Azure subscription as well as a login to the UA Cloud Library, hosted by the OPC Foundation. You can register for free access to the UA Cloud Library here: https://uacloudlibrary.opcfoundation.org/Identity/Account/Register
Once you have registered, you can browse the OPC UA Information Models you are interested in via the built-in browser accessible from here: https://uacloudlibrary.opcfoundation.org/Explorer
To get the unique ID of the OPC UA Information Models you are interested in, you can simply execute the “namespaces” REST API from here: https://uacloudlibrary.opcfoundation.org/infomodel/namespaces. For example, the “Robotics” Information Model has the unique ID 4172981173.
Configure an Azure Data Explorer callout policy for the UA Cloud Library by running the following management command on your ADX cluster (make sure you are an ADX cluster administrator, configurable under Permissions in the ADX tab in the Azure Portal):
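As a minimal sketch (the exact policy may vary for your cluster), a callout policy that allows web API calls to the UA Cloud Library host, together with enabling the http_request plugin used by the import query below, looks like this:
.enable plugin http_request
.alter cluster policy callout @'[{"CalloutType": "webapi", "CalloutUriRegex": "uacloudlibrary\\.opcfoundation\\.org", "CanCall": true}]'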
Then, from the Azure Portal UI of your ADX instance, simply run the following query to import the OPC UA Information Model into ADX:
let uri='https://uacloudlibrary.opcfoundation.org/infomodel/download/<insert information model identifier from cloud library here>';
let headers=dynamic({'accept':'text/plain'});
let options=dynamic({'Authorization':'Basic <insert your cloud library credentials hash here>'});
evaluate http_request(uri, headers, options)
| project title = tostring(ResponseBody.['title']), contributor = tostring(ResponseBody.contributor.name), nodeset = parse_xml(tostring(ResponseBody.nodeset.nodesetXml))
| mv-expand UAVariable=nodeset.UANodeSet.UAVariable
| project-away nodeset
| extend NodeId = UAVariable.['@NodeId'], DisplayName = tostring(UAVariable.DisplayName.['#text']), BrowseName = tostring(UAVariable.['@BrowseName']), DataType = tostring(UAVariable.['@DataType'])
| project-away UAVariable
| take 10000
You need to provide two things in the query above:
The Information Model's unique ID from the UA Cloud Library, entered into the <insert information model identifier from cloud library here> placeholder of the ADX query.
Your UA Cloud Library credentials (generated during registration), encoded as a Basic authorization header hash and inserted into the <insert your cloud library credentials hash here> placeholder of the ADX query. Use a tool like https://www.debugbear.com/basic-auth-header-generator to generate this, or the snippet shown after this list.
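If you prefer not to paste credentials into a third-party website, the same hash can be generated directly in KQL with base64_encode_tostring() (the username and password below are placeholders):
print strcat("Basic ", base64_encode_tostring("your-username:your-password"))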
And voila! You have just imported an entire OPC UA Information Model into a temporary table in Azure Data Explorer which you can then use in your queries!
AI in Operations (Part 2 of 2)
In part 1 we went through the use cases of AI in development; in this blog we will cover the final four stages of the DevOps lifecycle and look at how AI can be used in operations at scale.
Release: The release stage in DevOps refers to the phase in the Software Development Lifecycle (SDLC) where a new version or iteration of a product is cut and made available to the end users. Here are two examples of where AI can help in this stage:
Automated release note generation:
Natural Language Processing (NLP) and Generative AI for release notes: Writing release notes can be quite an arduous task, and it is imperative to get them right. Using NLP and generative AI, you can analyse code changes and automatically generate comprehensive release notes in natural language for end users. This ensures that the release documentation is comprehensive, up to date, and easily understandable for your user base.
Deployment risk assessment:
Machine Learning for risk prediction: Releasing a new iteration of a product is always exciting, but it comes with inherent risk. Machine learning models that assess the risk associated with a release using historical data help the team by surfacing insights and potential risks, so that mitigation contingencies can be put in place ahead of time.
Deploy: The deploy stage in DevOps refers to the process of deploying the tested software or infrastructure changes from a development / test / pre-production environment to a production environment. Here are two examples of how AI can assist in this stage:
Dynamic rollback strategies:
AI-Driven rollbacks: It is expected that at one point or another you will need to roll back a recently deployed release. Mistakes happen, and that is OK. The hard part is that rollback is not always taken care of automatically, and the "what" and "why" are not always clear. Here you can utilise AI models to analyse real-time performance metrics during deployment. If anomalies or performance issues are detected post-deployment, the model can autonomously decide whether to initiate a rollback, ensuring a quick response to potential issues.
Deployment Optimisation:
Using AI for optimal traffic routing: There are several deployment methods in wide use, including canary, all-at-once, shadow deployments and more. Blue-green is one of the most commonly used in production systems, but that does not always mean it yields the expected results. By utilising AI tools, you can dynamically optimise the traffic distribution between the blue and green environments in a blue-green deployment, potentially better than a regular load balancer can. This ensures that the new version receives sufficient traffic for testing and validation without impacting the user experience.
Operate: In this stage the focus is on maintaining and managing the production environment. This is where you will triage and address any incidents that occur, and yes, AI can help!
Cognitive incident analysis:
Cognitive AI for incident triage: When an incident occurs, as a DevOps engineer (or whatever your title may be) you need to be able to categorise it and explain it in plain language, both to report it and to help other team members understand. This can be a hard and time-consuming task, especially under pressure. This is a good place to implement cognitive AI tooling that can understand and categorise incidents based on natural language descriptions, such as application logs. Doing so enables faster and reasonably accurate incident triage, allowing the team to prioritise and address critical issues promptly.
Monitor: In this stage of DevOps you are continuously checking the health, performance and even the behaviour of the service. This can be time consuming and costly. You can do this in a few ways, from cherry-picking and analysing logs to reading user feedback and calculating costs. Here is how AI can help you:
Predictive cost analysis:
Cost prediction and optimisation: Building on cloud infrastructure comes with a sense of anxiety that you may unknowingly be charged for a service or tool you are not aware of. With AI integrated into your monitoring tools, you can predict future resource allocation and the associated costs without working through a manual cost calculator. This enables proactive cost management and optimisation with very little lift from you, the end user.
Sentiment analysis of user feedback:
AI-Based sentiment analysis: User feedback is imperative for improving the product or service you provide, and this is the stage in the DevOps lifecycle where you review it and plan any actionable items into the next sprint. By applying sentiment analysis to user feedback and logs, you can get an overall picture of how the product is being perceived and how it is behaving at any given time, as sketched below. This leads to a quicker turnaround on the feedback loop and can help to prioritise feature improvements, bug fixes, or infrastructure changes.
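As one hedged illustration of this idea (one option among many, with a placeholder endpoint and key), the Azure AI Language SDK for Python can score feedback sentiment:
# A minimal sketch of sentiment analysis over user feedback, assuming an
# Azure AI Language resource; the endpoint and key are placeholders.
from azure.ai.textanalytics import TextAnalyticsClient
from azure.core.credentials import AzureKeyCredential

client = TextAnalyticsClient(
    endpoint="https://<your-language-resource>.cognitiveservices.azure.com/",
    credential=AzureKeyCredential("<your-key>"),
)
feedback = [
    "The new release is noticeably faster, great job!",
    "Login keeps failing since the last update.",
]
for doc in client.analyze_sentiment(feedback):
    # Each result carries an overall label plus per-class confidence scores.
    print(doc.sentiment, doc.confidence_scores)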
By incorporating AI into your DevOps and Software Development Lifecycle, you will be able to speed up and improve your delivery of services in several ways, as shown above in this blog and in part 1. When using AI tools there must always be human interaction and oversight to ensure what is being changed, provided, or reported by the models is correct.
To fully immerse yourself in the different AI tools available to help at these different stages of operations, I would suggest visiting the Microsoft AI website.
Cumulative Update #11 for SQL Server 2022 RTM
The 11th cumulative update release for SQL Server 2022 RTM is now available for download at the Microsoft Downloads site. Please note that registration is no longer required to download cumulative updates.
To learn more about the release or servicing model, please visit:
CU11 KB Article: https://learn.microsoft.com/troubleshoot/sql/releases/sqlserver-2022/cumulativeupdate11
Starting with SQL Server 2017, we adopted a new modern servicing model. Please refer to our blog post on the Modern Servicing Model for SQL Server for more details.
Microsoft® SQL Server® 2022 RTM Latest Cumulative Update: https://www.microsoft.com/download/details.aspx?id=105013
Update Center for Microsoft SQL Server: https://learn.microsoft.com/en-us/troubleshoot/sql/releases/download-and-install-latest-updates
Removal of several Microsoft Graph beta APIs for Intune device configuration reports
In February 2024, the following Microsoft Graph beta APIs, which leverage the old Intune reporting framework for device configuration policy reports, will stop working:
Device configuration report:
https://graph.microsoft.com/beta/deviceManagement/managedDevices('device_id')/deviceConfigurationStates
Device status:
https://graph.microsoft.com/beta/deviceConfiguration/StatelessDeviceConfigurationFEService/deviceManagement/deviceConfigurations('policy_id')/deviceStatuses
If you're impacted by this change, look for MC688107 in the Message center. If you're using automation or scripts to retrieve reporting data from the Graph beta APIs listed above, we recommend moving to the newer Intune reporting framework by making POST requests to the corresponding endpoint for each report:
Device configuration report: getConfigurationPoliciesReportForDevice
Device and user check-in status report: getConfigurationPolicyDevicesReport
Device assignment status report: getCachedReport
For more information on the updated reporting experience, read Announcing updated policy reporting experience in Microsoft Intune.
Example: Device configuration report
POST: https://graph.microsoft.com/beta/deviceManagement/reports/getConfigurationPoliciesReportForDevice
Payload:
{
    "select": [
        "IntuneDeviceId",
        "PolicyBaseTypeName",
        "PolicyId",
        "PolicyStatus",
        "UPN",
        "UserId",
        "PspdpuLastModifiedTimeUtc",
        "PolicyName",
        "UnifiedPolicyType"
    ],
    "filter": "((PolicyBaseTypeName eq 'Microsoft.Management.Services.Api.DeviceConfiguration') or (PolicyBaseTypeName eq 'DeviceManagementConfigurationPolicy') or (PolicyBaseTypeName eq 'DeviceConfigurationAdmxPolicy') or (PolicyBaseTypeName eq 'Microsoft.Management.Services.Api.DeviceManagementIntent')) and (IntuneDeviceId eq 'adce2b4a-0000-0000-0000-0000000000')",
    "skip": 0,
    "top": 50,
    "orderBy": [
        "PolicyName"
    ]
}
Response:
{
    "TotalRowCount": 2,
    "Schema": [
        { "Column": "IntuneDeviceId", "PropertyType": "String" },
        { "Column": "PolicyBaseTypeName", "PropertyType": "String" },
        { "Column": "PolicyId", "PropertyType": "String" },
        { "Column": "PolicyName", "PropertyType": "String" },
        { "Column": "PolicyStatus", "PropertyType": "Int32" },
        { "Column": "PspdpuLastModifiedTimeUtc", "PropertyType": "DateTime" },
        { "Column": "UnifiedPolicyType", "PropertyType": "String" },
        { "Column": "UnifiedPolicyType_loc", "PropertyType": "String" },
        { "Column": "UPN", "PropertyType": "String" },
        { "Column": "UserId", "PropertyType": "String" }
    ],
    "Values": [
        ["adce2b4a-0000-0000-0000-0000000000", "DeviceManagementConfigurationPolicy", "fdb08003-0000-0000-0000-00000000000", "ASR Rules 02", 2, "2023-08-13T01:51:46", "SettingsCatalog", "Settings Catalog", "admin@xxx.net", "132aa545-0000-0000-0000-00000000000"],
        ["adce2b4a-0000-0000-0000-00000000000", "DeviceManagementConfigurationPolicy", "09e3c028-0000-0000-0000-00000000000", "Intent Policy with AF", 6, "2023-08-10T01:53:20", "MicrosoftDefenderAntivirus", "Microsoft Defender Antivirus", "admin@xxxx.net", "132aa545-0000-0000-0000-00000000000"]
    ],
    "SessionId": ""
}
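If you are scripting this migration, a minimal sketch of the same POST in Python follows; acquiring the bearer token (for example via MSAL) is assumed to have happened elsewhere, and the device ID is a placeholder:
import requests

URL = "https://graph.microsoft.com/beta/deviceManagement/reports/getConfigurationPoliciesReportForDevice"
payload = {
    "select": ["IntuneDeviceId", "PolicyId", "PolicyName", "PolicyStatus"],
    "filter": "(IntuneDeviceId eq 'adce2b4a-0000-0000-0000-0000000000')",
    "skip": 0,
    "top": 50,
    "orderBy": ["PolicyName"],
}
# The token must carry the appropriate Intune reporting permissions.
response = requests.post(
    URL,
    json=payload,
    headers={"Authorization": "Bearer <access-token>"},  # placeholder token
    timeout=30,
)
response.raise_for_status()
print(response.json()["TotalRowCount"], "rows returned")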
If you have any questions, leave a comment below or reach out to us on X @IntuneSuppTeam!
Exciting news: Teams Essentials plus Teams Phone promotion extended!
PROMO EXTENDED!
We are excited to share that the Teams Essentials plus Teams Phone bundle promotion (available in the USA, UK, Puerto Rico, and Canada) has been extended through July 1st, 2024. Teams Phone is one of the biggest bets with the highest potential for partners in the new fiscal year and can help partners increase profitability.
Teams Phone enables customers to always be connected through a cloud-based phone solution. They will benefit from intelligent phone features such as automatic transcription of voicemail messages, smart call controls such as call queue scheduling, screen pop, and more.
We are providing CSP partners with the following tools to support Teams Phone enablement and sales guidance:
SMB Masters Program on-demand trainings | Microsoft Teams Phone Learning Path
New Teams Phone SMB 1:many customer workshop | book for partners as a part of the SMB workshop motion
With the promo we launched several Teams Phone discounts; see the Teams Essentials plus Phone System Promo FAQ for details.
Call to Action
Download and share the SMB 1:Many Customer Workshops to help partners grow their Teams Essentials plus Phone System (TE+PS) business: Modern Work for Partners – SMB Briefings (microsoft.com)
Evangelize new resources on https://aka.ms/TeamsEssentialsPartner
Resources
Teams Phone Partner Portal
Teams Phone SMB partner opportunity deck & Teams Phone SMB pitch deck
Teams Essentials plus Phone System Promo FAQ
CSP Masters Program readiness: https://aka.ms/M365MastersProgram
What's new in security for Azure SQL and SQL Server | Data Exposed
Check out this episode to learn the newest information on security for Azure SQL and SQL Server!
View/share our latest episodes on Microsoft Learn and YouTube!
Add LLM Prompts to Reports using Power BI Copilot for Microsoft Fabric
Interested in learning more about Power BI Copilot for Microsoft Fabric? I’ve published a new video walking through the Power BI Narrative visual with Copilot that provides a no-code (SaaS) mechanism for report developers to embed Azure OpenAI (Copilot) prompts into their reports.
There are a few great videos out there on the web for building and editing reports using Power BI Copilot, but the new Copilot narrative visual (still in preview at the time of recording) deserves more attention. LLM prompts can be added to the visual and re-run every time an end user filters a report. Suppose you switch your filters from "Florida in December" to "Maine in January" and you'd like to enhance the report with some external demographic data that ties to the data in your Power BI semantic model: all you need to do is push a button to get a new narrative.
Also, by enabling report developers to store prompts in the visual, you can instruct the Azure OpenAI LLM that powers Copilot to add URLs and citations for the data used in the response.
The demo in the video uses over 220 million rows of data from the Git repo that I put together with Inderjit Rana for customers to try out Microsoft Fabric and the Power BI Direct Lake connector, and you can recreate it yourself at this link: https://lnkd.in/gRavJURT
Lesson Learned #473: Harnessing the Synergy of Linked Server, Python, and sp_execute_external_script
In an era where data management transcends individual database systems, SQL Server offers a sophisticated feature set that includes Linked Server integration, Python scripting, and the powerful sp_execute_external_script function. The main objective of this approach is to leverage a Python script within SQL Server, using sp_execute_external_script over a Linked Server to reach a database outside the on-premises SQL Server, for example Azure SQL Database or Azure SQL Managed Instance, as an alternative to employing the pyodbc library. This method not only streamlines processes but also addresses key concerns in security and network configuration, such as opening ports, which are prevalent when using external libraries for database connections. By focusing on querying a Linked Server, we can achieve seamless data integration and manipulation while maintaining a secure and efficient environment.
Section 1: Unpacking Linked Servers in SQL Server
Linked Servers act as bridges, enabling SQL Server to execute commands and access data across different database systems. This capability is crucial for enterprises managing data across multiple platforms, offering a unified approach to data interaction. Utilizing Linked Servers, SQL Server can effectively communicate with various data sources, ensuring flexibility and scalability in data management.
Section 2: The Power of Python in SQL Server
The integration of Python into SQL Server, particularly through the sp_execute_external_script function, marks a significant advancement in data processing capabilities. This integration allows for the utilization of Python’s comprehensive libraries and analytical prowess directly within the SQL Server environment. It opens doors to sophisticated data analysis, complex transformations, and advanced machine learning applications, all while leveraging the robust security and performance features of SQL Server.
Section 3: Preparing the Groundwork
To embark on this integration, certain prerequisites must be met. This includes enabling SQL Server Machine Learning Services for Python support and configuring a Linked Server for external data access. Detailed steps guide you through this setup process, ensuring a smooth integration.
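As a sketch of that first prerequisite (assuming Machine Learning Services with Python support is already installed), external script execution is typically enabled like this:
-- Enable sp_execute_external_script; Machine Learning Services must be installed first
EXEC sp_configure 'external scripts enabled', 1;
RECONFIGURE WITH OVERRIDE;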
Section 4: Executing a Practical Use-case
We present a practical scenario where sp_execute_external_script is employed to query data from a Linked Server. The walkthrough covers creating a stored procedure that harnesses Python’s prowess to access and process data from an external database, illustrating the script’s development and execution.
Definition of Linked Server
USE [master]
GO
/****** Object: LinkedServer [MYSERVER2] Script Date: 11/01/2024 19:07:42 ******/
EXEC master.dbo.sp_addlinkedserver @server = N'MYSERVER2', @srvproduct=N'', @provider=N'MSOLEDBSQL', @datasrc=N'tcp:servername.database.windows.net,1433', @catalog=N'dotnetexample'
/* For security reasons the linked server remote logins password is changed with ######## */
EXEC master.dbo.sp_addlinkedsrvlogin @rmtsrvname=N'MYSERVER2',@useself=N'False',@locallogin=NULL,@rmtuser=N'username',@rmtpassword='########'
GO
EXEC master.dbo.sp_serveroption @server=N'MYSERVER2', @optname=N'collation compatible', @optvalue=N'false'
GO
EXEC master.dbo.sp_serveroption @server=N'MYSERVER2', @optname=N'data access', @optvalue=N'true'
GO
EXEC master.dbo.sp_serveroption @server=N'MYSERVER2', @optname=N'dist', @optvalue=N'false'
GO
EXEC master.dbo.sp_serveroption @server=N'MYSERVER2', @optname=N'pub', @optvalue=N'false'
GO
EXEC master.dbo.sp_serveroption @server=N'MYSERVER2', @optname=N'rpc', @optvalue=N'false'
GO
EXEC master.dbo.sp_serveroption @server=N'MYSERVER2', @optname=N'rpc out', @optvalue=N'false'
GO
EXEC master.dbo.sp_serveroption @server=N'MYSERVER2', @optname=N'sub', @optvalue=N'false'
GO
EXEC master.dbo.sp_serveroption @server=N'MYSERVER2', @optname=N'connect timeout', @optvalue=N'0'
GO
EXEC master.dbo.sp_serveroption @server=N'MYSERVER2', @optname=N'collation name', @optvalue=null
GO
EXEC master.dbo.sp_serveroption @server=N'MYSERVER2', @optname=N'lazy schema validation', @optvalue=N'false'
GO
EXEC master.dbo.sp_serveroption @server=N'MYSERVER2', @optname=N'query timeout', @optvalue=N'0'
GO
EXEC master.dbo.sp_serveroption @server=N'MYSERVER2', @optname=N'use remote collation', @optvalue=N'true'
GO
EXEC master.dbo.sp_serveroption @server=N'MYSERVER2', @optname=N'remote proc transaction promotion', @optvalue=N'true'
GO
Stored procedure definition
CREATE PROCEDURE FetchDataFromLinkedServer
AS
BEGIN
EXEC sp_execute_external_script
@language = N'Python',
@script = N'
import pandas as pd
# my_input_data arrives from @input_data_1 as a pandas DataFrame
customer_data = my_input_data
# Whatever is assigned to OutputDataSet is returned to SQL Server as a result set
OutputDataSet = customer_data
',
@input_data_1 = N'SELECT TOP 50 ID, TextToSearch FROM [MyServer2].[dotnetexample].[dbo].[PErformanceVarcharNvarchar]',
@input_data_1_name = N'my_input_data'
WITH RESULT SETS ((ID INT NOT NULL, TextToSearch VARCHAR(200) NOT NULL));
END
We just need to call our stored procedure to obtain the data from the other data source.
EXEC dbo.FetchDataFromLinkedServer
The Intrinsic Value of DevOps for the US Department of Defense
DevOps is defined as the union of people, process, and technology that removes siloed roles (development, IT operations, quality engineering, and security) so that they coordinate and collaborate to produce better, more reliable products. Every major cloud provider, Independent Software Vendor (ISV), and software consultancy has promoted this approach to reduce time to market, eliminate bugs, introduce new features rapidly, implement governance, and streamline the software development lifecycle. Microsoft has outlined the benefits of DevOps in the following online post: "By adopting a DevOps culture along with DevOps practices and tools, teams gain the ability to better respond to customer needs, increase confidence in the applications they build, and achieve business goals faster."
The challenge of cultivating a DevOps culture is that it requires deep changes in the way people work and collaborate. The bureaucratic nature, culture, and traditional software process management of the Department of Defense (DoD) can be an inhibitor to adoption. There are encouraging signs, however, as the various branches of the DoD are starting to introduce DevOps as a path forward, not only with the adoption of cloud but also for traditional on-premises programs. Programs such as Iron Bank (dso.mil) and DSOP (af.mil) are positive signs that the DoD is moving away from the traditional Waterfall process, but the change is not reflected in every program. The adoption and implementation of DevOps practices are ever more crucial as we are currently at the intersection of technology and geopolitical world events.
In the last three years we have witnessed the conflict in Ukraine advance technical capabilities, using automation and computer engineering to execute military objectives. The very nature of conventional war has changed, and a term has emerged, the "transparent battlefield," where drones play a greater role in providing real-time intelligence updates as well as first-strike capabilities.
At the start of the war, various weapon systems were introduced and heralded as "game changing," with measurable impact in shaping the battlefield. Take the example of the modern rocket systems provided to Ukraine. Their introduction had an immediate impact on the battlefield: Russian air defense missile batteries and radar systems could not properly intercept incoming attacks against forward operating bases or military positions. Initial indicators suggested that Russian systems did not recognize the signatures of the incoming rocket attacks. Soon after, there was a slow but steady degradation in the effectiveness of Ukrainian attacks, and interception of launches became more common. What changed? DevOps. The Russians learned the common technical signs of an incoming strike, developed patches for their air defense systems, evaluated and assessed the code, and deployed the necessary patches to their systems. They were able to field an effective countermeasure during an active conflict, performing out-of-band updates across distributed systems within Ukraine and Russia to mitigate Ukrainian attacks. The Ukrainians are doing the same and paving the way in this methodology. This is the ethos of DevOps. When we think of DevOps, we tend to visualize a developer deploying code to a system on-premises or in the cloud, but the same technical methodology can be implemented to support future war-fighter efforts and advanced weapon systems.
As the DoD starts to introduce more complex systems, including in space, the need for continuous system updates over low-bandwidth communication takes on new significance. A streamlined process for continuous code improvement, resiliency, and self-healing software on an active battlefield will need to be accounted for by military planners in support of the mission. Engineering rapid recovery at the macro and micro level, so that systems fail fast and heal quickly, is essential in software design, delivery, and long-term sustainment. DevOps can also be a force multiplier: it can optimize total cost of ownership, lower risk, introduce capability enhancements faster, shorten lead times, and increase returns. Although the DoD budget presently represents a significant percentage of the US federal budget, leaders may face additional pressure, now and in the near future, to look at ways to streamline their software development and sustainment processes. This makes it imperative that organizations within the DoD begin to train staff and implement DevOps as part of their overall strategic plan. The traditional Waterfall model, change management, and promotion of code through various environments before it goes into production will need to go through a radical change across the department.
Recommended Readings and Videos
Recoding America: Why Government Is Failing in the Digital Age and How We Can Do Better
by Jennifer Pahlka
The DevOps Handbook, Second Edition: How to Create World-Class Agility, Reliability, & Security in Technology Organizations
by Gene Kim
The Phoenix Project: A Novel about IT, DevOps, and Helping Your Business Win 5th Anniversary Edition
by Gene Kim, Kevin Behr, George Spafford, and Chris Ruen.
War and Peace and DevOps – Mark Schwartz
https://youtu.be/2BM0xYfcexY
What the Military Taught Me about DevOps – Chris Short
https://youtu.be/TIE1rKkJWyY
Acknowledgements
I would like to thank Chris Ayers and Erik Munson for reviewing and providing edits in the formulation of this article.
Lesson Learned #472: Why It's Important to Add the TCP Protocol When Connecting to Azure SQL Database
In certain service requests, our customers encounter the following error while connecting to the database, similar to this one: "Connection failed: ('08001', '[08001] [Microsoft][ODBC Driver 17 for SQL Server]Named Pipes Provider: Could not open a connection to SQL Server [65]. (65) (SQLDriverConnect); [08001] [Microsoft][ODBC Driver 17 for SQL Server]Login timeout expired (0); [08001] [Microsoft][ODBC Driver 17 for SQL Server]A network-related or instance-specific error has occurred while establishing a connection to SQL Server. Server is not found or not accessible. Check if the instance name is correct and if SQL Server is configured to allow remote connections. For more information see SQL Server Books Online. (65)'". I would like to give some insights about this.
The crucial point to mention is that Azure SQL Database only responds to TCP, and any attempt to use Named Pipes will result in an error.
1. Understanding the Error Message:
The error message encountered by our customer is typically associated with attempts to connect using the Named Pipes protocol, which Azure SQL Database does not support. It signifies a network-related or instance-specific error in establishing a connection to SQL Server, often caused by incorrect protocol usage.
2. Azure SQL Database’s Protocol Support:
Azure SQL Database is designed to work exclusively with the TCP protocol for network communication. TCP is a reliable, standard network protocol that ensures the orderly and error-checked transmission of data between the server and client.
3. Why Specify TCP in Connection Strings:
Specifying “TCP:” in the server name within your connection strings ensures that the client application directly attempts to use the TCP protocol. This bypasses any default attempts to use Named Pipes, leading to a more straightforward and faster connection process.
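As a hedged illustration (the server name, database, and credentials below are placeholders), a pyodbc connection string that forces the TCP protocol looks like this:
import pyodbc

# "tcp:" before the server name ensures the TCP protocol is used from the outset.
conn_str = (
    "Driver={ODBC Driver 17 for SQL Server};"
    "Server=tcp:yourserver.database.windows.net,1433;"
    "Database=yourdatabase;"
    "Uid=youruser;Pwd=yourpassword;"
    "Encrypt=yes;TrustServerCertificate=no;Connection Timeout=30;"
)
conn = pyodbc.connect(conn_str)
print(conn.cursor().execute("SELECT @@VERSION").fetchone()[0])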
4. Error Diagnosis and Efficiency:
By using TCP, any connectivity issues encountered will return errors specific to the TCP protocol, making diagnosis more straightforward. This direct approach eliminates the time spent on protocol negotiation and reduces the time to connect.
5. Recommendations for Azure SQL Database Connectivity:
Always use TCP in your connection strings when connecting to Azure SQL Database.
Ensure that your client and network configuration are optimized for TCP/IP connectivity.
Regularly update your ODBC drivers and client software to the latest versions to benefit from improved performance and security features.
6. Prioritizing TCP to Avoid Unnecessary Delays in Connectivity:
An important aspect to consider in database connectivity is the order in which different protocols are attempted by the client or application. Depending on the configuration, the client may try to connect using Named Pipes before or after TCP in the event of a connectivity issue. This can lead to unnecessary delays in the validation process.
When Named Pipes is attempted first and fails (as it is unsupported in Azure SQL Database), the client then falls back to TCP, thereby wasting valuable time. This scenario is particularly common when default settings are left unchanged in client applications or drivers.
To mitigate this, it is strongly recommended to explicitly use “TCP:” in the server name within your connection strings. This directive ensures that the TCP protocol is prioritized from the outset, facilitating a more direct and efficient connection attempt.
By doing so, not only do we avoid the overhead of an unsuccessful attempt with Named Pipes, but we also gain clarity in error reporting. If a connectivity issue arises, the error returned will be specific to TCP, allowing for a more accurate diagnosis and faster resolution.
Additionally, this approach can significantly reduce the time taken to establish a connection. In high-performance environments or situations where rapid scaling is required, this efficiency can have a substantial impact on overall system responsiveness and resource utilization.
In summary, explicitly specifying the TCP protocol in your connection strings is a best practice for Azure SQL Database connectivity. It ensures a more streamlined connection process, clearer error diagnostics, and can contribute to overall system efficiency.
Enjoy!