Partner Blog | Accelerating our collective progress with women-led innovation.
Women are a powerful force, and the businesses they lead contribute to a more inclusive world. This International Women’s Day and Women’s History Month in the United States, we celebrate the strides the world is making towards gender equality, while acknowledging there is much more work to be done.
The United Nations recognizes that gender equality is a fundamental human right—the basis for a peaceful, prosperous, and sustainable world. According to the Global Gender Gap Report 2023, women’s global economic participation and representation in STEM (science, technology, engineering, and mathematics) and in senior leadership positions are declining. Closing the global gender gap is critical to sustainable economic growth, increasing access to new markets and opportunities for more people. When we invest in women, we can accelerate progress.
Microsoft Tech Community – Latest Blogs –Read More
Securing the Clouds: Achieving a Unified Security Stance and threat-based approach to Use Cases
Note: this is the second of a four-part blog series that explores the complexities of securing multiple clouds and the limitations of traditional Security Information and Event Management (SIEM) tools.
In the first post, we discussed the importance of adopting a multi-cloud approach to observability: centralizing in a single SIEM all the events generated by your infrastructure enables a more comprehensive analysis of potential security incidents, because events can be correlated independently of their origin. We also hinted at the complexity of such an endeavor.
You can read the first post here: Securing the Clouds: Navigating Multi-Cloud Security with Advanced SIEM Strategies – Microsoft Community Hub
In this post, we focus on a different topic: the importance of adopting a threat-based approach. Along the way, we discuss how this can be achieved and provide a few practical ideas you can apply to your own scenarios.
The Threat-Based Approach
The threat-based approach to creating use cases consists of identifying potential attacks on the system, considering each cloud environment, the on-prem environment, and then how they interrelate and interact. From there you derive attack use cases, which drive the definition of the logic to detect those attacks and then trigger remediation activities. These potential attacks are also known as threats or, more precisely, as threat events.
The threat-based approach is not the only possibility. In fact, the most common approaches are vulnerability-based: the focus is on identifying vulnerabilities, like the infamous Log4Shell, and consequently on indicators that may reveal attacks in progress.
Vulnerability-Based Approaches and How They Compare
Vulnerability-driven approaches have various shortcomings. For instance, detection capabilities tend to grow in response to known vulnerabilities and as a reaction to successful attacks; in other words, they are designed to prevent known attacks from recurring. Conversely, threat-driven approaches are based on an understanding of how attacks may happen and are designed to detect them independently of the presence of specific vulnerabilities. We have chosen a threat-driven approach to improve detection and response capabilities and to reduce the impact of security incidents.
Another advantage of the threat-driven approach is that it is often independent of the existence of specific vulnerabilities. For example, you can analyze the possibility of an attacker injecting code into their requests and leveraging some vulnerability to execute that code, without referring to any specific vulnerability. This allows you to design detection mechanisms that are vulnerability-independent and therefore apply to many similar vulnerabilities, including yet-undisclosed zero-day vulnerabilities.
A threat-driven approach is a proactive and strategic way of analyzing a complex infrastructure to identify potential attacks. It is more effective than the corresponding vulnerability-based analysis because it takes into account the exploitability of the vulnerabilities. In other words, you determine what a malicious actor can actually do to compromise your system, which allows you to discard vulnerabilities that do not matter because they are hardly exploitable.
How to Apply the Threat-Based Approach
In adopting a threat-driven approach, we started with the following steps:
Analyze the organization’s threat landscape, focusing on factors like geographical location, industry, externally exposed services, and potentially much more.
Leverage the organization’s threat intelligence to gain broader understanding of the threat landscape, and the specific threats and attacks that target them.
Identify and prioritize the most impactful threats and attack vectors that target the organization’s assets, operations, and objectives, using your telemetry.
Assess and understand the capabilities, tactics, techniques, and procedures of the threat actors and their motivations and goals.
Monitor and evaluate the effectiveness and performance of the security controls and countermeasures and adjust them over time as the threat landscape evolves.
To apply a threat-driven approach, organizations need to incorporate threat analysis and threat intelligence across their systems development and operational processes.
A Practical Example of Threat-Based Approach
Now that we have understood what the threat-driven approach is, it is time to get a few ideas about how you can implement it in your organization.
Consider, as an example, a multi-cloud organization. It is not uncommon for such organizations to adopt identity and security solutions from various vendors. This complicates integration and raises the risk of supply chain attacks, due to the lack of comprehensive visibility and the increased complexity. To address this situation, it is best to adopt a structured approach based on the following steps:
Get a good understanding of the various environments, focusing your attention on their components and on how they interact with each other.
Perform a risk analysis on those environments to identify threats and the monitoring capabilities included with each service, and to identify any gaps to be covered with additional events. This can be done by applying some lightweight threat modeling.
Evaluate the typical attack scenarios seen by the organization towards the systems in scope and ensure that they are represented within the threat analysis. This may include an analysis of the specific threat landscape for the organization.
The “threat modeling” specified in the second point is a security process to understand security threats to a system, determine risks from those threats, and establish appropriate mitigations. There are various ways to perform threat modeling, all of them well represented by the Threat Modeling Manifesto. Microsoft developed one of the first threat modeling processes, called Microsoft STRIDE Threat Modeling, and the approach continues to evolve. If you want to learn more, please go to Microsoft Security Development Lifecycle Threat Modelling.
Creating the Use Cases
The three steps above identify the threats to the system. That represents the first phase of our journey; the next consists of defining the Use Cases. You will often have a Use Case for each threat, but this is not an absolute rule: in various situations you will want to cover several threats with a single Use Case. In any case, each Use Case describes the associated threats as a single story and identifies the events necessary to detect such attacks and where they can be found. The Use Case typically also contains the definition of the actions that can be performed to control the risk, and the conditions under which they are triggered. Those actions can be both manual and automated.
Responding to Attacks
It is very important to define automated activities that are executed when a potential attack is detected. This allows you to respond to attacks faster than you could by relying on manual response alone.
Time is a critical factor in the realm of cybersecurity. Let’s delve into why swift responses matter:
When a cyberattack occurs, a rapid response allows organizations to identify and contain the breach promptly. By doing so, they prevent the attack from spreading or escalating into a larger incident. This quick action minimizes the damage inflicted on systems, data, and operations.
Fast response enables organizations to restore normal operations swiftly. By minimizing downtime, they mitigate financial losses and maintain business continuity.
The cybersecurity landscape involves a constant race between defenders and attackers. Cybercriminals leverage evolving tools, tactics, and procedures, including zero-day exploits. While it takes an independent cybercriminal around 9.5 hours to gain illicit access to a target, defenders must act even faster to thwart such attempts. See The Importance Of Time And Speed In Cybersecurity (forbes.com).
While automation plays a major role in reducing the time required to respond to attacks, manual remediation activities are still essential. The problem is that automated actions cannot be too drastic: it’s fine to block an IP address or disable an account that appears to be attacking the organization, but you will not want an automated procedure that takes down the whole infrastructure, even if you detect a possible data exfiltration.
Ultimately, the production of the Use Cases is instrumental in creating the rules in your SIEM system to detect attacks, and in configuring your SOAR system to respond to them automatically.
Conclusions
Complex Multi-Cloud environments represent a significant challenge when you must create a monitoring infrastructure. Yet, achieving a comprehensive view of what happens in your organization is more important than ever. A structured approach like the Threat-Based Approach described in this post may help you to conquer complexity and get the results your organization needs.
Nevertheless, implementing a Threat-Based Approach is not the end. Organizations face new attacks every day. New software is acquired, extending the organization’s attack surface. And new vulnerabilities are regularly found. For these reasons, it is essential to adopt a continuous improvement approach: the threat assessment must be regularly repeated, and new Use Cases must be created, leading to updates to the existing rules in your SIEM and SOAR. If you do so, your organization will gradually but surely improve its security posture, and your monitoring infrastructure will eventually become one of the main tools guaranteeing your business’ security.
Future posts in this series will cover the following topics:
How Microsoft has implemented its security solutions across Azure, Oracle, AWS, and on-premises environments, enabling a unified and comprehensive defense against threats for any enterprise.
Key benefits and outcome examples for some of our multi-cloud security projects, including improved detection capabilities, enhanced visibility across enterprise, efficiency, and cost savings.
Frequently Asked Questions about TLS and Cipher Suite configuration
Disclaimer: Microsoft does not endorse the products listed in this article. They are provided for informational purposes and their listing does not constitute an endorsement. We do not guarantee the quality, safety, or effectiveness of listed products and disclaim liability for any related issues. Users should exercise their own judgment, conduct research, and seek professional advice before purchasing or using any listed products.
Disclaimer: This article contains content generated by Microsoft Copilot.
What versions of Windows support TLS 1.3?
TLS 1.3 is supported by default starting with Windows Server 2022 and Windows 11. The protocol is not available in down-level OS versions.
What Linux distros will not support TLS 1.3?
Most modern Linux distributions have support for TLS 1.3. TLS 1.3 is a significant improvement in security and performance over earlier versions of TLS, and it’s widely adopted in modern web servers and clients. However, the specific versions of Linux and software components that support TLS 1.3 can vary, and it’s essential to keep your software up-to-date to benefit from the latest security features.
To ensure TLS 1.3 support, consider the following factors:
**Linux Kernel:** Most modern Linux kernels have support for TLS 1.3. Kernel support is essential for low-level network encryption. Ensure that your Linux distribution is running a reasonably recent kernel.
**OpenSSL or OpenSSL-Compatible Libraries:** TLS 1.3 support is primarily dependent on the version of OpenSSL or other TLS libraries in use. OpenSSL 1.1.1 and later versions generally provide support for TLS 1.3. However, the specific version available may depend on your Linux distribution and the software you’re using.
**Web Servers and Applications:** The web servers and applications you run on your Linux system need to be configured to enable TLS 1.3. Popular web servers like Apache, Nginx, and others have been updated to support TLS 1.3 in newer versions. Ensure that you are using an updated version of your web server software and have TLS 1.3 enabled in its configuration.
**Client Software:** If you are using Linux as a client to connect to servers over TLS, your client software (e.g., web browsers, email clients) should support TLS 1.3. Most modern web browsers and email clients on Linux have added support for TLS 1.3.
**Distribution Updates:** Regularly update your Linux distribution to receive security updates and new software versions, including those with TLS 1.3 support. Each Linux distribution may have different release schedules and package versions.
Since the state of software support can change over time, it’s crucial to check the specific versions and configurations of the software components you are using on your Linux system to determine their TLS 1.3 compatibility. Generally, using up-to-date software and keeping your Linux system patched with the latest security updates will ensure that you have the best support for TLS 1.3 and other security features.
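Part of this check can be scripted. The following is a minimal sketch that compares an OpenSSL version string against 1.1.1, the first release with TLS 1.3 support; the `ver` value is a placeholder to substitute with the output of `openssl version`:

```shell
# Decide whether an OpenSSL version string is at least 1.1.1 (first TLS 1.3 release).
# The value below is an example; on a real system use: openssl version | awk '{print $2}'
ver="1.1.1k"
min="1.1.1"
# sort -V orders version strings; if the smaller of the two is $min, then $ver >= $min.
if [ "$(printf '%s\n' "$min" "$ver" | sort -V | head -n1)" = "$min" ]; then
  result="TLS 1.3 capable"
else
  result="upgrade OpenSSL for TLS 1.3"
fi
echo "$result"   # prints: TLS 1.3 capable
```

The same comparison works for any package whose minimum TLS 1.3 version you know (e.g., Nginx 1.13.0, Apache 2.4.36).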
How do I remove my dependency on legacy TLS encryption?
At a high level, resolving legacy TLS encryption issues requires understanding your TLS 1.0 and TLS 1.1 dependencies, upgrading to OS versions capable of TLS 1.2 or later, updating applications, and testing.
Given the length of time TLS 1.0 has been supported by the software industry, it is highly recommended that any TLS 1.0 deprecation plan include the following:
Code analysis to find/fix hardcoded instances of TLS 1.0 or older security protocols.
Network endpoint scanning and traffic analysis to identify operating systems using TLS 1.0 or older protocols.
Full regression testing through your entire application stack with TLS 1.0 disabled.
Migration of legacy operating systems and development libraries/frameworks to versions capable of negotiating TLS 1.2 by default.
Compatibility testing across operating systems used by your business to identify any TLS 1.2 support issues.
Coordination with your own business partners and customers to notify them of your move to deprecate TLS 1.0.
Understanding which clients may no longer be able to connect to your servers once TLS 1.0 is disabled.
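As an illustration of the code-analysis step above, a recursive grep over a source tree can surface hardcoded legacy protocol names. The directory and config files in this sketch are fabricated purely for the demonstration:

```shell
# Sketch of the code-analysis step: scan a (made-up) source tree for hardcoded
# legacy protocol names while leaving TLS 1.2+ references alone.
mkdir -p /tmp/tls-audit/src
printf 'protocol = "TLSv1.0"\n' > /tmp/tls-audit/src/app.cfg   # legacy setting
printf 'protocol = "TLSv1.2"\n' > /tmp/tls-audit/src/new.cfg   # compliant setting
# Match SSLv2/SSLv3 and TLS 1.0/1.1; TLSv1.2 and TLSv1.3 do not match this pattern.
hits=$(grep -rnE 'TLSv1\.[01]|SSLv[23]' /tmp/tls-audit)
echo "$hits"   # only app.cfg should be reported
```

In a real audit you would extend the pattern to cover the constants used by your languages and frameworks (e.g., `SSLv3_method`, `TLS1_VERSION`).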
How do I configure protocols and cipher suites for Apache?
Configuring cipher suites and protocols for the Apache web server involves modifying the server’s SSL/TLS settings in its configuration file. This process can help you enhance the security and compatibility of your web server. Here are the steps to configure cipher suites and protocols for Apache:
**Backup Configuration Files:**
Before making any changes, it’s essential to create backups of your Apache configuration files to ensure you can revert if something goes wrong. Common configuration files include `httpd.conf` or `apache2.conf`, and the SSL/TLS configuration file, often named something like `ssl.conf`.
**Edit SSL/TLS Configuration:**
Open the SSL/TLS configuration file for your Apache server using a text editor. The location of this file can vary depending on your Linux distribution and Apache version. Common locations include `/etc/httpd/conf.d/ssl.conf`, `/etc/apache2/sites-available/default-ssl.conf`, or similar. You may need root or superuser privileges to edit this file.
Example command to open the file in a text editor:
```
sudo nano /etc/httpd/conf.d/ssl.conf
```
**Specify Protocol Versions:**
To configure the allowed SSL/TLS protocols, you can use the `SSLProtocol` directive. For example, to allow only TLS 1.2 and TLS 1.3, you can add the following line to your configuration:
```
SSLProtocol -all +TLSv1.2 +TLSv1.3
```
This configuration disables all older protocols (SSLv2, SSLv3, TLS 1.0, and TLS 1.1) and enables only TLS 1.2 and TLS 1.3.
**Specify Cipher Suites:**
To configure the allowed cipher suites, use the `SSLCipherSuite` directive. You can specify a list of cipher suites that you want to enable. Ensure that you use secure and modern cipher suites. For example:
```
SSLCipherSuite TLS_AES_256_GCM_SHA384:TLS_AES_128_GCM_SHA256:ECDHE-ECDSA-AES256-GCM-SHA384:ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES256-GCM-SHA384:ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES256-SHA384:ECDHE-ECDSA-AES128-SHA256:ECDHE-RSA-AES256-SHA384:ECDHE-RSA-AES128-SHA256
```
This example includes cipher suites that offer strong security and forward secrecy.
**Save and Close the Configuration File**
Save your changes and exit the text editor.
**Test Configuration**
Before you restart Apache, it’s a good practice to test your configuration for syntax errors. You can use the following command:
```
apachectl configtest
```
If you receive a “Syntax OK” message, your configuration is valid.
**Restart Apache:**
Finally, restart the Apache web server to apply the changes:
```
sudo systemctl restart apache2 # On systemd-based systems
```
```
sudo service apache2 restart # On non-systemd systems
```
Your Apache web server should now be configured to use the specified SSL/TLS protocols and cipher suites. Remember that keeping your SSL/TLS configuration up to date and secure is crucial for the overall security of your web server. Be sure to monitor security advisories and best practices for SSL/TLS configuration regularly.
How do I configure protocols and cipher suites for nginx?
To configure cipher suites and protocols for the Nginx web server, you’ll need to modify its SSL/TLS settings in the server block configuration. This process allows you to enhance the security and compatibility of your web server. Here are the steps to configure cipher suites and protocols for Nginx:
**Backup Configuration Files:**
Before making any changes, create backups of your Nginx configuration files to ensure you can revert if needed. Common configuration files include `nginx.conf`, `sites-available/default`, or a custom server block file.
**Edit the Nginx Configuration File:**
Open the Nginx configuration file in a text editor. The location of the main configuration file varies depending on your Linux distribution and Nginx version. Common locations include `/etc/nginx/nginx.conf`, `/etc/nginx/sites-available/default`, or a custom configuration file within `/etc/nginx/conf.d/`.
Example command to open the file in a text editor:
```bash
sudo nano /etc/nginx/nginx.conf
```
**Specify Protocol Versions:**
To configure the allowed SSL/TLS protocols, you can use the `ssl_protocols` directive in your `server` block or `http` block. For example, to allow only TLS 1.2 and TLS 1.3, add the following line:
```nginx
ssl_protocols TLSv1.2 TLSv1.3;
```
This configuration enables only TLS 1.2 and TLS 1.3; SSLv3 and the older TLS versions are no longer offered.
**Specify Cipher Suites:**
To configure the allowed cipher suites, use the `ssl_ciphers` directive. Specify a list of cipher suites that you want to enable. Ensure that you use secure and modern cipher suites. For example:
```nginx
ssl_ciphers 'TLS_AES_256_GCM_SHA384:TLS_AES_128_GCM_SHA256:ECDHE-ECDSA-AES256-GCM-SHA384:ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES256-GCM-SHA384:ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES256-SHA384:ECDHE-ECDSA-AES128-SHA256:ECDHE-RSA-AES256-SHA384:ECDHE-RSA-AES128-SHA256';
```
This example includes cipher suites that offer strong security and forward secrecy. Note that in OpenSSL-based builds of Nginx, the TLS 1.3 suites (the `TLS_*` names) are enabled by default and are not controlled by `ssl_ciphers`; listing them here is harmless but has no effect.
**Save and Close the Configuration File:**
Save your changes and exit the text editor.
**Test Configuration:**
Before you reload Nginx to apply the changes, test your configuration for syntax errors:
```bash
sudo nginx -t
```
If you receive a “syntax is okay” message, your configuration is valid.
**Reload Nginx:**
Finally, reload Nginx to apply the new SSL/TLS settings:
```bash
sudo systemctl reload nginx # On systemd-based systems
```
```bash
sudo service nginx reload # On non-systemd systems
```
Your Nginx web server should now be configured to use the specified SSL/TLS protocols and cipher suites. Ensure that you stay updated with best practices and security advisories for SSL/TLS configurations to maintain the security of your web server.
What open-source tools can be used to test client connections?
There are several open-source tools available to test client connections for TLS (Transport Layer Security) connections, either for troubleshooting or security auditing purposes. Here are some popular ones:
Nmap
Nmap, a powerful network scanning tool, can be used to test TLS/SSL configurations and identify supported cipher suites on a server. Here are a couple of ways you can utilize Nmap for testing TLS client connections:
Using the ssl-enum-ciphers Script:
Nmap includes a script called ssl-enum-ciphers, which assesses the cipher suites supported by a server and rates them based on cryptographic strength.
It performs multiple connections using SSLv3, TLS 1.1, and TLS 1.2.
To check the supported ciphers on a specific server (e.g., Bing), run the following command:
nmap --script ssl-enum-ciphers -p 443 www.bing.com
The output will provide information about the supported ciphers and their strengths.
Checking for Weak Ciphers:
If you specifically want to identify weak ciphers, you can use the following command:
nmap --script ssl-enum-ciphers -p 443 yoursite.com | grep weak
This command will highlight any weak ciphers detected during the scan.
Remember that Nmap is a versatile tool, and its ssl-enum-ciphers script can help you assess the security of your TLS connections.
SSLyze
SSLyze is a powerful Python tool designed to analyze the SSL configuration of a server by connecting to it. It helps organizations and testers identify misconfigurations affecting their SSL servers. Here’s how you can use SSLyze to assess TLS connections:
Basic Scan with sslyze:
To perform a basic scan of a website’s HTTPS configuration, run the following command, replacing example.com with the domain you want to scan:
sslyze --regular example.com
This command will display information about the protocol version, cipher suites, certificate chain, and more.
Specific Scan Commands:
You can use various scan commands to test specific aspects of TLS connections:
--sslv3: Test for SSL 3.0 support.
--tlsv1: Test for TLS 1.0 support.
--early_data: Test for TLS 1.3 early data support.
--sslv2: Test for SSL 2.0 support.
Online SSL Scan:
If you prefer an online approach, you can use SSLyze to test any SSL/TLS-enabled service on any port. It checks for weak ciphers and known cryptographic vulnerabilities (such as Heartbleed).
Remember to adjust the scan parameters based on your specific requirements.
testssl.sh
testssl.sh is a powerful open-source command-line tool that allows you to check TLS/SSL encryption on various services. Here are some features and instructions for using it:
Installation:
You can install testssl.sh by cloning its Git repository:
git clone --depth 1 https://github.com/drwetter/testssl.sh.git
cd testssl.sh
Make sure you have bash (usually preinstalled on most Linux distributions) and a newer version of OpenSSL (1.1.1 recommended) for effective usage.
Basic Usage:
To test a website’s HTTPS configuration, simply run:
./testssl.sh https://www.bing.com/
To test STARTTLS-enabled protocols (e.g., SMTP, FTP, IMAP, etc.), use the -t option:
./testssl.sh -t smtp bing.com:25
Additional Options:
Parallel Testing:
By default, mass tests are done in serial mode. To enable parallel testing, use the --parallel flag:
./testssl.sh --parallel
Custom OpenSSL Path:
If you want to use an alternative OpenSSL program, specify its path using the --openssl flag:
./testssl.sh --parallel --sneaky --openssl /path/to/your/openssl
Logging:
To keep logs for later analysis, use the --log (store the log file in the current directory) or --logfile (specify the log file location) options:
./testssl.sh --parallel --sneaky --log
Disable DNS Lookup:
To speed up tests, disable DNS lookup using the -n flag:
./testssl.sh -n --parallel --sneaky --log
Single Checks:
You can run specific checks for protocols, server defaults, headers, vulnerabilities, and more. For example:
To check each local cipher remotely, use the -e flag.
To omit some checks and make the test faster, include the --fast flag.
To test TLS/SSL protocols (including SPDY/HTTP2), use the -p option.
To view the server’s default picks and certificate, use the -S option.
To see the server’s preferred protocol and cipher, use the -P flag.
Remember that testssl.sh provides comprehensive testing capabilities, including support for mass testing and logging.
TLS-Attacker
TLS-Attacker is a powerful Java-based framework designed for analyzing TLS libraries. It serves as both a manual testing tool for TLS clients and servers and a software library for more advanced tools. Here’s how you can use it:
Compilation and Installation:
To get started, ensure you have Java and Maven installed. On Ubuntu, you can install Maven using:
sudo apt-get install maven
TLS-Attacker currently requires Java JDK 11 to run. Once you have the correct Java version, clone the TLS-Attacker repository:
git clone https://github.com/tls-attacker/TLS-Attacker.git
cd TLS-Attacker
mvn clean install
The resulting JAR files will be placed in the “apps” folder. If you want to use TLS-Attacker as a dependency, include it in your pom.xml like this:
<dependency>
    <groupId>de.rub.nds.tls.attacker</groupId>
    <artifactId>tls-attacker</artifactId>
    <version>5.2.1</version>
    <type>pom</type>
</dependency>
Running TLS-Attacker:
You can run TLS-Attacker as a client or server:
As a client:
cd apps
java -jar TLS-Client.jar -connect [host:port]
As a server:
java -jar TLS-Server.jar -port [port]
TLS-Attacker also ships with example attacks on TLS, demonstrating how easy it is to implement attacks using the framework:
java -jar Attacks.jar [Attack] -connect [host:port]
Although the example applications are powerful, TLS-Attacker truly shines when used as a programming library.
Customization and Testing:
You can define custom TLS protocol flows and test them against your TLS library.
TLS-Attacker allows you to send arbitrary protocol messages in any order to the TLS peer and modify them using a provided interface.
Remember that TLS-Attacker is primarily a research tool intended for TLS developers and pentesters. It doesn’t have a GUI or green/red lights—just raw power for analyzing TLS connections!
ssldump
ssldump is a versatile SSL/TLS network protocol analyzer that can help you examine, decrypt, and decode SSL-encrypted packet streams. Here’s how you can use it for testing TLS connections:
Capture the Target Traffic:
First, capture a packet trace containing the SSL traffic you want to examine. You can use the tcpdump utility to capture the traffic.
To write the captured packets to a file for examination with ssldump, use the -w option followed by the name of the file where the data should be stored.
Specify the interface or VLAN from which traffic is to be captured using the -i option.
Use appropriate tcpdump filters to include only the traffic you want to examine.
Examine the SSL Handshake and Record Messages:
When you run ssldump on the captured data, it identifies TCP connections and interprets them as SSL/TLS traffic.
It decodes SSL/TLS records and displays them in text format.
You’ll see details about the SSL handshake, including the key exchange.
Example commands (capture with tcpdump, then examine the file with ssldump):
tcpdump -i en0 -w captured_traffic.pcap port 443
ssldump -r captured_traffic.pcap
Decrypt Application Data (If Possible):
If you have the private key used to encrypt the connections, ssldump may also decrypt the connections and display the application data traffic.
Keep in mind that ssldump cannot decrypt traffic for which the handshake (including the key exchange) was not seen during the capture.
Remember to follow best practices when capturing SSL conversations for examination. For more information, refer to the official documentation.
sslscan
sslscan is a handy open-source tool that tests SSL/TLS-enabled services to discover supported cipher suites. It’s particularly useful for determining whether your configuration has enabled or disabled specific ciphers or TLS versions. Here’s how you can use it:
Installation:
If you’re using Ubuntu, you can install sslscan using the following command:
sudo apt-get install sslscan
Basic Usage:
To scan a server and list the supported algorithms and protocols, simply point sslscan at the server you want to test. For example:
sslscan example.com
The output will highlight various aspects, including SSLv2 and SSLv3 ciphers, CBC ciphers on SSLv3 (to detect POODLE vulnerability), 3DES and RC4 ciphers, and more.
Additional Options:
You can customize the scan by using various options:
--targets=<file>: Specify a file containing a list of hosts to check.
--show-certificate: Display certificate information.
--failed: Show rejected ciphers.
Remember that sslscan provides valuable insights into your SSL/TLS configuration.
curl
You can use curl to test TLS connections. Here are some useful commands and tips:
Testing Different TLS Versions:
To test different TLS versions, you can use the following options with curl:
--tlsv1.0: Test TLS 1.0
--tlsv1.1: Test TLS 1.1
--tlsv1.2: Test TLS 1.2
--tlsv1.3: Test TLS 1.3
For example, to test TLS 1.2, use:
curl --tlsv1.2 https://example.com
Replace example.com with the URL you want to test. Note that these flags set the minimum accepted TLS version; to test one specific version, combine them with --tls-max (for example, --tlsv1.2 --tls-max 1.2).
Debugging SSL Handshake:
While curl can provide some information, openssl is a better tool for checking and debugging SSL.
To troubleshoot client certificate negotiation, use:
openssl s_client -connect www.example.com:443 -prexit
This command will show acceptable client certificate CA names and a list of CA certificates from the server.
Checking Certificate Information:
To see certificate information, use:
curl -iv https://example.com
However, for detailed TLS handshake troubleshooting, prefer openssl s_client instead of curl. Use options like -msg, -debug, and -status for more insights.
Remember that curl can be handy for quick checks, but for in-depth analysis, openssl provides more comprehensive details about SSL/TLS connections.
OpenSSL
OpenSSL is a versatile tool that allows you to test and verify TLS/SSL connections. Here are some useful commands and examples:
Testing TLS Versions:
To specify the TLS version for testing, use the appropriate flag with openssl s_client. For instance:
To test TLS 1.3, run:
openssl s_client -connect example.com:443 -tls1_3
Other supported SSL and TLS version flags include -tls1_2, -tls1_1, -tls1, -ssl2, and -ssl3.
Checking Certificate Information:
To see detailed certificate information, use:
openssl s_client -connect your.domain.io:443
For more in-depth analysis, consider using openssl instead of curl. Options like -msg, -debug, and -status provide additional insights.
Upgrading Plain Text Connections:
You can upgrade a plain text connection to an encrypted (TLS or SSL) connection using the -starttls option. For example:
openssl s_client -connect mail.example.com:25 -starttls smtp
This command checks and verifies secure connections, making it a valuable diagnostic tool for SSL servers.
Remember, openssl s_client is your go-to for testing and diagnosing SSL/TLS connections.
Can you use Wireshark to inspect TLS connections?
Yes. Most modern Linux distributions ship with TLS 1.3 support, a significant improvement in security and performance over earlier TLS versions, and Wireshark can capture and inspect TLS traffic regardless of version. Here's how:
Capture the Traffic:
Start Wireshark and select the network interface you want to capture traffic from.
Click the Start button (usually a green shark fin icon) to begin capturing packets.
Browse to a website or perform any action that involves TLS communication (e.g., visiting an HTTPS website).
Filter for TLS Traffic:
In the packet list, you’ll see various packets. To focus on TLS traffic, apply a display filter:
Click on the Display Filter field (located at the top of the Wireshark window).
Type tls or ssl and press Enter.
Wireshark will now display only packets related to TLS/SSL.
Inspect TLS Handshake and Records:
Look for packets with the TLS Handshake Protocol (such as Client Hello, Server Hello, Certificate Exchange, Key Exchange, and Finished messages).
Expand these packets to view details about the handshake process, including supported cipher suites, certificate information, and key exchange.
You can also examine the Application Data packets to see encrypted data being exchanged after the handshake.
Decryption (Optional):
If you have access to the pre-master secret or an RSA private key, you can decrypt the TLS traffic:
Go to Edit → Preferences.
Open the Protocols tree and select TLS.
Configure the (Pre)-Master-Secret log filename or provide the RSA private key.
Wireshark will use this information to decrypt the TLS packets.
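If the client is under your control, Python (3.8+) can write the per-session secrets that Wireshark's decryption setting consumes; a small sketch, with the log path chosen here purely for illustration:

```python
import os
import ssl
import tempfile

# Any TLS connection made through this context will append its session
# secrets to the key log file, in the format Wireshark understands.
keylog_path = os.path.join(tempfile.gettempdir(), "sslkeys.log")

ctx = ssl.create_default_context()
ctx.keylog_filename = keylog_path

# In Wireshark: Edit -> Preferences -> Protocols -> TLS, then set
# "(Pre)-Master-Secret log filename" to keylog_path to decrypt the capture.
```

Browsers such as Firefox and Chrome can produce the same log format via the SSLKEYLOGFILE environment variable, so you can decrypt browser traffic the same way.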
Tool references
gnutls-cli(1) – Linux manual page (man7.org)
Testing TLS/SSL configuration using Nmap – Web Penetration Testing with Kali Linux – Third Edition [Book] (oreilly.com)
Testing SSL ports using nmap and check for weak ciphers | Global Security and Marketing Solutions (gss-portal.com)
How to use sslyze to assess your web server HTTPS TLS? – Full Security Engineer
Overview of packet tracing with the ssldump utility (f5.com)
Curl – Test TLS and HTTP versions – Kerry Cordero
openssl s_client commands and examples – Mister PKI
TLS – Wireshark Wiki
GitHub – drwetter/testssl.sh: Testing TLS/SSL encryption anywhere on any port
GitHub – tls-attacker/TLS-Attacker: TLS-Attacker is a Java-based framework for analyzing TLS libraries. It can be used to manually test TLS clients and servers or as a software library for more advanced tools.
GitHub – rbsec/sslscan: sslscan tests SSL/TLS enabled services to discover supported cipher suites
Other references
Restricting TLS 1.2 Ciphersuites in Windows using PowerShell
Solving the TLS 1.0 Problem, 2nd Edition
Support for legacy TLS protocols and cipher suites in Azure Offerings
Microsoft Tech Community – Latest Blogs –Read More
Optimizing Azure OpenAI: A Guide to Limits, Quotas, and Best Practices
This blog focuses on good practices for monitoring Azure OpenAI limits and quotas. With the growing interest in and application of Generative AI, OpenAI models have emerged as pioneers in this transformative era. To maintain consistent and predictable performance for all users, these models impose certain limits and quotas. For Independent Software Vendors (ISVs) and Digital Natives utilizing these models, understanding these limits and establishing efficient monitoring strategies is paramount to ensuring a good customer experience for the end users of their products and services. This blog seeks to provide a comprehensive understanding of these monitoring strategies, thereby enabling ISVs and Digital Natives to optimally leverage AI technologies for their respective customer bases.
Understanding Limits and Quotas
Azure OpenAI’s quota feature enables assignment of rate limits to your deployments, up to a global limit called your “quota”. Quota is assigned to your subscription on a per-region, per-model basis in units of Tokens-per-Minute (TPM). Your subscription is onboarded with a default quota for most models.
Refer to this document for default TPM values. You can allocate TPM among deployments until reaching quota. If you exceed a model’s TPM limit in a region, you can reassign quota among deployments or request a quota increase. Alternatively, if viable, consider creating a deployment in a new Azure region in the same geography as the existing one.
For example, with a 240,000 TPM quota for GPT-35-Turbo in East US, you could create one deployment of 240K TPM, two of 120K TPM each, or multiple deployments adding up to less than 240K TPM in that region.
TPM rate limits are based on the maximum number of tokens estimated to be processed at the time the request is received. This differs from the token count used for billing, which is computed after all processing is completed. Azure OpenAI calculates a max processed-token count per request using:
– Prompt text and count
– The max_tokens setting
– The best_of setting
This estimated count is added to a running token count of all requests, which resets every minute. A 429 response code is returned once the TPM rate limit is reached within the minute.
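As a rough illustration of how these inputs combine (an assumption for clarity, not the service's exact documented formula), the pre-request estimate can be thought of as:

```python
def estimated_request_tokens(prompt_tokens: int, max_tokens: int, best_of: int = 1) -> int:
    """Approximate the token count charged against the TPM limit when a
    request arrives: the prompt size plus the requested completion budget,
    multiplied by the number of candidate completions (best_of)."""
    return prompt_tokens + max_tokens * best_of

# A 500-token prompt with max_tokens=200 and best_of=2 is estimated at
# 500 + 200 * 2 = 900 tokens, regardless of how many tokens are billed later.
```

The practical takeaway is that lowering max_tokens and best_of directly lowers the amount counted against your TPM limit, even before any tokens are generated.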
A Requests-Per-Minute (RPM) rate limit is also enforced. It is set proportionally to the TPM assignment at a ratio of 6 RPM per 1000 TPM. If requests aren’t evenly distributed over a minute, a 429 response may be received. Azure OpenAI Service evaluates incoming requests’ rate over a short period, typically 1 or 10 seconds, and issues a 429 response if requests surpass the RPM limit. For example, if the service monitors with a 1-second interval, a 600-RPM deployment would be throttled if more than 10 requests are received per second.
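The documented 6-RPM-per-1000-TPM ratio makes the throttling window easy to reason about; a quick sketch:

```python
def rpm_for_tpm(tpm: int) -> int:
    """Requests-per-minute limit derived from a TPM assignment (6 RPM per 1000 TPM)."""
    return tpm * 6 // 1000

def max_requests_per_window(tpm: int, window_seconds: int = 1) -> float:
    """Approximate number of requests allowed inside one evaluation window."""
    return rpm_for_tpm(tpm) * window_seconds / 60

# A 100,000-TPM deployment gets 600 RPM, i.e. roughly 10 requests per second
# when the service evaluates over a 1-second window.
```

Use this kind of back-of-the-envelope math to size client-side request pacing before you ever see a 429.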
In addition to the standard quota, there is also a provisioned throughput capability, or PTU. It is useful to think of the standard quota as a serverless mode, where your requests are served from a pool of shared resources and no capacity is reserved for you, so overall latency can vary. In contrast, with provisioned throughput, you specify the amount of throughput you require for your application. The service then provisions the necessary compute and ensures it is ready for you. This gives you more predictable performance and a stable max latency. For high-throughput workloads, this may provide cost savings versus token-based consumption. At the time of writing, provisioned throughput units are not available by default. For more details, contact your Microsoft account team.
There is also a limit of 30 Azure OpenAI resource instances per region. For an exhaustive and up-to-date list of quotas and limits please check this document. It is important to plan ahead on how you will manage and segregate tenant data and traffic in order to ensure reliable performance and optimal costs. Please check the Azure Open AI service specific guidance for considerations and strategies pertinent to multitenant solutions.
Choosing between tokens-per-minute and provisioned throughput models
To choose effectively between TPM and PTU you need to understand that there are minimum PTUs per deployment required. If your current usage is above the requirement and expected to grow, it might be more economically feasible to purchase provisioned capacity. In high token usage scenarios, this provides a lower per token price and stable max latency. It is important to understand that with PTUs, you are isolated and protected from the noisy neighbor problem of a SaaS application with shared resources. However, you can still experience higher than average latency caused by other factors, such as the total load you send to the service, length of the prompt and response, etc.
The table below shows the minimum PTUs per model type and their approximate TPM equivalents:
[Table: Minimum PTU per model and TPM equivalent]
Source: https://github.com/Azure/aoai-apim
Effective Monitoring Techniques
Now that we understand better the limits and quotas of the service, let’s discuss how to effectively monitor usage and set up alerts to be notified and take action when you reach the limits and quotas assigned.
Azure OpenAI service has metrics and logs available as part of the Azure Monitor capabilities. Metrics are available out of the box, at no additional cost. By default, a history of 30 days is kept. If you need to keep these metrics for longer, or route to a different destination, you can do so by enabling it in the Diagnostic settings.
Metrics are grouped into four categories:
– HTTP Requests dimensions: Model Name, Model Version, Deployment, Status Code, Stream Type, and Operation.
– Tokens-Based Usage: Active tokens, Generated Completions Tokens, Processed Inference and Prompt Tokens.
– PTU Utilization dimensions: Model Name, Model Version, Deployment, and Stream Type.
– Fine-tuning: Training Hours by Deployment and Training Hours by Model Name.
Additionally, each API response includes the RateLimit-Global-Remaining and RateLimit-Global-Reset headers, and the response body contains a usage section with the prompt tokens, completion tokens, and total tokens values, showing the billed tokens per request.
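For per-request billing visibility, the usage section of the response body can be read directly; a minimal sketch over an already-parsed JSON body (the sample dict is a stub for illustration):

```python
def billed_tokens(response_body: dict) -> dict:
    """Extract the billing token counts from a completions response body."""
    usage = response_body.get("usage", {})
    return {
        "prompt": usage.get("prompt_tokens", 0),
        "completion": usage.get("completion_tokens", 0),
        "total": usage.get("total_tokens", 0),
    }

# Stubbed response body, shaped like the usage section described above:
sample = {"usage": {"prompt_tokens": 12, "completion_tokens": 30, "total_tokens": 42}}
```

Logging these values per request alongside your deployment name gives you a cheap, accurate view of consumption that complements the Azure Monitor metrics.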
The available logs in Azure OpenAI are Audit logs, Request and Response logs, and Trace Logs. Once you enable these through the Diagnostic settings, you can send these to a Log Analytics workspace, Storage account, Event Hub, or a partner solution. Keep in mind that using diagnostic settings and sending data to Azure Monitor Logs has other costs associated with it. For more information, see Azure Monitor Logs cost calculations and options.
My colleagues created an Azure Monitor Workbook that serves as a great baseline to start monitoring your Azure Open AI service logs and metrics.
Optimization Recommendations
Use LLMs for what they are best at – natural language understanding and fluent language generation. This means understanding that LLMs are, fundamentally, systems that predict the most likely next token, and just because you could use an LLM for a task, it doesn’t necessarily make it the most optimal tool for it.
1. Always start by asking: can this be done in code? Are there existing libraries, tools, or patterns that can perform the task? If yes, use those. These will probably be more performant and cost less.
Examples: use Azure AI Language service for key phrase extraction instead of the LLM; use standard libraries to do math operations, data aggregation, etc.
2. Control the size of the input prompt (e.g. set a limit on the user input field; in RAG, depending on scenario, restrict the number of relevant chunks sent to the LLM) and completion (with max_tokens and best_of).
3. Call the GPT models as few times as possible. Ensure you gather all the data you need to generate an optimal response, and only then call the model.
4. Use the cheapest model that gets the task done. This could mean using GPT 3.5 instead of GPT 4 for tasks where the cheapest model performs at an acceptable level.
Prevention and Response Strategies for Limit Exceeding
Here are some best practices and strategies to avoid rate limiting errors in a tokens-per-minute (i.e., Pay-As-You-Go) model:
– Use the minimum feasible values for max_tokens and best_of in your scenario. For instance, don’t set a high max_tokens value if you are expecting small responses.
– Manage your quota to allocate more TPM to high-traffic deployments and less to those with limited needs.
– Avoid sharp changes in the workload. Increase the workload gradually.
– Test different load increase patterns.
– Check the size of prompts against the model limits before sending the request to the Azure OpenAI service. For example, GPT-4 (8k) supports a max request token limit of 8,192. If your prompt is 10K tokens, the request will fail, and any subsequent retries will fail as well, consuming your quota.
– Retrying with exponential backoff: in practice, this means performing a short sleep when a rate limit error is hit, then retrying the unsuccessful request. If the request is still unsuccessful, the sleep length is increased and the process is repeated. Note that unsuccessful requests contribute to your per-minute limit, so continuously resending a request won’t work. This strategy is useful for real-time requests from users.
– Batching requests: if you’re hitting the limit on requests per minute but have headroom on tokens per minute, you can increase your throughput by batching multiple tasks into each request. This will allow you to process more tokens per minute, especially with the smaller models.
– Adding delay between batch requests: when handling batch processing, maximizing throughput matters more than latency, so proactively adding delay between batch requests can help. For example, if your rate limit is 20 requests per minute, add a delay of 3–6 seconds to each request. This can help you operate near the rate limit ceiling without hitting it and incurring wasted requests.
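The retry-with-exponential-backoff strategy above can be sketched as follows (send_request is a stand-in callable for your actual API call, returning a status code and body):

```python
import random
import time

def call_with_backoff(send_request, max_retries: int = 5, base_delay: float = 1.0):
    """Call `send_request` (returns (status_code, body)); on a 429 rate-limit
    response, sleep with exponentially growing, jittered delays and retry."""
    delay = base_delay
    for _ in range(max_retries):
        status, body = send_request()
        if status != 429:
            return status, body
        # Jitter spreads out retries from many clients hitting the limit together.
        time.sleep(delay + random.uniform(0, delay / 2))
        delay *= 2
    return status, body
```

Keep max_retries conservative: the failed attempts themselves still count against the per-minute limit, so unbounded retrying only deepens the throttling.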
For more details on these strategies and an example of a parallel processing script, please see this notebook and documentation from Azure Open AI.
If your workload is particularly sensitive to latency and cannot tolerate latency spikes, you can consider implementing a mechanism that checks the latency of Azure Open AI in different Azure regions and send requests to the region with the smallest latency. You can group regions into geographies, like Americas, EMEA and Asia, and perform these checks on a per geography basis. This should also account for any compliance regulation and data residency requirements. For a more detailed walkthrough of this strategy, please check this blog.
In Azure, the API Management (APIM) service can help you implement some of these best practices and strategies. APIM supports queueing, rate throttling, error handling, and managing user quotas, as well as distributing requests to different Azure OpenAI instances, potentially located in different regions, to implement the pattern described above.
Conclusion
In conclusion, understanding the limits, quotas, and optimization techniques for Azure Open AI is crucial for effectively utilizing the service and achieving optimal performance and cost efficiency. By carefully monitoring usage, setting up alerts, and implementing prevention and response strategies for limit exceeding, you can ensure reliable performance and avoid unnecessary disruptions.
The insights and recommendations provided in this document serve as a valuable guide to help you make informed decisions and optimize your Azure Open AI use-cases. By following these best practices, such as leveraging existing libraries and tools, controlling input prompt size, minimizing API calls, and using the most cost-effective models, you can maximize the value and efficiency of your AI applications.
Remember to plan ahead, allocate resources wisely, and continuously monitor and adjust your usage based on the metrics and logs available through Azure Monitor. By doing so, you can proactively address any potential issues, avoid rate limiting errors, and deliver a seamless and responsive experience to your users.
Microsoft Tech Community – Latest Blogs –Read More
Records are not getting updated/deleted in Search Index despite enabling Track Deletions in SQL DB
Symptom:
The count of records in the indexer and the index did not align even after activating the change detection policy. Even with record deletions, the entries persisted in the Index Search Explorer.
To enable incremental indexing, configure the “dataChangeDetectionPolicy” property within your data source definition. This setting informs the indexer about the specific change tracking mechanism employed by your table or view.
For Azure SQL indexers, you can choose the change detection policy below:
“SqlIntegratedChangeTrackingPolicy” (applicable to tables exclusively)
It is recommended to use “SqlIntegratedChangeTrackingPolicy” for its efficiency and its ability to identify deleted rows.
Database requirements:
Prerequisites:-
SQL Server 2012 SP3 and later, if you’re using SQL Server on Azure VMs
Azure SQL Database or SQL Managed Instance
Tables only (no views)
On the database, enable change tracking for the table.
No composite primary key (a primary key containing more than one column) on the table.
No clustered indexes on the table. As a workaround, any clustered index would have to be dropped and re-created as a nonclustered index; however, performance in the source may be affected compared to having a clustered index.
When using SQL integrated change tracking policy, don’t specify a separate data deletion detection policy. The SQL integrated change tracking policy has built-in support for identifying deleted rows.
However, for the deleted rows to be detected automatically, the document key in your search index must be the same as the primary key in the SQL table.
If you have completed all the steps above and still see a discrepancy between the indexer and index counts, try the following approach.
Approach:
Enabling change tracking before or after inserting data can affect how the system tracks changes, and the order in which you enable it matters. It’s important to understand how change tracking works in your specific context to resolve the issue.
Check whether you have enabled change tracking at the table level as well as the database level:
ALTER DATABASE [DatabaseName] SET CHANGE_TRACKING = ON (CHANGE_RETENTION = 2 DAYS, AUTO_CLEANUP = ON)
ALTER TABLE [TableName] ENABLE CHANGE_TRACKING
Also check whether change tracking was enabled before or after the data was inserted.
Here are some general guidelines on how change tracking typically works:
Enable Change Tracking Before Inserting Data:
– If you enable change tracking before inserting data, the system will start tracking changes from the beginning.
– This is the recommended approach if you want to track changes to existing data and any new data that will be added.
Enable Change Tracking After Inserting Data:
– If you enable change tracking after inserting data, the system might not have a baseline for the existing data.
– You may encounter errors if you attempt to retrieve change information for data that was already in the system before change tracking was enabled.
Solution :
To ensure that the Indexer starts tracking deletions from the beginning, it is important to enable Change Tracking before inserting data.
This approach also helps to match the count of the Indexer and Index without having to reset the Indexer repeatedly.
Reference links:
Enable and Disable Change Tracking – SQL Server | Microsoft Learn
Azure SQL indexer – Azure AI Search | Microsoft Learn
Microsoft Tech Community – Latest Blogs –Read More
[Some] SQL Server and Azure SQL DB Security Fundamentals | Data Exposed
Learn about SQL Server and Azure SQL Database security fundamentals you won’t want to miss.
Resources:
Microsoft Tech Community – Latest Blogs –Read More
Tech Community Live: Microsoft Intune – RSVP now
Join us March 20th for another Microsoft Intune edition of Tech Community Live! We will be joined by members of our product engineering and customer adoption teams to help you explore, expand, and improve the way you manage devices from the cloud – or learn the first steps to take to get to the cloud. Either way, we’re here to help.
In this edition of Tech Community Live, we are focusing on cloud management for your entire device estate – specifically for those of you managing Windows or macOS devices with Intune. We’ll also cover some of the newly available solutions in Intune Suite including Enterprise App Management, Advanced Analytics and Cloud PKI.
As always, the focus of this series is on your questions! In addition to open Q&A with our product experts, we will kick off each session with a brief demo to get everyone warmed up and excited to engage.
How do I attend?
Choose a session name below and add any (or all!) of them to your calendar. Then, click RSVP to event and post your questions in the Comments anytime! We’ll note if we answer your question in the live stream and follow up in the chat with a reply as well.
Can’t find the option to RSVP? No worries, sign in on the Tech Community first.
Afraid to miss out due to scheduling or time zone conflicts? We got you! Every AMA will be recorded and available on demand the same day.
Time (Pacific) | AMA Topic
7:30 AM – 8:30 AM | Securely manage macOS with Intune
8:30 AM – 9:30 AM | Windows management with Intune
9:30 AM – 10:30 AM | Enterprise App Management, Advanced Analytics in Intune Suite
10:30 AM – 11:30 AM | Microsoft Cloud PKI in Intune Suite
More ways to engage
Join the Microsoft Management Customer Connection Program (MM CCP) community to engage more with our product team.
Check out our monthly series, Unpacking Endpoint Management, to view upcoming topics and catch up on everything we’ve covered so far.
Did you know this is a series? Check out our on-demand sessions from Tech Community Live: Intune – the series!
Stay up to date! Bookmark the Microsoft Intune Blog and follow us on LinkedIn or @MSIntune on X to continue the conversation.
Microsoft Tech Community – Latest Blogs –Read More
Unlock the full potential of Copilot for Microsoft 365
The Microsoft 365 Copilot Adoption Accelerator engagement is crafted to ensure the seamless adoption of Copilot for Microsoft 365.
This engagement comprises three key phases: Readiness, Build the Plan, and Drive Adoption. It is recommended to undertake the Adoption Accelerator after completing the Copilot for Microsoft 365 engagement, wherein high-value scenarios and the technical and organizational baseline are identified. The Adoption Accelerator Engagement will specifically target these high-value scenarios.
The adoption process should involve key stakeholders such as Adoption Managers, Business Decision Makers, End-User Support, and Champions. Responsibility for sustained success should be effectively transitioned during the adoption process.
Click here for more information
Microsoft Tech Community – Latest Blogs –Read More
Scaling up: Customer-driven enhancements in the FHIR service enable better healthcare solutions
This blog has been authored by Ketki Sheth, Principal Program Manager, Microsoft Health and Life Sciences Platform
We’re always listening to customer feedback and working hard to improve the FHIR service in Azure Health Data Services. In the past few months, we rolled out several new features and enhancements that enable you to build more scalable, secure, and efficient healthcare solutions.
Let’s explore some highlights.
Unlock new possibilities with increased storage capacity up to 100 TB
In January 2024 we increased storage capacity within the FHIR service to enable healthcare organizations to manage vast volumes of data for analytical insights and transactional workloads. Previously constrained by a 4 TB limit, customers can now build streamlined workflows with native support for up to 100 TB of storage.
More storage means more possibilities for analytics with large data sets. For example, you can explore health data to improve population health, conduct research, and discover new insights. More storage also allows Azure API for FHIR customers who have more than 4 TB of data to switch to the evolved FHIR service in Azure Health Data Services before September 26, 2026, when Azure API for FHIR will be retired.
If you need storage greater than 4 TB, let us know by creating a support request on the Azure portal with the issue type Service and Subscription limit (quotas). We’d be happy to enable your organization to take advantage of this expanded storage capacity.
Connect any OpenID Connect (OIDC) identity provider to the FHIR service with Azure Active Directory B2C
In January 2024 we also released the integration of the FHIR service with Azure Active Directory B2C. The integration gives organizations a secure and convenient way to grant access with fine-grained access control for different users or groups – without creating or commingling user accounts in the same Microsoft Entra ID tenant. Plus, along with the support for Azure Active Directory B2C (Azure AD B2C), we announced the general availability of the integration with OpenID Connect (OIDC) compliant identity providers (IDP) as part of the expanded authentication and authorization model for the FHIR service.
With Azure AD B2C and OIDC integration, organizations building SMART on FHIR applications can integrate non-Microsoft Entra identity providers with EHRs (Electronic Health Records) and other healthcare applications.
Learn more: Use Azure Active Directory B2C to grant access to the FHIR service
Ingest FHIR resource data at high throughput with incremental import
The incremental import capability was released in August last year. With incremental import, healthcare organizations can ingest FHIR resource data at high throughput in batches, without disrupting transactions through the API on the same server. You can also ingest multiple versions of a resource in the same batch without worrying about the order of ingestion.
Incremental import allows healthcare organizations to:
Import data concurrently while executing API CRUD operations on the FHIR server.
Ingest multiple versions of FHIR resources in single batch while maintaining resource history.
Retain the lastUpdated field value in FHIR resources during the ingestion process, while also maintaining the chronological order of resources. In other words, you no longer need to pre-load historical data before importing the latest version of FHIR resources.
Take advantage of initial and incremental import modes. Initial mode import can be used to hydrate the FHIR service, and executing an initial mode import operation does not incur any charge. For incremental import, a charge is incurred per successfully ingested resource, following the pricing model of the API request.
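To make the import flow concrete, here is a sketch of assembling the FHIR Parameters body for an incremental import call. The parameter names follow my reading of the Azure Health Data Services import documentation, and the resource type and blob URL are illustrative placeholders, so verify the shape against the current API reference before relying on it:

```python
import json

def build_import_body(inputs, mode="IncrementalLoad"):
    """Assemble a Parameters resource for a bulk import operation.

    `inputs` is a list of (resource_type, ndjson_url) pairs; `mode` may be
    "InitialLoad" or "IncrementalLoad".
    """
    parameter = [
        {"name": "inputFormat", "valueString": "application/fhir+ndjson"},
        {"name": "mode", "valueString": mode},
    ]
    for resource_type, url in inputs:
        parameter.append({
            "name": "input",
            "part": [
                {"name": "type", "valueString": resource_type},
                {"name": "url", "valueUri": url},
            ],
        })
    return {"resourceType": "Parameters", "parameter": parameter}

# Hypothetical storage account and file name, for illustration only:
body = build_import_body([("Patient", "https://myaccount.blob.core.windows.net/fhir/patients.ndjson")])
payload = json.dumps(body)  # POSTed asynchronously to the service's import endpoint
```

Because incremental mode preserves lastUpdated values and resource history, the same payload shape works whether the NDJSON files contain current resources, historical versions, or a mix.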
Visit the pricing page for more details: Pricing – Azure Health Data Services | Microsoft
Why incremental import matters
Healthcare organizations using the FHIR service often need to run synchronous and asynchronous data flows simultaneously. The asynchronous data flow includes receiving batches of large data sets that contain patient records from various sources, such as Electronic Medical Record (EMR) systems. These data sets must be imported into a FHIR server simultaneously with the synchronous data flow to execute API CRUD (Create, Read, Update, Delete) operations in the FHIR service.
Performing data import and API CRUD operations concurrently on the FHIR server is crucial to ensure uninterrupted healthcare service delivery and efficient data management. Incremental import allows organizations to run both synchronous and asynchronous data flows at the same time, eliminating this issue. Incremental import also enables efficient migration and synchronization of data between FHIR servers, and from the Azure API for FHIR service to the FHIR service in Azure Health Data Services.
Learn more: Import data into the FHIR service in Azure Health Data Services
Delete FHIR resources in bulk (preview)
In late 2023, the ability to delete FHIR resources in bulk became available for preview. We heard feedback from customers about the challenges they faced when deleting individual resources. Now, with the bulk delete operation, you can delete data from the FHIR service asynchronously. The FHIR service bulk delete operation allows you to delete resources at different levels – system, resource level, and per search criteria. Healthcare organizations that use the FHIR service need to comply with data retention policies and regulations. Incorporating the bulk delete operation in the workflow enables organizations to delete data at high throughput.
Learn more: Bulk-delete operation for the FHIR service in Azure Health Data Services
Selectable search parameters (preview)
As of January 2024, selectable search parameters are available for preview. This capability allows you to tailor and enhance searches on FHIR resources. You can choose which standard search parameters to enable or disable for the FHIR service according to your unique requirements. By enabling only the search parameters you need, you can store more FHIR resources and potentially improve performance of FHIR search queries.
Searching for resources is fundamental to the FHIR® service. During provisioning of the FHIR service, standard search parameters are enabled by default. The FHIR service performs efficient searches by extracting and indexing specific properties from FHIR resources during the ingestion of data. Search parameter indexes may take up the majority of the overall database size.
This new capability gives you the control to enable or disable search parameters according to your needs.
Selectable search parameters help healthcare organizations:
Store more data at reduced cost. Reduction in search parameter indexes provides space to store more resources in the FHIR service. Depending on your organization’s need for search parameter values, on average the efficiency gained in storage is assumed to be 2X-3X. In other words, you’ll be able to store more resources and save on any additional storage cost.
Positively impact performance. During API interactions or while using the import operation, selecting a subset of search parameters can have significant positive performance impact.
Learn more: Selectable search parameters for the FHIR service in Azure Health Data Services
In conclusion
We are constantly working to improve the FHIR service to meet your needs and expectations. With new features such as increased storage capacity up to 100 TB, integration with Azure Active Directory B2C, and incremental import, we are excited to see how you leverage these new capabilities to create innovative healthcare solutions that improve outcomes and experiences for patients and providers.
Do more with your data with the Microsoft Cloud for Healthcare
In the era of AI, Microsoft Cloud for Healthcare enables healthcare organizations to accelerate their data and AI journey by augmenting the Microsoft Cloud with industry-relevant data solutions, templates, and capabilities. With Microsoft Cloud for Healthcare, healthcare organizations can create connected patient experiences, empower their workforce, and unlock the value from clinical and operational data using data standards that are important to healthcare. And we’re doing all of this on a foundation of trust. Every organization needs to safeguard their business, their customers, and their data. Microsoft Cloud runs on trust, and we’re helping every organization build safety and responsibility into their AI journey from the very beginning.
We’re excited to help your organization gain value from your data and use AI innovation to deliver meaningful outcomes across the entire healthcare journey.
Learn more about Azure Health Data Services
Explore Microsoft Cloud for Healthcare
Stay up to date with Azure Health Data Services Release Notes
Microsoft Tech Community – Latest Blogs –Read More
Asking the right questions: Q&AI with Trevor Noah
Trevor Noah, Microsoft Chief Questions Officer and renowned comedian, author, and former host of The Daily Show, joined the keynote stage at the Global Nonprofit Leaders Summit with Kate Behncken, Global Head of Microsoft Philanthropies, for a conversation about social impact.
From his first childhood encounter with a PC (it was a Pentium 386!) to working with Microsoft AI for Good, Trevor meets the opportunity of technology with natural curiosity and optimism that inspires everyone to find ways to use AI to build equity, fairness, and security for people around the world.
He shares examples from the AI for Good projects he’s featured on his series “The Prompt” and talks in depth about how AI is creating a critical moment for expanding education and opportunities in developing countries. Then with his inimitable humor, he somehow manages to include buffalo wings as an example of how we should always ask ourselves, “What if I’m wrong?”
What did you learn from Trevor Noah’s insights? What are some examples of when you’ve asked yourself, “What if I’m wrong?”
AI/ML ModelOps is a Journey. Get Ready with SAS® Viya® Platform on Azure
Why do you need a ModelOps Platform for your organization?
If you are a data scientist or an analytics leader, you know the challenges of developing and deploying analytical models in a fast-paced and competitive business environment. You may have hundreds or thousands of models in various stages of the analytics life cycle, but only a fraction of them are actually delivering value to your organization. You may face issues such as long development cycles, manual processes, lack of visibility, poor performance, and loss of intellectual property. These issues can prevent you from realizing the full potential of your analytics investments and hinder your ability to innovate and respond to changing needs.
That’s why you need ModelOps in your organization. ModelOps is a set of practices and technologies that enable you to automate, monitor, and manage your analytical models throughout their life cycle. ModelOps can help you streamline your workflows, improve your model quality, increase your productivity, and ensure your models are always aligned with your business goals. ModelOps can also help you foster collaboration and trust among your stakeholders, such as data scientists, IT, business users, and regulators. With ModelOps, you can turn your models into assets that drive value and competitive advantage for your organization.
SAS Viya Platform: A powerful AI/ML Model management Platform
SAS Viya is a powerful cloud-based analytics platform built by Microsoft partner SAS Institute Inc. that combines AI (Artificial Intelligence) and traditional analytics capabilities. SAS Viya seamlessly integrates with Microsoft Azure services, enhancing the analytics capabilities and providing a powerful platform for data-driven decision-making.
Simplified Acquisition Process for SAS Viya Platform on Azure Marketplace
The simplified acquisition process of SAS Viya on Azure marks a significant departure from the traditional method of obtaining SAS Viya. With the introduction of SAS Viya on Azure, users now benefit from a streamlined approach via the Azure Marketplace. This marketplace serves as a user-friendly platform where, with just a click on the “Create” button, customers initiate the acquisition process.
Automated deployment, a key feature of this approach, eliminates the need for extensive IT involvement. Azure Marketplace not only expedites the transaction but also offers a faster, more accessible path for users to access and leverage the advanced analytics capabilities of SAS Viya.
Key Features of SAS Viya on Azure:
SAS Visual Machine Learning (VML): SAS Viya includes robust machine learning capabilities. With VML, you can build, train, and deploy machine learning models efficiently. Whether you’re a Python enthusiast, an R aficionado, or prefer Jupyter Notebooks, SAS Viya integrates seamlessly with these languages to enhance your data science workflows.
Integration with Python, R, and Jupyter Notebooks: SAS Viya provides native integration with popular programming languages. You can leverage your existing Python or R code within the SAS Viya environment. Additionally, Jupyter Notebooks allow for interactive exploration and documentation of your analyses.
Azure Data Sources and Services Integration:
Azure Synapse: SAS Viya connects to Azure Synapse, enabling you to enrich data from various sources. Use SAS Information Catalog and SAS Visual Analytics to explore and prepare data efficiently.
Azure Machine Learning: Collaborate between SAS Viya and Azure Machine Learning to build and deploy analytic models. You can choose your preferred programming language and even use a visual drag-and-drop interface for model components. Seamlessly move models to production.
Power Automate and Power Apps: Automate decision-making processes by integrating SAS Viya with Microsoft Power Automate and Power Apps. Enable real-time, calculated decisions across various domains, such as claims processing, credit decisioning, fraud detection, and more.
Azure IoT Hub and Azure IoT Edge: Stream data from IoT devices across environments, allowing real-time decisioning and analysis. Benefit from in-stream automation, data curation, and modeling while maintaining total governance.
Azure Data Sources: SAS Viya uses high-performance connectors to source data from Azure environments, provisioning data for downstream AI needs.
SAS Analytic Lifecycle Capabilities
Access and Prepare Data:
SAS Viya allows you to handle complex and large datasets efficiently. You can perform data preparation tasks such as cleaning, transforming, and structuring data for analysis.
Whether your data resides in databases, spreadsheets, or cloud storage, SAS Viya provides seamless connectivity to various data sources.
Visualize Data:
Data visualization is crucial for understanding data relationships, patterns, and trends. SAS Viya offers powerful visualization tools to create insightful charts, graphs, and dashboards.
Explore your data visually, identify outliers, and gain valuable insights before diving into modeling.
Build Models:
SAS Viya leverages AI techniques to build predictive and prescriptive models. Whether you’re solving real-world business problems or conducting research, you can utilize machine learning algorithms, statistical methods, and optimization techniques.
Experiment with different models, evaluate their performance, and choose the best one for your specific use case.
Automation:
Automate repetitive tasks within your analytic workflows. SAS Viya allows you to create data pipelines, schedule data refreshes, and automate model deployment.
Collaborate with other users by sharing workflows and automating decision-making processes.
Integration:
Connect seamlessly with open-source languages such as Python and R. If you have existing code or libraries, integrate them into your SAS Viya environment.
Leverage the power of both SAS and open-source tools to enhance your analytics capabilities.
ModelOps:
Managing models over time is critical for maintaining their accuracy and relevance. SAS Viya provides tools for monitoring model performance, adapting to changes in data, and retraining models as needed.
Stay on top of your models’ health and ensure they continue to deliver value.
Demonstration of Deployment on Azure:
Let’s walk through the steps for deploying SAS Viya on Microsoft Azure. This process typically takes about an hour to complete. Here’s how you can get started:
Access SAS Viya on Azure:
Visit the Azure Marketplace and search for “SAS Viya (Pay-As-You-Go).”
Click “Get It Now” and then select “Continue.”
Deployment Form:
Click “Create” next to the plan.
Complete the form with the necessary information:
Project Details:
Specify your Subscription and Resource Group. These values depend on your organization’s Azure resource management practices. (Don’t forget to prepare a suitable landing zone for your Analytical platform)
Instance Details:
Personalize the URL that users will use to access SAS Viya. The Region and Deployment DNS Prefix contribute to the URL. Choose a region geographically close to your users.
Provide a Deployment Name for reference within the Azure interface.
Security and Access:
Set an administrator password for SAS Viya (remember or record it).
Choose one of the following options for SSH public key source:
Generate a new key pair.
Use a key stored in Azure.
Copy and paste a public key.
Optionally, secure access by specifying Authorized IP Ranges.
Review and Create:
Click “Review + create” to proceed.
Confirm the information, accept the terms and conditions, and click “Create.”
If you opted to create SSH keys, select “Download + create” when prompted.
Deployment Completion:
The deployment process may take up to an hour.
Once completed, sign in to SAS Viya using the URL you personalized earlier.
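As an aside, the Deployment DNS Prefix and Region from the form combine into the access URL. The sketch below assumes the common Azure public-DNS pattern (`<label>.<region>.cloudapp.azure.com`); the exact URL for your deployment is shown in the Azure portal, so treat this as illustrative only:

```python
def viya_access_url(dns_prefix: str, region: str) -> str:
    """Sketch: derive an access URL from a Deployment DNS Prefix and an
    Azure region, assuming the <label>.<region>.cloudapp.azure.com pattern.
    Always verify against the URL the Azure portal reports post-deployment."""
    label = dns_prefix.lower()
    # DNS labels may contain letters, digits, and hyphens
    if not label.replace("-", "").isalnum():
        raise ValueError("DNS prefix should contain only letters, digits, or hyphens")
    return f"https://{label}.{region}.cloudapp.azure.com"

print(viya_access_url("contosoviya", "eastus2"))
```

Choosing a region geographically close to your users, as the form suggests, also keeps this URL's latency low.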
Summary
SAS Viya on Azure provides a user-friendly, automated approach to access advanced analytics capabilities.
It offers seamless integration with Azure tools and services, catering to a wide range of users in the analytics space.
Next Steps
In the next part, I will dive deep into the ModelOps capabilities and the advantages of combining the Azure and SAS Viya platforms.
New opportunities for sales, services, and education partners
Today, we are excited to announce the expansion of Copilot for Microsoft 365 with the general availability of new offerings for sales, services, and education—providing new ways for customers and partners to embrace AI to achieve their goals and significantly enhance how they work.
Copilot for Sales is an AI assistant for sales professionals that brings together the power of generative AI through Copilot for Microsoft 365, Copilot Studio, and data from any CRM system to accelerate productivity, keep data fresh, unlock seller-specific insights, and help sellers personalize customer interactions, all of which helps close more deals.
Copilot for Service modernizes existing service solutions with generative AI to enhance customer experiences and boost agent productivity. It infuses AI into the contact center to accelerate time to production with point-and-click setup, direct access within major service vendors (including Salesforce, ServiceNow, and Zendesk), and connection to public websites, SharePoint, knowledge base articles, and offline files. Agents can ask questions in natural language and answers are delivered in the tools they use every day—Outlook, Teams, Word, and others.
Learn more about Copilot for Sales and Copilot for Service announcements.
In addition, Copilot for Microsoft 365 is now also generally available for education customers to purchase for their faculty users through the Cloud Solution Provider (CSP) program.
Read the announcement
Plan to join us on Wednesday, March 6, 2024, for the Reimagine education event: the future of AI in Education for an opportunity to deep dive on Copilot for Education.
Read more in our blog about specific opportunities for education partners with Copilot for Microsoft 365.
Additional opportunities for all partners
Register for the Copilot partner incentives overview webinar – February 28/29 to learn about the latest priorities, strategy, and earning opportunities for Copilot & AI.
Sign up for a Copilot for Microsoft 365 pre-sales and technical bootcamp
Join the conversation on our Copilot for Microsoft 365 community
Security review for Microsoft Edge version 122
We are pleased to announce the security review for Microsoft Edge, version 122!
We have reviewed the new settings in Microsoft Edge version 122 and determined that there are no additional security settings that require enforcement. The Microsoft Edge version 117 security baseline continues to be our recommended configuration, which can be downloaded from the Microsoft Security Compliance Toolkit.
Microsoft Edge version 122 introduced 4 new computer settings and 4 new user settings. We have included a spreadsheet listing the new settings in the release to make it easier for you to find them.
As a friendly reminder, all available settings for Microsoft Edge are documented here, and all available settings for Microsoft Edge Update are documented here.
Please continue to give us feedback through the Security Baselines Discussion site or this post.
March 2024 Viva Glint newsletter
Welcome to the March edition of our Viva Glint newsletter. Our recurring communications will help you get the most out of the Viva Glint product. You can always access the current edition and past editions of the newsletter on our Viva Glint blog.
Our next features release date
Viva Glint’s next feature release is scheduled for March 9, 2024*. Your dashboard will provide date and timing details two or three days before the release.
In your Viva Glint programs
The Microsoft Copilot Impact Survey template has premiered in the Viva Glint platform. AI tools are increasingly integrated into the workplace to enhance workforce productivity and the employee experience. This transformational shift in work means leaders need to understand their early investments in Microsoft Copilot and how it is being adopted. By deploying the Copilot Impact Survey template in Viva Glint, organizations can measure the impact of Microsoft Copilot, enabling leaders to plan AI readiness, drive adoption, and measure ROI. Learn about the Copilot Impact Survey here.
Changing item IDs for expired cycles will be self-serve. Comparing survey trends is essential to tracking focus area progress over time. When a survey is retired, you can still use the data for an item from that survey as a comparison in a new survey that uses the identical item. And you can do it quickly and independently! Learn how to change survey item IDs here.
We’ve updated our Action Plan templates! Action Plan templates provide resources to help organizations act on feedback. Content comes from our new learning modules, WorkLab articles, and LinkedIn Learning. Now we’re exploring opportunities across all Viva and Copilot products to harness sentiment and data to enhance the employee experience and surface relevant, contextualized action recommendations. Check out Action Plan guidance here.
Support survey takers with new help content
Simplify your support process during live Viva Glint surveys to help users easily submit their valuable feedback. Use support guidance as an admin to communicate proactively and create resources to address commonly asked questions by survey takers. Share help content directly with your organization so that survey takers have answers to all their questions.
Announcing our new Viva Glint product council
Viva Glint is launching a product council! We are keen to listen to you, our customers, to help inform the future of our product. By enrolling, you will hear directly from our product and design teams, have an impact in shaping our product, and connect with like-minded customers to discuss your Viva Glint journey. To learn more and express an interest in signing up, visit this blog post.
Connect and learn with Viva Glint
We are officially launching our badging program! We are excited to announce that Viva Glint users can now earn badges upon completion of recommended training modules and then publish them to their social media networks. We’re kicking off this program by offering both a Foundations Admin badge and a Manager badge course. Learn more here about badging.
Get ready for our next Viva Glint: Ask the Experts session on March 12. Geared towards new Viva Glint customers who are in the process of deploying their first programs, this session focuses on User Roles and Permissions. You must be registered to attend the session. Bring your questions! Register here for Ask the Experts.
Join us at our upcoming Microsoft and Viva hosted events
Attend our Think like a People Scientist webinar series. Premiering in February (if you missed it, you can catch the recording here!), this series, created based on customer feedback, will deep dive into important topics that you may encounter on your Viva Glint journey. Register for our upcoming sessions below:
March 20: Telling a compelling story with your data
April 23: Influencing action without authority
May 28: Designing a survey that meets your organization’s needs
We are also kicking off our People Science x AI Empowerment series. Check out and register for our upcoming events that will help empower HR leaders with the knowledge and resources to feel confident, excited, and ready to bring AI to their organizations:
March 14: AI overview and developments for Viva Glint featuring Viva Glint People Science and Product leaders
April 18: AI: the game-changer for the employee experience featuring Microsoft research and applied science leaders
For those in the Vancouver area, join us for Microsoft Discovery Day on March 6. During this in-person event at Microsoft Vancouver you will learn from Microsoft leaders and industry experts about fundamental shifts in the workplace and the implications for your business. Gain an understanding of the value of AI-powered insights and experiences to build engagement and inspire creativity. Register.
Join the Viva People Science team at upcoming industry events
Are you attending the Wharton People Analytics Conference on March 14-15? As sponsors of the event, we will be there, and we would love to see you at our booth! This conference explores the latest advances and urgent questions in people analytics, including AI and human teaming, neurodiversity, new research on hybrid and remote work, and the advancement of frontline workers. Learn more about the conference here.
Our Microsoft Viva People Scientists are among the featured speakers at the Society for Industrial and Organizational Psychology (SIOP) annual conference in April. Live in Chicago, and also available virtually, the SIOP conference inspires and galvanizes our community through sharing knowledge, building connections, fostering inclusion, and stimulating new ideas. Learn more here.
Join Rick Pollak on April 18 for a panel discussion, Your Employee Survey is Done. Now What? Rick and leading experts will address best practices and advice about survey reporting, action taking, and more.
Join Caribay Garcia and other industrial organizational psychology innovators on April 19 for IGNITE-ing Innovation: Uses of Generative AI in Industrial Organization Psychology. This session will help psychologists conduct timelier research by fostering cross-collaborative communication between academics and practitioners.
Join Stephanie Downey and other industry experts on April 19 for Ask the Experts: Crowdsource Solutions to Your Top Talent Challenges. This session brings together industry experts to facilitate roundtable discussions focused on key talent and HR challenges.
Again, join Stephanie Downey on April 19 for Alliance: Unlocking Whole Person Management: Benefits, Hidden Costs, and Solutions. Explore the multifaceted dimensions of whole person management (WPM) by delving into the benefits and challenges this approach creates.
Join Carolyn Kalafut on April 19 for Path to Product. This seminar provides an intro to understanding product and the ability to influence the software development lifecycle and to embed responsible and robust I-O principles in it.
Join Caribay Garcia on April 20 for Harnessing Large Language Models in I-O Psychology: A Revolution in HR Offerings. Delve into the practical implications, ethical concerns, and the future of large language models (LLMs) in HR.
Check out our most recent blog content on the Microsoft Viva Community
Assess how your organization feels about Microsoft Copilot
Viva People Science Industry Trends: Retail
How are we doing?
If you have any feedback on this newsletter, please reply to this email. Also, if there are people on your teams that should be receiving this update, please have them sign up using this link.
*Viva Glint is committed to consistently improving the customer experience. The cloud-based platform maintains an agile production cycle with fixes, enhancements, and new features. Planned program release dates are provided with the best intentions of releasing on these dates, but dates may change due to unforeseen circumstances. Schedule updates will be provided as appropriate.
Microsoft and open-source software
Microsoft has embraced open-source software—from offering tools for coding and managing open-source projects to making some of its own technologies open source, such as .NET and TypeScript. Even Visual Studio Code is built on open source. For March, we’re celebrating this culture of open-source software at Microsoft.
Explore some of the open-source projects at Microsoft, such as .NET on GitHub. Learn about tools and best practices to help you start contributing to open-source projects. And check out resources to help you work more productively with open-source tools, like Python in Visual Studio Code.
.NET is open source
Did you know .NET is open source? .NET is open source and cross-platform, and it’s maintained by Microsoft and the .NET community. Check it out on GitHub.
Python Data Science Day 2024: Unleashing the Power of Python in Data Analysis
Celebrate Pi Day (3.14) with a journey into data science with Python. Set for March 14, Python Data Science Day is an online event for developers, data scientists, students, and researchers who want to explore modern solutions for data pipelines and complex queries.
C# Dev Kit for Visual Studio Code
Learn how to use the C# Dev Kit for Visual Studio Code. Get details and download the C# Dev Kit from the Visual Studio Marketplace.
Visual Studio Code: C# and .NET development for beginners
Have questions about Visual Studio Code and C# Dev Kit? Watch the C# and .NET Development in VS Code for Beginners series and start writing C# applications in VS Code.
Reactor series: GenAI for software developers
Step into the future of software development with the Reactor series. GenAI for Software Developers explores cutting-edge AI tools and techniques for developers, revolutionizing the way you build and deploy applications. Register today and elevate your coding skills.
Use GitHub Copilot for your Python coding
Discover a better way to code in Python. Check out this free Microsoft Learn module on how GitHub Copilot provides suggestions while you code in Python.
Getting started with the Fluent UI Blazor library
The Fluent UI Blazor library is an open-source set of Blazor components used for building applications that have a Fluent design. Watch this Open at Microsoft episode for an overview and find out how to get started with the Fluent UI Blazor library.
Remote development with Visual Studio Code
Find out how to tap into more powerful hardware and develop on different platforms from your local machine. Check out this Microsoft Learn path to explore tools in VS Code for remote development setups and discover tips for personalizing your own remote dev workflow.
Using GitHub Copilot with JavaScript
Use GitHub Copilot while you work with JavaScript. This Microsoft Learn module will tell you everything you need to know to get started with this AI pair programmer.
Generative AI for Beginners
Want to build your own GenAI application? The free Generative AI for Beginners course on GitHub is the perfect place to start. Work through 18 in-depth lessons and learn everything from setting up your environment to using open-source models available on Hugging Face.
Use OpenAI Assistants API to build your own cooking advisor bot on Teams
Find out how to build an AI assistant right into your app using the new OpenAI Assistants API. Learn about the open playground for experimenting and watch a step-by-step demo for creating a cooking assistant that will suggest recipes based on what’s in your fridge.
What’s new in Teams Toolkit for Visual Studio 17.9
What’s new in Teams Toolkit for Visual Studio? Get an overview of new tools and capabilities for .NET developers building apps for Microsoft Teams.
Embed a custom webpage in Teams
Find out how to share a custom web page, such as a dashboard or portal, inside a Teams app. It’s easier than you might think. This short video shows how to do this using Teams Toolkit for Visual Studio and Blazor.
Get to know GitHub Copilot in VS Code and be more productive
Get to know GitHub Copilot in VS Code and find out how to use it. Watch this video to see how incredibly easy it is to start working with GitHub Copilot. Just start coding and watch the AI go to work.
Customize Dev Containers in VS Code with Dockerfiles and Docker Compose
Dev containers offer a convenient way to deliver consistent and reproducible environments. Follow along with this video demo to customize your dev containers using Dockerfiles and Docker Compose.
Designing for Trust
Learn how to design trustworthy experiences in the world of AI. Watch a demo of an AI prompt injection attack and learn about setting up guardrails to protect the system.
AI Show: LLM Evaluations in Azure AI Studio
Don’t deploy your LLM application without testing it first! Watch the AI Show to see how to use Azure AI Studio to evaluate your app’s performance and ensure it’s ready to go live. Watch now.
What’s winget.pro?
The Windows Package Manager (winget) is a free, open-source package manager. So what is winget.pro? Watch this special edition of the Open at Microsoft show for an overview of winget.pro and to find out how it differs from the well-known winget.
Use Visual Studio for modern development
Want to learn more about using Visual Studio to develop and test apps? Start here. In this free learning path, you’ll dig into key features for debugging, editing, and publishing your apps.
Build your own assistant for Microsoft Teams
Creating your own assistant app is super easy. Learn how in under 3 minutes! Watch a demo using the OpenAI Assistants, Teams AI Library, and the new AI Assistant Bot template in VS Code.
GitHub Copilot fundamentals – Understand the AI pair programmer
Improve developer productivity and foster innovation with GitHub Copilot. Explore the fundamentals of GitHub Copilot in this free training path from Microsoft Learn.
How to get GraphQL endpoints with Data API Builder
The Open at Microsoft show takes a look at using Data API Builder to easily create GraphQL endpoints. See how you can use this no-code solution to quickly enable advanced—and efficient—data interactions.
Microsoft, GitHub, and DX release new research into the business ROI of investing in Developer Experience
Investing in the developer experience has many benefits and improves business outcomes. Dive into our groundbreaking research (with data from more than 2000 developers at companies around the world) to discover what your business can gain with better DevEx.
Build your custom copilot with your data on Teams featuring Azure the AI Dragon
Build your own copilot for Microsoft Teams in minutes. Watch this demo, which builds an AI Dragon that will take your team on a cyber role-playing adventure.
Microsoft Graph Toolkit v4.0 is now generally available
Microsoft Graph Toolkit v4.0 is now available. Learn about its new features, bug fixes, and improvements to the developer experience.
Microsoft Mesh: Now available for creating innovative multi-user 3D experiences
Microsoft Mesh is now generally available, providing an immersive 3D experience for the virtual workplace. Get an overview of Microsoft Mesh and find out how to start building your own custom experiences.
Global AI Bootcamp 2024
Global AI Bootcamp is a worldwide annual event that runs throughout the month of March for developers and AI enthusiasts. Learn about AI through workshops, sessions, and discussions. Find an in-person bootcamp event near you.
Microsoft JDConf 2024
Get ready for JDConf 2024—a free virtual event for Java developers. Explore the latest in tooling, architecture, cloud integration, frameworks, and AI. It all happens online March 27-28. Learn more and register now.
Leverage anomaly management processes with Microsoft Cost Management
The cloud comes with the promise of significant cost savings compared to on-premises costs. However, realizing those savings requires diligence to proactively plan, govern, and monitor your cloud solutions. Your ability to detect, analyze, and quickly resolve unexpected costs can help minimize the impact on your budget and operations. When you understand your cloud costs, you can make more informed decisions about how to allocate and manage them. But even with proactive cost management, surprises can still happen. That’s why we developed several tools in Microsoft Cost Management that let you set up thresholds and rules to detect problems early and catch out-of-scope changes in your cloud costs. Let’s take a closer look at some of these tools and how you can use them to discover anomalous costs and usage patterns.
Identify atypical usage patterns with anomaly detection
Anomaly detection is a powerful tool that can help you minimize unexpected charges by identifying atypical usage patterns, like cost spikes or dips, based on your cost and usage trends so you can take corrective action. For example, you might notice that something has changed, but you’re not sure what. Suppose you have a subscription that consumes around $100 every day. A new service was added to the subscription by mistake, doubling the daily cost to $200. With anomaly detection, you will be notified about the steep spike in daily cost, which you can then investigate to see whether it’s an expected increase or a mistake, enabling early corrective measures.
You can also embed time-series anomaly detection capabilities into your apps to identify problems quickly. AI Anomaly Detector ingests time-series data of all types and selects the best anomaly detection algorithm for your data to ensure high accuracy. Detect spikes, dips, deviations from cyclic patterns, and trend changes through both univariate and multivariate APIs. Customize the service to detect any level of anomaly. Deploy the anomaly detection service where you need it—in the cloud or at the intelligent edge.
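To make the idea concrete, here is a toy univariate spike detector over daily cost data. This is purely an illustration of the concept, not the algorithm Cost Management or AI Anomaly Detector actually uses; it flags any day that deviates from the trailing-window mean by more than a few standard deviations:

```python
from statistics import mean, stdev

def find_cost_anomalies(daily_costs, window=7, threshold=3.0):
    """Flag indices whose cost deviates from the trailing-window mean by
    more than `threshold` standard deviations. A toy illustration of
    univariate anomaly detection, not the service's production model."""
    anomalies = []
    for i in range(window, len(daily_costs)):
        history = daily_costs[i - window:i]
        mu, sigma = mean(history), stdev(history)
        if sigma == 0:
            sigma = 1e-9  # avoid division by zero on a perfectly flat history
        if abs(daily_costs[i] - mu) / sigma > threshold:
            anomalies.append(i)
    return anomalies

# A subscription holding steady near $100/day, then doubling to $200:
costs = [100, 101, 99, 100, 102, 98, 100, 200]
print(find_cost_anomalies(costs))  # [7] — the $200 day is flagged
```

Real services select among far more robust algorithms (handling seasonality, trend changes, and multivariate signals), which is exactly why embedding a managed detector is usually preferable to rolling your own.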
Use Alerts to get notified when an anomalous usage change is detected
You can subscribe to anomaly alerts to be automatically notified when an anomalous usage change is detected, with a subscription-scope email displaying the underlying resource groups that contributed to the anomalous behavior. Alerts can also be set up for your Azure reserved instances usage to receive email notifications, so you can take remedial action when your reservations have low utilization.
Here’s an example of how to create an anomaly alert rule:
Select the scope as the subscription which needs monitoring.
Navigate to the ‘Cost alerts’ page in Cost Management. Select ‘Anomaly’ as the Alert type.
Specify the recipient email IDs.
Click on ‘Create alert rule.’
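The same rule can be created programmatically. Below is a rough sketch of the kind of request body involved, modeled on the Cost Management scheduled-actions API; the subscription ID, email address, and view name are placeholders, and the field names are assumptions for illustration, so consult the current Cost Management API reference before using anything like this:

```python
import json

# Hypothetical sketch of an anomaly-alert rule body. Field names and values
# are illustrative assumptions; verify against the Cost Management API docs.
subscription_id = "00000000-0000-0000-0000-000000000000"  # placeholder
scope = f"/subscriptions/{subscription_id}"

alert_rule = {
    "kind": "InsightAlert",  # assumed kind for anomaly (insight) alerts
    "properties": {
        "displayName": "Daily anomaly alert",
        "status": "Enabled",
        "notification": {
            "to": ["finops-team@contoso.com"],  # recipient email IDs
            "subject": "Cost anomaly detected",
        },
        "schedule": {"frequency": "Daily"},
        # Placeholder view reference for the anomaly view on this scope
        "viewId": f"{scope}/providers/Microsoft.CostManagement/views/DailyAnomalyView",
    },
}

print(json.dumps(alert_rule, indent=2))
```

Whether created in the portal or via the API, the rule is scoped to a single subscription, matching step 1 above.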
If an anomaly is detected, you will receive alert emails that give you basic information to help you start your investigation.
Get deeper insights with smart views
Use smart views in Cost Analysis to view anomaly insights that were automatically detected for each subscription. To drill into the underlying data for something that has changed, select the Insight link. You can also create custom views for anomalous usage detection such as unused costs from Azure reserved instances and savings plans that could point to further optimization for specific workloads.
You can also group related resources in Cost Analysis and smart views. For example, group related resources, like disks under virtual machines or web apps under App Service plans, by adding a “cm-resource-parent” tag to the child resources with a value of the parent resource ID. Or use Charts in Cost Analysis smart views to view your daily or monthly cost over time.
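For example, to roll a disk up under its virtual machine in smart views, tag the disk with the `cm-resource-parent` key and the VM's full resource ID as the value. The subscription, resource group, and VM names below are placeholders; the tag key and parent-resource-ID convention come from the guidance above:

```python
# Group a child resource under its parent in Cost Analysis smart views by
# tagging the child with "cm-resource-parent" = the parent's full resource ID.
# All names here are placeholders for illustration.
subscription_id = "00000000-0000-0000-0000-000000000000"
resource_group = "rg-prod"

parent_vm_id = (
    f"/subscriptions/{subscription_id}/resourceGroups/{resource_group}"
    "/providers/Microsoft.Compute/virtualMachines/vm-web-01"
)

# Apply this tag to the child resource (e.g., the VM's data disk)
disk_tags = {"cm-resource-parent": parent_vm_id}

print(disk_tags["cm-resource-parent"])
```

The same pattern applies to web apps under App Service plans: the child carries the tag, and the value is always the parent's resource ID.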
Use Copilot for AI-based assistance
For quick identification and analysis of anomalies in your cloud spend, try the AI-powered Copilot in Cost Management, available in preview in the Azure portal. For example, if a cost doubles, you can ask Copilot natural language questions to understand what happened and get the insights you need faster. You don’t need to be an expert in navigating the cost management UI or analyzing the data; you simply let the AI do it for you. For example, you can ask, “why did my cost increase this month?” or “which service led to the increase in cost this month?” Copilot will then provide a breakdown by categories of spend and their percentage impact on your total invoice. From there, you can leverage the generated suggestions to investigate your bill further.
Learn more about streamlining anomaly management
Optimizing your cloud spend with Azure becomes much easier when you streamline your anomaly management processes with tools like anomaly detection, alerts, and smart views in Microsoft Cost Management. You can learn even more about using FinOps best practices to manage anomalies in your resource usage at aka.ms/finops/solutions.
Inclusive and productive Windows 11 experiences for everyone
Today we begin to release new features and enhancements to Windows 11 Enterprise—features that offer a more intuitive and user-friendly experience for both workers and IT admins. Most of these new features will be enabled by default in the March 2024 optional non-security preview release for all editions of Windows 11, versions 23H2 and 22H2. IT admins who want to get the new Windows 11 features can enable optional updates for their managed devices via policy.
New in accessibility
One of the most exciting areas of enhancement involves voice access, a feature in Windows 11 that enables everyone, including people with mobility disabilities, to control their PC and author text using only their voice and without an internet connection. Voice access now supports multiple languages, including French, German, and Spanish. People can create custom voice shortcuts to quickly access frequently used commands. And, voice access now works across multiple displays with number and grid overlays that help people easily switch between screens using only voice commands.
Enhancements to Narrator, the built-in screen reader, are also coming. You’ll be able to preview natural voices before downloading them and utilize a new keyboard command that allows you to more easily move between images on a screen. Narrator’s detection of text in images, including handwriting, has been improved, and it now announces the presence of bookmarks and comments in Microsoft Word.
If you’re interested in learning about Windows 11 accessibility features, please check out the following resources:
Inside Windows 11 accessibility settings and tools
Skilling snack: Accessibility in Windows 11
Skilling snack: Voice access in Windows
Enhanced sharing
Sharing content is now easier with updates to Windows share and Nearby Share. The Windows share window now displays different apps for “Share using” based on the account you use to sign in. Nearby Share has also been improved, with faster transfer speeds for people on the same network and the ability to give your device a friendly name for easier identification when sharing.
Casting
Casting, the feature that allows you to wirelessly send content from your device to a nearby display, has been enhanced. You will receive notifications suggesting the use of Cast when multitasking, and the Cast menu in quick settings now provides more help in finding nearby displays and fixing connections.
Snap layouts
Snap layouts, the feature that helps you organize the apps on your screen, now allows you to hover over the minimize or maximize button of an app to open the layout box, and to view various layout options. This makes it easier for you to choose the best layout for the task at hand.
New Windows 365 features now available
Windows 365 now offers new features including a new, dedicated mode for Windows 365 Boot that allows you to sign in to your Cloud PC using passwordless authentication. A fast account switching experience has also been added. For Windows 365 Switch, which lets you sign in and connect to your Cloud PC using Windows 11 Task view, you’ll now find it easier to disconnect from your Cloud PC and see desktop indicators to help you easily see whether you are on your Cloud PC or local PC.
For more information, see today’s post, New Windows 365 Boot and Switch features now available.
Unified enterprise update management
We are also releasing enhancements to Windows Autopatch in direct response to your feedback. Several new and upcoming enhancements give you more control, extend the value of your investments, and help you streamline update management, including:
The ability to import Update rings for Windows 10 and later (preview)
Customer defined service outcomes (preview)
Improved data refresh speed and reporting accuracy
Looking ahead, one of the most noticeable changes in Windows Autopatch will be a simplified update management interface that will make the update ecosystem easier to understand. We are unifying our update management offering for enterprise organizations—bringing together Windows Autopatch and the Windows Update for Business deployment service into a single service that enterprise organizations can use to update and upgrade Windows devices as well as update Microsoft 365 Apps, Microsoft Teams, and Microsoft Edge.
We invite you to read our ongoing Windows Autopatch updates in the Windows IT Pro Blog to find out more about richer functionality planned for Windows Autopatch. For the latest, see What’s new in Windows Autopatch: February 2024.
Get familiar with the latest innovations, including Copilot, creator apps, and more
Today’s announcement from Yusuf Mehdi offers more details about new innovations coming to Windows 11 including availability and rollout plans. You can find a summary of all the new enhancements and features in the Windows Update configuration documentation and, as always, stay up to date on rollout plans and known issues (identified and resolved) via the Windows release health dashboard.
Continue the conversation. Find best practices. Bookmark the Windows Tech Community, then follow us @MSWindowsITPro on X/Twitter. Looking for support? Visit Windows on Microsoft Q&A.
What is AI? Jared Spataro at the Global Nonprofit Leaders Summit
Jared Spataro, Microsoft Corporate Vice President, AI at Work, delivered an engaging keynote at the Global Nonprofit Leaders Summit that left the audience amazed and optimistic about the capabilities and accessibility of AI for everyone.
Watch Jared's session for a walkthrough that shows how Microsoft Copilot can be a powerful tool for productivity and creativity. From the fun and fantastic to the practical and powerful, Jared queries Copilot in a real-time demo using his own workstreams in Outlook, Teams, and more:
Can elephants tow a car?
What will the workplace of the future look like?
Can you write a Python script to extract insights from this data?
Can you summarize and prioritize the latest emails from my boss?
Jared shares important tips for prompt engineering, previews the new “Sounds like me” feature to co-create responses in your own voice, and talks about the value of AI being “usefully wrong.”
And he reminds us to say please and thank you.
What did you learn from Jared’s session? How are you using Copilot to enhance creativity and productivity?
Updates from 162.1 and 162.2 releases of SqlPackage and the DacFx ecosystem
Within the past 4 months, we’ve had 2 minor releases and a patch release for SqlPackage. In this article, we’ll recap the features and notable changes from SqlPackage 162.1 (October 2023) and 162.2 (February 2024). Several new features focus on giving you more control over the performance of deployments by preventing potential costly operations and opting in to online operations. We’ve also introduced an alternative option for data portability that can provide significant speed improvements to databases in Azure. Read on for information about these improvements and more, all from the recent releases in the DacFx ecosystem. Information on features and fixes is available in the itemized release notes for SqlPackage.
.NET 8 support
The 162.2 release of DacFx and SqlPackage introduces support for .NET 8. SqlPackage is available as a dotnet tool with the .NET 6 and .NET 8 SDKs. Install or update easily with a single command if the .NET SDK is installed:
# install
dotnet tool install -g microsoft.sqlpackage
# update
dotnet tool update -g microsoft.sqlpackage
Online index operations
Starting with SqlPackage 162.2, online index operations are supported during publish on applicable environments (including Azure SQL Database, Azure SQL Managed Instance, and SQL Server Enterprise edition). Online index operations can reduce the application performance impact of a deployment by supporting concurrent access to the underlying data. For more guidance on online index operations and to determine if your environment supports them, check out the SQL documentation on guidelines for online index operations.
Directing index operations to be performed online across a deployment can be achieved with a command line property new to SqlPackage 162.2, "PerformIndexOperationsOnline". The property defaults to false, in which case, just as in previous versions of SqlPackage, index operations are performed with the index temporarily offline. If set to true, the index operations in the deployment are performed online. When the option is requested on a database where online index operations don't apply, SqlPackage emits a warning and continues the deployment.
An example of this property in use to deploy index changes online is:
sqlpackage /Action:Publish /SourceFile:yourdatabase.dacpac /TargetConnectionString:"yourconnectionstring" /p:PerformIndexOperationsOnline=True
More granular control over the index operations can be achieved by including the ONLINE=ON/OFF keyword in index definitions in your SQL project. The online property will be included in the database model (.dacpac file) from the SQL project build. Deployment of that object with SqlPackage 162.2 and above will follow the keyword used in the definition, superseding any options supplied to the publish command. This applies to both ONLINE=ON and ONLINE=OFF settings.
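For example, an index definition in your SQL project might request online operations explicitly (the table, column, and index names here are illustrative):

```sql
-- Illustrative definition: the ONLINE keyword in the project supersedes
-- the PerformIndexOperationsOnline publish property for this index.
CREATE NONCLUSTERED INDEX IX_Orders_CustomerId
    ON dbo.Orders (CustomerId)
    WITH (ONLINE = ON);
```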
DacFx 162.2 is required for SQL project inclusion of ONLINE keywords with indexes and is included with the Microsoft.Build.Sql SQL projects SDK version 0.1.15-preview. For use with non-SDK SQL projects, DacFx 162.2 will be included in future releases of SQL projects in Azure Data Studio, VS Code, and Visual Studio. The updated SDK or SQL projects extension is required to incorporate the index property into the dacpac file. Only SqlPackage 162.2 is required to leverage the publish property “PerformIndexOperationsOnline”.
Block table recreation
With SqlPackage publish operations, you can apply a new desired schema state to an existing database. You define the object definitions you want in the database and pass a dacpac file to SqlPackage, which in turn calculates the operations necessary to update the target database to match those objects. This set of operations is known as a "deployment plan".
A deployment plan will not destroy user data in the database in the process of altering objects, but it can include computationally intensive steps or have unintended consequences when features like change tracking are in use. In SqlPackage 162.1.167, we've introduced an optional property, /p:AllowTableRecreation, which lets you block any deployment whose plan includes a table recreation step.
/p:AllowTableRecreation=true (default) SqlPackage will recreate tables when necessary and use data migration steps to preserve your user data
/p:AllowTableRecreation=false SqlPackage will check the deployment plan for table recreation steps and stop before starting the plan if a table recreation step is included
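Mirroring the earlier publish example, the property can be set on the command line (the file name and connection string are placeholders):

```shell
sqlpackage /Action:Publish /SourceFile:yourdatabase.dacpac /TargetConnectionString:"yourconnectionstring" /p:AllowTableRecreation=False
```

With this setting, a deployment that would otherwise recreate a table stops before any changes are made, so you can review the plan and decide how to proceed.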
SqlPackage + Parquet files (preview)
Database portability, the ability to take a SQL database from a server and move it to a different server, even across SQL Server and Azure SQL hosting options, is most often achieved through import and export of bacpac files. Reading and writing a single bacpac file can be difficult when a database exceeds 100 GB, and network latency can be a significant concern. SqlPackage 162.1 introduced the option to move the data in your database with parquet files in Azure Blob Storage, reducing the operation overhead on the network and local storage components of your architecture.
Data movement in parquet files is available through the extract and publish actions in SqlPackage. With extract, the database schema (.dacpac file) is written to the local client running SqlPackage and the data is written to Azure Blob Storage in Parquet format. With publish, the database schema (.dacpac file) is read from the local client running SqlPackage and the data is read from Azure Blob Storage in Parquet format.
The parquet data file feature benefits larger databases hosted in Azure with significantly faster data transfer speeds due to the architecture shift of the data export to cloud storage and better parallelization in the SQL engine. This functionality is in preview for SQL Server 2022 and Azure SQL Managed Instance and can be expected to enter preview for Azure SQL Database in the future. Dive into trying out data portability with dacpacs and parquet files from the SqlPackage documentation on parquet files.
Microsoft.Build.Sql
The Microsoft.Build.Sql library for SDK-style projects continues in the preview development phase and version 0.1.15-preview was just released. Code analysis rules have been enabled for execution during build time with .NET 6 and .NET 8, opening the door to performing quality and performance reviews of your database code on the SQL project. To enable code analysis rules on your project, add the item seen on line 7 of the following sample to your project definition (<RunSqlCodeAnalysis>True</RunSqlCodeAnalysis>).
<Project DefaultTargets="Build">
  <Sdk Name="Microsoft.Build.Sql" Version="0.1.15-preview" />
  <PropertyGroup>
    <Name>synapseexport</Name>
    <DSP>Microsoft.Data.Tools.Schema.Sql.Sql160DatabaseSchemaProvider</DSP>
    <ModelCollation>1033, CI</ModelCollation>
    <RunSqlCodeAnalysis>True</RunSqlCodeAnalysis>
  </PropertyGroup>
</Project>
During build time, the objects in the project will be checked against a default set of code analysis rules. Code analysis rules can be customized through DacFx extensibility.
Ways to get involved
In early 2024, we added preview releases of SqlPackage to the dotnet tool feed, such that not only do you have early access to DacFx changes but you can directly test SqlPackage as well. Get the quick instructions on installing and updating the preview releases in the SqlPackage documentation.
Most of the issues fixed in this release were reported through our GitHub community, and in several cases the person reporting put together great bug reports with reproducing scripts. Feature requests are also discussed within the GitHub community in some cases, including the online index operations and blocking table recreation capabilities. All are welcome to stop by the GitHub repository to provide feedback, whether it is bug reports, questions, or enhancement suggestions.
Azure Sphere OS version 24.03 is now available for evaluation
Azure Sphere OS version 24.03 is now available for evaluation in the Retail Eval feed. The retail evaluation period for this release provides 28 days (about 4 weeks) of testing. During this time, please verify that your applications and devices operate properly with this release before it is deployed broadly to devices in the Retail feed.
The 24.03 OS Retail Eval release includes bug fixes and security updates, including the additional security updates from the cancelled 23.10 release and a fix for the sporadic OS update issue that led to that release's cancellation.
For this release, the Azure Sphere OS contains an updated version of cURL. The Azure Sphere OS provides long-term ABI compatibility; however, the mechanics of how cURL-multi operates, particularly with regard to recursive calls, have changed since the initial release of the Azure Sphere OS. Microsoft has performed additional engineering to provide backward compatibility for previously compiled applications to accommodate these changes. Even so, this is a special area of focus for compatibility testing during this evaluation.
If your application leverages cURL-multi (as indicated by the usage of the `curl_multi_add_handle()` API), we encourage you to perform additional testing against the 24.03 OS. These changes do not impact applications that use the cURL-easy interface (as indicated by the usage of the `curl_easy_perform()` API).
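A quick way to check whether an application uses the cURL-multi interface is to search its source tree for the indicator API. The `./src` default below is an assumption; point SRC_DIR at your application's source directory:

```shell
# Illustrative check: search an application's source tree for cURL-multi usage.
# SRC_DIR is an assumption; override it with your app's source directory.
SRC_DIR="${SRC_DIR:-./src}"
if grep -rq "curl_multi_add_handle" "$SRC_DIR" 2>/dev/null; then
  echo "cURL-multi in use: prioritize testing against the 24.03 OS"
else
  echo "no cURL-multi usage found"
fi
```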
Areas of special focus for compatibility testing with 24.03 include apps and functionality utilizing:
cURL and cURL-multi
wolfSSL, TLS-client, and TLS-server
Azure IoT, DPS, IoT Hub, IoT Central, Digital Twins, C SDK
Mutual Authentication
For more information on Azure Sphere OS feeds and setting up an evaluation device group, see Azure Sphere OS feeds and Set up devices for OS evaluation.
For self-help inquiries or technical support, review the Azure Sphere support options.