Category Archives: Microsoft
Contributor Stories: Carlos Solís Salazar
If you’ve ever engaged with the content on the Microsoft Learn platform, it’s highly likely that the material you utilized was written or co-authored by dedicated contributors. These contributors, often volunteers, generously offer their time and expertise to fill knowledge gaps within our content portfolio by suggesting valuable updates to our material, sharing their knowledge within the Microsoft community, and/or answering questions on the Q&A area of the Microsoft Learn platform!
In this interview series, we aim to acquaint ourselves with some of these valuable contributors. Through these conversations, we seek to understand their motivations for sharing their knowledge on Microsoft Learn and gain insights into their experiences.
Today, we’re pleased to introduce one of our top Microsoft Community Champions contributors, Carlos Solís Salazar. Carlos is a dedicated IT professional currently serving as the DOJO Cloud Leader within the Comunidad DOJO sub-community and a Presales Engineer at Business IT. With a specialization in Microsoft Azure and a keen focus on information security, Carlos brings a wealth of experience in spearheading digital transformation initiatives and fortifying technological infrastructures across diverse organizations. Renowned for his innovative mindset and effective problem-solving, Carlos has made a significant impact on the cloud computing community as an Ambassador and Leader of DOJO Cloud, organizing and coordinating educational events. Additionally, his commitment to education is evident through his role as a lecturer at Colegio Universitario Francisco de Miranda, emphasizing skill development in IT security.
Carlos was nominated for this month’s Contributor Spotlight Series because of his long-standing contributions to our Microsoft Learn platform! He’s helped answer more than 300 questions on Microsoft Learn Q&A and is a recognized champion of the Microsoft Community Champions program. Carlos holds key certifications, including Microsoft Certified: Azure Solutions Architect Expert, Microsoft Certified: DevOps Engineer Expert, and Microsoft Certified: Azure Security Engineer Associate, underscoring his dedication to continuous learning and professional excellence. Aside from his Microsoft contributions and certifications, he’s also a regular contributor to various forums, like Stack Overflow.
Meet Carlos:
Sumit: Hello Carlos! It’s great to meet you and spend some time with you today. Please tell us about yourself and your experience.
Carlos: Hey there, Sumit. Nice to meet you, as well. I’m an IT pro with a strong focus on Microsoft Azure and info security. I’m all about digital transformation and beefing up tech infrastructures. Day-to-day, I’m knee-deep in crafting cool solutions with Microsoft Azure and Microsoft 365 to make businesses smoother and safer.
Outside the office grind, I’m super active in the DOJO community, especially around cloud computing. As a DOJO Cloud Leader, I get to organize workshops and talks, creating a space for people to learn and connect over cloud tech. I even spearheaded Microsoft Cloud Day Panama this September, and we’re planning to make it an annual thing!
I also love hitting the stage at conferences to chat about IT trends and my experiences. It’s a great way to spread knowledge and sharpen my public speaking skills. Some events I’ve attended include DOJO Conf 2023, Bsides Panama 2023, Amalia Conf 2022, and morterueloCON 2021.
In short, my world’s a blend of tech know-how, community vibes, and an endless journey of learning and growing – for me and everyone I meet.
Sumit: That’s so exciting. Love the community work that you’ve been doing. As a community leader, what inspired you to start contributing to Microsoft Learn Q&A, and what motivates you to keep contributing?
Carlos: Jumping into the IT world, especially Microsoft Azure, was driven by my love for tech innovation and wanting to make a real difference. I’ve always been intrigued by how tech can revolutionize organizations and enhance daily life. This curiosity pulled me deeper into cloud computing and security, which I think are super crucial today. Stumbling across the Champions program, I was inspired by the desire to belong to a community of professionals who share the same goal: helping others solve their problems with Azure technologies.
What keeps me contributing to the Q&A site? It’s seeing the real-world impact of my contributions. When I share what I know, I see others grow, adapt, and solve their own tech challenges. Plus, the feedback and stories from the community are super valuable—they keep me on my toes, always learning and improving.
Another big push is how fast technology evolves. Staying on top of the latest trends is a challenge, but it’s thrilling, and I enjoy sharing it with others so they can also stay up to date! I’m stoked about being part of this ongoing tech evolution and helping shape how it positively transforms our lives and society.
Sumit: Have your contributions to Microsoft Learn Q&A helped you advance your learning journey and/or career path?
Carlos: Oh, absolutely! Diving into the Microsoft Learn Q&A platform really turbocharged my career and skill set. It’s been awesome for showing off my knack for tackling Microsoft Azure challenges. And dealing with real-world scenarios on the Q&A platform has sharpened my problem-solving skills. I get to look at these problems with a ‘let’s-fix-this’ attitude. Every time I bump into a new question or issue there, it’s not just about using what I already know. It’s also a chance to dig into fresh areas and technologies in the Azure world, which keeps me right at the cutting edge of what’s happening in the industry.
Plus, explaining solutions on the Q&A platform has really upped my communication game. There’s a big difference between solving a problem versus explaining it in a way that’s clear and helpful to others. That ability to communicate well has been a game-changer in my professional life, especially when I’m working with teams or pitching ideas to bigwigs.
Sumit: That’s amazing – really appreciate the growth mindset with which you operate. What about our Champions program, Carlos? I’d love to know: how has your partnership with the Microsoft Community Champions program been?
Carlos: Being part of Microsoft Community Champions? It’s been amazing! The vibe here is all about teamwork and support, from both Microsoft folks and fellow champs. They’ve really helped me up my game, especially in understanding and tackling Azure challenges. What’s cool is that this program isn’t just about tech skills. It’s also boosted my communication and problem-solving abilities big time, thanks to all the great feedback and shared knowledge from the community.
And the perks? They’re like the cherry on top! They’ve got some neat incentives that make being active in the program super rewarding. Won’t give away the surprises, but they definitely make you feel appreciated. I recommend that everyone join the Microsoft Community Champions program.
In short, this partnership with Microsoft has been a total win. It’s been fun, enlightening, and really pushed my professional growth, connecting me more with the Azure world.
Sumit: Glad to hear that. Community work is an integral part of our culture and having said that, is there any advice you would like to give to your fellow peers who are interested in contributing to open-source content?
Carlos: For anyone thinking about diving into open-source content, I say, yes – jump right in! Getting involved in open-source is a game-changer. It’s great for personal growth and getting noticed professionally because when you contribute, you’re not just visible to that project’s community, but also to others in similar fields and folks interested in your work.
And here’s the cool part: you don’t need to be a pro to make a real difference. Just by jumping in, sharing ideas, and doing what you can, you start building a reputation as a knowledgeable and dedicated contributor. And that can happen even if you’re still learning.
Plus, open-source is all about teamwork with a diverse bunch of people from all sorts of backgrounds. It’s an awesome way to expand your network and polish those team skills, which are super important in any job.
So, my advice? Go for it! It’s a rewarding journey that opens up loads of opportunities.
Sumit: Thank you, Carlos – both for your insight and motivation! We’d also like to understand the person behind all this great work. Do you mind sharing something about yourself outside of work?
Carlos: When I’m off the clock, I really focus on staying healthy and connecting with my family and friends. It’s all about finding that sweet spot between work and personal life, you know?
I used to be big on running, but these days, I’m more into chill walks. They’re awesome for getting some fresh air, clearing my head, and keeping fit. Plus, it’s a great way to just slow down and enjoy the moment. Meditation’s another big thing for me. It’s been a game-changer for my mental health, keeping me centered and sharp, both in my personal life and at work.
And I love to read! Lately, I’ve been diving into Stoicism. It’s fascinating stuff – talks a lot about resilience, virtue, and just living a good life. It’s not just brain food; it’s really helped me grow as a person. So yeah, my downtime’s all about doing things that bring me joy and balance. Keeps me energized for everything else!
Sumit: Same goes for me as well. I love taking my amazing peaceful walks with my dog! It’s so rejuvenating. Thank you for sharing a part of your life with us. It was wonderful getting to know you. Before we wrap up, how would you summarize your experience being a Community Champion and a selfless leader who helps the tech ecosystem in various ways?
Carlos: Looking back on my contributions, what really stands out is the importance of sharing knowledge, keeping an active online profile, and building a solid professional brand. It’s about more than just you – it’s how you fit into and enrich the community, whether it’s with DOJO or Microsoft Learn.
Sharing isn’t just passing on info; it’s about creating a learning environment where everyone benefits. Engaging on platforms like Microsoft Learn is awesome for this – you’re both teaching and learning, and that’s a win-win. Having a strong online presence has been key for me. It’s not just about visibility; it’s about connecting, staying updated, and really seeing your impact. These platforms are more than Q&A sites – they’re where professional relationships and reputations are built.
And it’s not just about what you gain. Being part of a community means contributing to something bigger, and there’s a huge sense of fulfillment in that. It’s all about giving back and watching the community grow. I’d suggest everyone join the program and help the community grow.
Sumit: Thank you, Carlos, for your time and for all the hard work you’ve dedicated to helping the tech community. It was really nice to meet you.
Keep up with Carlos:
Personal blog: https://www.soysoliscarlos.com/
LinkedIn: https://www.linkedin.com/in/soysoliscarlos/
Twitter: https://twitter.com/soysoliscarlos
Credly: https://www.credly.com/users/soysoliscarlos/badges
To learn more about:
Contributing to Microsoft Learn, visit the Microsoft Learn Contributor Home.
Contributing to Microsoft Learn Q&A, join the Microsoft Community Champions program using the nomination form.
Introducing the new Microsoft Copilot experience in Teams
Starting today, Copilot for Microsoft 365 users will begin to see a new Copilot experience in Microsoft Teams. Here’s what you need to know.
What’s the new Copilot experience in Teams?
The new Copilot experience is an improved version of the Copilot app in Teams that you may have used before (formerly known as Microsoft 365 Chat). It offers the same functionality and content access as the current app, but with additional benefits, including:
Better prompts, including details specific to you, like the people you work with, your meetings, and your files
Ability to see your Copilot chat history
Access to a library of prompts via Copilot Lab
How is it different from the current experience in Teams?
It’s better than the current Microsoft 365 Chat in Teams. The new Copilot in Teams is easier to find and use, and has improved functionality for a better experience.
This will give you the same Copilot experience in Teams as you get from microsoft365.com, Bing, and copilot.microsoft.com.
What’s the difference between the Copilot app in the Teams app store and the Copilot pinned above chats?
They’re the same, and it’s up to you how you’d like to use Copilot.
The new Copilot app experience is an improved version of the previous M365 Chat app. If you still have Microsoft 365 Chat (or M365 Chat), this is the older version and we recommend unpinning it.
When will this be available?
The new pinned Copilot app in Teams is rolling out now.
Where can I find it and how do I get started?
If you have a Copilot for Microsoft 365 license, the new app will be automatically pinned above your chats. Just open Teams, select “chats” and you’ll see Copilot. Open Copilot and start catching up, creating, and asking! Learn more.
SQL Server – Minimum downtime schema updates
Introduction and Motivation
Many customers want to deploy application and database schema updates to a production environment with zero or minimal downtime. The SQL Server dev team has done a lot of work to reduce or eliminate locking during schema updates (e.g., adding a column), but there are still situations where such schema changes can have a direct and negative impact on production workloads.
One model that has emerged to facilitate minimal downtime deployments is the Blue/Green model. This technique was described in Continuous Delivery by Jez Humble and David Farley and in Martin Fowler’s Blue Green Deployment model. The basic concept is that you have two parallel systems: the blue system is the current production system, while the green system is kept up to date with the DML changes on blue. By making this green update process asynchronous, blocking schema changes can be made on green, and the data changes from the blue database can be caught up afterward.
For application servers, you can either maintain a parallel set of blue and green servers or, if possible, deploy green application updates to the blue servers simultaneously. The challenge is how to seamlessly migrate users from the blue to the green version of the application, since in most cases the application maintains some kind of state. There may be timing issues and conflicting write scenarios, but these are outside the scope of this blog. Here we will focus on the database tier.
Michael J. Swart describes a blue/green type deployment, but his company does not actually use separate blue and green databases; instead, they use an “aqua” database with versioned SQL objects. Martin Fowler suggested potentially making the database read-only during the transition, but that would be very difficult to achieve, so Michael’s company decided to combine multiple versions into a single database. This technique requires very careful planning of schema changes and may even preclude certain types of changes.
In this blog we want to explore the ability to do a full blue/green set of databases and how we can enable change propagation from blue to green databases using an asynchronous update method with minimal performance impact on the blue server.
One option worth exploring that may offer an alternative to handling change data and schema changes is a combination of Azure Data Factory (ADF) – Native Change Data Capture (CDC) and ADF Data Flow – Schema Drift features.
In developing this Proof of Concept (POC) application, we considered using Availability Groups (AGs), Replication, Change Data Capture (CDC), and Change Tracking (CT). The synchronous nature of AGs and the fact that the secondary (green) is read-only disqualify them for this application. Replication and CDC both turn on verbose logging and require a log reader, so we decided that CT was the lighter-weight solution to focus on.
The key requirement of the POC was to allow green schema changes without breaking the data change synchronization from blue. Not only do we need to handle any schema differences between tables in blue and green, but we also need to be able to pause the update activity during impactful schema changes.
During a review of the initial solution, it became clear that there was a requirement for additional column mapping capabilities from blue to green: the ability to replace a column with a primary key from a lookup table using the original column value, rename columns, change column types, and split columns between two tables.
The result of our POC work is a .NET 7 C# console application that uses an application configuration file and an optional Excel file to drive the synchronization and mapping process from blue to green. Although .NET 7 is cross-platform, the OLEDB driver used to read Excel is Windows-only.
Once the switchover from blue to green happens, the intent behind this design is to start the process again by taking a backup of the new blue database and restoring it over the old blue database, making it the new green database. This is likely easier than trying to synchronize the schema by applying all of the change scripts applied to green and then synchronizing the data.
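In T-SQL terms, that reset step might look roughly like the following (a sketch only; database names, file paths, and logical file names are placeholders, not part of the POC):

-- After cutover, the old green database is now production ("new blue").
-- Re-seed the green side from a fresh backup of it.
BACKUP DATABASE BlueDb TO DISK = N'C:\Backups\BlueDb.bak' WITH INIT;
RESTORE DATABASE GreenDb FROM DISK = N'C:\Backups\BlueDb.bak'
WITH REPLACE,
     MOVE N'BlueDb'     TO N'C:\Data\GreenDb.mdf',      -- logical names are placeholders
     MOVE N'BlueDb_log' TO N'C:\Data\GreenDb_log.ldf';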
Application Installation
There is no installation required, just unzip the application into a folder. The package includes an application configuration (app.config) file and optional Excel file. The app.config includes blue (source) and green (target) connection strings, the last synchronization version starting point, optional Excel input, and several other parameters.
The input Excel file allows five mapping operations from blue to green (operation input column):
N = Name change.
V = Write CT current Version into a new column.
T = Type change.
S = Split column.
L = Replace column with Lookup.
The input spreadsheet columns are described in the User’s Guide included in the download package.
The application assumes that Change Tracking has been turned on and that at least one table has been enabled for change tracking (see the script in Appendix A). At the configurable wait interval set in app.config, the application checks whether any data changes have been made in the blue environment; if so, it bulk inserts the changes into a staging table in the green database, from which it applies the changes to the green table(s). In this way, synchronization between Azure SQL Database databases is also supported.
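For reference, enabling Change Tracking looks roughly like this (a minimal sketch standing in for the Appendix A script included in the download; database and table names are placeholders):

-- Enable Change Tracking at the database level; retention must cover
-- the longest gap you expect between synchronization runs.
ALTER DATABASE BlueDb
SET CHANGE_TRACKING = ON
(CHANGE_RETENTION = 2 DAYS, AUTO_CLEANUP = ON);

-- Enable Change Tracking for each table to be synchronized.
ALTER TABLE dbo.Customer
ENABLE CHANGE_TRACKING
WITH (TRACK_COLUMNS_UPDATED = ON);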
Tables are processed in dependency order, so if there are foreign key (FK) constraints defined, the “dimension” table insert and update operations are processed first, and then the referencing “fact” tables are processed. For dimension tables with references, row deletes are deferred until after the dependent fact tables are processed, to avoid FK delete conflicts.
To avoid processing changes more than once, when starting a synchronization the application gets the current change tracking version and only processes changes between the previous sync version and this version. This avoids the need to use read committed snapshot isolation (RCSI) during change processing. There is also some additional processing done on the green database to avoid duplicate inserts and updates of non-existent records, to ensure the set operations succeed and are not rolled back.
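Conceptually, each synchronization cycle brackets its work between two Change Tracking versions, along these lines (an illustrative sketch, not the application’s actual code; table and key names are hypothetical):

-- Version window for this cycle: changes after the last synced version,
-- up to the version captured at the start of the run.
DECLARE @fromVersion BIGINT = 0; -- in practice, persisted from the previous run
DECLARE @toVersion BIGINT = CHANGE_TRACKING_CURRENT_VERSION();

-- Net inserts/updates/deletes for one tracked table within the window.
SELECT ct.SYS_CHANGE_OPERATION, ct.CustomerId, c.*
FROM CHANGETABLE(CHANGES dbo.Customer, @fromVersion) AS ct
LEFT JOIN dbo.Customer AS c ON c.CustomerId = ct.CustomerId
WHERE ct.SYS_CHANGE_VERSION <= @toVersion;
-- After the changes are applied to green, @toVersion becomes the new starting point.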
To run the application, make the appropriate changes to the app.config file and the optional Excel file, then start the application. While the application is running, the following single-key commands are available:
X – Exit
E – Reload the Excel mapping file
T – Show statistics at a table level (DML operations per table)
S – Show total statistics (total DML operations)
P – Pause synchronization
R – Resume synchronization
The application creates an XEvent listener on the green database to monitor schema changes to tables. When a change-tracked table is modified, the application automatically reloads the metadata for the green table and re-evaluates the column mappings. To prevent application failures, you should pause synchronization during green schema modifications.
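An Extended Events session for this purpose could look something like the following (a sketch under assumptions; the application’s actual session definition may differ, and the database name is a placeholder):

-- Watch for ALTER operations against objects in the green database.
CREATE EVENT SESSION GreenSchemaChanges ON SERVER
ADD EVENT sqlserver.object_altered(
    WHERE sqlserver.database_name = N'GreenDb')
ADD TARGET package0.ring_buffer;
ALTER EVENT SESSION GreenSchemaChanges ON SERVER STATE = START;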
Every time there is a synchronization, a console message is written that includes the starting and ending version and the number of inserts, updates, and deletes. The delete count shows two numbers: the total deletes and, in brackets, the number of deferred deletes.
The total operations are displayed when the application exits, but they are also available during application execution using the T command for totals by table or the S command for overall totals.
The sample application can be downloaded here: CTSync application.
Note: The sample application is a Proof of Concept to explore the feasibility of the technique proposed. It comes with no warranties or support; use it at your own risk. There are additional details in the User’s Guide included in the package. For information related to support for this tool, please refer to the license agreement that is included in the download.
Feedback and suggestions
If you have feedback or suggestions for improving this team asset, please contact the Azure Databases SQL Customer Success Engineering Team. Thank you for your support!
Converting an AWS Lambda Function in Go into an Azure Function using LLMs
Disclaimer: This article is an experimental implementation about applying LLMs to convert AWS Lambda functions into Azure Functions. It is not intended as a definitive guide for the process and does not guarantee successful conversion. The outcome depends on the specific code you intend to convert and the LLM that you are using.
What would it take to convert a Lambda Function into an Azure Function?
What does a Lambda look like?
package main

import (
    "context"
    "fmt"

    "github.com/aws/aws-lambda-go/lambda"
)

type MyEvent struct {
    Name string `json:"name"`
}

type MyResponse struct {
    Message string `json:"message"`
}

func HandleRequest(ctx context.Context, event *MyEvent) (*MyResponse, error) {
    if event == nil {
        return nil, fmt.Errorf("received nil event")
    }
    message := fmt.Sprintf("Hello %s!", event.Name)
    return &MyResponse{Message: message}, nil
}

func main() {
    lambda.Start(HandleRequest)
}
The aws-lambda-go runtime accepts handler functions with any of these signatures:
func () error
func (TIn) error
func () (TOut, error)
func (context.Context) error
func (context.Context, TIn) error
func (context.Context) (TOut, error)
func (context.Context, TIn) (TOut, error)
What should be the result?
package main

import (
    "fmt"
    "net/http"

    "github.com/gin-gonic/gin"
)

type MyEvent struct {
    Name string `json:"name"`
}

type MyResponse struct {
    Message string `json:"message"`
}

func HandleRequest(ctx *gin.Context) {
    if ctx.Request.Body == nil {
        errorMsg := "received nil event"
        ctx.JSON(http.StatusBadRequest, gin.H{"error": errorMsg})
        return
    }
    var event MyEvent
    err := ctx.ShouldBindJSON(&event)
    if err != nil {
        errorMsg := fmt.Sprintf("error on reading request body: %v\n", err.Error())
        ctx.JSON(http.StatusBadRequest, gin.H{"error": errorMsg})
        return
    }
    message := fmt.Sprintf("Hello %s!", event.Name)
    ctx.JSON(http.StatusOK, &MyResponse{Message: message})
}

func main() {
    r := gin.Default()
    r.Handle(http.MethodPost, "/HandleRequest", HandleRequest)
    r.Run()
}
The attempts
Chain-of-Thought (CoT) Prompt
Few shots using an example selector
Fine tuning
The solution
Language agent tree search
Simple prompt
You will be given a lambda function code. Rewrite the code without using lambda code and using a GinGonic server instead.
package main

import (
    "fmt"
    "net/http"

    "github.com/gin-gonic/gin"
)

type MyEvent struct {
    Name string `json:"name"`
}

type MyResponse struct {
    Message string `json:"message"`
}

func HandleRequest(c *gin.Context) {
    var event MyEvent
    if err := c.ShouldBindJSON(&event); err != nil {
        c.JSON(http.StatusBadRequest, gin.H{"error": err.Error()})
        return
    }
    message := fmt.Sprintf("Hello %s!", event.Name)
    c.JSON(http.StatusOK, MyResponse{Message: message})
}

func main() {
    router := gin.Default()
    router.POST("/handle-request", HandleRequest)
    router.Run() // By default, it listens on :8080
}
You are an AI Go assistant that writes comprehensive and intuitive unit tests for functions. You will be given a Go AWS Lambda function that
is being converted to a GinGonic http server. Your job is to generate a comprehensive set
of tests to ensure its functionality remains consistent. The tests should cover all major
functionality of the function, including error handling, input validation, and expected output.
Here is the Go code for the AWS Lambda function:
{{lambdaCode}}
Here is the Go code for the GinGonic http server:
{{azureCode}}
package main

import (
    "bytes"
    "net/http"
    "net/http/httptest"
    "testing"

    "github.com/gin-gonic/gin"
    "github.com/stretchr/testify/assert"
)

func setupRouter() *gin.Engine {
    gin.SetMode(gin.TestMode)
    r := gin.Default()
    r.POST("/event", HandleRequest)
    return r
}

func TestHandleRequest_Successful(t *testing.T) {
    router := setupRouter()
    w := httptest.NewRecorder()
    body := bytes.NewBufferString(`{"name":"John Doe"}`)
    req, _ := http.NewRequest("POST", "/event", body)
    req.Header.Set("Content-Type", "application/json")
    router.ServeHTTP(w, req)
    assert.Equal(t, http.StatusOK, w.Code)
    assert.JSONEq(t, `{"message":"Hello John Doe!"}`, w.Body.String())
}

func TestHandleRequest_BadRequest(t *testing.T) {
    router := setupRouter()
    w := httptest.NewRecorder()
    body := bytes.NewBufferString(`{"nom":"John Doe"}`) // Incorrect JSON field name
    req, _ := http.NewRequest("POST", "/event", body)
    req.Header.Set("Content-Type", "application/json")
    router.ServeHTTP(w, req)
    assert.Equal(t, http.StatusBadRequest, w.Code)
}

func TestHandleRequest_EmptyBody(t *testing.T) {
    router := setupRouter()
    w := httptest.NewRecorder()
    body := bytes.NewBufferString(`{}`) // Empty JSON body
    req, _ := http.NewRequest("POST", "/event", body)
    req.Header.Set("Content-Type", "application/json")
    router.ServeHTTP(w, req)
    assert.Equal(t, http.StatusOK, w.Code)
    assert.JSONEq(t, `{"message":"Hello !"}`, w.Body.String()) // Expect default behavior for empty name
}
package main

import (
    "net/http"
    "net/http/httptest"
    "strings"
    "testing"

    "github.com/gin-gonic/gin"
    "github.com/stretchr/testify/assert"
)

func TestHandleRequest(t *testing.T) {
    tests := []struct {
        name         string
        request      func() *http.Request
        expectedCode int
        expectedBody string
    }{
        {
            name: "success",
            request: func() *http.Request {
                req := `{"name":"Ana"}`
                return httptest.NewRequest(http.MethodPost, "/handle", strings.NewReader(req))
            },
            expectedCode: http.StatusOK,
            expectedBody: `{"message":"Hello Ana!"}`,
        },
    }
    for _, tt := range tests {
        t.Run(tt.name, func(t *testing.T) {
            router := gin.Default()
            router.POST("/handle", HandleRequest)
            w := httptest.NewRecorder()
            router.ServeHTTP(w, tt.request())
            assert.Equal(t, tt.expectedCode, w.Code)
            assert.Equal(t, tt.expectedBody, w.Body.String())
        })
    }
}
Feedback: |
TestsPassed: … <The code for the test I manually added>
TestsFailed: … <The code for the auto-generated one>
ErrorTrace:
C:/Users/XXXXX/AppData/Local/Temp/go-lats/lats_test.go:44
Error: Not equal:
expected: 400
actual : 200
Test: TestHandleRequest_BadRequest
Score: 0.75
You are an AI Go assistant. You will be given your past function implementation and a
series of unit tests. Your goal is to write a few sentences to explain why your
implementation is wrong as indicated by the tests. You will need this as a hint when you
try again later. Only provide the few sentence description in your answer, not the
implementation.
{{examples}}
[function impl]:
{{azure_code}}
[unit test results]:
{{feedback}}
[self-reflection]:
The implementation assumes that when the JSON body does not match the expected
request (such as wrong field names), the error should be reported with a status code of
`http.StatusBadRequest`. However, the failing unit test `TestHandleRequest_BadRequest`
suggests that this behavior is not occurring as expected. The test sends a JSON body with
an incorrect field name (`"nom"` instead of `"name"`) and expects a
`http.StatusBadRequest` status code due to binding failure, but the test reports receiving
a `http.StatusOK` status code instead. This indicates that the `ShouldBindJSON` method
might not be detecting the discrepancy between the expected and received JSON fields or
the error handling logic after the `ShouldBindJSON` call is not functioning correctly.
You are an AI Go assistant. You will be given your past function implementation, a
series of unit tests, and a hint to change the implementation appropriately. Write your
full implementation, Rewrite the code without using lambda code and using a GinGonic
server instead. Do not by any chance use lambda code.
Use a Go code block to write your response. For example:
```go
func main() {
    fmt.Println("Hello, World!")
}
```
{{examples}}
[user message]:
{{lambda_code}}
[assistant message]:
{{azure_code}}
[user message]:
[unit test results from previous impl]:
{{feedback}}
[assistant message]:
[reflection on previous impl]:
{{self-reflection}}
[user message]:
Try to convert this code again:
{{lambda_code}}
package main

import (
    "fmt"
    "net/http"

    "github.com/gin-gonic/gin"
)

type MyEvent struct {
    Name string `json:"name"`
}

type MyResponse struct {
    Message string `json:"message"`
}

// HandleRequest processes the event and returns a response
func HandleRequest(c *gin.Context) {
    var event MyEvent
    // Bind the JSON to the struct; if nil or incorrect, return an error.
    if err := c.ShouldBindJSON(&event); err != nil {
        c.JSON(http.StatusBadRequest, gin.H{"error": err.Error()})
        return
    }
    // Check if the name is an empty string
    if event.Name == "" {
        // Respond with an error message if Name is empty
        c.JSON(http.StatusBadRequest, gin.H{"error": "name field is required"})
        return
    }
    // Constructing the message
    message := fmt.Sprintf("Hello %s!", event.Name)
    response := MyResponse{Message: message}
    // Sending back the response as JSON
    c.JSON(http.StatusOK, response)
}

func main() {
    // Set up Gin router
    r := gin.Default()
    // Define POST route
    r.POST("/event", HandleRequest)
    // Run the Gin server on a specified port
    r.Run(":8080")
}
Join Teams for work or school meetings with personal account
We are improving the ways to join Teams meetings and have started to roll out an improvement enabling you to join a Teams meeting organized by a work or school user with your signed-in personal account. Read more on the Teams Insider blog and join Teams Insider to try this in Teams free on Windows 11 today!
Join Teams for work or school meeting with your personal account – Teams Insider
Intelligent App Chronicles: Azure API Management as an Enterprise API Gateway
The Intelligent App Chronicles for Healthcare is a webinar series designed to provide health and life sciences companies with a comprehensive guide to building intelligent healthcare applications.
The series will cover a wide range of topics including Azure Container Services, Azure AI Services, Azure Integration Services, and innovative solutions that can accelerate your Intelligent app journey. By attending these webinars, you will learn how to leverage the power of intelligent systems to build scalable and secure healthcare solutions that can transform the way you deliver care. Our hosts will be Shelly (Finch) Avery | LinkedIn and Matthew Anderson | LinkedIn.
Our next session will be on Feb 20th at 9:00 PT / 10:00 MT / 11:00 CT / 12:00 ET – Click here to Register.
Overview:
Please join us for an informative session on using Azure API Management as an enterprise API gateway to create intelligent and secure healthcare applications.
Our speaker this week is Rob McKenna, Principal Technical Specialist for Azure Apps and Innovation, who will cover topics such as:
Benefits of a centralized and shared API gateway
Steps to get your enterprise teams started
Networking considerations for regulated industries
How to ensure the internal and external availability of your APIs
How to improve your developer velocity and use DevOps for API management and developer experience tooling
Don’t miss this opportunity to learn from the experts and take your healthcare applications to the next level. Register now for the Intelligent App Chronicles for Healthcare webinar series here!
Thanks for reading!
Please follow aka.ms/HLSBlog for all this great content.
Thanks for reading, Shelly Avery | Email, LinkedIn
Hunting for QR Code AiTM Phishing and User Compromise
In the dynamic landscape of adversary-in-the-middle (AiTM) attacks, the Microsoft Defender Experts team has recently observed a notable trend – QR code-themed phishing campaigns. The attackers employ deceptive QR codes to manipulate users into accessing fraudulent websites or downloading harmful content.
These attacks exploit the trust and curiosity of users who scan QR codes without verifying their source or content. Attackers can create QR codes that redirect users to phishing sites that mimic legitimate ones, such as banks, social media platforms, or online services. The targeted user scans the QR code, subsequently being redirected to a phishing page. Following user authentication, attackers steal the user’s session token, enabling them to launch various malicious activities, including Business Email Compromise attacks and data exfiltration attempts. Alternatively, attackers can create QR codes that prompt users to download malware or spyware onto their devices. These attacks can result in identity theft, financial loss, data breach, or device compromise.
This blog explains the mechanics of QR code phishing, and details how Defender Experts hunt for these phishing campaigns. Additionally, it outlines the procedures in place to notify customers about the unfolding attack narrative and its potential ramifications.
Why is QR code phishing a critical threat?
The Defender Experts team has observed that QR code campaigns are often massive and large-scale in nature. Before launching these campaigns, attackers typically conduct reconnaissance attempts to gather information on targeted users. The campaigns are then sent to large groups of people within an organization, often exceeding 1,000 users, with varying parameters across subject, sender, and body of the emails.
The identity compromises and stolen session tokens resulting from these campaigns are proportional to their large scale. In recent months, Defender Experts have observed QR code campaigns growing from 10% to 30% of total phishing campaigns. Since the campaigns do not follow a template, it can be difficult to scope and evaluate the extent of compromise. It is crucial for organizations to be aware of this trend and take steps to protect their employees from falling victim to QR code phishing attacks.
Understanding the intent of QR code phishing attacks
The QR code phishing email can have one of the below intents:
Credential theft: The majority of these campaigns redirect the user to an AiTM phishing website for session token theft. The authentication method can be single-factor, where only the user’s password is compromised and the initial sign-in attempts are unsuccessful; in these scenarios, the attacker signs in later with the compromised password and bypasses multifactor authentication (MFA) through MFA fatigue attacks. Alternatively, the user can be redirected to an AiTM phishing page where the credentials, MFA parameters, and session token are compromised in real time.
Malware distribution: In these scenarios, once the user scans the QR code, malware/spyware/adware is automatically downloaded on the mobile device.
Financial theft: These campaigns use QR codes to trick the user into making a fake payment or giving away their banking credentials. The user may scan the QR code and be taken to a bogus payment gateway or a fake bank website. The attacker can then access the user’s account later and bypass the second factor authentication by contacting the user via email or phone.
How Defender Experts approach QR code phishing
In QR code phishing attempts, the targeted user scans the QR code on a personal, non-managed mobile device, which falls outside the scope of the Microsoft Defender protected environment. This is one of the key challenges for detection. In addition to detections based on image recognition or optical character recognition, a novel approach was necessary to detect QR code phishing attempts.
Defender Experts have researched identifying patterns across the QR code phishing campaigns and malicious sign-in attempts and devised the following detection approaches:
Precursor events: user activities
Suspicious senders
Suspicious subjects
Email clustering
User signals
Suspicious sign-in attempts
1. Hunting for user behavior:
This is one of the primary detections that helps Defender Experts surface suspicious sign-in attempts from QR code phishing campaigns. Although the user scans the QR code from an email on their personal mobile device, in the majority of scenarios the access to the phishing email is recorded with the MailItemsAccessed mailbox-auditing action.
The majority of QR code campaigns have image (png/jpg/jpeg/gif) or document (pdf/doc/xls) attachments – yes, QR codes are embedded in Excel attachments too! The campaigns can also include a legitimate URL that redirects to a phishing page with a malicious QR code.
A malicious sign-in attempt with session token compromise that follows the QR code scan is always observed from non-trusted devices with a medium/high risk score for the session.
This detection approach correlates a user accessing an email with image/document attachments and a risky sign-in attempt from a non-trusted device in close time proximity, and validates that the location from which the email item was accessed differs from the location of the sign-in attempt.
Advanced Hunting Query:
let successfulRiskySignIn = materialize(AADSignInEventsBeta
| where Timestamp > ago(1d)
| where isempty(DeviceTrustType)
| where IsManaged != 1
| where IsCompliant != 1
| where RiskLevelDuringSignIn in (50, 100)
| project Timestamp, ReportId, IPAddress, AccountUpn, AccountObjectId, SessionId, Country, State, City
);
let suspiciousSignInUsers = successfulRiskySignIn
| distinct AccountObjectId;
let suspiciousSignInIPs = successfulRiskySignIn
| distinct IPAddress;
let suspiciousSignInCities = successfulRiskySignIn
| distinct City;
CloudAppEvents
| where Timestamp > ago(1d)
| where ActionType == "MailItemsAccessed"
| where AccountObjectId in (suspiciousSignInUsers)
| where IPAddress !in (suspiciousSignInIPs)
| where City !in (suspiciousSignInCities)
| join kind=inner successfulRiskySignIn on AccountObjectId
| where AccountObjectId in (suspiciousSignInUsers)
| where (Timestamp - Timestamp1) between (-5min .. 5min)
| extend folders = RawEventData.Folders
| mv-expand folders
| extend items = folders.FolderItems
| mv-expand items
| extend InternetMessageId = tostring(items.InternetMessageId)
| project Timestamp, ReportId, IPAddress, InternetMessageId, AccountObjectId, SessionId, Country, State, City
2. Hunting for sender patterns:
The sender attributes play a key role in the detection of QR code campaigns. Since the campaigns are typically large scale in nature, 95% of them do not involve phishing emails from compromised trusted vendors. Most emails are sent from newly created domains or domains that are not prevalent in the organization.
Since the attack involves multiple user actions (scanning the QR code from a mobile device and completing the authentication), unlike typical phishing with simple URL clicks, the attackers induce a sense of urgency by impersonating IT support, HR support, payroll, or the administrator team, or by using a display name indicating the email is sent on behalf of a known high-value target in the organization (e.g., “Lara Scott on behalf of CEO”).
In this detection approach, we correlate email from non-prevalent senders in the organization with impersonation intents.
Advanced Hunting Query:
let PhishingSenderDisplayNames = ()
{
pack_array("IT", "support", "Payroll", "HR", "admin", "2FA", "notification", "sign", "reminder", "consent", "workplace",
"administrator", "administration", "benefits", "employee", "update", "on behalf");
};
let suspiciousEmails = EmailEvents
| where Timestamp > ago(1d)
| where isnotempty(RecipientObjectId)
| where isnotempty(SenderFromAddress)
| where EmailDirection == "Inbound"
| where DeliveryAction == "Delivered"
| join kind=inner (EmailAttachmentInfo
| where Timestamp > ago(1d)
| where isempty(SenderObjectId)
| where FileType has_any ("png", "jpg", "jpeg", "bmp", "gif")
) on NetworkMessageId
| where SenderDisplayName has_any (PhishingSenderDisplayNames())
| project Timestamp, Subject, FileName, SenderFromDomain, RecipientObjectId, NetworkMessageId;
let suspiciousSenders = suspiciousEmails | distinct SenderFromDomain;
let prevalentSenders = materialize(EmailEvents
| where Timestamp between (ago(7d) .. ago(1d))
| where isnotempty(RecipientObjectId)
| where isnotempty(SenderFromAddress)
| where SenderFromDomain in (suspiciousSenders)
| where EmailDirection == "Inbound"
| where DeliveryAction == "Delivered"
| distinct SenderFromDomain);
suspiciousEmails
| where SenderFromDomain !in (prevalentSenders)
| project Timestamp, Subject, FileName, SenderFromDomain, RecipientObjectId, NetworkMessageId
Correlating suspicious emails that have image attachments from a new sender with risky sign-in attempts for the recipients can also surface QR code phishing campaigns and user compromises.
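As a rough sketch, that correlation could reuse the result shapes of the two hunts above (illustrative only; suspiciousEmails and successfulRiskySignIn are the let statements defined in the earlier queries and would need to be combined into a single query, and the one-hour window is an assumption):

suspiciousEmails
| join kind=inner successfulRiskySignIn on $left.RecipientObjectId == $right.AccountObjectId
// Sign-in shortly after email delivery; Timestamp1 is the sign-in time from the right side
| where (Timestamp1 - Timestamp) between (0min .. 1h)
| project EmailTime = Timestamp, SignInTime = Timestamp1, RecipientObjectId, NetworkMessageId, SenderFromDomain, IPAddress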
3. Hunting for subject patterns:
In addition to impersonating IT and HR teams, attackers also craft campaigns with actionable subjects (e.g., “MFA completion required”, “Digitally sign documents”). The targeted user is asked to complete the highlighted action by scanning the QR code in the email and providing credentials and an MFA token.
In most cases, these automated phishing campaigns also include a personalized element, where the user’s first name/last name/alias/email address is included in the subject. The email address of the targeted user is also embedded in the URL behind the QR code. This serves as a unique tracker for the attacker to identify emails successfully delivered and QR codes scanned.
In this detection, we track emails with suspicious keywords in subjects or personalized subjects. To detect personalized subjects, we track campaigns where the first three words or last three words of the subject are the same, but the other values are personalized/unique.
For example:
Alex, you have an undelivered voice message
Bob, you have an undelivered voice message
Charlie, you have an undelivered voice message
Your MFA update is pending, Alex
Your MFA update is pending, Bob
Your MFA update is pending, Charlie
Advanced Hunting Query:
Personalized campaigns based on the first few keywords:
EmailEvents
| where Timestamp > ago(1d)
| where EmailDirection == "Inbound"
| where DeliveryAction == "Delivered"
| where isempty(SenderObjectId)
| extend words = split(Subject, " ")
| project firstWord = tostring(words[0]), secondWord = tostring(words[1]), thirdWord = tostring(words[2]), Subject, SenderFromAddress, RecipientEmailAddress, NetworkMessageId
| summarize SubjectsCount = dcount(Subject), RecipientsCount = dcount(RecipientEmailAddress), suspiciousEmails = make_set(NetworkMessageId, 10) by firstWord, secondWord, thirdWord
, SenderFromAddress
| where SubjectsCount >= 10
Personalized campaigns based on the last few keywords:
EmailEvents
| where Timestamp > ago(1d)
| where EmailDirection == "Inbound"
| where DeliveryAction == "Delivered"
| where isempty(SenderObjectId)
| extend words = split(Subject, " ")
| project firstLastWord = tostring(words[-1]), secondLastWord = tostring(words[-2]), thirdLastWord = tostring(words[-3]), Subject, SenderFromAddress, RecipientEmailAddress, NetworkMessageId
| summarize SubjectsCount = dcount(Subject), RecipientsCount = dcount(RecipientEmailAddress), suspiciousEmails = make_set(NetworkMessageId, 10) by firstLastWord, secondLastWord, thirdLastWord
, SenderFromAddress
| where SubjectsCount >= 10
Campaign with suspicious keywords:
let PhishingKeywords = ()
{
pack_array("account", "alert", "bank", "billing", "card", "change", "confirmation",
"login", "password", "mfa", "authorize", "authenticate", "payment", "urgent", "verify", "blocked");
};
EmailEvents
| where Timestamp > ago(1d)
| where EmailDirection == "Inbound"
| where DeliveryAction == "Delivered"
| where isempty(SenderObjectId)
| where Subject has_any (PhishingKeywords())
4. Hunting for attachment name patterns:
Based on historical investigations of QR code campaigns, Defender Experts have identified that attachment names are usually randomized by the attackers, meaning every email has a differently named QR code attachment with a high level of randomization. Emails with randomly named attachments from the same sender to multiple recipients, typically more than 50, can potentially indicate a QR code phishing campaign.
Campaign with randomly named attachments:
// Assumes nonPrevalentSenders is defined earlier in the query, e.g. the
// suspiciousSenders from section 2 filtered down to those not in prevalentSenders.
let emailStartTime = ago(1d);
let emailEndTime = now();
EmailAttachmentInfo
| where Timestamp between (emailStartTime .. emailEndTime)
| where SenderFromAddress in (nonPrevalentSenders)
| where FileType in ("png", "jpg", "jpeg", "gif", "svg")
| where isnotempty(FileName)
| extend firstFourFileName = substring(FileName, 0, 4)
| summarize RecipientsCount = dcount(RecipientEmailAddress), FirstFourFilesCount = dcount(firstFourFileName), suspiciousEmails = make_set(NetworkMessageId, 10) by SenderFromAddress
| where FirstFourFilesCount >= 10
5. Hunting for user signals/clusters
To craft effective large-scale QR code phishing attacks, the attackers perform reconnaissance across social media to gather target email addresses, preferences, and much more. These campaigns are sent to 1,000+ users in the organization with luring subjects and contents based on those preferences. However, Defender Experts have observed that at least one user typically finds the campaign suspicious and reports the email, which generates this alert: “Email reported by user as malware or phish.”
This alert can be another starting point for hunting activity to identify the scope of the campaign and any compromises. Since the campaigns are specifically crafted for each group of users, scoping based on sender/subject/filename might not be effective. Microsoft Defender for Office 365 offers a heuristic-based approach based on email content as a solution to this problem. Emails with similar content that are likely to be from one attacker are clustered together, and the cluster ID is populated in the EmailClusterId field of the EmailEvents table.
The clusters can include all phishing attempts from the attackers against the organization so far; they can aggregate emails with malicious URLs, attachments, and QR codes, based on similarity. This makes them a powerful way to explore the attacker’s persistent phishing techniques and the repeatedly targeted users.
Below is a sample query on scoping a campaign from the email reported by the end user. The same scoping logic can be used on the previously discussed hunting hypotheses as well.
let suspiciousClusters = EmailEvents
| where Timestamp > ago(7d)
| where EmailDirection == "Inbound"
| where NetworkMessageId in (<List of suspicious Network Message Ids from Alerts>)
| distinct EmailClusterId;
EmailEvents
| where Timestamp > ago(7d)
| where EmailDirection == "Inbound"
| where EmailClusterId in (suspiciousClusters)
| summarize make_set(Subject), make_set(SenderFromDomain), dcount(RecipientObjectId), dcount(SenderDisplayName) by EmailClusterId
6. Hunting for suspicious sign-in attempts:
In addition to detecting the campaigns, it is critical to identify the compromised identities. To surface the identities compromised by AiTM, we can use the approaches below.
Risky sign-in attempt from a non-managed device
Any sign-in attempt from a non-managed, non-compliant, untrusted device should be taken into consideration, and a medium or high risk score for the sign-in attempt makes the activity more anomalous. Monitoring these sign-in attempts can surface identity compromises.
AADSignInEventsBeta
| where Timestamp > ago(7d)
| where IsManaged != 1
| where IsCompliant != 1
//Filtering only for medium and high risk sign-in
| where RiskLevelDuringSignIn in (50, 100)
| where ClientAppUsed == "Browser"
| where isempty(DeviceTrustType)
| where isnotempty(State) or isnotempty(Country) or isnotempty(City)
| where isnotempty(IPAddress)
| where isnotempty(AccountObjectId)
| where isempty(DeviceName)
| where isempty(AadDeviceId)
| project Timestamp,IPAddress, AccountObjectId, ApplicationId, SessionId, RiskLevelDuringSignIn, BrowserId
Suspicious sign-in attributes
Sign-in attempts from untrusted devices with an empty user agent or operating system, or an anomalous BrowserId, can also indicate identity compromise from AiTM.
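A starting point for this hunt might look like the following (an illustrative sketch under assumptions; verify that columns such as UserAgent and OSPlatform are populated in your tenant’s AADSignInEventsBeta data before relying on them):

AADSignInEventsBeta
| where Timestamp > ago(7d)
| where isempty(DeviceTrustType)
// Browser sign-ins with missing client fingerprints are unusual for real users
| where ClientAppUsed == "Browser"
| where isempty(UserAgent) or isempty(OSPlatform)
| project Timestamp, AccountUpn, AccountObjectId, IPAddress, UserAgent, OSPlatform, BrowserId, RiskLevelDuringSignIn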
Defender Experts also recommend monitoring sign-ins from known malicious IP addresses. Although the delivery mode of the phishing campaigns differs (QR code, HTML attachment, URL), the sign-in infrastructure often remains the same. Monitoring the sign-in patterns of compromised users and continuously scoping sign-in attempts based on known patterns can also surface identity compromises from AiTM.
Mitigations
Apply these mitigations to reduce the impact of this threat:
Educate users about the risks of QR code phishing emails.
Implement Microsoft Defender for Endpoint – Mobile Threat Defense on mobile devices used to access enterprise assets.
Enable Conditional Access policies in Microsoft Entra, especially risk-based access policies. Conditional Access policies evaluate sign-in requests using additional identity-driven signals, such as user or group membership, IP address location information, and device status, and are enforced for suspicious sign-ins. Organizations can protect themselves from attacks that leverage stolen credentials by enabling policies such as compliant device requirements, trusted IP address requirements, or risk-based policies with proper access control. If you are still evaluating Conditional Access, use security defaults as an initial baseline set of policies to improve identity security posture.
Implement continuous access evaluation.
Leverage Microsoft Edge to automatically identify and block malicious websites, including those used in this phishing campaign, and Microsoft Defender for Office 365 to detect and block malicious emails, links, and files.
Monitor suspicious or anomalous activities in Microsoft Entra ID Protection. Investigate sign-in attempts with suspicious characteristics (e.g., location, ISP, user agent, and use of anonymizer services).
Implement Microsoft Entra passwordless sign-in with FIDO2 security keys.
Turn on network protection in Microsoft Defender for Endpoint to block connections to malicious domains and IP addresses.
If you’re interested in learning more about our Defender Experts services, visit the following resources:
Microsoft Defender Experts for XDR web page
Microsoft Defender Experts for XDR docs page
Microsoft Defender Experts for Hunting web page
Microsoft Defender Experts for Hunting docs page
📎Chat Service demo running Phi-2 LLM locally with Semantic Kernel
Hi!
It’s time to go back to AI and .NET, so today’s post is a small demo on how to run an LLM (large language model; this demo uses Phi-2) in local mode, and how to interact with the model using Semantic Kernel.
LM Studio
I’ve tested several products and libraries to run LLMs locally, and LM Studio is in my top 3. LM Studio is a desktop application that allows you to run open-source models locally on your computer. You can use LM Studio to discover, download, and chat with models from Hugging Face, or create your own custom models. LM Studio also lets you run a local inference server that mimics the OpenAI API, so you can use any model with your favourite tools and frameworks. LM Studio is available for Mac, Windows, and Linux, and you can download it from their website.
Source: https://github.com/lmstudio-ai
Running a local server with LM Studio
Here are the steps to run a local server with LM Studio
Launch LM Studio and search for an LLM from Hugging Face using the search bar. You can filter the models by compatibility, popularity, or quantization level.
Select a model and click Download. You can also view the model card for more information about the model.
Once the model is downloaded, go to the Local Server section and select the model from the drop-down menu. You can also adjust the server settings and parameters as you wish.
Click Start Server to run the model on your local machine. You will see a URL that you can use to access the server from your browser or other applications. The server is compatible with the OpenAI API, so you can use the same code and format for your requests and responses.
To stop the server, click Stop Server. You can also delete the model from your machine if you don’t need it anymore.
Once started, the Phi-2 model runs locally in a server on port 1234.
SLM: Phi-2
And hey, Phi-2, what is Phi-2?
Phi-2 is a small language model (SLM) developed by Microsoft Research that has 2.7 billion parameters and demonstrates outstanding reasoning and language understanding capabilities. It was trained on a mix of synthetic and web datasets for natural language processing and coding. It achieves state-of-the-art performance among base language models with less than 13 billion parameters and matches or outperforms models up to 25x larger on complex benchmarks. We can use Phi-2 to generate text, code, or chat with it using the Azure AI Studio or the Hugging Face platform. 😊
Here are some additional resources related to Phi-2:
Phi-2: The surprising power of small language models. https://www.microsoft.com/en-us/research/blog/phi-2-the-surprising-power-of-small-language-models/
Microsoft/phi-2 · Hugging Face. https://huggingface.co/microsoft/phi-2
Semantic Kernel and Custom LLMs
There is an amazing sample on how to create your own LLM Service class to be used in Semantic Kernel. You can view the Sample here: https://github.com/microsoft/semantic-kernel/blob/3451a4ebbc9db0d049f48804c12791c681a326cb/dotnet/samples/KernelSyntaxExamples/Example16_CustomLLM.cs
Based on that sample, I created a custom one that interacts with the Phi-2 model running in my local server with LM Studio. The following image shows the simple test project, which includes:
LM Studio folder, with the Phi-2 Generation Service for Chat and Text Generation.
Model folder to manage the request / response JSON messages.
A main program with literally 10 lines of code.
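The actual service class lives in the repo linked below; as a rough illustration of the shape such a class can take (a sketch, not the repo’s exact code), here is a trimmed-down version that forwards the chat history to the LM Studio endpoint and wraps the reply:
using System;
using System.Collections.Generic;
using System.Linq;
using System.Net.Http;
using System.Text;
using System.Text.Json;
using System.Threading;
using System.Threading.Tasks;
using Microsoft.SemanticKernel;
using Microsoft.SemanticKernel.ChatCompletion;

public class Phi2GenerationService : IChatCompletionService
{
    private static readonly HttpClient Http = new();

    // URL of the LM Studio OpenAI-compatible endpoint.
    public string ModelUrl { get; set; } = "http://localhost:1234/v1/chat/completions";

    public IReadOnlyDictionary<string, object?> Attributes { get; } = new Dictionary<string, object?>();

    public async Task<IReadOnlyList<ChatMessageContent>> GetChatMessageContentsAsync(
        ChatHistory chatHistory, PromptExecutionSettings? executionSettings = null,
        Kernel? kernel = null, CancellationToken cancellationToken = default)
    {
        // Project the Semantic Kernel chat history into the OpenAI wire format.
        var request = new
        {
            messages = chatHistory.Select(m => new { role = m.Role.Label, content = m.Content }),
            temperature = 0.7
        };
        var response = await Http.PostAsync(ModelUrl,
            new StringContent(JsonSerializer.Serialize(request), Encoding.UTF8, "application/json"),
            cancellationToken);
        response.EnsureSuccessStatusCode();

        // Pull choices[0].message.content out of the OpenAI-style response.
        using var doc = JsonDocument.Parse(await response.Content.ReadAsStringAsync(cancellationToken));
        var reply = doc.RootElement.GetProperty("choices")[0]
            .GetProperty("message").GetProperty("content").GetString() ?? string.Empty;

        return new[] { new ChatMessageContent(AuthorRole.Assistant, reply) };
    }

    public IAsyncEnumerable<StreamingChatMessageContent> GetStreamingChatMessageContentsAsync(
        ChatHistory chatHistory, PromptExecutionSettings? executionSettings = null,
        Kernel? kernel = null, CancellationToken cancellationToken = default)
        => throw new NotSupportedException("Streaming is not implemented in this sketch.");
}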
This is the sample code of the main program. As you can see, it’s very simple.
using Microsoft.Extensions.DependencyInjection;
using Microsoft.SemanticKernel;
using Microsoft.SemanticKernel.ChatCompletion;
using SKPhi2Local.LMStudio;
// Phi-2 in LM Studio
var phi2 = new Phi2GenerationService();
phi2.ModelUrl = "http://localhost:1234/v1/chat/completions";
// semantic kernel builder
var builder = Kernel.CreateBuilder();
builder.Services.AddKeyedSingleton<IChatCompletionService>("phi2Chat", phi2);
var kernel = builder.Build();
// init chat
var chat = kernel.GetRequiredService<IChatCompletionService>();
var history = new ChatHistory();
history.AddSystemMessage("You are a useful assistant that replies using a funny style and emojis. Your name is Goku.");
history.AddUserMessage("hi, who are you?");
// print response
var result = await chat.GetChatMessageContentsAsync(history);
Console.WriteLine(result[^1].Content);
You can check the complete solution in this public repository on GitHub: https://aka.ms/repo-skcustomllm01
Best,
Bruno
One Logic Solutions offers new transactable partner solutions in AppSource
Microsoft partners like One Logic Solutions offer transact-capable offers, which allow you to purchase directly from AppSource. Learn about these offers below:
OLS360 Essential and Advanced (North America): Designed for smaller teams seeking collaboration and efficiency as well as businesses that are ready to optimize and expand, this no-code business management platform from One Logic Solutions drives growth and innovation with quick and cost-effective solutions. Manage everything from workflows to administrative tasks in one place for a clear picture of what needs to be done and who is responsible for every task.
OLS360 Corporate (North America): For organizations that demand the highest level of customization and support, this no-code business management platform from One Logic Solutions drives growth and innovation with quick and cost-effective solutions. Manage everything from workflows to administrative tasks in one place for a clear picture of what needs to be done and who is responsible for every task.
OLS360 Essential and Advanced (Brazil): Designed for smaller teams seeking collaboration and efficiency as well as businesses that are ready to optimize and expand, this no-code business management platform from One Logic Solutions drives growth and innovation with quick and cost-effective solutions. Manage everything from workflows to administrative tasks in one place for a clear picture of what needs to be done and who is responsible for every task. This offer is only available in Portuguese.
OLS360 Corporate (Brazil): For organizations that demand the highest level of customization and support, this no-code business management platform from One Logic Solutions drives growth and innovation with quick and cost-effective solutions. Manage everything from workflows to administrative tasks in one place for a clear picture of what needs to be done and who is responsible for every task. This offer is only available in Portuguese.
Hyper-V VM Migration to Azure Stack HCI, version 23H2
Written by Kerim Hanif, Senior Program Manager on the Azure Edge & Platform team
Azure Migrate is a unified platform that simplifies migration, modernization, and optimization of on-premises resources to Azure. We have been working very closely with the Azure Migrate team to add Azure Stack HCI as a destination for sources like VMware and Hyper-V. Last year we launched the private preview of Hyper-V virtual machine (VM) migration with Azure Migrate, and today we are very happy to announce the public preview of this capability.
Note: VMware migration is currently in private preview. Please fill this form if you would like to be part of the private preview and help us build a high-quality migration capability.
What is new in this public preview?
This feature enables you to migrate your Windows and Linux VMs running on Hyper-V to Azure Stack HCI, version 23H2 clusters (GA as of Feb 1st of this year). A wide range of source environments starting from Hyper-V on Windows Server 2012 R2 to Windows Server 2022 are supported.
This feature uses the agentless migration option of Azure Migrate. This means that you don’t need any prep such as installing an agent on the source VMs. All you need are two appliances, one on the source, and one on the target.
While you can manage, monitor, and configure via the cloud (Azure Migrate), the data transfer between the source and the target is kept local.
All the migrated VMs are Arc-enabled by default. This means that the full power of Arc VM management is immediately available to you once the migration is complete.
Migrating a variety of VMs (Windows and Linux) from Hyper-V to Azure Stack HCI
How to get started?
Make sure that you have an Azure Stack HCI cluster running version 23H2.
Create a migration project in Azure Migrate.
Discover the VMs on your source Hyper-V servers and clusters.
Select the VMs you want to migrate and start replicating them to the target Azure Stack HCI cluster (source VMs can continue running at this stage).
When ready, start the migration, and migrate your VMs with minimal downtime.
Track the progress from the Azure portal.
Where to learn more?
For more information and detailed steps, please visit the following links:
Options for migrating virtual machines to Azure Stack HCI (preview)
Use Azure Migrate to move Hyper-V VMs to Azure Stack HCI (preview)
Hunting for QR Code AiTM Phishing and User Compromise
In the dynamic landscape of adversary-in-the-middle (AiTM) attacks, the Microsoft Defender Experts team has recently observed a notable trend – QR code-themed phishing campaigns. The attackers employ deceptive QR codes to manipulate users into accessing fraudulent websites or downloading harmful content.
These attacks exploit the trust and curiosity of users who scan QR codes without verifying their source or content. Attackers can create QR codes that redirect users to phishing sites that mimic legitimate ones, such as banks, social media platforms, or online services. The targeted user scans the QR code, subsequently being redirected to a phishing page. Following user authentication, attackers steal the user’s session token, enabling them to launch various malicious activities, including Business Email Compromise attacks and data exfiltration attempts. Alternatively, attackers can create QR codes that prompt users to download malware or spyware onto their devices. These attacks can result in identity theft, financial loss, data breach, or device compromise.
This blog explains the mechanics of QR code phishing, and details how Defender Experts hunt for these phishing campaigns. Additionally, it outlines the procedures in place to notify customers about the unfolding attack narrative and its potential ramifications.
Why is QR code phishing a critical threat?
The Defender Experts team has observed that QR code campaigns are often massive and large-scale in nature. Before launching these campaigns, attackers typically conduct reconnaissance attempts to gather information on targeted users. The campaigns are then sent to large groups of people within an organization, often exceeding 1,000 users, with varying parameters across subject, sender, and body of the emails.
The identity compromises and stolen session tokens resulting from these campaigns are proportional to their large scale. In recent months, Defender Experts have observed QR code campaigns growing from 10% to 30% of total phishing campaigns. Since the campaigns do not follow a template, it can be difficult to scope and evaluate the extent of compromise. It is crucial for organizations to be aware of this trend and take steps to protect their employees from falling victim to QR code phishing attacks.
Understanding the intent of QR code phishing attacks
The QR code phishing email can have one of the below intents:
Credential theft: The majority of these campaigns are designed to redirect the user to an AiTM phishing website for session token theft. The authentication method can be single-factor authentication, where only the user’s password is compromised and the sign-in attempts are unsuccessful; in these scenarios, the attacker signs in later with the compromised password and bypasses multifactor authentication (MFA) through MFA fatigue attacks. Alternatively, the user can be redirected to an AiTM phishing page where the credentials, MFA parameters, and session token are compromised in real time.
Malware distribution: In these scenarios, once the user scans the QR code, malware/spyware/adware is automatically downloaded on the mobile device.
Financial theft: These campaigns use QR codes to trick the user into making a fake payment or giving away their banking credentials. The user may scan the QR code and be taken to a bogus payment gateway or a fake bank website. The attacker can then access the user’s account later and bypass the second factor authentication by contacting the user via email or phone.
How Defender Experts approach QR code phishing
In QR code phishing attempts, the targeted user scans the QR code on their personal non-managed mobile device, which falls outside the scope of the Microsoft Defender protected environment. This is one of the key challenges for detection. In addition to detections based on Image Recognition or Optical Character Recognition, a novel approach was necessary to detect the QR code phishing attempts.
Defender Experts have researched identifying patterns across the QR code phishing campaigns and malicious sign-in attempts and devised the following detection approaches:
Pre-cursor events: User activities
Suspicious Senders
Suspicious Subject
Email Clustering
User Signals
Suspicious Sign-in attempts
1. Hunting for user behavior:
This is one of the primary detections that helps Defender Experts surface suspicious sign-in attempts from QR code phishing campaigns. Although the user scans the QR code from an email on their personal mobile device, in the majority of scenarios the phishing email being accessed is recorded with the MailItemsAccessed mailbox auditing action.
The majority of the QR code campaigns have image (png/jpg/jpeg/gif) or document attachments (pdf/doc/xls) – Yes! QR codes are embedded in Excel attachments too! The campaigns can also include a legitimate URL that redirects to a phishing page with a malicious QR code.
A malicious sign-in attempt with session token compromise that follows the QR code scan is always observed from non-trusted devices with medium/high risk score for the session.
This detection approach correlates a user accessing an email with image/document attachments and a risky sign-in attempt from a non-trusted device in close time proximity, and validates whether the location from which the email item was accessed differs from the location of the sign-in attempt.
Advanced Hunting Query:
let successfulRiskySignIn = materialize(AADSignInEventsBeta
| where Timestamp > ago(1d)
| where isempty(DeviceTrustType)
| where IsManaged != 1
| where IsCompliant != 1
| where RiskLevelDuringSignIn in (50, 100)
| project Timestamp, ReportId, IPAddress, AccountUpn, AccountObjectId, SessionId, Country, State, City
);
let suspiciousSignInUsers = successfulRiskySignIn
| distinct AccountObjectId;
let suspiciousSignInIPs = successfulRiskySignIn
| distinct IPAddress;
let suspiciousSignInCities = successfulRiskySignIn
| distinct City;
CloudAppEvents
| where Timestamp > ago(1d)
| where ActionType == "MailItemsAccessed"
| where AccountObjectId in (suspiciousSignInUsers)
| where IPAddress !in (suspiciousSignInIPs)
| where City !in (suspiciousSignInCities)
| join kind=inner successfulRiskySignIn on AccountObjectId
| where AccountObjectId in (suspiciousSignInUsers)
| where (Timestamp - Timestamp1) between (-5min .. 5min)
| extend folders = RawEventData.Folders
| mv-expand folders
| extend items = folders.FolderItems
| mv-expand items
| extend InternetMessageId = tostring(items.InternetMessageId)
| project Timestamp, ReportId, IPAddress, InternetMessageId, AccountObjectId, SessionId, Country, State, City
2. Hunting for sender patterns:
The sender attributes play a key role in the detection of QR code campaigns. Since the campaigns are typically large scale in nature, 95% of the campaigns do not involve phishing emails from compromised trusted vendors. The predominant emails are sent from newly created domains or domains that are not prevalent in the organization.
Since the attack involves multiple user actions (scanning the QR code from a mobile device and completing the authentication), unlike typical phishing with simple URL clicks, the attackers induce a sense of urgency by impersonating IT support, HR support, payroll, or the administrator team, or by using a display name that indicates the email is sent on behalf of a known high-value target in the organization (e.g., “Lara Scott on-behalf of CEO”).
In this detection approach, we correlate email from non-prevalent senders in the organization with impersonation intents.
Advanced Hunting Query:
let PhishingSenderDisplayNames = ()
{
pack_array("IT", "support", "Payroll", "HR", "admin", "2FA", "notification", "sign", "reminder", "consent", "workplace",
"administrator", "administration", "benefits", "employee", "update", "on behalf");
};
let suspiciousEmails = EmailEvents
| where Timestamp > ago(1d)
| where isnotempty(RecipientObjectId)
| where isnotempty(SenderFromAddress)
| where EmailDirection == "Inbound"
| where DeliveryAction == "Delivered"
| join kind=inner (EmailAttachmentInfo
| where Timestamp > ago(1d)
| where isempty(SenderObjectId)
| where FileType has_any ("png", "jpg", "jpeg", "bmp", "gif")
) on NetworkMessageId
| where SenderDisplayName has_any (PhishingSenderDisplayNames())
| project Timestamp, Subject, FileName, SenderFromDomain, RecipientObjectId, NetworkMessageId;
let suspiciousSenders = suspiciousEmails | distinct SenderFromDomain;
let prevalentSenders = materialize(EmailEvents
| where Timestamp between (ago(7d) .. ago(1d))
| where isnotempty(RecipientObjectId)
| where isnotempty(SenderFromAddress)
| where SenderFromDomain in (suspiciousSenders)
| where EmailDirection == "Inbound"
| where DeliveryAction == "Delivered"
| distinct SenderFromDomain);
suspiciousEmails
| where SenderFromDomain !in (prevalentSenders)
| project Timestamp, Subject, FileName, SenderFromDomain, RecipientObjectId, NetworkMessageId
Correlating suspicious emails with image attachments from a new sender with risky sign-in attempts for the recipients can also surface the QR code phishing campaigns and user compromises.
3. Hunting for subject patterns:
In addition to impersonating IT and HR teams, attackers also craft the campaigns with actionable subjects. (e.g., MFA completion required, Digitally sign documents). The targeted user is requested to complete the highlighted action by scanning the QR code in the email and providing credentials and MFA token.
In most cases, these automated phishing campaigns also include a personalized element, where the user’s first name/last name/alias/email address is included in the subject. The email address of the targeted user is also embedded in the URL behind the QR code. This serves as a unique tracker for the attacker to identify emails successfully delivered and QR codes scanned.
In this detection, we track emails with suspicious keywords in subjects or personalized subjects. To detect personalized subjects, we track campaigns where the first three words or last three words of the subject are the same, but the other values are personalized/unique.
For example:
Alex, you have an undelivered voice message
Bob, you have an undelivered voice message
Charlie, you have an undelivered voice message
Your MFA update is pending, Alex
Your MFA update is pending, Bob
Your MFA update is pending, Charlie
Advanced Hunting Query:
Personalized campaigns based on the first few keywords:
EmailEvents
| where Timestamp > ago(1d)
| where EmailDirection == "Inbound"
| where DeliveryAction == "Delivered"
| where isempty(SenderObjectId)
| extend words = split(Subject, " ")
| project firstWord = tostring(words[0]), secondWord = tostring(words[1]), thirdWord = tostring(words[2]), Subject, SenderFromAddress, RecipientEmailAddress, NetworkMessageId
| summarize SubjectsCount = dcount(Subject), RecipientsCount = dcount(RecipientEmailAddress), suspiciousEmails = make_set(NetworkMessageId, 10) by firstWord, secondWord, thirdWord, SenderFromAddress
| where SubjectsCount >= 10
Personalized campaigns based on the last few keywords:
EmailEvents
| where Timestamp > ago(1d)
| where EmailDirection == "Inbound"
| where DeliveryAction == "Delivered"
| where isempty(SenderObjectId)
| extend words = split(Subject, " ")
| project firstLastWord = tostring(words[-1]), secondLastWord = tostring(words[-2]), thirdLastWord = tostring(words[-3]), Subject, SenderFromAddress, RecipientEmailAddress, NetworkMessageId
| summarize SubjectsCount = dcount(Subject), RecipientsCount = dcount(RecipientEmailAddress), suspiciousEmails = make_set(NetworkMessageId, 10) by firstLastWord, secondLastWord, thirdLastWord, SenderFromAddress
| where SubjectsCount >= 10
Campaign with suspicious keywords:
let PhishingKeywords = ()
{
pack_array("account", "alert", "bank", "billing", "card", "change", "confirmation",
"login", "password", "mfa", "authorize", "authenticate", "payment", "urgent", "verify", "blocked");
};
EmailEvents
| where Timestamp > ago(1d)
| where EmailDirection == "Inbound"
| where DeliveryAction == "Delivered"
| where isempty(SenderObjectId)
| where Subject has_any (PhishingKeywords())
4. Hunting for attachment name patterns:
Based on historical QR code campaign investigations, Defender Experts have identified that attackers usually randomize the attachment names, meaning every email in a campaign carries a differently named QR code attachment. Emails with randomly named attachments from the same sender to multiple recipients, typically more than 50, can potentially indicate a QR code phishing campaign.
Campaign with randomly named attachments:
EmailAttachmentInfo
| where hasNonPrevalentSenders
| where Timestamp between (emailStartTime .. emailEndTime)
| where SenderFromAddress in (nonPrevalentSenders)
| where FileType in ("png", "jpg", "jpeg", "gif", "svg")
| where isnotempty(FileName)
| extend firstFourFileName = substring(FileName, 0, 4)
| summarize RecipientsCount = dcount(RecipientEmailAddress), FirstFourFilesCount = dcount(firstFourFileName), suspiciousEmails = make_set(NetworkMessageId, 10) by SenderFromAddress
| where FirstFourFilesCount >= 10
5. Hunting for user signals/clusters
In order to craft effective large-scale QR code phishing attacks, the attackers perform reconnaissance across social media to gather target user email addresses, their preferences, and much more. These campaigns are sent to 1,000+ users in the organization with luring subjects and content tailored to their preferences. However, Defender Experts have observed that at least one user typically finds the campaign suspicious and reports the email, which generates this alert: “Email reported by user as malware or phish.”
This alert can be another starting point for hunting activity to identify the scope of the campaign and compromises. Since the campaigns are specifically crafted for each group of users, scoping based on sender/subject/filename might not be an effective approach. Microsoft Defender for Office 365 offers a heuristics-based approach built on email content as a solution for this problem. Emails with similar content that are likely to be from one attacker are clustered together, and the cluster ID is populated in the EmailClusterId field in the EmailEvents table.
The clusters can include all phishing attempts from the attackers so far against the organization; based on similarity, a cluster can aggregate emails with malicious URLs, attachments, and QR codes as one. Hence, this is a powerful approach to explore the persistent phishing techniques of the attacker and the repeatedly targeted users.
Below is a sample query on scoping a campaign from the email reported by the end user. The same scoping logic can be used on the previously discussed hunting hypotheses as well.
let suspiciousClusters = EmailEvents
| where Timestamp > ago(7d)
| where EmailDirection == "Inbound"
| where NetworkMessageId in (<List of suspicious Network Message Ids from Alerts>)
| distinct EmailClusterId;
EmailEvents
| where Timestamp > ago(7d)
| where EmailDirection == "Inbound"
| where EmailClusterId in (suspiciousClusters)
| summarize make_set(Subject), make_set(SenderFromDomain), dcount(RecipientObjectId), dcount(SenderDisplayName) by EmailClusterId
6. Hunting for suspicious sign-in attempts:
In addition to detecting the campaigns, it is critical that we identify the compromised identities. To surface the identities compromised by AiTM, we can utilize the below approaches.
Risky sign-in attempt from a non-managed device
Any sign-in attempt from a non-managed, non-compliant, untrusted device should be taken into consideration, and a risk score on the sign-in attempt increases the anomalous nature of the activity. Monitoring these sign-in attempts can surface identity compromises.
AADSignInEventsBeta
| where Timestamp > ago(7d)
| where IsManaged != 1
| where IsCompliant != 1
//Filtering only for medium and high risk sign-in
| where RiskLevelDuringSignIn in (50, 100)
| where ClientAppUsed == "Browser"
| where isempty(DeviceTrustType)
| where isnotempty(State) or isnotempty(Country) or isnotempty(City)
| where isnotempty(IPAddress)
| where isnotempty(AccountObjectId)
| where isempty(DeviceName)
| where isempty(AadDeviceId)
| project Timestamp,IPAddress, AccountObjectId, ApplicationId, SessionId, RiskLevelDuringSignIn, BrowserId
Suspicious sign-in attributes
Sign-in attempts from untrusted devices with empty user agent, operating system or anomalous BrowserId can also be an indication of identity compromises from AiTM.
Defender Experts also recommend monitoring the sign-ins from known malicious IP addresses. Although the mode of delivery of the phishing campaigns differ (QR code, HTML attachment, URL), the sign-in infrastructure often remains the same. Monitoring the sign-in patterns of compromised users, and continuously scoping the sign-in attempts based on the known patterns can also surface the identity compromises from AiTM.
Mitigations
Apply these mitigations to reduce the impact of this threat:
Educate users about the risks of QR code phishing emails.
Implement Microsoft Defender for Endpoint – Mobile Threat Defense on mobile devices used to access enterprise assets.
Enable Conditional Access policies in Microsoft Entra, especially risk-based access policies. Conditional Access policies evaluate sign-in requests using additional identity-driven signals such as user or group membership, IP address location information, and device status, and enforce controls for suspicious sign-ins. Organizations can protect themselves from attacks that leverage stolen credentials by enabling policies such as compliant device requirements, trusted IP address requirements, or risk-based policies with proper access control. If you are still evaluating Conditional Access, use security defaults as an initial baseline set of policies to improve identity security posture.
Implement continuous access evaluation.
Leverage Microsoft Edge to automatically identify and block malicious websites, including those used in this phishing campaign, and Microsoft Defender for Office 365 to detect and block malicious emails, links, and files.
Monitor suspicious or anomalous activities in Microsoft Entra ID Protection. Investigate sign-in attempts with suspicious characteristics (e.g., location, ISP, user agent, and use of anonymizer services).
Implement Microsoft Entra passwordless sign-in with FIDO2 security keys.
Turn on network protection in Microsoft Defender for Endpoint to block connections to malicious domains and IP addresses.
If you’re interested in learning more about our Defender Experts services, visit the following resources:
Microsoft Defender Experts for XDR web page
Microsoft Defender Experts for XDR docs page
Microsoft Defender Experts for Hunting web page
Microsoft Defender Experts for Hunting docs page
Azure Data @ Microsoft Fabric Community Conference 2024 | Data Exposed Exclusive
In this Data Exposed Exclusive, join Anna Hoffman, Bob Ward, and Jason Himmelstein as they discuss everything you need to know about the upcoming Microsoft Fabric Community Conference!
Microsoft Fabric Community Conference registration: https://aka.ms/fabcon (Enter the code DATAEXPOSED100 for a $100 savings)
Learn how to power your AI transformation with the Microsoft Cloud at NVIDIA GTC.
Welcome to the new era where AI is driving innovation and rapidly changing what applications look like, how they’re designed and built, and how they’re delivered. Nearly every industry is undergoing a collective transformation and Microsoft is committed to helping bring your AI ambitions to production. Learn how you can power your AI transformation with the Microsoft cloud: the cloud built for the era of AI.
Join Microsoft at the NVIDIA GTC conference March 18 – 21, in person at the San Jose Convention Center and virtually. With over 900 inspiring sessions, 200+ exhibits, 20+ technical workshops, and tons of unique networking events, GTC offers attendees the opportunity to connect with a dream team of industry luminaries, developers, researchers, and business strategists helping shape what’s next in AI and accelerated computing.
Register for NVIDIA GTC today!
Learn how you can power your AI innovations with cutting-edge AI services and supercomputing infrastructure from Microsoft.
Add the sessions below to your NVIDIA GTC conference schedule.
Featured sessions
Power your AI transformation with the Microsoft Cloud [S63275]
March 19 | 10:00 – 10:25 AM PT
Nidhi Chappell, General Manager, AI HPC infrastructure, Microsoft Azure
As new AI technology grabs headlines and captures imaginations for businesses and consumers alike, organizations have a unique opportunity to harness the power of AI to drive value faster in their own businesses. Learn how you can power your AI transformation with the Microsoft Cloud.
Behind the Scenes with Azure AI infrastructure [S63276]
March 19 | 10:30 – 10:55 AM PT
Madhavi Tadepalli, General Manager, Hardware Engineering, Microsoft Azure | Matt Vegas, Principal Product Manager, Microsoft Azure
Explore in depth the AI infrastructure that powers the most sophisticated AI models such as GPT. Uncover how Azure builds its AI Infrastructure with NVIDIA GPUs and networking technologies, and advanced datacenter development from silicon to systems to VMs.
Deep Dive into Training and Inferencing with Large Language Models on Azure [S63273]
March 20 | 3:00 – 3:25 PM PT
Hugo Affaticati, Technical Program Manager, Microsoft Azure | Kushal Datta, Principal Software Engineer, Microsoft Azure
Deep dive into training and inferencing with Large Language Models (LLMs) using Azure’s cutting-edge infrastructure with NVIDIA H100 SXM Tensor Core GPUs and H100 NVL 94GB PCIe Tensor Core GPUs. Discover optimized performance and scaling for LLMs, like GPT-3 and T5 using the JAX framework.
Unlocking Generative AI in the Enterprise with NVIDIA on Azure [S63277]
March 20 | 3:30 – 3:55 PM PT
Robin Wood, Director Partner Development, Microsoft Azure
Learn how Microsoft Azure and NVIDIA provide an end-to-end platform for building and deploying generative AI applications, including pre-trained models, workflow tools and immediate access to multi-node GPU infrastructure accelerating the delivery of industrial-grade generative AI applications.
The Next Level of GenAI with Azure OpenAI Service and Copilot [S63274]
55-min on-demand
Andy Beatman, Senior Product Marketing Manager, Microsoft Azure
Discover the power of Generative AI, LLMs and Copilots in action with Microsoft’s Azure OpenAI Service. Explore cutting-edge multi-modal models, including GPT-4 Visual, Text-to-Speech avatars and Azure AI Content Safety. Discover AI voices for gaming and the newest features in Azure AI Studio.
Talks and panel sessions
S61190 The Small Models Revolution
S62777 The Role of Generative AI in Modern Medicine
S61664 PIT: Optimizing Dynamic Sparse Deep Learning Models via Permutation Invariant Transformation
S61936 A Deep Dive into Sustainable Cloud Computing
S62336 ONNX Runtime: Accelerated AI Deployment for PC Apps
S62730 Generative AI Adoption and Operational Challenges in Government
S62783 Digitalizing the World’s Largest Industries with OpenUSD and Generative AI
S62504 Optimizing Your AI Strategy to Develop and Deploy Novel Deep Learning Models in the Cloud for Medical Image Analysis
S62447 Best Practices in Networking for AI: Perspectives from Cloud Service Providers
Come visit us at booth #1108 where we’ll showcase the latest in AI services and supercomputing infrastructure.
Live discussion sessions (in-booth theater)
Connect with Microsoft AI experts
Demo pods
Hands-on hardware bar
Get fresh coffee served by Artly the Barista Bot!
Can’t attend in person? Visit our virtual booth, live on March 18th.
Microsoft will be hosting a welcome reception and a private executive luncheon with NVIDIA. More information to come. If you are interested in attending, please reach out to your Microsoft representative.
Microsoft is proud to host NVIDIA Hands-On Training at GTC. Attend full-day, hands-on, instructor-led workshops or two-hour free training labs to get up to speed on the latest technology and breakthroughs. These sessions enable and empower you to leverage NVIDIA GPUs on the Microsoft Azure platform to solve the world’s most interesting and relevant problems. Register for NVIDIA Hands-on Training today.
Learn more about Azure AI
Azure AI, data, and application innovations help turn your AI ambitions into reality
Azure delivers purpose-built cloud infrastructure in the era of AI
More Speaking in Ciphers and other Enigmatic Tongues with a focus on SCHANNEL hardening.
Hi! Jim Tierney here again to talk to you about Cryptographic Algorithms, SCHANNEL, and other bits of crypto excitement. I have elucidated at length on this topic before in a post that was updated a few years back, aptly titled Speaking in Ciphers and other Enigmatic Tongues…update!
I am creating this brand-new piece of content in this crypto space to further discuss different Microsoft supported methods that can be used to disable weak cipher suites and protocols.
The scenario we are addressing is this: your company is doing a vulnerability and compliance assessment and just ran a scanning tool against all your Windows Servers. The software reports back that you have weak ciphers enabled, highlighted in RED, and includes a link to the following Microsoft documentation –
KB245030 How to Restrict the Use of Certain Cryptographic Algorithms and Protocols in Schannel.dll:
http://support.microsoft.com/kb/245030/en-us
You immediately open a case with Microsoft asking: What can I do? What can I do?
There are two Microsoft supported methods of configuring cipher suites:
Via GP: https://msdn.microsoft.com/en-us/library/windows/desktop/bb870930(v=vs.85).aspx
Via cmdlets: https://technet.microsoft.com/en-us/library/dn296632.aspx
How to limit the Cipher Suites that Windows will support
The default location and ordering of cipher suites is here:
HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\Cryptography\Configuration\Local\SSL\00010002
NOTE: We strongly suggest NOT modifying this registry location directly. Instead, we recommend leveraging the Group Policy setting below to manage the list of ciphers supported in the operating system. If you modify the default location directly, a future update in which the Microsoft development team adds support for a new cipher could overwrite your changes and put back ciphers you removed.
Configuring the Group Policy for Cipher suite ordering/content will overrule what is listed in this default location.
Here is the location of the Cipher Suite ordering group policy:
Computer Configuration\Administrative Templates\Network\SSL Configuration Settings\SSL Cipher Suite Order
Remember, when configuring the Cipher Suite Order policy, if the list exceeds the 1,023-character limit of the policy setting, the cipher suite list will be truncated.
*In addition, Windows Server 2016 and newer do not require the _PXXX suffixes, so the list of cipher suites is a lot shorter. Please note that Windows 10/Server 2016 and above solve this problem in two ways:
Elliptic Curve (EC) suffixes (also known as the _P values) are no longer part of the cipher suite names, so there is no more Cartesian explosion of cipher suite names (e.g., TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA384_P256, TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA384_P384, …)
In Windows 10, curves are prioritized separately from cipher suites, which means the cipher suite list in the GP Editor is much shorter.
NOTE: These EC suffixes ARE required for Windows Server 2012 operating systems to limit the ciphers on the OS. However, Windows 10/2016 OS DOES NOT support these cipher names. So, if you still need to support Windows Server 2012 (you have my sympathy) then you will need to have a GPO for this OS specifically, and then we would also recommend that the GPO be configured with a WMI Filter for the OS version.
Create WMI Filters for the GPO | Microsoft Learn
PowerShell cmdlets are provided for cipher suites enumeration/enabling/disabling/prioritization as indicated earlier: https://learn.microsoft.com/en-us/powershell/module/tls/?view=windowsserver2022-ps
Specifically for Windows PowerShell, guidance is available on how to update PowerShell scripts or the related registry settings to ensure TLS 1.2 is used.
When the SSL Cipher Suite Order group policy is modified and applied successfully, it modifies the following location in the registry:
HKEY_LOCAL_MACHINE\SOFTWARE\Policies\Microsoft\Cryptography\Configuration\SSL\00010002
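To double-check what the policy actually applied, you can read that value back. A small C# sketch (assuming the policy stores the list as a comma-separated string in the value named Functions, as it does on current Windows versions):
using System;
using Microsoft.Win32;
// Read the cipher suite order written by the SSL Cipher Suite Order group policy.
const string policyKey = @"SOFTWARE\Policies\Microsoft\Cryptography\Configuration\SSL\00010002";
using (var key = Registry.LocalMachine.OpenSubKey(policyKey))
{
    var functions = key?.GetValue("Functions") as string;
    if (functions == null)
    {
        Console.WriteLine("No cipher suite order policy is applied.");
    }
    else
    {
        foreach (var suite in functions.Split(','))
            Console.WriteLine(suite);
    }
}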
Also remember, you should be eliminating weak ciphers from the list, not adding them to accommodate older operating systems.
Please take some time and review my previous blog – https://techcommunity.microsoft.com/t5/ask-the-directory-services-team/speaking-in-ciphers-and-other-enigmatic-tongues-8230-update/ba-p/400362
And the following information as well – Protocols in TLS/SSL (Schannel SSP) – Win32 apps | Microsoft Docs
Words of Wisdom
Restricting supported TLS / SSL Protocols that are used.
If you have been using an old moldy script to configure SCHANNEL settings on your Windows servers, you must seriously consider updating or rethinking this method. Figure out which SCHANNEL protocols you want to disable on ALL these servers and configure ONLY WHAT YOU WANT DISABLED. TLS 1.2 is ENABLED by default in EVERY OS starting with WINDOWS 2012. YOU DO NOT NEED TO CREATE A REGISTRY SETTING FOR TLS 1.2.
Enforcing the use of TLS 1.2 will require DISABLING any other protocol (i.e., TLS 1.0 and 1.1). Disabling SCHANNEL protocols and cipher suites can affect interoperability, especially connectivity to applications, services, and servers that are not current versions of their product.
What Ciphers should I leave enabled?
My advice regarding ciphers is to stick with the default cipher suites for your Windows version. These ciphers are carefully chosen and prioritized to provide a balance of interoperability, performance, and security. If there are specific security requirements, then a change to the list of cipher suites and their priorities is needed. Some applications (third-party or Microsoft) may still need older TLS versions, so testing any SCHANNEL registry modifications is necessary.
Applications that might need older protocol versions.
.NET-based applications.
One glaringly apparent example of this is .NET.
Any .NET application written before .NET Framework 4.7 WILL have problems using TLS 1.2. By default, older versions of .NET prefer TLS 1.0 ONLY. See the following – https://learn.microsoft.com/en-us/dotnet/framework/network-programming/tls#configure-security-via-the-windows-registry
Example of the settings in the article above –
[HKEY_LOCAL_MACHINE\SOFTWARE\WOW6432Node\Microsoft\.NETFramework\v2.0.50727]
"SystemDefaultTlsVersions"=dword:00000001
"SchUseStrongCrypto"=dword:00000001
[HKEY_LOCAL_MACHINE\SOFTWARE\WOW6432Node\Microsoft\.NETFramework\v4.0.30319]
"SystemDefaultTlsVersions"=dword:00000001
"SchUseStrongCrypto"=dword:00000001
[HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\.NETFramework\v2.0.50727]
"SystemDefaultTlsVersions"=dword:00000001
"SchUseStrongCrypto"=dword:00000001
[HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\.NETFramework\v4.0.30319]
"SystemDefaultTlsVersions"=dword:00000001
"SchUseStrongCrypto"=dword:00000001
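For application code you control, the same opt-in can also be made per process. A minimal C# sketch for a .NET Framework app (the registry settings above remain the machine-wide fix; this only affects the one process):
using System;
using System.Net;

class TlsOptIn
{
    static void Main()
    {
        // Older .NET Framework apps default to SSL3/TLS 1.0; explicitly add
        // TLS 1.2 for this process without changing machine-wide settings.
        ServicePointManager.SecurityProtocol |= SecurityProtocolType.Tls12;

        using (var client = new WebClient())
        {
            Console.WriteLine(client.DownloadString("https://www.microsoft.com").Length);
        }
    }
}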
WinHTTP based applications.
WINHTTP – Typically these are services or applications that run as background services, usually under the SYSTEM or NetworkService accounts.
https://learn.microsoft.com/en-us/windows-server/networking/configure-secure-protocol-options-winhttp?tabs=x86
HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Windows\CurrentVersion\Internet Settings\WinHttp
DefaultSecureProtocols = (DWORD): 0xAA0
HKEY_LOCAL_MACHINE\SOFTWARE\Wow6432Node\Microsoft\Windows\CurrentVersion\Internet Settings\WinHttp
DefaultSecureProtocols = (DWORD): 0xAA0
WinINET based applications.
WinINET – Typically these are user-based applications, like any Office application, that run as the user account logged onto the system with an interactive desktop. Examples are Internet Explorer, or Edge running in IE (Internet Explorer) Mode. This does NOT include the Edge/Chromium browsers, however.
KB5017811—Manage Transport Layer Security (TLS) 1.0 and 1.1 after default behavior change on September 20, 2022
https://support.microsoft.com/en-us/topic/kb5017811-manage-transport-layer-security-tls-1-0-and-1-1-after-default-behavior-change-on-september-20-2022-e95b1b47-9c7c-4d64-9baf-610604a64c3e
Group Policy:
HKEY_LOCAL_MACHINE\SOFTWARE\Policies\Microsoft\Windows\CurrentVersion\Internet Settings
SecureProtocols = (DWORD): 0xAA0
HKEY_CURRENT_USER\SOFTWARE\Policies\Microsoft\Windows\CurrentVersion\Internet Settings
SecureProtocols = (DWORD): 0xAA0
Registry:
HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Windows\CurrentVersion\Internet Settings
SecureProtocols = (DWORD): 0xAA0
HKEY_CURRENT_USER\SOFTWARE\Microsoft\Windows\CurrentVersion\Internet Settings
SecureProtocols = (DWORD): 0xAA0
Modifying Signature/Hashing Algorithms
If you are still with me and have been poking around in the registry (on a test computer), you may have noticed the following location and would like some information regarding –
HKLM\SYSTEM\CurrentControlSet\Control\Cryptography\Configuration\Local\SSL\00010003
The value content of this location only affects TLS 1.2.
Operating systems prior to Windows Server 2008 SP2 do not support this value.
The data in the Functions value refer to the signature/hash combinations that are supported on TLS 1.2 certificate chains (excluding the root) as well as the signature/hash combinations that can be used when signing TLS 1.2 messages such as the ServerKeyExchange message and the CertificateVerify message.
The value in the (Default) location, NCRYPT_SCHANNEL_SIGNATURE_INTERFACE tells the server which signatures it can use to sign the ServerKeyExchange message and which signatures are allowed when verifying the server certificate chain.
These settings have nothing to do with disabling weak protocols or ciphers and should not be modified EVER!
The same holds true for this location as well –
HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\Cryptography\Configuration\Local\Default
I just want the SCHANNEL registry values to implement, please.
If you are looking for a quick list of SCHANNEL registry values to implement to help you pass a security scan/audit, here is a good list of values to make sure the OS is not vulnerable to these older exploits.
HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\SecurityProviders\SCHANNEL\Ciphers\DES 56
Enabled = (DWORD): 0x0
HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\SecurityProviders\SCHANNEL\Ciphers\NULL
Enabled = (DWORD): 0x0
HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\SecurityProviders\SCHANNEL\Ciphers\RC2 40/128
Enabled = (DWORD): 0x0
HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\SecurityProviders\SCHANNEL\Ciphers\RC2 56/128
Enabled = (DWORD): 0x0
HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\SecurityProviders\SCHANNEL\Ciphers\RC2 128/128
Enabled = (DWORD): 0x0
HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\SecurityProviders\SCHANNEL\Ciphers\RC4 128/128
Enabled = (DWORD): 0x0
HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\SecurityProviders\SCHANNEL\Ciphers\RC4 40/128
Enabled = (DWORD): 0x0
HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\SecurityProviders\SCHANNEL\Ciphers\RC4 56/128
Enabled = (DWORD): 0x0
HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\SecurityProviders\SCHANNEL\Ciphers\RC4 64/128
Enabled = (DWORD): 0x0
HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\SecurityProviders\SCHANNEL\Ciphers\Triple DES 168
Enabled = (DWORD): 0x0
HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\SecurityProviders\SCHANNEL\Ciphers\Triple DES 168/168
Enabled = (DWORD): 0x0
HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\SecurityProviders\SCHANNEL\Hashes\MD5
Enabled = (DWORD): 0x0
HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\SecurityProviders\SCHANNEL\Hashes\SHA
Enabled = (DWORD): 0xFFFFFFFF
HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\SecurityProviders\SCHANNEL\Protocols\Multi-Protocol Unified Hello\Server
Enabled = (DWORD): 0x0
HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\SecurityProviders\SCHANNEL\Protocols\Multi-Protocol Unified Hello\Client
Enabled = (DWORD): 0x0
HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\SecurityProviders\SCHANNEL\Protocols\PCT 1.0\Client
Enabled = (DWORD): 0x0
HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\SecurityProviders\SCHANNEL\Protocols\PCT 1.0\Server
Enabled = (DWORD): 0x0
HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\SecurityProviders\SCHANNEL\Protocols\SSL 2.0\Client
Enabled = (DWORD): 0x0
HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\SecurityProviders\SCHANNEL\Protocols\SSL 2.0\Server
Enabled = (DWORD): 0x0
HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\SecurityProviders\SCHANNEL\Protocols\SSL 3.0\Client
Enabled = (DWORD): 0x0
HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\SecurityProviders\SCHANNEL\Protocols\SSL 3.0\Server
Enabled = (DWORD): 0x0
HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\SecurityProviders\SCHANNEL\Protocols\TLS 1.0\Client
Enabled = (DWORD): 0x0
HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\SecurityProviders\SCHANNEL\Protocols\TLS 1.0\Server
Enabled = (DWORD): 0x0
HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\SecurityProviders\SCHANNEL\Protocols\TLS 1.1\Client
Enabled = (DWORD): 0x0
HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\SecurityProviders\SCHANNEL\Protocols\TLS 1.1\Server
Enabled = (DWORD): 0x0
HKLM\SYSTEM\CurrentControlSet\Control\SecurityProviders\SCHANNEL\Protocols\TLS 1.2\Client
DisabledByDefault = (DWORD): 0x0
Enabled = (DWORD): 0x1
HKLM\SYSTEM\CurrentControlSet\Control\SecurityProviders\SCHANNEL\Protocols\TLS 1.2\Server
DisabledByDefault = (DWORD): 0x0
Enabled = (DWORD): 0x1
HKEY_LOCAL_MACHINE\SOFTWARE\WOW6432Node\Microsoft\.NETFramework\v2.0.50727
SystemDefaultTlsVersions = (DWORD): 0x1
SchUseStrongCrypto = (DWORD): 0x1
HKEY_LOCAL_MACHINE\SOFTWARE\WOW6432Node\Microsoft\.NETFramework\v4.0.30319
SystemDefaultTlsVersions = (DWORD): 0x1
SchUseStrongCrypto = (DWORD): 0x1
HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\.NETFramework\v2.0.50727
SystemDefaultTlsVersions = (DWORD): 0x1
SchUseStrongCrypto = (DWORD): 0x1
HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\.NETFramework\v4.0.30319
SystemDefaultTlsVersions = (DWORD): 0x1
SchUseStrongCrypto = (DWORD): 0x1
HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Windows\CurrentVersion\Internet Settings\WinHttp
DefaultSecureProtocols = (DWORD): 0x800
HKEY_LOCAL_MACHINE\SOFTWARE\Wow6432Node\Microsoft\Windows\CurrentVersion\Internet Settings\WinHttp
DefaultSecureProtocols = (DWORD): 0x800
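If you would rather stage these values with a small tool than hand-edit the registry, here is a minimal C# sketch covering a representative subset of the list above (run it elevated; SCHANNEL changes generally require a reboot to take effect, and as always, test before deploying broadly):
using Microsoft.Win32;

class DisableWeakSchannel
{
    // Create the subkey if needed and set Enabled = 0 beneath the SCHANNEL root.
    static void Disable(string subKey)
    {
        using (var key = Registry.LocalMachine.CreateSubKey(
            @"SYSTEM\CurrentControlSet\Control\SecurityProviders\SCHANNEL\" + subKey))
        {
            key.SetValue("Enabled", 0, RegistryValueKind.DWord);
        }
    }

    static void Main()
    {
        // A subset of the values listed above; extend as required.
        Disable(@"Protocols\SSL 2.0\Client");
        Disable(@"Protocols\SSL 2.0\Server");
        Disable(@"Protocols\SSL 3.0\Client");
        Disable(@"Protocols\SSL 3.0\Server");
        Disable(@"Protocols\TLS 1.0\Client");
        Disable(@"Protocols\TLS 1.0\Server");
        Disable(@"Protocols\TLS 1.1\Client");
        Disable(@"Protocols\TLS 1.1\Server");
        Disable(@"Ciphers\RC4 128/128");
        Disable(@"Hashes\MD5");
    }
}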
Vulnerabilities:
RC4 based Cipher Suites
SCHANNEL – RC4/Sweet32 Vulnerability information
These two CVEs are specific to the RC4-based vulnerabilities –
https://nvd.nist.gov/vuln/detail/CVE-2013-2566
https://nvd.nist.gov/vuln/detail/CVE-2015-2808
RC4 ciphers are NO LONGER SUPPORTED
See the following – Features that are removed or deprecated in Windows 10 Fall Creators Update
TLS RC4 Ciphers to be disabled by default. For more information, see the following Windows IT Center topic:
TLS (Schannel SSP) changes in Windows 10 and Windows Server 2016 –
DisabledByDefault change for the following cipher suites:
TLS_DHE_DSS_WITH_AES_256_CBC_SHA256 (RFC 5246) in Windows 10, version 1703
TLS_DHE_DSS_WITH_AES_128_CBC_SHA256 (RFC 5246) in Windows 10, version 1703
TLS_DHE_DSS_WITH_AES_256_CBC_SHA (RFC 5246) in Windows 10, version 1703
TLS_DHE_DSS_WITH_AES_128_CBC_SHA (RFC 5246) in Windows 10, version 1703
TLS_DHE_DSS_WITH_3DES_EDE_CBC_SHA (RFC 5246) in Windows 10, version 1703
TLS_RSA_WITH_RC4_128_SHA in Windows 10, version 1709
TLS_RSA_WITH_RC4_128_MD5 in Windows 10, version 1709
Once again please refer to the previous blog I wrote that explains SCHANNEL and Cipher Suite changes and what is and is not supported in Windows operating systems – https://techcommunity.microsoft.com/t5/ask-the-directory-services-team/speaking-in-ciphers-and-other-enigmatic-tongues-8230-update/ba-p/400362
You should use this as a guide for modifying the SCHANNEL protocols, the list of default ciphers, and removing the weaker ones completely. It should be a favorite in your browser settings and currently be open in the tab right next to the one you are using to read this article.
Regarding 3DES:
Sweet32 is a cryptographic attack against short block size (64-bit block) ciphers.
Vulnerability scanners will trigger this if a 3DES cipher suite is present. In Windows Server, 3DES should not be used as the only cipher, but it is acceptable as an optional cipher suite for backward compatibility.
This is the minimum cipher in the negotiation list, so it is used only as a last resort.
TLS_RSA_WITH_3DES_EDE_CBC_SHA must not be offered on its own as it is considered inferior to the other cipher suites but should be offered for FIPS (Federal Information Processing Standards) constrained clients that do not have AES-based cipher suites available.
Microsoft also mitigates usage of this cipher by removing 3DES from the available ciphers in the FalseStart list, which prevents a MiTM (Machine-in-the-Middle) attack from forcing an encryption downgrade.
https://technet.microsoft.com/library/security/3155527.aspx
This mitigation is also listed on the website https://sweet32.info/
Vulnerability scanners should not simply search for registry keys indicating that something (3DES) is disabled. They should report on the configured cipher suites if those include 3DES.
Lucky Thirteen vulnerability mitigation:
Disabling TLS 1.0 entirely.
The removal of all cipher block chaining (CBC) ciphers. EXAMPLE – TLS_RSA_WITH_AES_256_CBC_SHA256
There are a couple of CBC ciphers that are still supported in Windows 10.
See the following – TLS Cipher Suites in Windows 10 v1903, v1909, and v2004 – Win32 apps | Microsoft Docs
Third-party (non-Microsoft) TLS implementations
I made all the changes to the SChannel registry values, and even rebooted my server but some endpoints are still showing as vulnerable when I run my security scanning software again. Why did this not fix all my problems?
Keep in mind that Microsoft is not the only TLS implementation on the scene. Java and OpenSSL are just a couple of third-party SSL/TLS implementations that do not leverage the Microsoft SCHANNEL Security Support Provider Interface (SSPI) at all. If you have implemented the above registry values, rebooted the server, and the scanning tool is still showing a vulnerability, it is time to start thinking that the flagged application may not be using the Microsoft implementation of SSL/TLS. To investigate this:
The first thing to do is look at your scan report and determine what network port or ports the scanning tool is indicating are still vulnerable.
On the computer being reported as vulnerable, open an elevated command prompt and type: netstat -anob > %ComputerName%_Netstat.txt
Once it is done, open the text file created and search for the port determined in step 1.
It will give you the process name that is listening on that port. If it is Java.exe/Javaw.exe or OpenSSL.exe, then this is not something Microsoft support is going to be able to help with, and we will redirect you to the vendor of your third-party application.
If this is the case, you will need to contact those vendors to get those applications configured properly.
Enabling verbose SCHANNEL logging may also help you determine which applications on your servers are using SCHANNEL. Verbose logging will show successful and failing connections, providing the protocol and ciphers being used, in addition to the computer from which the connection is coming:
HKLM\SYSTEM\CurrentControlSet\Control\SecurityProviders\SCHANNEL
EventLogging (DWORD)
1 (Basic)
7 (Verbose)
Be aware that Intune policy can also be leveraged to manage cipher suites. These settings may interfere with your SCHANNEL policies and configurations.
HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\Cryptography\MDM\Policies\SSL
CipherSuites REG_SZ
I trust you have found this content both illuminating and enjoyable in your efforts to secure your SCHANNEL environment without sacrificing the necessary functionality. Should you encounter any hurdles along the way, please don’t hesitate to reach out to us for assistance. We’re here to support your continued success with Windows. Happy Hunting!
Jim “How I learned to stop worrying and ♥ Crypto” Tierney
Right-size your PTU deployment and save big
Context
Azure OpenAI Service’s Provisioned Throughput Units (known as “PTUs”) have been all the rage over the past few months. Every enterprise customer has been wanting to get their hands on their own slice of Azure OpenAI Service. With PTUs, they can run their GenAI workloads in production at scale with predictable latency and without having to worry about noisy neighbors. Customers of all sizes and from all verticals have been developing groundbreaking applications, usually starting with the Pay-as-you-go (PayGo) flavor of Azure OpenAI. When the time comes to deploy an enterprise-grade application to production, however, most rely on reserving capacity with PTUs. These are deployed within your own Azure subscription and allow you to enjoy unencumbered access to the latest models from OpenAI, such as GPT-4 Turbo. Because PTUs are available 24/7 throughout the month, customers need to shift the paradigm from utilizing tokens to utilizing time when considering cost. With this shift often comes the challenge of knowing how to right-size their PTUs.
To aid in that exercise, Microsoft provides tools such as the PTU calculator within the AI Studio experience. These tools, however, make assumptions such as PTUs being able to handle peak load. While this could be a valid approach in many cases, it’s only one way of thinking about choosing the right size for a deployment. Customers often need to consider more variables, including sophisticated architectures to get the best return on their investment.
One pattern that we have seen emerge is the spillover, or bursting, pattern. With this pattern, you do not provision PTUs for peak traffic. Instead, you define the amount of PTU-serviced traffic that the business can agree upon, and you route the overflow to a PayGo deployment. For example, your business may decide that it’s acceptable to have 90% of the traffic serviced by the PTU deployment with a known latency and to have 10% of overflow traffic serviced with unpredictable performance through a PayGo deployment. I’ll go into more detail below on when to invoke this pattern, but if you are looking for a technical solution to implement it, you may check out this post: Enable GPT failover with Azure OpenAI and Azure API Management – Microsoft Community Hub. The twist is that depending on the profile of your application, this 10% degraded performance can save you north of 50% in unused PTU cost.
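As a rough illustration of the spillover pattern (not the linked post’s exact implementation; the endpoint URLs and API version below are placeholders), the routing logic can be as simple as trying the PTU deployment first and falling back to PayGo on HTTP 429:
using System;
using System.Net;
using System.Net.Http;
using System.Text;
using System.Threading.Tasks;

class SpilloverRouter
{
    // Placeholder deployment endpoints; substitute your own resource and deployments.
    const string PtuEndpoint = "https://myaoai.openai.azure.com/openai/deployments/gpt4-ptu/chat/completions?api-version=2023-05-15";
    const string PayGoEndpoint = "https://myaoai.openai.azure.com/openai/deployments/gpt4-paygo/chat/completions?api-version=2023-05-15";

    static readonly HttpClient Http = new HttpClient();

    static async Task<string> CompleteAsync(string requestJson, string apiKey)
    {
        // Try the PTU deployment first...
        var response = await PostAsync(PtuEndpoint, requestJson, apiKey);

        // ...and spill over to PayGo when PTU capacity is exhausted (HTTP 429).
        if (response.StatusCode == (HttpStatusCode)429)
            response = await PostAsync(PayGoEndpoint, requestJson, apiKey);

        response.EnsureSuccessStatusCode();
        return await response.Content.ReadAsStringAsync();
    }

    static Task<HttpResponseMessage> PostAsync(string url, string json, string apiKey)
    {
        var request = new HttpRequestMessage(HttpMethod.Post, url)
        {
            Content = new StringContent(json, Encoding.UTF8, "application/json")
        };
        request.Headers.Add("api-key", apiKey);
        return Http.SendAsync(request);
    }
}
In production you would typically put this behind a gateway such as Azure API Management (as in the post above) rather than in client code, but the decision point is the same.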
If, as you’re reading this, you find yourself in this predicament, you have come to the right place. In this blog post, we try to convey the message that PTUs done right are not necessarily expensive by characterizing customers’ scenarios anecdotally. The three application scenarios we will review are known as: The Unicorn, The No-Brainer, and The Problem Child.
The Unicorn
We will go quickly over the Unicorn, since nobody has ever seen it and it might not even exist. But just in case: the Unicorn application sends/receives tokens on a perfectly steady basis, weekdays, weekends, daytime, nighttime. If you ever have one of those, PTU makes perfect sense; you get maximum value and leave no crumb on the table. And if your throughput is meaningful in addition to being constant, you will likely also save lots of money compared to a PayGo deployment, in addition to reaping the predictable and low latency that comes with PTUs.
The No–Brainer
Next up is our No-Brainer application. The No-Brainer application profile has mild peaks and valleys. The application sends a constant baseline of tokens to the model, but perhaps there are a couple of peak hours during the day where the application sends a little more. In this case, you sure could provision your PTU deployment to cover the valley traffic and send anything extra to a PayGo deployment. However, in the No-Brainer application, the distance between our peak and valley is minimal, and, in this case, the juice might not be worth the squeeze. Do we want to add complexity to our application? Do we want to invest the engineering time and effort to add routing logic? Do we want to introduce possibly-degraded service to our application, and perhaps not even be able to provision a smaller number of PTU increments? Again, it all comes down to the distance between your peaks and valleys. If those are close, purchase enough PTUs to cover peak. No brainer.
The Problem Child
The Problem Child is that application where the traffic is bursty in nature and the variance in throughput is high. Perhaps the end of the quarter is near, and the company is behind on revenue, so every seller is hitting their sales copilot hard for a couple days in an attempt to bridge the gap to quota. How do we best cover the Problem Child with PTUs?
Option 1: Provision for peak
As we discussed above, our first inclination could be to provision for peak, and that is also what most calculators will assume you want to do so that you can cover all demand conservatively. In this instance, you maximize user experience because 100% of your traffic is covered by your PTU deployment and there is no such thing as degraded service. Everyone gets the same latency for the same request every time. However, this is the costly way to manage this application. If you cannot use your PTU deployment outside peak time, you are leaving PTU value on the table. Some customers are lucky enough to have both real-time and batch use cases. In this case, the real-time use cases utilize the PTU deployment during business hours; during downtime, the customer is then free to utilize the deployment for the batch inferencing use cases and still reap the PTU value. Other customers operate across several time zones, and when one team goes offline for the day, another team 8 hours behind comes online, and the application maintains a steady stream of tokens to the PTU deployment. But for a lot of customers, there isn’t a way to use the PTU deployment outside of peak time, and provisioning for peak might not always be the soundest business decision. It depends on budgets, UX constraints and, importantly, how narrow, tall, and frequent the peak is.
Option 2: Provision for baseline
In option 2, the business is amenable to a trade-off. With this trade-off, we bring our Azure OpenAI cost down significantly at the expense of "some" user experience. The hard part is to determine how much of the user experience we are willing to sacrifice, and at what monetary gain. The idea here is to evaluate the application on a PayGo deployment and see how it performs; we can consider this to be our degraded user experience. If it so happens that our peaks are tall, narrow and rare, and if we are willing to say that it’s acceptable for a small slice of our traffic to experience degraded performance during peak time, then it is highly conceivable that sacrificing 5% of your requests by sending them to a PayGo deployment could yield 30%, 40%, maybe even 50% savings compared to option 1, provisioning for peak.
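To make this concrete, below is a minimal client-side sketch of such a spillover, written in PowerShell against the Azure OpenAI REST API. The resource names, deployment names and API version are hypothetical placeholders, and the sketch assumes the PTU deployment returns HTTP 429 when it is at capacity; production implementations often put this routing in a gateway (for example, Azure API Management) rather than in the client.

# Minimal spillover sketch: try the PTU deployment first, fall back to PayGo on 429.
# All endpoint values below are hypothetical placeholders.
function Invoke-ChatWithSpillover {
    param(
        [Parameter(Mandatory)] [hashtable[]] $Messages,
        [string] $PtuUri   = "https://<ptu-resource>.openai.azure.com/openai/deployments/<ptu-deployment>/chat/completions?api-version=2024-02-01",
        [string] $PayGoUri = "https://<paygo-resource>.openai.azure.com/openai/deployments/<paygo-deployment>/chat/completions?api-version=2024-02-01",
        [string] $ApiKey   = $env:AZURE_OPENAI_API_KEY
    )
    $headers = @{ "api-key" = $ApiKey }
    $body    = @{ messages = $Messages } | ConvertTo-Json -Depth 10
    try {
        # Send to the PTU deployment first to maximize utilization of the reserved capacity.
        return Invoke-RestMethod -Method Post -Uri $PtuUri -Headers $headers -ContentType "application/json" -Body $body
    }
    catch {
        # If the PTU deployment is at capacity (HTTP 429), spill this request over to PayGo,
        # accepting PayGo latency for this slice of traffic.
        if ($_.Exception.Response -and [int]$_.Exception.Response.StatusCode -eq 429) {
            return Invoke-RestMethod -Method Post -Uri $PayGoUri -Headers $headers -ContentType "application/json" -Body $body
        }
        throw
    }
}

Called like Invoke-ChatWithSpillover -Messages @(@{ role = "user"; content = "Draft a renewal email" }), everything the PTU deployment can absorb stays on PTU, and only the overflow pays PayGo rates and PayGo latency.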
The realized savings will be a function of the area in green below.
Conclusion
PTUs can be perceived as expensive, but that perception often rests on the assumption that you must provision for peak. Provisioning for peak can indeed be the best way to go if your business is such that you always want minimum latency and the best possible user experience. However, if you are willing to combine PTUs with a little bit of PayGo (and your application profile lends itself to it), you could realize significant savings and reinvest the leftovers in your next GenAI project on Microsoft Azure… and also buy me a latte.
Much appreciation to my co-author, Kit Papandrew.
Migrate classic Cloud Service to CSES when SKU is unsupported without original project
With the impending retirement of the classic Cloud Service (CS) on August 31st, 2024, an increasing number of users have initiated the migration of their classic Cloud Service to Cloud Service Extended Support (CSES). To facilitate this transition, an official feature known as in-place migration has been introduced, enabling the seamless migration of classic CS to CSES without incurring any downtime. However, certain limitations exist, with the VM size used by the CS role being a notable factor. As per the documentation, the A-series, encompassing the Small, Medium, and Large VM sizes, is no longer supported in CSES, necessitating conversion to corresponding supported VM sizes as a preliminary step.
To apply a change to the VM size used in a classic CS, a redeployment/upgrade is required after modifying the VM size in the .csdef file. While this is generally a straightforward operation, in cases where the project deployed in the classic CS is considerably dated, there is a possibility that the original project has been lost. Consequently, re-packaging the project into .cspkg and .cscfg files for redeployment/upgrade becomes infeasible.
This blog will primarily address this specific scenario and outline strategies for resolving this predicament.
Prerequisites:
A healthy and running classic CS with the project deployed in the production slot, where the VM size of at least one role is A-series.
A classic Storage Account under the same subscription as the classic CS. (The UI to create a classic Storage Account in the Azure Portal is already hidden because this resource type is closed for new creation, but for now it is still possible to create one by command, for example: New-AzResource -ResourceName <accountname> -ResourceGroupName <resourcegroupname> -ResourceType "Microsoft.ClassicStorage/StorageAccounts" -Location <location> -Properties @{ AccountType = "Standard_LRS" } -ApiVersion "2015-06-01")
The Az PowerShell module installed on the local machine.
Attention! With this approach, a short downtime is unavoidable. If this needs to be applied to a production environment, please run the same test in another environment first.
Details:
First, refer to New Deployment Based On Existing Classic Cloud Service – Microsoft Community Hub to get the .cspkg and .cscfg files. (Please remember to install the .pfx certificate on the machine from which the Get Package request will be sent.)
The expected result is that the .cspkg and .cscfg files will appear in the classic storage account container. Please download them to the local machine.
(optional) If the new CSES needs to use the same IP address as the original classic CS, please follow these steps.
a. Install the legacy Azure PowerShell module.
b. Reserve the current IP as a reserved IP address:
New-AzureReservedIP -ReservedIPName <reserved ip name> -Location <classic CS location> -ServiceName <classic CS name>
The reserved IP address will be found in a resource group called Default-Networking.
c. Remove the association between the reserved IP and the classic CS.
After running this command, the classic CS IP address will change. If the application/client side uses the IP address to connect to the classic CS, it will start failing.
Remove-AzureReservedIPAssociation -ReservedIPName <reserved ip name> -ServiceName <classic CS name>
The reference for steps b and c can be found here: Manage Azure reserved IP addresses (Classic) | Microsoft Learn
d. Referring to this document, convert the reserved IP address into a public IP address that can be used by CSES:
Move-AzureReservedIP -ReservedIPName <reserved IP name> -Validate
Move-AzureReservedIP -ReservedIPName <reserved IP name> -Prepare
Move-AzureReservedIP -ReservedIPName <reserved IP name> -Commit
After the commands finish, you will find the converted public IP in a resource group called <publicipaddress-name>-Migrated.
3. By default, the public IP address has no domain name. If one is needed, please configure it on the Configuration page.
4. Move the public IP address to the same resource group where the new CSES resource will be created.
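A minimal sketch of that move with the Az module (hypothetical names; the source resource group follows the <publicipaddress-name>-Migrated pattern from step d):

# Look up the migrated public IP and move it into the target resource group
$publicIp = Get-AzResource -ResourceGroupName "<publicipaddress-name>-Migrated" -Name "<public IP name>" -ResourceType "Microsoft.Network/publicIPAddresses"
Move-AzResource -DestinationResourceGroupName "<CSES resource group>" -ResourceId $publicIp.ResourceId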
(optional) If the classic CS is using certificates, create a Key Vault resource in the same subscription and same region, then upload the certificate(s) into Key Vault > Certificates. For more information, please refer here.
Create a Virtual Network in the same resource group and same region as the new CSES resource.
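For example, a minimal sketch (placeholder names and address prefixes; the virtual network and subnet names must match what is referenced in the .cscfg file in the next step):

# Create a subnet configuration, then the virtual network that contains it
$subnet = New-AzVirtualNetworkSubnetConfig -Name "<subnet name>" -AddressPrefix "10.0.0.0/24"
New-AzVirtualNetwork -Name "<vnet name>" -ResourceGroupName "<CSES resource group>" -Location "<CSES region>" -AddressPrefix "10.0.0.0/16" -Subnet $subnet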
Open the downloaded .cscfg file with any text editor and add/modify the NetworkConfiguration part:
<NetworkConfiguration>
  <VirtualNetworkSite name="xxx" />
  <AddressAssignments>
    <InstanceAddress roleName="xxx">
      <Subnets>
        <Subnet name="xxx" />
      </Subnets>
    </InstanceAddress>
  </AddressAssignments>
</NetworkConfiguration>
(optional) If step 2 was followed, please also add the ReservedIPs part:
<NetworkConfiguration>
  <VirtualNetworkSite name="xxx" />
  <AddressAssignments>
    <InstanceAddress roleName="xxx">
      <Subnets>
        <Subnet name="xxx" />
      </Subnets>
    </InstanceAddress>
    <ReservedIPs>
      <ReservedIP name="xxx" />
    </ReservedIPs>
  </AddressAssignments>
</NetworkConfiguration>
After modifying the .cscfg file, please upload the .cscfg and .cspkg files into a storage account blob container, then generate and note down the SAS URLs of these two files.
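As a minimal sketch with the Az.Storage module (assumed account, container and file names; any tool that can produce a read SAS works just as well):

# Upload the package and configuration files to a blob container
$ctx = New-AzStorageContext -StorageAccountName "<accountname>" -StorageAccountKey "<accountkey>"
Set-AzStorageBlobContent -File ".\<project>.cspkg" -Container "<container>" -Blob "<project>.cspkg" -Context $ctx
Set-AzStorageBlobContent -File ".\<project>.cscfg" -Container "<container>" -Blob "<project>.cscfg" -Context $ctx
# Generate read-only SAS URLs, valid for 8 hours; note these down for the deployment step
New-AzStorageBlobSASToken -Container "<container>" -Blob "<project>.cspkg" -Permission r -ExpiryTime (Get-Date).AddHours(8) -FullUri -Context $ctx
New-AzStorageBlobSASToken -Container "<container>" -Blob "<project>.cscfg" -Permission r -ExpiryTime (Get-Date).AddHours(8) -FullUri -Context $ctx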
If step 2 was not followed, please manually create a public IP address with the Basic SKU and the static IP address assignment mode.
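A minimal sketch (hypothetical names):

# Create a Basic-SKU public IP with static allocation for the CSES load balancer
New-AzPublicIpAddress -Name "<public IP name>" -ResourceGroupName "<CSES resource group>" -Location "<CSES region>" -Sku Basic -AllocationMethod Static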
To create the new CSES resource, there are two possible ways: using a PowerShell command or using an ARM template. The key point is to use the SKU override feature to replace the VM size setting in the .csdef file. (Attention! Since the VM size configured inside the .csdef file is still the unsupported VM size, please remember to use the SKU override feature in the ARM template or PowerShell command for every future deployment/upgrade as well. Otherwise the deployment/upgrade will fail.)
Using PowerShell script:
If the Key Vault is not used, remove the first $osProfile part and the last -OSProfile parameter of the New-AzCloudService command.
# Key Vault / certificate configuration (skip this block if no certificate is used)
$keyVault = Get-AzKeyVault -ResourceGroupName <key vault resource group> -VaultName <key vault resource name>
$certificate = Get-AzKeyVaultCertificate -VaultName <key vault resource name> -Name <certificate name in Key Vault>
$secretGroup = New-AzCloudServiceVaultSecretGroupObject -Id $keyVault.ResourceId -CertificateUrl $certificate.SecretId
$osProfile = @{secret = @($secretGroup)}
# SAS URLs of the uploaded .cspkg and .cscfg files
$cspkgSAS = <SAS URL of cspkg file>
$cscfgSAS = <SAS URL of cscfg file>
# Role profile: this is where the unsupported VM size from the .csdef file is overridden
$role1 = New-AzCloudServiceRoleProfilePropertiesObject -Name <Role1 name> -SkuName <new supported vm size> -SkuTier 'Standard' -SkuCapacity <instance number>
$role2 = New-AzCloudServiceRoleProfilePropertiesObject -Name <Role2 name> -SkuName <new supported vm size> -SkuTier 'Standard' -SkuCapacity <instance number>
$roleProfile = @{role = @($role1, $role2)}
# Network profile: bind the load balancer frontend to the prepared public IP address
$publicIP = Get-AzPublicIpAddress -ResourceGroupName <public IP resource group> -Name <public IP name>
$feIpConfig = New-AzCloudServiceLoadBalancerFrontendIPConfigurationObject -Name <frontend IP config name> -PublicIPAddressId $publicIP.Id
$loadBalancerConfig = New-AzCloudServiceLoadBalancerConfigurationObject -Name <load balancer config> -FrontendIPConfiguration $feIpConfig
$networkProfile = @{loadBalancerConfiguration = $loadBalancerConfig}
# Create the Cloud Service (Extended Support) resource; -AllowModelOverride enables the SKU override
New-AzCloudService -Name <CSES name> -ResourceGroupName <resource group name> -Location <CSES Location> -AllowModelOverride -PackageUrl $cspkgSAS -ConfigurationUrl $cscfgSAS -UpgradeMode 'Auto' -RoleProfile $roleProfile -NetworkProfile $networkProfile -OSProfile $osProfile
Using ARM template:
If Key Vault is not used, remember to remove the secrets in osProfile and keep it empty ("osProfile": {}), remove the secrets parameter part from the template file, and remove the secrets parameter from the parameter file.
Template file:
{
  "$schema": "https://schema.management.azure.com/schemas/2015-01-01/deploymentTemplate.json#",
  "contentVersion": "1.0.0.0",
  "parameters": {
    "cloudServiceName": {
      "type": "string",
      "metadata": {
        "description": "Name of the cloud service"
      }
    },
    "location": {
      "type": "string",
      "metadata": {
        "description": "Location of the cloud service"
      }
    },
    "deploymentLabel": {
      "type": "string",
      "metadata": {
        "description": "Label of the deployment"
      }
    },
    "packageSasUri": {
      "type": "securestring",
      "metadata": {
        "description": "SAS Uri of the CSPKG file to deploy"
      }
    },
    "configurationSasUri": {
      "type": "securestring",
      "metadata": {
        "description": "SAS Uri of the service configuration (.cscfg)"
      }
    },
    "roles": {
      "type": "array",
      "metadata": {
        "description": "Roles created in the cloud service application"
      }
    },
    "publicIPName": {
      "type": "string",
      "defaultValue": "contosocsIP",
      "metadata": {
        "description": "Name of public IP address"
      }
    },
    "upgradeMode": {
      "type": "string",
      "defaultValue": "Auto",
      "metadata": {
        "description": "UpgradeMode of the CloudService"
      }
    },
    "secrets": {
      "type": "array",
      "metadata": {
        "description": "The key vault id and certificates referenced in the .cscfg file"
      }
    }
  },
  "variables": {
    "cloudServiceName": "[parameters('cloudServiceName')]",
    "subscriptionID": "[subscription().subscriptionId]",
    "lbName": "[concat(variables('cloudServiceName'), 'LB')]",
    "lbFEName": "[concat(variables('cloudServiceName'), 'LBFE')]",
    "resourcePrefix": "[concat('/subscriptions/', variables('subscriptionID'), '/resourceGroups/', resourceGroup().name, '/providers/')]"
  },
  "resources": [
    {
      "apiVersion": "2020-10-01-preview",
      "type": "Microsoft.Compute/cloudServices",
      "name": "[variables('cloudServiceName')]",
      "location": "[parameters('location')]",
      "tags": {
        "DeploymentLabel": "[parameters('deploymentLabel')]"
      },
      "properties": {
        "packageUrl": "[parameters('packageSasUri')]",
        "configurationUrl": "[parameters('configurationSasUri')]",
        "upgradeMode": "[parameters('upgradeMode')]",
        "allowModelOverride": true,
        "roleProfile": {
          "roles": "[parameters('roles')]"
        },
        "networkProfile": {
          "loadBalancerConfigurations": [
            {
              "id": "[concat(variables('resourcePrefix'), 'Microsoft.Network/loadBalancers/', variables('lbName'))]",
              "name": "[variables('lbName')]",
              "properties": {
                "frontendIPConfigurations": [
                  {
                    "name": "[variables('lbFEName')]",
                    "properties": {
                      "publicIPAddress": {
                        "id": "[concat(variables('resourcePrefix'), 'Microsoft.Network/publicIPAddresses/', parameters('publicIPName'))]"
                      }
                    }
                  }
                ]
              }
            }
          ]
        },
        "osProfile": {
          "secrets": "[parameters('secrets')]"
        }
      }
    }
  ]
}
Parameter file:
{
  "$schema": "https://schema.management.azure.com/schemas/2015-01-01/deploymentParameters.json#",
  "contentVersion": "1.0.0.0",
  "parameters": {
    "cloudServiceName": {
      "value": <CSES name>
    },
    "location": {
      "value": <CSES region>
    },
    "deploymentLabel": {
      "value": "deployment label of cses by ARM template"
    },
    "packageSasUri": {
      "value": <.cspkg SAS URL>
    },
    "configurationSasUri": {
      "value": <.cscfg SAS URL>
    },
    "roles": {
      "value": [
        {
          "name": <role1 name>,
          "sku": {
            "name": <new supported VM size>,
            "tier": "Standard",
            "capacity": <instance number>
          }
        },
        {
          "name": <role2 name>,
          "sku": {
            "name": <new supported VM size>,
            "tier": "Standard",
            "capacity": <instance number>
          }
        }
      ]
    },
    "publicIPName": {
      "value": <public IP address name>
    },
    "upgradeMode": {
      "value": "Auto"
    },
    "secrets": {
      "value": [
        {
          "sourceVault": {
            "id": "/subscriptions/<subscription id>/resourceGroups/<resource group name>/providers/Microsoft.KeyVault/vaults/<key vault name>"
          },
          "vaultCertificates": [
            {
              "certificateUrl": "https://<key vault name>.vault.azure.net/secrets/<certificate name>/<secret ID>"
            }
          ]
        }
      ]
    }
  }
}
Result: the new CSES resource is created successfully, with the role VM sizes overridden to the supported sizes.