Month: October 2024
Sites need to be excluded from DPI/SSL for installing Windows products
Hi guys.
After activating DPI/SSL on our SonicWall, I cannot install Microsoft products.
I need to know which sites/addresses I need to exclude from this rule.
E.g., *.microsoft.com
Thanks,
Bogdan
Building Real-Time Web Apps with SignalR, WebAssembly, and ASP.NET Core API
Have you ever worked on a chat application where you need to display messages in real-time or on applications that need to reflect changes based on backend updates? One common method used previously is polling the backend for changes at specific intervals. However, this approach is not ideal as it slows down the client-side application and increases traffic on the backend. One solution to this problem is SignalR.
In this blog, we’ll explore how to build a real-time web application using three cutting-edge technologies: SignalR, WebAssembly, and ASP.NET Core API. SignalR simplifies the process of adding real-time web functionality, WebAssembly brings near-native performance to web apps, and ASP.NET Core API provides a robust and scalable backend. By combining these technologies, you can create web applications that are both highly responsive and performant.
What is SignalR?
SignalR is a library for ASP.NET that enables real-time web functionality. It allows server-side code to push content to connected clients instantly as it becomes available. SignalR supports multiple communication protocols, including WebSockets, and automatically falls back to other compatible protocols for older browsers. This makes it an excellent choice for applications that require high-frequency updates, like chat applications, gaming, or live data feeds.
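For example, a .NET client can declare which transports it is willing to use and let SignalR negotiate the best one available. The following is a minimal sketch using the Microsoft.AspNetCore.SignalR.Client package; the URL is a placeholder and is not part of the sample built below:
using Microsoft.AspNetCore.Http.Connections;
using Microsoft.AspNetCore.SignalR.Client;

// Placeholder hub URL; point this at your own hub endpoint
var connection = new HubConnectionBuilder()
    .WithUrl("https://localhost:5001/chatHub", options =>
    {
        // Prefer WebSockets, but allow fallback transports for older environments
        options.Transports = HttpTransportType.WebSockets |
                             HttpTransportType.ServerSentEvents |
                             HttpTransportType.LongPolling;
    })
    .WithAutomaticReconnect()
    .Build();

await connection.StartAsync();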
Setting Up the ASP.NET Core API
Create a New ASP.NET Core Project
First, create a new ASP.NET Core Web API project. Open your terminal or command prompt and run the following command:
dotnet new webapi -n RealTimeApp
This command creates a new ASP.NET Core Web API project named RealTimeApp.
Install SignalR
Next, navigate to the project directory and install the SignalR package:
cd RealTimeApp
dotnet add package Microsoft.AspNetCore.SignalR
Configure SignalR in Program.cs
Open the Program.cs file and configure SignalR. Your Program.cs file should look like this:
var builder = WebApplication.CreateBuilder(args);

builder.Services.AddEndpointsApiExplorer();
builder.Services.AddSwaggerGen();

builder.Services.AddCors(options =>
{
    options.AddPolicy("CorsPolicy", policy =>
    {
        policy.WithOrigins("http://localhost:5051")
            .AllowAnyHeader()
            .AllowAnyMethod()
            .AllowCredentials();
    });
});

builder.Services.AddControllers();

// Adding SignalR
builder.Services.AddSignalR();

var app = builder.Build();

if (app.Environment.IsDevelopment())
{
    app.UseSwagger();
    app.UseSwaggerUI();
}

app.UseHttpsRedirection();
app.UseCors("CorsPolicy");
app.UseRouting();

// Adding the endpoint for the ChatHub
app.UseEndpoints(endpoints =>
{
    endpoints.MapControllers();
    endpoints.MapHub<ChatHub>("/chatHub");
});

app.Run();
Create the SignalR Hub
Create a new class called ChatHub.cs in the project and define the SendMessage method:
using Microsoft.AspNetCore.SignalR;

public class ChatHub : Hub
{
    public async Task SendMessage(string user, string message)
    {
        await Clients.All.SendAsync("ReceiveMessage", user, message);
    }
}
Integrating WebAssembly
Create a New Blazor WebAssembly Project
First, create a new Blazor WebAssembly project. Open your terminal or command prompt and run the following command:
dotnet new blazorwasm -n RealTimeApp.Client
This command creates a new Blazor WebAssembly project named RealTimeApp.Client.
Add SignalR Client Library
Navigate to the project directory and add the SignalR client library:
cd RealTimeApp.Client
dotnet add package Microsoft.AspNetCore.SignalR.Client
Create the Blazor Component
Create a new Razor component for the chat interface. You can name it Chat.razor.
Add the following content to Chat.razor:
@page "/chat"
@using Microsoft.AspNetCore.SignalR.Client
@inject NavigationManager NavigationManager

<h3>Chat</h3>
<input @bind="userInput" placeholder="Your name" class="form-control" />
<br>
<input @bind="messageInput" placeholder="Your message" @onkeypress='@(async e => { if (e.Key == "Enter") await Send(); })' class="form-control" />
<br>
<div style="text-align: right">
    <button @onclick="Send" class="btn btn-primary">Send</button>
</div>
<ul>
    @foreach (var message in messages)
    {
        <li>@message</li>
    }
</ul>

@code {
    private HubConnection? hubConnection;
    private string userInput = string.Empty;
    private string messageInput = string.Empty;
    private List<string> messages = new List<string>();

    protected override async Task OnInitializedAsync()
    {
        hubConnection = new HubConnectionBuilder()
            .WithUrl("http://localhost:5258/chathub")
            .Build();

        hubConnection.On<string, string>("ReceiveMessage", (user, message) =>
        {
            var encodedMsg = $"{user}: {message}";
            messages.Add(encodedMsg);
            InvokeAsync(StateHasChanged);
        });

        await hubConnection.StartAsync();
    }

    private async Task Send()
    {
        if (hubConnection is not null)
        {
            await hubConnection.SendAsync("SendMessage", userInput, messageInput);
            messageInput = string.Empty;
        }
    }
}
Add Navigation to Chat.razor
Ensure your Chat.razor component can be accessed via navigation. Update your NavMenu.razor file to include a link to the chat component:
<ul class="nav flex-column">
    <li class="nav-item px-3">
        <NavLink class="nav-link" href="">
            <span class="oi oi-home" aria-hidden="true"></span> Home
        </NavLink>
    </li>
    <li class="nav-item px-3">
        <NavLink class="nav-link" href="chat">
            <span class="oi oi-chat" aria-hidden="true"></span> Chat
        </NavLink>
    </li>
</ul>
Run the Application
Run the ASP.NET Core API:
dotnet run --project RealTimeApp
Run the Blazor WebAssembly app:
dotnet run --project RealTimeApp.Client
Navigate to /chat in your Blazor WebAssembly app to see the chat interface. You should be able to send messages, which will be broadcast to all connected clients in real time using SignalR.
Conclusion
In this blog, we’ve explored how to build a real-time web application using SignalR, WebAssembly, and ASP.NET Core API. By leveraging these technologies, we created a responsive chat application capable of instant updates without the need for constant polling. SignalR facilitated real-time communication, WebAssembly provided near-native performance, and ASP.NET Core API served as a robust backend.
We started by setting up an ASP.NET Core API and integrating SignalR to handle real-time messages. Then, we created a Blazor WebAssembly client that interacts with the SignalR hub to send and receive messages. This combination of technologies not only enhances performance but also simplifies the development of real-time web applications.
I hope this tutorial has provided you with a solid foundation for building your own real-time applications. To see the complete source code and explore further, visit the GitHub repository: RealTimeApp
Feel free to clone the repository, experiment with the code, and extend the application to suit your needs.
To explore more on this topic, you can check out the following resources:
Tutorial: Get started with ASP.NET Core SignalR
Introduction to SignalR
Happy coding!
Evaluating generative AI: Best practices for developers
As a developer working with generative AI, you’ve likely marveled at the impressive outputs your models can produce. But how do you ensure these outputs consistently meet your quality standards and business requirements? Enter the essential world of generative AI evaluation!
Why evaluation matters
Evaluating generative AI output is not just a best practice—it’s essential for building robust, reliable applications. Here’s why:
Quality assurance: Ensures your AI-generated content meets your standards.
Performance tracking: Helps you monitor and improve your app’s performance over time.
User trust: Builds confidence in your AI application among end-users.
Regulatory compliance: Helps meet emerging AI governance requirements.
Best practices for generative AI evaluation
In the rapidly advancing field of generative AI, ensuring the reliability and quality of your apps’ output is paramount. As developers, we strive to create applications that not only astound users with their capabilities but also maintain a high level of trust and integrity. Achieving this requires a systematic approach to evaluating our AI systems. Let’s dive into some best practices for evaluating generative AI.
Define clear metrics
Establishing clear metrics serves as the cornerstone for evaluating the efficacy and reliability of your app. Without well-defined criteria, the evaluation process can become subjective and inconsistent, leading to misleading conclusions. Clear metrics transform abstract notions of “quality” into tangible, measurable targets, providing a structured framework that guides both development and iteration. This clarity is crucial for aligning the output with business goals and user expectations.
Context is key
Always evaluate outputs in the context of their intended use case. For example, while generative AI used in a creative writing app may prioritize originality and narrative flow, these same criteria would be inadequate for evaluating a customer support app. Here, the primary metrics would focus on accuracy and relevance to user queries. The context in which the AI operates fundamentally shifts the framework of evaluation, demanding tailored criteria that align with the specific goals and user expectations of the application. Therefore, understanding the context ensures that the evaluation process is both relevant and rigorous, providing meaningful insights that drive improvement.
Use a multi-faceted approach
Relying on a single method for evaluating generative AI can yield an incomplete and potentially skewed understanding of its performance. By adopting a multi-faceted approach, you leverage the strengths of various evaluation techniques, providing a more holistic view of your AI’s capabilities and limitations. This comprehensive strategy combines quantitative metrics and qualitative assessments, capturing a broader range of performance indicators.
Quantitative metrics, such as perplexity and BLEU scores, offer objective, repeatable measurements that are essential for tracking improvements over time. However, these metrics alone often fall short of capturing the nuanced requirements of real-world applications. This is where qualitative methods, including expert reviews and user feedback, come into play. These methods add a layer of human judgment, accounting for context and subjective experience that automated metrics might miss.
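To make the quantitative side concrete, here is a tiny, self-contained sketch (not tied to any particular product) that scores a generated answer against a reference answer with NLTK's sentence-level BLEU; the example strings and the smoothing choice are illustrative assumptions:
from nltk.translate.bleu_score import sentence_bleu, SmoothingFunction

# Hypothetical reference answer and model output, just to show the mechanics
reference = "the cat sat on the mat".split()
candidate = "the cat is on the mat".split()

# Smoothing avoids zero scores when higher-order n-grams have no overlap
score = sentence_bleu([reference], candidate, smoothing_function=SmoothingFunction().method1)
print(f"BLEU: {score:.3f}")
In practice, corpus-level BLEU over a held-out evaluation set is more meaningful than a single sentence score, and it should sit alongside the qualitative review described next.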
Implement continuous evaluation
The effectiveness and reliability of your applications are not static metrics. They require regular and ongoing scrutiny to ensure they consistently meet the high standards set forth during their development. Continuous evaluation is therefore essential, as it allows developers to identify and rectify issues in real-time, ensuring that the AI systems adapt to new data and evolving user needs. This approach fosters a proactive stance, enabling swift improvements and maintaining the trust and satisfaction of the end-users.
Frequent and scheduled evaluations should be embedded into the development cycle. Ideally, evaluations should be conducted after every significant iteration or update to the AI model or system prompt. Additionally, periodic assessments, perhaps monthly or quarterly, can help in tracking the long-term performance and stability of the AI system. By maintaining this rhythm, developers can quickly respond to any degradation in quality, keeping the application robust and aligned with its intended objectives.
Don’t treat evaluation as a one-time task! Set up systems for ongoing monitoring and feedback loops.
Dive deeper with our new evaluations Learn Path
We’re excited to announce our new Learn path designed to take your evaluation skills to the next level!
Module 1: Evaluating generative AI applications
In this module, you'll learn the fundamental concepts of evaluating generative AI applications. It serves as a great starting point for anyone who's new to evaluations in the context of generative AI and explores topics such as:
Applying best practices for choosing evaluation data
Understanding the purpose of and types of synthetic data for evaluation
Comprehending the scope of built-in metrics
Choosing the appropriate metrics based on your AI system use case
Understanding how to interpret evaluation results
Module 2: Run evaluations and generate synthetic datasets
In this self-paced, code-first module, you’ll run evaluations and generate synthetic datasets with the Azure AI Evaluation SDK. This module provides a series of exercises within Jupyter notebooks, created to provide step-by-step instruction across various scenarios. The exercises in this module include:
Assessing a model’s response using performance and quality metrics
Assessing a model’s response using risk and safety metrics
Running an evaluation and tracking the results in Azure AI Studio
Creating a custom evaluator with Prompty
Sending queries to an endpoint and running evaluators on the resulting query and response
Generating a synthetic dataset using conversation starters
We recommend completing both modules together within the Learn path to maximize comprehension by applying the skills that you’ll learn!
Visit the Learn path to get started: aka.ms/RAI-evaluations-path!
The path forward
As generative AI continues to evolve and integrate into more aspects of our digital lives, robust evaluation practices will become increasingly critical. By understanding these techniques, you’re not just improving your current projects—you’re better prepared to develop trustworthy AI apps.
We encourage you to make evaluation an integral part of your generative AI development process. Your users, stakeholders, and future self will thank you for it.
Happy evaluating!
Button style positive not working on mobile app Teams
I set style "positive" on an action (Action.ShowCard), but it still renders with the default style in the Teams mobile app.
{
    "type": "Container",
    "id": "feedback-block_result_mobile",
    "separator": True,
    "targetWidth": "atMost:narrow",
    "items": [
        {
            "type": "ActionSet",
            "actions": [
                {
                    "type": "Action.ToggleVisibility",
                    "targetElements": ["textToToggle"],
                    "id": "result_like_mo",
                    "tooltip": "Helpful",
                    "title": "👍",
                    "style": "positive"
                },
                {
                    "type": "Action.ToggleVisibility",
                    "targetElements": ["textToToggle"],
                    "id": "result_dislike_mo",
                    "tooltip": "Not helpful",
                    "title": "👎",
                },
            ],
        },
    ],
}
Unable to access Update 3 for Microsoft Advanced Threat Analytics 1.9
Hi, Microsoft Tech Community and @Ricky Simpson from Microsoft,
I cannot download Update 3 for Microsoft Advanced Threat Analytics 1.9.
Whenever I try to access the download from that article, it seems the download with ID 56725 is missing, and a 404 error code is returned.
Tried URL:
https://www.microsoft.com/download/details.aspx?id=56725
Hope you can fix this problem as soon as possible, because Microsoft ATA still plays an important role in many enterprise networks, including my company's.
Best regards to all people in the community
Outlook cannot access Teams (TeamsAddin is present and successfully installed)
I want to use Teams within Outlook. So I want to add a Teams-Link in new appointments.
My Environment:
Windows 10 Pro; Microsoft 365 subscription (Family); Microsoft Teams (version 24243.1309.3132.617, indicated as the most recent version of MS Teams)
What I already tried
I am currently successfully logged into my Microsoft account. I ran Office Update to make sure all Office components are up to date. I uninstalled Teams, downloaded the most recent version of MS Teams, and installed it successfully.
What is the problem? In Outlook I checked File -> Options -> Calendar and found there is no online meeting provider available:
The selected account (email address removed for privacy reasons) is the correct Office 365 account, which is successfully set up (and displayed as an MS Exchange account).
I checked the Teams add-in under COM Add-ins via File -> Options -> Add-ins:
Now I see this
“Anmelden” means login in English.
But when I click the button, the MS Teams window pops to the front and nothing more happens. So the question is: is there some functionality (like in my office) that lets me add a Teams link directly to an appointment?
And the online meeting provider dialog still shows no entries:
Perhaps it's not possible to use Teams with Outlook from an Office 365 (Family) account? Can I find any info on this anywhere?
How to wipe a hard drive completely on Windows 11?
Hi everyone,
I’m planning to sell my Windows 11 device and want to securely wipe the hard drive to ensure none of my personal data is recoverable. I’ve heard that just deleting files or even resetting the PC may not be enough to completely erase everything. Could anyone recommend a reliable method or tool to wipe a hard drive completely on Windows 11 before I sell it?
I’d prefer a straightforward process that guarantees data can’t be recovered. Any step-by-step instructions or software suggestions would be greatly appreciated!
Thanks for your help!
Working with Copilot Pages
Copilot Pages are a Useful Way to Capture and Refine AI-Generated Text
Copilot pages featured in Microsoft’s Copilot Wave 2 announcements on September 16, 2024. With marketing’s normal ability to construct impenetrable text, Microsoft says: “Copilot Pages is a dynamic, persistent canvas in Copilot chat designed for multiplayer AI collaboration.” Parsing that sentence took me a while, but I think it means that a Copilot page is a Loop component generated from the results of a Copilot chat that can be shared with other users.
If your organization already uses both Copilot for Microsoft 365 and Loop (either the standalone app or components in Teams and Outlook), the ability to save the results generated by Copilot is very useful. Or as Microsoft puts it, a Copilot page takes: ”ephemeral AI-generated content and makes it durable.”
Using Copilot Pages
Figure 1 shows an example where I asked Copilot for a short summary about how to use compliance searches to purge mailbox items. After Copilot responded to the prompt, clicking the Edit in Pages button opens the Loop component to the right with the text generated by Copilot loaded and ready for editing. As you can see, I’ve used a comment to highlight an error in the text.
Figure 1: Editing a Copilot page containing AI-generated text
The page shown in Figure 1 has the Internal sensitivity label. This is the highest-priority sensitivity label assigned to the documents Copilot found and used in its response. The user can assign a different sensitivity label if appropriate. The shield with padlock used to indicate the presence of a sensitivity label doesn’t include the color configured for the label. That’s a pity because the traffic light scheme to indicate the relative sensitivity levels of labels is often used to give a visual clue to users.
After making whatever updates are required, the page can be shared with other people or copied and inserted into a Teams chat or channel conversation, Outlook message, or into the Loop app. The page behaves just like any other Loop component.
Currently, Copilot Pages are only available for user accounts with Copilot for Microsoft 365 licenses. Microsoft says in their Copilot Pages for IT admins post that “soon users with access to Microsoft … will also be able to create pages.”
Managing Pages
Editing and sharing Copilot Pages are all very well, but administrators want to know about where the data is stored and how it is managed. Insight into these and other questions comes from the admins post (notable for featuring the word “Copilot” no less than 64 times). Here we discover several key facts.
Copilot Pages are stored in SharePoint Embedded containers, just like the containers used for Loop app workspaces. The containers are visible through the SharePoint admin center (Figure 2). All the containers are called “Pages,” and although the owner’s name is visible as a property of the container, it would be useful if Microsoft included the owner’s name in the container name.
Microsoft publishes a page describing governance and compliance capabilities for Loop. The page hasn’t been updated for Copilot Pages, and the assumption is that the containers created for pages will function much like those for Loop workspaces with the caveat that “governance and compliance processes apply the same way they would to a user’s OneDrive.”
Microsoft also says that content of Copilot Pages is “lifetime-managed with the user account and is deleted when the user account is deleted from the organization. There is a default timeline where it is first soft deleted (can be recovered by an IT Admin) and then purged.” There’s also a statement about an “Admin workflow to enable access to these containers before deletion so that valuable content can be copied to new locations.”
Even if Microsoft still must deliver some features (and APIs to access Copilot Pages), the comments noted above appear to match the existing capabilities available when removing a Microsoft 365 account. Dealing with personal information can be challenging, especially when OneDrive holds so many kinds of information. Handling Copilot Pages now joins the list of things to take care of when preserving information belonging to people who leave an organization.
Using Pages as a Copilot Notebook
Like any new Microsoft 365 feature, it will take a little time for organizations and users to figure out if Copilot Pages will become part of the work landscape. Having a way to capture the output from Copilot is useful, and I think I will use these pages to record that output rather than as a starting point for collaboration. But everyone's different, and it will be interesting to see how this capability evolves over time.
Support the work of the Office 365 for IT Pros team by subscribing to the Office 365 for IT Pros eBook. Your support pays for the time we need to track, analyze, and document the changing world of Microsoft 365 and Office 365.
Fill Cell Color for Tracking Formula
Hi, I'm trying to create a tracking spreadsheet where cells are colored based on value. Using the IF function works if I do it cell by cell, but I'm wondering if there's a better way. I'm working with < (less than) and > (greater than). Example: the reference number is in A3; if the numbers in the comparison cells (all in the same column) are greater, I need the cell green, and if less, I need the cell red. I tried conditional formatting but am not having much luck.
Cannot Disable Named Pipes
I have an installation of SQL Server 2022. I have a named instance.
When I set the port to a non-default port, I cannot disable Named Pipes (SQL Server cannot start).
When I change the port to the default port 1433, Named Pipes can be disabled without any issue. Is this correct?
Azure Hands on
Dear All,
I am planning to learn Azure data engineering and want to get hands-on experience in a cloud environment.
My free trial has expired; is there any way to get hands-on experience without incurring costs?
I can go for the pay-as-you-go option, but I am not sure how the costing works. I am afraid they may charge unexpectedly if I subscribe to some service unknowingly. I ran into this during my trial period, when I was charged $78 (from the free credit) overnight for some storage services I created and left running.
Any help is appreciated, thanks in advance.
Change Cache Settings Days to infinite or unlimited for Excel files
Is there any way on Win10 to change the Days to Keep files in the Office Document Cache to 1 year, or Infinite or Unlimited?
At the moment the max is set to 30.
I was wondering if there is a way with regedit to change this to unlimited?
Thank you!
Viva Engage community insights exported data missing the first half of the year
Viva Engage community insights doesn't include the data from the first half of the year when exporting data for 365 days. I am trying to see our community insights data from January 2024 up to the current date, but the CSV file only shows July to the current date, leaving out the first half of the year.
Accelerating Java Applications on Azure Kubernetes Service with CRaC
Overview
Packaging a Java Application
1. Clone the Repository and build the Application:
git clone https://github.com/spring-projects/spring-petclinic.git
cd spring-petclinic
Then add the org.crac dependency to pom.xml:
<dependency>
    <groupId>org.crac</groupId>
    <artifactId>crac</artifactId>
    <version>1.4.0</version>
</dependency>
2. Create a Dockerfile:
FROM azul/zulu-openjdk:17-jdk-crac-latest as builder
WORKDIR /home/app
ADD . /home/app/spring-petclinic
RUN cd spring-petclinic && ./mvnw -Dmaven.test.skip=true clean package

FROM azul/zulu-openjdk:17-jdk-crac-latest
WORKDIR /home/app
EXPOSE 8080
COPY --from=builder /home/app/spring-petclinic/target/*.jar petclinic.jar
ENTRYPOINT ["java", "-XX:CRaCCheckpointTo=/test", "-jar", "petclinic.jar"]
3. Build the Docker Image:
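The build command itself isn't shown in the original post; assuming the Dockerfile above sits in the repository root and <acr-name> matches the registry used in the next section, something like this works:
docker build -t <acr-name>.azurecr.io/spring-petclinic:crac .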
Creating a Deployment on Azure Kubernetes Service
1. Create an AKS Cluster:
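The cluster-creation commands aren't included in the post; a minimal sketch with the Azure CLI, reusing the myResourceGroup name that appears later in this walkthrough (the cluster name and region are placeholders), could be:
az group create --name myResourceGroup --location <region>
az aks create --resource-group myResourceGroup --name <aks-cluster-name> --node-count 1 --generate-ssh-keys
az aks get-credentials --resource-group myResourceGroup --name <aks-cluster-name>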
2. Push the Docker Image to Azure Container Registry (ACR):
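Before pushing, sign in to the registry with the Azure CLI (the registry name is a placeholder):
az acr login --name <acr-name>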
docker push <acr-name>.azurecr.io/spring-petclinic:crac
3. Create an image pull secret to your ACR
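The exact command isn't provided in the post; one common approach, which matches the regcred secret name referenced in the deployment manifest below (the credentials are placeholders), is:
kubectl create secret docker-registry regcred \
  --docker-server=<acr-name>.azurecr.io \
  --docker-username=<service-principal-id> \
  --docker-password=<service-principal-password>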
4. Create an Azure Files share to mount into the deployment
az storage share-rm create --resource-group myResourceGroup --storage-account mystorageaccount --name myfileshare
az storage account keys list --resource-group myResourceGroup --account-name mystorageaccount
kubectl create secret generic azure-secret --from-literal=azurestorageaccountname=mystorageaccount --from-literal=azurestorageaccountkey=<storage-account-key>
5. Create a Kubernetes Deployment:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: myapp
spec:
  replicas: 1
  selector:
    matchLabels:
      app: myapp
  template:
    metadata:
      labels:
        app: myapp
    spec:
      containers:
      - name: myapp
        image: <acr-name>.azurecr.io/spring-petclinic:crac
        ports:
        - containerPort: 8080
        securityContext:
          allowPrivilegeEscalation: false
          capabilities:
            add: # These two capabilities are required to checkpoint
            - SYS_PTRACE
            - CHECKPOINT_RESTORE
          privileged: false
        volumeMounts:
        - name: crac-storage
          mountPath: /test
      volumes:
      - name: crac-storage
        csi:
          driver: file.csi.azure.com
          volumeAttributes:
            secretName: azure-secret
            shareName: myfileshare
            mountOptions: 'dir_mode=0777,file_mode=0777,cache=strict,actimeo=30,nosharesock,nobrl'
      imagePullSecrets:
      - name: regcred
6. Deploy to AKS:
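The command isn't shown in the post; assuming the manifest above is saved as deployment.yaml:
kubectl apply -f deployment.yaml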
7. Check startup logs and duration:
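One way to fetch the startup log shown below, using the deployment name from the manifest:
kubectl logs deployment/myapp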
[Spring PetClinic ASCII-art startup banner]
:: Built with Spring Boot :: 3.3.0
2024-09-26T14:59:41.464Z INFO 129 — [ main] o.s.s.petclinic.PetClinicApplication : Starting PetClinicApplication v3.3.0-SNAPSHOT using Java 17.0.12 with PID 129 (/home/app/petclinic.jar started by root in /home/app)
2024-09-26T14:59:41.470Z INFO 129 — [ main] o.s.s.petclinic.PetClinicApplication : No active profile set, falling back to 1 default profile: “default”
2024-09-26T14:59:42.994Z INFO 129 — [ main] .s.d.r.c.RepositoryConfigurationDelegate : Bootstrapping Spring Data JPA repositories in DEFAULT mode.
2024-09-26T14:59:43.071Z INFO 129 — [ main] .s.d.r.c.RepositoryConfigurationDelegate : Finished Spring Data repository scanning in 66 ms. Found 2 JPA repository interfaces.
2024-09-26T14:59:44.125Z INFO 129 — [ main] o.s.b.w.embedded.tomcat.TomcatWebServer : Tomcat initialized with port 8080 (http)
2024-09-26T14:59:44.134Z INFO 129 — [ main] o.apache.catalina.core.StandardService : Starting service [Tomcat]
2024-09-26T14:59:44.135Z INFO 129 — [ main] o.apache.catalina.core.StandardEngine : Starting Servlet engine: [Apache Tomcat/10.1.24]
2024-09-26T14:59:44.176Z INFO 129 — [ main] o.a.c.c.C.[Tomcat].[localhost].[/] : Initializing Spring embedded WebApplicationContext
2024-09-26T14:59:44.178Z INFO 129 — [ main] w.s.c.ServletWebServerApplicationContext : Root WebApplicationContext: initialization completed in 2595 ms
2024-09-26T14:59:44.560Z INFO 129 — [ main] com.zaxxer.hikari.HikariDataSource : HikariPool-1 – Starting…
2024-09-26T14:59:44.779Z INFO 129 — [ main] com.zaxxer.hikari.pool.HikariPool : HikariPool-1 – Added connection conn0: url=jdbc:h2:mem:131e3017-7e28-4a31-b704-5d3840cd46d6 user=SA
2024-09-26T14:59:44.781Z INFO 129 — [ main] com.zaxxer.hikari.HikariDataSource : HikariPool-1 – Start completed.
2024-09-26T14:59:45.011Z INFO 129 — [ main] o.hibernate.jpa.internal.util.LogHelper : HHH000204: Processing PersistenceUnitInfo [name: default]
2024-09-26T14:59:45.073Z INFO 129 — [ main] org.hibernate.Version : HHH000412: Hibernate ORM core version 6.5.2.Final
2024-09-26T14:59:45.113Z INFO 129 — [ main] o.h.c.internal.RegionFactoryInitiator : HHH000026: Second-level cache disabled
2024-09-26T14:59:45.451Z INFO 129 — [ main] o.s.o.j.p.SpringPersistenceUnitInfo : No LoadTimeWeaver setup: ignoring JPA class transformer
2024-09-26T14:59:46.466Z INFO 129 — [ main] o.h.e.t.j.p.i.JtaPlatformInitiator : HHH000489: No JTA platform available (set ‘hibernate.transaction.jta.platform’ to enable JTA platform integration)
2024-09-26T14:59:46.468Z INFO 129 — [ main] j.LocalContainerEntityManagerFactoryBean : Initialized JPA EntityManagerFactory for persistence unit ‘default’
2024-09-26T14:59:46.826Z INFO 129 — [ main] o.s.d.j.r.query.QueryEnhancerFactory : Hibernate is in classpath; If applicable, HQL parser will be used.
2024-09-26T14:59:48.666Z INFO 129 — [ main] o.s.b.a.e.web.EndpointLinksResolver : Exposing 14 endpoints beneath base path ‘/actuator’
2024-09-26T14:59:48.778Z INFO 129 — [ main] o.s.b.w.embedded.tomcat.TomcatWebServer : Tomcat started on port 8080 (http) with context path ‘/’
2024-09-26T14:59:48.810Z INFO 129 — [ main] o.s.s.petclinic.PetClinicApplication : Started PetClinicApplication in 8.171 seconds (process running for 8.862)
Creating a Checkpoint with CRaC
1. Create the Checkpoint:
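The post doesn't include the checkpoint command. With CRaC, a checkpoint is normally triggered by running jcmd against the Java process inside the running container; the pod name is a placeholder, and PID 129 comes from the startup log above:
kubectl exec -it <pod-name> -- jcmd 129 JDK.checkpoint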
Restoring from the Checkpoint
1. Update the deployment to restore the image in AKS by overriding the container command:
command:
- java
- -XX:CRaCRestoreFrom=/test
kubectl apply -f deployment.yaml
2. Check startup time
2024-09-26T15:01:42.400Z INFO 129 — [Attach Listener] o.s.c.support.DefaultLifecycleProcessor : Restarting Spring-managed lifecycle beans after JVM restore
2024-09-26T15:01:42.396Z WARN 129 — [l-1 housekeeper] com.zaxxer.hikari.pool.HikariPool : HikariPool-1 – Thread starvation or clock leap detected (housekeeper delta=4m9s910ms846?s988ns).
2024-09-26T15:01:42.473Z INFO 129 — [Attach Listener] o.s.b.w.embedded.tomcat.TomcatWebServer : Tomcat started on port 8080 (http) with context path ‘/’
2024-09-26T15:01:42.474Z INFO 129 — [Attach Listener] o.s.c.support.DefaultLifecycleProcessor : Spring-managed lifecycle restart completed (restored JVM running for 1009 ms)
Performance Comparison
1. Measure Startup Time: from the logs above, a regular start takes about 8.2 seconds ("Started PetClinicApplication in 8.171 seconds"), while restoring from the checkpoint has the application ready in about 1 second ("restored JVM running for 1009 ms").
2. Compare Results: in this test, restoring from the CRaC checkpoint brings the application up roughly 8x faster than a regular cold start.
Conclusion
Pytorch PEFT SFT and convert to ONNX Runtime
You're welcome to follow my GitHub repo and give it a star: https://github.com/xinyuwei-david/david-share.git. Lots of useful code is there!
ONNX (Open Neural Network Exchange) is an open format used to represent machine learning and deep learning models. It was introduced by Microsoft and Facebook in 2017, aiming to facilitate model interoperability between different deep learning frameworks. With ONNX, you can seamlessly convert models between different deep learning frameworks such as PyTorch and TensorFlow.
Currently, ONNX fine-tuning can be done using Olive, but it does not yet support LoRA. If you want to perform LoRA fine-tuning with PyTorch and use ORT for inference, how can this be achieved?
First, fine-tune the model using LoRA. Do not use QLoRA, as it may result in significant precision loss during subsequent merging.
Merge the Adapter with the PyTorch base model.
Convert the merged safetensors to ONNX.
Generate the genai_config.json file using ONNX Runtime GenAI Model Builder.
Perform inference using onnxruntime-genai.
Merge adapter:
Consolidated results:
Export to ONNX
Export result:
Generate genai_config.json.
When doing the conversion, you need to use FP32.
Files needed during ONNX inference:
Detailed Code
pip install --pre onnxruntime-genai
LoRA SFT
import torch
from datasets import load_dataset
from transformers import AutoTokenizer, AutoModelForCausalLM
from peft import LoraConfig
from trl import SFTConfig, SFTTrainer

# Not defined in the original snippet; reasonable defaults are assumed here
compute_dtype = torch.bfloat16 if torch.cuda.is_bf16_supported() else torch.float16
attn_implementation = "eager"  # use "flash_attention_2" if flash-attn is installed

model_name = "microsoft/Phi-3.5-Mini-instruct"
tokenizer = AutoTokenizer.from_pretrained(model_name, trust_remote_code=True, add_eos_token=True, use_fast=True)
tokenizer.pad_token = tokenizer.unk_token
tokenizer.pad_token_id = tokenizer.convert_tokens_to_ids(tokenizer.pad_token)
tokenizer.padding_side = 'left'

ds = load_dataset("timdettmers/openassistant-guanaco")

model = AutoModelForCausalLM.from_pretrained(
    model_name, torch_dtype=compute_dtype, trust_remote_code=True, device_map={"": 0}, attn_implementation=attn_implementation
)
model.gradient_checkpointing_enable(gradient_checkpointing_kwargs={'use_reentrant': True})

peft_config = LoraConfig(
    lora_alpha=16,
    lora_dropout=0.05,
    r=16,
    bias="none",
    task_type="CAUSAL_LM",
    target_modules=['k_proj', 'q_proj', 'v_proj', 'o_proj', 'gate_proj', 'down_proj', 'up_proj']
)

training_arguments = SFTConfig(
    output_dir="./Phi-3.5/Phi-3.5-Mini_LoRA",
    eval_strategy="steps",
    do_eval=True,
    optim="adamw_torch",
    per_device_train_batch_size=4,
    gradient_accumulation_steps=8,
    per_device_eval_batch_size=4,
    log_level="debug",
    save_strategy="epoch",
    logging_steps=25,
    learning_rate=1e-4,
    fp16=not torch.cuda.is_bf16_supported(),
    bf16=torch.cuda.is_bf16_supported(),
    eval_steps=25,
    num_train_epochs=1,
    warmup_ratio=0.1,
    lr_scheduler_type="linear",
    dataset_text_field="text",
    max_seq_length=512
)

trainer = SFTTrainer(
    model=model,
    train_dataset=ds['train'],
    eval_dataset=ds['test'],
    peft_config=peft_config,
    tokenizer=tokenizer,
    args=training_arguments,
)
trainer.train()
Merge the adapter
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM
from peft import PeftModel

model_name = "microsoft/Phi-3.5-Mini-instruct"
adapter_path = "/root/Phi-3.5-Mini_LoRA/checkpoint-411/"
# Load the tokenizer
tokenizer = AutoTokenizer.from_pretrained(model_name, trust_remote_code=True, use_fast=True)
# Load the base model
model = AutoModelForCausalLM.from_pretrained(model_name, trust_remote_code=True)
# Load the adapter
model = PeftModel.from_pretrained(model, adapter_path)
# Set the model to evaluation mode
model.eval()
# Define the inference function
def generate_text(prompt, max_length=500):
    inputs = tokenizer(prompt, return_tensors="pt")
    attention_mask = inputs['attention_mask']
    with torch.no_grad():
        outputs = model.generate(
            inputs.input_ids,
            attention_mask=attention_mask,
            max_length=max_length,
            num_return_sequences=1,
            do_sample=True,
            top_k=50,
            top_p=0.95
        )
    return tokenizer.decode(outputs[0], skip_special_tokens=True)
# Example inference
prompt = "1+1=?"
generated_text = generate_text(prompt)
print(generated_text)
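The ONNX export step below loads a merged checkpoint from /root/Phi-3.5-Mini-LoRA-Merge, but the snippet above only attaches the adapter for inference and never writes that folder. A minimal sketch of the missing merge-and-save step, assuming PEFT's merge_and_unload API:
# Merge the LoRA weights into the base model and save the result;
# the output path matches the model_checkpoint used by the ONNX export step
merged_model = model.merge_and_unload()
merged_model.save_pretrained("/root/Phi-3.5-Mini-LoRA-Merge")
tokenizer.save_pretrained("/root/Phi-3.5-Mini-LoRA-Merge")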
ONNX Export
from transformers import AutoTokenizer
from optimum.onnxruntime import ORTModelForCausalLM

model_checkpoint = "/root/Phi-3.5-Mini-LoRA-Merge"
save_directory = "/root/onnx1/"
# Load the merged model from transformers and export it to ONNX
ort_model = ORTModelForCausalLM.from_pretrained(model_checkpoint, export=True)
tokenizer = AutoTokenizer.from_pretrained(model_checkpoint)
# Save the ONNX model and tokenizer
ort_model.save_pretrained(save_directory)
tokenizer.save_pretrained(save_directory)
Generate genai_config.json
(phi3.5) root@h100vm:~/onnxruntime-genai/src/python/py/models# python3 builder.py -m microsoft/Phi-3.5-mini-instruct -o /root/onnx4 -p fp16 -e cuda -c /root/onnx1 --extra_options config_only=true
Valid precision + execution provider combinations are: FP32 CPU, FP32 CUDA, FP16 CUDA, FP16 DML, INT4 CPU, INT4 CUDA, INT4 DML
Extra options: {'config_only': 'true'}
config.json: 100%|███████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 3.45k/3.45k [00:00<00:00, 33.5MB/s]
configuration_phi3.py: 100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 11.2k/11.2k [00:00<00:00, 79.0MB/s]
A new version of the following files was downloaded from https://huggingface.co/microsoft/Phi-3.5-mini-instruct:
– configuration_phi3.py
. Make sure to double-check they do not contain any added malicious code. To avoid downloading new versions of the code file, you can pin a revision.
GroupQueryAttention (GQA) is used in this model.
generation_config.json: 100%|████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 195/195 [00:00<00:00, 2.19MB/s]
Saving GenAI config in /root/onnx4
tokenizer_config.json: 100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 3.98k/3.98k [00:00<00:00, 42.8MB/s]
tokenizer.model: 100%|██████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 500k/500k [00:00<00:00, 105MB/s]
tokenizer.json: 100%|████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 1.84M/1.84M [00:00<00:00, 2.13MB/s]
added_tokens.json: 100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 306/306 [00:00<00:00, 3.40MB/s]
special_tokens_map.json: 100%|███████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 665/665 [00:00<00:00, 7.23MB/s]
Saving processing files in /root/onnx4 for GenAI
Copy the generated files into the ONNX model folder:
(phi3.5) root@h100vm:~/onnx4# ls
added_tokens.json genai_config.json special_tokens_map.json tokenizer.json tokenizer.model tokenizer_config.json
(phi3.5) root@h100vm:~/onnx4# cp ./* /root/onnx1
Inference test with ONNX:
import onnxruntime_genai as og

model = og.Model('/root/onnx1/1')
tokenizer = og.Tokenizer(model)
tokenizer_stream = tokenizer.create_stream()
# Set the max length to something sensible by default,
# since otherwise it will be set to the entire context length
search_options = {}
search_options['max_length'] = 2048
chat_template = '<|user|>\n{input} <|end|>\n<|assistant|>'
text = input("Input: ")
if not text:
    print("Error, input cannot be empty")
    exit()
prompt = f'{chat_template.format(input=text)}'
input_tokens = tokenizer.encode(prompt)
params = og.GeneratorParams(model)
params.set_search_options(**search_options)
params.input_ids = input_tokens
generator = og.Generator(model, params)
print("Output: ", end='', flush=True)
try:
    while not generator.is_done():
        generator.compute_logits()
        generator.generate_next_token()
        new_token = generator.get_next_tokens()[0]
        print(tokenizer_stream.decode(new_token), end='', flush=True)
except KeyboardInterrupt:
    print("  --control+c pressed, aborting generation--")
print()
del generator
Troubleshooting page-related performance issues in Azure SQL
Introduction
Azure SQL is a family of managed, secure, and intelligent products that use the SQL Server database engine in the Azure cloud. Though Azure SQL is built upon the familiar SQL Server engine, there are some differences between SQL Server and Azure SQL, such as availability of certain diagnostic commands like DBCC PAGE. DBCC PAGE is a very useful command in SQL Server for troubleshooting and inspecting the internal structure of data pages, but it is not available in Azure SQL due to differences in the underlying infrastructure and management approaches. This limitation can present some challenges for database administrators and developers who depend on DBCC PAGE for troubleshooting. Nevertheless, Azure SQL provides alternative methods and tools for database troubleshooting, ensuring that DBAs can still achieve effective results, even without the use of DBCC PAGE. This article explores these alternatives, though they do not fully replace DBCC PAGE.
Understanding sys.dm_db_page_info()
The sys.dm_db_page_info() dynamic management function (DMF) was introduced to help DBAs obtain critical metadata about database pages. With DBCC PAGE being unavailable in Azure SQL, sys.dm_db_page_info() serves as a supported, fully documented alternative to view most of the important page-level details.
The primary use of sys.dm_db_page_info() is to identify page details that are essential for diagnosing page-related issues, such as waits, blocking scenarios, and contention. Common performance problems that involve specific pages include:
TempDB contention: Contention on system databases often requires identifying the exact page where bottlenecks occur.
Last page insert contention: Scenarios where multiple transactions are trying to insert rows into the last page of an index, causing bottlenecks.
Page-level blocking: Scenarios where requests for pages are blocked due to contention.
For these issues, understanding whether the page is a data page, index page, or is of another type is critical. sys.dm_db_page_info() makes this possible by providing header information about the page, such as: object_id, index_id, partition_id.
Using sys.fn_PageResCracker along with sys.dm_db_page_info()
While sys.dm_db_page_info() provides crucial page metadata, the challenge arises when working with wait-related information exposed by DMVs like sys.dm_exec_requests. The page_resource column stores the page details in a hexadecimal format that is difficult to parse directly. The sys.fn_PageResCracker function was designed to translate that hexadecimal value into its individual components (namely database ID, file ID, and page ID), which can then be passed straight to sys.dm_db_page_info() for further troubleshooting.
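For example, a page_resource value captured from sys.dm_exec_requests can also be cracked on its own; the binary value here is purely hypothetical:
-- The input is a hypothetical 8-byte page_resource value; real values come from sys.dm_exec_requests
SELECT [db_id], [file_id], page_id
FROM sys.fn_PageResCracker(0x2901000001000500);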
Syntax and Examples
The sys.dm_db_page_info() function accepts four parameters:
Database ID: The ID of the database containing the page.
File ID: The ID of the file in which the page resides.
Page ID: The ID of the page.
Mode: ‘LIMITED’ and ‘DETAILED’ are the two modes supported today, depending on the level of information required.
This simple query example returns some metadata about a page. Please enter a valid value on each of the three “id” parameters:
SELECT page_header_version, page_type, page_type_desc, page_lsn
FROM sys.dm_db_page_info(<database_id>, <file_id>, <page_id>, 'DETAILED');
As mentioned earlier, sys.dm_db_page_info() can be joined with other DMVs like sys.dm_exec_requests to correlate the page data with real-time query execution details.
This other example identifies and provides detailed information about requests that are experiencing contention on data pages in memory (PAGELATCH waits) in tempdb:
SELECT er.session_id
    ,er.wait_type
    ,er.wait_resource
    ,[object] = OBJECT_NAME(pi.[object_id], pi.database_id)
    ,er.command
FROM sys.dm_exec_requests AS er
CROSS APPLY sys.fn_PageResCracker(er.page_resource) AS prc
CROSS APPLY sys.dm_db_page_info(prc.[db_id], prc.[file_id], prc.page_id, 'DETAILED') AS pi
WHERE UPPER(er.wait_type) LIKE '%PAGELATCH%'
    AND pi.database_id = 2
This third example can be used to obtain waits and blocking information along with participating page details related to wait resource information:
SELECT er.session_id
    ,er.wait_type
    ,er.wait_resource
    ,OBJECT_NAME(page_info.[object_id], page_info.database_id) AS [object_name]
    ,er.blocking_session_id
    ,er.command
    ,SUBSTRING(st.TEXT, (er.statement_start_offset / 2) + 1, (
        (
            CASE er.statement_end_offset
                WHEN -1
                    THEN DATALENGTH(st.TEXT)
                ELSE er.statement_end_offset
            END - er.statement_start_offset
        ) / 2
    ) + 1) AS statement_text
    ,page_info.database_id
    ,page_info.[file_id]
    ,page_info.page_id
    ,page_info.[object_id]
    ,page_info.index_id
    ,page_info.page_type_desc
FROM sys.dm_exec_requests AS er
CROSS APPLY sys.dm_exec_sql_text(er.sql_handle) AS st
CROSS APPLY sys.fn_PageResCracker(er.page_resource) AS r
CROSS APPLY sys.dm_db_page_info(r.[db_id], r.[file_id], r.page_id, 'DETAILED') AS page_info
WHERE er.wait_type LIKE '%page%'
Related documentation
sys.dm_db_page_info (Transact-SQL) – SQL Server | Microsoft Learn
sys.fn_PageResCracker (Transact-SQL) – SQL Server | Microsoft Learn
sys.dm_exec_requests (Transact-SQL) – SQL Server | Microsoft Learn
Recommendations to reduce allocation contention – SQL Server | Microsoft Learn
Configure tempdb settings – Azure SQL Managed Instance | Microsoft Learn
Feedback and suggestions
If you have feedback or suggestions for improving this data migration asset, please contact the Databases SQL Ninja Engineering Team (datasqlninja@microsoft.com). Thanks for your support!
Note: For additional information about migrating various source databases to Azure, see the Azure Database Migration Guide.
Help us plan our upcoming “Mastering API Integration with Sentinel and USOP” public webinar
Hello on behalf of the Microsoft SIEM & XDR Engineering organization!
On December 5th, 2024, we will host a public webinar on how to effectively integrate APIs with Microsoft Sentinel and the Unified Security Platform. This session will cover when to use APIs, how to set them up, and potential challenges. We will present live demos to guide you through the process. To ensure this webinar is as engaging and relevant as possible for you, we'd love your input to help us create its agenda!
Help us plan this webinar
Do you have any use cases you think we should feature? Or have you encountered any blockers that you’d like us to address? We’re eager to find out what content matches your needs the most! Please answer this survey to help us with your input. It will remain open until October 31st, 2024.
Take the survey here: https://forms.office.com/r/hrWtm34WFu
Join the webinar on December 5th!
In addition to helping us plan it, we hope to count on your participation. Register for this webinar at https://aka.ms/MasteringAPISentinelUSOPWebinar.
Thank you for your contributions!
Naomi Chistis and Jeremey Tan – Microsoft SIEM & XDR Team