Category Archives: Microsoft
Bulk File Search Using Purview (Data Deletion Request)?
Hello,
Our organization is looking for a way to discover ~13,000 file names (listed in a CSV) across our M365 environment. We are looking at PowerShell and possibly the Graph API, since this is not possible using the Purview GUI.
Any recommendations on how to do this? We have approached Microsoft about this and they provided a Copilot-generated PowerShell script as a starting point, but we have not gotten it to work yet.
Best regards,
Luke F.
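One possible approach (a minimal sketch, not a tested solution, assuming the Security & Compliance PowerShell module is connected via Connect-IPPSSession and the CSV has a FileName column) is to chunk the file names into KQL queries and create a content search per chunk:
# Minimal sketch: batch ~13,000 file names into Purview content searches.
# The chunk size is an assumption; keep it small enough to stay under query-length limits.
Connect-IPPSSession
$names = (Import-Csv .\filenames.csv).FileName
$chunkSize = 100
for ($i = 0; $i -lt $names.Count; $i += $chunkSize) {
    $chunk = $names[$i..([Math]::Min($i + $chunkSize, $names.Count) - 1)]
    # Build a KQL query such as: filename:"a.docx" OR filename:"b.xlsx" ...
    $query = ($chunk | ForEach-Object { 'filename:"{0}"' -f $_ }) -join ' OR '
    $searchName = "BulkFileSearch-$([int]($i / $chunkSize))"
    New-ComplianceSearch -Name $searchName -SharePointLocation All -ExchangeLocation All -ContentMatchQuery $query
    Start-ComplianceSearch -Identity $searchName
}
The results of each search could then be reviewed with Get-ComplianceSearch before deciding on any deletion action.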
Future Year and Renewals via Azure Marketplace- Best Practices to align the business
Many customers are modernizing and streamlining the purchasing through Azure Marketplace. Some ISVs license via an upfront payment for 1,2 or 3 years of software and services/SaaS model. If you have transacted many deals through Azure Marketplace, what best practices have you found to be most useful to understand what “renewal” private offers need to be created and which ones are coming due since there is not an auto notification feature for ISVs publishing the private offer?
Navigation role base access private site Teams site
I have added the candidate master list to the left navigation, and I want the candidate master list to be accessible only to the SharePoint Owners group.
How do I restrict the candidate master list so that it is visible only to the SharePoint Owners group?
SharePoint Quick Links Web-Part Doesn’t Automatically Update Title of Files
Wondering if anyone else has this issue. I added in a Quick Links web part, but when I go into my document library and change the file title, it doesn’t automatically update on the web part.
Is this a caching issue? Or do I always have to update the title on the web part once I change it? Below is an example of the second web part I pulled in with the updated titles for my files.
Endpoint DLP – Setting Exclusions
I’ve got some policies enabled in simulation mode.
I have tried to tweak the alerts and monitoring by adding exclusions for the DLP settings. For the life of me, I just can’t figure out why these aren’t excluding from monitoring/alerting.
Am I formatting the exclusions wrong? Or are exclusions for something other than what I thought they were for?
Examples:
File path Exclusions
C:\CaptureService\Screen Capture Module\Logs\*
%USERPROFILE%\ND Office Echo\*
%systemdrive%\Users\*(1)\OneDrive – *
Network share exclusions
\\server-name
\\server-name\*
\\server-name\share
\\server-name\share\
\\server-name\share\*
Understanding HTTP Status Code 304 Not Modified Response and Output Caching Feature in IIS
Introduction
Caching plays a crucial role in optimizing web performance. When Output Caching is enabled in IIS for specific file extensions or URLs, the server can answer repeat requests with a 304 Not Modified status code. This status code is essential for managing caching, as it indicates whether a resource has been modified since it was last requested.
While browsing the site, it is evident that requests for .js files return a 304 Not Modified status code: the server had previously sent an ETag, and it matched the If-None-Match value in the request header.
Since the purpose of a 304 response is to minimize data transfer when the client already holds a cached version, the server avoids including metadata other than the required fields unless that metadata is necessary for cache management.
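To see this exchange yourself, here is a minimal sketch (assuming PowerShell 7+ for -SkipHttpErrorCheck, and a placeholder URL for one of the cached .js files) that replays a request with the ETag from the previous response:
# First request: the server returns 200 plus an ETag for the .js file.
$first = Invoke-WebRequest -Uri 'https://localhost/scripts/app.js'
$etag  = $first.Headers['ETag'] | Select-Object -First 1
# Conditional request: present the cached ETag via If-None-Match.
# If the resource is unchanged, IIS replies 304 Not Modified with an empty body.
$second = Invoke-WebRequest -Uri 'https://localhost/scripts/app.js' -Headers @{ 'If-None-Match' = $etag } -SkipHttpErrorCheck
$second.StatusCode        # 304 when the cached copy is still valid
$second.RawContentLength  # 0 - no body is re-sent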
Conclusion
The HTTP 304 Not Modified status code is used for web caching and performance optimization. It tells the client to use its existing cached copy when the server can validate the cache condition specified in the request header (for example, If-None-Match or If-Modified-Since).
References
HTTP status code overview – Internet Information Services | Microsoft Learn
RFC 9110 – HTTP Semantics (ietf.org)
Library Permissions
I want to create a library in a department’s communication site. The library will contain time study documents that folks in Finance need to be able to add, delete, and download. They will also need to be able to add and delete folders in the document library, as we keep two years’ worth of files in this library — one for the current year and one for the prior year. The others in the department’s communication site will need to be able to edit the documents in the library, but not add new documents or delete any existing ones. Would I need to create my own permission group, or can I use a standard permission level to accommodate these two options?
Win 11 File explorer non- display of older files
On an Insider Canary build of Windows 11, the File Explorer Downloads folder only shows files from the current month.
All older files are not displayed. Please help.
Issue in PowerPoint / Power BI plugin – Embed Power Bi report can’t show preloaded objects
Hello everyone,
I have integrated various Power BI graphics that show live data into a PowerPoint report using the URL. The graphics with the live data were automatically reloaded after the respective slide was clicked. If the slides were not clicked and the entire report was exported as a PDF instead, the graphics that were loaded last time were displayed.
You can also see it in the preview on slide 3: the objects shown there are the ones loaded last time.
For some reason, for the last two weeks, when the graphics have been reloaded and the PowerPoint report has been saved and reopened, the preview no longer shows the last loaded graphics, but blue hexagons. The blue hexagons only disappear when I click on the slide and the graphics are automatically reloaded. If I skip this step, i.e. do not click through each individual slide but go straight to the PDF export, the export shows the blue hexagons again. The problem is that, due to time constraints, I can’t always click on each individual slide, only a few, and the rest of the slides, including the graphics they contain, should remain unchanged.
How can I ensure that the last loaded graphics are restored and that the blue hexagons do not appear?
Thanks for any help!
DNS Issue
Server 2019
client Windows 11
A client workstation has 4 DNS servers we expect to work in a “round robin”. The servers are:
accounting
research
dev
admin
The records for the systems we are trying to access exist on the DNS server DEV. When we run NSLOOKUP and set the DNS server to DEV, we are able to look up the systems we are trying to access. When we exit NSLOOKUP and try to ping a system or access it through its web portal, the name doesn’t resolve.
Any ideas?
Thanks!
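A few hedged diagnostic steps that may help (a sketch, not a definitive fix; hostname.contoso.local and the server name dev are placeholders): ping and web browsers use the Windows resolver, which queries the configured servers in order and caches negative answers, whereas NSLOOKUP with a server set queries DEV directly. Comparing the two from the client can show where resolution diverges:
# Which DNS servers is the client configured to use, and in what order?
Get-DnsClientServerAddress -AddressFamily IPv4
# Resolve through the normal Windows resolver (what ping and the browser use) ...
Resolve-DnsName hostname.contoso.local
# ... and directly against the DEV server, the way NSLOOKUP did.
Resolve-DnsName hostname.contoso.local -Server dev
# Clear any cached negative answer picked up from another server in the list, then retry.
Clear-DnsClientCache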
Help with “Fix now” link within Account Management page
Hi,
I have applied to join the Microsoft AI Cloud Partner Program, and my application was rejected because I failed the Microsoft standards review. Unfortunately, the email I got from MS Vetting Ops support did not provide any details about what exactly I failed on.
I have been trying to appeal by following the recommended process. However, I am unable to edit any details within the Partner portal. I am a global admin for my organisation within both the company’s Azure subscription and M365 tenant. What exactly am I missing here?
JT
Cumulative % Complete Formula
Not sure if this is possible as I was presented with an odd request:
I am attempting to write a formula that adds up the percent complete of tasks in the file, selected by UID, to calculate a cumulative percent complete for a task that represents other tasks throughout the IMS, even though those tasks are not subtasks of the task holding the formula.
I’m not looking to use the grouping feature, though I know that feature could produce the value, which I could then enter manually into the task that represents the other tasks throughout the IMS.
Cheers,
Cole
High Memory Usage or Memory Leaks in Web Applications: Understanding and Data Collections
Introduction
High memory usage and memory leaks are a common challenge for web applications. Unanticipated memory consumption can lead to performance bottlenecks, system crashes, and degraded user experiences. In this article, we will explore the concept of high memory usage, how to identify it, the various types of high memory issues, and how to collect logs for analysis.
Identifying high memory
The simplest way to identify high memory is to check Private Bytes. Private Bytes indicates the amount of private committed memory being used by the process, making it the key counter to rely on when determining whether high memory consumption is happening for the application.
IIS Manager: You can check Private Bytes from the IIS Manager Worker Processes module. The Private Bytes value displayed here is in KB.
Task Manager: You can get the same information from the Task Manager Details tab. Find the worker process (w3wp.exe) that matches the user name for that application pool and look at Memory (active private working set). This gives you the private bytes consumed by the application in KB.
Performance Monitor: Performance Monitor is a great tool to verify high memory usage.
Open Performance Monitor
Click Add
Expand Process
Select Private Bytes
And choose the worker process (w3wp).
Also add the following counters –
Virtual Bytes, which you can find under the Process tree.
# Bytes in all Heaps, which you will find under .NET CLR Memory.
These counters will help you identify memory leaks, and they will also tell you whether a leak is a native leak or a managed leak.
Native Memory Leaks: If the Private Bytes counter is increasing but the .NET # Bytes in all Heaps counter remains constant, this indicates a native memory leak.
Managed Memory Leaks: If the Private Bytes counter and the .NET # Bytes in all Heaps counter are increasing at the same rate (the difference between the two remains constant), this indicates a managed memory leak.
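As a hedged illustration (instance names such as w3wp#1 vary per machine, so the wildcard below is an assumption), the same two counters can also be sampled from PowerShell instead of the Performance Monitor UI:
# Sample Private Bytes and # Bytes in all Heaps for every w3wp instance,
# every 30 seconds, 20 times; compare how the two values grow over time.
$counters = '\Process(w3wp*)\Private Bytes', '\.NET CLR Memory(w3wp*)\# Bytes in all Heaps'
Get-Counter -Counter $counters -SampleInterval 30 -MaxSamples 20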
Log Collection
Now that you’re familiar with how to confirm high memory and use counters to distinguish between native and managed memory leaks, the next step is to correctly collect logs. In most cases, a full user dump is sufficient for analyzing high memory or memory leaks. However, the process for collecting dumps differs between native and managed leaks.
Native Memory Leaks
Download and install DebugDiag tool from this official download link – Download Debug Diagnostic Tool v2 Update 3 from Official Microsoft Download Center
Open DebugDiag 2 Collection.
Go to Processes tab
Select the worker process (w3wp.exe).
Right click and select Monitor For Leaks
And click Create Full Userdump
This will generate a full user dump, and you should create three of them for proper analysis and comparison.
You can automate this process if the issue is intermittent or you don’t want to monitor it.
Open DebugDiag 2 Collection.
Click on Add Rule
Select Native (non-.NET Memory and handle Leak)
Click Next
Select the worker process (w3wp.exe)
Click Configure and provide the parameters as in the screenshot below.
The above parameters will create the first memory dump at 800 MB. Two more dumps will be created at the specified increments, i.e. at 1,000 MB (800 + 200) and 1,200 MB. Then the final memory dump will be created after 15 minutes of tracking.
Managed Memory Leaks
Download and install DebugDiag tool from this official download link – Download Debug Diagnostic Tool v2 Update 3 from Official Microsoft Download Center
Open DebugDiag 2 Collection.
Go to Processes tab
Select the worker process (w3wp.exe) for the application.
Right click and click Create Userdump Series
Select and set the below options, do not click “Save & Close” at this point.
Wait for memory consumption to rise to the decided level (70 to 80%).
Click Save & Close
You can automate this process using the ProcDump utility.
Download Procdump.exe from this official download link – ProcDump – Sysinternals | Microsoft Learn
Extract the zip file into a folder of your choice.
Open command prompt with administrator privilege and navigate to the folder.
Execute the below command.
procdump.exe -s 30 -m 1000 -ma -n 3 <PID>
-n number of memory dumps
-m Memory commit threshold in MB at which the dumps will be created
-s would indicate the number of consecutive seconds where memory consumption was >= threshold specified with -m
PID is the process id for the worker process.
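If you are unsure which PID belongs to which application pool, one quick way (using the built-in appcmd utility on the IIS server) is to list the running worker processes first:
# List running IIS worker processes with their PIDs and application pool names,
# then pass the relevant PID to procdump.exe.
C:\Windows\System32\inetsrv\appcmd.exe list wp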
.NET Core applications
If the app in question is .NET Core and hosted on IIS in in-process mode, then the above option applies as is. But if the app is hosted on IIS in out-of-process mode, then the action plan should be modified so that the dotnet process (dotnet.exe, unless otherwise specified) is investigated instead of w3wp.exe. The same applies to self-hosted .NET Core applications.
Conclusion
High memory usage or a memory leak is a complex issue with many potential causes; it can be native or managed. It is very important to isolate the type of memory leak and carefully capture the logs. You can analyze the dumps using tools like WinDbg or DebugDiag 2 Analysis. If you would like us to do the analysis, please open a support case and we will do it for you.
How to Capture Network Traces Using Netsh Without Installing Extra Tools
Introduction:
I recently wrote a blog that details how to capture network traces from the client and server via Wireshark. You can refer to that here.
Now, one of the challenges server support teams face is that Wireshark needs to be installed on both the client and server machines. Since many servers host critical applications, installing new tools during business hours is often avoided, or server admins need special approval to install them. So instead of installing new software, why not use a tool that already comes with Windows? This time, we’ll use the built-in Netsh utility.
About Netsh:
Netsh is a simple command-line tool that helps you view and change your computer’s network settings. You can use Netsh by typing commands in the Netsh command prompt, and you can also include these commands in scripts or batch files to automate tasks. Netsh works for both your local computer and remote computers. One useful feature of Netsh is that it lets you create a script with several commands, which you can then run all at once on a specific computer. You can also save these scripts in a text file to use later or to apply the same settings to other computers.
Let’s go capture the problem with Netsh.
We will basically follow three steps to collect the network traces with Netsh:
Start command to start the capture.
Reproduce the issue.
Stop the capture and let it collect the events and compress them.
Starting the trace collection:
Open an Administrative Command Prompt or an Administrative PowerShell console: open the Start menu and type CMD or PowerShell in the search bar, then right-click the command prompt or PowerShell and select Run as Administrator.
Run the following command to start the network capture
netsh trace start scenario=netconnection,WFP-IPsec maxSize=1024 fileMode=circular persistent=yes capture=yes report=yes tracefile=c:\ClientSide.etl
If you don’t specify the tracefile parameter, the default location is %LOCALAPPDATA%\Temp\NetTraces
Like this:
Once you have run the command, immediately go and reproduce the issue you are trying to investigate multiple times.
Now that you have reproduced the issue and Netsh has captured the data, it’s time to tell it to stop and merge those events for us.
Run the stop command:
Netsh trace stop
This has been done on the client machine, but if you are tracing the communication between a client and an IIS server (or any host), the same commands should be run on the server, just with the file name ServerSide.etl for a clear separation of client and server traces.
If you see one additional file created with a .cab extension, don’t worry about it; it just holds some related diagnostic information compressed into a CAB file.
This generates the ClientSide.etl file, which can be opened with network analyzer tools like NetMon for a comprehensive review and troubleshooting.
But what if you want to analyze it with Wireshark? Wireshark does not read .etl files, so you can use the open-source tool etl2pcapng from the official Microsoft GitHub repository to convert the .etl to .pcapng and let Wireshark read it.
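The conversion itself is a single command; a small sketch, assuming etl2pcapng.exe has been downloaded to the current folder:
# Convert the Netsh capture into a pcapng file that Wireshark can open.
.\etl2pcapng.exe ClientSide.etl ClientSide.pcapng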
Looking for more information about the Netsh? Please refer to the official Microsoft documentation here.
Netsh Command Syntax, Contexts, and Formatting | Microsoft Learn
How to accelerate Copilot for Microsoft 365 adoption across every department and job role?
Read the Copilot for Microsoft 365: The Ultimate Skilling Guide
Designed for business leaders, IT pros and end users – this eBook covers detailed insights on how to unlock AI-powered productivity across every department and job role in your organization.
It also deep dives into the art and science of prompting with expert-recommended best practices, tips, and a curated collection of 300+ prompts to accelerate adoption.
Azure AI Assistants with Logic Apps
Introduction to AI Automation with Azure OpenAI Assistants
Intro
Welcome to the future of automation! In the world of Azure, AI assistants are becoming your trusty sidekicks, ready to tackle the repetitive tasks that once consumed your valuable time. But what if we could make these assistants even smarter? In this post, we’ll dive into the exciting realm of integrating Azure AI assistants with Logic Apps – Microsoft’s powerful workflow automation tool. Get ready to discover how this dynamic duo can transform your workflows, freeing you up to focus on the big picture and truly innovative work.
Azure OpenAI Assistants (preview)
Azure OpenAI Assistants (Preview) allows you to create AI assistants tailored to your needs through custom instructions and augmented by advanced tools like code interpreter and custom functions. To accelerate and simplify the creation of intelligent applications, we can now call Logic Apps workflows through function calling in Azure OpenAI Assistants. The Assistants playground enumerates and lists all the workflows in your subscription that are eligible for function calling. Here are the requirements for these workflows:
Schema: The workflows you want to use for function calling should have a JSON schema describing the inputs and expected outputs. Using Logic Apps you can streamline and provide schema in the trigger, which would be automatically imported as a function definition.
Consumption Logic Apps: Currently, only Consumption workflows are supported.
Request trigger: Function calling requires a REST-based API. Logic Apps with a request trigger provide a REST endpoint; therefore, only workflows with a request trigger are supported for function calling.
AI Automation
So apart from the Assistants API, which we will explore in another post, we know that we can integrate Azure Logic Apps workflows! Isn’t that amazing? The road is now open for AI automation and we are at the genesis of it, so let’s explore it. We need an Azure subscription and:
Azure OpenAI in one of the supported regions (this demo uses Sweden Central).
A Logic Apps Consumption plan.
We will work in Azure OpenAI Studio and utilize the Playground. Our model deployment is GPT-4o.
The Assistants Playground offers the ability to create and save our Assistants, so we can start working, return later, open the Assistant, and continue. We can find the System Message option and the three tools that enhance the Assistants: Code Interpreter, Function Calling (including Logic Apps), and file upload. The following table describes the configuration elements of our Assistants:
Assistant name: Your deployment name that is associated with a specific model.
Instructions: Instructions are similar to system messages; this is where you give the model guidance about how it should behave and any context it should reference when generating a response. You can describe the assistant’s personality, tell it what it should and shouldn’t answer, and tell it how to format responses. You can also provide examples of the steps it should take when answering.
Deployment: This is where you set which model deployment to use with your assistant.
Functions: Create custom function definitions for the models to formulate API calls and structure data outputs based on your specifications.
Code interpreter: Code interpreter provides access to a sandboxed Python environment that can be used to allow the model to test and execute code.
Files: You can upload up to 20 files, with a max file size of 512 MB, to use with tools. You can upload up to 10,000 files using AI Studio.
The Studio provides 2 sample Functions (Get Weather and Get Stock Price) to get an idea of the schema requirement in JSON for Function Calling. It is important to provide a clear message that makes the Assistant efficient and productive, with careful consideration since the longer the message the more Tokens are consumed.
Challenge #1 – Summarize WordPress Blog Posts
How about providing a prompt to the Assistant with a URL, instructing it to summarize a WordPress blog post? It is WordPress because we have a unified API and only need to change the URL. We could be stricter and narrow the scope to a specific URL, but let’s see the flexibility of Logic Apps in a workflow.
We should start with the Logic App. We will generate the JSON schema directly from the Trigger which must be an HTTP request.
{
  "name": "__ALA__lgkapp002", // Remove this for the Logic App Trigger
  "description": "Fetch the latest post from a WordPress website, summarize it, and return the summary.",
  "parameters": {
    "type": "object",
    "properties": {
      "url": {
        "type": "string",
        "description": "The base URL of the WordPress site"
      },
      "post": {
        "type": "string",
        "description": "The page number"
      }
    },
    "required": [
      "url",
      "post"
    ]
  }
}
In the Designer this looks like this :
As you can see, the schema is the same, excluding the name, which is needed only in the OpenAI Assistants. We will see this detail later on. Let’s continue with the call to WordPress, an HTTP REST API call:
And finally, mandatory as it is, a Response action where we tell the Assistant that the call was completed and return some payload, in our case the body of the previous step:
Now it is time to open our Azure OpenAI Studio and create a new Assistant. Remember the prerequisites we discussed earlier!
From the Assistants menu create a [+New] Assistant, give it a meaningful name, select the deployment, and add a System Message. For our case it could be something like: “You are a helpful Assistant that summarizes the WordPress blog posts the users request, using Functions. You can utilize code interpreter in a sandbox environment for advanced analysis and tasks if needed.” The code interpreter here could be overkill, but we mention it to show its use! Remember to save the Assistant. Now, in the Functions section, do not select Logic Apps; rather, stay on the custom box and add the code we presented earlier. The Assistant will understand that the Logic App named in the schema must be called, i.e. "name": "__ALA__lgkapp002". In fact, the Logic App is declared by a prefix of two underscores, then ALA, then two more underscores, followed by the name of the Logic App.
Let’s give our Assistant a Prompt and see what happens:
The Assistant responded pretty solidly with a meaningful summary of the post we asked for! Not bad at all for a Preview service.
Challenge #2 – Create Azure Virtual Machine based on preferences
For the purpose of this task we have enabled a system-assigned managed identity on the Logic App we use, and pre-provisioned a Virtual Network with a subnet as well. The Logic App must reside in the same subscription as our Azure OpenAI resource.
This is a more advanced request, but after all it translates to Logic Apps capabilities. Can we do it fast enough so the Assistant won’t time out? Yes we can, by using the latest Azure Resource Manager API, which is indeed lightning fast! The process follows the same pattern: Request – Actions – Response. The request in our case must include enough input for the Logic App to carry out the tasks. The schema should include a "name" value that tells the Assistant which Logic App to look up:
{
  "name": "__ALA__assistkp02", // remove this for the Logic App Trigger
  "description": "Create an Azure VM based on the user input",
  "parameters": {
    "type": "object",
    "properties": {
      "name": {
        "type": "string",
        "description": "The name of the VM"
      },
      "location": {
        "type": "string",
        "description": "The region of the VM"
      },
      "size": {
        "type": "string",
        "description": "The size of the VM"
      },
      "os": {
        "type": "string",
        "description": "The OS of the VM"
      }
    },
    "required": [
      "name",
      "location",
      "size",
      "os"
    ]
  }
}
And the actual screenshot from the Trigger; observe the absence of the "name" here:
Now, as we have a number of options, this method allows us to keep track of everything, including the user’s inputs like VM name, VM size, VM OS, etc. Of course this can be expanded; we use a default resource group and a default VNET and subnet, but that is also configurable. So let’s store the input in variables: we initialize five variables for the name, the size, the location (which is preset for reduced complexity since we don’t create a new VNET), and we break down the OS. Let’s say the user selects Windows 10. The API expects an offer and a SKU, so from Windows 10 we create an offer variable, and likewise an OS variable holding the expected SKU:
if(equals(triggerBody()?['os'], 'Windows 10'), 'Windows-10', if(equals(triggerBody()?['os'], 'Windows 11'), 'Windows-11', 'default-offer'))
if(equals(triggerBody()?['os'], 'Windows 10'), 'win10-22h2-pro-g2', if(equals(triggerBody()?['os'], 'Windows 11'), 'win11-22h2-pro', 'default-sku'))
As you can see, this is narrowed to Windows desktop choices only, but we can expand the Logic App to cover most well-known operating systems.
After the variables, all we have to do is create a Public IP (optional), a Network Interface, and finally the VM. This is the most efficient arrangement I could come up with, so we won’t get complaints from the API and it will complete very fast, like 3 seconds fast! The API calls are quite straightforward and everything is available in the Microsoft documentation. Let’s see an example for the Public IP:
And the Create VM action with highlight to the storage profile – OS Image setup:
Finally, we need the response, which can be whatever we like. I am enriching the Assistant’s response with an additional “Get Virtual Machine” action that lets us include the VM properties in the response body:
Let’s make our request now, through the Assistants playground in Azure OpenAI Studio. Our prompt is quite clear: “Create a new VM with size=Standard_D4s_v3, location=swedencentral, os=Windows 11, name=mynewvm02”. Even if we don’t add the parameters the Assistant will ask for them as we have set in the System Message.
Pay attention to the limitation as well: when we ask about the Public IP, the Assistant does not know it. Yet it informs us with a specific message that makes sense and is relevant to the whole operation. If we look at the time it took, we will be amazed:
The total time from the user request to the Assistant’s response is around 10 seconds. We have a limit of 10 minutes for Function Calling execution, so we could build out a whole infrastructure using just our prompts.
Conclusion
In conclusion, this experiment highlights the powerful synergy between Azure AI Assistant’s Function Calling capability and the automation potential of Logic Apps. By successfully tackling two distinct challenges, we’ve demonstrated how this combination can streamline workflows, boost efficiency, and unlock new possibilities for integrating intelligent decision-making into your business processes. Whether you’re automating customer support interactions, managing data pipelines, or optimizing resource allocation, the integration of AI assistants and Logic Apps opens doors to a more intelligent and responsive future. We encourage you to explore these tools further and discover how they can revolutionize your own automation journey.
References:
Getting started with Azure OpenAI Assistants (Preview)
Call Azure Logic Apps as functions using Azure OpenAI Assistants
Azure OpenAI Assistants function calling
Azure OpenAI Service models
What is Azure Logic Apps?
Azure Resource Manager – REST Operations
Inventory Sign Out Quantity Issue
Hello, I have something I am trying to solve and have been searching all over.
I have a sheet made to help keep track of stuff taken from the warehouse. We have a scanner that pulls from a sheet that has prices, and that all works fine. The issue I am having is getting the quantity determined based on what is scanned. I was able to get it to fill in “1” if it pulls a proper part number from the list. I can’t find a way to tell it to merge its current quantity value with the one under it IF they meet two conditions: first, that the part number is the same, and second, that the equipment number is the same.
For example, in this screenshot, when you scan it twice it puts in two rows. I need it to read the two conditions of the same “Barcode scan” and the same “job number” so it can merge the quantity of 1+1.
But it also needs to know whether the two values have indeed been merged, so it can clear the second barcode scan and not repeat endlessly.
Important changes to the Windows enrollment experience coming soon
Windows updates are essential for keeping your devices secure and up to date with the latest security, performance, and reliability improvements. One of the top customer requests we receive is to enable Windows updates during provisioning in the out-of-box experience (OOBE), so that devices are fully patched and ready to use as soon as they are enrolled with mobile device management (MDM).
In the coming weeks, the Windows MDM enrollment experience will be updated to automatically enable quality updates during OOBE. Quality updates are monthly updates that provide security and reliability fixes, as well as enhancements to existing features. These updates are critical for the performance and security of your devices, and we want to make sure they’re delivered as soon as possible. Please note that not every monthly quality update will be made available through the OOBE. Microsoft will determine the availability of these updates based on the value of the update and how it relates to a device setup situation.
What’s changing
With the upcoming October Windows update, all Windows 11, version 22H2 and higher, devices that are enrolled with an MDM, e.g. Microsoft Intune, will automatically download and install quality updates during OOBE. This will apply to all MDM-enrolled devices, regardless of whether they’re pre-registered with Windows Autopilot or not. The updates will be applied before the user reaches the desktop, ensuring that the device is fully patched before logging in.
The new experience will look like this:
After the device connects to the internet and checks for updates, if there are available quality updates found, the device displays a message on the updates page stating that updates are available and being installed.
The device then downloads and installs the quality updates in the background, while showing installation progress.
Once the updates are installed, the device restarts and continues to the desktop. The user then signs in to the device and the device completes enrollment.
Please note that this change only applies to quality updates. Feature updates, which are major updates that introduce new functionality, and driver updates, which provide hardware-specific fixes or enhancements will not be applied during OOBE but will be managed by your MDM according to your policies.
Impacts and what this means for you
While we believe that this change will improve the Windows enrollment experience and provide more security and reliability for your devices, we also want to make you aware of some potential impacts and what you need to do to prepare.
Additional time in OOBE
Quality update installation during OOBE adds some additional time to the device setup process, depending on when the device was most recently updated, internet speed, and device performance. We recommend notifying your vendors and customers of this additional time, and planning accordingly for your device deployment scenarios.
Organizations using temporary passwords
With the additional time for setup, if using Temporary Access Pass (TAP), the passcode may expire before the user signs onto the desktop. To avoid this, we recommend that you extend the validity period of the temporary passwords during enrollment.
Summary
There may be instances where the update is not initiated if the Windows Update for Business (WUfB) policies that block or delay updates are applied to the device before reaching the New Device Update Page (NDUP). This is particularly possible if app installations significantly delay the Enrollment Status Page (ESP).
At this time, there’s no option to control or disable quality updates during OOBE. As mentioned earlier in this blog, we’re exploring when all monthly quality updates can be available and manageable during OOBE to provide the best overall experience.
We hope that this change will improve your Windows Autopilot experience and provide more security and reliability for your devices. If you have any feedback or questions, please let us know in the comments or reach out on X @IntuneSuppTeam.
Introducing Reporting and Entra ID Authentication for Microsoft Playwright Testing
Microsoft Playwright Testing is a managed service built for running Playwright tests easily at scale. As we aim to improve the developer experience, and through our interactions with users, we recognize the need for simpler, more efficient troubleshooting. Today, we’re excited to introduce a new web-hosted reporting dashboard to help speed up the troubleshooting and make it easier for developers to identify and resolve issues. To further enhance security, we’re also implementing Microsoft Entra ID as the default authentication method, providing a more secure and seamless workflow.
Read on to learn more about what’s now possible with Microsoft Playwright Testing.
Reporting Dashboard
As development teams scale and iterate rapidly, maintaining high quality becomes more critical than ever. Slow issue resolution impacts the entire development process. With our new reporting feature, anyone on your team can quickly access detailed test results from a CI/CD run, complete with rich artifacts like logs, screenshots, and traces for efficient troubleshooting.
The reporting feature streamlines your workflow by bringing the tests that need your attention to the forefront. The test run view is filtered to failed and flaky tests so that you can start troubleshooting instantly. You can click through each test to find all the information you need to troubleshoot.
Screen capture of troubleshooting in the Playwright dashboard
Troubleshoot easily using rich artifacts
All test logs and artifacts, including screenshots, videos, and traces are securely stored in a centralized location. They can be accessed through a unified dashboard with configurable permissions.
The Trace Viewer is a powerful tool that is hosted directly in the dashboard. It allows you to visually step through your test execution, or use the timeline to hover over steps and reveal the page state before and after each action. Detailed logs, DOM snapshot, network activity, errors, and console output are available at each test step for precise troubleshooting.
Screenshot of trace viewer hosted in the Playwright dashboard
Seamless integration with CI pipelines
Test results in the dashboard capture essential CI pipeline details such as commit information, author, and branch, with one-click access to the CI pipeline that ran the tests. This enables you to easily investigate the code changes related to a test result.
For GitHub Actions users, summarized reports are displayed directly in the job summary section, providing a clear overview of test results and direct links to the Playwright dashboard for in-depth analysis.
Screenshot of GitHub Actions job summary
Securely authenticate using Microsoft Entra ID
We are also excited to add Microsoft Entra ID support to achieve a more secure default authentication method for Playwright Testing service. Access tokens, though convenient, pose inherent risks such as potential leaks, frequent rotations, and accidental exposure in code. Microsoft Entra ID mitigates these risks by securely authenticating clients with Azure when running tests on cloud-hosted browsers and publishing test reports and artifacts, streamlining workflows and simplifying access control.
Although we recommend using Microsoft Entra ID authentication, access token authentication will still be supported, ensuring flexibility for existing setups and easing the transition to this more secure approach.
Get started with Playwright Testing service
Getting started is easy—simply install the service package by running this command:
npm init @azure/microsoft-playwright-testing
This will provide you with the configuration file required to run tests against the service and publish test results. You don’t need to modify your test code. Use the newly created Playwright service configuration file to run the tests. The package also facilitates authentication using Microsoft Entra ID and is compatible with Playwright version 1.47 and above.
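As a hedged example (the config file name below is assumed to be the default the setup generates, and the worker count is an assumption to adjust for your project), the existing suite can then be run against the cloud-hosted browsers like this:
# Run the existing Playwright suite on cloud-hosted browsers via the service config.
npx playwright test --config=playwright.service.config.ts --workers=20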
Next, you can explore our flexible consumption-based pricing, where you pay only for what you use.
Share your feedback
Your feedback is invaluable to us. Please share your feedback and help us shape the future of Microsoft Playwright Testing.
Learn more about the Microsoft Playwright Testing service
Learn more about using the Playwright Testing service for your web application testing.
Explore the features and benefits that Microsoft Playwright Testing offers.
Learn about our flexible pricing.
Use the pricing calculator to determine your costs based on your business needs.
Grab Your Board and Catch a Wave… Copilot Wave 2 That Is
Happy Monday and what a way to kick off the week! I just started back this morning after a 10-week sabbatical (THANK YOU, MICROSOFT!) and was greeted with the kickoff of Microsoft 365 Copilot Wave 2, presented by Microsoft’s Satya Nadella and Jared Spataro. The streaming Wave 2 kickoff centered on three main areas of focus: 1) Copilot Pages, 2) Copilot in Microsoft 365 apps, and 3) Copilot agents. If you missed the session, no worries. I have grabbed a link to the recording, the follow-up Copilot blog post, some individual deep-dive videos, as well as some additional content to help you on your Copilot Wave 2 journey.
Watch the recording of today’s announcements on LinkedIn by clicking here.
Check out the blog post “Microsoft 365 Copilot Wave 2: Pages, Python in Excel, and agents”
Watch videos on:
Copilot Studio Agent Builder
Copilot Pages
Prioritize my Inbox in Outlook
Python in Excel
Narrative Builder in PowerPoint
Get powerful Microsoft 365 Copilot adoption resources to help your organization on its Copilot journey
Microsoft 365 Copilot home page
Thanks for visiting – Michael Gannotti LinkedIn